Nov 26, 2025
Business

European Union Reschedules High-Risk AI Rules for 2027

Image Credit: Wikimedia
Source Credit: Portfolio Prints

Background

  • In 2024, the European Union (EU) passed the landmark AI Act, one of the first comprehensive legal frameworks aimed at regulating artificial intelligence.

  • The law classifies AI systems according to risk: “unacceptable risk,” “high risk,” and lower-risk categories. Systems deemed high-risk include those used in biometric identification, healthcare, credit scoring, job recruitment, law enforcement, and road traffic.

  • The rules are meant to ensure strong guardrails around AI’s most sensitive applications, protecting fundamental rights, safety, and non-discrimination.

What’s Changing: The Delay to 2027

  • On November 19, 2025, the European Commission proposed delaying the enforcement of several of the AI Act’s “high-risk” provisions from August 2026 to December 2027.

  • This move is part of a broader package called the “Digital Omnibus”, aimed at simplifying overlapping digital regulation (AI Act, GDPR, e-Privacy, Data Act).

  • According to the Commission, the delay is not a retreat from regulation but a practical step: key support tools (such as technical standards, specifications, and guidelines) are not yet ready.

  • Executive Vice-President Henna Virkkunen stressed that the postponement ensures those tools are in place before the most stringent AI obligations kick in.

Stakeholders and Reactions

Support for the Delay

  • Big Tech firms welcomed the change. The delay gives them more breathing room to comply without facing immediate heavy regulatory burdens.

  • Proponents argue that simplification (not deregulation) could boost Europe’s competitiveness in AI, making the rules more manageable for companies.

Critics and Concerns

  • Privacy and consumer groups argue this amounts to watering down protections. More than 50 organisations—including Access Now, the European Consumer Organisation (BEUC), and the Centre for Democracy & Technology Europe—warned against using the simplification agenda as an excuse for deregulation.

  • Critics fear that fundamental rights could be compromised, especially in domains like biometric identification, credit scoring, and employment decisions.

  • Some worry the delay gives extra leverage to large tech companies, allowing them to shape how the regulation is finally implemented.

  • Some Member States and MEPs are cautious for a different reason: reopening parts of the AI Act for revision may create regulatory uncertainty, especially for businesses that have already begun preparing for compliance.

Implications: Why This Matters

Regulatory Maturity

By postponing, the EU wants to ensure that the technical infrastructure (standards, guidelines) needed for high-risk AI compliance is fully developed. Without these, companies may struggle to align with obligations in a meaningful way.

Balancing Innovation and Safety

The delay reflects a tension: how to protect citizens without stifling innovation. The Commission is signalling that it wants robust AI oversight, but it is not blind to the challenges firms face.

Global Competitiveness

Big tech voices (especially U.S.-based ones) have pushed back on the original timelines, arguing that too-strict regulations could hamper their ability to operate and compete in Europe. The delay may be a concession to maintain EU attractiveness to global AI developers.

Precedent for Other Regulations

The Digital Omnibus isn’t just about AI—it's part of a broader reconsideration of how digital laws operate. If successful, this might become a model: regulatory frameworks that evolve more flexibly in line with technology maturity.

Risk to Civil Liberties

There’s a real concern: delaying high-risk AI rules could increase exposure to harms (e.g., biased decision-making in hiring or credit). Critics argue this delay may dilute the Act’s original purpose.

What’s Next

  • The proposed Digital Omnibus must still be approved by EU member states and the European Parliament.

  • If passed, member states and companies will get more time to align with high-risk AI obligations—but that also means a prolonged period of regulatory uncertainty.

  • Civil society groups will likely continue to push back, arguing for stronger safeguards and less compromise.

  • Observers will watch closely whether the EU’s approach becomes a template for AI regulation globally: balancing strictness with flexibility.

Conclusion

The EU’s decision to reschedule high-risk AI rules until December 2027 marks a significant shift. While it provides relief to companies and a more gradual regulatory ramp-up, it also raises serious questions about whether fundamental protections might be weakened. As Europe navigates this delicate balance, the outcome could shape not just AI regulation within the bloc—but also global norms for AI governance.