EU’s AI Act Gets a Makeover: What the New Deadlines Mean for Businesses
European Union lawmakers have reached a provisional agreement to soften the implementation of the AI Act, granting businesses more time to comply with high-risk rules and reducing administrative burdens. The deal, struck between the European Parliament and Council, extends key deadlines and clarifies regulatory overlaps, offering a more pragmatic approach for companies developing or using AI systems. Below, we answer some of the most pressing questions about what this means for your organization.
What are the new deadlines for high-risk AI systems under the AI Act?
Under the provisional deal, the compliance deadlines for high-risk AI systems have been pushed back significantly. For stand-alone high-risk AI systems—those not embedded in other regulated products—the new deadline is December 2, 2027. For AI used in products covered by EU sectoral safety rules (e.g., medical devices, machinery, lifts), the deadline is August 2, 2028. Under the AI Act as originally adopted, stand-alone high-risk systems were required to comply by August 2, 2026, and product-embedded systems by August 2, 2027. This extra time allows enterprises to better prepare for the rigorous conformity assessments and documentation requirements. Note that the deal still needs formal adoption by both co-legislators before it becomes law, and the original deadlines apply until that happens.

Why did the EU decide to soften the AI Act’s deadlines?
The primary motivation was to give businesses, especially smaller and mid-sized companies, more breathing room to adapt to complex high-risk compliance obligations. According to Marilena Raouna, Cyprus’s deputy minister for European affairs (Cyprus holds the rotating Council presidency), the agreement “significantly supports our companies by reducing recurring administrative costs.” A previous round of negotiations had collapsed just nine days earlier, underscoring the intense pressure from industry stakeholders who argued that the original 2026 deadline was unrealistic. By extending the timeline, the EU aims to balance innovation with safety, ensuring that companies can implement robust risk management systems without rushing.
How does the deal reduce overlapping rules for AI in machinery and other products?
The provisional agreement removes duplicative regulations for AI integrated into machinery products. Instead of being subject to both the AI Act and sectoral safety rules, such products will now follow only the sectoral rules (e.g., the EU’s machinery legislation), provided they include safeguards that ensure equivalent health and safety protection. For other regulated sectors, such as medical devices, toys, lifts, and watercraft, the co-legislators established a mechanism to resolve any remaining overlaps between the AI Act and existing sectoral laws. This streamlining is expected to cut red tape and reduce compliance costs for manufacturers, all while maintaining high safety standards.
What counts as a “safety component” under the revised AI Act?
The definition of a “safety component” has been narrowed. An AI feature will be treated as high-risk only if its failure poses a direct risk to health or safety. Features that merely assist users or improve performance—like a recommendation algorithm or a voice assistant—will not automatically be classified as high-risk, even if they are part of a safety-related system. This change prevents overclassification and removes unnecessary compliance burdens for low-risk AI functionalities. The European Parliament explicitly stated that this clarification ensures that only genuinely dangerous AI components are subject to the strictest rules.

Which mid-size companies benefit from new exemptions?
The deal extends exemptions that were previously reserved for small and medium-sized enterprises (SMEs) to also include small mid-cap companies—firms with up to 500 employees. These exemptions may include reduced documentation requirements, lighter conformity assessment procedures, or longer transition periods. This change acknowledges that mid-size businesses face many of the same resource constraints as SMEs when preparing for high-risk AI compliance. By broadening the exemption pool, the EU encourages innovation among a larger segment of the market while still ensuring proper oversight of the largest players.
What are the new timelines for AI regulatory sandboxes and watermarking obligations?
Two specific deadlines were adjusted in opposite directions. First, the requirement for member states to set up AI regulatory sandboxes (controlled testing environments for AI systems) has been moved back by a full year, to August 2, 2027. This gives national authorities more time to establish these facilities. Second, watermarking obligations for AI-generated content will apply earlier than originally proposed: from December 2, 2026, instead of the Commission’s proposed February 2, 2027. The European Parliament pushed for this earlier date to quickly address concerns about disinformation and deepfakes, demonstrating that the co-legislators can act swiftly when needed.
How will the AI Office and national authorities divide enforcement responsibilities?
Under the agreement, the EU’s AI Office will centrally supervise general-purpose AI systems (like large language models) to ensure uniform rules across the bloc. Meanwhile, national authorities retain responsibility for enforcement in specific domains: law enforcement, border management, judicial authorities, and financial institutions. This division is designed to leverage the AI Office’s technical expertise for cutting-edge models while respecting national sovereignty in sensitive areas. The Council stressed that this structure avoids duplication and ensures that each level of government focuses on its comparative strengths.