
The EU simplifies the AI Act
The European Parliament and the Council of the EU have reached a political agreement on a package of targeted amendments to the AI Act. The aim of the amendments is to facilitate the implementation of the AI Act for companies without lowering the level of protection.
What is it about?
The AI Act came into force in 2024 and is considered the world's first comprehensive law regulating artificial intelligence systems. It imposes graduated requirements depending on risk class and places particularly strict obligations on so-called high-risk AI systems. Implementing this extensive set of rules posed considerable practical challenges for many companies. The European Commission responded by proposing the Digital Omnibus on AI on November 19, 2025 as part of its simplification agenda. The political agreement that has now been reached forms the basis for formal adoption by Parliament and the Council.
New deadlines for high-risk AI
One of the most important changes concerns the application deadlines. The agreement now clearly defines when the rules for high-risk AI systems become binding.
Systems used in particularly sensitive areas, namely biometrics, critical infrastructure, education, human resources, migration, asylum and border control, must meet the requirements from December 2, 2027. For AI systems embedded in physical products such as elevators or toys, the deadline is later: August 2, 2028. This staggered approach is intended to ensure that the necessary technical standards and support tools are available in good time.
Labeling and transparency obligations: A differentiated picture
The agreement does not paint a uniform picture when it comes to transparency and labeling obligations. Not all obligations have been postponed, and in one place the legislator has even brought the requirements forward.
The core area of transparency obligations under the AI Act remains unchanged. From August 2, 2026, the general disclosure obligations will apply to chatbots, emotion recognition systems and similar limited-risk applications. Anyone deploying such systems must inform users that they are interacting with an AI. These obligations have expressly not been postponed by the Digital Omnibus.
The same applies to the labeling of deepfakes and AI-generated texts. Operators of AI systems that modify or create new image, audio or video content showing identifiable persons must disclose the artificial origin. The same applies to operators who publish AI-generated texts on matters of public interest. An exception only applies to obviously artistic, satirical or fictional works, where the labeling must not impair the presentation or enjoyment.
The situation is different for the so-called machine-readable labeling of AI-generated content. This refers to technical measures such as watermarks or embedded metadata that allow content to be identified as AI-generated. Under the new regulation, this obligation is not postponed relative to the original plan; on the contrary, it is brought forward to December 2, 2026. Providers of image generators, video creation systems and comparable generative AI applications must therefore meet this requirement earlier than previously planned.
In addition, the Commission has announced guidelines on transparency obligations to support companies in their practical implementation. However, these guidelines have not yet been published and their content is currently still open.
New deadlines at a glance
- High-risk AI systems (Annex III): applicable from December 2, 2027
- High-risk AI embedded in products (Annex I): applicable from August 2, 2028
- Transparency obligations (Art. 50 para. 1, 3, 4): applicable unchanged from August 2, 2026
- Machine-readable marking (Art. 50 para. 2): brought forward to December 2, 2026
New ban: AI exposure apps
The agreement also introduces a new prohibition. In future, AI systems that generate non-consensual sexually explicit or intimate content will be banned. This covers in particular so-called nudification apps, i.e. applications that generate images depicting people undressed without their consent. Systems that create child sexual abuse material are likewise prohibited. The ban applies not only when such systems are placed on the market, but also when a system is made available without adequate safeguards against such use. Companies have until December 2, 2026 to adapt their systems accordingly.
Changes for companies
The agreement brings a number of concrete simplifications for the economy.
Firstly, certain simplifications that previously applied only to small and medium-sized enterprises will be extended to so-called small mid-cap companies. This category covers companies with up to 500 employees, which previously did not benefit from certain support and simplification rules. In addition, the agreement provides for an automatic 50 percent reduction in fines for SMEs.
Secondly, the relationship between the AI Act and EU product safety law has been clarified, particularly with regard to the Machinery Regulation. This clarification is intended to avoid duplicate regulations, which had previously burdened companies with additional work.
Thirdly, access to regulatory sandboxes is being expanded. In these controlled test environments, companies can test their AI solutions under real conditions without immediately being subject to all regulatory requirements. In future, a European sandbox will also be set up at EU level.
Strengthening the Commission’s AI Office
The agreement also strengthens the enforcement powers of the European Commission’s AI Office. This body is responsible in particular for the supervision of so-called general-purpose AI models and AI systems that are embedded in very large online platforms and very large search engines. The extended powers are intended to guarantee effective supervision of these particularly far-reaching AI systems. At the same time, the centralization of supervision should prevent different enforcement practices from developing between the 27 member states.
What happens next?
The political agreement must now be formally adopted by the European Parliament and the Council of the EU. Formal adoption is expected to be completed before August 2, 2026. Once adopted, the amendments will be published in the Official Journal of the European Union and enter into force three days later. The Commission has also announced additional guidelines on the classification of high-risk systems and on transparency obligations to support companies with practical implementation.
Conclusion
The EU is attempting to simplify its AI legislation while at the same time addressing current problems in dealing with AI, such as the scandal surrounding the digital exposure of people by the chatbot Grok.
While the high-risk obligations have been postponed, the legislator is even bringing forward the machine-readable labeling of AI-generated content.
For practical implementation, however, it would be desirable if the EU finally published the announced guidelines, e.g. on labeling and transparency obligations, rather than doing so only shortly before the respective obligations take effect. The current situation creates considerable legal uncertainty.
We are happy to advise you on AI!







