20/11/2025

CeSIA Condemns Commission’s Decision to Weaken the AI Act: “A Precedent the Industry Will Use to Undermine EU Law”

Credit: Kathryn Conrad (CC BY)

The Centre for AI Safety (CeSIA) strongly opposes several provisions contained in the “digital omnibus” and urges the European Parliament to reject its adoption. The text, published by the European Commission on Wednesday, 19 November, postpones the application of rules governing high-risk AI systems and removes a number of obligations currently placed on their providers.

The Commission has formally proposed delaying the application of the AI Act's rules for high-risk systems, pushing the deadline back to December 2027 for systems used in sensitive areas such as biometric analysis, policing or elections, and to August 2028 for safety-related systems in regulated product sectors such as machinery and civil aviation.

“Under the guise of simplification, the Commission is bending to industry pressure and sending a deplorable signal: that it is enough to demand looser rules to obtain them. Any reform of the existing framework must be evidence-based and proceed through a clear democratic process. This rushed adjustment only erodes confidence in the EU’s ability to regulate AI. Delaying safeguards at a time when risks are growing is the wrong way round,” said Charbel Raphaël Segerie, Executive Director of CeSIA.

The Need for “Regulatory Certainty”: A Misleading Pretext

The main argument put forward by AI providers—and echoed by the Commission in its official communication—to justify postponing the law is the delay in publishing technical standards for high-risk AI systems. While this delay is real, CeSIA stresses that the AI Act already anticipated such a scenario: Article 41 empowers the Commission to establish “common specifications” acting as provisional standards when harmonised standards are not delivered on time. CeSIA calls on the Commission to activate this mechanism rather than delaying the entry into force of safety requirements.

A Confirmed Deregulation: End of Transparency on Self-Exemptions

CeSIA denounces the confirmation, in the final text, of the amendment to Article 6 and the deletion of Article 49(2). Providers will now be able to conclude that their system is not high-risk without recording that assessment in the EU's public database. “Allowing a provider to declare itself outside the scope of its obligations without leaving any public trace is the antithesis of transparency. The Commission’s proposal confirms our fears: providers will act as judge in their own cause, and no one will know which systems escape scrutiny,” CeSIA warns.

Toward a Weaker Framework for General-Purpose AI?

CeSIA is also concerned that the text strips the Commission of its power to grant, via implementing acts, general validity to codes of practice, thereby undermining the normative weight of these instruments, which are crucial for governing the most powerful so-called “general-purpose” models.

As incidents involving AI systems multiply and the technology spreads rapidly in sensitive sectors, CeSIA urges the European Parliament and the Council to reject the simplification package and to amend the proposal robustly to restore the integrity of the AI Act.
