The French voice of AI safety
CeSIA is an independent centre of expertise and think tank dedicated to preventing major risks from AI through research, education, public policy analysis, and strategic recommendations.
Our work

Evaluating AI models for the European Commission
Selected to assess manipulation risks from frontier AI models.

Global Call for AI Red Lines
15 Nobel laureates. 300+ signatories. Launched at the UN General Assembly.

Advising French institutions on AI risks
We advise French institutions such as the Senate, the Ministry of Defense, and INESIA, the French AI Safety Institute.
About
A team of AI experts based in Paris. In one year, CeSIA has become a key interlocutor of the European Commission, the OECD and French institutions on the risks posed by frontier AI.
News

Why an International Agreement Establishing Red Lines for AI is Both Necessary and Realistic

The Global Call for AI Red Lines, Initiated by CeSIA, Is Launched at the UN
CeSIA Condemns Commission’s Decision to Weaken the AI Act: “A Precedent the Industry Will Use to Undermine EU Law”
Governing the Frontier: Priorities for International Cooperation on Advanced AI
The essentials of AI safety, every week.
Every week, CeSIA analyzes the events shaping the future of AI: technological advances, policy decisions, safety challenges. Independent analysis to understand what's at stake.
Hell is paved with good inventions
Killer robots, an artificial network of human neurons, a €1 billion fundraising round in Paris
Get in touch
Working on AI issues and want to connect with our team? Get in touch.



