**Executive Summary** The Centre pour la Sécurité de l'IA[1] (CeSIA, pronounced "seez-ya"), or French Center for AI Safety, is a new Paris-based organization dedicated to fostering a culture of AI safety at the French and European level. Our mission is to reduce AI risks through education and information about potential risks and their solutions. Our activities span three main areas:
Education: We began our work by creating the ML4Good bootcamps, which have been replicated a dozen times worldwide. We also offer accredited courses at ENS Ulm—France's most prestigious university for mathematics and computer science—and in the MVA master's program; these are the first and only accredited AI safety courses in France[2]. We are currently adapting and expanding the course into an AI safety textbook.
Technical R&D: CeSIA's current technical research focuses on creating benchmarks for AI safeguard systems, establishing the first framework of its kind for evaluating monitoring systems (which is different from [...]
Outline:
(00:15) Executive Summary
(03:06) Mission
(05:05) European and French context
(06:30) Main activities
(13:34) Gallery
(14:41) Team
(16:22) Funding
(17:07) Next steps
The original text contained 7 footnotes which were omitted from this narration.
The original text contained 1 image which was described by AI.
First published: December 20th, 2024
Source: https://www.lesswrong.com/posts/rCLs56wfissFheLap/announcing-cesia-the-french-center-for-ai-safety
---
Narrated by TYPE III AUDIO.