EPFL Safe AI Lausanne
Building a safer future through responsible AI development and research. We are a community of students, researchers, and professionals at EPFL working to ensure that artificial intelligence systems are developed safely and beneficially.
Our Mission
We believe that as AI systems become more powerful, ensuring their safety and alignment with human values becomes increasingly critical. Our goal is to foster research, education, and community engagement around AI safety at EPFL and beyond.
📚 Reading Group
We ran successful reading group sessions in Fall 2024 and are planning future sessions based on community interest. Join our interest list to be notified when we restart!
🧠 Research & Discussion
We organize workshops and seminars and conduct independent research to help members understand the technical foundations of AI safety.
🤝 Community
Connect with like-minded individuals, collaborate on projects, and contribute to the growing AI safety research community at EPFL.
🔬 Hands-on Projects
Work on practical AI safety research projects, from interpretability studies to robustness evaluations and alignment techniques.
Recent Highlights
OS-Harm
Authors: Thomas Kuntz, Agatha Duzan
Lab: Theory of Machine Learning Laboratory (TML)
Developed a benchmark measuring harmful capabilities of AI agents. Accepted as a spotlight paper at NeurIPS 2025.
Watermarking for LLMs
Authors: Joshua Cohen-Dumani
Lab: Natural Language Processing Lab (NLP)
This project explored synthetic text detection in open-source language models by studying whether watermarking patterns...
Upcoming Events
Join us for the official kickoff of our Spring 2026 activities! Meet the team, learn about upcoming projects and opportunities, and connect with fellow AI safety enthusiasts.
Location: Room CE 1 104, EPFL
Regular Meetings
Day: Friday
Time: 12:00-13:00
Location: BC Cafeteria
Weekly informal meetups