About SAIL
Building a community dedicated to AI safety research and education at EPFL
Our Mission
EPFL Safe AI Lausanne (SAIL) was founded on the belief that as artificial intelligence systems become increasingly powerful and pervasive, ensuring their safety and alignment with human values is one of the most important challenges of our time.
We are a student-led organization at EPFL committed to fostering research, education, and community engagement around AI safety. Our goal is to create a vibrant ecosystem where students, researchers, and professionals can collaborate on making AI systems safer and more beneficial for humanity.
What We Do
Research Excellence
We facilitate high-quality research projects in AI safety, connecting students with leading labs at EPFL. Our members have published papers at top venues like NeurIPS and contributed to advancing the field.
Knowledge Sharing
Through our bi-weekly reading group, workshops, and seminars, we create opportunities for deep engagement with AI safety literature and cutting-edge research developments.
Community Building
We bring together diverse voices from across EPFL and beyond, fostering collaborations between computer scientists, ethicists, policy researchers, and domain experts.
Practical Impact
We focus on research that can have real-world impact, working on problems like interpretability, robustness, alignment, and governance that matter for deployed AI systems.
Our Values
Scientific Rigor
We believe in evidence-based approaches to AI safety. Our work is grounded in solid technical foundations, empirical evaluation, and peer review.
Global Perspective
AI safety is a global challenge that requires diverse perspectives. We welcome members from all backgrounds and actively seek to understand how AI impacts different communities.
Collaborative Spirit
The best solutions emerge from collaboration. We work closely with other AI safety organizations, research labs, and the broader community to share knowledge and coordinate efforts.
Long-term Thinking
While we work on immediate challenges, we also consider the long-term implications of AI development and strive to contribute to solutions that remain relevant as capabilities advance.
Research Areas
Our members work across a wide range of AI safety topics, reflecting the interdisciplinary nature of the field:
Interpretability
Understanding how AI systems make decisions through techniques like mechanistic interpretability, feature visualization, and explanation methods.
Alignment
Ensuring AI systems pursue intended objectives through reward modeling, constitutional AI, and human feedback approaches.
Robustness
Making AI systems reliable under distribution shift, adversarial examples, and unexpected conditions.
Evaluation
Developing benchmarks and methods to assess AI capabilities, limitations, and potential risks before deployment.
Governance
Understanding policy implications, developing governance frameworks, and studying the societal impact of AI systems.
Security
Protecting AI systems from misuse, studying dual-use research, and developing secure deployment practices.
Achievements
Publications
Our members have contributed to four research projects, including papers accepted at NeurIPS and other top-tier venues.
Student Development
We've supported students in finding research opportunities, applying to graduate programs, and launching careers in AI safety.
Partnerships
Active collaborations with EPFL labs, including MLO and NLP, alongside partnerships with organizations like MATS and other AI safety groups.
Community Growth
We have grown from a small group of interested students into a recognized organization with regular events, research output, and growing influence in the AI safety community.
Join Our Community
We welcome anyone interested in AI safety, regardless of background or experience level. Whether you're an undergraduate curious about the field, a graduate student looking to contribute to research, or a professional seeking to make an impact, there's a place for you in our community.
For Students
Get involved in cutting-edge research, present at conferences, and build skills that will serve you throughout your career in AI safety.
For Researchers
Connect with collaborators, share your work, and contribute to shaping the future direction of AI safety research.
For Everyone
Participate in discussions, attend our events, and help build a community dedicated to beneficial AI development.
Connect With Us
Follow our progress and stay updated with the latest in AI safety research and community activities: