Our Mission

EPFL Safe AI Lausanne (SAIL) was founded on the belief that as artificial intelligence systems become increasingly powerful and pervasive, ensuring their safety and alignment with human values is one of the most important challenges of our time.

We are a student-led organization at EPFL committed to fostering research, education, and community engagement around AI safety. Our goal is to create a vibrant ecosystem where students, researchers, and professionals can collaborate on making AI systems safer and more beneficial for humanity.

What We Do

πŸ”¬ Research Excellence

We facilitate high-quality research projects in AI safety, connecting students with leading labs at EPFL. Our members have published papers at top venues like NeurIPS and contributed to advancing the field.

πŸ“š Knowledge Sharing

Through our bi-weekly reading group, workshops, and seminars, we create opportunities for deep engagement with AI safety literature and cutting-edge research developments.

🀝 Community Building

We bring together diverse voices from across EPFL and beyond, fostering collaborations between computer scientists, ethicists, policy researchers, and domain experts.

🎯 Practical Impact

We focus on research that can have real-world impact, working on problems like interpretability, robustness, alignment, and governance that matter for deployed AI systems.

Our Values

πŸ”¬ Scientific Rigor

We believe in evidence-based approaches to AI safety. Our work is grounded in solid technical foundations, empirical evaluation, and peer review.

🌍 Global Perspective

AI safety is a global challenge that requires diverse perspectives. We welcome members from all backgrounds and actively seek to understand how AI impacts different communities.

🀲 Collaborative Spirit

The best solutions emerge from collaboration. We work closely with other AI safety organizations, research labs, and the broader community to share knowledge and coordinate efforts.

πŸ“ˆ Long-term Thinking

While we work on immediate challenges, we also consider the long-term implications of AI development and strive to contribute to solutions that remain relevant as capabilities advance.

Research Areas

Our members work across a wide range of AI safety topics, reflecting the interdisciplinary nature of the field:

πŸ” Interpretability

Understanding how AI systems make decisions through techniques like mechanistic interpretability, feature visualization, and explanation methods.

βš–οΈ Alignment

Ensuring AI systems pursue intended objectives through reward modeling, Constitutional AI, and human feedback approaches.

πŸ›‘οΈ Robustness

Making AI systems reliable under distribution shift, adversarial examples, and unexpected conditions.

πŸ“Š Evaluation

Developing benchmarks and methods to assess AI capabilities, limitations, and potential risks before deployment.

πŸ›οΈ Governance

Understanding policy implications, developing governance frameworks, and studying the societal impact of AI systems.

πŸ”’ Security

Protecting AI systems from misuse, studying dual-use research, and developing secure deployment practices.

Achievements

πŸ“„ Publications

Our members have contributed to four research projects, including papers accepted at NeurIPS and other top-tier venues.

πŸŽ“ Student Development

We've supported students in finding research opportunities, applying to graduate programs, and launching careers in AI safety.

🀝 Partnerships

We maintain active collaborations with EPFL labs, including MLO and NLP, as well as partnerships with organizations such as MATS and other AI safety groups.

πŸ“ˆ Community Growth

From a small group of interested students to a recognized organization with regular events, research output, and growing influence in the AI safety community.

Join Our Community

We welcome anyone interested in AI safety, regardless of background or experience level. Whether you're an undergraduate curious about the field, a graduate student looking to contribute to research, or a professional seeking to make an impact, there's a place for you in our community.

πŸš€ For Students

Get involved in cutting-edge research, present at conferences, and build skills that will serve you throughout your career in AI safety.

πŸ§‘β€πŸ”¬ For Researchers

Connect with collaborators, share your work, and contribute to shaping the future direction of AI safety research.

🌐 For Everyone

Participate in discussions, attend our events, and help build a community dedicated to beneficial AI development.

Get Involved Today

Connect With Us

Follow our progress and stay updated with the latest in AI safety research and community activities:

βœ‰οΈ
Email
Telegram
Telegram
LinkedIn
LinkedIn