Social
Jordan Sale
[ West Ballroom A ]
Abstract
Transitioning from academia into a frontier industry research lab is one of the most exciting - and confusing - moves an AI researcher can make. Many graduating PhDs assume that strong publications and technical ability are all it takes to succeed in industry.

However, at top labs, researchers are expected not only to build world-class systems but also to navigate complex org structures, advocate for themselves, secure resources, and align with fast-moving business priorities.

Yet many early-career researchers have never recruited, negotiated, or managed stakeholder politics. The result? A massive information asymmetry, where highly capable researchers struggle to advance simply because they don’t have access to the unspoken rules.

This social aims to level the playing field.

The panelists will share hard-earned insights on how to break into industry, choose the right company and team, navigate interviews and negotiation, and set yourself up for long-term success.

We want attendees to walk away with less fear, more clarity, and an actionable playbook for launching a successful research career in industry!

🎁 Bonuses:

Everyone who attends will also receive:

- A 62-page Technical Interview Guide for AI Researchers with real interview questions from the OpenAI, Anthropic, and Microsoft interview …
Social
Ahmed Youssef · Suleiman Ali Khan · Gasser Elbanna · Ehsaneddin Asgari · Shaokai Yang · Kamran Soomro
[ West Meeting Room 118-120 ]
Abstract
This social aims to create a welcoming and inclusive space for Muslim researchers and students at ICML to connect, support one another, and build community. Everyone is welcome, regardless of background, identity, or beliefs.

The session will include:

- A brief welcome and introductions
- 1:1 and small group mentorship matching (covering topics like graduate school, industry, and academic careers)
- Informal networking over refreshments and open discussion

This event is designed to foster meaningful connections, provide career guidance, and offer a relaxed environment for reflection and support.
Social
Rapael Kalandadze · Tatia Tsmindashvili
[ West Ballroom D ]
Abstract
1. **From Lab to Life: Orchestrating Ambient Agents in the Real World**
**Rapael Kalandadze** (AI Lab, Wandero)
> This talk explores the shift of multi-agent systems from controlled experiments to real-world deployment. We'll examine key challenges, effective strategies, and practical examples of building systems that truly work. This isn't science fiction anymore - it's large-scale system design in action.

2. **Teaching Ambient Agents to Understand and Pursue Human Intent**
**Shirley Wu** (Stanford, Microsoft Research)
> This talk explores how long-term alignment strategies can make ambient agent systems more helpful, efficient, and truly human-centered. Shirley Wu presents CollabLLM, a framework that trains agents to look beyond immediate replies by simulating multi-turn interactions and rewarding responses that advance conversations over time. The result: proactive agents that clarify intent, surface missing context, and collaborate more naturally in ambient, ongoing settings.

3. **Safety Guarantees for Ambient Agents via Asking for Help**
**Benjamin Plaut** (UC Berkeley, Stanford)
> Most reinforcement learning algorithms essentially rely on trial-and-error: they explore all possible behaviors and see what works well. However, this approach is problematic when some actions are "catastrophic", i.e., irreparable. Ambient computer-use agents have access to many irreparable actions, such as deleting crucial files or sending disastrous emails. We show …
Social
Eliza Cudmore · Olivia Jimenez · Shannon Yang
[ West Ballroom C ]
Abstract
How can we extract deeper insights from LLM evaluations?

Join experts from the UK AI Security Institute for an interactive discussion at ICML focused on improving how we analyse, interpret, and act on evaluation data for frontier AI systems. As large language models become more capable and influential, evaluations have become a cornerstone of scientific understanding, safety assessments, and deployment decisions. Yet current evaluation designs and methodologies are often poorly suited to answering the questions we care most about, such as uncovering latent capabilities, forecasting performance trajectories, and identifying dangerous failure modes.

This session will explore four key dimensions of evaluation methodology: developing tools for richer evaluation-data analysis; advancing statistical techniques for uncertainty and variability; building efficient evaluation pipelines that prioritise signal-rich tasks; and mapping evaluation results onto capability or risk thresholds. We’ll identify open research questions, promising methodological directions, and opportunities for collaboration to make evaluations more rigorous, interpretable, and decision-relevant.

Whether you are an eval designer yourself, train your own models, or work on risks related to safety and misuse, this session will help you think critically about the importance of evaluation insights to your own work.
Social
Claas Voelcker · Michelle Lin
[ West Meeting Room 211-214 ]
Abstract
Social
Ekaterina Artemova · Alexander Borodetskiy · Ksenia Peresvetova · Elizaveta Yoshida
[ West Ballroom D ]
Abstract
This social brings together AI practitioners focused on agent development and AI safety to address the unique risks these agents pose, such as misuse, unintended actions, and adversarial attacks, which traditional security models often fail to mitigate. The event will explore both development-phase safeguards and post-deployment evaluation strategies, including red teaming, automated testing, monitoring, and human-in-the-loop assessments. In the first part, expert speakers will share real-world cases and technical insights into current safety challenges and solutions. In the second part, attendees will engage in open discussions to exchange ideas and propose new directions for ensuring that increasingly autonomous agents remain safe, reliable, and aligned with human values. The goal is to foster collaboration and innovation toward building trustworthy AI systems.
Social
Kristina Nasr · Nikka Mofid
[ West Meeting Room 118-120 ]
Abstract
This is a unique platform for researchers, developers, and enthusiasts to forge new collaborations, share knowledge, and discuss open research questions. Whether you are actively shaping the future of multilingual AI, curious about its global impact, or seeking to connect with peers facing similar challenges, this social offers a fun and dynamic space for collective learning and for sparking solutions toward more inclusive and effective AI systems for everyone.
Social
Johannah Thumb · Nicole Bannon
[ West Ballroom C ]
Abstract
Most AI researchers entering the job market are unsure which career paths to pursue, have little visibility into their true market value, and have even less guidance on how to advocate for it.

This interactive social combines career exploration with compensation mastery for the AI/ML community. Discover diverse career pathways while learning to identify and negotiate your true market value in today's competitive landscape.

Whether you're getting ready for your next internship, exploring full-time roles in academia or industry, or negotiating a raise or promotion, this session will help you map your career path, identify your market value, and claim it.

Attendees will walk away feeling more confident and informed and better equipped to advocate for their worth.

Takeaways:

- Concrete job search and interview guidance including the STAR method for presenting research
- Insider knowledge on AI/ML compensation and negotiation strategies
- Real stories from researchers and industry professionals across sectors sharing their career journeys and tips for success

About the Speakers:

Nicole Bannon is the founder of co&co, a strategic communications and negotiation consultancy for technical talent. She has coached 500+ AI researchers and engineers through high-stakes negotiations, helping clients land offers at OpenAI, DeepMind, Meta, Anthropic, …
Social
Jothsna Praveena Pendyala
[ West Ballroom D ]
Abstract
Building machine learning systems that work in production is significantly more complex than training high-accuracy models in research. This social aims to bring together researchers, engineers, and practitioners interested in MLOps—the set of practices that enables scalable, reproducible, and reliable ML deployment. We will explore the challenges of operationalizing ML, from data drift and CI/CD to model monitoring and governance. The session will include lightning talks, informal discussion circles, and networking opportunities. It is targeted at attendees who want to bridge the gap between cutting-edge ML research and real-world system deployment.
Social
Evan Shelhamer
[ West Meeting Room 211-214 ]
Abstract
Join our mentoring sessions for students, postdocs, and early career industry researchers and engineers. The format is speed mentoring: a group of mentees join a mentor at a table, chat for 15-20 minutes, and then the mentors rotate across the tables and keep the conversation going. This is a great way to discuss a lot of topics in a little time and hear from different perspectives.

While the social is 7-9pm, do feel free to come and go, and join for just the first or second hour if that is what fits your schedule.

- [Sign up as a mentor!](https://docs.google.com/forms/d/e/1FAIpQLSed8E5XvtUX7DNK1bo-N13n7Y_hXv3mH-PAsEhx5z69ELaL7Q/viewform?usp=pp_url&entry.1245308750=Ask+Me+Anything!&entry.1149835520=7-8pm+/+19:00-20:00&entry.1149835520=8-9pm+/+20:00-21:00)
- [Sign up as a mentee!](https://forms.gle/d7YpwGunWgLuMqo9A)

Our mentors include:

- Margo Seltzer: UBC
- Peter McElroy: EarthDaily
- Yu Sun: Stanford University
- Motasem Alfarra: Qualcomm AI Research (was: KAUST)
- Tahniat Khan: Vector Institute
- Claas Voelcker: University of Toronto
- Abeer Badawi: York University
- Mahdi Haghifam: Northeastern University
- Yani Ioannou: University of Calgary
- Anthony Fuller: Carleton University + Vector
- Danica Sutherland: UBC + Amii
- Evan Shelhamer: UBC + Vector (was: Google DeepMind, Adobe Research, UC Berkeley)
Social
[ West Ballroom A ]
Abstract
We will begin with a panel on the impacts of reasoning models and goal-directed behavior on AI safety, followed by Q&A and free discussions. Our panelists are Aditi Raghunathan, Anca Dragan, David Duvenaud, and Siva Reddy. Come connect over snacks & drinks!

This event is hosted by the [Center for AI Safety](https://safe.ai).
Social
Ana Maria Quintero-Ossa · Eirene Seiradaki · Tatjana Chavdarova
[ West Meeting Room 118-120 ]
Abstract
Event page: https://rbcborealis.com/icml-2025-event-building-inclusive-communities-at-icml/
Register here: https://lu.ma/vhu2byhd