Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
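The core transform the abstract describes can be sketched in a few lines: normalize each mini-batch to zero mean and unit variance, then apply a learnable scale and shift. This is a minimal illustrative sketch of the forward pass for a single scalar feature (the function name, default `gamma`/`beta` values, and `eps` constant are our choices, not from the paper; a real implementation would also track running statistics for inference and handle per-feature dimensions):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch of scalar activations to zero mean and unit
    variance, then apply the learnable scale (gamma) and shift (beta)."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n  # biased (mini-batch) variance
    # eps guards against division by zero when the batch variance is tiny
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

normalized = batch_norm([1.0, 2.0, 3.0, 4.0])
```

Because gamma and beta are learned, the network can recover the identity transform if normalization at a given layer turns out to be harmful.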
WiML
The Women in Machine Learning (WiML) Symposium @ ICML 2025 is an inclusive, community-centered in‑person event held on Wednesday, July 16, 2025, in Vancouver, Canada, as part of the ICML conference. The full‑day program (9:30 AM–3:35 PM) features a blend of invited talks, panel discussions, poster sessions, mentoring round tables, breakout Q&A sessions, and networking opportunities—all designed to foster mentorship, highlight cutting‑edge research, encourage idea exchange, and support the growth of women in the machine learning community.
WiML—founded in 2006—connects women working in machine learning to promote mentorship, collaboration, and visibility through academic and industry‑based events and initiatives.
As artificial intelligence systems become deeply embedded in our institutions, economies, and personal lives, the challenge of alignment—ensuring AI acts in accordance with human values and societal norms—has become both urgent and complex.
But what exactly should these systems be aligned to—and how do we know we're getting it right? To address this, we turn to a long-standing body of work: how societies have historically measured public preferences and moral norms—and what often goes wrong in the process.
The talk will introduce underutilized datasets—from decades of survey archives to international value studies—that could serve as empirical benchmarks for aligning AI systems with lived human norms. In addition to highlighting valuable data sources, we will examine how lessons from social science can inform the design of human feedback loops in AI. These insights help avoid common pitfalls in capturing human intentions and preferences—such as measurement error, framing effects, and unrepresentative sampling—that have plagued opinion research for decades.
We'll close by addressing the fluid and evolving nature of societal norms, emphasizing the need for alignment strategies that are adaptive to cultural and temporal change. Achieving this kind of adaptability requires not just better data, but durable collaborations between social scientists and machine learning researchers—so that updates to human values can be continuously reflected in system design. The goal is to provoke a deeper, interdisciplinary conversation about what it truly means to align AI with human values—and how to do so responsibly, reliably, and at scale.
Navigating Generative AI and LLMs Across Languages
This is a unique platform for researchers, developers, and enthusiasts to forge new collaborations, share knowledge, and discuss open research questions. Whether you are actively shaping the future of multilingual AI, curious about its global impact, or seeking to connect with peers facing similar challenges, this social offers a fun and dynamic space for collective learning and sparking solutions towards building more inclusive and effective AI systems for everyone.
Agents and Safety
This social brings together AI practitioners focused on agent development and AI safety to address the unique risks these agents pose, such as misuse, unintended actions, and adversarial attacks, which traditional security models often fail to mitigate. The event will explore both development-phase safeguards and post-deployment evaluation strategies, including red teaming, automated testing, monitoring, and human-in-the-loop assessments. In the first part, expert speakers will share real-world cases and technical insights into current safety challenges and solutions. In the second part, attendees will engage in open discussions to exchange ideas and propose new directions for ensuring that increasingly autonomous agents remain safe, reliable, and aligned with human values. The goal is to foster collaboration and innovation toward building trustworthy AI systems.
co&co x Vector Institute - AI Career Compass: Navigating Career Paths from Opportunities to Understanding Your Market Value as an AI Researcher
This interactive social combines career exploration with compensation mastery for the AI/ML community. Discover diverse career pathways while learning to identify and negotiate your true market value in today's competitive landscape.
Whether you're getting ready for your next internship, exploring full-time roles in academia or industry, or negotiating a raise or promotion, this session will help you map your career path, identify your market value, and claim it.
Attendees will walk away feeling more confident and informed and better equipped to advocate for their worth.
Takeaways:
- Concrete job search and interview guidance including the STAR method for presenting research
- Insider knowledge on AI/ML compensation and negotiation strategies
- Real stories from researchers and industry professionals across sectors sharing their career journeys and tips for success
About the Speakers:
Nicole Bannon is the founder of co&co, a strategic communications and negotiation consultancy for technical talent. She has coached 500+ AI researchers and engineers through high-stakes negotiations, helping clients land offers at OpenAI, DeepMind, Meta, Anthropic, and more — including comp packages up to $7.4M/year.
Nicole has given talks like this one at major conferences 10+ times from 2022 to 2025, including ICML 2024 (400+ attendees), NeurIPS, CVPR, ACL, ICLR, and the Grace Hopper Celebration. She has partnered with Black in AI, Women in Machine Learning, Women in Computer Science, and other affinity groups to make negotiation education more inclusive and accessible across the field.
---
Johannah Thumb is the Manager, Student Engagement and Research Programming at the Vector Institute, where she spearheads workforce development initiatives and research programming to support student and researcher development. She collaborates with academic and industry partners to align programming with workforce needs and expand research and training opportunities for emerging AI talent. Her leadership extends to high-impact events that cultivate a thriving AI community by connecting emerging professionals with peers, mentors, and industry experts. Johannah also curates professional development programming to help students achieve their dream roles in AI. With a strong commitment to inclusivity in the field, she is dedicated to shaping a diverse and skilled next generation of AI researchers and professionals.
Track Record & Demand:
This session builds on a series of well-attended “Know Your Market Value” events hosted at NeurIPS, ICML, CVPR, ACL, and ICLR, each drawing 150–400+ attendees.