

Workshop

High-dimensional Learning Dynamics Workshop: The Emergence of Structure and Reasoning

Atish Agarwala · Courtney Paquette · Andrea Montanari · Cengiz Pehlevan · Sungyoon Lee · Murat Erdogdu · Naomi Saphra · Gowthami Somepalli · Swabha Swayamdipta · Tom Goldstein · Boaz Barak · Leshem Choshen · Shikhar Murty · Mengzhou Xia · Depen Morwani · Rosie Zhao

Straus 2

Fri 26 Jul, midnight PDT

Modeling learning dynamics has long been a goal of the empirical science and theory communities in deep learning. These communities have grown rapidly in recent years, as our newly expanded understanding of the latent structures and capabilities of large models permits researchers to study these phenomena through the lens of the training process. Recent progress in understanding fully trained models can therefore enable understanding of their development and lead to insights that improve optimizer and architecture design, provide model interpretations, inform evaluation, and generally enhance the science of neural networks and their priors. We aim to foster discussion, discovery, and dissemination of state-of-the-art research in high-dimensional learning dynamics relevant to ML.

We invite participation in the 2nd Workshop on High-dimensional Learning Dynamics (HiLD), to be held as a part of the ICML 2024 conference. This year's theme focuses on understanding how reasoning capabilities and internal structures develop over the course of neural network training; we encourage submissions related to our theme as well as other topics around the theoretical and empirical understanding of learning in high-dimensional spaces. We will accept high-quality submissions as poster presentations during the workshop, especially work in progress and state-of-the-art ideas.

We welcome any topics in pursuit of understanding how model behaviors evolve or emerge. Example topics include but are not limited to:

- The emergence of interpretable behaviors (e.g., circuit mechanisms) and capabilities (e.g., compositionality and reasoning)
- Work that adapts tools from stochastic differential equations, high-dimensional probability, random matrix theory, and other theoretical frameworks to understand learning dynamics and phase transitions
- Scaling laws related to internal structures and functional differences
- Competition and dependencies among structures and heuristics, e.g., simplicity bias or learning staircase functions
- Relating optimizer design and loss landscape geometry to implicit regularization, inductive bias, and generalization

Timezone: America/Los_Angeles

Schedule