Poster discussion
Roman Novak · Maxime Gabella · Frederic Dreyer · Siavash Golkar · Anh Tong · Irina Higgins · Mirco Milletari · Joe Antognini · Sebastian Goldt · Adín Ramírez Rivera · Roberto Bondesan · Ryo Karakida · Remi Tachet des Combes · Michael Mahoney · Nicholas Walker · Stanislav Fort · Samuel Smith · Rohan Ghosh · Aristide Baratin · Diego Granziol · Stephen Roberts · Dmitry Vetrov · Andrew Wilson · César Laurent · Valentin Thomas · Simon Lacoste-Julien · Dar Gilboa · Daniel Soudry · Anupam Gupta · Anirudh Goyal · Yoshua Bengio · Erich Elsen · Soham De · Stanislaw Jastrzebski · Charles H Martin · Samira Shabanian · Aaron Courville · Shotaro Akaho · Lenka Zdeborova · Ethan Dyer · Maurice Weiler · Pim de Haan · Taco Cohen · Max Welling · Ping Luo · Zhanglin Peng · Nasim Rahaman · Loic Matthey · Danilo J. Rezende · Jaesik Choi · Kyle Cranmer · Lechao Xiao · Jaehoon Lee · Yasaman Bahri · Jeffrey Pennington · Greg Yang · Jiri Hron · Jascha Sohl-Dickstein · Guy Gur-Ari

Fri Jun 14 05:00 PM -- 06:00 PM (PDT)

Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes Roman Novak (Google Brain)*; Lechao Xiao (Google Brain); Jaehoon Lee (Google Brain); Yasaman Bahri (Google Brain); Greg Yang (Microsoft Research AI); Jiri Hron (University of Cambridge); Daniel Abolafia (Google Brain); Jeffrey Pennington (Google Brain); Jascha Sohl-Dickstein (Google Brain)

Topology of Learning in Artificial Neural Networks Maxime Gabella (Magma Learning)*

Jet grooming through reinforcement learning Frederic Dreyer (University of Oxford)*; Stefano Carrazza (University of Milan)

Inferring the quantum density matrix with machine learning Kyle Cranmer (New York University); Siavash Golkar (NYU)*; Duccio Pappadopulo (Bloomberg)

Backdrop: Stochastic Backpropagation Siavash Golkar (NYU)*; Kyle Cranmer (New York University)

Explain pathology in Deep Gaussian Process using Chaos Theory Anh Tong (UNIST)*; Jaesik Choi (Ulsan National Institute of Science and Technology)

Towards a Definition of Disentangled Representations Irina Higgins (DeepMind)*; David Amos (DeepMind); Sebastien Racaniere (DeepMind); David Pfau (DeepMind); Loic Matthey (DeepMind); Danilo Jimenez Rezende (DeepMind)

Towards Understanding Regularization in Batch Normalization Ping Luo (The Chinese University of Hong Kong); Xinjiang Wang; Wenqi Shao (The Chinese University of Hong Kong)*; Zhanglin Peng (SenseTime)

Covariance in Physics and Convolutional Neural Networks Miranda Cheng (University of Amsterdam)*; Vassilis Anagiannis (University of Amsterdam); Maurice Weiler (University of Amsterdam); Pim de Haan (University of Amsterdam); Taco S. Cohen (Qualcomm AI Research); Max Welling (University of Amsterdam)

Meanfield theory of activation functions in Deep Neural Networks Mirco Milletari (Microsoft)*; Thiparat Chotibut (SUTD); Paolo E. Trevisanutto (National University of Singapore)

Finite size corrections for neural network Gaussian processes Joseph M Antognini (Whisper AI)*

Analysing the dynamics of online learning in over-parameterised two-layer neural networks Sebastian Goldt (Institut de Physique théorique, Paris)*; Madhu Advani (Harvard University); Andrew Saxe (University of Oxford); Florent Krzakala (École Normale Supérieure); Lenka Zdeborova (CEA Saclay)

A Halo Merger Tree Generation and Evaluation Framework Sandra Robles (Universidad Autónoma de Madrid); Jonathan Gómez (Pontificia Universidad Católica de Chile); Adín Ramírez Rivera (University of Campinas)*; Jenny Gonzáles (Pontificia Universidad Católica de Chile); Nelson Padilla (Pontificia Universidad Católica de Chile); Diego Dujovne (Universidad Diego Portales)

Learning Symmetries of Classical Integrable Systems Roberto Bondesan (Qualcomm AI Research)*, Austen Lamacraft (Cavendish Laboratory, University of Cambridge, UK)

Pathological Spectrum of the Fisher Information Matrix in Deep Neural Networks Ryo Karakida (National Institute of Advanced Industrial Science and Technology)*; Shotaro Akaho (AIST); Shun-ichi Amari (RIKEN)

How Noise during Training Affects the Hessian Spectrum Mingwei Wei (Northwestern University); David Schwab (Facebook AI Research)*

A Quantum Field Theory of Representation Learning Robert Bamler (University of California at Irvine)*; Stephan Mandt (University of California, Irvine)

Convergence Properties of Neural Networks on Separable Data Remi Tachet des Combes (Microsoft Research Montreal)*; Mohammad Pezeshki (Mila & University of Montreal); Samira Shabanian (Microsoft, Canada); Aaron Courville (MILA, Université de Montréal); Yoshua Bengio (Mila)

Universality and Capacity Metrics in Deep Neural Networks Michael Mahoney (University of California, Berkeley)*; Charles Martin (Calculation Consulting)

Asymptotics of Wide Networks from Feynman Diagrams Guy Gur-Ari (Google)*; Ethan Dyer (Google)

Deep Learning on the 2-Dimensional Ising Model to Extract the Crossover Region Nicholas Walker (Louisiana State Univ - Baton Rouge)*

Large Scale Structure of the Loss Landscape of Neural Networks Stanislav Fort (Stanford University)*; Stanislaw Jastrzebski (New York University)

Momentum Enables Large Batch Training Samuel L Smith (DeepMind)*; Erich Elsen (Google); Soham De (DeepMind)

Learning the Arrow of Time Nasim Rahaman (University of Heidelberg)*; Steffen Wolf (Heidelberg University); Anirudh Goyal (University of Montreal); Roman Remme (Heidelberg University); Yoshua Bengio (Mila)

Scale Steerable Filters for Locally Scale-Invariant Convolutional Neural Networks Rohan Ghosh (National University of Singapore)*; Anupam Gupta (National University of Singapore)

A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off Yaniv Blumenfeld (Technion)*; Dar Gilboa (Columbia University); Daniel Soudry (Technion)

Rethinking Complexity in Deep Learning: A View from Function Space Aristide Baratin (Mila, Université de Montréal)*; Thomas George (MILA, Université de Montréal); César Laurent (Mila, Université de Montréal); Valentin Thomas (MILA); Guillaume Lajoie (Université de Montréal, Mila); Simon Lacoste-Julien (Mila, Université de Montréal)

The Deep Learning Limit: Negative Neural Network eigenvalues just noise? Diego Granziol (Oxford)*; Stefan Zohren (University of Oxford); Stephen Roberts (Oxford); Dmitry P Vetrov (Higher School of Economics); Andrew Gordon Wilson (Cornell University); Timur Garipov (Samsung AI Center in Moscow)

Gradient descent in Gaussian random fields as a toy model for high-dimensional optimisation Mariano Chouza (Tower Research Capital); Stephen Roberts (Oxford); Stefan Zohren (University of Oxford)*

Deep Learning for Inverse Problems Abhejit Rajagopal (University of California, Santa Barbara)*; Vincent R Radzicki (University of California, Santa Barbara)

Author Information

Roman Novak (Google Brain)
Maxime Gabella (MAGMA Learning)
Frederic Dreyer (University of Oxford)

Theoretical physics researcher working on quantum chromodynamics and applications of machine learning for the LHC.

Siavash Golkar (New York University)
Anh Tong (Ulsan National Institute of Science and Technology)
Irina Higgins (DeepMind)

Irina Higgins is a research scientist at DeepMind, where she works in the Frontiers team. Her work aims to bring together insights from the fields of neuroscience and physics to advance general artificial intelligence through improved representation learning. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an undergraduate student in Experimental Psychology at Westminster University, followed by a DPhil at the Oxford Centre for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain. During her DPhil, Irina also worked on developing poker AI, applied machine learning in the finance sector, and worked on speech recognition at Google Research.

Mirco Milletari (Microsoft)
Joe Antognini (Whisper AI)
Sebastian Goldt (Institut de physique théorique (IPhT))

I'm an assistant professor working on theories of learning in neural networks.

Adín Ramírez Rivera (University of Campinas)
Roberto Bondesan (Qualcomm AI Research)
Ryo Karakida (National Institute of AIST)
Remi Tachet des Combes (Microsoft Research Montreal)
Michael Mahoney (UC Berkeley)
Nicholas Walker (Louisiana State University)

I am a graduate research assistant in the Department of Physics and Astronomy at Louisiana State University, studying applications of machine learning to determining critical points and crossover regions in condensed-matter systems.

Stanislav Fort (Google AI)
Samuel Smith (DeepMind)
Rohan Ghosh (National University of Singapore)
Aristide Baratin (MILA)
Diego Granziol (Oxford)
Stephen Roberts (University of Oxford)
Dmitry Vetrov (Higher School of Economics, Samsung AI Center Moscow)
Andrew Wilson (Cornell University)

Andrew Gordon Wilson is a faculty member at the Courant Institute and the Center for Data Science at NYU. His interests include probabilistic modelling, Gaussian processes, Bayesian statistics, physics-inspired machine learning, and loss surfaces and generalization in deep learning. His webpage is https://cims.nyu.edu/~andrewgw.

César Laurent (MILA)
Valentin Thomas (MILA)
Simon Lacoste-Julien (Mila, University of Montreal)

Simon Lacoste-Julien is an associate professor at Mila and DIRO at Université de Montréal, and a Canada CIFAR AI Chair holder. He also heads, part time, the Samsung SAIT AI Lab Montreal. His research interests are machine learning and applied math, with applications in related fields such as computer vision and natural language processing. He obtained a B.Sc. in mathematics, physics and computer science from McGill and a PhD in computer science from UC Berkeley, and completed a postdoc at the University of Cambridge. He spent a few years as research faculty at INRIA and École normale supérieure in Paris before returning to his roots in Montreal in 2016, answering Yoshua Bengio's call to help grow the Montreal AI ecosystem.

Dar Gilboa (Columbia University)
Daniel Soudry (Technion)
Anupam Gupta (NUS)
Anirudh Goyal (Université de Montréal)
Yoshua Bengio (Montreal Institute for Learning Algorithms)
Erich Elsen (Google)
Soham De (DeepMind)
Stanislaw Jastrzebski (New York University)
Charles H Martin (Calculation Consulting)
Samira Shabanian (Microsoft, Canada)
Aaron Courville (Université de Montréal)
Shotaro Akaho (The National Institute of Advanced Industrial Science and Technology)
Lenka Zdeborova (CNRS)
Ethan Dyer (Google)
Maurice Weiler (University of Amsterdam)
Pim de Haan (University of Amsterdam, visiting at UC Berkeley)
Taco Cohen (Qualcomm AI Research)
Max Welling (University of Amsterdam & Qualcomm)
Ping Luo (The University of Hong Kong)
Zhanglin Peng (SenseTime)
Nasim Rahaman (MPI for Intelligent Systems Tübingen / Mila, Quebec / University of Heidelberg)
Loic Matthey (DeepMind)
Danilo J. Rezende (DeepMind)

Danilo is a Senior Staff Research Scientist at Google DeepMind, where he works on probabilistic machine reasoning and learning algorithms. He holds a BA in Physics and an MSc in Theoretical Physics from École Polytechnique (Palaiseau, France) and the Institute of Theoretical Physics (São Paulo, Brazil), and a Ph.D. in Computational Neuroscience from the École Polytechnique Fédérale de Lausanne (EPFL, Switzerland). His research focuses on scalable inference methods, generative models of complex data (such as images and video), applied probability, causal reasoning, and unsupervised learning for decision-making.

Jaesik Choi (Ulsan National Institute of Science and Technology)
Kyle Cranmer (NYU)

Professor of Physics and Data Science at NYU. Executive director of the Moore-Sloan Data Science Environment at NYU. Member of the ATLAS collaboration at CERN's Large Hadron Collider (LHC). NeurIPS 2016 keynote speaker. Organizer of the Deep Learning for Physical Sciences workshop at NeurIPS 2017.

Lechao Xiao (Google Research)
Jaehoon Lee (Google Brain)
Yasaman Bahri (Google Brain)
Jeffrey Pennington (Google Brain)
Greg Yang (Microsoft Research)
Jiri Hron (University of Cambridge)
Jascha Sohl-Dickstein (Google Brain)
Guy Gur-Ari (Google)
