Workshop
Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators
Felix Petersen · Marco Cuturi · Mathias Niepert · Hilde Kuehne · Michael Kagan · Willie Neiswanger · Stefano Ermon

Fri, Jul 28, 12:00 PM – 8:00 PM (PDT) @ Meeting Room 310
Event URL: https://differentiable.xyz

Gradients and derivatives are integral to machine learning, as they enable gradient-based optimization. In many real applications, however, models rest on algorithmic components that implement discrete decisions or rely on discrete intermediate representations and structures. These discrete steps are intrinsically non-differentiable and therefore break the flow of gradients. Learning the parameters of such models with gradient-based approaches requires making these non-differentiable components differentiable. This can be done with care, notably by using smoothing or relaxations to construct differentiable proxies for these components. With the advent of modular deep learning frameworks, these ideas have become more popular than ever across machine learning, producing in a short time span a multitude of "differentiable everything" approaches and impacting topics as varied as rendering, sorting and ranking, convex optimization, shortest paths, dynamic programming, physics simulations, neural architecture search, top-k selection, graph algorithms, weakly- and self-supervised learning, and many more.
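The relaxation idea can be illustrated in a few lines: a hard argmax is piecewise constant, so its gradient is zero almost everywhere, whereas a temperature-controlled softmax is a smooth proxy that carries useful gradients. The following is a minimal sketch in PyTorch, assuming a simple selection task; the variable names and the temperature value are illustrative and not taken from any particular workshop contribution.

    # Minimal sketch: replacing a hard, piecewise-constant argmax with a
    # temperature-controlled softmax so gradients can flow.
    # Names and the temperature value are illustrative assumptions.
    import torch

    scores = torch.tensor([1.0, 3.0, 2.0], requires_grad=True)

    # Hard selection: the one-hot argmax is piecewise constant, so its
    # derivative w.r.t. the scores is zero almost everywhere.
    hard_onehot = torch.nn.functional.one_hot(
        scores.argmax(), scores.numel()
    ).float()

    # Differentiable proxy: softmax with temperature tau. As tau -> 0 it
    # approaches the hard one-hot vector; for tau > 0 it is smooth.
    tau = 0.5
    soft_onehot = torch.softmax(scores / tau, dim=0)

    values = torch.tensor([10.0, 20.0, 30.0])
    loss = (soft_onehot * values).sum()  # differentiable "selected value"
    loss.backward()
    print(soft_onehot.detach(), scores.grad)  # nonzero gradients w.r.t. scores

The same pattern, with more structure, underlies many of the relaxations the workshop covers: differentiable sorting and ranking, top-k, and dynamic programming all replace a discrete combinatorial choice with a smooth, temperature-controlled surrogate.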

Author Information

Felix Petersen (Stanford University)
Marco Cuturi (Apple and ENSAE/CREST)

Marco is a researcher in machine learning at Apple, where he has worked since January 2022 in the Machine Learning Research team led by Samy Bengio. He has also been affiliated with ENSAE / IP Paris since 2016, working there part-time from 2018. Previously, he worked at Google Brain (2018–2022), Kyoto University (2010–2016), Princeton University (2009–2010), in the financial industry (2007–2008), and at the Institute of Statistical Mathematics (Tokyo, 2006–2007). He received his Ph.D. in 2005 from Ecole des Mines de Paris. His research interests cover differentiable optimization, time series, and optimal transport theory and its applications to ML.

Mathias Niepert (University of Stuttgart)
Hilde Kuehne (University of Frankfurt)
Michael Kagan (SLAC / Stanford)
Willie Neiswanger (Stanford University)
Stefano Ermon (Stanford University)
