

Workshop

Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators

Felix Petersen · Marco Cuturi · Hilde Kuehne · Christian Borgelt · Lawrence Stewart · Michael Kagan · Stefano Ermon

Stolz 0

Fri 26 Jul, midnight PDT

Gradients and derivatives are integral to machine learning, as they enable gradient-based optimization. In many real applications, however, models rest on algorithmic components that implement discrete decisions, or rely on discrete intermediate representations and structures. These discrete steps are intrinsically non-differentiable and accordingly break the flow of gradients. Using gradient-based approaches to learn the parameters of such models therefore requires making these non-differentiable components differentiable. This can be done with careful consideration, notably by using smoothing or relaxations to construct differentiable proxies for these components. With the advent of modular deep learning frameworks, these ideas have become more popular than ever in many fields of machine learning, generating in a short time span a multitude of "differentiable everything" approaches, impacting topics as varied as rendering, sorting and ranking, convex optimizers, shortest paths, dynamic programming, physics simulations, NN architecture search, top-k, graph algorithms, weakly- and self-supervised learning, and many more.
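
To make the smoothing idea concrete, here is a minimal sketch (not taken from the workshop itself) of one of the simplest differentiable relaxations: replacing a hard argmax/one-hot selection with a temperature-controlled softmax so that gradients can flow through the selection step. The function names and the toy objective below are illustrative assumptions, not part of any workshop material.

```python
# Minimal sketch of a differentiable relaxation of argmax (illustrative only).
import torch

def hard_one_hot(scores: torch.Tensor) -> torch.Tensor:
    """Non-differentiable selection: argmax blocks gradient flow entirely."""
    idx = scores.argmax(dim=-1)
    return torch.nn.functional.one_hot(idx, scores.shape[-1]).float()

def soft_one_hot(scores: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Differentiable proxy: softmax with temperature tau approaches the hard
    one-hot selection as tau -> 0 while staying smooth for any tau > 0."""
    return torch.softmax(scores / tau, dim=-1)

scores = torch.tensor([1.0, 2.0, 0.5], requires_grad=True)
target = torch.tensor([1.0, 0.0, 0.0])  # toy objective: prefer index 0

loss_soft = ((soft_one_hot(scores) - target) ** 2).sum()
loss_soft.backward()
print("gradient through the relaxation:", scores.grad)  # non-zero, usable for learning

# By contrast, building the loss from hard_one_hot(scores) gives no usable
# gradient: the discrete argmax step detaches the output from the scores.
```

The same recipe, i.e. substituting a smooth, temperature-controlled surrogate for a discrete operation, underlies many of the differentiable sorting, ranking, top-k, and shortest-path methods the workshop covers, with the temperature trading off faithfulness to the discrete operation against gradient quality.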


Schedule