Workshop
Sat Jul 29 12:00 PM -- 08:00 PM (PDT) @ Meeting Room 320
Duality Principles for Modern Machine Learning
Thomas Moellenhoff · Zelda Mariet · Mathieu Blondel · Khan Emtiyaz


Duality is a pervasive and important principle in mathematics. It has fascinated researchers in many different fields and has been used extensively in optimization, statistics, and machine learning (ML), giving rise to powerful tools such as Fenchel duality in convex optimization, representer theorems in kernel methods and Bayesian nonparametrics, and dually flat spaces in information geometry. While such applications played an important role in the past, we see little recent work on duality principles, especially in deep learning. For example, Lagrange duality can be useful for model explanation because it allows us to measure the sensitivity of an optimal solution to certain perturbations, but this is not yet fully exploited. The slowdown is perhaps due to a growing focus on nonconvex and nonlinear problems, where duality does not seem to be directly applicable, and there have not been any workshops on duality in recent years. With this workshop, we aim to revive the ML community's interest in duality principles. The goal is to bring together researchers working on various duality concepts from many different fields and to discuss new applications for modern machine learning, focusing especially on model understanding, explanation, and adaptation in deep learning and reinforcement learning.
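The sensitivity interpretation mentioned above is the classical fact that, under suitable regularity conditions, the optimal Lagrange multiplier equals the derivative of the optimal value with respect to a perturbation of the constraint. A minimal sketch on a toy problem (minimize x^2 subject to x >= u, with u, the perturbation level, chosen here purely for illustration):

```python
def primal_value(u):
    # p*(u) = min_x x^2 subject to x >= u.
    # For u > 0 the constraint is active at the optimum, so x* = u.
    x_star = max(u, 0.0)
    return x_star ** 2

def dual_multiplier(u):
    # KKT stationarity: 2x - lam = 0; with the active constraint x = u,
    # the optimal multiplier is lam* = 2u (and 0 if the constraint is slack).
    return 2.0 * max(u, 0.0)

u = 1.5
eps = 1e-6
# Finite-difference sensitivity of the optimal value to the perturbation u.
sensitivity = (primal_value(u + eps) - primal_value(u - eps)) / (2 * eps)
print(abs(sensitivity - dual_multiplier(u)) < 1e-4)  # True: lam* = dp*/du
```

Here the multiplier lam* = 2u matches dp*/du exactly, illustrating how dual variables quantify how much the optimum would change under a constraint perturbation.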

Opening remarks
Ronny Bergman: Fenchel Duality Theory on Riemannian Manifolds and the Riemannian Chambolle-Pock Algorithm (Invited talk)
Coffee Break (Break)
Jaeyeon Kim: Time-Reversed Dissipation Induces Duality Between Minimizing Gradient Norm and Function Value (Contributed talk)
Sina Baharlouei: RIFLE: Imputation and Robust Inference from Low Order Marginals (Contributed talk)
Joseph Shenouda: A Representer Theorem for Vector-Valued Neural Networks: Insights on Weight Decay Training and Widths of Deep Neural Networks (Contributed talk)
Jia-Jie Zhu: Duality from Gradient Flow Force-Balance to Distributionally Robust Learning (Invited talk)
Taiji Suzuki: Convergence of mean field Langevin dynamics: Duality viewpoint and neural network optimization (Invited talk)
Len Spek: Duality for Neural Networks through Reproducing Kernel Banach Spaces (Invited talk)
Lunch Break (Break)
Poster session
Amy Zhang: Dual RL: Unification and New Methods for Reinforcement and Imitation Learning (Invited talk)
Coffee Break / Poster session (Break)
Ehsan Amid: A Dualistic View of Activations in Deep Neural Networks (Invited talk)
Panel discussion
Estimating Joint interventional distributions from marginal interventional data (Poster)
Learning with Primal-Dual Spectral Risk Measures: a Fast Incremental Algorithm (Poster)
Kernel Mirror Prox and RKHS Gradient Flow for Mixed Functional Nash Equilibrium (Poster)
Duality Principle and Biologically Plausible Learning: Connecting the Representer Theorem and Hebbian Learning (Poster)
RIFLE: Imputation and Robust Inference from Low Order Marginals (Poster)
Implicit Interpretation of Importance Weight Aware Updates (Poster)
Time-Reversed Dissipation Induces Duality Between Minimizing Gradient Norm and Function Value (Poster)
Decision-Aware Actor-Critic with Function Approximation and Theoretical Guarantees (Poster)
A Representer Theorem for Vector-Valued Neural Networks: Insights on Weight Decay Training and Widths of Deep Neural Networks (Poster)
The Power of Duality Principle in Offline Average-Reward Reinforcement Learning (Poster)
A max-affine spline approximation of neural networks using the Legendre transform of a convex-concave representation (Poster)
On the Fisher-Rao Gradient of the Evidence Lower Bound (Poster)
Duality in Multi-View Restricted Kernel Machines (Poster)
A Dual Formulation for Probabilistic Principal Component Analysis (Poster)
Controlling the Inductive Bias of Wide Neural Networks by Modifying the Kernel's Spectrum (Poster)
Sparse Function-space Representation of Neural Networks (Poster)
Reward-Based Reinforcement Learning with Risk Constraints (Poster)
Memory Maps to Understand Models (Poster)
Energy-Based Non-Negative Tensor Factorization via Multi-Body Modeling (Poster)