

Poster in Workshop: Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators

You Shall Pass: Dealing with the Zero-Gradient Problem in Predict and Optimize for Convex Optimization

Grigorii Veviurko · Wendelin Boehmer · Mathijs de Weerdt

Keywords: [ Machine Learning ] [ Differentiable Optimization ] [ Predict and Optimize ] [ ICML ]


Abstract:

In predict-and-optimize, machine learning models are trained to predict the parameters of an optimization problem, using task performance as the training objective. A key challenge is computing the Jacobian of the problem's solution with respect to the predicted parameters. For linear problems, this Jacobian is zero or undefined almost everywhere, so approximations are typically used; for non-linear convex problems, the exact Jacobian is often used instead. This paper demonstrates that the zero-gradient issue also arises in the non-linear case and introduces a smoothing technique which, combined with quadratic approximation and projection distance regularization, solves the zero-gradient problem. Experiments on a portfolio optimization problem confirm the method's efficiency.
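To make the zero-gradient issue concrete, here is a minimal, self-contained sketch; it is not the authors' implementation, and the function names and the softplus-based soft clip are illustrative assumptions. It uses a toy convex problem, projecting a predicted parameter theta onto the box [0, 1], where the exact solution map saturates and its derivative vanishes, and shows how a generic smoothing restores a usable gradient, standing in for the paper's smoothing with quadratic approximation and projection distance regularization.

import torch
from torch.nn.functional import softplus

def exact_solution(theta):
    # x*(theta) = argmin_{x in [0, 1]} (x - theta)^2 = clamp(theta, 0, 1).
    # Wherever the box constraint is active, dx*/dtheta = 0, so no
    # learning signal reaches the parameter predictor.
    return torch.clamp(theta, 0.0, 1.0)

def smoothed_solution(theta, tau=0.1):
    # Smooth surrogate for the clamp (an illustrative choice, not the
    # paper's construction): tau * (softplus(theta / tau)
    # - softplus((theta - 1) / tau)) tracks clamp(theta, 0, 1) as
    # tau -> 0 but has a strictly positive derivative everywhere.
    return tau * (softplus(theta / tau) - softplus((theta - 1.0) / tau))

theta = torch.tensor(1.5, requires_grad=True)  # prediction in the saturated region

exact_solution(theta).backward()
print(theta.grad)   # tensor(0.) -- the zero-gradient problem

theta.grad = None
smoothed_solution(theta).backward()
print(theta.grad)   # small but nonzero -- gradient flows to the predictor again

In the paper's setting, theta would be the output of a neural network and the box projection would be a general convex program differentiated through a solver, but the failure mode is the same: a solution map that is locally constant in the predicted parameters yields zero gradients for training.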
