Poster in Workshop: Workshop on Theoretical Foundations of Foundation Models (TF2M)

Meta-optimization for Deep Learning via Nonstochastic Control

Xinyi Chen · Evan Dogariu · Zhou Lu · Elad Hazan


Abstract:

Hyperparameter tuning in mathematical optimization is a notoriously difficult problem. Recent tools from online control give rise to a provable methodology, called meta-optimization, for hyperparameter tuning in convex optimization. In this work, we extend this methodology to nonconvex optimization and the training of deep neural networks. We present an algorithm for nonconvex meta-optimization that leverages a reduction from nonconvex to convex optimization, and we investigate its applicability to deep learning tasks on academic-scale datasets.
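
To make the meta-optimization setting concrete, here is a minimal toy sketch. It is not the authors' control-based algorithm; it only illustrates the problem structure the abstract describes: an outer online learner tunes a hyperparameter (the learning rate) of an inner optimizer across repeated training episodes, aiming to compete with the best fixed learning rate in hindsight. The outer learner shown is a standard Hedge (multiplicative-weights) update, and the objective, candidate grid, and constants (`nonconvex_loss`, `candidate_lrs`, `eta`) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonconvex_loss(x):
    """A simple one-dimensional nonconvex objective (illustrative)."""
    return x**4 - 3.0 * x**2 + 0.5 * x

def nonconvex_grad(x):
    """Gradient of the objective above."""
    return 4.0 * x**3 - 6.0 * x + 0.5

def run_episode(lr, steps=50):
    """Inner loop: plain gradient descent with a fixed learning rate."""
    x = rng.normal()
    for _ in range(steps):
        x -= lr * nonconvex_grad(x)
        # Keep iterates bounded so an overly large lr cannot overflow.
        x = float(np.clip(x, -10.0, 10.0))
    return nonconvex_loss(x)

# Outer loop: Hedge over a grid of candidate learning rates, observing
# the terminal loss of every candidate each episode (full-information
# feedback, assumed here for simplicity).
candidate_lrs = np.array([1e-3, 1e-2, 1e-1, 3e-1])
weights = np.ones(len(candidate_lrs))
eta = 0.5  # step size of the outer online learner

for episode in range(50):
    losses = np.array([run_episode(lr) for lr in candidate_lrs])
    weights *= np.exp(-eta * losses)
    weights /= weights.sum()  # renormalize to avoid over/underflow

print("learning rate favored in hindsight:", candidate_lrs[np.argmax(weights)])
```

The sketch shows the regret-minimization flavor of meta-optimization, where performance is measured against the best hyperparameter in hindsight; the paper's contribution, by contrast, uses nonstochastic-control tools and a nonconvex-to-convex reduction rather than a simple experts algorithm over a fixed grid.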
