Learning to Optimize Combinatorial Functions »
Submodular functions have become a ubiquitous tool in machine learning. They are learnable from data, and can be optimized efficiently and with guarantees. Nonetheless, recent negative results show that optimizing learned surrogates of submodular functions can result in arbitrarily bad approximations of the true optimum. Our goal in this paper is to highlight the source of this hardness, and propose an alternative criterion for optimizing general combinatorial functions from sampled data. We prove a tight equivalence showing that a class of functions is optimizable if and only if it can be learned. We provide efficient and scalable optimization algorithms for several function classes of interest, and demonstrate their utility on the task of optimally choosing trending social media items.
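To make the abstract's claim that submodular functions "can be optimized efficiently and with guarantees" concrete, below is a minimal sketch of the classic greedy algorithm for maximizing a monotone submodular set function under a cardinality constraint, which achieves a (1 - 1/e)-approximation (Nemhauser et al., 1978). This is background illustration only, not the paper's optimization-from-samples framework; the coverage function and the names greedy_maximize and coverage_sets are hypothetical.

```python
# Sketch: greedy maximization of a monotone submodular function f
# under a cardinality constraint k. For such f, greedy is guaranteed
# to return a set whose value is at least (1 - 1/e) of the optimum.
# Not the paper's method; a standard illustrative baseline.

def greedy_maximize(f, ground_set, k):
    """Greedily add the element with the largest marginal gain, k times."""
    selected = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)  # marginal gain of e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # ground set exhausted
            break
        selected.add(best)
    return selected

# Hypothetical running example: coverage, a monotone submodular function.
coverage_sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a"}}
f = lambda S: len(set().union(*(coverage_sets[e] for e in S))) if S else 0
print(greedy_maximize(f, set(coverage_sets), k=2))  # e.g. {1, 3}
```

The paper's point is that when only samples of f are available and a learned surrogate is optimized in place of f, this kind of guarantee can break down arbitrarily badly, motivating its alternative optimizability-equals-learnability criterion.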
Author Information
Nir Rosenfeld (Harvard University)
Eric Balkanski (Harvard)
Amir Globerson (Tel Aviv University, Google)
Yaron Singer (Harvard)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Learning to Optimize Combinatorial Functions »
  Wed Jul 11th 03:00 -- 03:20 PM Room K11
More from the Same Authors
- 2020 Poster: Predicting Choice with Set-Dependent Aggregation »
  Nir Rosenfeld · Kojin Oshiba · Yaron Singer
- 2020 Poster: The FAST Algorithm for Submodular Maximization »
  Adam Breuer · Eric Balkanski · Yaron Singer
- 2019 Poster: Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem »
  Alon Brutzkus · Amir Globerson
- 2019 Oral: Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem »
  Alon Brutzkus · Amir Globerson
- 2019 Poster: Robust Influence Maximization for Hyperparametric Models »
  Dimitrios Kalimeris · Gal Kaplun · Yaron Singer
- 2019 Oral: Robust Influence Maximization for Hyperparametric Models »
  Dimitrios Kalimeris · Gal Kaplun · Yaron Singer
- 2018 Poster: Approximation Guarantees for Adaptive Sampling »
  Eric Balkanski · Yaron Singer
- 2018 Oral: Approximation Guarantees for Adaptive Sampling »
  Eric Balkanski · Yaron Singer
- 2018 Poster: Learning Diffusion using Hyperparameters »
  Dimitrios Kalimeris · Yaron Singer · Karthik Subbian · Udi Weinsberg
- 2018 Poster: Predict and Constrain: Modeling Cardinality in Deep Structured Prediction »
  Nataly Brukhim · Amir Globerson
- 2018 Oral: Learning Diffusion using Hyperparameters »
  Dimitrios Kalimeris · Yaron Singer · Karthik Subbian · Udi Weinsberg
- 2018 Oral: Predict and Constrain: Modeling Cardinality in Deep Structured Prediction »
  Nataly Brukhim · Amir Globerson
- 2017 Poster: Robust Guarantees of Stochastic Greedy Algorithms »
  Yaron Singer · Avinatan Hassidim
- 2017 Talk: Robust Guarantees of Stochastic Greedy Algorithms »
  Yaron Singer · Avinatan Hassidim
- 2017 Poster: Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs »
  Alon Brutzkus · Amir Globerson
- 2017 Poster: Learning Infinite Layer Networks without the Kernel Trick »
  Roi Livni · Daniel Carmon · Amir Globerson
- 2017 Talk: Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs »
  Alon Brutzkus · Amir Globerson
- 2017 Talk: Learning Infinite Layer Networks without the Kernel Trick »
  Roi Livni · Daniel Carmon · Amir Globerson