We introduce the problem of grouping a finite ground set into blocks, where each block is a subset of the ground set and where: (i) each block is individually highly valued by a submodular function (both robustly and in the average case) while satisfying block-specific matroid constraints; and (ii) the blocks interact, being jointly scored highly and thus mutually non-redundant. Submodular functions are good models of information and diversity, so the above can be seen as grouping the ground set into matroid-constrained blocks that are both intra- and inter-diverse. Potential applications include forming ensembles of classification/regression models, partitioning data for parallel processing, and summarization. In the non-robust case, we reduce the problem to non-monotone submodular maximization subject to multiple matroid constraints. In the mixed robust/average case, we offer a bi-criterion guarantee for a polynomial-time deterministic algorithm and a probabilistic guarantee for a randomized algorithm, provided the involved submodular functions (including the inter-block interaction terms) are monotone. We close with a case study in which we use these algorithms to find high-quality, diverse ensembles of classifiers, with good results.
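To make the setting concrete, the sketch below illustrates the classic greedy rule for monotone submodular maximization under a partition matroid constraint; this is a standard textbook building block for the kind of matroid-constrained block selection the abstract describes, not the paper's actual algorithm. The coverage function used as the objective and the part/capacity names are illustrative choices.

```python
def coverage(sets, chosen):
    """Submodular objective: number of items covered by the chosen sets."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def greedy_partition_matroid(sets, parts, capacities):
    """Greedy selection respecting a partition matroid.

    sets: list of candidate sets (indexed by element).
    parts: element index -> part label.
    capacities: part label -> max elements selectable from that part.
    """
    chosen = []
    used = {p: 0 for p in capacities}
    candidates = set(range(len(sets)))
    while True:
        best, best_gain = None, 0
        for i in sorted(candidates):
            p = parts[i]
            if used[p] >= capacities[p]:
                continue  # part already at capacity: element infeasible
            gain = coverage(sets, chosen + [i]) - coverage(sets, chosen)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break  # no feasible element with positive marginal gain
        chosen.append(best)
        used[parts[best]] += 1
        candidates.remove(best)
    return chosen

# Example: 4 candidate sets split into 2 parts, capacity 1 each.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1}]
parts = {0: "a", 1: "a", 2: "b", 3: "b"}
caps = {"a": 1, "b": 1}
print(greedy_partition_matroid(sets, parts, caps))  # -> [0, 2]
```

For a monotone submodular objective and a single matroid constraint, this greedy rule gives a 1/2-approximation; the paper's contribution concerns the harder non-monotone, multi-matroid, and mixed robust/average settings, for which this simple sketch is only a starting point.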
Author Information
Andrew Cotter (Google AI)
Mahdi Milani Fard (Google)
Seungil You (Google)
Maya Gupta (Google)
Jeff Bilmes (UW)
Related Events (a corresponding poster, oral, or spotlight)
2018 Poster: Constrained Interacting Submodular Groupings »
Thu. Jul 12th 04:15 -- 07:00 PM Room Hall B #118
More from the Same Authors
2021 : Tighter m-DPP Coreset Sample Complexity Bounds »
Gantavya Bhatt · Jeff Bilmes -
2021 : More Information, Less Data »
Jeff Bilmes -
2021 : Introduction by the Organizers »
Abir De · Rishabh Iyer · Ganesh Ramakrishnan · Jeff Bilmes -
2021 Workshop: Subset Selection in Machine Learning: From Theory to Applications »
Rishabh Iyer · Abir De · Ganesh Ramakrishnan · Jeff Bilmes -
2021 Poster: Optimizing Black-box Metrics with Iterative Example Weighting »
Gaurush Hiranandani · Jatin Mathur · Harikrishna Narasimhan · Mahdi Milani Fard · Sanmi Koyejo -
2021 Poster: Implicit rate-constrained optimization of non-decomposable objectives »
Abhishek Kumar · Harikrishna Narasimhan · Andrew Cotter -
2021 Spotlight: Implicit rate-constrained optimization of non-decomposable objectives »
Abhishek Kumar · Harikrishna Narasimhan · Andrew Cotter -
2021 Spotlight: Optimizing Black-box Metrics with Iterative Example Weighting »
Gaurush Hiranandani · Jatin Mathur · Harikrishna Narasimhan · Mahdi Milani Fard · Sanmi Koyejo -
2020 Poster: Coresets for Data-efficient Training of Machine Learning Models »
Baharan Mirzasoleiman · Jeff Bilmes · Jure Leskovec -
2020 Poster: Time-Consistent Self-Supervision for Semi-Supervised Learning »
Tianyi Zhou · Shengjie Wang · Jeff Bilmes -
2020 Poster: Optimizing Black-box Metrics with Adaptive Surrogates »
Qijia Jiang · Olaoluwa Adigun · Harikrishna Narasimhan · Mahdi Milani Fard · Maya Gupta -
2019 : Jeff Bilmes: Deep Submodular Synergies »
Jeff Bilmes -
2019 Poster: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation »
Shengjie Wang · Tianyi Zhou · Jeff Bilmes -
2019 Oral: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation »
Shengjie Wang · Tianyi Zhou · Jeff Bilmes -
2019 Poster: Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints »
Andrew Cotter · Maya Gupta · Heinrich Jiang · Nati Srebro · Karthik Sridharan · Serena Wang · Blake Woodworth · Seungil You -
2019 Poster: Jumpout: Improved Dropout for Deep Neural Networks with ReLUs »
Shengjie Wang · Tianyi Zhou · Jeff Bilmes -
2019 Poster: Metric-Optimized Example Weights »
Sen Zhao · Mahdi Milani Fard · Harikrishna Narasimhan · Maya Gupta -
2019 Poster: Combating Label Noise in Deep Learning using Abstention »
Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof -
2019 Poster: Shape Constraints for Set Functions »
Andrew Cotter · Maya Gupta · Heinrich Jiang · Erez Louidor · James Muller · Taman Narayan · Serena Wang · Tao Zhu -
2019 Oral: Jumpout: Improved Dropout for Deep Neural Networks with ReLUs »
Shengjie Wang · Tianyi Zhou · Jeff Bilmes -
2019 Oral: Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints »
Andrew Cotter · Maya Gupta · Heinrich Jiang · Nati Srebro · Karthik Sridharan · Serena Wang · Blake Woodworth · Seungil You -
2019 Oral: Combating Label Noise in Deep Learning using Abstention »
Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof -
2019 Oral: Shape Constraints for Set Functions »
Andrew Cotter · Maya Gupta · Heinrich Jiang · Erez Louidor · James Muller · Taman Narayan · Serena Wang · Tao Zhu -
2019 Oral: Metric-Optimized Example Weights »
Sen Zhao · Mahdi Milani Fard · Harikrishna Narasimhan · Maya Gupta -
2018 Poster: Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions »
Wenruo Bai · Jeff Bilmes -
2018 Oral: Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions »
Wenruo Bai · Jeff Bilmes