Poster
Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions
Wenruo Bai · Jeff Bilmes
We analyze the performance of the greedy algorithm, and of a discrete semi-gradient-based algorithm, for maximizing the sum of a suBmodular and a suPermodular (BP) function (both non-negative and monotone non-decreasing) under two types of constraints: a cardinality constraint, or $p \geq 1$ matroid independence constraints. These problems occur naturally in several real-world applications in data science, machine learning, and artificial intelligence. In general, they are inapproximable to any factor. However, using the curvature $\kappa_f$ of the submodular term, and introducing $\kappa^g$ for the supermodular term (a natural dual curvature for supermodular functions), both of which are computable in linear time, we show that BP maximization can be efficiently approximated by both the greedy and the semi-gradient-based algorithms. The algorithms yield multiplicative guarantees of $\frac{1}{\kappa_f}\left[1-e^{-(1-\kappa^g)\kappa_f}\right]$ and $\frac{1-\kappa^g}{(1-\kappa^g)\kappa_f + p}$ for the two types of constraints, respectively. For purely monotone supermodular constrained maximization, these reduce to $1-\kappa^g$ and $(1-\kappa^g)/p$, respectively. We also analyze the hardness of BP maximization and show that our guarantees match the hardness bounds up to a constant factor and up to a factor of $O(\ln p)$, respectively. Computational experiments supporting our analysis are also provided.
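As a rough illustration of the setting, the greedy algorithm analyzed in the paper can be sketched as below. The toy objectives are illustrative assumptions, not from the paper: a coverage function stands in for the monotone submodular term $f$, and a pairwise-synergy bonus stands in for the monotone supermodular term $g$.

```python
# Sketch of greedy BP maximization under a cardinality constraint:
# repeatedly add the element with the largest marginal gain of h = f + g.
# The objectives below are hypothetical toy examples.

def greedy_bp(h, ground_set, k):
    """Greedily select up to k elements to maximize the set function h."""
    S = set()
    for _ in range(k):
        gains = {v: h(S | {v}) - h(S) for v in ground_set - S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:  # cannot happen for strictly monotone h
            break
        S.add(best)
    return S

# Toy BP objective on the ground set {0, ..., 4}.
features = {0: {"a", "b"}, 1: {"b"}, 2: {"c"}, 3: {"a", "c"}, 4: {"d"}}

def f(S):  # feature coverage: monotone submodular
    return len(set().union(*(features[v] for v in S))) if S else 0

def g(S):  # small bonus per pair selected: monotone supermodular
    return 0.1 * len(S) * (len(S) - 1) / 2

h = lambda S: f(S) + g(S)

S = greedy_bp(h, set(features), k=3)
```

On this toy instance, any tie-breaking order leads greedy to cover all four features, so `h(S)` ends at 4.3; the paper's guarantees bound how far such greedy solutions can fall below the optimum in terms of the two curvatures.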
Author Information
Wenruo Bai (University of Washington)
Jeff Bilmes (UW)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions
  Thu. Jul 12th, 02:30 -- 02:40 PM, Room K11
More from the Same Authors
- 2021: Tighter m-DPP Coreset Sample Complexity Bounds
  Gantavya Bhatt · Jeff Bilmes
- 2023: Accelerating Batch Active Learning Using Continual Learning Techniques
  Gantavya Bhatt · Arnav M Das · · Rui Yang · Vianne Gao · Jeff Bilmes
- 2021: Tighter m-DPP Coreset Sample Complexity Bounds
  Jeff Bilmes · Gantavya Bhatt
- 2021: More Information, Less Data
  Jeff Bilmes
- 2021: Introduction by the Organizers
  Abir De · Rishabh Iyer · Ganesh Ramakrishnan · Jeff Bilmes
- 2021 Workshop: Subset Selection in Machine Learning: From Theory to Applications
  Rishabh Iyer · Abir De · Ganesh Ramakrishnan · Jeff Bilmes
- 2020 Poster: Coresets for Data-efficient Training of Machine Learning Models
  Baharan Mirzasoleiman · Jeff Bilmes · Jure Leskovec
- 2020 Poster: Time-Consistent Self-Supervision for Semi-Supervised Learning
  Tianyi Zhou · Shengjie Wang · Jeff Bilmes
- 2019: Jeff Bilmes: Deep Submodular Synergies
  Jeff Bilmes
- 2019 Poster: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation
  Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Oral: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation
  Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Poster: Jumpout : Improved Dropout for Deep Neural Networks with ReLUs
  Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Poster: Combating Label Noise in Deep Learning using Abstention
  Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof
- 2019 Oral: Jumpout : Improved Dropout for Deep Neural Networks with ReLUs
  Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Oral: Combating Label Noise in Deep Learning using Abstention
  Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof
- 2018 Poster: Constrained Interacting Submodular Groupings
  Andrew Cotter · Mahdi Milani Fard · Seungil You · Maya Gupta · Jeff Bilmes
- 2018 Oral: Constrained Interacting Submodular Groupings
  Andrew Cotter · Mahdi Milani Fard · Seungil You · Maya Gupta · Jeff Bilmes