Poster
Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions
Wenruo Bai · Jeff Bilmes
We analyze the performance of the greedy algorithm, and also of a discrete semigradient-based algorithm, for maximizing the sum of a suBmodular and suPermodular (BP) function (both of which are nonnegative monotone nondecreasing) under two types of constraints: either a cardinality constraint or $p\geq 1$ matroid independence constraints. These problems occur naturally in several real-world applications in data science, machine learning, and artificial intelligence. In general, the problems are inapproximable to within any factor. However, using the curvature $\kappa_f$ of the submodular term, and introducing $\kappa^g$ for the supermodular term (a natural dual curvature for supermodular functions), both of which are computable in linear time, we show that BP maximization can be efficiently approximated by both the greedy and the semigradient-based algorithms. The algorithms yield multiplicative guarantees of $\frac{1}{\kappa_f}\left[1-e^{-(1-\kappa^g)\kappa_f}\right]$ and $\frac{1-\kappa^g}{(1-\kappa^g)\kappa_f + p}$ for the two types of constraints, respectively. For pure monotone supermodular constrained maximization, these reduce to $1-\kappa^g$ and $(1-\kappa^g)/p$ for the two types of constraints, respectively. We also analyze the hardness of BP maximization and show that our guarantees match the hardness up to a constant factor and up to a factor of $O(\ln(p))$, respectively. Computational experiments supporting our analysis are also provided.
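The greedy algorithm analyzed in the abstract can be sketched for the cardinality-constrained case as follows. This is a hypothetical illustration, not the authors' code: the objective components (a weighted-coverage submodular `f` and a squared-modular supermodular `g`) are illustrative stand-ins for a BP objective, and the helper names are assumptions.

```python
# Hypothetical sketch of greedy BP maximization under |S| <= k.
# f(S): coverage count  -- monotone submodular, nonnegative, f(empty) = 0
# g(S): (sum of weights)^2 -- monotone supermodular for nonnegative weights

def coverage(S, sets):
    # f(S) = number of ground elements covered by the chosen sets
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def super_sq(S, w):
    # g(S) = (sum_{i in S} w_i)^2, supermodular when all w_i >= 0
    return sum(w[i] for i in S) ** 2

def greedy_bp(ground, k, sets, w):
    """Greedily add the item with the largest marginal gain of f + g."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for v in sorted(ground - S):  # sorted for deterministic tie-breaking
            gain = (coverage(S | {v}, sets) + super_sq(S | {v}, w)
                    - coverage(S, sets) - super_sq(S, w))
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            break  # no item with positive marginal gain remains
        S.add(best)
    return S
```

Per the abstract, on such instances greedy is guaranteed a $\frac{1}{\kappa_f}\left[1-e^{-(1-\kappa^g)\kappa_f}\right]$ fraction of the optimum, where $\kappa_f$ and $\kappa^g$ are the curvatures of the two terms.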
Author Information
Wenruo Bai (University of Washington)
Jeff Bilmes (UW)
Related Events (a corresponding poster, oral, or spotlight)

2018 Oral: Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions »
Thu. Jul 12th 02:30 – 02:40 PM Room K11
More from the Same Authors

2021 : Tighter m-DPP Coreset Sample Complexity Bounds »
Gantavya Bhatt · Jeff Bilmes 
2023 : Accelerating Batch Active Learning Using Continual Learning Techniques »
Gantavya Bhatt · Arnav M Das · · Rui Yang · Vianne Gao · Jeff Bilmes 
2021 : Tighter m-DPP Coreset Sample Complexity Bounds »
Jeff Bilmes · Gantavya Bhatt 
2021 : More Information, Less Data »
Jeff Bilmes 
2021 : Introduction by the Organizers »
Abir De · Rishabh Iyer · Ganesh Ramakrishnan · Jeff Bilmes 
2021 Workshop: Subset Selection in Machine Learning: From Theory to Applications »
Rishabh Iyer · Abir De · Ganesh Ramakrishnan · Jeff Bilmes 
2020 Poster: Coresets for Data-efficient Training of Machine Learning Models »
Baharan Mirzasoleiman · Jeff Bilmes · Jure Leskovec 
2020 Poster: Time-Consistent Self-Supervision for Semi-Supervised Learning »
Tianyi Zhou · Shengjie Wang · Jeff Bilmes 
2019 : Jeff Bilmes: Deep Submodular Synergies »
Jeff Bilmes 
2019 Poster: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation »
Shengjie Wang · Tianyi Zhou · Jeff Bilmes 
2019 Oral: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation »
Shengjie Wang · Tianyi Zhou · Jeff Bilmes 
2019 Poster: Jumpout: Improved Dropout for Deep Neural Networks with ReLUs »
Shengjie Wang · Tianyi Zhou · Jeff Bilmes 
2019 Poster: Combating Label Noise in Deep Learning using Abstention »
Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof 
2019 Oral: Jumpout: Improved Dropout for Deep Neural Networks with ReLUs »
Shengjie Wang · Tianyi Zhou · Jeff Bilmes 
2019 Oral: Combating Label Noise in Deep Learning using Abstention »
Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof 
2018 Poster: Constrained Interacting Submodular Groupings »
Andrew Cotter · Mahdi Milani Fard · Seungil You · Maya Gupta · Jeff Bilmes 
2018 Oral: Constrained Interacting Submodular Groupings »
Andrew Cotter · Mahdi Milani Fard · Seungil You · Maya Gupta · Jeff Bilmes