In this paper we describe a new parallel algorithm called Fast Adaptive Sequencing Technique (FAST) for maximizing a monotone submodular function under a cardinality constraint k. This algorithm achieves the optimal 1 - 1/e approximation guarantee and is orders of magnitude faster than the state of the art on a variety of experiments over real-world data sets. Following recent work by Balkanski and Singer (2018), there has been a great deal of research on algorithms whose theoretical parallel runtime is exponentially faster than that of the algorithms used for submodular maximization over the past 40 years. However, while these new algorithms are fast in terms of asymptotic worst-case guarantees, they are computationally infeasible to use in practice even on small data sets, because the number of rounds and queries they require depends on large constants and high-degree polynomials in the precision and confidence parameters. The design principles behind the FAST algorithm we present here are a significant departure from those of recent theoretically fast algorithms. Rather than optimizing for asymptotic theoretical guarantees, the design of FAST introduces several new techniques that achieve remarkable practical and theoretical parallel runtimes. The approximation guarantee obtained by FAST is arbitrarily close to 1 - 1/e, and its asymptotic parallel runtime (adaptivity) is O(log(n) log^2(log k)) using O(n log log(k)) total queries. In experiments on large data sets, we show that FAST is orders of magnitude faster than any algorithm for submodular maximization we are aware of, including hyper-optimized parallel versions of state-of-the-art serial algorithms.
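The abstract does not spell out the mechanics of FAST itself. As a point of reference only, the following is a minimal Python sketch of the classic serial greedy algorithm (Nemhauser et al., 1978), which attains the same 1 - 1/e guarantee but requires k fully sequential rounds; these k adaptive rounds are exactly the bottleneck that low-adaptivity algorithms such as FAST are designed to avoid. The oracle f and the coverage example below are illustrative placeholders, not artifacts from the paper.

```python
# Minimal sketch of the classic serial greedy baseline for monotone
# submodular maximization under a cardinality constraint k.
# This is NOT the FAST algorithm described above; it is the standard
# 1 - 1/e baseline whose k sequential rounds FAST is built to avoid.

def greedy(f, ground_set, k):
    """Select k elements by repeatedly adding the element with the
    largest marginal gain f(S | {x}) - f(S)."""
    S = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for x in ground_set - S:
            gain = f(S | {x}) - f(S)
            if gain > best_gain:
                best, best_gain = x, gain
        S.add(best)
    return S

# Illustrative monotone submodular function: set coverage.
if __name__ == "__main__":
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d", "e"}}

    def coverage(S):
        # Number of distinct items covered by the chosen sets.
        return len(set().union(*(sets[i] for i in S))) if S else 0

    # Any greedy solution here covers 4 items, e.g. {1, 4} or {2, 4}.
    print(greedy(coverage, set(sets), k=2))
```

Each greedy iteration depends on the set chosen so far, so this baseline has adaptivity k; FAST's O(log(n) log^2(log k)) adaptivity means its number of sequential rounds of parallel queries grows only polylogarithmically instead.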
Author Information
Adam Breuer (Harvard University)
Eric Balkanski (Harvard University)
Yaron Singer (Harvard University)
More from the Same Authors
- 2021 Poster: Instance Specific Approximations for Submodular Maximization
  Eric Balkanski · Sharon Qian · Yaron Singer
- 2021 Spotlight: Instance Specific Approximations for Submodular Maximization
  Eric Balkanski · Sharon Qian · Yaron Singer
- 2020: Exponentially Faster Algorithms for Machine Learning
  Yaron Singer
- 2020 Poster: Predicting Choice with Set-Dependent Aggregation
  Nir Rosenfeld · Kojin Oshiba · Yaron Singer
- 2019 Poster: Robust Influence Maximization for Hyperparametric Models
  Dimitrios Kalimeris · Gal Kaplun · Yaron Singer
- 2019 Oral: Robust Influence Maximization for Hyperparametric Models
  Dimitrios Kalimeris · Gal Kaplun · Yaron Singer
- 2018 Poster: Approximation Guarantees for Adaptive Sampling
  Eric Balkanski · Yaron Singer
- 2018 Oral: Approximation Guarantees for Adaptive Sampling
  Eric Balkanski · Yaron Singer
- 2018 Poster: Learning Diffusion using Hyperparameters
  Dimitrios Kalimeris · Yaron Singer · Karthik Subbian · Udi Weinsberg
- 2018 Poster: Learning to Optimize Combinatorial Functions
  Nir Rosenfeld · Eric Balkanski · Amir Globerson · Yaron Singer
- 2018 Oral: Learning to Optimize Combinatorial Functions
  Nir Rosenfeld · Eric Balkanski · Amir Globerson · Yaron Singer
- 2018 Oral: Learning Diffusion using Hyperparameters
  Dimitrios Kalimeris · Yaron Singer · Karthik Subbian · Udi Weinsberg
- 2017 Poster: Robust Guarantees of Stochastic Greedy Algorithms
  Yaron Singer · Avinatan Hassidim
- 2017 Talk: Robust Guarantees of Stochastic Greedy Algorithms
  Yaron Singer · Avinatan Hassidim