Poster

Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity

Matthew Fahrbach · Vahab Mirrokni · Morteza Zadimoghaddam

Pacific Ballroom #141

Keywords: [ Parallel and Distributed Learning ] [ Large Scale Learning and Big Data ] [ Combinatorial Optimization ]


Abstract: Submodular maximization is a general optimization problem with a wide range of applications in machine learning (e.g., active learning, clustering, and feature selection). In large-scale optimization, the parallel running time of an algorithm is governed by its adaptivity, which measures the number of sequential rounds needed if the algorithm can execute polynomially many independent oracle queries in parallel. While low adaptivity is ideal, it is not sufficient for an algorithm to be efficient in practice: in many applications of distributed submodular optimization, the number of function evaluations becomes prohibitively large. Motivated by these applications, we study the adaptivity and query complexity of submodular maximization. In this paper, we give the first constant-factor approximation algorithm for maximizing a non-monotone submodular function subject to a cardinality constraint $k$ that runs in $O(\log(n))$ adaptive rounds and makes $O(n \log(k))$ oracle queries in expectation. In our empirical study, we use three real-world applications to compare our algorithm with several benchmarks for non-monotone submodular maximization. The results demonstrate that our algorithm finds competitive solutions using significantly fewer rounds and queries.
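
To make the adaptivity and query-complexity bookkeeping concrete, below is a minimal Python sketch of a standard sequential baseline (a simplified Random Greedy in the style of Buchbinder et al. 2014), not the algorithm from this paper. It counts one adaptive round per iteration, since the marginal-gain queries within an iteration are independent and could run in parallel; the objective, the `random_greedy` function, and the toy coverage instance are illustrative assumptions. This baseline uses $O(k)$ rounds and $O(nk)$ queries, which is exactly the gap the paper's $O(\log(n))$-round, $O(n \log(k))$-query algorithm closes.

```python
import random

def random_greedy(f, ground_set, k):
    """Simplified Random Greedy baseline for submodular maximization under
    a cardinality constraint, with bookkeeping for adaptive rounds and
    oracle queries. NOT the poster's algorithm: it needs O(k) adaptive
    rounds and O(nk) queries, versus the paper's O(log n) rounds and
    O(n log k) queries in expectation."""
    S, rounds, queries = set(), 0, 0
    for _ in range(k):
        # All marginal-gain queries in this loop body are independent,
        # so they count as a single adaptive round.
        rounds += 1
        base = f(S); queries += 1
        gains = {}
        for e in ground_set - S:
            gains[e] = f(S | {e}) - base
            queries += 1
        if not gains:
            break
        # Pick uniformly among the k largest marginal gains; skipping
        # non-positive gains stands in for Random Greedy's dummy elements.
        top = sorted(gains, key=gains.get, reverse=True)[:k]
        e = random.choice(top)
        if gains[e] > 0:
            S.add(e)
    return S, rounds, queries

if __name__ == "__main__":
    # Toy coverage objective (hypothetical instance): f(S) = number of
    # distinct items covered by the sets chosen in S.
    coverage = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
    f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0
    S, rounds, queries = random_greedy(f, set(coverage), k=2)
    print(S, rounds, queries)
```

On the toy instance this runs for $k = 2$ rounds and roughly $n + 1$ queries per round; the poster's contribution is an algorithm whose round count grows logarithmically in $n$ rather than linearly in $k$, while keeping the total query count near-linear.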