

Contributed talk in Workshop: Negative Dependence and Submodularity: Theory and Applications in Machine Learning

Mode Finding for SLC Distributions via Regularized Submodular Maximization

Ehsan Kazemi · Amin Karbasi · Moran Feldman


Abstract: In this paper, we propose scalable methods for maximizing a regularized submodular function $f = g - \ell$ expressed as the difference between a monotone submodular function $g$ and a modular function $\ell$. Indeed, submodularity is inherently related to the notions of diversity, coverage, and representativeness. In particular, finding the mode (i.e., the most likely configuration) of many popular probabilistic models of diversity, such as determinantal point processes, submodular probabilistic models, and strongly log-concave distributions, involves the maximization of (regularized) submodular functions. Since a regularized function $f$ can potentially take on negative values, the classic theory of submodular maximization, which heavily relies on the non-negativity assumption of submodular functions, may not be applicable. To circumvent this challenge, we develop the first streaming and distributed algorithms for maximizing regularized submodular functions. Furthermore, we show how the mode of a strongly log-concave (SLC) distribution can be found via regularized submodular maximization.
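As a minimal sketch of the objective only, not of the paper's streaming or distributed algorithms, the Python snippet below runs plain greedy selection on a regularized objective $f(S) = g(S) - \ell(S)$, with a toy coverage function standing in for the monotone submodular $g$ and a per-element cost for the modular $\ell$. The function name `greedy_regularized` and the toy data are hypothetical; note that plain greedy carries no approximation guarantee here, since $f$ can be negative, which is exactly the difficulty the paper addresses.

```python
def greedy_regularized(ground_set, g, ell, k):
    """Greedily pick up to k elements by marginal gain of f = g - ell.

    g:   set function, assumed monotone submodular (callable on a frozenset)
    ell: dict mapping each element to its nonnegative modular cost
    """
    S = frozenset()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - S:
            # marginal gain of adding e: g(S + e) - g(S) - ell(e)
            gain = g(S | {e}) - g(S) - ell[e]
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element improves f, so stop early
            break
        S = S | {best}
    return S


# Toy usage (hypothetical data): g is a coverage function, ell a per-element cost.
items = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
g = lambda S: len(set().union(*(items[e] for e in S))) if S else 0
ell = {"a": 0.5, "b": 1.5, "c": 0.2}
print(greedy_regularized(frozenset(items), g, ell, k=2))  # selects {"a", "c"}
```

Here "b" is never chosen because its cost outweighs its marginal coverage gain, illustrating how the modular term $\ell$ penalizes low-value elements in the regularized objective.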
