Invited talk in Workshop: Negative Dependence and Submodularity: Theory and Applications in Machine Learning

Negative Dependence and Sampling

Stefanie Jegelka


Abstract:

Probability distributions with strong notions of negative dependence arise in many areas of machine learning, including diversity-inducing probabilistic models, interpretability, exploration and active learning, and randomized algorithms. Although negative dependence is, perhaps surprisingly, more delicate than its positive counterpart, it enjoys rich mathematical connections and properties that offer a promising toolbox for machine learning. In this talk, I will summarize some important recent notions of negative dependence and their implications for sampling algorithms. These results exploit connections to the geometry of polynomials, log-concavity, and submodular optimization. We will conclude with an example application: sampling minibatches for optimization.
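The abstract does not name a specific model, but the canonical example of a distribution with strong negative dependence is the determinantal point process (DPP), and DPP sampling is a standard way to draw diverse minibatches. The sketch below is a minimal illustration under that assumption: it implements the classic spectral (eigendecomposition-based) exact sampling algorithm for a DPP and uses it to pick a diverse subset of candidate points. The RBF similarity kernel and all variable names are illustrative choices, not taken from the talk.

```python
import numpy as np

def sample_dpp(L, rng):
    """Exact sample from a DPP with PSD kernel L (spectral algorithm)."""
    eigvals, eigvecs = np.linalg.eigh(L)
    # Phase 1: keep each eigenvector independently with prob lambda/(1+lambda).
    keep = rng.random(len(eigvals)) < eigvals / (1.0 + eigvals)
    V = eigvecs[:, keep]
    items = []
    # Phase 2: sample one item per remaining eigenvector.
    while V.shape[1] > 0:
        # P(item i) is proportional to the squared norm of row i of V.
        probs = np.sum(V**2, axis=1)
        probs /= probs.sum()
        i = rng.choice(len(probs), p=probs)
        items.append(i)
        # Condition on i: zero out row i by eliminating one column, then
        # re-orthonormalize the remaining columns.
        j = np.argmax(np.abs(V[i, :]))
        V = V - np.outer(V[:, j] / V[i, j], V[i, :])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(items)

# Toy use: draw a diverse "minibatch" from 50 random 2-D points.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
L = np.exp(-0.5 * sq_dists)   # RBF similarity kernel (PSD)
batch = sample_dpp(L, rng)
```

Because similar points have large kernel entries, the determinantal weighting suppresses their joint inclusion, so `batch` tends to spread across the space; this repulsion is exactly the negative dependence that the talk connects to the geometry of polynomials and log-concavity.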
