

Contributed talk in Workshop: Negative Dependence and Submodularity: Theory and Applications in Machine Learning

On the Relationship Between Probabilistic Circuits and Determinantal Point Processes

Honghua Zhang · Steven Holtzen · Guy Van den Broeck


Abstract:

Scaling probabilistic models to large realistic problems and datasets is a key challenge in machine learning. Central to this effort is the development of tractable probabilistic models (TPMs): models whose structure guarantees efficient probabilistic inference algorithms. The current landscape of TPMs is fragmented: there exist various kinds of TPMs with different strengths and weaknesses. Two of the most prominent classes of TPMs are determinantal point processes (DPPs) and probabilistic circuits (PCs). This paper provides the first systematic study of their relationship. We propose a unified analysis and shared language for discussing DPPs and PCs. Then we establish theoretical barriers for the unification of these two families, and prove that there are cases where DPPs have no compact representation as a class of PCs. We close with a perspective on the central problem of unifying these models.
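To make the DPP side of the comparison concrete, here is a minimal sketch of an L-ensemble DPP, where the probability of a subset $S$ of the ground set is $P(S) = \det(L_S)/\det(L+I)$ for a positive semidefinite kernel $L$. This is a standard illustration of the model class, not the paper's construction; the kernel values are made up for the example.

```python
import numpy as np

def dpp_prob(L, S):
    """Probability of subset S under the L-ensemble DPP with kernel L:
    P(S) = det(L_S) / det(L + I)."""
    n = L.shape[0]
    if S:
        num = np.linalg.det(L[np.ix_(S, S)])
    else:
        num = 1.0  # determinant of the empty submatrix is 1
    return num / np.linalg.det(L + np.eye(n))

# Example: a 2-item ground set; the large off-diagonal entry induces
# negative dependence (similar items repel each other).
L = np.array([[1.0, 0.9],
              [0.9, 1.0]])
probs = {tuple(S): dpp_prob(L, S) for S in ([], [0], [1], [0, 1])}

# The four subset probabilities sum to 1.
assert abs(sum(probs.values()) - 1.0) < 1e-12
# Negative dependence: items 0 and 1 co-occur less often than independently.
assert probs[(0, 1)] * probs[()] < probs[(0,)] * probs[(1,)]
```

Computing any such subset probability takes one determinant, which is the tractability that makes DPPs a TPM; the paper's question is whether this family of distributions can always be captured compactly by probabilistic circuits.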
