

Discrete-Continuous Optimization Framework for Simultaneous Clustering and Training in Mixture Models

Parth Sangani · Arjun Kashettiwar · Pritish Chakraborty · Bhuvan Gangula · Durga Sivasubramanian · Ganesh Ramakrishnan · Rishabh Iyer · Abir De

Exhibit Hall 1 #532


We study PRESTO, a new framework for learning mixture models via automatic clustering, wherein we optimize a joint objective over the model parameters and the data partition, with each model tailored to perform well on its specific cluster. In contrast to prior work, we do not assume any generative model for the data. We cast our training problem as joint parameter estimation cum subset selection, subject to a matroid span constraint. This allows us to reduce our problem to a constrained set-function minimization problem, where the underlying objective is monotone and approximately submodular. We then propose a new joint discrete-continuous optimization algorithm that achieves a bounded approximation guarantee for our problem. We show that PRESTO outperforms several alternative methods. Finally, we study PRESTO in the context of resource-efficient deep learning, where we train smaller resource-constrained models on each partition, and show that it outperforms existing data-partitioning and model pruning/knowledge distillation approaches, which, in contrast to PRESTO, require large initial (teacher) models.
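The joint "clustering plus per-cluster training" idea can be illustrated with a minimal sketch. This is not PRESTO's algorithm (the paper's discrete-continuous method with a matroid span constraint and submodular guarantees is more involved); it only shows the underlying alternating structure that the abstract describes: a discrete step that reassigns each point to the model that fits it best, and a continuous step that refits each model on its current cluster. The function `fit_mixture` and the linear least-squares models are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fit_mixture(X, y, k=2, iters=20, seed=0):
    """Toy alternating scheme: jointly learn a partition of (X, y)
    and one linear model per cluster. Illustrative only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    assign = rng.integers(0, k, size=n)       # random initial partition
    W = np.zeros((k, d))                      # one weight vector per cluster
    for _ in range(iters):
        # Continuous step: least-squares fit of each cluster's model.
        for j in range(k):
            idx = assign == j
            if idx.sum() >= d:                # skip degenerate clusters
                W[j], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        # Discrete step: reassign each point to its best-fitting model.
        losses = (X @ W.T - y[:, None]) ** 2  # (n, k) per-model losses
        assign = losses.argmin(axis=1)
    return W, assign
```

Both steps can only decrease the joint objective (sum of each point's loss under its assigned model), so the scheme converges to a local optimum; a mixture of specialized small models can then fit heterogeneous data better than one global model of the same size.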
