

Poster

Bregman Power k-Means for Clustering Exponential Family Data

Adithya D Vellal · Saptarshi Chakraborty · Jason Xu

Hall E #625

Keywords: [ T: Probabilistic Methods ] [ OPT: Convex ] [ T: Miscellaneous Aspects of Machine Learning ] [ PM: Everything Else ] [ T: Optimization ] [ MISC: Unsupervised and Semi-supervised Learning ]


Abstract:

Recent progress in center-based clustering algorithms combats poor local minima by implicit annealing through a family of generalized means. These methods are variations of Lloyd's celebrated k-means algorithm, and are most appropriate for spherical clusters such as those arising from Gaussian data. In this paper, we bridge these new algorithmic advances to classical work on hard clustering under Bregman divergences, which enjoy a bijection to exponential family distributions and are thus well-suited for clustering objects arising from a breadth of data-generating mechanisms. The elegant properties of Bregman divergences allow us to maintain closed-form updates in a simple and transparent algorithm, and moreover lead to new theoretical arguments for establishing finite sample bounds that relax the bounded support assumption made in the existing state of the art. Additionally, we present thorough empirical analyses on simulated experiments and a case study on rainfall data, finding that the proposed method outperforms existing peer methods in a variety of non-Gaussian data settings.
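The closed-form updates mentioned in the abstract rest on a well-known property of Bregman divergences: the divergence-minimizing centroid of a cluster is simply its arithmetic mean. The sketch below illustrates this with plain Lloyd-style Bregman hard clustering under the generalized KL divergence (the divergence matched to Poisson data). It is an illustrative assumption based only on that classical setup, not the paper's power-mean annealed algorithm, whose details are not given in the abstract; the divergence choice and helper names are hypothetical.

```python
import numpy as np

def kl_divergence(x, mu, eps=1e-12):
    """Generalized KL divergence, the Bregman divergence matched to Poisson data."""
    x = np.clip(x, eps, None)
    mu = np.clip(mu, eps, None)
    return np.sum(x * np.log(x / mu) - x + mu, axis=-1)

def bregman_kmeans(X, k, n_iter=100, seed=0):
    """Lloyd-style hard clustering under a Bregman divergence.

    Key property of Bregman divergences: the divergence-minimizing centroid
    of a cluster is its arithmetic mean, so the update step is identical to
    ordinary k-means even though the assignment step uses the divergence.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest center under the Bregman divergence.
        dists = np.stack([kl_divergence(X, c) for c in centers], axis=1)
        labels = dists.argmin(axis=1)
        # Update step: arithmetic mean of each non-empty cluster.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Example: cluster Poisson count data with three well-separated rates.
rng = np.random.default_rng(1)
X = np.vstack([rng.poisson(lam, size=(100, 2)) for lam in (3.0, 15.0, 40.0)]).astype(float)
centers, labels = bregman_kmeans(X, k=3)
print(centers)
```

The proposed method in the paper additionally anneals through a family of generalized (power) means to combat poor local minima; that annealing schedule is not reflected in this sketch.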
