Spotlight Poster
On Learning Parallel Pancakes with Mostly Uniform Weights
Ilias Diakonikolas · Daniel Kane · Sushrut Karmalkar · Jasper Lee · Thanasis Pittas
West Exhibition Hall B2-B3 #W-904
Gaussian mixtures are often used to describe data that comes from several overlapping groups. These models are widely used in fields like image processing and genetics to make sense of complex data. However, learning the underlying structure, especially as the number of groups grows, quickly becomes very hard, often requiring an impractical amount of computation.

To make progress, past works have studied simpler versions of the problem. For example, they might assume that all groups are shaped similarly and that no group is too small. Recent work showed that, under these assumptions, the problem can be solved faster, though still not efficiently enough for very large datasets. A natural question is whether this can be improved further.

Our research shows that even when all the groups are equally sized, the problem remains hard, and the above complexity is unavoidable. We then investigate what happens when most of the groups are equally sized but a small number are allowed to be much smaller. In this case, we show that there is an algorithm whose complexity lies between the two extremes: the case where all groups are equally sized and the case where group sizes are completely arbitrary.
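As a minimal illustration of the setting (not the paper's algorithm), the sketch below generates data from a mixture of similarly shaped Gaussian groups whose sizes are mostly uniform, with a small number of much smaller groups. The number of components, dimension, small-weight fraction, and means are hypothetical choices made only for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, chosen for illustration only.
k, d, n = 10, 5, 10_000   # number of groups, dimension, sample size
eps = 0.05                # total mass allowed on the "small" groups

# Mostly uniform weights: k-2 equally sized groups plus 2 much smaller ones.
weights = np.full(k, (1.0 - eps) / (k - 2))
weights[-2:] = eps / 2
assert np.isclose(weights.sum(), 1.0)

# Shared identity covariance models the "similarly shaped" assumption;
# random means stand in for the unknown group centers.
means = rng.normal(size=(k, d))

# Draw a group label for each sample, then a Gaussian point around its mean.
labels = rng.choice(k, size=n, p=weights)
samples = means[labels] + rng.normal(size=(n, d))

print(samples.shape)  # (10000, 5)

Setting eps = 0 recovers the fully uniform-weight case studied in prior work; letting the weights be arbitrary recovers the general, harder case between which the paper's algorithm interpolates.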