Poster
Efficient displacement convex optimization with particle gradient descent
Hadi Daneshmand · Jason Lee · Chi Jin

Thu Jul 27 04:30 PM -- 06:00 PM (PDT) @ Exhibit Hall 1 #624
Particle gradient descent, which uses particles to represent a probability measure and performs gradient descent on the particles in parallel, is widely used to optimize functions of probability measures. This paper considers particle gradient descent with a finite number of particles and establishes theoretical guarantees for optimizing functions that are *displacement convex* in measures. Concretely, for Lipschitz displacement convex functions defined on probability measures over $\mathbb{R}^d$, we prove that $O(1/\epsilon^2)$ particles and $O(d/\epsilon^4)$ iterations are sufficient to find an $\epsilon$-optimal solution. We further provide improved complexity bounds for optimizing smooth displacement convex functions. As an application, our results prove the conjecture of *no optimization barrier up to permutation invariance*, proposed by Entezari et al. (2022), for specific two-layer neural networks with two-dimensional inputs drawn uniformly from the unit circle.
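
The sketch below illustrates the general idea of particle gradient descent described in the abstract: a probability measure is represented by a finite set of particles, and every particle takes a gradient step in parallel. It is a minimal toy example, assuming a simple potential-energy objective $F(\mu) = E_{x \sim \mu}[V(x)]$ with convex $V$ (such functionals are displacement convex); the function names, step size, and particle counts are illustrative choices, not the paper's setting or its stated complexity bounds.

```python
import numpy as np

def particle_gradient_descent(grad_V, n_particles=100, d=2, lr=0.1, n_iters=500, seed=0):
    """Approximately minimize F(mu) = E_{x~mu}[V(x)] by representing mu with
    n_particles particles and applying gradient descent to all of them in parallel."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_particles, d))   # particles encoding the empirical measure
    for _ in range(n_iters):
        X -= lr * grad_V(X)                  # parallel gradient step on every particle
    return X                                 # empirical measure (1/n) * sum_i delta_{X_i}

# Example: V(x) = 0.5 * ||x - b||^2, whose minimizing measure is the point mass at b.
b = np.array([1.0, -2.0])
particles = particle_gradient_descent(lambda X: X - b)
print(particles.mean(axis=0))                # should be close to b
```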

Author Information

Hadi Daneshmand (MIT)
Jason Lee (Princeton University)
Chi Jin (Princeton University)
