

Poster

Achieving Margin Maximization Exponentially Fast via Progressive Norm Rescaling

Mingze Wang · Zeping Min · Lei Wu

Hall C 4-9 #2714
[ Paper PDF ]
Thu 25 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

In this work, we investigate the margin-maximization bias exhibited by gradient-based algorithms when classifying linearly separable data. We present an in-depth analysis of the properties of the velocity field associated with (normalized) gradients, focusing on their role in margin maximization. Inspired by this analysis, we propose a novel algorithm called Progressive Rescaling Gradient Descent (PRGD) and show that PRGD can maximize the margin at an exponential rate. This stands in stark contrast to all existing algorithms, which maximize the margin at a slow polynomial rate. Specifically, we identify mild conditions on the data distribution under which existing algorithms such as gradient descent (GD) and normalized gradient descent (NGD) provably fail to maximize the margin efficiently. To validate our theoretical findings, we present both synthetic and real-world experiments. Notably, PRGD also shows promise in enhancing generalization performance when applied to linearly non-separable datasets and deep neural networks.
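The abstract only sketches the idea of progressive norm rescaling, so the following is a minimal illustrative sketch, not the authors' exact PRGD: it assumes normalized gradient descent (NGD) on the exponential loss over linearly separable data, interleaved with a periodic rescaling of the parameter norm by a growing factor, while tracking the normalized margin being maximized. The toy data, loss, step size, and rescaling schedule (`eta`, `rescale_every`, `rescale_factor`) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: labels y in {-1, +1}, features x in R^2,
# shifted along a reference direction to enforce a strictly positive margin.
n, d = 200, 2
X = rng.normal(size=(n, d))
w_star = np.array([2.0, -1.0])
y = np.sign(X @ w_star)
X += 0.5 * y[:, None] * w_star / np.linalg.norm(w_star)


def ngd_direction(w):
    """Unit-norm direction of the exponential-loss gradient (1/n) * sum exp(-y_i <w, x_i>).

    Computed via softmax weights over examples for numerical stability; since the
    step is normalized anyway, only the gradient direction matters.
    """
    margins = y * (X @ w)
    z = -margins - np.max(-margins)            # shift before exponentiating
    p = np.exp(z)
    p /= p.sum()                               # per-example softmax weights
    g = -(p[:, None] * y[:, None] * X).sum(axis=0)
    return g / (np.linalg.norm(g) + 1e-12)


def normalized_margin(w):
    """min_i y_i <w, x_i> / ||w||, the quantity whose maximization is studied."""
    return np.min(y * (X @ w)) / np.linalg.norm(w)


w = rng.normal(size=d)
eta = 0.1             # step size for the normalized gradient step (assumed)
rescale_every = 50    # period between norm rescalings (assumed)
rescale_factor = 2.0  # geometric growth of ||w|| at each rescaling (assumed)

for t in range(1, 501):
    w = w - eta * ngd_direction(w)            # normalized gradient step
    if t % rescale_every == 0:
        w = rescale_factor * w                # progressive norm rescaling
        print(f"step {t:4d}  normalized margin = {normalized_margin(w):.4f}")
```

Under these assumptions, the normalized margin is invariant to the rescaling itself; the point of growing ||w|| is to change the velocity field seen by subsequent normalized gradient steps, which is the mechanism the abstract attributes to PRGD's exponential rate.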
