Spotlight
Fast margin maximization via dual acceleration
Ziwei Ji · Nati Srebro · Matus Telgarsky

Thu Jul 22 08:35 PM -- 08:40 PM (PDT) @ None

We present and analyze a momentum-based gradient method for training linear classifiers with an exponentially-tailed loss (e.g., the exponential or logistic loss), which maximizes the classification margin on separable data at a rate of O(1/t^2). This contrasts with a rate of O(1/log(t)) for standard gradient descent and O(1/t) for normalized gradient descent. The momentum-based method is derived via the convex dual of the maximum-margin problem, specifically by applying Nesterov acceleration to this dual, which yields a simple and intuitive method in the primal. This dual view can also be used to derive a stochastic variant, which performs adaptive non-uniform sampling via the dual variables.
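To make the setting concrete, the sketch below runs plain Nesterov-accelerated gradient descent on the exponential loss for a tiny separable dataset and tracks the normalized margin. This is a generic accelerated primal method for illustration only, with hypothetical data and step size; the paper's algorithm instead applies Nesterov acceleration to the convex dual of the maximum-margin problem.

```python
import numpy as np

# Toy linearly separable data (illustrative only; not from the paper).
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def exp_loss_grad(w):
    """Gradient of the exponential loss L(w) = sum_i exp(-y_i <w, x_i>)."""
    losses = np.exp(-y * (X @ w))
    return -(losses * y) @ X

def margin(w):
    """Normalized classification margin min_i y_i <w, x_i> / ||w||."""
    return np.min(y * (X @ w)) / np.linalg.norm(w)

# Generic Nesterov-accelerated gradient descent on the exponential loss
# (NOT the paper's dual-accelerated method; shown only to fix the setting).
w = np.zeros(2)
w_prev = np.zeros(2)
lr = 0.1  # illustrative step size
for t in range(1, 301):
    beta = (t - 1) / (t + 2)     # standard Nesterov momentum weight
    u = w + beta * (w - w_prev)  # look-ahead point
    w_prev = w
    w = u - lr * exp_loss_grad(u)

# For this symmetric toy dataset the maximum margin is 3/sqrt(2).
print(round(margin(w), 4))
```

On separable data the loss has no minimizer, so ||w|| grows without bound while the direction w/||w|| converges toward the maximum-margin separator; the rates quoted in the abstract measure how fast the normalized margin approaches its optimum.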

Author Information

Ziwei Ji (University of Illinois at Urbana-Champaign)
Nati Srebro (Toyota Technological Institute at Chicago)
Matus Telgarsky (UIUC)
