Poster

Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins

Spencer Frei · Yuan Cao · Quanquan Gu

Virtual

Keywords: [ Statistical Learning Theory ]

Poster session: Spot C5 in Virtual World
Wed 21 Jul 9 a.m. PDT — 11 a.m. PDT
 
Oral presentation: Learning Theory 4
Wed 21 Jul 6 a.m. PDT — 7 a.m. PDT

Abstract:

We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of halfspaces. We show that when a quantity we refer to as the "soft margin" is well-behaved (a condition satisfied by log-concave isotropic distributions, among others), minimizers of convex surrogates for the zero-one loss are approximate minimizers for the zero-one loss itself. Since standard convex optimization arguments yield efficient guarantees for minimizing convex surrogates of the zero-one loss, our methods give the first positive guarantees on the classification error of halfspaces learned by gradient descent with the binary cross-entropy or hinge loss in the presence of agnostic label noise.
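
As a rough guide to the key quantity (the notation below is an illustrative reconstruction, not quoted verbatim from the paper): for a unit vector $\bar v$, the soft margin can be read as the probability mass that the feature distribution $\mathcal{D}_x$ places near the decision boundary of the corresponding halfspace,

\[
\phi_{\bar v}(\gamma) \;=\; \mathbb{P}_{x \sim \mathcal{D}_x}\bigl( |\langle \bar v, x \rangle| \le \gamma \bigr), \qquad \lVert \bar v \rVert_2 = 1,
\]

with "well-behaved" meaning roughly that this mass shrinks linearly as $\gamma \to 0$ (e.g. $\phi_{\bar v}(\gamma) \le C\gamma$ for some constant $C$), which holds, for instance, when $\mathcal{D}_x$ is log-concave isotropic.

To make the algorithmic setting concrete, the following is a minimal Python sketch of the procedure the abstract analyzes: plain gradient descent on the binary cross-entropy surrogate for a halfspace, evaluated by zero-one error under label noise. The data distribution, step size, iteration count, and helper names (zero_one_loss, logistic_loss_grad) are illustrative assumptions, not the paper's code.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative agnostic setup: isotropic Gaussian features (a log-concave
# isotropic distribution), labels from a ground-truth halfspace with a
# fraction of labels flipped at random.
d, n, noise_rate = 20, 5000, 0.1
v_star = rng.standard_normal(d)
v_star /= np.linalg.norm(v_star)
X = rng.standard_normal((n, d))
y = np.sign(X @ v_star)
flip = rng.random(n) < noise_rate
y[flip] *= -1  # agnostic label noise

def zero_one_loss(w, X, y):
    # Fraction of points misclassified by the halfspace sign(<w, x>).
    return float(np.mean(np.sign(X @ w) != y))

def logistic_loss_grad(w, X, y):
    # Gradient of the binary cross-entropy (logistic) surrogate
    # L(w) = mean_i log(1 + exp(-y_i <w, x_i>)).
    margins = np.clip(y * (X @ w), -30.0, 30.0)  # clip for numerical stability
    coeffs = -y / (1.0 + np.exp(margins))
    return (X * coeffs[:, None]).mean(axis=0)

# Plain gradient descent on the convex surrogate.
w = np.zeros(d)
step_size = 0.5
for _ in range(500):
    w -= step_size * logistic_loss_grad(w, X, y)

print(f"zero-one error of GD iterate:   {zero_one_loss(w, X, y):.3f}")
print(f"noise rate (best-in-class err): {noise_rate:.3f}")

With features drawn from an isotropic Gaussian, the zero-one error of the gradient descent iterate should land near the noise rate, i.e., near the error of the best halfspace, illustrating the abstract's claim that minimizing the convex surrogate approximately minimizes the zero-one loss itself.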
