

Poster

Estimating the Error of Randomized Newton Methods: A Bootstrap Approach

Miles Lopes · Jessie X.T. Chen

Keywords: [ Probabilistic Inference - Approximate, Monte Carlo, and Spectral Methods ] [ Parallel and Distributed Learning ] [ Monte Carlo Methods ] [ Convex Optimization ]


Abstract:

Randomized Newton methods have recently become the focus of intense research activity in large-scale and distributed optimization. In general, these methods are based on a "computation-accuracy trade-off", which allows the user to gain scalability in exchange for error in the solution. However, the user does not know how much error is created by the randomized approximation, which can be detrimental in two ways: On one hand, the user may try to assess the unknown error with theoretical worst-case error bounds, but this approach is impractical when the bounds involve unknown constants, and it often leads to excessive computation. On the other hand, the user may select the "sketch size" and stopping criteria in a heuristic manner, but this can lead to unreliable results. Motivated by these difficulties, we show how bootstrapping can be used to directly estimate the unknown error, which prevents excessive computation and offers more confidence about the quality of a randomized solution. Furthermore, we show that the error estimation adds little computational cost to existing randomized Newton methods (e.g., Newton Sketch and GIANT), and it performs well empirically.
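
To make the general idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of bootstrap error estimation for a sketch-and-solve Newton step on a quadratic objective. It assumes a Gaussian row sketch and an exact linear solve of the sketched system; the function names (sketched_newton_step, bootstrap_error), sketch size, and number of bootstrap replicates are hypothetical choices for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def sketched_newton_step(A, g, m, rng):
    """Approximate step x solving (A^T A) x = -g via a sketched Hessian (S A)^T (S A)."""
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sketching matrix, m << n
    SA = S @ A                                     # sketched data, m x d
    H_hat = SA.T @ SA                              # randomized Hessian approximation
    return np.linalg.solve(H_hat, -g), SA

def bootstrap_error(SA, g, x_hat, B=30, rng=rng):
    """Estimate a high quantile of the step error by resampling rows of the sketch."""
    m, d = SA.shape
    errors = []
    for _ in range(B):
        idx = rng.integers(0, m, size=m)           # resample sketch rows with replacement
        SA_b = SA[idx]
        x_b = np.linalg.solve(SA_b.T @ SA_b, -g)   # recompute the step on the resample
        errors.append(np.linalg.norm(x_b - x_hat, ord=np.inf))
    return np.quantile(errors, 0.95)               # bootstrap estimate of the error level

# Toy quadratic problem: gradient g at the current iterate, Hessian A^T A.
n, d, m = 20000, 50, 500
A = rng.standard_normal((n, d))
g = rng.standard_normal(d)

x_hat, SA = sketched_newton_step(A, g, m, rng)
err_est = bootstrap_error(SA, g, x_hat)
x_exact = np.linalg.solve(A.T @ A, -g)
print(f"bootstrap error estimate: {err_est:.4f}")
print(f"actual error:             {np.linalg.norm(x_exact - x_hat, ord=np.inf):.4f}")

Note that the bootstrap replicates operate only on the m x d sketched matrix, not on the full n x d data, which is why the added cost is small relative to the randomized Newton step itself; in practice the estimate can be used to choose the sketch size or a stopping criterion.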
