Oral · Thu Jul 12 06:10 AM -- 06:20 AM (PDT) @ K11
Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions
Shuaiwen Wang · Wenda Zhou · Haihao Lu · Arian Maleki · Vahab Mirrokni

We study the parameter tuning problem for penalized regression models. Finding the optimal value of the regularization parameter is challenging in high-dimensional regimes where both the number of observations n and the number of parameters p are large. We propose two frameworks for obtaining a computationally efficient approximation, called approximate leave-one-out (ALO), of the leave-one-out cross-validation (LOOCV) risk for nonsmooth losses and regularizers. The two frameworks are based on the primal and dual formulations of the penalized regression problem, respectively. We prove the equivalence of the two approaches under smoothness conditions, and this equivalence lets us establish the accuracy of both methods under such conditions. We use our approaches to obtain risk estimates for several standard problems, including the generalized LASSO, nuclear norm regularization, and support vector machines, and we experimentally demonstrate the effectiveness of our results in non-differentiable cases.
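The computational idea behind ALO is easiest to see in the smooth case. For ridge regression the leave-one-out prediction can be recovered from a single full-data fit: each residual is inflated by 1/(1 - H_ii), where H is the hat matrix of the penalized fit, so LOOCV costs one fit per regularization value instead of n. The sketch below illustrates only this classical ridge shortcut as an entry point to the general idea; the function name alo_ridge_risk and the grid search are illustrative assumptions, not the authors' implementation, and the paper's contribution is extending this kind of approximation to nonsmooth losses and regularizers.

```python
# A minimal sketch of the leave-one-out shortcut for ridge regression,
# where the ALO-style approximation is exact. Names are illustrative.
import numpy as np

def alo_ridge_risk(X, y, lam):
    """Estimate the LOOCV squared-error risk of ridge regression
    from a single full-data fit, using leverage scores."""
    n, p = X.shape
    # Penalized normal equations: beta = (X^T X + lam I)^{-1} X^T y
    G = X.T @ X + lam * np.eye(p)
    beta = np.linalg.solve(G, X.T @ y)
    # Leverage scores H_ii = x_i^T (X^T X + lam I)^{-1} x_i
    H_diag = np.einsum("ij,ij->i", X @ np.linalg.inv(G), X)
    # Leave-one-out residuals: full-data residuals inflated by 1/(1 - H_ii)
    loo_residuals = (y - X @ beta) / (1.0 - H_diag)
    return np.mean(loo_residuals ** 2)

# Tune the regularization parameter by minimizing the estimated risk
# over a grid: one fit per candidate, versus n refits for exact LOOCV.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X @ rng.standard_normal(50) + rng.standard_normal(200)
lams = np.logspace(-2, 2, 20)
best = min(lams, key=lambda lam: alo_ridge_risk(X, y, lam))
print("selected lambda:", best)
```

For nonsmooth problems such as the generalized LASSO or SVM, the hat-matrix formula above does not directly apply; the paper's primal and dual frameworks supply the analogous correction terms in those settings.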