We study the parameter tuning problem for penalized regression models. Finding the optimal choice of the regularization parameter is challenging in high-dimensional regimes where both the number of observations n and the number of parameters p are large. We propose two frameworks for obtaining a computationally efficient approximation, ALO, of the leave-one-out cross-validation (LOOCV) risk for nonsmooth losses and regularizers. The two frameworks are based on the primal and dual formulations of the penalized regression problem, respectively. We prove the equivalence of the two approaches under smoothness conditions, which enables us to justify the accuracy of both methods in such settings. We use our approaches to obtain risk estimates for several standard problems, including the generalized LASSO, nuclear norm regularization, and support vector machines, and we experimentally demonstrate their effectiveness in non-differentiable cases.
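The ALO formulas for nonsmooth losses and regularizers are derived in the paper itself; as a minimal sketch of the quantity being approximated, the Python snippet below computes the exact LOOCV risk for ridge regression, the smooth special case where the classical hat-matrix identity (y_i - yhat_i) / (1 - H_ii) yields leave-one-out residuals without refitting the model n times. The function name ridge_loo_risk and the synthetic data are illustrative assumptions, not from the paper.

import numpy as np

def ridge_loo_risk(X, y, lam):
    # Exact LOOCV squared-error risk for ridge regression.
    # Hat matrix H = X (X'X + lam * I)^{-1} X'; the classical identity
    # y_i - x_i' beta^{(-i)} = (y_i - yhat_i) / (1 - H_ii) avoids n refits.
    n, p = X.shape
    G = X.T @ X + lam * np.eye(p)
    H = X @ np.linalg.solve(G, X.T)
    loo_residuals = (y - H @ y) / (1.0 - np.diag(H))
    return np.mean(loo_residuals ** 2)

# Illustrative tuning loop on synthetic data: choose the lambda that
# minimizes the estimated out-of-sample risk.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.concatenate([np.ones(5), np.zeros(p - 5)])
y = X @ beta + 0.5 * rng.standard_normal(n)
lams = np.logspace(-2, 2, 20)
best_lam = min(lams, key=lambda lam: ridge_loo_risk(X, y, lam))

For nonsmooth problems such as the LASSO, this identity no longer applies directly; the paper's primal and dual ALO frameworks provide the analogous corrections there.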
Author Information
Shuaiwen Wang (Columbia University)
Wenda Zhou (Columbia University)
Haihao Lu (MIT)
Arian Maleki (Columbia University)
Vahab Mirrokni (Google Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions »
  Thu Jul 12th 01:10 -- 01:20 PM, Room K11
More from the Same Authors
- 2020 Poster: Robust Pricing in Dynamic Mechanism Design »
  Yuan Deng · Sébastien Lahaie · Vahab Mirrokni
- 2020 Poster: Dual Mirror Descent for Online Allocation Problems »
  Santiago Balseiro · Haihao Lu · Vahab Mirrokni
- 2020 Poster: Bandits with Adversarial Scaling »
  Thodoris Lykouris · Vahab Mirrokni · Renato Leme
- 2019 Poster: Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity »
  Matthew Fahrbach · Vahab Mirrokni · Morteza Zadimoghaddam
- 2019 Poster: Categorical Feature Compression via Submodular Optimization »
  Mohammad Hossein Bateni · Lin Chen · Hossein Esfandiari · Thomas Fu · Vahab Mirrokni · Afshin Rostamizadeh
- 2019 Oral: Categorical Feature Compression via Submodular Optimization »
  Mohammad Hossein Bateni · Lin Chen · Hossein Esfandiari · Thomas Fu · Vahab Mirrokni · Afshin Rostamizadeh
- 2019 Oral: Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity »
  Matthew Fahrbach · Vahab Mirrokni · Morteza Zadimoghaddam
- 2019 Poster: Distributed Weighted Matching via Randomized Composable Coresets »
  Sepehr Assadi · Mohammad Hossein Bateni · Vahab Mirrokni
- 2019 Oral: Distributed Weighted Matching via Randomized Composable Coresets »
  Sepehr Assadi · Mohammad Hossein Bateni · Vahab Mirrokni
- 2018 Poster: Parallel and Streaming Algorithms for K-Core Decomposition »
  Hossein Esfandiari · Silvio Lattanzi · Vahab Mirrokni
- 2018 Poster: Accelerating Greedy Coordinate Descent Methods »
  Haihao Lu · Robert Freund · Vahab Mirrokni
- 2018 Poster: Approximate message passing for amplitude based optimization »
  Junjie Ma · Ji Xu · Arian Maleki
- 2018 Oral: Approximate message passing for amplitude based optimization »
  Junjie Ma · Ji Xu · Arian Maleki
- 2018 Oral: Accelerating Greedy Coordinate Descent Methods »
  Haihao Lu · Robert Freund · Vahab Mirrokni
- 2018 Oral: Parallel and Streaming Algorithms for K-Core Decomposition »
  Hossein Esfandiari · Silvio Lattanzi · Vahab Mirrokni
- 2018 Poster: Proportional Allocation: Simple, Distributed, and Diverse Matching with High Entropy »
  Shipra Agrawal · Morteza Zadimoghaddam · Vahab Mirrokni
- 2018 Oral: Proportional Allocation: Simple, Distributed, and Diverse Matching with High Entropy »
  Shipra Agrawal · Morteza Zadimoghaddam · Vahab Mirrokni
- 2017 Poster: Tight Bounds for Approximate Carathéodory and Beyond »
  Vahab Mirrokni · Renato Leme · Adrian Vladu · Sam Wong
- 2017 Talk: Tight Bounds for Approximate Carathéodory and Beyond »
  Vahab Mirrokni · Renato Leme · Adrian Vladu · Sam Wong