The sparse support vector machine (SVM) is a popular classification technique that can simultaneously learn a small set of the most interpretable features and identify the support vectors. It has achieved great success in many real-world applications. However, for large-scale problems involving huge numbers of samples and extremely high-dimensional features, solving sparse SVMs remains challenging. Noting that sparse SVMs induce sparsity in both the feature and sample spaces, we propose a novel approach, based on accurate estimates of the primal and dual optima of sparse SVMs, that simultaneously identifies the features and samples guaranteed to be irrelevant to the outputs. We can thus remove the identified inactive samples and features from the training phase, leading to substantial savings in both memory usage and computational cost without sacrificing accuracy. To the best of our knowledge, the proposed method is the first static feature and sample reduction method for sparse SVMs. Experiments on both synthetic and real datasets (e.g., the kddb dataset with about 20 million samples and 30 million features) demonstrate that our approach significantly outperforms state-of-the-art methods and that the speedup it gains can be orders of magnitude.
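To make the two sparsities concrete, the following minimal sketch (not the paper's screening rules; the solver, data, and all names are invented for illustration) trains an L1-regularized hinge-loss SVM on synthetic data and then reads off the inactive features (zero weights) and inactive samples (points strictly outside the margin, i.e., non-support-vectors) that a reduction method could discard:

```python
import numpy as np

# Illustrative only: a toy sparse SVM trained by proximal subgradient descent
# on mean hinge loss + alpha * ||w||_1. The paper's contribution is detecting
# the inactive features/samples *before* training; here we simply train and
# then inspect the sparsity patterns the method exploits.

def soft_threshold(w, t):
    """Proximal operator of the L1 norm: shrink toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def sparse_svm(X, y, alpha=0.1, lr=0.01, iters=2000):
    """Minimize (1/n) * sum_i max(0, 1 - y_i * w.x_i) + alpha * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        violators = margins < 1.0                      # samples inside the margin
        grad = -(X[violators].T @ y[violators]) / n    # hinge-loss subgradient
        w = soft_threshold(w - lr * grad, lr * alpha)  # proximal L1 step
    return w

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + X[:, 1])        # labels depend only on features 0 and 1

w = sparse_svm(X, y)
margins = y * (X @ w)
inactive_features = np.flatnonzero(w == 0)      # zero-weight features
inactive_samples = np.flatnonzero(margins > 1)  # strictly outside the margin
```

Features 2–9 are pure noise, so the L1 penalty drives their weights to (near) zero, and many easy samples end up strictly outside the margin; removing both would shrink the training problem in exactly the two directions the abstract describes.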
Author Information
Weizhong Zhang (Zhejiang University & Tencent AI Lab)
Bin Hong (Zhejiang University)
Wei Liu (Tencent AI Lab)
Jieping Ye (University of Michigan)
Deng Cai (Zhejiang University)
Xiaofei He (Zhejiang University)
Jie Wang (University of Michigan)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Scaling Up Sparse Support Vector Machines by Simultaneous Feature and Sample Reduction
  Tue. Aug 8th 05:30 -- 05:48 AM, Room C4.4
More from the Same Authors
- 2022 Poster: Sparse Invariant Risk Minimization
  Xiao Zhou · Yong LIN · Weizhong Zhang · Tong Zhang
- 2022 Poster: Model Agnostic Sample Reweighting for Out-of-Distribution Learning
  Xiao Zhou · Yong LIN · Renjie Pi · Weizhong Zhang · Renzhe Xu · Peng Cui · Tong Zhang
- 2022 Poster: Probabilistic Bilevel Coreset Selection
  Xiao Zhou · Renjie Pi · Weizhong Zhang · Yong LIN · Zonghao Chen · Tong Zhang
- 2022 Spotlight: Probabilistic Bilevel Coreset Selection
  Xiao Zhou · Renjie Pi · Weizhong Zhang · Yong LIN · Zonghao Chen · Tong Zhang
- 2022 Spotlight: Model Agnostic Sample Reweighting for Out-of-Distribution Learning
  Xiao Zhou · Yong LIN · Renjie Pi · Weizhong Zhang · Renzhe Xu · Peng Cui · Tong Zhang
- 2022 Spotlight: Sparse Invariant Risk Minimization
  Xiao Zhou · Yong LIN · Weizhong Zhang · Tong Zhang
- 2021 Poster: Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions
  Shuang Qiu · Xiaohan Wei · Jieping Ye · Zhaoran Wang · Zhuoran Yang
- 2021 Poster: On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game
  Shuang Qiu · Jieping Ye · Zhaoran Wang · Zhuoran Yang
- 2021 Oral: On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game
  Shuang Qiu · Jieping Ye · Zhaoran Wang · Zhuoran Yang
- 2021 Oral: Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions
  Shuang Qiu · Xiaohan Wei · Jieping Ye · Zhaoran Wang · Zhuoran Yang
- 2018 Poster: Safe Element Screening for Submodular Function Minimization
  Weizhong Zhang · Bin Hong · Lin Ma · Wei Liu · Tong Zhang
- 2018 Oral: Safe Element Screening for Submodular Function Minimization
  Weizhong Zhang · Bin Hong · Lin Ma · Wei Liu · Tong Zhang
- 2017 Poster: GSOS: Gauss-Seidel Operator Splitting Algorithm for Multi-Term Nonsmooth Convex Composite Optimization
  Li Shen · Wei Liu · Ganzhao Yuan · Shiqian Ma
- 2017 Talk: GSOS: Gauss-Seidel Operator Splitting Algorithm for Multi-Term Nonsmooth Convex Composite Optimization
  Li Shen · Wei Liu · Ganzhao Yuan · Shiqian Ma