DDSVM: A Differentiable Framework for Deep Support Vector Machines with Iterative Geometry-Aware Optimization
Abstract
Recent studies have demonstrated the effectiveness of modularly integrating traditional machine learning methods, such as Support Vector Machines (SVMs), into neural networks for end-to-end optimization. However, current approaches mostly rely on static embeddings and fail to leverage the SVM's geometric properties for dynamic iterative optimization, thereby limiting their generalization potential. To address this, we propose a Differentiable Deep Support Vector Machine (DDSVM) framework that alternates over three modules: representation learning, boundary optimization, and geometry-aware feature refinement. This is achieved through an iterative pipeline of boundary construction, feature pushing, loss backpropagation, and representation update. After constructing the SVM hyperplane, our method actively pushes feature points along the normal vector to maximize the geometric margin and backpropagates the separation loss into the network. Theoretically, we analyze the underlying optimization dynamics, elucidating the mechanism by which the proposed architecture achieves its performance gains. We show how the iterative synergy between geometric refinement and representation learning enhances generalization, providing formal insights into its effectiveness. Experiments show significant improvements over previous baselines.
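To make the pipeline concrete, the following is a minimal NumPy sketch of one possible instantiation of the four-stage loop described in the abstract (boundary construction via a hinge-loss update, feature pushing along the hyperplane normal, backpropagation of the separation loss, and representation update). The toy data, the linear representation `W`, the step sizes, and the specific push rule are all illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data in a 5-d input space (assumption: any
# linearly separable dataset works for this illustration).
X = rng.normal(size=(40, 5))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

W = rng.normal(scale=0.1, size=(5, 3))   # "network": a linear representation
w = rng.normal(scale=0.1, size=3)        # SVM hyperplane normal
b = 0.0
lr, push = 0.05, 0.1                     # hypothetical step sizes

def hinge_loss(Z, y, w, b):
    """Separation (soft-margin hinge) loss of the SVM boundary."""
    return np.maximum(0.0, 1.0 - y * (Z @ w + b)).mean()

init_loss = hinge_loss(X @ W, y, w, b)

for step in range(200):
    Z = X @ W                                     # representation learning
    viol = y * (Z @ w + b) < 1.0                  # margin violators

    # Boundary construction: hinge subgradient on (w, b) with L2 regularizer.
    w -= lr * (w - (y[viol, None] * Z[viol]).sum(axis=0) / len(X))
    b -= lr * (-y[viol].sum() / len(X))

    # Feature pushing: move each point along the unit normal, away from
    # the hyperplane on its own side, to widen the geometric margin.
    n = w / (np.linalg.norm(w) + 1e-12)
    Z_target = Z + push * y[:, None] * n

    # Loss backpropagation + representation update: hinge gradient w.r.t. Z
    # plus a pull toward the pushed targets, chained through Z = X @ W.
    grad_Z = -(y[:, None] * w) * viol[:, None] / len(X)
    grad_Z += (Z - Z_target) / len(X)
    W -= lr * X.T @ grad_Z

final_loss = hinge_loss(X @ W, y, w, b)
acc = ((X @ W @ w + b) * y > 0).mean()
```

The push term acts only on the feature space, so the boundary and the representation are refined in alternation rather than jointly, matching the modular structure the abstract describes.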