More Efficiency in Multiple Kernel Learning
Alain Rakotomamonjy - Lab ITIS EA4051, Université de Rouen, France
Francis R. Bach - CMM, Ecole des Mines de Paris, France
Stephane Canu - Lab ITIS EA4051, INSA de Rouen, France
Yves Grandvalet - IDIAP, Switzerland
An efficient and general multiple kernel learning (MKL) algorithm has recently been proposed by Sonnenburg et al. (2006). This approach has opened new perspectives, since it makes MKL tractable for large-scale problems by iteratively reusing existing support vector machine code. However, this iterative algorithm may need many iterations before converging to a reasonable solution. In this paper, we address the MKL problem through an adaptive 2-norm regularization formulation. Weights on each kernel matrix are included in the standard SVM empirical risk minimization problem, with an ℓ1 constraint to encourage sparsity. We propose an algorithm for solving this problem and provide a new insight into MKL algorithms based on block 1-norm regularization by showing that the two approaches are equivalent. Experimental results show that the resulting algorithm converges rapidly and that its efficiency compares favorably to that of other MKL algorithms.
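To make the formulation concrete, the following is a minimal sketch of the alternating scheme this kind of adaptive-weight MKL suggests: fix the kernel weights, solve a standard SVM on the weighted kernel sum, then take a projected gradient step on the weights over the ℓ1 simplex. This is an illustration under stated assumptions, not the paper's algorithm: scikit-learn's SVC with a precomputed kernel stands in for the SVM solver, and the step size and iteration count are placeholder values.

```python
import numpy as np
from sklearn.svm import SVC

def combined_kernel(kernels, d):
    """Weighted sum of precomputed Gram matrices, sum_m d_m * K_m."""
    return sum(w * K for w, K in zip(d, kernels))

def project_simplex(v):
    """Euclidean projection onto the simplex {d >= 0, sum_m d_m = 1},
    which enforces the ell_1 constraint on the kernel weights."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def mkl_sketch(kernels, y, C=1.0, n_iter=20, step=0.1):
    """Alternate between an SVM solve on sum_m d_m K_m and a projected
    gradient step on the weights d (illustrative update, not the paper's)."""
    M = len(kernels)
    d = np.full(M, 1.0 / M)  # start from uniform weights on the simplex
    for _ in range(n_iter):
        svm = SVC(C=C, kernel="precomputed").fit(combined_kernel(kernels, d), y)
        a = svm.dual_coef_.ravel()   # alpha_i * y_i on the support vectors
        sv = svm.support_
        # Gradient of the optimal SVM dual value w.r.t. each weight:
        # dJ/dd_m = -0.5 * (alpha*y)^T K_m (alpha*y), so descent shifts
        # weight toward the kernels the current classifier relies on.
        grad = np.array([-0.5 * a @ K[np.ix_(sv, sv)] @ a for K in kernels])
        d = project_simplex(d - step * grad)
    return d, svm
```

A fixed gradient step is used above purely for brevity; gradient-based MKL solvers typically choose the step by line search and stop on a duality-gap criterion rather than after a fixed number of iterations.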