Multi-Task Feature and Kernel Selection for SVMs
Tony Jebara - Columbia University
We compute a common feature selection or kernel selection configuration for multiple support vector machines (SVMs) trained on different yet inter-related datasets. The method is advantageous when multiple classification tasks and differently labeled datasets exist over a common input space. Different datasets can mutually reinforce a common choice of representation or relevant features for their various classifiers. We derive a multi-task representation learning approach using the maximum entropy discrimination formalism. The resulting convex algorithms maintain the global solution properties of support vector machines. However, in addition to multiple SVM classification/regression parameters, they also jointly estimate an optimal subset of features or optimal combination of kernels. Experiments are shown on standardized datasets.
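To make the setting concrete, the sketch below illustrates the general idea of sharing one kernel combination across several SVM tasks. It is not the paper's maximum entropy discrimination algorithm; it is a simple alternating heuristic built on scikit-learn's SVC with precomputed Gram matrices, and names such as make_task, combined_kernel, and the margin-mass reweighting rule are illustrative assumptions only.

```python
# Minimal sketch (assumption, not the paper's MED method): two related binary
# tasks over a common input space share the weights of a convex combination
# of base kernels; per-task SVMs and the shared weights are updated in turn.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel, polynomial_kernel

rng = np.random.default_rng(0)

def make_task(n=80, d=10):
    # Hypothetical data generator: both tasks depend mostly on the first
    # three features, so they share relevant structure.
    X = rng.normal(size=(n, d))
    y = np.sign(X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=n))
    return X, y

tasks = [make_task() for _ in range(2)]

# Base kernels whose convex combination is shared across all tasks.
base_kernels = [
    lambda A, B: linear_kernel(A, B),
    lambda A, B: rbf_kernel(A, B, gamma=0.1),
    lambda A, B: polynomial_kernel(A, B, degree=2),
]
weights = np.ones(len(base_kernels)) / len(base_kernels)

def combined_kernel(A, B, w):
    return sum(wk * k(A, B) for wk, k in zip(w, base_kernels))

for it in range(5):
    # Step 1: train one SVM per task on the current shared kernel combination.
    models = []
    for X, y in tasks:
        K = combined_kernel(X, X, weights)
        models.append(SVC(kernel="precomputed", C=1.0).fit(K, y))

    # Step 2: reweight base kernels by the "margin mass" they carry, summed
    # over all tasks (a crude surrogate for a joint multi-task objective).
    scores = np.zeros(len(base_kernels))
    for (X, y), clf in zip(tasks, models):
        alpha = np.abs(clf.dual_coef_).ravel()
        sv = clf.support_
        for j, k in enumerate(base_kernels):
            scores[j] += alpha @ k(X[sv], X[sv]) @ alpha
    weights = scores / scores.sum()

print("shared kernel weights:", np.round(weights, 3))
```

In this toy version the datasets jointly pull the shared weights toward whichever base kernels support large-margin classifiers on all tasks at once, which mirrors the abstract's point that inter-related datasets can reinforce a common representation; the actual paper obtains this jointly and convexly via maximum entropy discrimination rather than by alternating heuristics.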