On One Method of Non-Diagonal Regularization in Sparse Bayesian Learning
Dmitry Kropotov - Dorodnicyn Computing Centre of the Russian Academy of Sciences, Russia
Dmitry Vetrov - Dorodnicyn Computing Centre of the Russian Academy of Sciences, Russia
In this paper we propose a new type of regularization procedure for training sparse Bayesian methods for classification. Transforming the Hessian matrix of the log-likelihood function to diagonal form, with further regularization along its eigenvectors, allows us to optimize the evidence explicitly as a product of one-dimensional integrals. The automatic determination of regularization coefficients then converges in a single iteration. We show how to apply the proposed approach for both Gaussian and Laplace priors. Both algorithms perform comparably to the state-of-the-art Relevance Vector Machine (RVM) but require less training time and produce sparser decision rules (in terms of degrees of freedom).
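The key computational idea in the abstract — that diagonalizing the Hessian lets a multivariate Gaussian evidence-type integral factorize into a product of one-dimensional integrals — can be illustrated with a minimal sketch. This is not the authors' algorithm, only a numerical check of the underlying identity: for a symmetric positive-definite H with eigendecomposition H = Q diag(λ) Qᵀ, the integral ∫ exp(−½ wᵀHw) dw equals the product of the 1-D Gaussian integrals along the eigen-directions.

```python
import numpy as np

# Build a random symmetric positive-definite matrix standing in for the
# Hessian of the negative log-likelihood (hypothetical example data).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = A @ A.T + 4 * np.eye(4)

# Diagonalize: H = Q @ diag(lam) @ Q.T
lam, Q = np.linalg.eigh(H)

# Closed form of the full multivariate Gaussian integral:
#   int exp(-0.5 * w^T H w) dw = (2*pi)^{d/2} / sqrt(det H)
d = H.shape[0]
full_integral = (2 * np.pi) ** (d / 2) / np.sqrt(np.linalg.det(H))

# Product of one-dimensional Gaussian integrals, one per eigen-direction:
#   int exp(-0.5 * lam_i * v_i^2) dv_i = sqrt(2*pi / lam_i)
product_of_1d = np.prod(np.sqrt(2 * np.pi / lam))

print(np.isclose(full_integral, product_of_1d))  # the two agree
```

In the paper's setting the prior terms are attached to each eigen-direction as well, which is what makes the evidence itself (not just this Gaussian factor) optimizable as a product of one-dimensional integrals.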