Exponentiated Gradient Algorithms for Log-Linear Structured Prediction
Amir Globerson - MIT, USA
Terry Koo - MIT, USA
Xavier Carreras - MIT, USA
Michael Collins - MIT, USA
Conditional log-linear models are a commonly used method for structured prediction. Efficient learning of parameters in these models is therefore an important problem. This paper describes an exponentiated gradient (EG) algorithm for training such models. EG is applied to the convex dual of the maximum likelihood objective; this yields both sequential and parallel update algorithms, where in the sequential algorithm parameters are updated in an online fashion. We provide a convergence proof for both algorithms. Our analysis also simplifies previous results on EG for max-margin models, and leads to a tighter bound on convergence rates. Experiments on a large-scale parsing task show that the proposed algorithm converges much faster than conjugate-gradient and L-BFGS approaches, both in terms of the optimization objective and test error.
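For concreteness, below is a minimal sketch of the generic EG update the abstract refers to: a distribution over the simplex is updated multiplicatively by the exponentiated negative gradient and then renormalized. The toy objective and all names (`eg_step`, `target`) are illustrative assumptions, not the paper's actual dual objective, which is the convex dual of regularized conditional log-likelihood.

```python
import numpy as np

def eg_step(alpha, grad, eta):
    """One exponentiated gradient step on a probability simplex.

    alpha : current distribution (nonnegative, sums to 1)
    grad  : gradient of the objective at alpha
    eta   : learning rate
    """
    # A multiplicative update followed by renormalization keeps
    # alpha on the simplex at every step.
    unnorm = alpha * np.exp(-eta * grad)
    return unnorm / unnorm.sum()

# Toy usage: minimize f(alpha) = 0.5 * ||alpha - target||^2 over the
# simplex (illustrative only; in the paper each training example has
# its own dual distribution over candidate structures).
target = np.array([0.7, 0.2, 0.1])
alpha = np.full(3, 1.0 / 3.0)
for _ in range(200):
    grad = alpha - target
    alpha = eg_step(alpha, grad, eta=0.5)
print(alpha)  # approaches `target`, the simplex minimizer
```

In the structured setting the paper describes, one such distribution is maintained per training example; the sequential variant updates one example's distribution at a time, while the parallel variant updates all of them simultaneously.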