Learning Associative Markov Networks

Ben Taskar - Stanford University
Vassil Chatalbashev - Stanford University
Daphne Koller - Stanford University

Markov networks are extensively used to model complex sequential, spatial, and relational interactions in fields as diverse as image processing, natural language analysis, and bioinformatics. However, inference and learning in general Markov networks are intractable. In this paper, we focus on learning a large subclass of such models (which we call associative Markov networks) that are tractable or closely approximable. This subclass contains networks of discrete variables with K labels each and clique potentials that favor the same labels for all variables in the clique. Such networks capture the ``guilt by association'' pattern of reasoning present in many domains, in which connected (``associated'') variables tend to have the same label. Our approach exploits a linear programming relaxation for the task of finding the best joint assignment in such networks, which provides an approximate quadratic program (QP) for the problem of learning a margin-maximizing Markov network. We show that for associative Markov networks over binary-valued variables, this approximate QP is guaranteed to return an optimal parameterization for Markov networks of arbitrary topology. For the non-binary case, optimality is not guaranteed, but the relaxation produces good solutions in practice. Experimental results with hypertext and newswire classification show significant advantages over standard approaches.
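To make the inference side of the abstract concrete, the following is a minimal sketch (not from the paper) of the standard LP relaxation for MAP inference in a pairwise associative Markov network: node marginals mu_i(k) must sum to one, edge agreement variables mu_e(k) are capped by both endpoint marginals, and associative edge potentials reward shared labels. All potentials and the 3-node chain below are illustrative numbers chosen for the example; for binary-valued variables (K = 2), the abstract's claim is that this relaxation is tight, so the LP optimum here comes out integral.

```python
import numpy as np
from scipy.optimize import linprog

# Toy binary pairwise AMN: a 3-node chain with K = 2 labels.
# theta_node[i, k]: (log-)potential for node i taking label k.
# Illustrative values: nodes 0 and 1 prefer label 0, node 2 prefers label 1.
theta_node = np.array([[2.0, 0.0],
                       [1.5, 0.0],
                       [0.0, 1.0]])
edges = [(0, 1), (1, 2)]
# Associative edge potentials: nonnegative reward when both endpoints
# share label k; disagreeing labels earn nothing.
theta_edge = np.array([[1.0, 1.0],   # edge (0,1)
                       [3.0, 3.0]])  # edge (1,2): strong agreement reward

n, K, m = 3, 2, len(edges)
nv = n * K + m * K                   # mu_i(k) variables, then mu_e(k)

def node_var(i, k): return i * K + k
def edge_var(e, k): return n * K + e * K + k

# Maximize theta . mu  ->  linprog minimizes, so negate the objective.
c = np.zeros(nv)
for i in range(n):
    for k in range(K):
        c[node_var(i, k)] = -theta_node[i, k]
for e in range(m):
    for k in range(K):
        c[edge_var(e, k)] = -theta_edge[e, k]

# Equality constraints: sum_k mu_i(k) = 1 for every node i.
A_eq = np.zeros((n, nv))
b_eq = np.ones(n)
for i in range(n):
    for k in range(K):
        A_eq[i, node_var(i, k)] = 1.0

# Inequalities: mu_e(k) <= mu_i(k) and mu_e(k) <= mu_j(k) for edge e = (i, j).
rows = []
for e, (i, j) in enumerate(edges):
    for k in range(K):
        for endpoint in (i, j):
            r = np.zeros(nv)
            r[edge_var(e, k)] = 1.0
            r[node_var(endpoint, k)] = -1.0
            rows.append(r)
A_ub = np.array(rows)
b_ub = np.zeros(len(rows))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * nv, method="highs")
mu = res.x[:n * K].reshape(n, K)
labels = mu.argmax(axis=1)
print("LP node marginals:\n", mu.round(3))
print("MAP labels:", labels)
```

Here the strong agreement reward on edge (1, 2) overrides node 2's weak preference for label 1, so the best joint assignment labels the whole chain 0; because K = 2, the LP solution is already integral rather than fractional.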