Learning Bayesian Network Classifiers by Maximizing Conditional Likelihood
Daniel Grossman (University of Washington)
Pedro Domingos (University of Washington)
Bayesian networks are a powerful probabilistic representation, and their use for classification has received considerable attention. However, they tend to perform poorly when learned in the standard way. This is attributable to a mismatch between the objective function used (likelihood or a function thereof) and the goal of classification (maximizing accuracy or conditional likelihood). Unfortunately, the computational cost of optimizing structure and parameters for conditional likelihood is prohibitive. In this paper we show that a simple approximation, choosing structures by maximizing conditional likelihood while setting parameters by maximum likelihood, yields good results. On a large suite of benchmark datasets, this approach produces better class probability estimates than naive Bayes, TAN, and generatively-trained Bayesian networks.
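The core idea of the approximation can be illustrated with a minimal sketch: score candidate network structures by the conditional log-likelihood of the class given the attributes, while filling in the conditional probability tables by simple maximum-likelihood counts. The toy dataset, the variable indexing, and the two candidate structures below are illustrative assumptions, not material from the paper; the paper's actual experiments use benchmark datasets and a full structure search.

```python
import math
from collections import Counter

# Hypothetical toy data: each row is (x1, x2, class), all binary.
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1),
        (1, 0, 1), (1, 1, 1), (0, 0, 0), (1, 1, 1)]

def ml_cpt(rows, child, parents):
    """Maximum-likelihood CPT P(child | parents), Laplace-smoothed.
    Variables are indexed 0 and 1 for attributes, 2 for the class."""
    counts = Counter()
    for row in rows:
        key = tuple(row[p] for p in parents)
        counts[key + (row[child],)] += 1
    def p(child_val, parent_vals):
        num = counts[tuple(parent_vals) + (child_val,)] + 1
        den = sum(counts[tuple(parent_vals) + (v,)] for v in (0, 1)) + 2
        return num / den
    return p

def cll(rows, structure):
    """Conditional log-likelihood sum_i log P(c_i | x_i), with all
    parameters set by maximum likelihood (the paper's approximation)."""
    cpts = {child: ml_cpt(rows, child, parents)
            for child, parents in structure.items()}
    prior = ml_cpt(rows, 2, [])          # class prior P(C)
    total = 0.0
    for row in rows:
        joint = {}
        for c in (0, 1):                 # joint P(c, x) for each class value
            full = (row[0], row[1], c)
            p = prior(c, ())
            for child, parents in structure.items():
                p *= cpts[child](full[child], [full[q] for q in parents])
            joint[c] = p
        total += math.log(joint[row[2]] / (joint[0] + joint[1]))
    return total

# Two candidate structures over attributes 0 and 1 (variable 2 is the class):
nb  = {0: [2], 1: [2]}      # naive Bayes: each attribute depends only on the class
tan = {0: [2], 1: [2, 0]}   # TAN-style: attribute 1 also depends on attribute 0

print(cll(data, nb), cll(data, tan))
```

A structure search would simply keep whichever candidate scores higher and continue hill-climbing over edge additions; the key point is that only the score is discriminative, while parameter estimation stays cheap and closed-form.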