We present an algorithm for building probabilistic rule lists that is two orders of magnitude faster than previous work. Rule list algorithms are competitors to decision tree algorithms. They are associative classifiers, in that they are built from pre-mined association rules. They have a logical structure that is a sequence of IF-THEN rules, identical to a decision list or one-sided decision tree. Instead of using greedy splitting and pruning like decision tree algorithms, we aim to fully optimize over rule lists, striking a practical balance between accuracy, interpretability, and computational speed. The algorithm presented here uses a mixture of theoretical bounds (tight enough to have practical implications as a screening or bounding procedure), computational reuse, and highly tuned language libraries to achieve computational efficiency. For many practical problems, this method achieves better accuracy and sparsity than decision trees, and its computational time is practical and often less than that of decision trees.
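To make the IF-THEN structure concrete, here is a minimal sketch of how a probabilistic rule list classifies an example: rules are tried in order, and the first rule whose antecedent matches determines the predicted probability. This is an illustration of the data structure only, not the paper's optimization algorithm; the rules, features, and probabilities below are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): a probabilistic
# rule list is an ordered sequence of (antecedent, probability) pairs.
# The first rule whose antecedent matches the example fires; if none
# match, a default probability is returned.

def predict(rule_list, default_prob, example):
    """Return P(y=1) from the first matching rule, else the default."""
    for antecedent, prob in rule_list:
        if all(example.get(feature) == value for feature, value in antecedent.items()):
            return prob
        # fall through to the next rule if this antecedent does not match
    return default_prob

# Hypothetical pre-mined rules on made-up features, for illustration only.
rules = [
    ({"age<30": True, "smoker": True}, 0.81),  # IF age<30 AND smoker THEN P(y=1)=0.81
    ({"smoker": True}, 0.60),                  # ELSE IF smoker THEN P(y=1)=0.60
]

print(predict(rules, 0.25, {"age<30": True, "smoker": True}))  # → 0.81
print(predict(rules, 0.25, {"smoker": False}))                 # → 0.25 (default)
```

The sequential "first match wins" semantics is what makes a rule list equivalent to a one-sided decision tree: each antecedent is a node, and every non-matching example falls through to the next rule.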
Hongyu Yang (Massachusetts Institute of Technology)
Cynthia Rudin (Duke University)
Margo Seltzer (Harvard University)
Related Events
2017 Talk: Scalable Bayesian Rule Lists
Mon Aug 7th 03:48 -- 04:06 AM, Room C4.9 & C4.10