TT-Sparse: Learning Sparse Rule Models with Differentiable Truth Tables
Abstract
Interpretable machine learning is essential in high-stakes domains where decision-making requires accountability, transparency, and trust. While rule-based models offer global and exact interpretability, learning rule sets that achieve high predictive performance while remaining compact enough to be human-understandable, and that generalize across tasks, remains a difficult challenge. To address this, we introduce TT-Sparse, a flexible neural building block that uses differentiable truth tables as nodes to learn sparse, effective connections. A key contribution of our approach is a novel soft TopK operator with straight-through estimation, which keeps the sparse connections differentiable so that gradients can be backpropagated through them and meaningful connections can be identified. This design allows each node to be exactly transformed into DNF/CNF formulas via the Quine-McCluskey algorithm, reducing the entire model to interpretable Boolean formulas. Extensive empirical results across 28 datasets spanning binary classification, multiclass classification, and regression tasks show that the learned sparse rules achieve superior predictive performance at lower complexity than existing state-of-the-art methods.
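To make the straight-through TopK idea concrete, the following is a minimal illustrative sketch (not the paper's exact operator): the forward pass applies a hard binary mask over the k largest connection scores, while the backward pass routes gradients through a softmax relaxation. The function name, temperature parameter, and shapes are assumptions for illustration only.

```python
import torch

def straight_through_topk(scores: torch.Tensor, k: int, temperature: float = 1.0) -> torch.Tensor:
    """Illustrative straight-through TopK mask (hypothetical sketch, not TT-Sparse's exact operator).

    Forward pass: a hard binary mask selecting the k largest scores.
    Backward pass: gradients flow through a softmax relaxation of the scores.
    """
    # Soft relaxation, used only to carry gradients.
    soft = torch.softmax(scores / temperature, dim=-1)

    # Hard binary mask over the k largest entries.
    topk_idx = scores.topk(k, dim=-1).indices
    hard = torch.zeros_like(scores).scatter_(-1, topk_idx, 1.0)

    # Straight-through estimator: hard values in the forward pass,
    # soft gradients in the backward pass.
    return hard + soft - soft.detach()

# Example: select 2 of 5 candidate input connections for one truth-table node.
scores = torch.randn(5, requires_grad=True)
mask = straight_through_topk(scores, k=2)
mask.sum().backward()  # gradients reach `scores` despite the hard selection
```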