Poster
in
Workshop: Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators

Enhancing Concept-based Learning with Logic

Deepika Vemuri · Gautham Bellamkonda · Vineeth N Balasubramanian

Keywords: [ Concepts ] [ First-order Logic ] [ Interpretability ] [ Differentiable Logic ]


Abstract:

Concept-based models promote learning in terms of high-level transferable abstractions. These models offer one more level of transparency than a black-box model, since their predictions are a weighted combination of concepts. The relations between concepts are a rich source of information that would complement learning. We propose using first-order logic derived from the concepts to model these relations and to address the expressivity-versus-interpretability tradeoff in these models. We introduce three architectural variants that give rise to logic-enhanced models, analyse several ways of training them, and show experimentally that logic-enhanced concept-based models perform better than or on par with the base models, with the additional benefits of better concept alignment and interpretability. These models allow for a richer formal expression of predictions, paving the way for logical reasoning with symbolic concepts.
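To make the two ingredients of the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a concept-based prediction as a weighted combination of concept activations, plus a differentiable relaxation of a first-order rule over those concepts. The concept names, the example rule, the product t-norm for conjunction, and the Reichenbach relaxation of implication are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Concept activations in [0, 1] for one input.
# Concept names are hypothetical: striped, four_legged, has_wings.
concepts = np.array([0.9, 0.8, 0.1])

# Base concept-based prediction: a weighted combination of concepts.
weights = np.array([1.5, 1.0, -2.0])
bias = -0.5
prediction = sigmoid(weights @ concepts + bias)

# Differentiable relaxation of an illustrative first-order rule,
# "striped AND four_legged -> NOT has_wings", using the product t-norm
# for AND and the Reichenbach relaxation of implication:
# truth(a -> b) = 1 - a + a * b.
a = concepts[0] * concepts[1]      # product t-norm for conjunction
b = 1.0 - concepts[2]              # negation as 1 - x
rule_truth = 1.0 - a + a * b
logic_penalty = 1.0 - rule_truth   # could be added, scaled, to a training loss

print(float(prediction), float(rule_truth))
```

Because every operation is smooth, the penalty can be backpropagated through the concept predictor, which is the general mechanism that makes logic a trainable signal rather than a post-hoc filter.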
