

Poster

On the Independence Assumption in Neurosymbolic Learning

Emile van Krieken · Pasquale Minervini · Edoardo Ponti · Antonio Vergari

Hall C 4-9 #1500
Thu 25 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

State-of-the-art neurosymbolic learning systems use probabilistic reasoning to guide neural networks towards predictions that conform to logical constraints. Many such systems assume that the probabilities of the considered symbols are conditionally independent given the input to simplify learning and reasoning. We study and criticise this assumption, highlighting how it can hinder optimisation and prevent uncertainty quantification. We prove that loss functions bias conditionally independent neural networks to become overconfident in their predictions. As a result, they are unable to represent uncertainty over multiple valid options. Furthermore, we prove that the minima of such loss functions are usually highly disconnected and non-convex, and thus difficult to optimise. Our theoretical analysis lays the foundation for replacing the conditional independence assumption and designing more expressive neurosymbolic probabilistic models.
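
To make the independence assumption concrete, below is a small illustrative sketch, not taken from the paper's code: the constraint (exactly one of two binary symbols is true) and the function name `constraint_prob` are assumptions chosen for illustration. Under conditional independence, P(a, b | x) factorises as P(a | x) P(b | x), so the probability of satisfying the constraint is p(1 - q) + (1 - p)q. This quantity reaches 1 only at the deterministic corners (p, q) = (1, 0) or (0, 1), while a model that is genuinely uncertain between the two valid options (p = q = 0.5) can reach at most 0.5, which mirrors the abstract's claims about overconfidence and disconnected minima.

```python
import numpy as np

# Illustrative sketch (assumed example, not the paper's code):
# two binary symbols a, b whose probabilities are conditionally
# independent given the input, i.e. P(a, b | x) = p * q factorisation.
def constraint_prob(p: float, q: float) -> float:
    """Probability that exactly one of a, b is true (a XOR b)
    under the conditional independence assumption."""
    return p * (1 - q) + (1 - p) * q

# A maximally uncertain model over the two valid worlds (1,0) and (0,1):
print(constraint_prob(0.5, 0.5))  # 0.5 -- cannot reach probability 1

# Only the deterministic, overconfident corners satisfy the constraint
# with probability 1, and they are two disconnected optima.
print(constraint_prob(1.0, 0.0))  # 1.0
print(constraint_prob(0.0, 1.0))  # 1.0

# Scan a grid: the maximum value 1.0 is attained only at those two
# corners, so the set of maximisers is disconnected.
ps = np.linspace(0.0, 1.0, 101)
grid = np.array([[constraint_prob(p, q) for q in ps] for p in ps])
print(grid.max())  # 1.0
```

By contrast, a joint (non-independent) distribution could place probability 0.5 on each of the two valid worlds, satisfying the constraint with probability 1 while remaining uncertain; this is the expressiveness that, per the abstract, the independence assumption rules out.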
