Poster

Injecting Logical Constraints into Neural Networks via Straight-Through Estimators

Zhun Yang · Joohyung Lee · Chiyoun Park

Hall E #215

Keywords: [ MISC: Supervised Learning ] [ DL: Theory ] [ DL: Graph Neural Networks ] [ MISC: Unsupervised and Semi-supervised Learning ] [ DL: Everything Else ]


Abstract:

Injecting discrete logical constraints into neural network learning is one of the main challenges in neuro-symbolic AI. We find that the straight-through estimator, a method introduced to train binary neural networks, can be applied effectively to incorporate logical constraints into neural network learning. More specifically, we design a systematic way to represent discrete logical constraints as a loss function; minimizing this loss with gradient descent via a straight-through estimator updates the neural network's weights in the direction that makes the binarized outputs satisfy the logical constraints. The experimental results show that, by leveraging GPUs and batch training, this method scales significantly better than existing neuro-symbolic methods that require heavy symbolic computation to compute gradients. We also demonstrate that our method applies to different types of neural networks, such as MLPs, CNNs, and GNNs, enabling them to learn with fewer or no labeled data by learning directly from known constraints.
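A minimal, hypothetical sketch of the core idea described in the abstract (not the authors' released code): binarize network outputs with a straight-through estimator so that a loss penalizing violated logical constraints can still be minimized by gradient descent. The example constraint and all names below are illustrative assumptions.

import torch

class BinarizeSTE(torch.autograd.Function):
    """Hard-threshold in the forward pass; identity gradient in the backward pass."""
    @staticmethod
    def forward(ctx, probs):
        return (probs > 0.5).float()      # discrete 0/1 outputs

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                # straight-through: pass gradient unchanged

def constraint_loss(binary_out):
    # Illustrative constraint: "at least one of the two outputs must be true" (a OR b).
    # The loss is 0 when the constraint is satisfied and positive otherwise.
    a, b = binary_out[:, 0], binary_out[:, 1]
    return torch.relu(1.0 - (a + b)).mean()

# Usage with an arbitrary network producing probabilities:
net = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.randn(32, 4)                    # unlabeled inputs
for _ in range(100):
    probs = net(x)
    binary = BinarizeSTE.apply(probs)     # discrete in the forward pass, differentiable in the backward pass
    loss = constraint_loss(binary)        # no labels needed, only the constraint
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the backward pass treats the thresholding as the identity, gradients of the constraint-violation loss flow to the network weights even though the forward outputs are discrete.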
