Poster in Workshop: Knowledge and Logical Reasoning in the Era of Data-driven Learning
EXPLAIN, AGREE and LEARN: A Recipe for Scalable Neural-Symbolic Learning
Victor Verreet · Lennert De Smet · Emanuele Sansone
Recent progress in neural-symbolic AI (NeSy) has demonstrated that neural networks can benefit greatly from integration with symbolic reasoning methods in terms of interpretability, data-efficiency and generalisation performance. Unfortunately, the symbolic component can lead to intractable computations in more complicated domains. This computational bottleneck has prevented the successful application of NeSy to more practical problems. We present EXPLAIN, AGREE and LEARN, an alternative paradigm that addresses the scalability problem of logic-based NeSy learning. EXPLAIN leverages sampling, driven by a newly introduced diversity criterion, to obtain a representative set of possible explanations for the logic component. AGREE then assigns an importance to each sampled explanation based on the neural predictions. This defines the learning objective, which for sufficiently many samples is guaranteed to coincide with the objective used by exact NeSy approaches such as DeepProbLog. Using this objective, LEARN updates the neural component with direct supervision on its outputs, without the need to propagate gradients through the logic component. Our approximate paradigm and its theoretical guarantees are experimentally supported: it competes with existing exact NeSy frameworks while outperforming them in terms of scalability.
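As a rough illustration only (the abstract gives no implementation details), the three steps can be sketched on a toy MNIST-addition-style task: two classifiers predict digit distributions, and the logic component asserts that the digits sum to an observed label. All function names below are our own, and EXPLAIN is simplified to exhaustive enumeration of explanations rather than the paper's diversity-driven sampling:

```python
import numpy as np

def explain(label, n_classes=10):
    # EXPLAIN (simplified): enumerate explanations of the logic constraint
    # a + b == label, i.e. all digit pairs consistent with the observation.
    # The actual method samples a diverse subset instead of enumerating.
    return [(a, label - a) for a in range(n_classes) if 0 <= label - a < n_classes]

def agree(explanations, p1, p2):
    # AGREE: weight each explanation by how much the neural predictions
    # (digit distributions p1, p2) agree with it, then normalise.
    w = np.array([p1[a] * p2[b] for a, b in explanations])
    return w / w.sum()

def learn_targets(explanations, weights, n_classes=10):
    # LEARN: build direct supervision targets for each network output as the
    # weighted marginal over explanations; training then fits the outputs to
    # these targets, so no gradient flows through the logic component.
    t1, t2 = np.zeros(n_classes), np.zeros(n_classes)
    for (a, b), w in zip(explanations, weights):
        t1[a] += w
        t2[b] += w
    return t1, t2
```

For example, with uniform predictions and observed sum 4, the five explanations (0,4) through (4,0) receive equal weight, and each classifier's target spreads probability mass evenly over digits 0–4.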