
Parallel Neurosymbolic Integration with Concordia

Jonathan Feldstein · Modestas Jurcius · Efthymia Tsamoura

Exhibit Hall 1 #128


Parallel neurosymbolic architectures have been applied effectively in NLP by distilling knowledge from a logic theory into a deep model. However, prior art faces several limitations, including support for only restricted forms of logic theories and reliance on the assumption that the logic theory and the deep network are independent. We present Concordia, a framework that overcomes these limitations. Concordia is agnostic to both the deep network and the logic theory, supporting a wide range of probabilistic theories. The framework supports supervised training of both components as well as unsupervised training of the neural component. Concordia has been successfully applied to tasks beyond NLP and data classification, improving state-of-the-art accuracy on collective activity detection, entity linking, and recommendation tasks.
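The distillation setup the abstract refers to can be illustrated with a minimal sketch. In this style of parallel neurosymbolic training, a logic theory acts as a teacher that assigns a distribution over labels, and the neural student is trained on a blend of the supervised loss and a distillation term toward the teacher. The code below is a simplified, hypothetical illustration of that objective (not Concordia's actual implementation); the function names, the mixing weight `pi`, and the toy teacher distribution are all assumptions for the sake of the example.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_probs, label, pi=0.5):
    """Blend supervised cross-entropy with a distillation term that
    pulls the student's distribution toward the logic teacher's.

    pi controls the trade-off: pi=0 is purely supervised,
    pi=1 imitates the teacher only.
    """
    p = softmax(student_logits)
    ce_label = -np.log(p[label])                      # supervised term
    ce_teacher = -(teacher_probs * np.log(p)).sum()   # distillation term
    return (1 - pi) * ce_label + pi * ce_teacher

# Toy 3-class example; the teacher distribution stands in for a
# posterior produced by a probabilistic logic theory (hypothetical values).
logits = np.array([2.0, 0.5, -1.0])
teacher = np.array([0.7, 0.2, 0.1])
loss = distillation_loss(logits, teacher, label=0, pi=0.6)
```

In a full system, the teacher distribution would be recomputed from the logic theory's inference over each example, and frameworks differ in whether (and how) the theory's parameters are trained jointly with the network, which is where the independence assumption criticized above becomes relevant.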
