Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality. In this work, we tackle this challenge through the lens of symbolic programming. We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module equipped with provenance generates top-k proofs by deductive reasoning. In contrast to works that rely on hand-crafted logic rules, our differentiable symbolic reasoning architecture efficiently learns weighted rules to further improve LMs. DSR-LM is scalable, interpretable, and allows easy integration of prior knowledge, thereby supporting extensive symbolic programming to robustly derive a logical conclusion. Our experiments show that DSR-LM leads to improved logical reasoning of pre-trained LMs and outperforms a spectrum of competitive baselines even under systematic distribution shifts on sequence lengths.
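To make the described architecture concrete, below is a minimal, hypothetical sketch (not the authors' code) of the kind of pipeline the abstract outlines: a pre-trained LM scores candidate relational facts, a differentiable reasoning layer with learnable rule weights combines those facts into proofs, and a query probability is produced that backpropagates to both the LM and the rule weights. The class and argument names (`NeuroSymbolicPipeline`, `proof_index`) are illustrative, and the noisy-or aggregation is a simplified stand-in for the provenance-based top-k proof semantics used by the actual symbolic module.

```python
# Hypothetical sketch of an LM + differentiable symbolic reasoning pipeline.
# Assumes a HuggingFace-style encoder is passed in; not the authors' implementation.
import torch
import torch.nn as nn

class NeuroSymbolicPipeline(nn.Module):
    def __init__(self, lm_encoder, num_relations, num_rules):
        super().__init__()
        self.lm_encoder = lm_encoder                                  # pre-trained LM encoder (perception)
        self.fact_head = nn.Linear(lm_encoder.config.hidden_size, num_relations)
        self.rule_weights = nn.Parameter(torch.zeros(num_rules))     # learnable weights over candidate logic rules

    def forward(self, input_ids, attention_mask, proof_index):
        # 1. Perception: the LM assigns a probability to each candidate fact
        #    (e.g., a relation holding between an entity pair mentioned in the text).
        h = self.lm_encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        fact_probs = torch.sigmoid(self.fact_head(h))                 # [batch, num_relations]

        # 2. Reasoning (simplified): each proof is a conjunction of facts gated by one rule;
        #    the query holds if any proof holds (noisy-or), standing in for top-k provenance.
        rule_probs = torch.sigmoid(self.rule_weights)
        proof_scores = []
        for fact_ids, rule_id in proof_index:                         # proof_index: list of (fact ids, rule id)
            proof_scores.append(fact_probs[:, fact_ids].prod(dim=-1) * rule_probs[rule_id])
        proof_scores = torch.stack(proof_scores, dim=-1)              # [batch, num_proofs]
        query_prob = 1.0 - (1.0 - proof_scores).prod(dim=-1)          # noisy-or over proofs

        # Gradients flow to both the LM parameters and the rule weights,
        # which is the sense in which the symbolic rules are learned end to end.
        return query_prob
```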
Author Information
Hanlin Zhang (Carnegie Mellon University)
Ziyang Li (University of Pennsylvania)
Jiani Huang (University of Pennsylvania)
Mayur Naik (University of Pennsylvania)
Eric Xing (Petuum Inc. and Carnegie Mellon University)
More from the Same Authors
- 2020: Spotlight Talk (1): Generating Programmatic Referring Expressions via Program Synthesis »
  Jiani Huang
- 2020 Poster: Generating Programmatic Referring Expressions via Program Synthesis »
  Jiani Huang · Calvin Smith · Osbert Bastani · Rishabh Singh · Aws Albarghouthi · Mayur Naik
- 2019 Poster: Fault Tolerance in Iterative-Convergent Machine Learning »
  Aurick Qiao · Bryon Aragam · Bingjing Zhang · Eric Xing
- 2019 Oral: Fault Tolerance in Iterative-Convergent Machine Learning »
  Aurick Qiao · Bryon Aragam · Bingjing Zhang · Eric Xing