The goal of neural-symbolic computation is to integrate the connectionist and symbolist paradigms. Prior methods learn neural-symbolic models with reinforcement learning (RL) approaches, which ignore error propagation through the symbolic reasoning module and thus converge slowly under sparse rewards. In this paper, we address these issues and close the loop of neural-symbolic learning by (1) introducing the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning, and (2) proposing a novel back-search algorithm that mimics the top-down, human-like learning procedure to propagate the error through the symbolic reasoning module efficiently. We further interpret the proposed learning framework as maximum likelihood estimation using Markov chain Monte Carlo sampling, and the back-search algorithm as a Metropolis-Hastings sampler. The experiments are conducted on two weakly-supervised neural-symbolic tasks: (1) handwritten formula recognition on the newly introduced HWF dataset; (2) visual question answering on the CLEVR dataset. The results show that our approach significantly outperforms the RL methods in terms of performance, convergence speed, and data efficiency. Our code and data are released at https://liqing-ustc.github.io/NGS.
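The back-search idea can be illustrated with a minimal, hypothetical sketch for the handwritten-formula task: when the perceived formula does not evaluate to the ground-truth answer, search for a small symbolic correction that does, and use the corrected sequence as a pseudo-label for the perception module. The names `VOCAB`, `evaluate`, and `back_search` below are illustrative assumptions, not the paper's API, and the sketch enumerates one-symbol edits for clarity, whereas the actual algorithm proposes corrections as a Metropolis-Hastings sampler.

```python
# Illustrative 1-step back-search for handwritten formula recognition.
# Hypothetical names; the paper's method samples corrections rather than
# enumerating them exhaustively.

VOCAB = list("0123456789+-*")  # toy symbol set for this sketch

def evaluate(symbols):
    """Evaluate a formula given as a list of symbol strings; None if invalid."""
    try:
        return eval("".join(symbols))
    except (SyntaxError, NameError, ZeroDivisionError):
        return None

def back_search(predicted, answer):
    """Return a sequence that evaluates to `answer`, changing at most one
    symbol of `predicted`; None if no single-symbol edit suffices."""
    if evaluate(predicted) == answer:
        return predicted
    for i in range(len(predicted)):
        for sym in VOCAB:
            candidate = predicted[:i] + [sym] + predicted[i + 1:]
            if evaluate(candidate) == answer:
                return candidate  # corrected sequence = pseudo-label
    return None

# Perception misread the digit "2" as "5"; back-search recovers it from
# the answer alone (weak supervision).
corrected = back_search(list("1+5"), 3)  # → ['1', '+', '2']
```

The corrected sequence then supervises the perception network directly, which is what lets the error signal bypass the non-differentiable reasoning step instead of arriving as a sparse RL reward.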
Author Information
Qing Li (UCLA)
Siyuan Huang (UCLA)
Yining Hong (UCLA)
Yixin Chen (UCLA)
Ying Nian Wu (UCLA)
Song-Chun Zhu (UCLA)
More from the Same Authors
- 2023 : MindDial: Belief Dynamics Tracking with Theory-of-Mind Modeling for Neural Dialogue Generation »
  Shuwen Qiu · Song-Chun Zhu · Zilong Zheng
- 2023 Poster: On the Complexity of Bayesian Generalization »
  Yu-Zhe Shi · Manjie Xu · John Hopcroft · Kun He · Josh Tenenbaum · Song-Chun Zhu · Ying Nian Wu · Wenjuan Han · Yixin Zhu
- 2023 Poster: Diverse and Faithful Knowledge-Grounded Dialogue Generation via Sequential Posterior Inference »
  Yan Xu · Deqian Kong · Dehong Xu · Ziwei Ji · Bo Pang · Pascale Fung · Ying Nian Wu
- 2022 Poster: COAT: Measuring Object Compositionality in Emergent Representations »
  Sirui Xie · Ari Morcos · Song-Chun Zhu · Shanmukha Ramakrishna Vedantam
- 2022 Spotlight: COAT: Measuring Object Compositionality in Emergent Representations »
  Sirui Xie · Ari Morcos · Song-Chun Zhu · Shanmukha Ramakrishna Vedantam
- 2022 Poster: Latent Diffusion Energy-Based Model for Interpretable Text Modelling »
  Peiyu Yu · Sirui Xie · Xiaojian Ma · Baoxiong Jia · Bo Pang · Ruiqi Gao · Yixin Zhu · Song-Chun Zhu · Ying Nian Wu
- 2022 Spotlight: Latent Diffusion Energy-Based Model for Interpretable Text Modelling »
  Peiyu Yu · Sirui Xie · Xiaojian Ma · Baoxiong Jia · Bo Pang · Ruiqi Gao · Yixin Zhu · Song-Chun Zhu · Ying Nian Wu
- 2021 : [12:02 - 12:47 PM UTC] Invited Talk 1: Explainable AI: How Machines Gain Justified Trust from Humans »
  Song-Chun Zhu
- 2021 Workshop: ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI »
  Quanshi Zhang · Tian Han · Lixin Fan · Zhanxing Zhu · Hang Su · Ying Nian Wu
- 2021 Poster: Latent Space Energy-Based Model of Symbol-Vector Coupling for Text Generation and Classification »
  Bo Pang · Ying Nian Wu
- 2021 Spotlight: Latent Space Energy-Based Model of Symbol-Vector Coupling for Text Generation and Classification »
  Bo Pang · Ying Nian Wu
- 2020 : Spotlight Talk (2): Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning »
  Qing Li
- 2018 Poster: Generalized Earley Parser: Bridging Symbolic Grammars and Sequence Data for Future Prediction »
  Siyuan Qi · Baoxiong Jia · Song-Chun Zhu
- 2018 Oral: Generalized Earley Parser: Bridging Symbolic Grammars and Sequence Data for Future Prediction »
  Siyuan Qi · Baoxiong Jia · Song-Chun Zhu