

Invited Talk in Workshop: Knowledge and Logical Reasoning in the Era of Data-driven Learning

Large Neural Models' Self-Learning Symbolic Knowledge


Abstract:

Recent large neural models have shown impressive performance on various data modalities, including natural language, vision, programming languages, and molecules. However, they still show surprising deficiencies (near-random performance) in acquiring certain types of knowledge, such as structured knowledge and action knowledge. In this talk I propose a two-way knowledge acquisition framework that makes symbolic and neural learning approaches mutually enhance each other. In the first stage, we elicit and acquire explicit symbolic knowledge from large neural models. In the second stage, we leverage the acquired symbolic knowledge to augment and enhance these big models. I will present two recent case studies to demonstrate this framework:

(1) The first task is to induce event schemas (stereotypical structures of events and their connections) from large language models through incremental prompting and verification [Li et al., ACL 2023], and to apply the induced schemas to enhance event extraction and event prediction.
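To make the incremental prompting-and-verification idea concrete, below is a minimal Python sketch. It is an illustrative approximation, not the procedure of Li et al. (ACL 2023): `query_llm` is a hypothetical stand-in for any LLM completion call, and its canned responses exist only so the sketch runs end to end.

```python
"""Sketch of an incremental prompting-and-verification loop for event
schema induction. Illustrative only; not the authors' exact method."""

def query_llm(prompt: str) -> str:
    # Hypothetical LLM call; replace with a real completion API.
    # Canned answers let the sketch run without any external service.
    if prompt.startswith("Is"):
        return "yes"
    return "gather materials; assemble device; plant device; detonate device"

def induce_schema(scenario: str, max_rounds: int = 3) -> list[str]:
    """Grow an event schema one node at a time, verifying each candidate."""
    schema: list[str] = []
    frontier = [scenario]
    for _ in range(max_rounds):
        next_frontier = []
        for event in frontier:
            # 1) Incremental prompting: ask for sub-events of the current node.
            candidates = query_llm(f"List the sub-events of '{event}':").split(";")
            for cand in (c.strip() for c in candidates if c.strip()):
                # 2) Verification: ask the model to confirm the candidate belongs.
                answer = query_llm(f"Is '{cand}' really part of '{scenario}'? yes/no")
                if answer.strip().lower().startswith("yes") and cand not in schema:
                    schema.append(cand)
                    next_frontier.append(cand)
        frontier = next_frontier
    return schema

if __name__ == "__main__":
    print(induce_schema("bombing attack"))
```

In the real setting the verification step filters hallucinated or redundant sub-events before they are added to the growing schema graph.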

(2) In the second task, we observed that current large video-language models rely on object recognition as a shortcut for action understanding. We use a Knowledge Patcher network to elicit new action knowledge from these models and a Knowledge Fuser component to integrate the Patcher into frozen video-language models.
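The following PyTorch sketch illustrates the general patch-then-fuse pattern: a small trainable module learns new features on top of a frozen backbone, and a fusion module blends them back in. The module shapes and names are assumptions for illustration, not the architecture of the actual Knowledge Patcher and Knowledge Fuser.

```python
"""Schematic patch-then-fuse setup over a frozen video-language backbone.
Shapes, layer choices, and names are illustrative assumptions."""

import torch
import torch.nn as nn

class FrozenBackbone(nn.Module):
    """Stand-in for a pretrained video-language encoder (kept frozen)."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)  # placeholder for a real VLM
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        return self.encoder(video_feats)

class KnowledgePatcher(nn.Module):
    """Lightweight trainable head that elicits new (e.g., action) knowledge."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, frozen_feats: torch.Tensor) -> torch.Tensor:
        patched, _ = self.attn(frozen_feats, frozen_feats, frozen_feats)
        return self.proj(patched)

class KnowledgeFuser(nn.Module):
    """Gated residual fusion of patched features into the frozen features."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, frozen_feats, patched_feats):
        g = self.gate(torch.cat([frozen_feats, patched_feats], dim=-1))
        return frozen_feats + g * patched_feats  # frozen path stays untouched

if __name__ == "__main__":
    x = torch.randn(2, 8, 256)  # (batch, video tokens, feature dim)
    backbone, patcher, fuser = FrozenBackbone(), KnowledgePatcher(), KnowledgeFuser()
    frozen = backbone(x)
    fused = fuser(frozen, patcher(frozen))
    print(fused.shape)  # torch.Size([2, 8, 256])
```

Only the Patcher and Fuser parameters are trained, so the new action knowledge is added without disturbing the frozen backbone's existing capabilities.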
