

Poster in Workshop: Challenges in Deployable Generative AI

Answering Causal Questions with Augmented LLMs

Nick Pawlowski · Joel Jennings · Cheng Zhang

Keywords: [ tool usage ] [ augmented language model ] [ causal LLM ] [ causality ]


Abstract:

Large Language Models (LLMs) are revolutionising the way we interact with machines and enabling never-before-seen applications. A common use case of LLMs is as a chat interface to more complicated underlying systems, enabling natural interaction without the need to learn system specifics. However, LLMs in their current form alone are not sufficient for causal reasoning. In this paper, we explore different ways to augment an LLM with existing large-scale end-to-end causal models to enable causal question answering. Specifically, we compare the effectiveness of answering causal questions with two approaches that both rely on the output of a causal expert model: 1) providing the predicted causal graph and related treatment effects in the context; 2) giving the LLM access to an API that derives insights from the output of the causal model. Our experiments show that context-augmented LLMs make significantly more mistakes than data-access API-augmented LLMs, which are invariant to the size of the causal problem.
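The following is a minimal Python sketch contrasting the two augmentation strategies described in the abstract, assuming a causal expert model whose output is a causal graph plus estimated treatment effects. All names here (causal_graph, treatment_effects, build_context_prompt, query_ate) are hypothetical placeholders for illustration, not the paper's implementation.

```python
# Hypothetical output of a causal expert model: a causal graph and
# average treatment effects (ATEs). Values are illustrative only.
causal_graph = {"smoking": ["cancer"], "exercise": ["blood_pressure"]}
treatment_effects = {
    ("smoking", "cancer"): 0.32,
    ("exercise", "blood_pressure"): -0.15,
}


def build_context_prompt(question: str) -> str:
    """Strategy 1: serialise the full causal model output into the prompt
    context. The prompt grows with the size of the causal problem."""
    graph_txt = "; ".join(
        f"{cause} -> {', '.join(effects)}" for cause, effects in causal_graph.items()
    )
    ate_txt = "; ".join(
        f"ATE({t} -> {o}) = {v}" for (t, o), v in treatment_effects.items()
    )
    return (
        f"Causal graph: {graph_txt}\n"
        f"Treatment effects: {ate_txt}\n"
        f"Question: {question}"
    )


def query_ate(treatment: str, outcome: str):
    """Strategy 2: a narrow data-access API the LLM can call as a tool,
    retrieving only the relevant quantity regardless of problem size."""
    return treatment_effects.get((treatment, outcome))


if __name__ == "__main__":
    print(build_context_prompt("Does smoking cause cancer?"))
    print("API result:", query_ate("smoking", "cancer"))
```

In this sketch, the context-based strategy must fit the entire graph and all effect estimates into the prompt, while the API-based strategy lets the model fetch a single quantity on demand, which is consistent with the abstract's observation that the API-augmented approach is invariant to the size of the causal problem.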
