

Poster in Workshop: AI for Math Workshop

Metacognitive Capabilities of LLMs: An Exploration in Mathematical Problem Solving

Aniket Didolkar · Anirudh Goyal · Rosemary Nan Ke · Siyuan Guo · Michal Valko · Timothy Lillicrap · Danilo J. Rezende · Yoshua Bengio · Michael Mozer · Sanjeev Arora


Abstract:

Metacognitive knowledge refers to humans' intuitive knowledge of their own thinking and reasoning processes. Today's best LLMs clearly possess some reasoning processes. This paper gives evidence that they also have metacognitive knowledge, including the ability to name the skills and procedures to apply for a given task. We explore this primarily in the context of math reasoning, developing a prompt-guided interaction procedure to get a powerful LLM to assign sensible skill labels to math questions, followed by having it perform semantic clustering to obtain coarser families of skill labels. These coarse skill labels look interpretable to humans. To validate that these skill labels are meaningful and relevant to the LLM's reasoning processes, we perform the following experiments. (a) We ask GPT-4 to assign skill labels to training questions in the math datasets GSM8K and MATH. (b) When using an LLM to solve the test questions, we present it with the full list of skill labels and ask it to identify the skill needed. It is then presented with randomly selected exemplars of solved questions associated with that skill label. This improves accuracy on GSM8K and MATH for several strong LLMs, including code-assisted models. The methodology presented is domain-agnostic, even though this paper applies it to math problems.
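A minimal sketch of the skill-based in-context prompting described in (b), assuming a hypothetical query_llm helper and a pre-computed mapping from coarse skill labels to labeled training exemplars; the illustrative data, prompts, and function names below are assumptions, not the paper's exact procedure.

    import random

    # Hypothetical helper: sends a prompt to an LLM and returns its text reply.
    # Swap in any real client (OpenAI, a local model, etc.) here.
    def query_llm(prompt: str) -> str:
        raise NotImplementedError("plug in an LLM client")

    # Offline stage (illustrative data): training questions grouped under the
    # coarse skill labels obtained from LLM labeling plus semantic clustering.
    skill_to_exemplars = {
        "ratio_and_proportion": [
            ("Q: A recipe uses 2 cups of flour for 3 cakes. How much flour for 9 cakes?",
             "A: 9 / 3 = 3 batches, so 2 * 3 = 6 cups."),
        ],
        "linear_equations": [
            ("Q: If 3x + 5 = 20, what is x?",
             "A: 3x = 15, so x = 5."),
        ],
    }

    def solve_with_skill_exemplars(test_question: str, k: int = 2) -> str:
        # Step A: present the full list of skill labels and ask which one applies.
        skill_list = ", ".join(skill_to_exemplars)
        skill = query_llm(
            f"Available skills: {skill_list}\n"
            f"Name the single skill needed to solve this question:\n{test_question}"
        ).strip()

        # Step B: pick randomly selected solved exemplars tagged with that skill.
        pool = skill_to_exemplars.get(skill, [])
        exemplars = random.sample(pool, min(k, len(pool)))
        context = "\n\n".join(f"{q}\n{a}" for q, a in exemplars)

        # Step C: solve the test question with the skill-matched exemplars in context.
        return query_llm(f"{context}\n\nQ: {test_question}\nA:")

The key design choice is that exemplars are matched to the test question by skill label rather than by surface similarity, which is what the abstract credits for the accuracy gains on GSM8K and MATH.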
