Identifying and Exploiting Pseudo-cognitive Processes in Large Language Models
ICML Workshop on Large Language Models and Cognition
Abstract
Large language models display remarkable abilities, matching or exceeding humans on many tasks typically associated with higher-order cognition. Despite these successes, they also fail dramatically on certain seemingly simple problems. In this talk, I'll discuss similarities and differences in how LLMs and humans perform certain cognitive tasks. First, I will present a method for discovering the sparse computational subgraphs that LLMs use to express knowledge, a process analogous to human memory recall. Then, I will describe how LLMs fall short of human cognitive processes for reasoning and learning, and present RECKONING, a new algorithm motivated by cognitive development theory that dynamically learns to encode new knowledge at inference time for robust reasoning. I'll conclude with reflections on developing reasoning algorithms inspired by human cognition.
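To make the inference-time knowledge-encoding idea concrete, below is a minimal sketch in the spirit of RECKONING's inner loop: take a few gradient steps on the supporting facts so they are absorbed into the model's parameters, then answer the question with the updated weights. This is an illustrative assumption-laden sketch, not the paper's implementation: the model name (`gpt2`), step count, learning rate, and the `encode_then_answer` helper are all placeholders, and the outer meta-training loop that teaches the model to benefit from these updates is omitted.

```python
# Sketch of inference-time knowledge encoding (inner loop only).
# All hyperparameters and the model choice are illustrative assumptions.
import copy

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model, not the one used in the paper
tok = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForCausalLM.from_pretrained(model_name)


def encode_then_answer(facts: str, question: str,
                       steps: int = 3, lr: float = 1e-4) -> str:
    # Work on a throwaway copy so every query starts from the base weights.
    model = copy.deepcopy(base)
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    batch = tok(facts, return_tensors="pt")
    for _ in range(steps):
        # Inner loop: a few language-modeling gradient steps on the facts,
        # encoding them into the parameters rather than the prompt.
        loss = model(**batch, labels=batch["input_ids"]).loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Answer with the updated parameters; the facts are no longer in context.
    model.eval()
    with torch.no_grad():
        prompt = tok(question, return_tensors="pt")
        out = model.generate(**prompt, max_new_tokens=20)
    return tok.decode(out[0][prompt["input_ids"].shape[1]:],
                      skip_special_tokens=True)


print(encode_then_answer("Alice is Bob's mother. Bob is Carol's father.",
                         "Question: What is Alice to Carol? Answer:"))
```

The design point the sketch illustrates is the contrast with standard in-context reasoning: here the knowledge is moved from the context window into the weights before answering, which is what allows RECKONING's bi-level training to make the reasoning robust to distracting or numerous facts.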