Poster

Position Paper: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience

Martina G. Vilas · Federico Adolfi · David Poeppel · Gemma Roig


Abstract:

Inner Interpretability is a promising emerging field tasked with uncovering the inner mechanisms of AI systems, but it currently lacks a methodological framework. Moreover, recent critiques raise issues that question its usefulness for advancing the broader goals of AI. However, it has been overlooked that these issues resemble those that have long been grappled with in another field: Cognitive Neuroscience. Here we draw the relevant connections and highlight lessons that can be transferred productively between fields. Based on these, we propose a general framework and give concrete methodological strategies for AI inner interpretability research. With this methodological framework, Inner Interpretability can fend off critiques and position itself on a productive path toward explaining AI systems mechanistically.