

Neuron Dependency Graphs: A Causal Abstraction of Neural Networks

Yaojie Hu · Jin Tian

Hall E #905

Keywords: [ MISC: Representation Learning ] [ MISC: Causality ] [ PM: Graphical Models ] [ T: Deep Learning ] [ SA: Accountability, Transparency and Interpretability ]


We discover that neural networks exhibit approximate logical dependencies among neurons, and we introduce Neuron Dependency Graphs (NDG) that extract and present them as directed graphs. In an NDG, each node corresponds to the boolean activation value of a neuron, and each edge models an approximate logical implication from one node to another. We show that the logical dependencies extracted from the training dataset generalize well to the test set. In addition to providing symbolic explanations to the neural network's internal structure, NDGs can represent a Structural Causal Model. We empirically show that an NDG is a causal abstraction of the corresponding neural network that "unfolds" the same way under causal interventions using the theory by Geiger et al. (2021). Code is available at
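The NDG construction described above — binarizing each neuron's activation and adding a directed edge when one neuron's activation approximately implies another's — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold of 0 for binarization, the confidence cutoff `tau`, and the function name `extract_ndg` are all assumptions for the sketch.

```python
import numpy as np

def extract_ndg(activations, threshold=0.0, tau=0.95):
    """Sketch of Neuron Dependency Graph extraction.

    activations: (num_samples, num_neurons) array of neuron activation values.
    A neuron's boolean value is taken to be (activation > threshold).
    An edge i -> j models the approximate implication "i active => j active":
    it is added when neuron j is active on at least a fraction `tau` of the
    samples where neuron i is active. (Threshold and tau are assumptions.)
    """
    bools = activations > threshold          # (N, D) boolean activation matrix
    _, n_neurons = bools.shape
    edges = []
    for i in range(n_neurons):
        support = bools[:, i]                # samples where neuron i is active
        if not support.any():
            continue                         # implication vacuous; skip
        for j in range(n_neurons):
            if i == j:
                continue
            # empirical confidence of the implication i => j
            conf = bools[support, j].mean()
            if conf >= tau:
                edges.append((i, j))
    return edges
```

On held-out data, the same edge set can be re-scored to check how well the extracted implications generalize, mirroring the train/test evaluation described in the abstract.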
