

Poster in Workshop: “Could it have been different?” Counterfactuals in Minds and Machines

Extending counterfactual reasoning models to capture unconstrained social explanations

Stephanie Droop · Neil Bramley


Abstract:

Human explanations are thought to be shaped by counterfactual reasoning, but formal accounts of this ability are limited to simple scenarios and fixed response options. In naturalistic or social settings, human explanations are often more creative, involving the imputation of hidden causal factors in addition to selection among established causes. Across two experiments, we extend a counterfactual account of explanation to capture how people generate free explanations for an agent’s behaviour across a set of scenarios. To do this, we had one group of participants (N=95) make predictions about scenarios that combine short biographies with potential trajectories through a gridworld, using these predictions to crowdsource a causal model of the overall scenario. A separate group of participants (N=49) then reacted to particular outcomes, providing free-text explanations for why the agent moved the way they did. Our final model captures how these free explanations depend on both the general situation and the specific outcome, and also how participants’ explanatory strategy is shaped by how surprising or incongruent the behaviour is. Consistent with past work, we find that people reason with counterfactuals that stay relatively close to what actually happened; beyond this, we model how their tendency to impute unobserved factors depends on the degree to which the explanandum is surprising.
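To make the last point concrete, here is a minimal sketch of a surprise-gated imputation rule, assuming the crowdsourced causal model supplies a predictive probability for the observed behaviour and that the tendency to impute hidden causes rises with surprisal through a logistic link. The function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a toy surprise-gated imputation rule.
# All names and parameter values are hypothetical.
import math

def surprise(p_outcome: float) -> float:
    """Shannon surprisal (in nats) of the observed behaviour under a
    crowdsourced predictive distribution."""
    return -math.log(max(p_outcome, 1e-12))

def p_impute_hidden_cause(p_outcome: float,
                          slope: float = 1.5,
                          threshold: float = 1.0) -> float:
    """Assumed probability of imputing an unobserved causal factor,
    increasing with how surprising the behaviour is (logistic link)."""
    s = surprise(p_outcome)
    return 1.0 / (1.0 + math.exp(-slope * (s - threshold)))

# Expected behaviour (high predictive probability): explanations mostly
# select among established causes.
print(p_impute_hidden_cause(0.8))   # low imputation probability
# Incongruent behaviour (low predictive probability): explanations more
# often impute hidden factors such as unstated goals or preferences.
print(p_impute_hidden_cause(0.05))  # high imputation probability
```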
