Poster

Sanity Simulations for Saliency Methods

Joon Kim · Gregory Plumb · Ameet Talwalkar

Hall E #903

Keywords: [ SA: Accountability, Transparency and Interpretability ]


Abstract:

Saliency methods are a popular class of feature attribution explanation methods that aim to capture a model's predictive reasoning by identifying "important" pixels in an input image. However, the development and adoption of these methods are hindered by the lack of access to ground-truth model reasoning, which prevents accurate evaluation. In this work, we design a synthetic benchmarking framework, SMERF, that allows us to perform ground-truth-based evaluation while controlling the complexity of the model's reasoning. Experimentally, SMERF reveals significant limitations in existing saliency methods and, as a result, represents a useful tool for the development of new saliency methods.
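
The abstract describes ground-truth-based evaluation: because the benchmark is synthetic, the pixels the model actually relies on are known, so a saliency map can be scored directly against them. The sketch below illustrates that idea only; it is not SMERF's actual API. The bounding-box ground truth, the function names, and the top-k IoU metric are assumptions made for illustration.

```python
import numpy as np

def ground_truth_mask(shape, box):
    """Binary mask over the pixels the synthetic model is known to rely on.

    `box` is (row0, row1, col0, col1) -- an assumed bounding-box encoding
    of the ground-truth reasoning; the benchmark's real encoding may differ.
    """
    mask = np.zeros(shape, dtype=bool)
    r0, r1, c0, c1 = box
    mask[r0:r1, c0:c1] = True
    return mask

def topk_iou(saliency, gt_mask, k=None):
    """Intersection-over-union between the k most salient pixels and the
    ground-truth region; k defaults to the size of that region."""
    if k is None:
        k = int(gt_mask.sum())
    flat = saliency.ravel()
    topk = np.zeros(flat.size, dtype=bool)
    topk[np.argpartition(flat, -k)[-k:]] = True   # indices of the k largest values
    topk = topk.reshape(saliency.shape)
    inter = np.logical_and(topk, gt_mask).sum()
    union = np.logical_or(topk, gt_mask).sum()
    return inter / union

# Example: a 32x32 synthetic input whose label depends only on an 8x8 patch.
rng = np.random.default_rng(0)
gt = ground_truth_mask((32, 32), (4, 12, 4, 12))
saliency = rng.random((32, 32))   # stand-in for a saliency method's output
saliency[gt] += 0.5               # a method that partially locates the region
print(f"top-k IoU vs. ground truth: {topk_iou(saliency, gt):.3f}")
```

A score of 1.0 would mean the method's most salient pixels exactly match the region the model provably uses; scores near chance expose the kind of failure the abstract attributes to existing methods.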
