Poster in Workshop: Next Generation of AI Safety
Manipulating Feature Visualizations with Gradient Slingshots
Dilyara Bareeva · Marina Höhne · Alexander Warnecke · Lukas Pirch · Klaus-Robert Müller · Konrad Rieck · Kirill Bykov
Keywords: [ Machine Learning ] [ Computer Vision ] [ mechanistic interpretability ] [ Explainable AI ]
Deep Neural Networks (DNNs) are capable of learning complex and versatile representations; however, the semantic nature of the learned concepts remains unknown. A common method for explaining the concepts learned by DNNs is Activation Maximization (AM), which synthesizes an input signal that maximally activates a particular neuron in the network. In this paper, we investigate the vulnerability of this approach to adversarial model manipulations and introduce a novel method for manipulating feature visualizations without significantly impacting the model's decision-making process. The key distinction of our proposed approach is that it does not alter the model architecture. We evaluate the effectiveness of our method on several neural network models and demonstrate that it can hide the functionality of arbitrarily chosen neurons during model auditing by masking their original explanations with chosen target explanations.
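For readers unfamiliar with AM, the following is a minimal PyTorch sketch of the vanilla procedure the abstract refers to: gradient ascent on a synthetic input to maximize one unit's activation. This is a generic illustration, not the paper's implementation; the function name, hyperparameters, and input shape are placeholders, and practical feature-visualization pipelines typically add regularization (jitter, total-variation penalties, frequency-space parameterizations) omitted here.

```python
import torch
import torch.nn as nn

def activation_maximization(model: nn.Module, layer: nn.Module, unit: int,
                            steps: int = 256, lr: float = 0.05,
                            input_shape=(1, 3, 224, 224)) -> torch.Tensor:
    """Vanilla AM: optimize an input so that `unit` in `layer` fires maximally."""
    act = {}
    # Forward hook captures the layer's output on each forward pass.
    handle = layer.register_forward_hook(lambda m, i, o: act.update(value=o))

    x = torch.randn(input_shape, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([x], lr=lr)

    model.eval()
    for _ in range(steps):
        optimizer.zero_grad()
        model(x)
        # Negative mean activation of the chosen unit (channel), averaged
        # over spatial positions; minimizing it performs gradient ascent.
        loss = -act["value"][:, unit].mean()
        loss.backward()
        optimizer.step()

    handle.remove()
    return x.detach()
```

The returned tensor is the feature visualization: the synthetic input an auditor would inspect to infer what the neuron encodes.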
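The abstract states only that the manipulation fine-tunes the model (the architecture is unchanged) while preserving its decision-making; the Gradient Slingshots objective itself is not spelled out here. Purely to illustrate that general recipe, below is a hypothetical dual-objective fine-tuning loop: a task loss preserves predictions while an auxiliary term raises the chosen unit's activation on an attacker-chosen target input, steering AM toward the target explanation. The function name, the `lam` weighting, and the auxiliary loss are assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def finetune_manipulated(model: nn.Module, layer: nn.Module, unit: int,
                         loader, x_target: torch.Tensor,
                         lam: float = 1.0, lr: float = 1e-4, epochs: int = 1):
    """Hypothetical sketch: fine-tune weights (not architecture) so the task
    loss keeps behavior intact while AM is drawn toward x_target."""
    act = {}
    handle = layer.register_forward_hook(lambda m, i, o: act.update(value=o))
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    model.train()
    for _ in range(epochs):
        for batch, labels in loader:
            opt.zero_grad()
            # Task loss on ordinary training data: preserves decision-making.
            task_loss = F.cross_entropy(model(batch), labels)
            # Auxiliary loss: make the unit fire strongly on the chosen
            # target input, masking its original explanation under AM.
            model(x_target)
            manip_loss = -act["value"][:, unit].mean()
            (task_loss + lam * manip_loss).backward()
            opt.step()

    handle.remove()
    return model
```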