

Spotlight in Workshop: ICML workshop on Machine Learning for Cybersecurity (ICML-ML4Cyber)

Exploiting and Defending Against the Approximate Linearity of Apple’s NeuralHash

Kevin Meng · Jagdeep S Bhatia


Abstract: Perceptual hashes map images with identical semantic content to the same $n$-bit hash value, while mapping semantically different images to different hashes. These algorithms have important applications in cybersecurity, such as copyright infringement detection, content fingerprinting, and surveillance. Apple's NeuralHash is one such system that aims to detect the presence of illegal content on users' devices without compromising consumer privacy. We make the surprising discovery that NeuralHash is approximately linear, which inspires the development of novel black-box attacks that can (i) evade detection of "illegal" images, (ii) generate near-collisions, and (iii) leak information about hashed images, all without access to model parameters. These vulnerabilities pose serious threats to NeuralHash's security goals; to address them, we propose a simple fix using classical cryptographic standards.
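To make the approximate-linearity observation concrete, here is a minimal sketch in pure NumPy. Everything in it is a hypothetical stand-in, not Apple's model: `W` plays the role of the embedding network, and the random vectors play the role of images. The point it illustrates is the lever behind the attacks: if the embedding behaves near-linearly, blending a flagged image toward a benign one moves the embedding smoothly, so hash bits flip gradually until the hash no longer matches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for NeuralHash's embedding network (an assumption
# for illustration; the real model is a neural network with private
# weights): a random linear map plus a small nonlinearity, so the map is
# only *approximately* linear.
W = rng.normal(size=(96, 64))  # 96-bit hash from a 64-dim "image" vector

def embedding(x):
    z = W @ x
    return z + 0.1 * np.sin(z)  # mildly nonlinear, approximately linear

def neural_hash(x):
    # NeuralHash-style binarization: keep the sign of each coordinate.
    return (embedding(x) > 0).astype(int)

def hamming(a, b):
    return int(np.sum(a != b))

x_flagged = rng.normal(size=64)  # stand-in for a flagged image
x_benign = rng.normal(size=64)   # stand-in for an innocuous image

# Interpolating toward the benign image flips hash bits gradually,
# a black-box evasion; run the interpolation toward a target image
# instead and the same mechanism yields near-collisions.
h0 = neural_hash(x_flagged)
for alpha in (1.0, 0.9, 0.7, 0.5, 0.3):
    blend = alpha * x_flagged + (1 - alpha) * x_benign
    print(f"alpha={alpha:.1f}  bits flipped: {hamming(neural_hash(blend), h0)}")
```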
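And a hedged sketch of the kind of classical-cryptography hardening the abstract alludes to; the abstract does not specify the construction, so the HMAC choice and the key name here are assumptions, not the paper's exact fix. Keying the perceptual hash output with HMAC-SHA256 preserves exact-match comparisons while hiding the linear structure the attacks above exploit.

```python
import hmac
import hashlib

# Placeholder secret, an assumption for illustration; in practice the
# key would live server-side, never on the client device.
SECRET_KEY = b"hypothetical-server-side-key"

def protect(perceptual_hash: bytes) -> bytes:
    # Keyed cryptographic hash over the perceptual hash output.
    return hmac.new(SECRET_KEY, perceptual_hash, hashlib.sha256).digest()

h = bytes([0b10110010] * 12)      # a 96-bit hash packed into 12 bytes
assert protect(h) == protect(h)   # identical hashes still match exactly
# Flipping any bits scrambles the output (equality fails except with
# negligible probability), so leaked values reveal nothing exploitable.
assert protect(h) != protect(h[:-1] + b"\x00")
```

Note that a keyed construction like this supports only exact-match comparison; in a scheme that looks hashes up by exact match against a database, detection is unaffected, while any fuzzy matching would need a different construction.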
