Improving ML attacks on LWE with data repetition and stepwise regression
Alberto Alfarano ⋅ Eshika Saxena ⋅ Emily Wenger ⋅ Francois Charton ⋅ Kristin Lauter
Abstract
Machine learning (ML) attacks on Learning with Errors (LWE) with binary or small secrets succeed only in settings with very simple secrets. For example, they can recover secrets with up to three non-zero bits when models are trained on unreduced LWE data, and three non-zero bits in the ``cruel region'' [9] when BKZ pre-processing is applied. We show that larger training sets and the use of repeated examples in the training data allow the recovery of denser secrets. We empirically observe a power-law relationship between the number of model-based attempts needed to recover a secret, the dataset size, and the number of repeated examples. We introduce a stepwise regression technique to recover the ``cool bits'' of the secret. Overall, these techniques allow the recovery of denser binary secrets: up to Hamming weight $70$ (and $8$ cruel bits) for dimension $256$, $\log_2 q = 20$, and Hamming weight $75$ (and $7$ cruel bits) for dimension $512$, $\log_2 q = 41$ (vs. Hamming weights $33$ and $63$, and $3$ cruel bits, in previous work). We also demonstrate our methods' effectiveness on denser ternary secrets, showing a substantial improvement over prior work.
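To make the stepwise-regression idea concrete, here is a minimal, illustrative sketch of generic forward stepwise selection for recovering the support of a sparse binary secret from noisy linear samples. This is not the paper's algorithm: the toy problem below omits the modular reduction of real LWE, and every parameter (dimension, sample count, noise scale, the assumption that the Hamming weight is known) is an assumption made only for illustration.

```python
import numpy as np

# Illustrative forward stepwise selection, NOT the paper's exact method:
# greedily add the coordinate most correlated with the current residual,
# then refit least squares on the selected coordinates.
rng = np.random.default_rng(0)
n, m, h = 32, 256, 5                 # assumed: secret dim, samples, weight
secret = np.zeros(n)
secret[rng.choice(n, h, replace=False)] = 1.0  # sparse binary secret
A = rng.standard_normal((m, n))
b = A @ secret + 0.01 * rng.standard_normal(m) # small noise; no mod q here

selected = []
residual = b.copy()
for _ in range(h):                   # assume Hamming weight h is known
    # score each unselected column by |correlation| with the residual
    scores = [abs(A[:, j] @ residual) if j not in selected else -1.0
              for j in range(n)]
    selected.append(int(np.argmax(scores)))
    # refit least squares on all selected columns, update the residual
    coef, *_ = np.linalg.lstsq(A[:, selected], b, rcond=None)
    residual = b - A[:, selected] @ coef

print(sorted(selected))              # candidate support of the secret
```

In this easy regime (many samples, tiny noise, no modular wraparound) the greedy procedure recovers the true support; the paper's setting is far harder, since LWE samples are reduced modulo $q$.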