When applying machine learning to sensitive data, one has to balance accuracy, information security, and computational complexity. Recent studies have combined homomorphic encryption with neural networks to make inferences while protecting against information leakage. However, these methods are limited in the width and depth of the networks they can use (and hence in accuracy) and exhibit high latency even for relatively simple networks. In this study we provide two solutions that address these limitations. The first solution achieves more than a 10× improvement in latency and enables inference on wider networks than prior attempts at the same level of security; the improved performance comes from novel methods for representing the data during the computation. The second solution applies transfer learning to provide private inference services using deep networks with a latency of ~0.16 seconds. We demonstrate the efficacy of our methods on several computer vision tasks.
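The second solution hinges on a simple split: the deep part of the network stays outside the encrypted domain, and only a shallow classifier is evaluated over the transferred features, which is what keeps latency low. The sketch below illustrates one plausible reading of that data flow; it is not the authors' implementation. Real homomorphic encryption (e.g. a CKKS-style scheme supporting slot-wise additions and multiplications by plaintext) is replaced by a no-op stand-in so the example runs with NumPy alone, and all names (extract_features, encrypted_linear_head, the random-projection "feature extractor") are illustrative assumptions rather than details from the paper.

# Minimal sketch of a transfer-learning private-inference flow.
# NOT the authors' implementation: encryption is a placeholder, and the
# "pre-trained network" is a fixed random projection for illustration.
import numpy as np

rng = np.random.default_rng(0)

# --- Client side ------------------------------------------------------
def extract_features(image: np.ndarray, proj: np.ndarray) -> np.ndarray:
    # Stand-in for a public pre-trained deep network run locally by the
    # client on its plaintext data; a random projection plays that role here.
    return np.maximum(proj @ image.ravel(), 0.0)

def encrypt(vec: np.ndarray) -> np.ndarray:
    # Placeholder for HE encryption: a real scheme would pack `vec` into
    # ciphertext slots that support additions and plaintext multiplications.
    return vec.copy()

def decrypt(ct: np.ndarray) -> np.ndarray:
    # Placeholder for HE decryption with the client's secret key.
    return ct

# --- Server side ------------------------------------------------------
def encrypted_linear_head(ct_features: np.ndarray,
                          weights: np.ndarray,
                          bias: np.ndarray) -> np.ndarray:
    # The server only evaluates a shallow classifier on the (notionally)
    # encrypted features; under real HE this is a slot-wise
    # multiply-accumulate, far cheaper than evaluating a deep network
    # under encryption.
    return weights @ ct_features + bias

# --- End-to-end toy run -----------------------------------------------
image = rng.random((32, 32))                 # client's private input
proj = rng.standard_normal((128, 32 * 32))   # public feature extractor
weights = rng.standard_normal((10, 128))     # server's classifier weights
bias = rng.standard_normal(10)

features = extract_features(image, proj)            # client, in the clear
ct = encrypt(features)                               # client -> server
ct_scores = encrypted_linear_head(ct, weights, bias) # server, "under encryption"
scores = decrypt(ct_scores)                          # client
print("predicted class:", int(np.argmax(scores)))

The design choice this illustrates is that only the final, low-depth computation needs HE-friendly operations, so ciphertext width and multiplicative depth stay small regardless of how deep the public feature extractor is.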
Author Information
Alon Brutzkus (Tel Aviv University)
Ran Gilad-Bachrach (Microsoft Research)
Oren Elisha (Microsoft)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Low Latency Privacy Preserving Inference »
  Wed. Jun 12th 12:00 -- 12:05 AM, Room 102
More from the Same Authors
- 2022 Poster: Efficient Learning of CNNs using Patch Based Features »
  Alon Brutzkus · Amir Globerson · Eran Malach · Alon Regev Netser · Shai Shalev-Shwartz
- 2022 Spotlight: Efficient Learning of CNNs using Patch Based Features »
  Alon Brutzkus · Amir Globerson · Eran Malach · Alon Regev Netser · Shai Shalev-Shwartz
- 2021 Poster: Towards Understanding Learning in Neural Networks with Linear Teachers »
  Roei Sarussi · Alon Brutzkus · Amir Globerson
- 2021 Spotlight: Towards Understanding Learning in Neural Networks with Linear Teachers »
  Roei Sarussi · Alon Brutzkus · Amir Globerson
- 2019 Poster: Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem »
  Alon Brutzkus · Amir Globerson
- 2019 Oral: Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem »
  Alon Brutzkus · Amir Globerson
- 2017 Poster: Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs »
  Alon Brutzkus · Amir Globerson
- 2017 Talk: Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs »
  Alon Brutzkus · Amir Globerson