Poster in Workshop: Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning (ITR3)

Unsupervised Information Obfuscation for Split Inference of Neural Networks

Mohammad Samragh · Hossein Hosseini · Aleksei Triastcyn · Kambiz Azarian · Joseph B Soriaga · Farinaz Koushanfar

Keywords: Non-convex Optimization


Abstract:

Splitting network computations between the edge device and a server enables low edge-compute inference of neural networks, but it might expose sensitive information about the test query to the server. To address this problem, existing techniques train the model to minimize information leakage for a given set of sensitive attributes. In practice, however, test queries might contain attributes that are not foreseen during training. We instead propose an unsupervised obfuscation method that discards the information irrelevant to the main task. We formulate the problem in an information-theoretic framework and derive an analytical solution for a given distortion of the model output. In our method, the edge device runs the model up to a split layer chosen according to its computational capacity. It then obfuscates the resulting feature vector by removing both the components in the null space of the model's next layer and the low-energy components of the remaining signal. Experimental results show that our method outperforms existing techniques in removing information about irrelevant attributes, reduces the communication cost, maintains accuracy, and incurs only a small computational overhead.
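The abstract describes the obfuscation step concretely enough to sketch: project the split-layer features onto the row space of the next layer's weight matrix (components in the null space cannot affect that layer's output and can be discarded), then drop the low-energy components of what remains. Below is a minimal NumPy sketch under the assumption of a fully connected next layer; the function name `obfuscate_features`, the weight matrix `W`, and the `energy_threshold` parameter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def obfuscate_features(z, W, energy_threshold=0.95):
    """Sketch of the obfuscation step from the abstract (hypothetical API).

    z : (d,) feature vector produced at the split layer
    W : (m, d) weight matrix of the next (assumed fully connected) layer
    energy_threshold : fraction of signal energy to retain (assumed parameter)
    """
    # Rows of Vt span the row space of W; any component of z orthogonal to
    # them lies in W's null space and has no effect on the output W @ z.
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    coeffs = Vt @ z  # coordinates of z in the row space of W

    # Keep the smallest set of components covering the requested fraction
    # of total energy; zero out the remaining low-energy components.
    energy = coeffs ** 2
    order = np.argsort(energy)[::-1]
    cumulative = np.cumsum(energy[order]) / energy.sum()
    keep = order[: int(np.searchsorted(cumulative, energy_threshold)) + 1]
    pruned = np.zeros_like(coeffs)
    pruned[keep] = coeffs[keep]

    # Map back to feature space: null-space and low-energy directions are
    # removed, while the signal relevant to the next layer is preserved.
    return Vt.T @ pruned
```

For a convolutional split layer, the same idea would apply to the linearized convolution operator; the paper's exact procedure may differ in how the energy cutoff is chosen for a given output distortion.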
