Poster
Shallow-Deep Networks: Understanding and Mitigating Network Overthinking
Yigitcan Kaya · Sanghyun Hong · Tudor Dumitras

Tue Jun 11 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #24

We characterize a prevalent weakness of deep neural networks (DNNs), 'overthinking', which occurs when a DNN can reach correct predictions before its final layer. Overthinking is computationally wasteful, and it can also be destructive when, by the final layer, a correct prediction changes into a misclassification. Understanding overthinking requires studying how each prediction evolves during a DNN's forward pass, a process that is conventionally opaque. For prediction transparency, we propose the Shallow-Deep Network (SDN), a generic modification to off-the-shelf DNNs that introduces internal classifiers. We apply SDN to four modern architectures, trained on three image classification tasks, to characterize the overthinking problem. We show that SDNs can mitigate the wasteful effect of overthinking with confidence-based early exits, which reduce the average inference cost by more than 50% while preserving accuracy. We also find that the destructive effect occurs for 50% of misclassifications on natural inputs, and that it can be induced, adversarially, with a recent backdooring attack. To mitigate this effect, we propose a new confusion metric to quantify the internal disagreements that are likely to lead to misclassifications.
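The two mechanisms in the abstract, confidence-based early exits and a disagreement-based confusion score, lend themselves to a short sketch. Below is a minimal, illustrative PyTorch version: internal classifiers attached to backbone stages, a forward pass that exits at the first confident head, and a simple disagreement score. The class names (InternalClassifier, ShallowDeepNet), the threshold value, and the particular disagreement definition are assumptions for illustration, not the paper's exact implementation.

```python
# Minimal sketch of confidence-based early exiting in the spirit of
# Shallow-Deep Networks (assumed structure, not the authors' released code).

from typing import List, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F


class InternalClassifier(nn.Module):
    """Lightweight head mapping an intermediate feature map to class logits."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapse spatial dimensions
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.pool(x).flatten(1))


class ShallowDeepNet(nn.Module):
    """Wraps backbone stages of an off-the-shelf DNN, one classifier per stage."""

    def __init__(self, stages: nn.ModuleList, heads: nn.ModuleList,
                 conf_threshold: float = 0.9):
        super().__init__()
        assert len(stages) == len(heads) and len(stages) > 0
        self.stages = stages
        self.heads = heads
        self.conf_threshold = conf_threshold

    def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, int]:
        """Returns (logits, exit_index); stops at the first confident head.
        For clarity, this sketch assumes a batch of size 1."""
        for i, (stage, head) in enumerate(zip(self.stages, self.heads)):
            x = stage(x)
            logits = head(x)
            confidence = F.softmax(logits, dim=1).max().item()
            if confidence >= self.conf_threshold:
                return logits, i          # early exit: skip remaining layers
        return logits, len(self.stages) - 1  # fell through to the final layer


def disagreement_score(internal_logits: List[torch.Tensor]) -> float:
    """Fraction of internal classifiers whose prediction differs from the
    final one; one plausible way to quantify internal disagreement, not
    necessarily the paper's exact confusion metric."""
    preds = [logits.argmax(dim=1) for logits in internal_logits]
    final = preds[-1]
    return sum((p != final).float().mean().item()
               for p in preds[:-1]) / (len(preds) - 1)
```

In a setup like this, sweeping conf_threshold trades accuracy against average inference cost, and a high disagreement score flags inputs whose prediction flipped on the way to the final layer.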

Author Information

Yigitcan Kaya (University of Maryland, College Park)

I am a fourth-year Ph.D. student in Computer Science at the University of Maryland, College Park, advised by Prof. Tudor Dumitras. My broad research focus is adversarial machine learning. Specifically, I develop methods that distill the hidden information within deep neural networks into intuitive, often security-related metrics, such as overthinking. I have also explored practical threat models against ML systems, such as stealthy poisoning attacks and hardware-based attacks, and I recently started working on ML privacy, including differential privacy and membership inference attacks.

Sanghyun Hong (University of Maryland, College Park; shhong@cs.umd.edu)
Tudor Dumitras (University of Maryland)
