

Talk in Affinity Workshop: Women in Machine Learning (WiML) Un-Workshop

Invited Talk #3 - Characterizing the Generalization Trade-offs Incurred By Compression

Sara Hooker


Abstract:

To date, the discussion around the relative merits of different compression methods has centered on the trade-off between the level of compression and top-line metrics such as top-1 and top-5 accuracy. Along this dimension, compression techniques such as pruning and quantization are remarkably successful: it is possible to prune or heavily quantize with negligible decreases in test-set accuracy. However, top-line metrics obscure critical differences in generalization between compressed and non-compressed networks. In this talk, we will go beyond test-set accuracy and discuss some of my recent work measuring the trade-offs between compression, robustness, and algorithmic bias. Characterizing these trade-offs provides insight into how capacity is used in deep neural networks — the majority of parameters are used to represent a small fraction of the training set. Formal auditing tools like Compression Identified Exemplars (CIE) also catalyze progress in training models that mitigate some of the trade-offs incurred by compression.
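
To make the auditing idea concrete, below is a minimal sketch (not the speaker's actual tooling) of how exemplars on which compressed and non-compressed model populations disagree could be flagged. It assumes that top-1 predictions from each population on a shared evaluation set are available as integer label arrays of shape (n_models, n_examples); the function names and array layout are illustrative.

```python
import numpy as np


def modal_prediction(preds: np.ndarray) -> np.ndarray:
    """Return the most frequent predicted label per example.

    preds: integer class labels of shape (n_models, n_examples).
    """
    n_classes = int(preds.max()) + 1
    # Count votes per class for each example (shape: n_classes x n_examples),
    # then take the class with the most votes.
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)


def compression_identified_exemplars(baseline_preds, compressed_preds):
    """Indices of examples where the modal prediction of the compressed-model
    population differs from that of the non-compressed population."""
    baseline_modal = modal_prediction(np.asarray(baseline_preds))
    compressed_modal = modal_prediction(np.asarray(compressed_preds))
    return np.flatnonzero(baseline_modal != compressed_modal)
```

Because both populations can match on aggregate accuracy while disagreeing on exactly these examples, flagging them separately is what surfaces the generalization differences that top-line metrics hide.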