

Poster in Workshop: Neural Compression: From Information Theory to Applications

Lightweight Sparse Autoencoder based on Explainable Contribution

Joohong Rheey · Hyunggon Park


Abstract:

As deep learning models grow heavier, developing lightweight models with minimal performance degradation is paramount. In this paper, we propose SHAP-SAE (SHapley Additive exPlanations based Sparse AutoEncoder), an algorithm that explicitly measures the contribution of units and links and selectively activates only the important ones, yielding a lightweight sparse autoencoder. This allows us to explain how and why the sparse autoencoder is structured. We show that SHAP-SAE outperforms other algorithms, including a dense autoencoder. We also confirm that SHAP-SAE is robust to harsh sparsity constraints, showing remarkably limited performance degradation even at high sparsity levels.
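To make the idea concrete, the sketch below prunes an autoencoder's hidden units using a marginal-contribution score obtained by single-unit ablation. This is only a crude stand-in for the SHAP values the paper computes, not the authors' implementation; all names here (`TinyAE`, `unit_contributions`, `keep_ratio`) are hypothetical.

```python
# Hedged sketch: prune hidden units of a small autoencoder by an
# ablation-based contribution score (a rough proxy for the
# Shapley-value contributions used in SHAP-SAE).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAE(nn.Module):
    def __init__(self, d_in=784, d_hid=128):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hid)
        self.dec = nn.Linear(d_hid, d_in)

    def forward(self, x, mask=None):
        h = torch.relu(self.enc(x))
        if mask is not None:            # deactivate masked-out units
            h = h * mask
        return self.dec(h)

def unit_contributions(model, x):
    """Score each hidden unit by how much the reconstruction error
    rises when that unit alone is ablated -- a marginal-contribution
    proxy, not the exact SHAP computation from the paper."""
    d_hid = model.enc.out_features
    with torch.no_grad():
        base = F.mse_loss(model(x), x)
        scores = torch.empty(d_hid)
        for j in range(d_hid):
            mask = torch.ones(d_hid)
            mask[j] = 0.0
            scores[j] = F.mse_loss(model(x, mask), x) - base
    return scores

# Keep only the highest-contribution units, yielding a sparse model.
model = TinyAE()
x = torch.randn(64, 784)                # stand-in data batch
scores = unit_contributions(model, x)
keep_ratio = 0.25                       # illustrative sparsity level
k = int(keep_ratio * scores.numel())
keep = torch.zeros_like(scores)
keep[scores.topk(k).indices] = 1.0      # activation mask for pruning
recon = model(x, keep)                  # reconstruct with sparse units
```

In this sketch the mask plays the role of selective activation: units whose measured contribution is low stay off, while the rest of the network is left untouched.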
