Poster

Interpreting and Disentangling Feature Components of Various Complexity from DNNs

Jie Ren · Mingjie Li · Zexu Liu · Quanshi Zhang

Keywords: [ Deep Learning Theory ]


Abstract:

This paper aims to define, visualize, and analyze the complexity of the features learned by a DNN. We propose a generic definition of feature complexity. Given the feature of a certain layer of the DNN, our method decomposes and visualizes feature components of different complexity orders within that feature. This decomposition enables us to evaluate the reliability, the effectiveness, and the significance of over-fitting of these feature components. Furthermore, such analysis helps improve the performance of DNNs. As a generic method, the feature complexity also provides new insights into existing deep-learning techniques, such as network compression and knowledge distillation.
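The abstract does not spell out the decomposition procedure, so the following is only a minimal sketch of one plausible way to extract feature components of increasing complexity order: fit auxiliary networks of growing depth to regress the target feature, and take the gain from each additional nonlinear layer as the next-order component. The `Disentangler` class, the `decompose_feature` helper, the MLP architecture, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    """Hypothetical auxiliary net with `depth` nonlinear layers that tries to
    reconstruct a target feature from the input (architecture is an assumption)."""
    def __init__(self, in_dim: int, feat_dim: int, depth: int, width: int = 256):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, feat_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def decompose_feature(x, target_feat, max_order=4, epochs=200):
    """Fit disentanglers of depth 1..max_order to the target feature.

    The c-th order component is taken as the difference between the
    reconstructions of the depth-c and depth-(c-1) disentanglers; whatever the
    deepest disentangler cannot reconstruct is returned as a residual.
    """
    in_dim, feat_dim = x.shape[1], target_feat.shape[1]
    recons = [torch.zeros_like(target_feat)]          # depth-0 "reconstruction"
    for depth in range(1, max_order + 1):
        model = Disentangler(in_dim, feat_dim, depth)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):                       # regress the target feature
            opt.zero_grad()
            loss = ((model(x) - target_feat) ** 2).mean()
            loss.backward()
            opt.step()
        with torch.no_grad():
            recons.append(model(x))
    components = [recons[c] - recons[c - 1] for c in range(1, max_order + 1)]
    residual = target_feat - recons[-1]               # part beyond max_order complexity
    return components, residual
```

Under this reading, each component can then be inspected separately, e.g. by visualizing it or by measuring how much of the DNN's accuracy it accounts for, which is the kind of reliability and over-fitting analysis the abstract refers to.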