

Poster

Layerwise Change of Knowledge in Neural Networks

Xu Cheng · Lei Cheng · Zhaoran Peng · Yang Xu · Tian Han · Quanshi Zhang


Abstract:

This paper aims to explain how a deep neural network (DNN) gradually extracts new knowledge and forgets noisy features layer by layer during forward propagation. Although there is not yet a consensus on how to define the knowledge encoded by a DNN, previous studies have derived a series of mathematical evidence for taking interactions as the symbolic primitive inference patterns encoded by a DNN. We extend the definition of interactions and, for the first time, extract interactions encoded by intermediate layers. We quantify and track the interactions that newly emerge and those that are forgotten in each layer during forward propagation, which sheds new light on the learning behavior of DNNs. The layer-wise change of interactions also reveals the change in the generalization capacity and the instability of feature representations of a DNN.
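For readers unfamiliar with the interaction metric the abstract refers to, the sketch below illustrates one common formulation from prior work on interaction-based explanations, the Harsanyi interaction I(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T), where v(T) denotes the model output when only the input variables in T are kept and the rest are masked. The masking scheme, the function name `harsanyi_interactions`, and the brute-force subset enumeration are illustrative assumptions, not the paper's actual implementation.

```python
import itertools
import numpy as np


def harsanyi_interactions(v, n):
    """Compute Harsanyi interactions I(S) for all subsets S of n input variables.

    v(mask) is assumed to return a scalar model output for an input where
    mask[i] = True keeps variable i and mask[i] = False masks it out.
    I(S) = sum over T subset of S of (-1)^(|S| - |T|) * v(T).
    Brute-force enumeration: 2^n evaluations, feasible only for small n.
    """
    variables = list(range(n))

    # Cache v(T) for every subset T of the input variables.
    v_cache = {}
    for r in range(n + 1):
        for T in itertools.combinations(variables, r):
            mask = np.zeros(n, dtype=bool)
            mask[list(T)] = True
            v_cache[T] = v(mask)

    # Accumulate the alternating-sign sum defining each interaction I(S).
    interactions = {}
    for r in range(n + 1):
        for S in itertools.combinations(variables, r):
            I_S = 0.0
            for k in range(len(S) + 1):
                for T in itertools.combinations(S, k):
                    I_S += (-1) ** (len(S) - len(T)) * v_cache[T]
            interactions[S] = I_S
    return interactions
```

Under the abstract's framing, one could then compare the sets of salient interactions extracted at consecutive layers: interactions salient at layer l but not at layer l−1 would count as newly emerged, and the reverse as forgotten. The paper's exact extraction and saliency-thresholding procedure for intermediate layers is not reproduced here.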
