Poster in Workshop: Neural Compression: From Information Theory to Applications

On the Maximum Mutual Information Capacity of Neural Architectures

Brandon J Foggo · Nanpeng Yu


Abstract: We derive a closed-form expression for the maximum mutual information - the largest value of $I(X;Z)$ obtainable via training - for a broad family of neural network architectures. This quantity is essential to several branches of machine learning theory and practice. Quantitatively, we show that the maximum mutual information values for these families all stem from generalizations of a single catch-all formula. Qualitatively, we show that the maximum mutual information of an architecture is most strongly influenced by the width of its smallest layer - the ``information bottleneck'' in a different sense of the phrase - and by any statistical invariances captured by the architecture.
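As a rough illustration of the qualitative claim about the smallest layer (the paper's closed-form expression is not reproduced here), consider the standard counting bound: if the representation $Z$ passes through a layer of width $d_{\min}$ whose activations are quantized to $b$ bits each, then

$$ I(X;Z) \;\le\; H(Z) \;\le\; d_{\min}\, b \ \text{bits}, $$

so no amount of training can push $I(X;Z)$ past the information capacity of the narrowest layer.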
