

Poster
in
Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Robust Deep Learning via Layerwise Tilted Exponentials

Bhagyashree Puranik · Ahmad Beirami · Yao Qin · Upamanyu Madhow

Keywords: [ signaling in Gaussian noise ] [ common corruptions ] [ layer-wise training cost ] [ Deep Learning ] [ communication theory ] [ out-of-distribution robustness ]


Abstract:

State-of-the-art techniques for enhancing the robustness of deep networks mostly rely on empirical risk minimization. In this paper, we propose a complementary approach aimed at enhancing the signal-to-noise ratio at intermediate network layers, loosely motivated by the classical communication-theoretic model of signaling in a noisy channel. We seek to learn neuronal weights that are matched to the layer inputs by supplementing end-to-end costs with a tilted exponential (TEXP) objective function that depends on the activations at the layer outputs. We show that TEXP learning can be interpreted as maximum likelihood estimation of matched filters under a Gaussian model for data noise. TEXP inference is accomplished by replacing batch norm with a tilted softmax enforcing competition across neurons, which can be interpreted as computing posterior probabilities for the signaling hypotheses represented by each neuron. We show, through experiments on standard image datasets, that TEXP learning and inference enhance robustness against noise, other common corruptions, and mild adversarial perturbations, without requiring data augmentation. Further gains in robustness against this array of distortions can be obtained by appropriately combining TEXP with adversarial training.
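To make the idea concrete, below is a minimal PyTorch sketch of what a tilted-softmax layer and a layerwise TEXP-style regularizer might look like. The names (TiltedSoftmaxLayer, texp_objective), the tilt value, and the exact log-mean-exp form of the objective are illustrative assumptions based on the abstract, not the paper's actual implementation.

    # Hypothetical sketch; the paper's exact objective and layer may differ.
    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TiltedSoftmaxLayer(nn.Module):
        """Linear layer followed by a tilted softmax over neurons.

        The tilted softmax (replacing batch norm) enforces competition across
        neurons; softmax(t * a) can be read as posterior probabilities over the
        per-neuron signaling hypotheses, as described in the abstract.
        """
        def __init__(self, in_features, out_features, tilt=5.0):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features)
            self.tilt = tilt  # tilt parameter (assumed name/value)

        def forward(self, x):
            a = self.linear(x)                       # layer pre-activations
            return F.softmax(self.tilt * a, dim=-1)  # competition across neurons

    def texp_objective(activations, tilt=5.0):
        """Assumed TEXP surrogate: (1/t) * log mean_i exp(t * a_i), averaged
        over the batch. Maximizing it rewards weights matched to the inputs by
        emphasizing the strongest activation in each layer."""
        n = activations.shape[-1]
        per_example = torch.logsumexp(tilt * activations, dim=-1) - math.log(n)
        return (per_example / tilt).mean()

In training, such a term would supplement the end-to-end loss, e.g. total_loss = ce_loss - lam * texp_objective(layer_activations), where lam is a hypothetical weighting coefficient.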
