Abstract:

State-of-the-art techniques for enhancing the robustness of deep networks mostly rely on end-to-end training with suitable data augmentation. In this paper, we propose a complementary approach aimed at enhancing the signal-to-noise ratio at intermediate network layers, loosely motivated by the classical communication-theoretic model of signaling in Gaussian noise. We seek to learn neuronal weights that are matched to the layer inputs by supplementing end-to-end costs with a tilted exponential (TEXP) objective function that depends on the activations at the layer outputs. We show that TEXP learning can be interpreted as maximum likelihood estimation of matched filters under a Gaussian model for data noise. TEXP inference is accomplished by replacing batch norm with a tilted softmax that enforces competition across neurons, which can be interpreted as computing posterior probabilities for the signaling hypotheses represented by each neuron. We show, through experiments on standard image datasets, that TEXP learning and inference enhance robustness against noise, other common corruptions, and mild adversarial perturbations, without requiring data augmentation. Further gains in robustness against this array of distortions can be obtained by appropriately combining TEXP with adversarial training.
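To make the two ingredients concrete, the following is a minimal PyTorch sketch, not the authors' reference implementation: the layer sizes, the tilt parameters t_train and t_inf, and the loss weight 0.1 are hypothetical choices, and the tilted softmax is shown for a fully connected layer for brevity.

```python
# Illustrative sketch of TEXP learning and inference under the assumptions
# stated above; all hyperparameter values here are hypothetical.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class TEXPLayer(nn.Module):
    """Linear layer with unit-norm rows (matched-filter templates) whose
    outputs are normalized by a tilted softmax instead of batch norm."""

    def __init__(self, in_features: int, out_features: int, t_inf: float = 5.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.t_inf = t_inf  # inference tilt (assumed hyperparameter)

    def forward(self, x: torch.Tensor):
        w = F.normalize(self.weight, dim=1)  # unit-norm templates
        z = F.linear(x, w)                   # correlation with each template
        # Tilted softmax across neurons: posterior probabilities of the
        # signaling hypotheses represented by each neuron.
        return F.softmax(self.t_inf * z, dim=1), z


def texp_objective(z: torch.Tensor, t_train: float) -> torch.Tensor:
    """Tilted exponential term (1/t) * log((1/K) * sum_k exp(t * z_k)),
    averaged over the batch; it is subtracted from the end-to-end loss
    below so that training maximizes it."""
    K = z.shape[1]
    return ((torch.logsumexp(t_train * z, dim=1) - math.log(K)) / t_train).mean()


# Hypothetical training step: supplement the usual cross-entropy loss with
# the TEXP term at one intermediate layer.
layer = TEXPLayer(in_features=784, out_features=256)
head = nn.Linear(256, 10)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
posteriors, z = layer(x)
loss = F.cross_entropy(head(posteriors), y) - 0.1 * texp_objective(z, t_train=1.0)
loss.backward()
```

In this sketch, the TEXP term rewards large matched-filter responses relative to the average across neurons, while the tilted softmax at inference suppresses neurons whose templates correlate weakly with the input; both choices follow the matched-filter and posterior-probability reading given in the abstract.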
