

Poster in Workshop: 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML)

On genuine invariance learning without weight-tying

Artem Moskalev · Anna Sepliarskaia · Erik Bekkers · Arnold Smeulders


Abstract:

This paper investigates the properties and limitations of learned invariance in neural networks, as opposed to built-in invariance achieved through invariant weight-tying. We demonstrate that learned invariance depends heavily on the input data and degrades quickly when moving away from the training distribution. Next, we address the challenge of aligning data-driven invariance learning with the genuine invariance of weight-tying networks. We show that with a simple invariance regularization, networks can learn an invariance that closely resembles the rigid invariance of weight-tying networks, but at the cost of downstream task performance. We further investigate this performance decay under learned invariance and demonstrate that it stems from the implicit regularization of data-driven invariance learning, which constrains the network's sensitivity to any input perturbations. This presents a new and challenging problem: achieving genuine invariance by learning while also maintaining downstream task performance.
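To make the idea of an invariance regularizer concrete, below is a minimal PyTorch sketch of one common form of such a penalty: the network's predictions on a transformed input (here, a random rotation) are pushed toward its predictions on the original input. This is an illustrative example under assumed settings (rotation as the group action, MSE on logits as the penalty, a weighting factor `lam`), not necessarily the exact regularizer used in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import rotate


def invariance_regularized_loss(model, x, y, lam=1.0):
    """Task loss plus a penalty on prediction changes under random rotations.

    Illustrative sketch only: the paper's regularizer may differ in the
    choice of transformation, penalty, and weighting.
    """
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    # Sample one random rotation angle and apply it to the input batch.
    angle = float(torch.empty(1).uniform_(0.0, 360.0))
    logits_rot = model(rotate(x, angle))

    # Penalize the discrepancy between predictions on x and on the rotated x.
    inv_penalty = F.mse_loss(logits_rot, logits.detach())

    return task_loss + lam * inv_penalty
```

In a standard training loop, one would compute `loss = invariance_regularized_loss(model, x, y)` and backpropagate as usual; increasing `lam` trades downstream task performance for stricter invariance, which is the tension the abstract describes.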
