Any-dimensional invariant universality
Abstract
Several machine learning models are defined for inputs of any size, such as graphs with varying numbers of nodes and point clouds with varying numbers of points. The universality properties of such any-dimensional models remain poorly understood, since universality is traditionally studied for models that accept inputs of a fixed size and are defined on a compact subset of their domain. In sharp contrast, any-dimensional models can be viewed as sequences of functions defined on inputs of growing size, and it is not clear in which sense they can be universal. We develop a systematic approach to establishing any-dimensional universality by identifying an any-dimensional function with a unique function taking inputs in a suitable infinite-dimensional limit space, one that contains inputs of all finite sizes as well as their limits. Using the symmetries of these inputs and the relations between inputs of different sizes, we show that this limit space admits a natural topology with rich families of compact sets on which any-dimensional universality can be established. We illustrate our approach by showing that several existing architectures fail to be universal, and we propose simple modifications that restore universality.