Abstract:
In recent years, deep learning-based sequence models, such as language models, have received much attention and achieved great success, which motivates researchers to explore the possibility of casting non-sequential problems in a sequential form. Following this idea, deep neural networks can be represented as composite functions of a sequence of mappings, linear or nonlinear, where each composition can be viewed as a word. However, the weights of the linear mappings are undetermined and hence would require an infinite number of words. In this article, we investigate the finite case and constructively prove the existence of a finite vocabulary $V = \{\phi_i : \mathbb{R}^d \to \mathbb{R}^d \mid i = 1, \ldots, n\}$ with $n = O(d^2)$ for universal approximation. That is, for any continuous mapping $f : \mathbb{R}^d \to \mathbb{R}^d$, compact domain $\Omega$, and $\varepsilon > 0$, there is a sequence of mappings $\phi_{i_1}, \ldots, \phi_{i_m} \in V$, $m \in \mathbb{Z}^+$, such that the composition $\phi_{i_m} \circ \cdots \circ \phi_{i_1}$ approximates $f$ on $\Omega$ with an error less than $\varepsilon$. Our results demonstrate an unusual approximation power of mapping compositions and motivate a novel compositional model for regular languages.
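To make the notation concrete, the following is a minimal sketch, not the paper's construction: it uses a small hypothetical vocabulary of maps on $\mathbb{R}^2$ (the specific maps and the `compose` helper are illustrative assumptions) and shows how an index sequence, i.e. a word, is turned into the composition $\phi_{i_m} \circ \cdots \circ \phi_{i_1}$ applied to a point.

```python
import numpy as np

d = 2  # dimension of the ambient space R^d

# Hypothetical finite vocabulary of maps phi_i : R^d -> R^d
# (chosen only for illustration; the paper's vocabulary is constructed differently).
vocabulary = [
    lambda x: x + np.array([0.1, 0.0]),       # phi_1: shift along the first coordinate
    lambda x: x + np.array([0.0, 0.1]),       # phi_2: shift along the second coordinate
    lambda x: np.array([[0.0, -1.0],
                        [1.0,  0.0]]) @ x,    # phi_3: rotation by 90 degrees
    lambda x: np.maximum(x, 0.0),             # phi_4: componentwise ReLU
]

def compose(indices, x):
    """Apply phi_{i_m} o ... o phi_{i_1} to x, reading the word (i_1, ..., i_m) left to right."""
    for i in indices:
        x = vocabulary[i](x)
    return x

# Example: the word (phi_1, phi_3, phi_4) applied to a point in R^2.
print(compose([0, 2, 3], np.array([0.5, -0.2])))
```

The approximation statement in the abstract says that, for a suitably constructed vocabulary of size $O(d^2)$, some such word reproduces any continuous target map on a compact domain to within $\varepsilon$; the sketch above only illustrates the composition mechanism itself.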