Poster
Emergence of Separable Manifolds in Deep Language Representations
Jonathan Mamou · Hang Le · Miguel A del Rio Fernandez · Cory Stephenson · Hanlin Tang · Yoon Kim · SueYeon Chung

Thu Jul 16 08:00 AM -- 08:45 AM & Thu Jul 16 07:00 PM -- 07:45 PM (PDT)

Deep neural networks (DNNs) have shown considerable empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representations extracted from task-optimized DNNs and neural populations in the brain. DNNs have subsequently become a popular model class for inferring computational principles underlying complex cognitive functions, and in turn, they have emerged as a natural testbed for applying methods originally developed to probe information in neural populations. In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience that connects the geometry of feature representations with the linear separability of classes, to analyze language representations from large-scale contextual embedding models. We explore representations from different model families (BERT, RoBERTa, GPT, etc.) and find evidence for the emergence of linguistic manifolds across layer depth (e.g., manifolds for part-of-speech tags), especially in ambiguous data (i.e., words with multiple part-of-speech tags, or part-of-speech classes containing many words). In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of the manifolds' radius, dimensionality, and inter-manifold correlations.
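The analysis described above operates on per-class point clouds of contextual representations: each part-of-speech tag defines a "manifold" of feature vectors at every layer, whose geometry (radius, dimension, correlations) is then quantified. As a minimal sketch of the data-collection step only (not the authors' actual pipeline), the following assumes the HuggingFace transformers library, a BERT model, and a toy POS-tagged corpus; the tagged sentences and the first-subtoken pooling choice are illustrative assumptions.

import torch
from collections import defaultdict
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
model.eval()

# Toy tagged corpus; in practice one would use a POS-annotated treebank.
tagged_sentences = [
    [("The", "DET"), ("dog", "NOUN"), ("runs", "VERB")],
    [("A", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
]

# manifolds[layer][tag] -> list of feature vectors for that POS class
manifolds = defaultdict(lambda: defaultdict(list))

with torch.no_grad():
    for sentence in tagged_sentences:
        words = [w for w, _ in sentence]
        enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
        hidden_states = model(**enc).hidden_states  # embedding layer + 12 layers
        word_ids = enc.word_ids()
        for layer, states in enumerate(hidden_states):
            for token_idx, word_idx in enumerate(word_ids):
                if word_idx is None:  # skip special tokens like [CLS]/[SEP]
                    continue
                # Represent each word by its first sub-token (an assumption).
                if token_idx == 0 or word_ids[token_idx - 1] != word_idx:
                    tag = sentence[word_idx][1]
                    manifolds[layer][tag].append(states[0, token_idx])

Each manifolds[layer][tag] list can then be stacked into a matrix per class and per layer; the mean-field theoretic analysis takes these point clouds as input and reports capacity along with the radius, dimension, and correlation measures discussed in the abstract.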

Author Information

Jonathan Mamou (Intel Labs)
Hang Le (MIT)
Miguel A del Rio Fernandez (MIT)
Cory Stephenson (Intel Corporation)
Hanlin Tang (Intel AI)
Yoon Kim (Harvard University)
SueYeon Chung (Columbia University)
