Poster in Workshop: AI for Science: Scaling in AI for Scientific Discovery

Population Transformer: Learning Population-level Representations of Intracranial Activity

Geeling Chau · Christopher Wang · Sabera Talukder · Vighnesh Subramaniam · Saraswati Soedarmadji · Yisong Yue · Boris Katz · Andrei Barbu

Keywords: [ Representation Learning ] [ Neuroscience ] [ Self-Supervised Learning ]


Abstract:

We present a self-supervised framework that learns population-level codes for intracranial neural recordings at scale, unlocking the benefits of representation learning for a key neuroscience recording modality. The Population Transformer (PopT) lowers the amount of data required for decoding experiments while increasing accuracy, even on never-before-seen subjects and tasks. We address two key challenges in developing PopT: sparse electrode distribution and varying electrode locations across patients. PopT stacks on top of pretrained representations and enhances downstream tasks by enabling learned aggregation of multiple spatially sparse data channels. Beyond decoding, we interpret the pretrained and fine-tuned PopT models to show how they can provide neuroscience insights learned from massive amounts of data. We release a pretrained PopT to enable off-the-shelf improvements in multi-channel intracranial data decoding and interpretability.
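The abstract describes learned aggregation of spatially sparse channels on top of pretrained per-channel representations. The following is a minimal sketch of that idea, assuming a single attention head and a learnable query token that pools a variable number of channel embeddings into one population-level vector; the actual PopT architecture, parameterization, and position encoding are not specified in the abstract, so all names, shapes, and the coordinate projection here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_channels(channel_embs, channel_pos, rng):
    """Pool per-channel embeddings into one population-level vector.

    channel_embs: (n_channels, d) pretrained per-channel representations
    channel_pos:  (n_channels, 3) electrode coordinates (vary across subjects)
    """
    n, d = channel_embs.shape
    # Hypothetical position encoding: project electrode coordinates into the
    # embedding space so aggregation can use electrode location.
    pos_proj = rng.standard_normal((channel_pos.shape[1], d)) * 0.1
    tokens = channel_embs + channel_pos @ pos_proj
    # A learnable aggregation query (akin to a [CLS] token) attends over all
    # channels; attention handles variable channel counts across subjects.
    query = rng.standard_normal((1, d))
    attn = softmax(query @ tokens.T / np.sqrt(d))  # (1, n) attention weights
    return (attn @ tokens)[0]                      # (d,) population-level code

rng = np.random.default_rng(0)
embs = rng.standard_normal((12, 64))  # e.g. 12 electrodes, 64-dim features
pos = rng.standard_normal((12, 3))    # electrode coordinates
pop_vec = aggregate_channels(embs, pos, rng)
print(pop_vec.shape)  # (64,)
```

Because the attention weights are computed per channel, the same pooling mechanism applies regardless of how many electrodes a subject has or where they are placed, which is one plausible way to handle the sparse and subject-varying electrode layouts the abstract highlights.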
