

Oral (prerecorded) in Workshop: Machine Learning for Multimodal Healthcare Data

Can Brain Signals Reveal Inner Alignment with Human Languages?

Jielin Qiu · William Han · Jiacheng Zhu · Mengdi Xu · Douglas Weber · Bo Li · Ding Zhao

Keywords: [ Multimodal biomarkers ] [ Multimodal fusion ] [ Co-creation and human-in-the-loop ]


Abstract:

Brain signals, such as electroencephalography (EEG), and human language have each been widely studied for many downstream tasks, but the connection between them has not been well explored. In this study, we investigate the relationship and dependency between EEG and language. To study this at the representation level, we introduce MTAM, a Multimodal Transformer Alignment Model, to observe coordinated representations between the two modalities. We use relationship-alignment techniques, such as Canonical Correlation Analysis and Wasserstein Distance, as loss functions to transform the features. On the downstream applications of sentiment analysis and relation detection, we achieve new state-of-the-art results on two datasets, ZuCo and K-EmoCon. Our method achieves an F1-score improvement of 16.5% on K-EmoCon and 27% on ZuCo for sentiment analysis, and 31.1% on ZuCo for relation detection. In addition, we provide interpretations of the performance improvement: (1) feature distributions show the effectiveness of the alignment module in discovering and encoding the relationship between EEG and language; (2) alignment weights show the influence of different language semantics as well as EEG frequency features; (3) brain topographical maps provide an intuitive demonstration of the connectivity of brain regions. Our anonymized code is available at https://anonymous.4open.science/r/ICML-109F/.
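The abstract mentions using Wasserstein Distance as an alignment loss between EEG and language representations. As a rough illustration only (not the authors' MTAM implementation), the sketch below shows how an entropic-regularized Wasserstein (Sinkhorn) term between two batches of modality embeddings could be computed and added to a training objective; the function and variable names are hypothetical.

```python
# Hypothetical sketch of a Wasserstein-style alignment loss between
# EEG and language embeddings; not the authors' actual MTAM code.
import math
import torch

def sinkhorn_alignment_loss(eeg_feats, text_feats, eps=0.1, n_iters=50):
    """Entropic-regularized Wasserstein distance between two feature batches.

    eeg_feats: (n, d) EEG embeddings; text_feats: (m, d) language embeddings.
    Returns a differentiable scalar that can be added to a task loss.
    """
    n, m = eeg_feats.size(0), text_feats.size(0)
    # Pairwise squared-Euclidean transport costs between the two modalities.
    cost = torch.cdist(eeg_feats, text_feats, p=2) ** 2            # (n, m)
    # Uniform marginals over each batch, kept in log space.
    log_mu = torch.full((n,), -math.log(n), device=cost.device)
    log_nu = torch.full((m,), -math.log(m), device=cost.device)
    f = torch.zeros(n, device=cost.device)
    g = torch.zeros(m, device=cost.device)
    for _ in range(n_iters):
        # Alternating dual-potential updates via log-sum-exp for stability.
        f = eps * (log_mu - torch.logsumexp((g[None, :] - cost) / eps, dim=1))
        g = eps * (log_nu - torch.logsumexp((f[:, None] - cost) / eps, dim=0))
    # Recover the transport plan and return its expected cost.
    log_plan = (f[:, None] + g[None, :] - cost) / eps
    return (log_plan.exp() * cost).sum()

# Example usage with random stand-in features:
eeg = torch.randn(8, 128)     # e.g. pooled EEG-channel embeddings
text = torch.randn(8, 128)    # e.g. sentence-level language embeddings
alignment_loss = sinkhorn_alignment_loss(eeg, text)
```

In such a setup, the alignment term would typically be weighted and summed with the downstream classification loss (e.g., for sentiment analysis), encouraging the two modality encoders to produce distributions that are cheap to transport onto one another.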
