

Poster

Matrix Information Theory for Self-Supervised Learning

Yifan Zhang · Zhiquan Tan · Jingqin Yang · Weiran Huang · Yang Yuan


Abstract:

The maximum entropy encoding framework provides a unified perspective on many non-contrastive learning methods such as SimSiam, Barlow Twins, and MEC. Inspired by this framework, we introduce Matrix-SSL, a novel approach that leverages matrix information theory to interpret the maximum entropy encoding framework as a matrix uniformity loss. Furthermore, Matrix-SSL enhances maximum entropy encoding by seamlessly incorporating a matrix alignment loss that directly aligns the covariance matrices of different views. Experimental results show that Matrix-SSL outperforms state-of-the-art methods on the ImageNet dataset under linear evaluation settings and on MS-COCO for transfer learning tasks. Specifically, on MS-COCO transfer learning tasks, our method outperforms previous SOTA methods such as MoCo v2 and BYOL by up to 3.3% with only 400 epochs of pre-training, compared to their 800 epochs. We also bring representation learning into the language modeling regime, achieving 72.3% on the GSM8K dataset by fine-tuning a 7B model with a matrix cross-entropy loss, a margin of 3.1% over the standard cross-entropy loss.
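To make the alignment idea concrete, below is a minimal sketch of a matrix cross-entropy loss applied to the covariance matrices of two augmented views. It assumes the common matrix generalization MCE(P, Q) = tr(-P log Q + Q) for positive semi-definite matrices; the function names, the eigenvalue clamping, and the toy shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def matrix_log(psd, eps=1e-6):
    # Matrix logarithm of a symmetric PSD matrix via eigendecomposition,
    # clamping eigenvalues for numerical stability (sketch assumption).
    vals, vecs = torch.linalg.eigh(psd)
    return vecs @ torch.diag(torch.log(vals.clamp_min(eps))) @ vecs.T

def matrix_cross_entropy(p, q, eps=1e-6):
    # MCE(P, Q) = tr(-P log Q + Q): one matrix generalization of
    # cross-entropy for PSD matrices (assumed definition).
    return torch.trace(-p @ matrix_log(q, eps) + q)

def covariance(z):
    # d x d covariance of a batch of L2-normalized features (n x d).
    z = F.normalize(z, dim=1)
    return z.T @ z / z.shape[0]

# Toy usage: align the covariance matrices of two views of a batch.
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = matrix_cross_entropy(covariance(z1), covariance(z2))
```

In a full training loop, a term like this would be combined with a uniformity-style objective and averaged symmetrically over the two views; those details are left out of this sketch.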
