

Poster

Advancing Personalized Learning with Neural Collapse for Long-Tail Challenge

Hanglei Hu · Yingying Guo · Zhikang Chen · Sen Cui · Fei Wu · Kun Kuang · Min Zhang · Bo Jiang

East Exhibition Hall A-B #E-2201
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Personalized learning, especially via data-driven methods, has garnered widespread attention in recent years, aiming to meet individual student needs. However, many existing methods rely on the implicit assumption that benchmarks are high-quality and well-annotated, which limits their practical applicability. In real-world scenarios, these benchmarks often exhibit long-tail distributions, significantly degrading model performance. To address this challenge, we propose a novel method called Neural-Collapse-Advanced personalized Learning (NCAL), designed to learn features that conform to a shared simplex equiangular tight frame (ETF) structure. NCAL introduces Text-Modality Collapse (TC) regularization to optimize the distribution of text embeddings within the large language model (LLM) representation space. Notably, NCAL is model-agnostic, making it compatible with a variety of architectures and approaches and thereby ensuring broad applicability. Extensive experiments demonstrate that NCAL consistently improves existing methods, achieving new state-of-the-art performance. Additionally, NCAL mitigates class imbalance, significantly improving the model's generalization ability.
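
The abstract names the two key ingredients of NCAL, a fixed simplex ETF target geometry and a Text-Modality Collapse regularizer that pulls text embeddings toward it, but does not give their formulas. Below is a minimal PyTorch sketch of what those ingredients could look like. The functions simplex_etf and tc_regularizer, the cosine-alignment form of the penalty, and the weight lambda_tc are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of simplex-ETF geometry and a TC-style regularizer,
# assuming PyTorch. Not the paper's actual code.
import torch
import torch.nn.functional as F

def simplex_etf(num_classes: int, dim: int) -> torch.Tensor:
    """Return a (num_classes, dim) matrix whose rows form a simplex ETF:
    unit-norm vectors with pairwise cosine similarity -1/(num_classes - 1)."""
    assert dim >= num_classes, "this construction needs dim >= num_classes"
    # U has orthonormal columns (reduced QR of a random Gaussian matrix).
    u, _ = torch.linalg.qr(torch.randn(dim, num_classes))
    # Center the identity so the vertices are maximally equiangular.
    center = torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes
    m = (num_classes / (num_classes - 1)) ** 0.5 * (u @ center)
    return m.t()  # rows are the fixed ETF class targets

def tc_regularizer(embeddings: torch.Tensor, labels: torch.Tensor,
                   etf: torch.Tensor) -> torch.Tensor:
    """Illustrative text-modality collapse penalty: pull the mean text
    embedding of each class in the batch toward its ETF vertex."""
    classes = labels.unique()
    loss = embeddings.new_zeros(())
    for c in classes:
        mu = embeddings[labels == c].mean(dim=0)       # batch class mean
        loss = loss + (1.0 - F.cosine_similarity(mu, etf[c], dim=0))
    return loss / classes.numel()

# Usage sketch: h stands in for pooled LLM text embeddings.
etf = simplex_etf(num_classes=10, dim=768)             # fixed, not trained
h = torch.randn(32, 768)
y = torch.randint(0, 10, (32,))
reg = tc_regularizer(h, y, etf)
# A training objective would combine this with the task loss, e.g.
# total = task_loss + lambda_tc * reg, where lambda_tc is a
# hypothetical weighting hyperparameter.
```

Because the ETF vertices are fixed rather than learned, every class, head or tail, gets an equally spaced target direction, which is the usual intuition for applying neural-collapse geometry under class imbalance.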

Lay Summary:

Personalized learning has gained significant attention in recent years, yet most existing approaches assume access to well-balanced, high-quality datasets—an assumption that rarely holds in real-world educational settings where data often exhibit long-tail distributions. This imbalance hampers model performance and generalization. To address this challenge, we propose Neural-Collapse-Advanced personalized Learning (NCAL), a model-agnostic method that encourages features to align with a simplex equiangular tight frame (ETF) structure through Text-Modality Collapse (TC) regularization. By optimizing the distribution of text embeddings within the large language model’s representation space, NCAL effectively mitigates the negative effects of class imbalance. Our extensive experiments demonstrate that NCAL not only achieves state-of-the-art results but also significantly enhances the generalization ability of models across diverse architectures, offering a robust and scalable solution for real-world personalized learning tasks.
