

Poster in Workshop: ES-FoMo: Efficient Systems for Foundation Models

Semi-supervised Tabular Classification via In-context Learning of Large Language Models

Jaehyun Nam · Woomin Song · Seong Hyeon Park · Jihoon Tack · Sukmin Yun · Jaehyung Kim · Jinwoo Shin


Abstract:

Learning with limited labeled tabular samples is an important problem for industrial machine learning applications, as acquiring annotations for tabular data is often too costly. On the other hand, recent remarkable progress in natural language processing has shown that this issue can be circumvented by using pre-trained large language models (LLMs). Motivated by this, we ask whether LLMs can also help handle limited labeled data in the tabular domain. As a positive answer, we propose a novel semi-supervised tabular learning framework, coined Self-generated PROmpts from Unlabeled Tables (SPROUT), which utilizes unlabeled data in conjunction with LLMs. Our main idea is to exploit the in-context learning capabilities of LLMs to effectively extract transferable knowledge from unlabeled tabular samples. Specifically, SPROUT generates in-context prompts from unlabeled tables by identifying a column feature that exhibits a strong correlation with the actual target label, thereby creating examples that pertain to the true target task. In addition, we demonstrate how a language prior can facilitate knowledge transfer from heterogeneous data sources, enhancing performance on target datasets and mitigating the challenges posed by varying input formats. Experimental results show that SPROUT yields substantial performance improvements over previous methods across various tabular benchmarks.
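To make the main idea concrete, the following is a minimal sketch (not the authors' implementation) of how one could pick a surrogate column that correlates with the target label and serialize unlabeled rows into in-context examples; all function names, the mutual-information column-selection step, and the majority-vote pseudo-labeling rule are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of the idea in the abstract: estimate which column of the table is
# most informative about the true label using the few labeled rows, pseudo-label
# unlabeled rows through that surrogate column, and serialize them as in-context
# examples for an LLM. Names and heuristics here are assumptions for illustration.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif
from sklearn.preprocessing import OrdinalEncoder


def pick_surrogate_column(labeled: pd.DataFrame, label_col: str) -> str:
    """Return the feature column most informative about the label on the labeled rows."""
    features = labeled.drop(columns=[label_col])
    X = OrdinalEncoder().fit_transform(features.astype(str))
    scores = mutual_info_classif(X, labeled[label_col], discrete_features=True, random_state=0)
    return features.columns[scores.argmax()]


def row_to_text(row: pd.Series) -> str:
    """Serialize one tabular row as a short natural-language description."""
    return ", ".join(f"{col} is {val}" for col, val in row.items())


def build_prompt(labeled: pd.DataFrame, unlabeled: pd.DataFrame,
                 label_col: str, query: pd.Series, k: int = 4) -> str:
    """Compose an in-context prompt whose demonstrations come from unlabeled rows,
    pseudo-labeled via the surrogate column (assumed heuristic)."""
    surrogate = pick_surrogate_column(labeled, label_col)
    # Map each surrogate value to the majority label it co-occurs with in the labeled set.
    value_to_label = labeled.groupby(surrogate)[label_col].agg(lambda s: s.mode().iloc[0])
    demos = []
    for _, row in unlabeled.head(k).iterrows():
        pseudo = value_to_label.get(row[surrogate], labeled[label_col].mode().iloc[0])
        demos.append(f"{row_to_text(row.drop(labels=[label_col], errors='ignore'))}. Answer: {pseudo}")
    question = f"{row_to_text(query)}. Answer:"
    return "\n".join(demos + [question])
```

The resulting prompt string would then be sent to an LLM, which predicts the label for the query row by completing the final "Answer:" line; the actual SPROUT prompt format and column-selection criterion may differ.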
