

Poster in Workshop: AI for Science: Scaling in AI for Scientific Discovery

Text Serialization and Their Relationship with the Conventional Paradigms of Tabular Machine Learning

Simon Lee · Kyoka Ono

Keywords: [ foundation models ] [ tabular machine learning ]


Abstract:

Recent research has explored how Language Models (LMs) can be used for feature representation and prediction in tabular machine learning tasks, employing text serialization and supervised fine-tuning (SFT) techniques. Despite the simplicity of these techniques, significant gaps remain in our understanding of the applicability and reliability of LMs in this context. Our study assesses how emerging LM technologies compare with traditional paradigms in tabular machine learning and evaluates the feasibility of adopting similar approaches with these advanced technologies. At the data level, we investigate various methods of representing and curating serialized tabular data, exploring their impact on prediction performance. At the classification level, we examine whether text serialization combined with LMs enhances performance on challenging tabular datasets (e.g., those exhibiting class imbalance, distribution shift, biases, and high dimensionality), and assess whether this method represents a state-of-the-art (SOTA) approach to tabular machine learning challenges. Our findings reveal nuanced results regarding the applicability of text serialization, including instances where LMs achieve SOTA performance on specific metrics.
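As background, text serialization converts each tabular record into a natural-language string before it is passed to an LM. The minimal Python sketch below illustrates one common "column is value" template; the function name, template wording, and example record are illustrative assumptions, not the specific serialization formats compared in the paper.

from typing import Any, Dict

def serialize_row(row: Dict[str, Any]) -> str:
    """Convert one tabular record into a natural-language string for an LM."""
    # One sentence per feature: "The <column> is <value>."
    parts = [f"The {col} is {val}." for col, val in row.items()]
    return " ".join(parts)

# Hypothetical record from an income-prediction table
row = {"age": 42, "occupation": "nurse", "hours_per_week": 38}
print(serialize_row(row))
# -> "The age is 42. The occupation is nurse. The hours_per_week is 38."

In an SFT setup, strings like this would serve as model inputs, paired with the target label as the expected output; the choice of template is one of the data-level design decisions the study examines.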
