

Poster

CARTE: Pretraining and Transfer for Tabular Learning

Myung Jun Kim · Leo Grinsztajn · Gael Varoquaux

Hall C 4-9 #507
[ Project Page ]
Thu 25 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Pretrained deep-learning models are the go-to solution for images and text. For tabular data, however, the standard is still to train tree-based models. Indeed, transfer learning on tables runs into the challenge of data integration: finding correspondences in the entries (entity matching), where different words may denote the same entity, and correspondences across columns (schema matching), where columns may appear in different orders or under different names. We propose a neural architecture that does not need such correspondences. As a result, we can pretrain it on background data that has not been matched. The architecture, CARTE (Context Aware Representation of Table Entries), uses a graph representation of tabular (or relational) data to process tables with different columns, string embeddings of entries and column names to model an open vocabulary, and a graph-attentional network to contextualize entries with column names and neighboring entries. An extensive benchmark shows that CARTE facilitates learning, outperforming a solid set of baselines including the best tree-based models. CARTE also enables joint learning across tables with unmatched columns, enhancing a small table with bigger ones. CARTE opens the door to large pretrained models for tabular data.
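To make the graph representation concrete, here is a minimal sketch (not the authors' code) of the idea the abstract describes: each table row becomes a small star graph whose nodes carry string embeddings of the cell entries and whose edges carry embeddings of the column names, so rows from tables with different schemas land in a shared space without any entity or schema matching. The helper names (`embed`, `row_to_graph`) and the hash-based toy embedder are illustrative stand-ins; CARTE itself uses pretrained language-model string embeddings and a learned graph-attentional network on top.

```python
# Minimal sketch of a row-as-graph encoding, under the assumptions above.
import hashlib
import numpy as np

DIM = 8  # toy embedding dimension; the real model uses a much larger one


def embed(text: str) -> np.ndarray:
    """Deterministic toy string embedding (stand-in for an LM embedder)."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).standard_normal(DIM)


def row_to_graph(row: dict) -> dict:
    """Turn one table row into a star graph: a center node connected to one
    node per cell, with the column-name embedding on the connecting edge.
    Strings carry the semantics, so no schema/entity matching is needed.
    (Numeric cells are stringified here; this is a simplification.)"""
    nodes = [np.zeros(DIM)]                 # node 0: center node for the row
    edges, edge_feats = [], []
    for col, value in row.items():
        nodes.append(embed(str(value)))     # node feature: entry string
        edges.append((0, len(nodes) - 1))   # center -> entry
        edge_feats.append(embed(col))       # edge feature: column name
    return {"x": np.stack(nodes),
            "edges": edges,
            "edge_attr": np.stack(edge_feats)}


# Rows from tables with different schemas map into the same graph space.
g1 = row_to_graph({"Name": "Paris", "Country": "France", "Population": 2.1e6})
g2 = row_to_graph({"city": "Lyon", "nb_inhabitants": 5.2e5})
print(g1["x"].shape, len(g2["edges"]))
```

Because column names travel on the edges as embeddings, semantically close headers such as "Name" and "city" provide a soft correspondence between tables, which is what lets a single pretrained network consume unmatched schemas.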
