VT-Bench: A Unified Benchmark for Visual-Tabular Multi-Modal Learning
Abstract
Multi-modal learning has attracted great attention in vision-text tasks. However, visual-tabular data, which plays a pivotal role in high-stakes domains such as healthcare and industry, remains underexplored. In this paper, we introduce \textit{VT-Bench}, the first unified benchmark that standardizes visual-tabular discriminative prediction and generative reasoning tasks. VT-Bench aggregates 14 datasets spanning 9 domains (primarily medical, with additional coverage of pets, media, and transportation) and comprising over 756K samples. We evaluate 21 representative models, including unimodal experts, specialized visual-tabular models, and general-purpose vision-language models (VLMs), revealing substantial challenges in visual-tabular learning. We believe VT-Bench will stimulate the community to build more powerful multi-modal visual-tabular foundation models. Benchmark: \url{https://anonymous.4open.science/r/VT-Bench-13C2}