Poster
in
Workshop: Data-centric Machine Learning Research (DMLR): Datasets for Foundation Models
The Neglected Tails in Vision-Language Models
Shubham Parashar · Zhiqiu Lin · Tian Liu · Xiangjue Dong · Yanan Li · Deva Ramanan · James Caverlee · Shu Kong
Vision-language models (VLMs) excel in zero-shot recognition, but their performance varies greatly across visual concepts. For example, although CLIP achieves impressive accuracy on ImageNet (60-80%), its performance drops below 10% on more than ten concepts, such as night snake, presumably because these concepts are underrepresented in the pretraining data. However, measuring the frequency of concepts in VLMs' large-scale pretraining datasets is challenging. We address this by using large language models (LLMs) to count the number of pretraining texts that contain synonyms of these concepts. Our analysis confirms that popular datasets, such as LAION, exhibit a long-tailed concept distribution, yielding biased performance in VLMs. We also find that downstream applications of VLMs, including visual chatbots (e.g., GPT-4V) and text-to-image models (e.g., Stable Diffusion), often fail to recognize or generate images of the rare concepts identified by our method.
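The core measurement described above, counting how many pretraining captions mention any synonym of a concept, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synonym lists and captions here are hypothetical stand-ins (in the paper, synonyms come from an LLM and captions from datasets like LAION).

```python
import re

# Hypothetical synonym lists; in the paper these are generated by an LLM.
SYNONYMS = {
    "night snake": ["night snake", "hypsiglena"],
    "tiger": ["tiger", "panthera tigris"],
}

def concept_frequency(captions, synonyms):
    """Count captions mentioning any synonym (case-insensitive, whole-word match)."""
    patterns = [re.compile(r"\b" + re.escape(s) + r"\b", re.IGNORECASE)
                for s in synonyms]
    return sum(any(p.search(c) for p in patterns) for c in captions)

# Toy stand-in for a pretraining caption corpus.
captions = [
    "A Night Snake coiled on a rock",
    "a tiger resting in tall grass",
    "portrait of a cat",
]

freqs = {c: concept_frequency(captions, syns) for c, syns in SYNONYMS.items()}
```

Sorting concepts by such counts exposes the long tail: concepts with few matching captions are exactly those on which zero-shot accuracy tends to collapse.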