

Poster in Workshop: Multi-modal Foundation Model meets Embodied AI (MFM-EAI)

Jina CLIP: Your CLIP Model Is Also Your Text Retriever

Han Xiao · Georgios Mastrapas · Bo Wang


Abstract:

Contrastive Language-Image Pretraining (CLIP) is widely used to train models that align images and texts in a common embedding space by mapping them to fixed-size vectors. These models are key to multimodal information retrieval and related tasks. However, CLIP models generally underperform on text-only tasks compared to specialized text models. This creates inefficiencies for information retrieval systems that must maintain separate embeddings and models for text-only and multimodal tasks. We propose a novel multi-task contrastive training method to address this issue, which we use to train the JinaCLIP model and achieve state-of-the-art performance on both text-image and text-text retrieval tasks.
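The sketch below illustrates the general idea of such a multi-task contrastive objective: a standard InfoNCE loss over image-text pairs is combined with an InfoNCE loss over text-text pairs, so a single text encoder can serve both multimodal and text-only retrieval. This is a minimal, hypothetical PyTorch-style sketch, not the authors' implementation; the function names, the temperature, and the mixing weight `alpha` are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn.functional as F


def info_nce(query_emb: torch.Tensor, key_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    q = F.normalize(query_emb, dim=-1)
    k = F.normalize(key_emb, dim=-1)
    logits = q @ k.t() / temperature                      # (B, B) similarities
    targets = torch.arange(q.size(0), device=q.device)    # positives on the diagonal
    # Average the losses for both retrieval directions (query->key and key->query).
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def multi_task_contrastive_loss(image_emb, caption_emb,
                                query_emb, passage_emb,
                                alpha: float = 0.5) -> torch.Tensor:
    """Weighted sum of an image-text loss and a text-text loss.

    `alpha` is a hypothetical mixing weight chosen for illustration only.
    """
    loss_img_txt = info_nce(image_emb, caption_emb)   # multimodal alignment
    loss_txt_txt = info_nce(query_emb, passage_emb)   # text retrieval alignment
    return alpha * loss_img_txt + (1.0 - alpha) * loss_txt_txt
```

In practice, the image-text embeddings come from the image and text towers of the CLIP-style model, while the text-text pairs (e.g. queries and passages) are encoded by the same text tower, which is what allows one model to handle both retrieval settings.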
