

Poster in Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward

ECLIP: Efficient Contrastive Language-Image Pretraining via Ensemble Confidence Learning and Masked Language Modeling

Jue Wang · Haofan Wang · Weijia Wu · Jincan Deng · Yu Lu · Xiaofeng Guo · Debing Zhang


Abstract:

While large-scale pre-training has made great progress in bridging the gap between vision and language, it still faces three challenges. First, pre-training is computationally expensive. Second, there is no efficient way to handle data noise, which degrades model performance. Third, previous methods leverage only limited image-text paired data while ignoring richer single-modal data, which may result in poor generalization to single-modal downstream tasks. In this work, we propose Efficient Contrastive Language-Image Pretraining (ECLIP) via Ensemble Confidence Learning and Masked Language Modeling. Specifically, we adaptively filter out noisy samples during training with an Ensemble Confidence Learning strategy, and add a Masked Language Modeling objective to exploit extra non-paired text data. ECLIP achieves state-of-the-art performance on Chinese cross-modal retrieval tasks with only 1/10 of the training resources of CLIP and WenLan, while showing excellent generalization to single-modal tasks, including text retrieval and text classification.
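
The abstract only sketches the method at a high level. As a rough illustration of how the two ingredients could fit together, below is a minimal PyTorch sketch of a combined training objective: a CLIP-style contrastive loss restricted to the image-text pairs that an ensemble of scorers judges confident (standing in for the Ensemble Confidence Learning filter), plus a masked-language-modeling loss on extra text. The names (ensemble_keep_mask, eclip_style_loss), the thresholding rule, and the loss weighting are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F


def ensemble_keep_mask(ensemble_scores, threshold=0.5):
    # ensemble_scores: (n_models, batch) matching confidences from an ensemble
    # of scorers; keep a pair when the mean ensemble confidence is high enough.
    # Both the averaging rule and the threshold are illustrative assumptions.
    return ensemble_scores.mean(dim=0) > threshold


def eclip_style_loss(image_emb, text_emb, keep_mask,
                     mlm_logits, mlm_labels,
                     temperature=0.07, mlm_weight=1.0):
    # CLIP-style symmetric contrastive loss, computed only on the pairs
    # retained by the confidence filter.
    img = F.normalize(image_emb[keep_mask], dim=-1)
    txt = F.normalize(text_emb[keep_mask], dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    contrastive = (F.cross_entropy(logits, targets) +
                   F.cross_entropy(logits.t(), targets)) / 2

    # Masked language modeling on (possibly unpaired) text;
    # label -100 marks tokens that were not masked.
    mlm = F.cross_entropy(mlm_logits.view(-1, mlm_logits.size(-1)),
                          mlm_labels.view(-1), ignore_index=-100)

    return contrastive + mlm_weight * mlm


# Toy usage with random tensors, just to show the expected shapes.
torch.manual_seed(0)
batch, dim, seq_len, vocab = 8, 256, 16, 1000
image_emb = torch.randn(batch, dim)
text_emb = torch.randn(batch, dim)
keep_mask = ensemble_keep_mask(torch.rand(3, batch), threshold=0.3)
mlm_logits = torch.randn(batch, seq_len, vocab)
mlm_labels = torch.randint(0, vocab, (batch, seq_len))
mlm_labels[:, ::2] = -100  # pretend only every other token was masked
print(eclip_style_loss(image_emb, text_emb, keep_mask, mlm_logits, mlm_labels))

In an actual pipeline the ensemble scores would presumably come from separately trained matching models, and the MLM batch would be drawn from an unpaired text corpus, which is what the abstract points to as the source of better single-modal generalization.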
