

Poster

mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video

Haiyang Xu · Qinghao Ye · Ming Yan · Yaya Shi · Jiabo Ye · Yuanhong Xu · Chenliang Li · Bin Bi · Qi Qian · Wei Wang · Guohai Xu · Ji Zhang · Songfang Huang · Fei Huang · Jingren Zhou

Exhibit Hall 1 #317

Abstract:

Recent years have witnessed a broad convergence of language, vision, and multi-modal pretraining. In this work, we present mPLUG-2, a new unified paradigm with a modularized design for multi-modal pretraining, which can benefit from modality collaboration while addressing the problem of modality entanglement. In contrast to predominant paradigms that rely solely on sequence-to-sequence generation or encoder-based instance discrimination, mPLUG-2 introduces a multi-module composition network that shares common universal modules for modality collaboration and disentangles modality-specific modules to deal with modality entanglement. Different modules can be flexibly selected for different understanding and generation tasks across all modalities, including text, image, and video. Empirical studies show that mPLUG-2 achieves state-of-the-art or competitive results on a broad range of more than 30 downstream tasks, spanning multi-modal image-text and video-text understanding and generation, as well as uni-modal text-only, image-only, and video-only understanding. Notably, mPLUG-2 sets new state-of-the-art results of 48.0 top-1 accuracy and 80.3 CIDEr on the challenging MSRVTT video question answering and video captioning tasks, with a far smaller model size and data scale. It also demonstrates strong zero-shot transferability on vision-language and video-language tasks. Code and models will be released at https://github.com/X-PLUG/mPLUG-2.
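The module-composition idea described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the released mPLUG-2 implementation (see the GitHub link above for that); all class, module, and task names below are hypothetical, and the layer sizes are placeholders. It shows only the core pattern: disentangled per-modality encoders, a shared universal module for cross-modal collaboration, and task-selected heads.

```python
import torch
import torch.nn as nn

def make_encoder(dim: int, layers: int = 2) -> nn.Module:
    """Small Transformer encoder used as a stand-in for any module."""
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=layers)

class ModularComposer(nn.Module):
    """Hypothetical sketch of a modularized multi-modal model:
    modality-specific encoders stay disentangled, while a shared
    universal module enables modality collaboration."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Disentangled modality modules (one per input modality).
        self.text_encoder = make_encoder(dim)
        self.visual_encoder = make_encoder(dim)
        # Shared universal module, reused across tasks and modalities.
        self.universal_module = make_encoder(dim)
        # Task-specific heads, selected per downstream task.
        self.heads = nn.ModuleDict({
            "classification": nn.Linear(dim, 1000),
            "captioning": nn.Linear(dim, 30522),  # vocab-sized output
        })

    def forward(self, text=None, visual=None, task="classification"):
        # Encode each available modality with its own module; a
        # uni-modal task simply omits the other input.
        feats = []
        if text is not None:
            feats.append(self.text_encoder(text))
        if visual is not None:
            feats.append(self.visual_encoder(visual))
        # Fuse token sequences through the shared universal module.
        fused = self.universal_module(torch.cat(feats, dim=1))
        # Mean-pool and apply the head selected for this task.
        return self.heads[task](fused.mean(dim=1))

# Example: a video-text task composes both encoders plus the shared module.
model = ModularComposer()
text = torch.randn(2, 16, 512)    # (batch, text tokens, dim)
frames = torch.randn(2, 64, 512)  # (batch, frame/patch tokens, dim)
logits = model(text=text, visual=frames, task="classification")
print(logits.shape)  # torch.Size([2, 1000])
```

Under this pattern, a text-only task would invoke only the text encoder plus the shared module, while an image-text or video-text task composes both encoders, which is the flexibility the abstract refers to.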
