

Poster
in
Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward

LAVA: Language Audio Vision Alignment for Pre-Training Transformers on Video Data

Sumanth Gurram · David Chan · Andy Fang · John Canny


Abstract:

Generating representations of video data is of key importance in advancing the field of machine perception. Most current techniques rely on hand-annotated data, which can be difficult to work with, expensive to generate, and hard to scale. In this work, we propose a novel learning approach based on contrastive learning, LAVA, which is capable of learning joint language, audio, and video representations in a self-supervised manner. We pre-train LAVA on the Kinetics 700 dataset using transformer encoders to learn representations for each modality. We then demonstrate that LAVA performs competitively with the current state-of-the-art self-supervised and weakly-supervised techniques on UCF-101 and HMDB-51 video action recognition while using a fraction of the unlabeled data.
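The abstract describes contrastive alignment of modality embeddings produced by per-modality transformer encoders. The sketch below illustrates the generic symmetric InfoNCE-style objective commonly used for such pairwise alignment; LAVA's exact loss, temperature, and encoder details are not given in the abstract, so everything here (function name, temperature value, batch shapes) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss between two batches of
    paired modality embeddings (e.g. video/audio). Hypothetical sketch;
    not LAVA's actual objective."""
    # L2-normalize so the dot product is cosine similarity
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (batch, batch) similarity matrix

    labels = np.arange(len(a))  # positive pair for row i is column i

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average over both retrieval directions (a -> b and b -> a)
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy usage: perfectly aligned pairs should score lower than random pairs
rng = np.random.default_rng(0)
video_emb = rng.normal(size=(8, 16))          # stand-in video embeddings
loss_aligned = info_nce(video_emb, video_emb)
loss_random = info_nce(video_emb, rng.normal(size=(8, 16)))
```

In a tri-modal setting like the one described, such a loss would typically be summed over the modality pairs (language-video, language-audio, audio-video), pulling matching clips together and pushing mismatched clips apart in a shared embedding space.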
