Time Is MattEr: Temporal Self-supervision for Video Transformers
Sukmin Yun · Jaehyung Kim · Dongyoon Han · Hwanjun Song · Jung-Woo Ha · Jinwoo Shin

Tue Jul 19 03:30 PM -- 05:30 PM (PDT) @ Hall E #105

Understanding the temporal dynamics of video is an essential aspect of learning better video representations. Recently, transformer-based architectures have been extensively explored for video tasks due to their capability to capture long-term dependencies in input sequences. However, we found that these Video Transformers are still biased toward learning spatial dynamics rather than temporal ones, and that debiasing this spurious correlation is critical for their performance. Based on these observations, we design simple yet effective self-supervised tasks for video models to better learn temporal dynamics. Specifically, to debias the spatial bias, our method learns the temporal order of video frames as extra self-supervision and enforces low-confidence outputs on randomly shuffled frames. Our method also learns the temporal flow direction of video tokens across consecutive frames to strengthen the correlation with temporal dynamics. Under various video action recognition tasks, we demonstrate the effectiveness of our method and its compatibility with state-of-the-art Video Transformers.
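The debiasing idea in the abstract — that a temporally shuffled clip should produce a low-confidence prediction — can be sketched with a confidence-penalty loss. Below is a minimal NumPy illustration, not the authors' implementation: the function names (`debias_loss`, `shuffle_frames`) and the specific choice of cross-entropy to the uniform distribution are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def debias_loss(logits_shuffled):
    """Penalize confident predictions on temporally shuffled clips by
    taking the cross-entropy between the uniform distribution and the
    model output; the minimum is reached at a uniform prediction."""
    p = softmax(logits_shuffled)
    # H(uniform, p) = -(1/K) * sum_k log p_k, averaged over the batch
    return -np.mean(np.log(p + 1e-12))

def shuffle_frames(clip, rng):
    """Randomly permute the frame (time) axis of a (T, H, W, C) clip,
    returning the shuffled clip and the permutation used."""
    perm = rng.permutation(clip.shape[0])
    return clip[perm], perm
```

A uniform output attains the minimum loss (log K for K classes), while a confident output on a shuffled clip is penalized, which is the intended debiasing pressure.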

Author Information

Sukmin Yun (KAIST)
Jaehyung Kim (KAIST)
Dongyoon Han (NAVER AI Lab)
Hwanjun Song (NAVER AI Lab)
Jung-Woo Ha (NAVER AI Lab)
Jung-Woo Ha

Jung-Woo Ha received his BS and PhD degrees in computer science from Seoul National University in 2004 and 2015, respectively. He received the outstanding PhD dissertation award for the Fall 2014 semester from the Department of Computer Science at Seoul National University. He worked as a research scientist and tech lead at NAVER LABS and as research head of NAVER CLOVA. Currently, he is the head of NAVER AI Lab at NAVER Cloud. He has contributed to the AI research community as Datasets and Benchmarks Co-chair for NeurIPS and as Social Co-chair for ICML 2023 and NeurIPS 2022. He has also served on senior technical program committees, including as Area Chair for NeurIPS 2022 and 2023, Area Chair for ICML 2023, and Senior Area Chair for COLING. His research interests include large language models, generative models, multimodal representation learning, and their practical applications to real-world problems. In particular, he has mainly focused on practical task definitions and evaluation protocols for continual learning in various domains.

Jinwoo Shin (KAIST)
