

Poster

Prototypical Transformer As Unified Motion Learners

Cheng Han · Yawen Lu · Guohao Sun · James Liang · Zhiwen Cao · Qifan Wang · Qiang Guan · Sohail Dianat · Raghuveer Rao · Tong Geng · ZHIQIANG TAO · Dongfang Liu

Hall C 4-9 #910
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

In this work, we introduce the Prototypical Transformer (ProtoFormer), a general and unified framework that approaches various motion tasks from a prototype perspective. ProtoFormer seamlessly integrates prototype learning with the Transformer through two designs tailored to motion dynamics. First, Cross-Attention Prototyping discovers prototypes based on signature motion patterns, providing transparency in understanding motion scenes. Second, Latent Synchronization guides feature representation learning via prototypes, effectively mitigating the problem of motion uncertainty. Empirical results demonstrate that our approach achieves competitive performance on popular motion tasks such as optical flow and scene depth estimation. Furthermore, it generalizes to various downstream tasks, including object tracking and video stabilization.
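The core idea of Cross-Attention Prototyping can be illustrated with a minimal sketch: a small set of learnable prototype vectors act as queries that cross-attend over flattened motion-feature tokens, so each prototype aggregates one signature motion pattern. The function name, tensor shapes, and single-head formulation below are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_prototyping(features, prototypes):
    """One single-head cross-attention step (illustrative sketch).

    features:   (N, d) flattened motion-feature tokens (keys/values)
    prototypes: (K, d) learnable prototype vectors (queries)
    returns:    (K, d) updated prototypes, each a soft mixture of
                tokens sharing a similar motion pattern
    """
    d = prototypes.shape[-1]
    # Scaled dot-product attention of prototypes over tokens: (K, N)
    attn = softmax(prototypes @ features.T / np.sqrt(d), axis=-1)
    # Each prototype becomes an attention-weighted token summary.
    return attn @ features

rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 32))   # N=64 tokens, d=32
protos = rng.standard_normal((8, 32))   # K=8 prototypes
updated = cross_attention_prototyping(feats, protos)
print(updated.shape)  # (8, 32)
```

In the paper's framing, the attention maps themselves provide interpretability: inspecting which tokens a prototype attends to reveals the motion pattern it captures.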
