Poster
Prototypical Transformer As Unified Motion Learners
Cheng Han · Yawen Lu · Guohao Sun · James Liang · Zhiwen Cao · Qifan Wang · Qiang Guan · Sohail Dianat · Raghuveer Rao · Tong Geng · Zhiqiang Tao · Dongfang Liu
Hall C 4-9 #910
In this work, we introduce the Prototypical Transformer (ProtoFormer), a general and unified framework that approaches various motion tasks from a prototype perspective. ProtoFormer seamlessly integrates prototype learning with the Transformer by explicitly accounting for motion dynamics, through two novel designs. First, Cross-Attention Prototyping discovers prototypes based on signature motion patterns, providing transparency in understanding motion scenes. Second, Latent Synchronization guides feature representation learning via the prototypes, effectively mitigating the problem of motion uncertainty. Empirical results demonstrate that our approach achieves competitive performance on popular motion tasks such as optical flow and scene depth estimation. Furthermore, it generalizes to various downstream tasks, including object tracking and video stabilization.
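The abstract describes the two designs only at a high level. The snippet below is a minimal PyTorch sketch of one way to read them, not the authors' implementation: the module names (`CrossAttentionPrototyping`, `LatentSynchronization`), the use of learnable prototype tokens as cross-attention queries, the soft-assignment synchronization step, and all hyperparameters are illustrative assumptions; consult the paper for the actual architecture.

```python
# Minimal sketch (assumptions, not the authors' code) of the two designs named
# in the abstract: prototypes discovered via cross-attention, and features
# refined ("synchronized") against those prototypes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionPrototyping(nn.Module):
    """Learnable prototype tokens cross-attend over motion-feature tokens."""

    def __init__(self, num_prototypes: int = 8, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, D) flattened motion-feature tokens from some encoder.
        B = feats.shape[0]
        queries = self.prototypes.unsqueeze(0).expand(B, -1, -1)  # (B, K, D)
        protos, _ = self.attn(queries, feats, feats)  # prototypes summarize motion patterns
        return self.norm(protos)                      # (B, K, D)


class LatentSynchronization(nn.Module):
    """Softly assign each feature token to the prototypes and mix prototype
    content back into the features (one plausible reading of 'guiding feature
    representation learning via prototypes')."""

    def __init__(self, dim: int = 256, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
        # Cosine-similarity soft assignment of tokens to prototypes.
        sim = F.normalize(feats, dim=-1) @ F.normalize(protos, dim=-1).transpose(1, 2)
        assign = (sim / self.temperature).softmax(dim=-1)  # (B, N, K)
        synced = assign @ self.proj(protos)                # (B, N, D)
        return feats + synced                              # residual refinement


if __name__ == "__main__":
    feats = torch.randn(2, 1024, 256)            # e.g. tokens from a flow/depth encoder
    protos = CrossAttentionPrototyping()(feats)  # discover motion prototypes
    refined = LatentSynchronization()(feats, protos)
    print(protos.shape, refined.shape)           # (2, 8, 256) and (2, 1024, 256)
```

In this sketch the prototypes act as a compact, inspectable summary of the motion scene (the "transparency" claim), while the synchronization step pulls uncertain feature tokens toward their nearest prototypes; the actual losses and integration points in ProtoFormer are detailed in the paper.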