

Poster

A Simple Convolution Injector for Vision Transformer Towards Effective Adaptation in Visuo-Motor Control

Dongyoon Hwang · Byungkun Lee · Hojoon Lee · Hyunseung Kim · Jaegul Choo


Abstract:

Vision Transformers (ViTs), when paired with large-scale pretraining, have shown remarkable performance across various computer vision tasks, which is mainly attributed to their weak inductive bias. However, while this weak inductive bias aids pretraining scalability, it may hinder the effective adaptation of ViTs to visuo-motor control due to the absence of control-centric inductive biases. These missing biases include spatial locality and translation equivariance, which convolutions naturally provide. To this end, we introduce Convolution Injector (CoIn), an add-on module that injects convolutions, rich in locality and equivariance biases, into a pretrained ViT for effective adaptation to visuo-motor control. We evaluate CoIn with three distinct types of pretrained ViTs (CLIP, MVP, VC-1) across 12 varied control tasks in three separate domains (Adroit, MetaWorld, DMC), and demonstrate that CoIn consistently enhances control task performance across all experimented environments and models, validating the effectiveness of providing pretrained ViTs with control-centric biases.
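The abstract describes the core idea (fusing convolutional features, which carry locality and equivariance biases, into a pretrained ViT's patch tokens) without giving architectural details. A minimal sketch of one plausible injection scheme, using numpy in place of a deep-learning framework: reshape patch tokens to their spatial grid, apply a small convolution, and fuse the result residually. All names (`coin_inject`, the shared toy kernel, the residual fusion) are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 2D convolution with zero padding and a kernel shared across
    channels -- a toy stand-in for the learned convolutions CoIn would inject.
    x: (H, W, D) feature grid, kernel: (k, k). Local and (away from borders)
    translation-equivariant, the two biases the abstract highlights."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, D = x.shape
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]  # (k, k, D) local neighborhood
            out[i, j] = np.tensordot(kernel, patch, axes=([0, 1], [0, 1]))
    return out

def coin_inject(vit_tokens, grid_hw, kernel):
    """Hypothetical CoIn-style injection: reshape the ViT's patch tokens to a
    spatial grid, convolve to obtain locality-biased features, and fuse them
    back into the tokens with a residual addition."""
    H, W = grid_hw
    D = vit_tokens.shape[-1]
    grid = vit_tokens.reshape(H, W, D)
    conv_feat = conv2d_same(grid, kernel)
    return (grid + conv_feat).reshape(H * W, D)  # residual fusion

tokens = np.random.randn(16, 8)      # e.g. a 4x4 patch grid of 8-dim tokens
kernel = np.ones((3, 3)) / 9.0       # toy averaging kernel
fused = coin_inject(tokens, (4, 4), kernel)
print(fused.shape)                   # (16, 8): same token layout as the input
```

Because the injected branch is additive and shape-preserving, it can in principle be attached to a frozen pretrained ViT without disturbing its token interface, which matches the abstract's framing of CoIn as an "add-on module".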
