Tutorial
Self-Attention for Computer Vision
Aravind Srinivas · Prajit Ramachandran · Ashish Vaswani

Mon Jul 19 08:00 PM -- 10:45 PM (PDT) @ Virtual

This tutorial covers the application of self-attention mechanisms in computer vision. Self-attention has been widely adopted in NLP, where the fully attentional Transformer model has largely replaced RNNs and now underpins state-of-the-art language understanding models such as GPT, BERT, XLNet, T5, ELECTRA, and Meena. There has consequently been tremendous interest in whether self-attention can have a similarly far-reaching impact in computer vision. However, vision tasks have different properties from language tasks, so much research has been devoted to finding the best ways to apply self-attention to visual models. This tutorial surveys many of the applications of self-attention in vision to give the viewer a broad and precise understanding of this subfield.
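As background for the talks, the core operation is scaled dot-product self-attention: each token emits a query, key, and value; attention weights are a softmax over query-key similarities; the output is a weighted sum of values. A minimal NumPy sketch, treating flattened image patches as tokens (the function, shapes, and random inputs here are illustrative assumptions, not any presenter's implementation):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    x: (n, d) array -- n tokens (e.g. flattened image patches), each with d features.
    w_q, w_k, w_v: (d, d) projection matrices for queries, keys, values.
    Returns an (n, d) array where each token is a weighted mix of all values.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # project to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # pairwise similarity, scaled by sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # numerically stable softmax over keys
    return weights @ v                              # weighted sum of value vectors

# Example: 16 "patch" tokens from a 4x4 image grid, 8-dim features (hypothetical sizes).
rng = np.random.default_rng(0)
n, d = 16, 8
x = rng.standard_normal((n, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (16, 8): one output vector per patch token
```

Because every token attends to every other, the cost is quadratic in the number of tokens, which is why applying this to dense pixel grids (rather than coarse patches) is one of the central design questions the talks address.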

Mon 8:00 p.m. - 8:45 p.m.
Self-Attention for Computer Vision (talk)   
Ashish Vaswani
Mon 8:45 p.m. - 9:00 p.m.
Self-Attention for Computer Vision (break)
Mon 9:00 p.m. - 9:45 p.m.
Self-Attention for Computer Vision (talk)   
Prajit Ramachandran
Mon 9:45 p.m. - 10:00 p.m.
Self-Attention for Computer Vision (break)
Mon 10:00 p.m. - 10:45 p.m.
Self-Attention for Computer Vision (talk)   
Aravind Srinivas

Author Information

Aravind Srinivas (UC Berkeley)
Prajit Ramachandran (Google)
Ashish Vaswani (Google Brain)
