Equivariant Transformer Networks
How can prior knowledge about the transformation invariances of a domain be incorporated into the architecture of a neural network? We propose Equivariant Transformers (ETs), a family of differentiable image-to-image mappings that improve the robustness of models to pre-defined continuous transformation groups. Through the use of specially-derived canonical coordinate systems, ETs incorporate functions that are equivariant by construction with respect to these transformations. We show empirically that ETs can be flexibly composed to improve model robustness under more complicated transformation groups involving several parameters. On a real-world image classification task, ETs improve the sample efficiency of ResNet classifiers, achieving relative improvements in error rate of up to 15% in the limited-data regime while increasing model parameter count by less than 1%.
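The core idea behind canonical coordinates is that a chosen transformation group acts as a simple translation once the image is resampled in the right coordinate system; for example, rotation about the origin becomes a circular shift along the angle axis of a log-polar grid. The sketch below is only an illustration of that property on a synthetic continuous image (the function names and grid sizes are my own choices, not the authors' implementation):

```python
import numpy as np

# A continuous "image": intensity as a function of (x, y).
def image(x, y):
    return np.exp(-((x - 0.3) ** 2 + y ** 2) / 0.05)

def rotate(f, theta):
    # Rotate the image f by theta about the origin (pull back coordinates).
    c, s = np.cos(theta), np.sin(theta)
    return lambda x, y: f(c * x + s * y, -s * x + c * y)

def log_polar_grid(f, n_r=32, n_t=32):
    # Resample f on a log-polar grid: rows index log-radius, columns index angle.
    log_r = np.linspace(-2.0, 0.0, n_r)
    theta = np.arange(n_t) * (2 * np.pi / n_t)
    R, T = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    return f(R * np.cos(T), R * np.sin(T))

n_t = 32
k = 5  # rotate by k angular grid steps
g0 = log_polar_grid(image, n_t=n_t)
g1 = log_polar_grid(rotate(image, 2 * np.pi * k / n_t), n_t=n_t)

# In canonical (log-polar) coordinates, rotation acts as a circular shift
# along the angle axis, so shifting g0 by k columns reproduces g1.
assert np.allclose(np.roll(g0, k, axis=1), g1)
```

A translation-equivariant operation (such as an ordinary convolution) applied in these coordinates is therefore rotation-equivariant in the original image plane; the same construction with the log-radius axis handles scaling.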
Author Information
Kai Sheng Tai (Stanford University)
Peter Bailis (Stanford University)
Gregory Valiant (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Equivariant Transformer Networks (Wed. Jun 12th, 10:05–10:10 PM, Room Hall A)
More from the Same Authors
- 2023 Poster: One-sided Matrix Completion from Two Observations Per Row (Steven Cao · Percy Liang · Greg Valiant)
- 2021 Poster: Sinkhorn Label Allocation: Semi-Supervised Classification via Annealed Self-Training (Kai Sheng Tai · Peter Bailis · Gregory Valiant)
- 2021 Spotlight: Sinkhorn Label Allocation: Semi-Supervised Classification via Annealed Self-Training (Kai Sheng Tai · Peter Bailis · Gregory Valiant)
- 2020 Poster: Sample Amplification: Increasing Dataset Size even when Learning is Impossible (Brian Axelrod · Shivam Garg · Vatsal Sharan · Gregory Valiant)
- 2019 Poster: LIT: Learned Intermediate Representation Training for Model Compression (Animesh Koratana · Daniel Kang · Peter Bailis · Matei Zaharia)
- 2019 Oral: LIT: Learned Intermediate Representation Training for Model Compression (Animesh Koratana · Daniel Kang · Peter Bailis · Matei Zaharia)
- 2019 Poster: Compressed Factorization: Fast and Accurate Low-Rank Factorization of Compressively-Sensed Data (Vatsal Sharan · Kai Sheng Tai · Peter Bailis · Gregory Valiant)
- 2019 Oral: Compressed Factorization: Fast and Accurate Low-Rank Factorization of Compressively-Sensed Data (Vatsal Sharan · Kai Sheng Tai · Peter Bailis · Gregory Valiant)
- 2019 Poster: Rehashing Kernel Evaluation in High Dimensions (Paris Siminelakis · Kexin Rong · Peter Bailis · Moses Charikar · Philip Levis)
- 2019 Poster: Maximum Likelihood Estimation for Learning Populations of Parameters (Ramya Korlakai Vinayak · Weihao Kong · Gregory Valiant · Sham Kakade)
- 2019 Oral: Rehashing Kernel Evaluation in High Dimensions (Paris Siminelakis · Kexin Rong · Peter Bailis · Moses Charikar · Philip Levis)
- 2019 Oral: Maximum Likelihood Estimation for Learning Populations of Parameters (Ramya Korlakai Vinayak · Weihao Kong · Gregory Valiant · Sham Kakade)
- 2017 Poster: Estimating the unseen from multiple populations (Aditi Raghunathan · Greg Valiant · James Zou)
- 2017 Poster: Orthogonalized ALS: A Theoretically Principled Tensor Decomposition Algorithm for Practical Use (Vatsal Sharan · Gregory Valiant)
- 2017 Talk: Estimating the unseen from multiple populations (Aditi Raghunathan · Greg Valiant · James Zou)
- 2017 Talk: Orthogonalized ALS: A Theoretically Principled Tensor Decomposition Algorithm for Practical Use (Vatsal Sharan · Gregory Valiant)