Adversarial training augments the training set with perturbations to improve the robust error (over worst-case perturbations), but it often leads to an increase in the standard error (on unperturbed test inputs). Previous explanations for this tradeoff rely on the assumption that no predictor in the hypothesis class has low standard and robust error. In this work, we precisely characterize the effect of augmentation on the standard error in linear regression when the optimal linear predictor has zero standard and robust error. In particular, we show that the standard error can increase even when the augmented perturbations have noiseless observations from the optimal linear predictor. We then prove that the recently proposed robust self-training (RST) estimator improves robust error without sacrificing standard error for noiseless linear regression. Empirically, for neural networks, we find that RST with different adversarial training methods improves both standard and robust error for random and adversarial rotations and adversarial ℓ∞ perturbations on CIFAR-10.
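To make the abstract's claims concrete, here is a minimal simulation sketch of the noiseless linear-regression setting. This is not the paper's code: the covariance `Sigma`, the optimal predictor `theta_star`, the specific perturbation, the helper names (`min_norm`, `std_error`), and the exact RST formulation below (a constrained least-squares fit toward the standard model's pseudo-labels) are all illustrative assumptions. The minimum-norm interpolant plays the role of the standard estimator; augmentation adds a perturbed input labeled noiselessly by the optimal predictor; the RST-style estimator picks, among interpolants of the augmented data, the one closest to the standard model's predictions on unlabeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test-distribution covariance: coordinate 2 is rarely seen at
# test time, coordinate 3 matters a lot. Chosen so the effect is visible.
Sigma = np.diag([1.0, 0.01, 1.0])
theta_star = np.array([0.0, 1.0, 1.0])  # optimal predictor: zero standard error

def std_error(theta):
    """Population standard error E[(x' theta - x' theta_star)^2], x ~ N(0, Sigma)."""
    r = theta - theta_star
    return r @ Sigma @ r

def min_norm(X, y):
    """Minimum-norm interpolant, the solution gradient descent converges to."""
    return np.linalg.pinv(X) @ y

# One labeled training point with a noiseless label.
X = np.array([[1.0, 0.0, 0.0]])
y = X @ theta_star

# Augmentation: a perturbed input, also labeled noiselessly by theta_star.
X_aug = np.vstack([X, [[1.0, 1.0, -0.5]]])
y_aug = X_aug @ theta_star

theta_std = min_norm(X, y)          # standard estimator
theta_adv = min_norm(X_aug, y_aug)  # augmented ("adversarially trained") estimator

# RST-style estimator: among interpolants of the augmented data, pick the one
# whose predictions on plentiful unlabeled data stay closest to theta_std's
# pseudo-labels. Interpolants are theta_adv + N z, with N a null-space basis.
X_unl = rng.multivariate_normal(np.zeros(3), Sigma, size=500)
N = np.linalg.svd(X_aug)[2][2:].T   # null space of X_aug (rank 2, so 1-dim)
z = np.linalg.lstsq(X_unl @ N, X_unl @ (theta_std - theta_adv), rcond=None)[0]
theta_rst = theta_adv + N @ z

for name, th in [("standard", theta_std), ("augmented", theta_adv), ("RST", theta_rst)]:
    print(f"{name:9s} standard error = {std_error(th):.3f}")
# On this toy instance: augmented > standard, while RST stays at or below the
# standard error and also fits the perturbed point (lower robust error).
```

The mechanism the sketch illustrates is the mismatch between the ℓ2 inductive bias of the min-norm interpolant and the test covariance: fitting the correctly labeled perturbed point rotates the interpolant along a direction that matters at test time, raising standard error, while the unlabeled data anchors the RST-style fit back toward the standard predictor.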
Author Information
Aditi Raghunathan (Stanford)
Sang Michael Xie (Stanford University)
Fanny Yang (ETH)
John Duchi (Stanford University)
Percy Liang (Stanford University)
More from the Same Authors
- 2021 : Maximizing the robust margin provably overfits on noiseless data »
  Fanny Yang · Reinhard Heckel · Michael Aerni · Alexandru Tifrea · Konstantin Donhauser
- 2021 : Surprising benefits of ridge regularization for noiseless regression »
  Konstantin Donhauser · Alexandru Tifrea · Michael Aerni · Reinhard Heckel · Fanny Yang
- 2021 : Adapting to function difficulty and growth conditions in private optimization »
  Hilal Asi · Daniel A Levy · John Duchi
- 2021 : Novel disease detection using ensembles with regularized disagreement »
  Alexandru Tifrea · Eric Stavarache · Fanny Yang
- 2022 : Why adversarial training can hurt robust accuracy »
  Jacob Clarysse · Julia Hörrmann · Fanny Yang
- 2022 : LinkBERT: Language Model Pretraining with Document Link Knowledge »
  Michihiro Yasunaga · Jure Leskovec · Percy Liang
- 2022 : Provable Concept Learning for Interpretable Predictions Using Variational Autoencoders »
  Armeen Taeb · Nicolò Ruggeri · Carina Schnuck · Fanny Yang
- 2023 : Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift »
  Saurabh Garg · Amrith Setlur · Zachary Lipton · Sivaraman Balakrishnan · Virginia Smith · Aditi Raghunathan
- 2023 : Why is SAM Robust to Label Noise? »
  Christina Baek · Zico Kolter · Aditi Raghunathan
- 2023 : Sharpness-Aware Minimization Enhances Feature Diversity »
  Jacob Mitchell Springer · Vaishnavh Nagarajan · Aditi Raghunathan
- 2023 : DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining »
  Sang Michael Xie · Hieu Pham · Xuanyi Dong · Nan Du · Hanxiao Liu · Yifeng Lu · Percy Liang · Quoc Le · Tengyu Ma · Adams Wei Yu
- 2023 : T-MARS: Improving Visual Representations by Circumventing Text Feature Learning »
  Pratyush Maini · Sachin Goyal · Zachary Lipton · Zico Kolter · Aditi Raghunathan
- 2023 : Differentially Private Heavy Hitters using Federated Analytics »
  Karan Chadha · Junye Chen · John Duchi · Vitaly Feldman · Hanieh Hashemi · Omid Javidbakht · Audra McMillan · Kunal Talwar
- 2023 : Retrieval-Augmented Multimodal Language Modeling »
  Michihiro Yasunaga · Armen Aghajanyan · Weijia Shi · Rich James · Jure Leskovec · Percy Liang · Mike Lewis · Luke Zettlemoyer · Wen-tau Yih
- 2023 : Lexinvariant Language Models »
  Qian Huang · Eric Zelikman · Sarah Chen · Yuhuai Wu · Greg Valiant · Percy Liang
- 2023 : PRODIGY: Enabling In-context Learning Over Graphs »
  Qian Huang · Hongyu Ren · Peng Chen · Gregor Kržmanc · Daniel Zeng · Percy Liang · Jure Leskovec
- 2023 : Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training »
  Hong Liu · Zhiyuan Li · David Hall · Percy Liang · Tengyu Ma
- 2023 : Panel Discussion »
  Megan Ansdell · Nathan Lambert · Ludwig Schmidt · Praveen Paritosh · Sang Michael Xie
- 2023 Workshop: ES-FoMo: Efficient Systems for Foundation Models »
  Julien Launay · Daniel Y Fu · Tri Dao · Daniel Hesslow · Beidi Chen · Azalia Mirhoseini · Percy Liang
- 2023 : Aditi Raghunathan »
  Aditi Raghunathan
- 2023 Oral: Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models »
  Hong Liu · Sang Michael Xie · Zhiyuan Li · Tengyu Ma
- 2023 Poster: Margin-based sampling in high dimensions: When being active is less efficient than staying passive »
  Alexandru Tifrea · Jacob Clarysse · Fanny Yang
- 2023 Poster: Contextual Reliability: When Different Features Matter in Different Contexts »
  Gaurav Ghosal · Amrith Setlur · Daniel S Brown · Anca Dragan · Aditi Raghunathan
- 2023 Poster: Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models »
  Hong Liu · Sang Michael Xie · Zhiyuan Li · Tengyu Ma
- 2023 Poster: Whose Opinions Do Language Models Reflect? »
  Shibani Santurkar · Esin Durmus · Faisal Ladhak · Cinoo Lee · Percy Liang · Tatsunori Hashimoto
- 2023 Poster: FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU »
  Ying Sheng · Lianmin Zheng · Binhang Yuan · Zhuohan Li · Max Ryabinin · Beidi Chen · Percy Liang · Christopher Re · Ion Stoica · Ce Zhang
- 2023 Oral: FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU »
  Ying Sheng · Lianmin Zheng · Binhang Yuan · Zhuohan Li · Max Ryabinin · Beidi Chen · Percy Liang · Christopher Re · Ion Stoica · Ce Zhang
- 2023 Oral: Whose Opinions Do Language Models Reflect? »
  Shibani Santurkar · Esin Durmus · Faisal Ladhak · Cinoo Lee · Percy Liang · Tatsunori Hashimoto
- 2023 Oral: Evaluating Self-Supervised Learning via Risk Decomposition »
  Yann Dubois · Tatsunori Hashimoto · Percy Liang
- 2023 Poster: Evaluating Self-Supervised Learning via Risk Decomposition »
  Yann Dubois · Tatsunori Hashimoto · Percy Liang
- 2023 Poster: Automatically Auditing Large Language Models via Discrete Optimization »
  Erik Jones · Anca Dragan · Aditi Raghunathan · Jacob Steinhardt
- 2023 Poster: CocktailSGD: Fine-tuning Foundation Models over 500Mbps Networks »
  Jue Wang · Yucheng Lu · Binhang Yuan · Beidi Chen · Percy Liang · Chris De Sa · Christopher Re · Ce Zhang
- 2023 Poster: Out-of-Domain Robustness via Targeted Augmentations »
  Irena Gao · Shiori Sagawa · Pang Wei Koh · Tatsunori Hashimoto · Percy Liang
- 2023 Poster: One-sided Matrix Completion from Two Observations Per Row »
  Steven Cao · Percy Liang · Greg Valiant
- 2023 Poster: Retrieval-Augmented Multimodal Language Modeling »
  Michihiro Yasunaga · Armen Aghajanyan · Weijia Shi · Richard James · Jure Leskovec · Percy Liang · Mike Lewis · Luke Zettlemoyer · Scott Yih
- 2022 : Discussion Panel »
  Percy Liang · Léon Bottou · Jayashree Kalpathy-Cramer · Alex Smola
- 2022 Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward »
  Huaxiu Yao · Hugo Larochelle · Percy Liang · Colin Raffel · Jian Tang · Ying Wei · Saining Xie · Eric Xing · Chelsea Finn
- 2022 Poster: Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization »
  Karan Chadha · Gary Cheng · John Duchi
- 2022 Poster: Fast rates for noisy interpolation require rethinking the effect of inductive bias »
  Konstantin Donhauser · Nicolò Ruggeri · Stefan Stojanovic · Fanny Yang
- 2022 Poster: Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation »
  Kendrick Shen · Robbie Jones · Ananya Kumar · Sang Michael Xie · Jeff Z. HaoChen · Tengyu Ma · Percy Liang
- 2022 Oral: Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation »
  Kendrick Shen · Robbie Jones · Ananya Kumar · Sang Michael Xie · Jeff Z. HaoChen · Tengyu Ma · Percy Liang
- 2022 Spotlight: Fast rates for noisy interpolation require rethinking the effect of inductive bias »
  Konstantin Donhauser · Nicolò Ruggeri · Stefan Stojanovic · Fanny Yang
- 2022 Spotlight: Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization »
  Karan Chadha · Gary Cheng · John Duchi
- 2022 Poster: Private optimization in the interpolation regime: faster rates and hardness results »
  Hilal Asi · Karan Chadha · Gary Cheng · John Duchi
- 2022 Spotlight: Private optimization in the interpolation regime: faster rates and hardness results »
  Hilal Asi · Karan Chadha · Gary Cheng · John Duchi
- 2021 Poster: How rotational invariance of common kernels prevents generalization in high dimensions »
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2021 Poster: Private Adaptive Gradient Methods for Convex Optimization »
  Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar
- 2021 Poster: WILDS: A Benchmark of in-the-Wild Distribution Shifts »
  Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang
- 2021 Poster: Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization »
  Sang Michael Xie · Tengyu Ma · Percy Liang
- 2021 Spotlight: How rotational invariance of common kernels prevents generalization in high dimensions »
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2021 Spotlight: Private Adaptive Gradient Methods for Convex Optimization »
  Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar
- 2021 Oral: WILDS: A Benchmark of in-the-Wild Distribution Shifts »
  Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang
- 2021 Oral: Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization »
  Sang Michael Xie · Tengyu Ma · Percy Liang
- 2021 Poster: Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization »
  John Miller · Rohan Taori · Aditi Raghunathan · Shiori Sagawa · Pang Wei Koh · Vaishaal Shankar · Percy Liang · Yair Carmon · Ludwig Schmidt
- 2021 Poster: Break-It-Fix-It: Unsupervised Learning for Program Repair »
  Michihiro Yasunaga · Percy Liang
- 2021 Oral: Break-It-Fix-It: Unsupervised Learning for Program Repair »
  Michihiro Yasunaga · Percy Liang
- 2021 Spotlight: Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization »
  John Miller · Rohan Taori · Aditi Raghunathan · Shiori Sagawa · Pang Wei Koh · Vaishaal Shankar · Percy Liang · Yair Carmon · Ludwig Schmidt
- 2021 Poster: Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices »
  Evan Liu · Aditi Raghunathan · Percy Liang · Chelsea Finn
- 2021 Spotlight: Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices »
  Evan Liu · Aditi Raghunathan · Percy Liang · Chelsea Finn
- 2021 Poster: Catformer: Designing Stable Transformers via Sensitivity Analysis »
  Jared Quincy Davis · Albert Gu · Krzysztof Choromanski · Tri Dao · Christopher Re · Chelsea Finn · Percy Liang
- 2021 Poster: Just Train Twice: Improving Group Robustness without Training Group Information »
  Evan Liu · Behzad Haghgoo · Annie Chen · Aditi Raghunathan · Pang Wei Koh · Shiori Sagawa · Percy Liang · Chelsea Finn
- 2021 Spotlight: Catformer: Designing Stable Transformers via Sensitivity Analysis »
  Jared Quincy Davis · Albert Gu · Krzysztof Choromanski · Tri Dao · Christopher Re · Chelsea Finn · Percy Liang
- 2021 Oral: Just Train Twice: Improving Group Robustness without Training Group Information »
  Evan Liu · Behzad Haghgoo · Annie Chen · Aditi Raghunathan · Pang Wei Koh · Shiori Sagawa · Percy Liang · Chelsea Finn
- 2020 : QA for invited talk 3 Yang »
  Fanny Yang
- 2020 : Invited talk 3 Yang »
  Fanny Yang
- 2020 : Keynote #3 Percy Liang »
  Percy Liang
- 2020 Poster: Concept Bottleneck Models »
  Pang Wei Koh · Thao Nguyen · Yew Siang Tang · Stephen Mussmann · Emma Pierson · Been Kim · Percy Liang
- 2020 Poster: Graph-based, Self-Supervised Program Repair from Diagnostic Feedback »
  Michihiro Yasunaga · Percy Liang
- 2020 Poster: Understanding Self-Training for Gradual Domain Adaptation »
  Ananya Kumar · Tengyu Ma · Percy Liang
- 2020 Poster: An Investigation of Why Overparameterization Exacerbates Spurious Correlations »
  Shiori Sagawa · Aditi Raghunathan · Pang Wei Koh · Percy Liang
- 2020 Poster: Robustness to Spurious Correlations via Human Annotations »
  Megha Srivastava · Tatsunori Hashimoto · Percy Liang
- 2020 Poster: Feature Noise Induces Loss Discrepancy Across Groups »
  Fereshte Khani · Percy Liang
- 2020 Poster: FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis »
  Aman Sinha · Matthew O'Kelly · Hongrui Zheng · Rahul Mangharam · John Duchi · Russ Tedrake
- 2020 Poster: DROCC: Deep Robust One-Class Classification »
  Sachin Goyal · Aditi Raghunathan · Moksh Jain · Harsha Vardhan Simhadri · Prateek Jain
- 2019 Workshop: Workshop on the Security and Privacy of Machine Learning »
  Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song
- 2018 Poster: On the Relationship between Data Efficiency and Error for Uncertainty Sampling »
  Stephen Mussmann · Percy Liang
- 2018 Poster: Fairness Without Demographics in Repeated Loss Minimization »
  Tatsunori Hashimoto · Megha Srivastava · Hongseok Namkoong · Percy Liang
- 2018 Oral: Fairness Without Demographics in Repeated Loss Minimization »
  Tatsunori Hashimoto · Megha Srivastava · Hongseok Namkoong · Percy Liang
- 2018 Oral: On the Relationship between Data Efficiency and Error for Uncertainty Sampling »
  Stephen Mussmann · Percy Liang
- 2017 Poster: World of Bits: An Open-Domain Platform for Web-Based Agents »
  Tim Shi · Andrej Karpathy · Jim Fan · Jonathan Hernandez · Percy Liang
- 2017 Poster: Estimating the unseen from multiple populations »
  Aditi Raghunathan · Greg Valiant · James Zou
- 2017 Talk: World of Bits: An Open-Domain Platform for Web-Based Agents »
  Tim Shi · Andrej Karpathy · Jim Fan · Jonathan Hernandez · Percy Liang
- 2017 Poster: Adaptive Sampling Probabilities for Non-Smooth Optimization »
  Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi
- 2017 Poster: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford
- 2017 Poster: Developing Bug-Free Machine Learning Systems With Formal Mathematics »
  Daniel Selsam · Percy Liang · David L Dill
- 2017 Talk: Developing Bug-Free Machine Learning Systems With Formal Mathematics »
  Daniel Selsam · Percy Liang · David L Dill
- 2017 Talk: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions »
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford
- 2017 Talk: Adaptive Sampling Probabilities for Non-Smooth Optimization »
  Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi
- 2017 Talk: Estimating the unseen from multiple populations »
  Aditi Raghunathan · Greg Valiant · James Zou
- 2017 Poster: Convexified Convolutional Neural Networks »
  Yuchen Zhang · Percy Liang · Martin Wainwright
- 2017 Poster: Understanding Black-box Predictions via Influence Functions »
  Pang Wei Koh · Percy Liang
- 2017 Talk: Convexified Convolutional Neural Networks »
  Yuchen Zhang · Percy Liang · Martin Wainwright
- 2017 Talk: Understanding Black-box Predictions via Influence Functions »
  Pang Wei Koh · Percy Liang