We focus on prediction problems with structured outputs that are subject to output validity constraints, e.g. pseudocode-to-code translation where the code must compile. While labeled input-output pairs are expensive to obtain, "unlabeled" outputs, i.e. outputs without corresponding inputs, are freely available (e.g. code on GitHub) and provide information about output validity. Pre-training captures this structure by training a denoiser to denoise corrupted versions of unlabeled outputs. We first show that standard fine-tuning after pre-training destroys some of this structure. We then propose composed fine-tuning, which trains a predictor composed with the pre-trained denoiser. Importantly, the denoiser is fixed to preserve output structure. Like standard fine-tuning, the predictor is also initialized with the pre-trained denoiser. We prove for two-layer ReLU networks that composed fine-tuning significantly reduces the complexity of the predictor, thus improving generalization. Empirically, we show that composed fine-tuning improves over standard fine-tuning on two pseudocode-to-code translation datasets (3% and 6% relative). The improvement is magnified on out-of-distribution (OOD) examples (4% and 25% relative), suggesting that reducing predictor complexity improves OOD extrapolation.
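The composed architecture described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the two-layer ReLU networks, dimensions, and random weights below are stand-ins (a real denoiser would come from pre-training on corrupted unlabeled outputs, and the predictor would be trained on labeled pairs while the denoiser stays frozen):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Pre-trained denoiser: a two-layer ReLU network whose weights are FROZEN.
# (Random weights stand in for the result of denoising pre-training.)
W1_d = rng.normal(size=(8, 4))
W2_d = rng.normal(size=(4, 8))

def denoiser(y):
    # Maps a possibly-invalid output y to a nearby valid output.
    return W2_d @ relu(W1_d @ y)

# Predictor: also a two-layer ReLU network. As in standard fine-tuning, it is
# initialized from the pre-trained denoiser weights, but here it is the ONLY
# part that gets updated during training.
W1_p = W1_d.copy()
W2_p = W2_d.copy()

def predictor(x):
    return W2_p @ relu(W1_p @ x)

def composed(x):
    # Composed fine-tuning trains this composition end-to-end, with gradients
    # flowing only into the predictor's weights (W1_p, W2_p).
    return denoiser(predictor(x))

x = rng.normal(size=4)
y_hat = composed(x)
print(y_hat.shape)  # prints (4,)
```

Because the frozen denoiser already encodes output validity, the trained predictor only needs to land near the valid-output manifold, which is the sense in which its required complexity is reduced.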
Author Information
Sang Michael Xie (Stanford University)
Tengyu Ma (Stanford University)
Percy Liang (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization
  Fri. Jul 23rd, 04:00 -- 06:00 AM, Virtual Room
More from the Same Authors
- 2021: Provable Model-based Nonlinear Bandit and Reinforcement Learning: Shelve Optimism, Embrace Virtual Curvature
  Kefan Dong · Jiaqi Yang · Tengyu Ma
- 2022: LinkBERT: Language Model Pretraining with Document Link Knowledge
  Michihiro Yasunaga · Jure Leskovec · Percy Liang
- 2023 Workshop: Workshop on Efficient Systems for Foundation Models
  Julien Launay · Daniel Y Fu · Tri Dao · Daniel Hesslow · Beidi Chen · Azalia Mirhoseini · Percy Liang
- 2022: Discussion Panel
  Percy Liang · Léon Bottou · Jayashree Kalpathy-Cramer · Alex Smola
- 2022 Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward
  Huaxiu Yao · Hugo Larochelle · Percy Liang · Colin Raffel · Jian Tang · Ying Wei · Saining Xie · Eric Xing · Chelsea Finn
- 2022 Poster: Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
  Kendrick Shen · Robbie Jones · Ananya Kumar · Sang Michael Xie · Jeff Z. HaoChen · Tengyu Ma · Percy Liang
- 2022 Oral: Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
  Kendrick Shen · Robbie Jones · Ananya Kumar · Sang Michael Xie · Jeff Z. HaoChen · Tengyu Ma · Percy Liang
- 2021 Poster: WILDS: A Benchmark of in-the-Wild Distribution Shifts
  Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang
- 2021 Oral: WILDS: A Benchmark of in-the-Wild Distribution Shifts
  Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang
- 2021 Poster: Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
  John Miller · Rohan Taori · Aditi Raghunathan · Shiori Sagawa · Pang Wei Koh · Vaishaal Shankar · Percy Liang · Yair Carmon · Ludwig Schmidt
- 2021 Poster: Break-It-Fix-It: Unsupervised Learning for Program Repair
  Michihiro Yasunaga · Percy Liang
- 2021 Oral: Break-It-Fix-It: Unsupervised Learning for Program Repair
  Michihiro Yasunaga · Percy Liang
- 2021 Spotlight: Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
  John Miller · Rohan Taori · Aditi Raghunathan · Shiori Sagawa · Pang Wei Koh · Vaishaal Shankar · Percy Liang · Yair Carmon · Ludwig Schmidt
- 2021 Poster: Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices
  Evan Liu · Aditi Raghunathan · Percy Liang · Chelsea Finn
- 2021 Spotlight: Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices
  Evan Liu · Aditi Raghunathan · Percy Liang · Chelsea Finn
- 2021 Poster: Catformer: Designing Stable Transformers via Sensitivity Analysis
  Jared Quincy Davis · Albert Gu · Krzysztof Choromanski · Tri Dao · Christopher Re · Chelsea Finn · Percy Liang
- 2021 Poster: Just Train Twice: Improving Group Robustness without Training Group Information
  Evan Liu · Behzad Haghgoo · Annie Chen · Aditi Raghunathan · Pang Wei Koh · Shiori Sagawa · Percy Liang · Chelsea Finn
- 2021 Spotlight: Catformer: Designing Stable Transformers via Sensitivity Analysis
  Jared Quincy Davis · Albert Gu · Krzysztof Choromanski · Tri Dao · Christopher Re · Chelsea Finn · Percy Liang
- 2021 Oral: Just Train Twice: Improving Group Robustness without Training Group Information
  Evan Liu · Behzad Haghgoo · Annie Chen · Aditi Raghunathan · Pang Wei Koh · Shiori Sagawa · Percy Liang · Chelsea Finn
- 2020: Keynote #3: Percy Liang
  Percy Liang
- 2020 Poster: Concept Bottleneck Models
  Pang Wei Koh · Thao Nguyen · Yew Siang Tang · Stephen Mussmann · Emma Pierson · Been Kim · Percy Liang
- 2020 Poster: Graph-based, Self-Supervised Program Repair from Diagnostic Feedback
  Michihiro Yasunaga · Percy Liang
- 2020 Poster: Understanding Self-Training for Gradual Domain Adaptation
  Ananya Kumar · Tengyu Ma · Percy Liang
- 2020 Poster: Understanding and Mitigating the Tradeoff between Robustness and Accuracy
  Aditi Raghunathan · Sang Michael Xie · Fanny Yang · John Duchi · Percy Liang
- 2020 Poster: An Investigation of Why Overparameterization Exacerbates Spurious Correlations
  Shiori Sagawa · Aditi Raghunathan · Pang Wei Koh · Percy Liang
- 2020 Poster: Robustness to Spurious Correlations via Human Annotations
  Megha Srivastava · Tatsunori Hashimoto · Percy Liang
- 2020 Poster: Feature Noise Induces Loss Discrepancy Across Groups
  Fereshte Khani · Percy Liang
- 2019 Workshop: Workshop on the Security and Privacy of Machine Learning
  Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song
- 2018 Poster: On the Relationship between Data Efficiency and Error for Uncertainty Sampling
  Stephen Mussmann · Percy Liang
- 2018 Poster: Fairness Without Demographics in Repeated Loss Minimization
  Tatsunori Hashimoto · Megha Srivastava · Hongseok Namkoong · Percy Liang
- 2018 Oral: Fairness Without Demographics in Repeated Loss Minimization
  Tatsunori Hashimoto · Megha Srivastava · Hongseok Namkoong · Percy Liang
- 2018 Oral: On the Relationship between Data Efficiency and Error for Uncertainty Sampling
  Stephen Mussmann · Percy Liang
- 2017 Poster: World of Bits: An Open-Domain Platform for Web-Based Agents
  Tim Shi · Andrej Karpathy · Jim Fan · Jonathan Hernandez · Percy Liang
- 2017 Talk: World of Bits: An Open-Domain Platform for Web-Based Agents
  Tim Shi · Andrej Karpathy · Jim Fan · Jonathan Hernandez · Percy Liang
- 2017 Poster: Developing Bug-Free Machine Learning Systems With Formal Mathematics
  Daniel Selsam · Percy Liang · David L Dill
- 2017 Talk: Developing Bug-Free Machine Learning Systems With Formal Mathematics
  Daniel Selsam · Percy Liang · David L Dill
- 2017 Poster: Convexified Convolutional Neural Networks
  Yuchen Zhang · Percy Liang · Martin Wainwright
- 2017 Poster: Understanding Black-box Predictions via Influence Functions
  Pang Wei Koh · Percy Liang
- 2017 Talk: Convexified Convolutional Neural Networks
  Yuchen Zhang · Percy Liang · Martin Wainwright
- 2017 Talk: Understanding Black-box Predictions via Influence Functions
  Pang Wei Koh · Percy Liang