Supervised learning can improve the design of state-of-the-art solvers for combinatorial problems, but labelling large numbers of combinatorial instances is often impractical due to exponential worst-case complexity. Inspired by the recent success of contrastive pre-training for images, we conduct a scientific study of the effect of augmentation design on contrastive pre-training for the Boolean satisfiability problem. While typical graph contrastive pre-training uses label-agnostic augmentations, our key insight is that many combinatorial problems have well-studied invariances, which allow for the design of label-preserving augmentations. We find that label-preserving augmentations are critical for the success of contrastive pre-training. We show that our representations are able to achieve comparable test accuracy to fully-supervised learning while using only 1% of the labels. We also demonstrate that our representations are more transferable to larger problems from unseen domains. Our code is available at https://github.com/h4duan/contrastive-sat.
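The key insight is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of a label-preserving augmentation for SAT (the function name and data representation are illustrative, not the implementation in the linked repository): consistently renaming variables, or flipping a variable's polarity everywhere it appears in a CNF formula, are well-studied invariances that preserve satisfiability, so the augmented formula inherits the original SAT/UNSAT label.

```python
import random

# Hypothetical sketch of a label-preserving SAT augmentation, not the
# authors' exact code. A CNF formula is a list of clauses; each clause
# is a list of non-zero ints (DIMACS-style literals, e.g. -2 means "not x2").

def augment_cnf(clauses, num_vars, seed=None):
    rng = random.Random(seed)
    # Random permutation of variable names: variable v -> perm[v - 1].
    perm = list(range(1, num_vars + 1))
    rng.shuffle(perm)
    rename = {v: perm[v - 1] for v in range(1, num_vars + 1)}
    # Randomly flip each variable's polarity, consistently everywhere.
    # Both transformations preserve satisfiability of the formula.
    flip = {v: rng.choice([1, -1]) for v in range(1, num_vars + 1)}
    return [
        [(1 if lit > 0 else -1) * flip[abs(lit)] * rename[abs(lit)]
         for lit in clause]
        for clause in clauses
    ]

# Example: (x1 or not x2) and (x2 or x3) remains satisfiable after augmentation.
formula = [[1, -2], [2, 3]]
print(augment_cnf(formula, num_vars=3, seed=0))
```

In a SimCLR-style contrastive setup, two independent augmentations of the same formula would then serve as a positive pair, since both are guaranteed to share the original label.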
Author Information
Haonan Duan (University of Toronto)
Pashootan Vaezipoor (University of Toronto and Vector Institute)
Max Paulus (ETH Zurich)
Yangjun Ruan (University of Toronto)
Chris Maddison (University of Toronto)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Augment with Care: Contrastive Learning for Combinatorial Problems
  Thu. Jul 21st, 07:50 -- 07:55 PM, Room Hall F
More from the Same Authors
- 2022 : Exploring Long-Horizon Reasoning with Deep RL in Combinatorially Hard Tasks
  Andrew C Li · Pashootan Vaezipoor · Rodrigo A Toro Icarte · Sheila McIlraith
- 2022 : Contrastive Learning Can Find An Optimal Basis For Approximately Invariant Functions
  Daniel D. Johnson · Ayoub El Hanchi · Chris Maddison
- 2022 Poster: Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning
  Max Paulus · Giulia Zarpellon · Andreas Krause · Laurent Charlin · Chris Maddison
- 2022 Spotlight: Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning
  Max Paulus · Giulia Zarpellon · Andreas Krause · Laurent Charlin · Chris Maddison
- 2022 Poster: Bayesian Nonparametrics for Offline Skill Discovery
  Valentin Villecroze · Harry Braviner · Panteha Naderian · Chris Maddison · Gabriel Loaiza-Ganem
- 2022 Spotlight: Bayesian Nonparametrics for Offline Skill Discovery
  Valentin Villecroze · Harry Braviner · Panteha Naderian · Chris Maddison · Gabriel Loaiza-Ganem
- 2022 Poster: Stochastic Reweighted Gradient Descent
  Ayoub El Hanchi · David Stephens · Chris Maddison
- 2022 Spotlight: Stochastic Reweighted Gradient Descent
  Ayoub El Hanchi · David Stephens · Chris Maddison
- 2021 Poster: Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding
  Yangjun Ruan · Karen Ullrich · Daniel Severo · James Townsend · Ashish Khisti · Arnaud Doucet · Alireza Makhzani · Chris Maddison
- 2021 Oral: Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding
  Yangjun Ruan · Karen Ullrich · Daniel Severo · James Townsend · Ashish Khisti · Arnaud Doucet · Alireza Makhzani · Chris Maddison
- 2021 Poster: LTL2Action: Generalizing LTL Instructions for Multi-Task RL
  Pashootan Vaezipoor · Andrew C Li · Rodrigo A Toro Icarte · Sheila McIlraith
- 2021 Poster: Oops I Took A Gradient: Scalable Sampling for Discrete Distributions
  Will Grathwohl · Kevin Swersky · Milad Hashemi · David Duvenaud · Chris Maddison
- 2021 Spotlight: LTL2Action: Generalizing LTL Instructions for Multi-Task RL
  Pashootan Vaezipoor · Andrew C Li · Rodrigo A Toro Icarte · Sheila McIlraith
- 2021 Oral: Oops I Took A Gradient: Scalable Sampling for Discrete Distributions
  Will Grathwohl · Kevin Swersky · Milad Hashemi · David Duvenaud · Chris Maddison
- 2020 : Q&A: Chris Maddison
  Chris Maddison · Jessica Forde · Jesse Dodge
- 2020 : Invited Talk: Chris Maddison
  Chris Maddison