Improving Molecular Design by Stochastic Iterative Target Augmentation
Kevin Yang · Wengong Jin · Kyle Swanson · Regina Barzilay · Tommi Jaakkola

Tue Jul 14 02:00 PM -- 02:45 PM & Wed Jul 15 01:00 AM -- 01:45 AM (PDT) @ Virtual

Generative models in molecular design tend to be richly parameterized, data-hungry neural models, as they must create complex structured objects as outputs. Estimating such models from data may be challenging due to the lack of sufficient training data. In this paper, we propose a surprisingly effective self-training approach for iteratively creating additional molecular targets. We first pre-train the generative model together with a simple property predictor. The property predictor is then used as a likelihood model for filtering candidate structures from the generative model. Additional targets are iteratively produced and used in the course of stochastic EM iterations to maximize the log-likelihood that the candidate structures are accepted. A simple rejection (re-weighting) sampler suffices to draw posterior samples since the generative model is already reasonable after pre-training. We demonstrate significant gains over strong baselines for both unconditional and conditional molecular design. In particular, our approach outperforms the previous state-of-the-art in conditional molecular design by over 10% in absolute gain. Finally, we show that our approach is useful in other domains as well, such as program synthesis.
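The loop described above (sample candidates from a pre-trained generative model, filter them with the property predictor, and add accepted candidates as new training targets) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `generative_model` and `property_predictor` are toy stand-ins, and all names and thresholds are assumptions for the example.

```python
import random

def generative_model(source, rng):
    # Toy stand-in for a pre-trained generative model: proposes a
    # candidate "target" as a random perturbation of the source.
    return source + rng.uniform(-2.0, 2.0)

def property_predictor(candidate):
    # Toy stand-in for the property predictor used as a filter:
    # accept candidates whose value exceeds a fixed threshold.
    return candidate > 1.0

def augment_targets(sources, n_candidates=20, seed=0):
    """One round of iterative target augmentation: draw candidates from
    the generative model and keep only those the predictor accepts
    (a simple rejection-style filter)."""
    rng = random.Random(seed)
    augmented = []
    for s in sources:
        for _ in range(n_candidates):
            c = generative_model(s, rng)
            if property_predictor(c):   # rejection filter
                augmented.append((s, c))
                break                   # keep first accepted candidate
    return augmented

# Accepted (source, target) pairs would be added to the training set,
# the generative model retrained, and the round repeated.
pairs = augment_targets([0.5, 1.5, 3.0])
```

In the full method these rounds play the role of stochastic EM iterations: the filtered samples approximate posterior draws, and retraining on the augmented set is the maximization step.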

Author Information

Kevin Yang (UC Berkeley)
Wengong Jin (MIT)
Kyle Swanson (University of Cambridge)
Regina Barzilay (MIT CSAIL)
Tommi Jaakkola (MIT)