Poster
Retroformer: Pushing the Limits of End-to-end Retrosynthesis Transformer
Yue Wan · Chang-Yu (Kim) Hsieh · Ben Liao · Shengyu Zhang
Hall E #111
Keywords: [ DL: Algorithms ] [ DL: Generative Models and Autoencoders ] [ DL: Other Representation Learning ] [ DL: Graph Neural Networks ] [ DL: Attention Mechanisms ] [ DL: Everything Else ] [ DL: Recurrent Networks ] [ APP: Chemistry and Drug Discovery ]
Retrosynthesis prediction is one of the fundamental challenges in organic synthesis. The task is to predict the reactants of a given core product. With the advancement of machine learning, computer-aided synthesis planning has gained increasing interest, and numerous methods have been proposed to solve this problem with different levels of dependency on additional chemical knowledge. In this paper, we propose Retroformer, a novel Transformer-based architecture for retrosynthesis prediction that does not rely on any cheminformatics tools for molecule editing. Via the proposed local attention head, the model can jointly encode the molecular sequence and graph, and efficiently exchange information between the local reactive region and the global reaction context. Retroformer reaches a new state-of-the-art accuracy for end-to-end template-free retrosynthesis and improves over many strong baselines in molecule and reaction validity. In addition, its generative procedure is highly interpretable and controllable. Overall, Retroformer pushes the limits of the reaction reasoning ability of deep generative models.
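
To make the local/global attention idea concrete, the sketch below is a minimal PyTorch-style guess at how a few "local" heads could be restricted by the molecular-graph adjacency (attending only to bonded neighbours) while the remaining heads attend over the full SMILES sequence. The class and argument names (LocalGlobalSelfAttention, adj_mask) are invented for illustration and are not taken from the Retroformer paper or codebase.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGlobalSelfAttention(nn.Module):
    """Self-attention where some heads are masked by graph adjacency (hypothetical sketch)."""

    def __init__(self, d_model: int, n_global_heads: int = 6, n_local_heads: int = 2):
        super().__init__()
        self.n_heads = n_global_heads + n_local_heads
        self.n_local = n_local_heads
        self.d_head = d_model // self.n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor, adj_mask: torch.Tensor) -> torch.Tensor:
        # x:        (batch, seq_len, d_model)  SMILES token embeddings
        # adj_mask: (batch, seq_len, seq_len)  True where two tokens correspond to
        #           bonded / adjacent atoms in the molecular graph
        b, n, _ = x.shape
        # every token may at least attend to itself, so no local row is fully masked
        adj_mask = adj_mask | torch.eye(n, dtype=torch.bool, device=x.device)

        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (
            t.reshape(b, n, self.n_heads, self.d_head).transpose(1, 2)
            for t in (q, k, v)
        )  # each: (batch, heads, seq, d_head)

        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        # local heads only see graph neighbours; global heads see all tokens
        local_scores = scores[:, : self.n_local].masked_fill(
            ~adj_mask.unsqueeze(1), float("-inf")
        )
        scores = torch.cat([local_scores, scores[:, self.n_local:]], dim=1)

        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)

# Toy usage under the same assumptions (d_model = 256, batch of 2 sequences of length 40):
attn = LocalGlobalSelfAttention(d_model=256)
x = torch.randn(2, 40, 256)
adj = torch.zeros(2, 40, 40, dtype=torch.bool)  # bond adjacency projected onto SMILES tokens
print(attn(x, adj).shape)  # torch.Size([2, 40, 256])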