

Poster in Workshop: The Synergy of Scientific and Machine Learning Modelling (SynS & ML) Workshop

Accelerating Molecular Graph Neural Networks via Knowledge Distillation

Filip Ekström Kelvinius · Dimitar Georgiev · Artur Toshev · Johannes Gasteiger

Keywords: [ knowledge distillation ] [ Graph Neural Networks ] [ molecules ]


Abstract:

Recent advances in graph neural networks (GNNs) have enabled molecular simulations with accuracy on par with conventional gold-standard methods at a fraction of the computational cost. Nonetheless, as the field has progressed toward larger and more complex architectures, state-of-the-art GNNs have become computationally prohibitive for many large-scale applications. In this paper, we explore, for the first time, the utility of knowledge distillation (KD) for accelerating molecular GNNs. To this end, we devise KD strategies that facilitate the distillation of hidden representations in directional and equivariant GNNs and evaluate their performance on the regression task of energy and force prediction. We validate our protocols across different teacher-student configurations and demonstrate that they can boost the predictive accuracy of student models without altering their architecture. Using our KD protocols, we close as much as 59% of the gap in predictive accuracy between models like GemNet-OC and PaiNN with zero additional cost at inference.
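
The abstract describes distilling hidden representations from a large teacher GNN (e.g., GemNet-OC) into a smaller student (e.g., PaiNN) alongside the usual energy and force regression objective. Below is a minimal PyTorch sketch of such a feature-based KD loss, not the paper's exact protocol: the unpacking of student outputs, the linear projection head, and the loss weights are illustrative assumptions.

import torch
import torch.nn as nn


class FeatureDistillationLoss(nn.Module):
    """Supervised energy/force loss plus an MSE term that matches (projected)
    student hidden node features to frozen teacher features."""

    def __init__(self, student_dim: int, teacher_dim: int,
                 w_energy: float = 1.0, w_force: float = 100.0, w_kd: float = 1.0):
        super().__init__()
        # A linear projection bridges the dimensionality gap between architectures
        # (hypothetical choice; any learnable transformation could be used here).
        self.proj = nn.Linear(student_dim, teacher_dim)
        self.w_energy, self.w_force, self.w_kd = w_energy, w_force, w_kd
        self.mse = nn.MSELoss()

    def forward(self, energy, forces, student_feat, teacher_feat, targets):
        loss_e = self.mse(energy, targets["energy"])      # supervised energy loss
        loss_f = self.mse(forces, targets["forces"])      # supervised force loss
        # Feature-matching KD term; teacher features are detached so only the
        # student (and the projection head) receive gradients.
        loss_kd = self.mse(self.proj(student_feat), teacher_feat.detach())
        return self.w_energy * loss_e + self.w_force * loss_f + self.w_kd * loss_kd

Because the distillation term only appears in the training objective, the student's architecture and inference cost are unchanged, which is why the gains come at zero additional cost at inference.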
