

Poster in Workshop: AI for Science: Scaling in AI for Scientific Discovery

Task Addition in Multi-Task Learning by Geometrical Alignment

Soorin Yim · Dae-Woong Jeong · Sung Moon Ko · Sumin Lee · Hyunseung Kim · Chanhui Lee · Sehui Han

Keywords: [ AI4Science ] [ property prediction ] [ Geometric Deep Learning ] [ Multi-task Learning ] [ Molecule ] [ transfer learning ]


Abstract:

Training deep learning models on limited data while maintaining generalization is one of the fundamental challenges in molecular property prediction. One effective solution is transferring knowledge extracted from abundant datasets to those with scarce data. Recently, a novel algorithm called the Geometrically Aligned Transfer Encoder (GATE) was introduced, which uses soft parameter sharing by aligning the geometrical shapes of task-specific latent spaces. However, GATE faces limitations in scaling to multiple tasks due to computational costs. In this study, we propose a task-addition approach for GATE that improves performance on target tasks with limited data while minimizing computational complexity. This is achieved through supervised multi-task pretraining on a large dataset, followed by the addition and training of task-specific modules for each target task. Our experiments demonstrate that the task-addition strategy for GATE outperforms conventional multi-task methods at comparable computational cost.
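The two-stage procedure described above (multi-task pretraining, then adding and training a task-specific module per target task) can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' GATE implementation: the module structure, the toy task names, and the alignment_loss standing in for GATE's geometric alignment term are all assumptions.

```python
# Hypothetical sketch of the task-addition workflow; names and the
# alignment term are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class TaskModule(nn.Module):
    """Hypothetical task-specific encoder plus prediction head."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.head = nn.Linear(latent_dim, 1)  # scalar property prediction

    def forward(self, x):
        z = self.encoder(x)
        return z, self.head(z)

def alignment_loss(z_new, z_ref):
    """Placeholder for GATE's geometric alignment: here, matching pairwise
    distances between the two latent spaces (an assumption, not the paper's
    exact formulation)."""
    d_new = torch.cdist(z_new, z_new)
    d_ref = torch.cdist(z_ref, z_ref)
    return ((d_new - d_ref) ** 2).mean()

# Stage 1: multi-task pretraining on abundant source tasks (details omitted;
# toy task names for illustration only).
pretrained = {name: TaskModule(64, 32) for name in ["logP", "solubility"]}

# Stage 2: task addition -- freeze pretrained modules, train only the new one.
for m in pretrained.values():
    m.requires_grad_(False)

new_task = TaskModule(64, 32)
opt = torch.optim.Adam(new_task.parameters(), lr=1e-3)

x = torch.randn(16, 64)           # toy batch of molecular features
y = torch.randn(16, 1)            # toy target property values
z_ref, _ = pretrained["logP"](x)  # latent coordinates from a frozen source task

z_new, pred = new_task(x)
loss = nn.functional.mse_loss(pred, y) + 0.1 * alignment_loss(z_new, z_ref)
loss.backward()
opt.step()
```

Because only the newly added module receives gradients, each additional target task adds a fixed, small training cost rather than requiring the full multi-task model to be retrained, which is the scaling benefit the abstract claims.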
