Poster
Wed Jul 11 09:15 AM -- 12:00 PM (PDT) @ Hall B #145
Pseudo-task Augmentation: From Deep Multitask Learning to Intratask Sharing—and Back
Elliot Meyerson · Risto Miikkulainen

Deep multitask learning boosts performance by sharing learned structure across related tasks. This paper adapts ideas from deep multitask learning to the setting where only a single task is available. The method is formalized as pseudo-task augmentation, in which models are trained with multiple decoders for each task. Pseudo-tasks simulate the effect of training toward closely related tasks drawn from the same universe. In a suite of experiments, pseudo-task augmentation is shown to improve performance on single-task learning problems. When combined with multitask learning, it yields further improvements, including state-of-the-art performance on the CelebA dataset, showing that pseudo-task augmentation and multitask learning have complementary value. Overall, pseudo-task augmentation is a broadly applicable and efficient way to boost performance in deep learning systems.
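
To make the "multiple decoders per task" idea concrete, below is a minimal sketch (not the authors' code) of a single-task model with a shared encoder and several decoder heads, written in PyTorch as an assumed framework. The layer sizes, the linear decoder heads, and the choice of simply averaging the per-decoder losses are illustrative assumptions; the paper studies more elaborate ways of controlling decoder parameters during training.

```python
import torch
import torch.nn as nn

class PseudoTaskAugmentedModel(nn.Module):
    """Sketch of pseudo-task augmentation for a single classification task:
    one shared encoder feeds several lightweight decoders, each decoder
    defining a pseudo-task over the same labels."""

    def __init__(self, in_dim=784, hidden_dim=256, num_classes=10, num_decoders=4):
        super().__init__()
        # Shared structure, analogous to the shared layers in multitask learning.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # One linear decoder per pseudo-task (hypothetical choice of head).
        self.decoders = nn.ModuleList(
            nn.Linear(hidden_dim, num_classes) for _ in range(num_decoders)
        )

    def forward(self, x):
        h = self.encoder(x)
        # Return one set of logits per pseudo-task decoder.
        return [decoder(h) for decoder in self.decoders]


def pseudo_task_loss(logits_per_decoder, targets, criterion=nn.CrossEntropyLoss()):
    """Average the per-decoder losses: one simple way to combine the
    pseudo-task training signals (an assumption for this sketch)."""
    losses = [criterion(logits, targets) for logits in logits_per_decoder]
    return torch.stack(losses).mean()


# Usage sketch on random data.
model = PseudoTaskAugmentedModel()
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
loss = pseudo_task_loss(model(x), y)
loss.backward()
```

In this sketch, every decoder backpropagates a slightly different gradient through the same shared encoder, which is the mechanism by which pseudo-tasks mimic training on several closely related tasks at once.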