

Poster

Auxiliary Modality Learning with Generalized Curriculum Distillation

Yu Shen · Xijun Wang · Peng Gao · Ming Lin

Exhibit Hall 1 #319
[ PDF ] [ Poster ]

Abstract:

Driven by the needs of real-world applications, Auxiliary Modality Learning (AML) exploits additional information from auxiliary data during training while requiring data from only one or a few modalities at test time, reducing both the overall computational cost and the amount of input data needed for inference. In this work, we formally define Auxiliary Modality Learning, systematically classify the types of auxiliary modality (in visual computing) and the architectures for AML, and analyze their performance. We also analyze the conditions under which AML works well, from the perspectives of optimization and data distribution. To guide the choices needed to achieve optimal performance with AML, we propose a novel method for selecting the best auxiliary modality and estimating an upper-bound performance before running AML. In addition, we propose a new AML method that uses generalized curriculum distillation to enable more effective curriculum learning. Our method achieves the best performance among state-of-the-art approaches.
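To make the train-time/test-time asymmetry concrete, below is a minimal sketch of auxiliary modality learning via distillation, assuming PyTorch. The toy `SmallNet`, the RGB-plus-depth modality pairing, and the linear ramp on the distillation weight are illustrative assumptions standing in for the paper's generalized curriculum distillation, not the authors' actual architecture or schedule.

```python
# Sketch: a teacher sees the primary modality (RGB) plus an auxiliary
# modality (depth) during training; the student sees RGB only, so no
# auxiliary data is needed at inference. The linear schedule on the
# distillation weight is a hypothetical stand-in for a curriculum.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Tiny illustrative classifier; not the paper's architecture."""
    def __init__(self, in_ch, num_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.body(x)

teacher = SmallNet(in_ch=4)  # RGB (3) + auxiliary depth (1) channels
student = SmallNet(in_ch=3)  # RGB only: all that is required at test time
opt = torch.optim.Adam(
    list(teacher.parameters()) + list(student.parameters()), lr=1e-3
)

T, epochs = 4.0, 10  # distillation temperature, training length
for epoch in range(epochs):
    # Curriculum weight: the distillation term's influence grows as
    # training progresses (simple linear ramp for illustration).
    alpha = epoch / (epochs - 1)

    rgb = torch.randn(8, 3, 32, 32)    # toy batch stands in for real data
    depth = torch.randn(8, 1, 32, 32)  # auxiliary modality, train-time only
    labels = torch.randint(0, 10, (8,))

    t_logits = teacher(torch.cat([rgb, depth], dim=1))
    s_logits = student(rgb)

    # Supervised task loss for both networks.
    task_loss = (F.cross_entropy(t_logits, labels)
                 + F.cross_entropy(s_logits, labels))
    # Distill the multi-modal teacher into the single-modality student.
    distill_loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * T * T

    loss = task_loss + alpha * distill_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, only `student` is deployed, so inference consumes RGB alone; the auxiliary depth stream affects the model only through the distilled teacher signal.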
