Oral
Learning Classifiers for Target Domain with Limited or No Labels
Pengkai Zhu · Hanxiao Wang · Venkatesh Saligrama

Wed Jun 12 02:25 PM -- 02:30 PM (PDT) @ Seaside Ballroom

In computer vision applications such as domain adaptation (DA), few-shot learning (FSL), and zero-shot learning (ZSL), we encounter new objects and environments for which insufficient examples exist to allow for training "models from scratch," and methods that adapt existing models, trained on the presented training environment (PTE), to the new scenario are required. We propose a novel visual attribute encoding method that encodes each image as a low-dimensional probability vector composed of prototypical part-type probabilities, where the prototypical parts are learnt so as to be representative of all images in the PTE. We show that the resulting encoding is universal in that it serves as an input for adapting or learning classifiers in different problem contexts: with limited annotated labels in FSL, with no data and only semantic attributes in ZSL, and with unlabeled data for domain adaptation. We conduct extensive experiments on benchmark datasets and demonstrate that our method outperforms state-of-the-art DA, FSL, and ZSL methods.
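To make the encoding idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of how an image could be mapped to a vector of part-type probabilities: a backbone produces one descriptor per image part, each descriptor is compared against a learned bank of prototype "part types," and a softmax over the (negative) distances yields the probability vector. The names `backbone`, `num_parts`, `num_types`, and `feat_dim` are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartTypeEncoder(nn.Module):
    """Encodes an image as a low-dimensional vector of part-type probabilities.

    Hypothetical sketch: `backbone` is any feature extractor that returns one
    descriptor per image part (e.g. spatial cells of a CNN feature map); the
    prototypes are learned jointly with the downstream classifier.
    """
    def __init__(self, backbone, num_parts, num_types, feat_dim):
        super().__init__()
        self.backbone = backbone
        # One bank of prototype vectors ("part types") per part.
        self.prototypes = nn.Parameter(torch.randn(num_parts, num_types, feat_dim))

    def forward(self, images):
        # parts: (batch, num_parts, feat_dim), one descriptor per image part.
        parts = self.backbone(images)
        # Squared distance between each part descriptor and its candidate types.
        dists = ((parts.unsqueeze(2) - self.prototypes.unsqueeze(0)) ** 2).sum(-1)
        # Softmax over part types gives a probability vector per part;
        # flattening them yields the low-dimensional image encoding.
        probs = F.softmax(-dists, dim=-1)
        return probs.flatten(1)  # (batch, num_parts * num_types)
```

In this sketch the encoding is independent of any particular label space, which is what would let a single representation feed FSL, ZSL, and DA classifiers as the abstract describes.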

Author Information

Pengkai Zhu (Boston University)
Hanxiao Wang (Boston University)
Venkatesh Saligrama (Boston University)
