
Learning Discrete Representations via Information Maximizing Self-Augmented Training
Weihua Hu · Takeru Miyato · Seiya Tokui · Eiichi Matsumoto · Masashi Sugiyama

Tue Aug 08 01:30 AM -- 05:00 AM (PDT) @ Gallery #144

Learning discrete representations of data is a central machine learning task because of the compactness of the representations and ease of interpretation. The task includes clustering and hash learning as special cases. Deep neural networks are promising for this task because they can model the non-linearity of data and scale to large datasets. However, their high model complexity means the networks must be carefully regularized in order to learn useful representations that exhibit the intended invariance for applications of interest. To this end, we propose a method called Information Maximizing Self-Augmented Training (IMSAT). In IMSAT, we use data augmentation to impose the invariance on discrete representations. More specifically, we encourage the predicted representations of augmented data points to be close to those of the original data points in an end-to-end fashion. At the same time, we maximize the information-theoretic dependency between data and their predicted discrete representations. Extensive experiments on benchmark datasets show that IMSAT produces state-of-the-art results for both clustering and unsupervised hash learning.
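The two ingredients in the abstract, maximizing the mutual information I(X; Y) = H(Y) - H(Y|X) between inputs and predicted discrete representations, and penalizing the discrepancy between predictions on original and augmented points, can be sketched as a single loss on predicted cluster probabilities. This is a minimal illustrative sketch, not the paper's exact formulation: the function names and the weights `mu` and `lam` are assumptions introduced here for illustration.

```python
import numpy as np

def marginal_entropy(p):
    # H(Y): entropy of the marginal (mean) predicted cluster distribution
    p_mean = p.mean(axis=0)
    return -np.sum(p_mean * np.log(p_mean + 1e-12))

def conditional_entropy(p):
    # H(Y|X): average entropy of the per-example predictions
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

def imsat_loss(p_orig, p_aug, mu=4.0, lam=0.1):
    # p_orig, p_aug: (n_samples, n_clusters) predicted probabilities for
    # original data points and their augmented counterparts.
    # Weighted mutual-information term: favors balanced (high H(Y)) yet
    # confident (low H(Y|X)) cluster assignments.
    mi = marginal_entropy(p_orig) - mu * conditional_entropy(p_orig)
    # Self-augmented training penalty: cross-entropy between predictions on
    # original points and on their augmented versions, pushing them together.
    sat = -np.mean(np.sum(p_orig * np.log(p_aug + 1e-12), axis=1))
    # Minimize the SAT penalty while maximizing mutual information.
    return sat - lam * mi
```

Confident, balanced, augmentation-invariant predictions yield a lower loss than uniformly uncertain ones, which is the behavior the training objective rewards.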

Author Information

Weihua Hu (The University of Tokyo / RIKEN)
Takeru Miyato (Preferred Networks, Inc., ATR)

Takeru Miyato received his B.E. in electronic engineering in 2014 and his M.E. in informatics in 2016, both from Kyoto University. He is now a full-time researcher at Preferred Networks, Inc. and a visiting researcher at ATR Cognitive Mechanisms Laboratories. His current research interests are simple and scalable machine learning algorithms.

Seiya Tokui (Preferred Networks / The University of Tokyo)

Seiya Tokui is a researcher at Preferred Networks, Inc., Japan, and a Ph.D. student at the University of Tokyo. He received his master's degree in mathematical informatics from the University of Tokyo in 2012. He is the lead developer of the deep learning framework Chainer. His current research interests include deep learning, its software design, computer vision, and natural language processing.

Eiichi Matsumoto (Preferred Networks Inc.)
Masashi Sugiyama (RIKEN / The University of Tokyo)
