Oral
Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations
Ting Chen · Martin Min · Yizhou Sun

Wed Jul 11 02:40 AM -- 02:50 AM (PDT) @ A7
Conventional embedding methods directly associate each symbol with a continuous embedding vector, which is equivalent to applying a linear transformation based on a ``one-hot'' encoding of the discrete symbols. Despite its simplicity, such an approach requires a number of parameters that grows linearly with the vocabulary size and can lead to overfitting. In this work, we propose a much more compact K-way D-dimensional discrete encoding scheme to replace the ``one-hot'' encoding. In the proposed ``KD encoding'', each symbol is represented by a $D$-dimensional code with a cardinality of $K$, and the final symbol embedding vector is generated by composing the code embedding vectors. To learn semantically meaningful codes end-to-end, we derive a relaxed discrete optimization approach based on stochastic gradient descent, which can be applied to any differentiable computational graph with an embedding layer. In our experiments with various applications, from natural language processing to graph convolutional networks, the total size of the embedding layer can be reduced by up to 98% while achieving similar or better performance.
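
The lookup-and-compose step of the KD encoding can be illustrated with a minimal sketch. The abstract does not specify the composition function or how the codes are learned, so the sketch makes two hypothetical simplifications: composition is a plain sum of the $D$ code embedding vectors, and the codes are fixed at random rather than learned end-to-end via the paper's relaxed discrete optimization.

import numpy as np

# Minimal sketch of the KD-encoding lookup described in the abstract.
# Hypothetical choices for illustration: composition is a sum of the D code
# embedding vectors, and the codes are random here rather than learned.
K, D, emb_dim, vocab_size = 16, 8, 64, 10000

rng = np.random.default_rng(0)
# D tables of K code embedding vectors each: D * K * emb_dim parameters,
# versus vocab_size * emb_dim for a conventional one-hot embedding table.
code_embeddings = rng.normal(size=(D, K, emb_dim))
# Each symbol gets a D-dimensional code, each component in {0, ..., K-1}.
codes = rng.integers(0, K, size=(vocab_size, D))

def embed(symbol_id):
    """Compose a symbol's embedding from its D code embedding vectors."""
    c = codes[symbol_id]                      # shape (D,)
    vecs = code_embeddings[np.arange(D), c]   # shape (D, emb_dim)
    return vecs.sum(axis=0)                   # illustrative composition: sum

print(embed(42).shape)  # (64,)

With these toy numbers, the code-embedding tables hold 8 * 16 * 64 = 8,192 floats plus 10,000 * 8 small integers for the codes, compared with 10,000 * 64 = 640,000 floats for a conventional embedding table, which is the kind of compression the abstract reports.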

Author Information

Ting Chen (UCLA)
Martin Renqiang Min (NEC Laboratories America)

Martin Renqiang Min received his MSc and PhD degrees in Computer Science from the Machine Learning Group, Department of Computer Science, University of Toronto, in 2005 and 2010, respectively. He did a one-year postdoc at Yale University. In May 2011, he accepted a tenure-track assistant professor position in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology, which has a beautiful campus. His research interests include machine learning and biomedical informatics, focusing on deep learning, graphical models, text understanding, video analysis, and omics for precision medicine. He contributed to the ENCODE Project, for which he published a co-first-author research article in Nature. His recent text-to-video research was covered by Science, MIT Technology Review, and many other international news outlets. He also actively contributes to scientific service, having served as a program committee member of ICML, ICLR, NIPS, and AAAI for many years, and was a co-chair of the NIPS Workshop on Machine Learning in Computational Biology in 2014.

Yizhou Sun (UCLA)
