Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings. Despite their effectiveness, the number of parameters in an embedding layer grows linearly with the number of symbols, posing a critical challenge under memory and storage constraints. In this work, we propose a generic, end-to-end learnable compression framework termed differentiable product quantization (DPQ). We present two instantiations of DPQ that leverage different approximation techniques to enable differentiability in end-to-end learning. Our method can readily serve as a drop-in replacement for any existing embedding layer. Empirically, DPQ achieves significant compression ratios (14-238x) at negligible or no performance cost on 10 datasets across three different language tasks.
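The abstract describes the method only at a high level. The sketch below illustrates the general idea of a product-quantized embedding layer made end-to-end trainable with a straight-through estimator: each symbol picks one of K centroids in each of D codebook groups, and the selected centroids are concatenated to form the embedding. This is a minimal illustration under assumed names and hyperparameters (DPQEmbedding, num_groups, num_centroids are ours), not the paper's exact architecture or either of its two instantiations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DPQEmbedding(nn.Module):
    """Sketch of a product-quantized embedding layer trained end-to-end.

    Each symbol selects one of K centroids in each of D codebook groups;
    the chosen centroids are concatenated to form the output embedding.
    A straight-through estimator makes the discrete argmax differentiable.
    """
    def __init__(self, vocab_size, embed_dim, num_groups=4, num_centroids=16):
        super().__init__()
        assert embed_dim % num_groups == 0
        self.D, self.K = num_groups, num_centroids
        sub_dim = embed_dim // num_groups
        # Trainable selection logits: one K-way choice per symbol per group.
        self.logits = nn.Parameter(torch.randn(vocab_size, num_groups, num_centroids))
        # Shared codebooks: D groups of K centroids each.
        self.codebooks = nn.Parameter(torch.randn(num_groups, num_centroids, sub_dim))

    def forward(self, ids):
        logits = self.logits[ids]                           # (..., D, K)
        soft = F.softmax(logits, dim=-1)
        hard = F.one_hot(soft.argmax(-1), self.K).float()
        # Straight-through: forward pass uses the hard one-hot selection,
        # backward pass routes gradients through the soft probabilities.
        onehot = hard + soft - soft.detach()
        # Mix centroids: (..., D, K) x (D, K, sub_dim) -> (..., D, sub_dim)
        out = torch.einsum('...dk,dkc->...dc', onehot, self.codebooks)
        return out.flatten(-2)                              # (..., embed_dim)

    def export_codes(self):
        # After training, only these integer codes plus the small codebooks
        # need to be stored; the logits table is discarded.
        return self.logits.argmax(-1)                       # (vocab_size, D)

# Usage: drop-in replacement for nn.Embedding on integer id tensors.
emb = DPQEmbedding(vocab_size=10000, embed_dim=128)
vecs = emb(torch.randint(0, 10000, (2, 5)))                 # (2, 5, 128)
```

The compression comes at inference time: a full table stores vocab_size x embed_dim floats, whereas here each symbol reduces to D small integer codes (log2(K) bits each) plus a codebook of D x K x (embed_dim / D) floats shared across the whole vocabulary.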
Author Information
Ting Chen (Google Brain)
Lala Li (Google)
Yizhou Sun (UCLA)
More from the Same Authors
- 2021 Poster: GLSearch: Maximum Common Subgraph Detection via Learning to Search
  Yunsheng Bai · Derek Xu · Yizhou Sun · Wei Wang
- 2021 Spotlight: GLSearch: Maximum Common Subgraph Detection via Learning to Search
  Yunsheng Bai · Derek Xu · Yizhou Sun · Wei Wang
- 2020 Workshop: Graph Representation Learning and Beyond (GRL+)
  Petar Veličković · Michael M. Bronstein · Andreea Deac · Will Hamilton · Jessica Hamrick · Milad Hashemi · Stefanie Jegelka · Jure Leskovec · Renjie Liao · Federico Monti · Yizhou Sun · Kevin Swersky · Rex (Zhitao) Ying · Marinka Zitnik
- 2020 Poster: A Simple Framework for Contrastive Learning of Visual Representations
  Ting Chen · Simon Kornblith · Mohammad Norouzi · Geoffrey Hinton
- 2018 Poster: Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations
  Ting Chen · Martin Min · Yizhou Sun
- 2018 Oral: Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations
  Ting Chen · Martin Min · Yizhou Sun