Many popular first-order optimization methods (e.g., Momentum, AdaGrad, Adam) accelerate the convergence of deep learning models. However, these algorithms require auxiliary parameters, which cost additional memory proportional to the number of parameters in the model. The problem becomes more severe as deep learning models continue to grow larger in order to learn from complex, large-scale datasets. Our proposed solution is to maintain a linear sketch to compress the auxiliary variables. We demonstrate that our technique matches the performance of the full-sized baseline while using significantly less space for the auxiliary variables. Theoretically, we prove that count-sketch optimization maintains the SGD convergence rate while gracefully reducing memory usage for large models. On the large-scale 1-Billion Word dataset, we save 25% of the memory used during training (8.6 GB instead of 11.7 GB) with minimal accuracy and performance loss. For an Amazon extreme classification task with over 49.5 million classes, we also reduce the training time by 38% by increasing the mini-batch size 3.5x with our count-sketch optimizer.
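The core idea can be illustrated with a short, hypothetical Python/NumPy sketch (not the authors' implementation): the optimizer's dense auxiliary buffer is replaced by a count-sketch, a small depth-by-width table updated through signed hash functions, so auxiliary memory scales with the sketch size rather than the number of model parameters. The class, the hash construction, and the toy momentum step at the end are illustrative assumptions; the paper's optimizer also covers Adam-style second moments.

```python
import numpy as np

class CountSketch:
    """A minimal count-sketch: a depth x width table holding a compressed,
    linearly-updatable view of a large vector (here, a momentum buffer)."""

    def __init__(self, depth, width, seed=0):
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))
        rng = np.random.default_rng(seed)
        # One random hash per row (a simple universal-hashing stand-in).
        self.a = rng.integers(1, 2**31 - 1, size=depth, dtype=np.int64)
        self.b = rng.integers(0, 2**31 - 1, size=depth, dtype=np.int64)

    def _buckets(self, idx):
        # Map each coordinate to one bucket per row.
        return (self.a[:, None] * idx + self.b[:, None]) % self.width

    def _signs(self, idx):
        # Pseudo-random +/-1 sign per (row, coordinate) pair.
        return 2 * (((self.a[:, None] * (idx + 1) + self.b[:, None]) >> 15) & 1) - 1

    def update(self, idx, values):
        # Add `values` at coordinates `idx`; the update is linear, so scaling
        # the table scales every stored coordinate.
        buckets, signs = self._buckets(idx), self._signs(idx)
        for r in range(self.depth):
            np.add.at(self.table[r], buckets[r], signs[r] * values)

    def query(self, idx):
        # Estimate the stored values at `idx` via the median over rows.
        buckets, signs = self._buckets(idx), self._signs(idx)
        rows = np.arange(self.depth)[:, None]
        return np.median(signs * self.table[rows, buckets], axis=0)


# Toy momentum-style step with compressed auxiliary state: the dense momentum
# buffer is replaced by the sketch, which is decayed and updated in place.
n_params, beta, lr = 1_000_000, 0.9, 0.01
params = np.zeros(n_params)
sketch = CountSketch(depth=5, width=20_000)   # ~10x smaller than n_params
idx = np.array([3, 42, 999_999])              # active (sparse) coordinates
grad = np.array([0.5, -1.0, 2.0])             # their gradient values
sketch.table *= beta                          # decay the whole sketch at once
sketch.update(idx, grad)                      # fold in the new gradient
params[idx] -= lr * sketch.query(idx)         # read back approximate momentum
```

Because the sketch is a linear map, the momentum decay can be applied by scaling the entire table, and accuracy degrades gracefully as the table is made smaller relative to the parameter count.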
Author Information
Ryan Spring (Rice University)
Anastasios Kyrillidis (Rice University)
Vijai Mohan (www.amazon.com)
Anshumali Shrivastava (Rice University)
Anshumali Shrivastava is an associate professor in the Computer Science department at Rice University. His broad research interests include randomized algorithms for large-scale machine learning. In 2018, Science News named him one of the top 10 scientists under 40 to watch. He is a recipient of the National Science Foundation CAREER Award, a Young Investigator Award from the Air Force Office of Scientific Research, and a machine learning research award from Amazon. His research on hashing inner products won the Best Paper Award at NIPS 2014, and his work on representing graphs received the Best Paper Award at IEEE/ACM ASONAM 2014. Anshumali finished his Ph.D. at Cornell University in 2015.
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Compressing Gradient Optimizers via Count-Sketches »
  Wed. Jun 12th, 01:30 -- 04:00 AM, Room: Pacific Ballroom #83
More from the Same Authors
- 2021: Mitigating deep double descent by concatenating inputs »
  John Chen · Qihan Wang · Anastasios Kyrillidis
- 2023: Adaptive Federated Learning with Auto-Tuned Clients »
  J. Lyle Kim · Mohammad Taha Toghani · Cesar Uribe · Anastasios Kyrillidis
- 2023 Oral: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time »
  Zichang Liu · Jue Wang · Tri Dao · Tianyi Zhou · Binhang Yuan · Zhao Song · Anshumali Shrivastava · Ce Zhang · Yuandong Tian · Christopher Re · Beidi Chen
- 2023 Poster: Hardware-Aware Compression with Random Operation Access Specific Tile (ROAST) Hashing »
  Aditya Desai · Keren Zhou · Anshumali Shrivastava
- 2023 Poster: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time »
  Zichang Liu · Jue Wang · Tri Dao · Tianyi Zhou · Binhang Yuan · Zhao Song · Anshumali Shrivastava · Ce Zhang · Yuandong Tian · Christopher Re · Beidi Chen
- 2022 Poster: One-Pass Diversified Sampling with Application to Terabyte-Scale Genomic Sequence Streams »
  Benjamin Coleman · Benito Geordie · Li Chou · R. A. Leo Elworth · Todd Treangen · Anshumali Shrivastava
- 2022 Spotlight: One-Pass Diversified Sampling with Application to Terabyte-Scale Genomic Sequence Streams »
  Benjamin Coleman · Benito Geordie · Li Chou · R. A. Leo Elworth · Todd Treangen · Anshumali Shrivastava
- 2022 Poster: DRAGONN: Distributed Randomized Approximate Gradients of Neural Networks »
  Zhuang Wang · Zhaozhuo Xu · Xinyu Wu · Anshumali Shrivastava · T. S. Eugene Ng
- 2022 Spotlight: DRAGONN: Distributed Randomized Approximate Gradients of Neural Networks »
  Zhuang Wang · Zhaozhuo Xu · Xinyu Wu · Anshumali Shrivastava · T. S. Eugene Ng
- 2021 Workshop: Beyond first-order methods in machine learning systems »
  Albert S Berahas · Anastasios Kyrillidis · Fred Roosta · Amir Gholaminejad · Michael Mahoney · Rachael Tappenden · Raghu Bollapragada · Rixon Crane · J. Lyle Kim
- 2021 Poster: A Tale of Two Efficient and Informative Negative Sampling Distributions »
  Shabnam Daghaghi · Tharun Medini · Nicholas Meisburger · Beidi Chen · Mengnan Zhao · Anshumali Shrivastava
- 2021 Oral: A Tale of Two Efficient and Informative Negative Sampling Distributions »
  Shabnam Daghaghi · Tharun Medini · Nicholas Meisburger · Beidi Chen · Mengnan Zhao · Anshumali Shrivastava
- 2020 Workshop: Beyond first order methods in machine learning systems »
  Albert S Berahas · Amir Gholaminejad · Anastasios Kyrillidis · Michael Mahoney · Fred Roosta
- 2020 Poster: Negative Sampling in Semi-Supervised learning »
  John Chen · Vatsal Shah · Anastasios Kyrillidis
- 2020 Poster: Sub-linear Memory Sketches for Near Neighbor Search on Streaming Data »
  Benjamin Coleman · Richard Baraniuk · Anshumali Shrivastava
- 2020 Poster: Angular Visual Hardness »
  Beidi Chen · Weiyang Liu · Zhiding Yu · Jan Kautz · Anshumali Shrivastava · Animesh Garg · Anima Anandkumar
- 2018 Poster: Ultra Large-Scale Feature Selection using Count-Sketches »
  Amirali Aghazadeh · Ryan Spring · Daniel LeJeune · Gautam Dasarathy · Anshumali Shrivastava · Richard Baraniuk
- 2018 Oral: Ultra Large-Scale Feature Selection using Count-Sketches »
  Amirali Aghazadeh · Ryan Spring · Daniel LeJeune · Gautam Dasarathy · Anshumali Shrivastava · Richard Baraniuk
- 2017 Poster: Optimal Densification for Fast and Accurate Minwise Hashing »
  Anshumali Shrivastava
- 2017 Talk: Optimal Densification for Fast and Accurate Minwise Hashing »
  Anshumali Shrivastava