Compressing Gradient Optimizers via Count-Sketches

Ryan Spring · Anastasios Kyrillidis · Vijai Mohan · Anshumali Shrivastava

Pacific Ballroom #83

Keywords: [ Recommender Systems ] [ Parallel and Distributed Learning ] [ Non-convex Optimization ] [ Natural Language Processing ] [ Large Scale Learning and Big Data ]


Many popular first-order optimization methods accelerate the convergence of deep learning models, but they require auxiliary variables that cost additional memory proportional to the number of parameters in the model. The problem grows more severe as models scale up to learn from complex, large-scale datasets. Our proposed solution is to maintain a linear sketch that compresses these auxiliary variables. Our approach matches the performance of the full-sized baseline while using less space for the auxiliary variables. Theoretically, we prove that count-sketch optimization preserves the SGD convergence rate while gracefully reducing memory usage for large models. We present a rigorous evaluation on popular architectures such as ResNet-18 and Transformer-XL. On the 1-Billion Word dataset, we save 25% of the memory used during training (7.7 GB instead of 10.8 GB) with minimal loss in accuracy or speed. On an Amazon extreme classification task with over 49.5 million classes, we also reduce training time by 38% by increasing the mini-batch size 3.5x with our count-sketch optimizer.
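To make the core idea concrete, here is a minimal count-sketch in NumPy that stores a dense vector of auxiliary optimizer state (e.g. momentum) in a small table and answers approximate point queries. This is a hypothetical illustration of the data structure the abstract describes, not the authors' implementation; the class name, hash construction, and table sizes are assumptions for the example.

```python
import numpy as np

class CountSketch:
    """Minimal count-sketch: approximates a vector of auxiliary optimizer
    state (e.g. momentum) using O(rows * width) memory instead of O(n).
    Illustrative sketch only, not the paper's implementation."""

    def __init__(self, rows=3, width=4096, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.width = rows, width
        self.table = np.zeros((rows, width))
        # Per-row parameters for the bucket and sign hash functions.
        self.a = rng.integers(1, 2**31 - 1, size=rows)
        self.b = rng.integers(0, 2**31 - 1, size=rows)

    def _bucket(self, idx):
        # Map each parameter index to one bucket per row.
        return (self.a[:, None] * idx + self.b[:, None]) % (2**31 - 1) % self.width

    def _sign(self, idx):
        # Random +/-1 sign per (row, index), consistent across calls.
        return 1 - 2 * (((self.a[:, None] * (idx + 1) + self.b[:, None]) >> 7) & 1)

    def update(self, idx, values):
        """Add `values` (e.g. a momentum increment) at parameter indices `idx`."""
        buckets = self._bucket(idx)   # shape (rows, n)
        signs = self._sign(idx)       # shape (rows, n)
        for r in range(self.rows):
            np.add.at(self.table[r], buckets[r], signs[r] * values)

    def query(self, idx):
        """Unbiased estimate of the stored values: median over rows."""
        buckets = self._bucket(idx)
        signs = self._sign(idx)
        est = signs * self.table[np.arange(self.rows)[:, None], buckets]
        return np.median(est, axis=0)
```

An optimizer would call `update` with each gradient step's contribution to the auxiliary state and `query` to read the (approximate) state back when computing the parameter update; the linearity of the sketch is what makes this sound. Memory savings come from choosing `rows * width` much smaller than the number of parameters, at the cost of bounded estimation error on the heavy entries.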
