Online Learned Continual Compression with Adaptive Quantization Modules

Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Joelle Pineau

Abstract

Thu Jul 16 7 a.m. PDT
Thu Jul 16 6 p.m. PDT


We introduce and study the problem of Online Continual Compression, where one attempts to simultaneously learn to compress and store a representative dataset from a non-i.i.d. data stream, while observing each sample only once. A naive application of auto-encoders in this setting encounters a major challenge: representations derived from earlier encoder states must remain usable by later decoder states. We show how discrete auto-encoders can effectively address this challenge and introduce Adaptive Quantization Modules (AQM) to control variation in the compression ability of the module at any given stage of learning. This enables selecting an appropriate compression level for incoming samples, while taking into account overall memory constraints and the current progress of the learned compression. Unlike previous methods, our approach does not require any pretraining, even on challenging datasets. We show that using AQM to replace standard episodic memory in continual learning settings leads to significant gains on continual learning benchmarks with images, LiDAR, and reinforcement learning agents.
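As a rough illustration of the kind of discrete (vector-quantized) auto-encoder building block the abstract refers to, the following is a minimal PyTorch sketch of a quantization layer that maps continuous encoder features to integer codebook indices, which is what such a memory would store. The class name VQLayer, the codebook sizes, and the 0.25 commitment weight are illustrative assumptions, not the paper's actual AQM implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VQLayer(nn.Module):
    """Quantizes continuous feature vectors to their nearest codebook entry."""

    def __init__(self, num_codes: int = 256, code_dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, code_dim) continuous encoder outputs.
        # Find the nearest codebook vector for each input (L2 distance).
        dists = torch.cdist(z_e, self.codebook.weight)  # (batch, num_codes)
        indices = dists.argmin(dim=1)                   # discrete codes to store
        z_q = self.codebook(indices)
        # Straight-through estimator: gradients flow to the encoder as if
        # quantization were the identity map.
        z_q_st = z_e + (z_q - z_e).detach()
        # Standard VQ losses: pull codebook entries toward encoder outputs,
        # and commit the encoder to its chosen codes.
        codebook_loss = F.mse_loss(z_q, z_e.detach())
        commitment_loss = F.mse_loss(z_e, z_q.detach())
        return z_q_st, indices, codebook_loss + 0.25 * commitment_loss

# Usage: compress a batch of feature vectors to integer indices.
layer = VQLayer()
z = torch.randn(8, 64)
z_q, codes, loss = layer(z)  # `codes` (8 integers) is what memory would store

Storing only the integer indices (plus a shared codebook) is what makes the stored representations far cheaper than raw samples; the adaptive aspect described in the abstract then amounts to choosing how aggressively to quantize each incoming sample given the memory budget and the current quality of the learned compression.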
