This talk presents a new learned dynamic memory controller for organizing prior experiences in a way that is empirically useful across a range of downstream tasks. The controller supports logarithmic-time operations, so it can be integrated into existing statistical learning algorithms as an augmented memory unit without substantially increasing training or inference cost. It also supports optional reward-based reinforcement, which empirically yields consistent improvements. The controller is implemented as a reduction to online classification, allowing it to benefit directly from advances in representation learning and model architecture. This is joint work with Wen Sun, Hal Daumé III, John Langford, and Paul Mineiro, published at ICML 2019.
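To make the logarithmic-time claim concrete, below is a minimal toy sketch of one plausible design: a binary tree whose internal nodes hold linear routers that send each insert or query left or right, so both operations visit only one root-to-leaf path. This is an illustration, not the paper's method — all names (`MemoryTree`, `Node`, `insert`, `query`) are hypothetical, and the routers here are set by a simple heuristic hyperplane split rather than trained by the reduction to online classification described in the talk.

```python
class Node:
    """Tree node: internal nodes route by a linear score; leaves store memories."""
    def __init__(self, max_leaf):
        self.left = self.right = None
        self.w = None        # router direction (hypothetical heuristic, not learned)
        self.b = 0.0         # routing threshold
        self.items = []      # (key_vector, value) pairs stored at a leaf
        self.max_leaf = max_leaf

    def is_leaf(self):
        return self.left is None


def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))


class MemoryTree:
    """Toy routing-tree memory with O(log n)-style insert and query paths."""
    def __init__(self, max_leaf=4):
        self.root = Node(max_leaf)

    def insert(self, x, value):
        node = self.root
        while not node.is_leaf():
            node = node.left if dot(node.w, x) <= node.b else node.right
        node.items.append((x, value))
        if len(node.items) > node.max_leaf:
            self._split(node)

    def _split(self, node):
        # Heuristic router: hyperplane through the difference of two stored keys,
        # thresholded at the midpoint of the projections so both children are nonempty.
        a, c = node.items[0][0], node.items[-1][0]
        w = [ai - ci for ai, ci in zip(a, c)]
        scores = [dot(w, x) for x, _ in node.items]
        lo, hi = min(scores), max(scores)
        if lo == hi:
            return  # all keys project identically; keep the oversized leaf
        node.w, node.b = w, (lo + hi) / 2.0
        node.left, node.right = Node(node.max_leaf), Node(node.max_leaf)
        for (x, v), s in zip(node.items, scores):
            (node.left if s <= node.b else node.right).items.append((x, v))
        node.items = []

    def query(self, x):
        # Follow the routers to a single leaf, then return its closest memory.
        node = self.root
        while not node.is_leaf():
            node = node.left if dot(node.w, x) <= node.b else node.right
        if not node.items:
            return None
        return min(node.items,
                   key=lambda iv: sum((xi - yi) ** 2 for xi, yi in zip(x, iv[0])))


# Usage: insert a small grid of keyed memories, then retrieve near a query point.
tree = MemoryTree(max_leaf=4)
pts = [((float(i), float(j)), f"m{i}{j}") for i in range(4) for j in range(4)]
for x, v in pts:
    tree.insert(list(x), v)
res = tree.query([0.1, 0.1])
```

In the actual controller, each router would instead be an online classifier updated from (optionally reward-shaped) feedback, which is what makes the structure a reduction to online classification.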