We propose a new set of challenging benchmark gym environments for testing single- and multi-agent reinforcement learning algorithms. The single-agent environments are based on a simple consumption-saving decision problem. In each period, an agent receives an exogenous positive income draw and chooses what fraction of that income to consume immediately for a reward, with the remainder saved and earning a return in subsequent periods. In the full multi-agent version of the problem, all agents' saving decisions jointly determine a price via market clearing. Agents must therefore learn the value of their actions conditional on the current state. This yields a challenging, potentially non-stationary environment in which each agent's actions materially affect the other agents, albeit only through a common observation. The environments will be released as open source via a GitHub repository.
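The single-agent dynamics described above can be sketched as a minimal gym-style environment. This is an illustrative assumption of the setup, not the paper's specification: the lognormal income distribution, log-utility reward, and fixed gross return `R` are all placeholders for whatever the released environments actually use.

```python
import math
import random


class ConsumptionSavingEnv:
    """Minimal sketch of a single-agent consumption-saving environment.

    Assumed dynamics (illustrative only): each period the agent observes an
    exogenous positive income draw plus accumulated savings, chooses the
    fraction of cash-on-hand to consume, earns log utility of consumption,
    and the remainder is saved at a fixed gross return R.
    """

    def __init__(self, R=1.03, seed=0):
        self.R = R  # gross return on savings (assumed constant here)
        self.rng = random.Random(seed)
        self.savings = 0.0
        self.income = 0.0

    def reset(self):
        self.savings = 0.0
        return self._observe()

    def _observe(self):
        # Exogenous positive income draw for the current period (assumed lognormal).
        self.income = self.rng.lognormvariate(0.0, 0.5)
        return (self.income, self.savings)

    def step(self, action):
        # Action is the fraction of cash-on-hand consumed, clipped to [0, 1].
        a = min(max(action, 0.0), 1.0)
        cash = self.income + self.R * self.savings
        consumption = a * cash
        self.savings = cash - consumption
        # Log utility of consumption; small epsilon avoids log(0).
        reward = math.log(consumption + 1e-8)
        return self._observe(), reward, False, {}
```

In the full multi-agent version, the fixed return `R` would instead be a price determined each period by market clearing over all agents' saving decisions, which is what makes the environment potentially non-stationary from any single agent's perspective.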