Many resource management problems require sequential decision-making under uncertainty, where the only uncertainty affecting the decision outcomes comes from exogenous variables outside the control of the decision-maker. We model these problems as Exo-MDPs (Markov Decision Processes with Exogenous Inputs) and design a class of data-efficient algorithms for them, termed Hindsight Learning (HL). Our HL algorithms achieve data efficiency by leveraging a key insight: given samples of the exogenous variables, past decisions can be revisited in hindsight to infer counterfactual consequences, which in turn accelerate policy improvements. We compare HL against classic baselines on the multi-secretary and airline revenue management problems. We also scale our algorithms to a business-critical cloud resource management problem -- allocating Virtual Machines (VMs) to physical machines -- and simulate their performance with real datasets from a large public cloud provider. We find that HL algorithms outperform both domain-specific heuristics and state-of-the-art reinforcement learning methods.
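A minimal sketch of the hindsight idea on a toy multi-secretary instance: once the exogenous trace (the candidate values) is revealed, the best decisions in hindsight are simply the top-`budget` candidates, and their total value upper-bounds what any online policy could have earned on that trace. The function names and the fixed-threshold online policy below are illustrative assumptions, not the paper's HL algorithm.

```python
import random


def multi_secretary_trace(T=20, seed=0):
    """Sample exogenous inputs: candidate values arriving over T steps."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(T)]


def online_threshold_policy(values, budget, threshold=0.7):
    """A simple online policy: hire any candidate above a fixed threshold,
    until the hiring budget runs out."""
    reward, remaining = 0.0, budget
    for v in values:
        if remaining > 0 and v >= threshold:
            reward += v
            remaining -= 1
    return reward


def hindsight_value(values, budget):
    """With the full exogenous trace revealed, the optimal choice is
    simply the top-`budget` candidates."""
    return sum(sorted(values, reverse=True)[:budget])


trace = multi_secretary_trace()
online = online_threshold_policy(trace, budget=3)
hindsight = hindsight_value(trace, budget=3)
# The hindsight value upper-bounds any online policy on the same trace;
# the gap is the signal HL-style methods exploit for policy improvement.
assert online <= hindsight
```

In this sketch, replaying past exogenous traces lets us score counterfactual action sequences without further environment interaction, which is the source of the data efficiency the abstract describes.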
Author Information
Sean R. Sinclair (Cornell University)
Felipe Vieira Frujeri (Microsoft)
Ching-An Cheng (Microsoft Research)
Luke Marshall
Hugo Barbalho (Microsoft Research)
Jingling Li (University of Maryland, College Park)
Jennifer Neville (Purdue University)
Ishai Menache (Technion - Israel Institute of Technology)
Adith Swaminathan (Microsoft Research)
More from the Same Authors
- 2023: Towards Modular Machine Learning Pipelines »
  Aditya Modi · JIVAT NEET KAUR · Maggie Makar · Pavan Mallapragada · Amit Sharma · Emre Kiciman · Adith Swaminathan
- 2023: Survival Instinct in Offline Reinforcement Learning and Implicit Human Bias in Data »
  Anqi Li · Dipendra Misra · Andrey Kolobov · Ching-An Cheng
- 2023 Poster: Provable Reset-free Reinforcement Learning by No-Regret Reduction »
  Hoai-An Nguyen · Ching-An Cheng
- 2023 Poster: MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations »
  Anqi Li · Byron Boots · Ching-An Cheng
- 2022 Poster: Adversarially Trained Actor Critic for Offline Reinforcement Learning »
  Ching-An Cheng · Tengyang Xie · Nan Jiang · Alekh Agarwal
- 2022 Oral: Adversarially Trained Actor Critic for Offline Reinforcement Learning »
  Ching-An Cheng · Tengyang Xie · Nan Jiang · Alekh Agarwal
- 2021 Poster: A Collective Learning Framework to Boost GNN Expressiveness for Node Classification »
  Mengyue Hang · Jennifer Neville · Bruno Ribeiro
- 2021 Spotlight: A Collective Learning Framework to Boost GNN Expressiveness for Node Classification »
  Mengyue Hang · Jennifer Neville · Bruno Ribeiro
- 2021 Poster: Safe Reinforcement Learning Using Advantage-Based Intervention »
  Nolan Wagener · Byron Boots · Ching-An Cheng
- 2021 Spotlight: Safe Reinforcement Learning Using Advantage-Based Intervention »
  Nolan Wagener · Byron Boots · Ching-An Cheng
- 2020: Short Talk 2 - Adaptive Discretization for Model-Based Reinforcement Learning »
  Sean R. Sinclair
- 2020 Poster: Learning Calibratable Policies using Programmatic Style-Consistency »
  Eric Zhan · Albert Tseng · Yisong Yue · Adith Swaminathan · Matthew Hausknecht
- 2020 Poster: Working Memory Graphs »
  Ricky Loynd · Roland Fernandez · Asli Celikyilmaz · Adith Swaminathan · Matthew Hausknecht
- 2019 Workshop: Real-world Sequential Decision Making: Reinforcement Learning and Beyond »
  Hoang Le · Yisong Yue · Adith Swaminathan · Byron Boots · Ching-An Cheng