A core element of decision-making under uncertainty is feedback on the quality of the performed actions. However, in many applications, such feedback is restricted. For example, in recommendation systems, repeatedly asking the user to provide feedback on the quality of recommendations will annoy them. In this work, we formalize decision-making problems with a querying budget, where there is a (possibly time-dependent) hard limit on the number of reward queries allowed. Specifically, we focus on multi-armed bandits, linear contextual bandits, and reinforcement learning problems. We start by analyzing the performance of "greedy" algorithms that query a reward whenever they can. We show that in fully stochastic settings this performs surprisingly well, but in the presence of any adversity it might lead to linear regret. To overcome this issue, we propose the Confidence-Budget Matching (CBM) principle, which queries rewards when the confidence intervals are wider than the inverse square root of the available budget. We analyze the performance of CBM-based algorithms in different settings and show that they perform well in the presence of adversity in the contexts, initial states, and budgets.
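To make the querying rule concrete, here is a minimal sketch (not the authors' implementation) of how CBM could be instantiated on top of a UCB-style multi-armed bandit with Hoeffding confidence widths; the names cbm_ucb, reward_fn, and budget_remaining are hypothetical and chosen only for illustration:

    import math

    def cbm_ucb(num_arms, horizon, budget, reward_fn, delta=0.05):
        """Sketch of a UCB bandit that decides when to query rewards via the CBM rule."""
        means = [0.0] * num_arms    # empirical means of queried rewards
        counts = [0] * num_arms     # number of queried rewards per arm
        budget_remaining = budget

        def width(a):
            # Hoeffding-style confidence width; infinite before the first query.
            if counts[a] == 0:
                return float("inf")
            return math.sqrt(2.0 * math.log(1.0 / delta) / counts[a])

        for _ in range(horizon):
            # Optimistic (UCB) arm selection.
            arm = max(range(num_arms), key=lambda a: means[a] + width(a))

            # CBM querying rule: ask for the reward only when the confidence
            # interval is wider than the inverse square root of the remaining budget.
            if budget_remaining > 0 and width(arm) > 1.0 / math.sqrt(budget_remaining):
                reward = reward_fn(arm)          # observing the reward consumes one query
                budget_remaining -= 1
                counts[arm] += 1
                means[arm] += (reward - means[arm]) / counts[arm]
            # Otherwise the arm is still played, but its reward is never observed.
        return means

The intent of the rule, as described in the abstract, is that queries are spent only while the uncertainty about the chosen arm still exceeds what the remaining budget could plausibly resolve; once the confidence width drops below 1/sqrt(budget_remaining), further queries are saved.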
Author Information
Yonathan Efroni (Microsoft Research, New York)
Nadav Merlis (Technion)
Aadirupa Saha (Microsoft Research)
Bio: Aadirupa Saha is currently a visiting faculty member at the Toyota Technological Institute at Chicago (TTIC). She obtained her PhD from the Department of Computer Science, Indian Institute of Science, Bangalore, advised by Aditya Gopalan and Chiranjib Bhattacharyya, and spent two years at Microsoft Research New York City as a postdoctoral researcher. During her PhD, Aadirupa interned at Microsoft Research (Bangalore), Inria (Paris), and Google AI (Mountain View). Her research interests include Bandits, Reinforcement Learning, Optimization, Learning Theory, and Algorithms. She has organized various workshops and tutorials and has served as a reviewer for top ML conferences. Research Interests: Machine Learning Theory (specifically Online Learning, Bandits, Reinforcement Learning), Optimization, Game Theory, Algorithms. More recently, she has been interested in exploring ML problems at the intersection of Fairness, Privacy, Game Theory, and Mechanism Design.
Shie Mannor (Technion)
Related Events (a corresponding poster, oral, or spotlight)
-
2021 Spotlight: Confidence-Budget Matching for Sequential Budgeted Learning »
Thu. Jul 22nd 12:25 -- 12:30 AM
More from the Same Authors
-
2021 : Minimax Regret for Stochastic Shortest Path »
Alon Cohen · Yonathan Efroni · Yishay Mansour · Aviv Rosenberg -
2021 : Provable RL with Exogenous Distractors via Multistep Inverse Dynamics »
Yonathan Efroni · Dipendra Misra · Akshay Krishnamurthy · Alekh Agarwal · John Langford -
2021 : Sparsity in the Partially Controllable LQR »
Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang -
2023 : Optimization or Architecture: What Matters in Non-Linear Filtering? »
Ido Greenberg · Netanel Yannay · Shie Mannor -
2023 : Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation »
Thomas Kleine Büning · Aadirupa Saha · Christos Dimitrakakis · Haifeng Xu -
2023 Workshop: The Many Facets of Preference-Based Learning »
Aadirupa Saha · Mohammad Ghavamzadeh · Robert Busa-Fekete · Branislav Kveton · Viktor Bengs -
2023 Poster: Learning to Initiate and Reason in Event-Driven Cascading Processes »
Yuval Atzmon · Eli Meirom · Shie Mannor · Gal Chechik -
2023 Poster: Federated Online and Bandit Convex Optimization »
Kumar Kshitij Patel · Lingxiao Wang · Aadirupa Saha · Nati Srebro -
2023 Poster: Learning Hidden Markov Models When the Locations of Missing Observations are Unknown »
Binyamin Perets · Mark Kozdoba · Shie Mannor -
2023 Poster: PPG Reloaded: An Empirical Study on What Matters in Phasic Policy Gradient »
Kaixin Wang · Zhou Daquan · Jiashi Feng · Shie Mannor -
2023 Poster: Representation-Driven Reinforcement Learning »
Ofir Nabati · Guy Tennenholtz · Shie Mannor -
2023 Poster: Principled Offline RL in the Presence of Rich Exogenous Information »
Riashat Islam · Manan Tomar · Alex Lamb · Yonathan Efroni · Hongyu Zang · Aniket Didolkar · Dipendra Misra · Xin Li · Harm van Seijen · Remi Tachet des Combes · John Langford -
2023 Poster: Reward-Mixing MDPs with Few Latent Contexts are Learnable »
Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor -
2022 Workshop: Complex feedback in online learning »
Rémy Degenne · Pierre Gaillard · Wouter Koolen · Aadirupa Saha -
2022 Poster: Versatile Dueling Bandits: Best-of-both World Analyses for Learning from Relative Preferences »
Aadirupa Saha · Pierre Gaillard -
2022 Spotlight: Versatile Dueling Bandits: Best-of-both World Analyses for Learning from Relative Preferences »
Aadirupa Saha · Pierre Gaillard -
2022 Poster: Analysis of Stochastic Processes through Replay Buffers »
Shirli Di-Castro Shashua · Shie Mannor · Dotan Di Castro -
2022 Poster: Sparsity in Partially Controllable Linear Systems »
Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang -
2022 Poster: Actor-Critic based Improper Reinforcement Learning »
Mohammadi Zaki · Avi Mohan · Aditya Gopalan · Shie Mannor -
2022 Poster: Optimizing Tensor Network Contraction Using Reinforcement Learning »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2022 Poster: The Geometry of Robust Value Functions »
Kaixin Wang · Navdeep Kumar · Kuangqi Zhou · Bryan Hooi · Jiashi Feng · Shie Mannor -
2022 Spotlight: Sparsity in Partially Controllable Linear Systems »
Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang -
2022 Spotlight: The Geometry of Robust Value Functions »
Kaixin Wang · Navdeep Kumar · Kuangqi Zhou · Bryan Hooi · Jiashi Feng · Shie Mannor -
2022 Spotlight: Actor-Critic based Improper Reinforcement Learning »
Mohammadi Zaki · Avi Mohan · Aditya Gopalan · Shie Mannor -
2022 Spotlight: Analysis of Stochastic Processes through Replay Buffers »
Shirli Di-Castro Shashua · Shie Mannor · Dotan Di Castro -
2022 Spotlight: Optimizing Tensor Network Contraction Using Reinforcement Learning »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2022 Poster: Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms »
Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor -
2022 Poster: Provable Reinforcement Learning with a Short-Term Memory »
Yonathan Efroni · Chi Jin · Akshay Krishnamurthy · Sobhan Miryoosefi -
2022 Poster: Stochastic Contextual Dueling Bandits under Linear Stochastic Transitivity Models »
Viktor Bengs · Aadirupa Saha · Eyke Hüllermeier -
2022 Poster: Optimal and Efficient Dynamic Regret Algorithms for Non-Stationary Dueling Bandits »
Aadirupa Saha · Shubham Gupta -
2022 Spotlight: Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms »
Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor -
2022 Spotlight: Optimal and Efficient Dynamic Regret Algorithms for Non-Stationary Dueling Bandits »
Aadirupa Saha · Shubham Gupta -
2022 Spotlight: Stochastic Contextual Dueling Bandits under Linear Stochastic Transitivity Models »
Viktor Bengs · Aadirupa Saha · Eyke Hüllermeier -
2022 Spotlight: Provable Reinforcement Learning with a Short-Term Memory »
Yonathan Efroni · Chi Jin · Akshay Krishnamurthy · Sobhan Miryoosefi -
2021 : Invited Speaker: Shie Mannor: Lenient Regret »
Shie Mannor -
2021 : Sparsity in the Partially Controllable LQR »
Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang -
2021 : RL + Operations Research Panel »
Jim Dai · Fei Fang · Shie Mannor · Yuandong Tian · Zhiwei (Tony) Qin · Zongqing Lu -
2021 Poster: Detecting Rewards Deterioration in Episodic Reinforcement Learning »
Ido Greenberg · Shie Mannor -
2021 Poster: Online Limited Memory Neural-Linear Bandits with Likelihood Matching »
Ofir Nabati · Tom Zahavy · Shie Mannor -
2021 Spotlight: Online Limited Memory Neural-Linear Bandits with Likelihood Matching »
Ofir Nabati · Tom Zahavy · Shie Mannor -
2021 Spotlight: Detecting Rewards Deterioration in Episodic Reinforcement Learning »
Ido Greenberg · Shie Mannor -
2021 Poster: Adversarial Dueling Bandits »
Aadirupa Saha · Tomer Koren · Yishay Mansour -
2021 Spotlight: Adversarial Dueling Bandits »
Aadirupa Saha · Tomer Koren · Yishay Mansour -
2021 Poster: Ensemble Bootstrapping for Q-Learning »
Oren Peer · Chen Tessler · Nadav Merlis · Ron Meir -
2021 Poster: Value Iteration in Continuous Actions, States and Time »
Michael Lutter · Shie Mannor · Jan Peters · Dieter Fox · Animesh Garg -
2021 Spotlight: Value Iteration in Continuous Actions, States and Time »
Michael Lutter · Shie Mannor · Jan Peters · Dieter Fox · Animesh Garg -
2021 Spotlight: Ensemble Bootstrapping for Q-Learning »
Oren Peer · Chen Tessler · Nadav Merlis · Ron Meir -
2021 Poster: Optimal regret algorithm for Pseudo-1d Bandit Convex Optimization »
Aadirupa Saha · Nagarajan Natarajan · Praneeth Netrapalli · Prateek Jain -
2021 Spotlight: Optimal regret algorithm for Pseudo-1d Bandit Convex Optimization »
Aadirupa Saha · Nagarajan Natarajan · Praneeth Netrapalli · Prateek Jain -
2021 Poster: Controlling Graph Dynamics with Reinforcement Learning and Graph Neural Networks »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2021 Poster: Dueling Convex Optimization »
Aadirupa Saha · Tomer Koren · Yishay Mansour -
2021 Spotlight: Controlling Graph Dynamics with Reinforcement Learning and Graph Neural Networks »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2021 Spotlight: Dueling Convex Optimization »
Aadirupa Saha · Tomer Koren · Yishay Mansour -
2020 Poster: Optimistic Policy Optimization with Bandit Feedback »
Lior Shani · Yonathan Efroni · Aviv Rosenberg · Shie Mannor -
2020 Poster: Topic Modeling via Full Dependence Mixtures »
Dan Fisher · Mark Kozdoba · Shie Mannor -
2020 Poster: From PAC to Instance-Optimal Sample Complexity in the Plackett-Luce Model »
Aadirupa Saha · Aditya Gopalan -
2020 Poster: Improved Sleeping Bandits with Stochastic Action Sets and Adversarial Rewards »
Aadirupa Saha · Pierre Gaillard · Michal Valko -
2020 Poster: Multi-step Greedy Reinforcement Learning Algorithms »
Manan Tomar · Yonathan Efroni · Mohammad Ghavamzadeh -
2019 Poster: Exploration Conscious Reinforcement Learning Revisited »
Lior Shani · Yonathan Efroni · Shie Mannor -
2019 Poster: Action Robust Reinforcement Learning and Applications in Continuous Control »
Chen Tessler · Yonathan Efroni · Shie Mannor -
2019 Poster: The Natural Language of Actions »
Guy Tennenholtz · Shie Mannor -
2019 Oral: Exploration Conscious Reinforcement Learning Revisited »
Lior Shani · Yonathan Efroni · Shie Mannor -
2019 Oral: The Natural Language of Actions »
Guy Tennenholtz · Shie Mannor -
2019 Poster: Nonlinear Distributional Gradient Temporal-Difference Learning »
Chao Qu · Shie Mannor · Huan Xu -
2019 Oral: Action Robust Reinforcement Learning and Applications in Continuous Control »
Chen Tessler · Yonathan Efroni · Shie Mannor -
2019 Oral: Nonlinear Distributional Gradient Temporal-Difference Learning »
Chao Qu · Shie Mannor · Huan Xu -
2018 Poster: Beyond the One-Step Greedy Approach in Reinforcement Learning »
Yonathan Efroni · Gal Dalal · Bruno Scherrer · Shie Mannor -
2018 Oral: Beyond the One-Step Greedy Approach in Reinforcement Learning »
Yonathan Efroni · Gal Dalal · Bruno Scherrer · Shie Mannor -
2017 Workshop: Lifelong Learning: A Reinforcement Learning Approach »
Sarath Chandar · Balaraman Ravindran · Daniel J. Mankowitz · Shie Mannor · Tom Zahavy -
2017 Poster: Consistent On-Line Off-Policy Evaluation »
Assaf Hallak · Shie Mannor -
2017 Talk: Consistent On-Line Off-Policy Evaluation »
Assaf Hallak · Shie Mannor -
2017 Poster: End-to-End Differentiable Adversarial Imitation Learning »
Nir Baram · Oron Anschel · Itai Caspi · Shie Mannor -
2017 Poster: Multi-objective Bandits: Optimizing the Generalized Gini Index »
Robert Busa-Fekete · Balazs Szorenyi · Paul Weng · Shie Mannor -
2017 Talk: End-to-End Differentiable Adversarial Imitation Learning »
Nir Baram · Oron Anschel · Itai Caspi · Shie Mannor -
2017 Talk: Multi-objective Bandits: Optimizing the Generalized Gini Index »
Robert Busa-Fekete · Balazs Szorenyi · Paul Weng · Shie Mannor