Poster
Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms
Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor
Motivated by online recommendation systems, we propose the problem of finding the optimal policy in multitask contextual bandits when a small fraction $\alpha < 1/2$ of tasks (users) are arbitrary and adversarial. The remaining good users share the same contextual bandit instance with $S$ contexts and $A$ actions (items). Naturally, whether a user is good or adversarial is not known in advance. The goal is to robustly learn the policy that maximizes rewards for good users with as few user interactions as possible. Without adversarial users, established results in collaborative filtering show that $O(1/\epsilon^2)$ per-user interactions suffice to learn a good policy, precisely because information can be shared across users. This parallelization gain is fundamentally altered by the presence of adversarial users: unless the number of users is super-polynomially large, we show a lower bound of $\tilde{\Omega}(\min(S,A) \cdot \alpha^2 / \epsilon^2)$ {\it per-user} interactions to learn an $\epsilon$-optimal policy for the good users. We then show that an $\tilde{O}(\min(S,A)\cdot \alpha/\epsilon^2)$ upper bound can be achieved by employing efficient robust mean estimators for both univariate and high-dimensional random variables. We also show that this bound can be improved depending on the distribution of contexts.
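The upper bound rests on aggregating per-user reward estimates with a robust mean estimator, so that an $\alpha$-fraction of coordinated adversarial users can only bias the shared estimate by a bounded amount. As a rough illustration only (not the paper's exact estimator), here is a minimal univariate sketch using a trimmed mean; the inputs estimates (per-user empirical mean rewards for one context-action pair) and alpha (the assumed corruption level) are hypothetical names introduced for this example.

import numpy as np

def trimmed_mean(estimates, alpha):
    # Sort per-user estimates and discard the top and bottom alpha-fraction,
    # so that up to an alpha-fraction of arbitrarily corrupted users has
    # limited influence on the aggregated value.
    x = np.sort(np.asarray(estimates, dtype=float))
    k = int(np.ceil(alpha * len(x)))  # users trimmed from each side
    kept = x[k:len(x) - k] if len(x) > 2 * k else x
    return float(kept.mean())

# Hypothetical example: 90 good users around a true mean reward of 0.6,
# plus 10 coordinated adversarial users reporting an extreme value.
rng = np.random.default_rng(0)
good = rng.normal(loc=0.6, scale=0.1, size=90)
bad = np.full(10, 10.0)
print(trimmed_mean(np.concatenate([good, bad]), alpha=0.1))

For the high-dimensional setting discussed in the abstract, the paper relies on robust mean estimators for vector-valued random variables; the scalar trimmed mean above is meant only to convey the aggregation idea.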
Author Information
Jeongyeol Kwon (The University of Texas at Austin)
Yonathan Efroni (Microsoft Research, New York)
Constantine Caramanis (University of Texas)
Shie Mannor (Technion)
Related Events (a corresponding poster, oral, or spotlight)
-
2022 Spotlight: Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms »
Tue. Jul 19th 06:45 -- 06:50 PM Room 310
More from the Same Authors
-
2021 : Minimax Regret for Stochastic Shortest Path »
Alon Cohen · Yonathan Efroni · Yishay Mansour · Aviv Rosenberg -
2021 : Provable RL with Exogenous Distractors via Multistep Inverse Dynamics »
Yonathan Efroni · Dipendra Misra · Akshay Krishnamurthy · Alekh Agarwal · John Langford -
2021 : Sparsity in the Partially Controllable LQR »
Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang -
2023 : Optimization or Architecture: What Matters in Non-Linear Filtering? »
Ido Greenberg · Netanel Yannay · Shie Mannor -
2023 Poster: Learning to Initiate and Reason in Event-Driven Cascading Processes »
Yuval Atzmon · Eli Meirom · Shie Mannor · Gal Chechik -
2023 Poster: Learning Hidden Markov Models When the Locations of Missing Observations are Unknown »
BINYAMIN PERETS · Mark Kozdoba · Shie Mannor -
2023 Poster: PPG Reloaded: An Empirical Study on What Matters in Phasic Policy Gradient »
Kaixin Wang · Zhou Daquan · Jiashi Feng · Shie Mannor -
2023 Poster: Representation-Driven Reinforcement Learning »
Ofir Nabati · Guy Tennenholtz · Shie Mannor -
2023 Poster: Principled Offline RL in the Presence of Rich Exogenous Information »
Riashat Islam · Manan Tomar · Alex Lamb · Yonathan Efroni · Hongyu Zang · Aniket Didolkar · Dipendra Misra · Xin Li · Harm Seijen · Remi Tachet des Combes · John Langford -
2023 Poster: Reward-Mixing MDPs with Few Latent Contexts are Learnable »
Jeongyeol Kwon · Yonathan Efroni · Constantine Caramanis · Shie Mannor -
2022 Poster: Asymptotically-Optimal Gaussian Bandits with Side Observations »
Alexia Atsidakou · Orestis Papadigenopoulos · Constantine Caramanis · Sujay Sanghavi · Sanjay Shakkottai -
2022 Spotlight: Asymptotically-Optimal Gaussian Bandits with Side Observations »
Alexia Atsidakou · Orestis Papadigenopoulos · Constantine Caramanis · Sujay Sanghavi · Sanjay Shakkottai -
2022 Poster: Sparsity in Partially Controllable Linear Systems »
Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang -
2022 Poster: Actor-Critic based Improper Reinforcement Learning »
Mohammadi Zaki · Avi Mohan · Aditya Gopalan · Shie Mannor -
2022 Poster: Optimizing Tensor Network Contraction Using Reinforcement Learning »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2022 Poster: The Geometry of Robust Value Functions »
Kaixin Wang · Navdeep Kumar · Kuangqi Zhou · Bryan Hooi · Jiashi Feng · Shie Mannor -
2022 Spotlight: Sparsity in Partially Controllable Linear Systems »
Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang -
2022 Spotlight: The Geometry of Robust Value Functions »
Kaixin Wang · Navdeep Kumar · Kuangqi Zhou · Bryan Hooi · Jiashi Feng · Shie Mannor -
2022 Spotlight: Actor-Critic based Improper Reinforcement Learning »
Mohammadi Zaki · Avi Mohan · Aditya Gopalan · Shie Mannor -
2022 Spotlight: Optimizing Tensor Network Contraction Using Reinforcement Learning »
Eli Meirom · Haggai Maron · Shie Mannor · Gal Chechik -
2022 Poster: Provable Reinforcement Learning with a Short-Term Memory »
Yonathan Efroni · Chi Jin · Akshay Krishnamurthy · Sobhan Miryoosefi -
2022 Spotlight: Provable Reinforcement Learning with a Short-Term Memory »
Yonathan Efroni · Chi Jin · Akshay Krishnamurthy · Sobhan Miryoosefi -
2021 : Invited Speaker: Shie Mannor: Lenient Regret »
Shie Mannor -
2021 : Sparsity in the Partially Controllable LQR »
Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang -
2021 Poster: Confidence-Budget Matching for Sequential Budgeted Learning »
Yonathan Efroni · Nadav Merlis · Aadirupa Saha · Shie Mannor -
2021 Poster: Combinatorial Blocking Bandits with Stochastic Delays »
Alexia Atsidakou · Orestis Papadigenopoulos · Soumya Basu · Constantine Caramanis · Sanjay Shakkottai -
2021 Spotlight: Combinatorial Blocking Bandits with Stochastic Delays »
Alexia Atsidakou · Orestis Papadigenopoulos · Soumya Basu · Constantine Caramanis · Sanjay Shakkottai -
2021 Spotlight: Confidence-Budget Matching for Sequential Budgeted Learning »
Yonathan Efroni · Nadav Merlis · Aadirupa Saha · Shie Mannor -
2020 Poster: Optimistic Policy Optimization with Bandit Feedback »
Lior Shani · Yonathan Efroni · Aviv Rosenberg · Shie Mannor -
2020 Poster: Learning Mixtures of Graphs from Epidemic Cascades »
Jessica Hoffmann · Soumya Basu · Surbhi Goel · Constantine Caramanis -
2020 Poster: Multi-step Greedy Reinforcement Learning Algorithms »
Manan Tomar · Yonathan Efroni · Mohammad Ghavamzadeh -
2019 Poster: Robust Estimation of Tree Structured Gaussian Graphical Models »
Ashish Katiyar · Jessica Hoffmann · Constantine Caramanis -
2019 Oral: Robust Estimation of Tree Structured Gaussian Graphical Models »
Ashish Katiyar · Jessica Hoffmann · Constantine Caramanis -
2019 Poster: Exploration Conscious Reinforcement Learning Revisited »
Lior Shani · Yonathan Efroni · Shie Mannor -
2019 Poster: Action Robust Reinforcement Learning and Applications in Continuous Control »
Chen Tessler · Yonathan Efroni · Shie Mannor -
2019 Oral: Exploration Conscious Reinforcement Learning Revisited »
Lior Shani · Yonathan Efroni · Shie Mannor -
2019 Oral: Action Robust Reinforcement Learning and Applications in Continuous Control »
Chen Tessler · Yonathan Efroni · Shie Mannor -
2018 Poster: Beyond the One-Step Greedy Approach in Reinforcement Learning »
Yonathan Efroni · Gal Dalal · Bruno Scherrer · Shie Mannor -
2018 Oral: Beyond the One-Step Greedy Approach in Reinforcement Learning »
Yonathan Efroni · Gal Dalal · Bruno Scherrer · Shie Mannor -
2017 Workshop: Lifelong Learning: A Reinforcement Learning Approach »
Sarath Chandar · Balaraman Ravindran · Daniel J. Mankowitz · Shie Mannor · Tom Zahavy -
2017 Poster: Consistent On-Line Off-Policy Evaluation »
Assaf Hallak · Shie Mannor -
2017 Talk: Consistent On-Line Off-Policy Evaluation »
Assaf Hallak · Shie Mannor -
2017 Poster: End-to-End Differentiable Adversarial Imitation Learning »
Nir Baram · Oron Anschel · Itai Caspi · Shie Mannor -
2017 Talk: End-to-End Differentiable Adversarial Imitation Learning »
Nir Baram · Oron Anschel · Itai Caspi · Shie Mannor