The bias of the sample means of the arms in multi-armed bandits is an important issue in adaptive data analysis that has recently received considerable attention in the literature. Existing results relate in precise ways the sign and magnitude of the bias to various sources of data adaptivity, but do not apply to the conditional inference setting in which the sample means are computed only if some specific conditions are satisfied. In this paper, we characterize the sign of the conditional bias of monotone functions of the rewards, including the sample mean. Our results hold for arbitrary conditioning events and leverage natural monotonicity properties of the data collection policy. We further demonstrate, through several examples from sequential testing and best arm identification, that the sign of the conditional and marginal bias of the sample mean of an arm can be different, depending on the conditioning event. Our analysis offers new and interesting perspectives on the subtleties of assessing the bias in data adaptive settings.
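The phenomenon the abstract describes can be seen in a toy Monte Carlo sketch (an illustration under simple assumptions, not the paper's analysis): with two identical mean-zero arms and a greedy data collection policy, the sample mean of an arm is negatively biased marginally, yet positively biased conditional on that arm looking empirically best at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, horizon = 20000, 20

marginal = []      # sample mean of arm 0 across all runs
conditional = []   # same, but only on runs where arm 0 is empirically best

for _ in range(n_runs):
    sums = np.zeros(2)
    counts = np.zeros(2)
    # Pull each arm once to initialize.
    for a in (0, 1):
        sums[a] += rng.normal(0.0, 1.0)
        counts[a] += 1
    # Greedy policy: always pull the arm with the higher sample mean.
    for _ in range(horizon - 2):
        a = int(np.argmax(sums / counts))
        sums[a] += rng.normal(0.0, 1.0)
        counts[a] += 1
    means = sums / counts
    marginal.append(means[0])
    if means[0] > means[1]:  # conditioning event: arm 0 looks best
        conditional.append(means[0])

# True mean of both arms is 0, so these averages estimate the bias.
print("marginal bias of arm 0:   ", np.mean(marginal))     # negative
print("conditional bias of arm 0:", np.mean(conditional))  # positive
```

The sign flip arises because greedy sampling stops pulling arms that look bad (locking in low sample means, hence negative marginal bias), while conditioning on an arm being empirically best selects exactly the runs where its sample mean overshot the truth.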
Author Information
Jaehyeok Shin (Carnegie Mellon University)
Aaditya Ramdas (Carnegie Mellon University)
Aaditya Ramdas is an assistant professor in the Departments of Statistics and Machine Learning at Carnegie Mellon University. His research currently spans three major directions: (1) selective and simultaneous inference (interactive, structured, post-hoc control of false discovery/coverage rate, …), (2) sequential uncertainty quantification (confidence sequences, always-valid p-values, bias in bandits, …), and (3) assumption-free black-box predictive inference (conformal prediction, calibration, …).
Alessandro Rinaldo (Carnegie Mellon University)
More from the Same Authors
- 2021 : Improved Privacy Filters and Odometers: Time-Uniform Bounds in Privacy Composition »
  Justin Whitehouse · Aaditya Ramdas · Ryan Rogers · Steven Wu
- 2022 Workshop: Workshop on Distribution-Free Uncertainty Quantification »
  Anastasios Angelopoulos · Stephen Bates · Yixuan Li · Ryan Tibshirani · Aaditya Ramdas
- 2022 Poster: Generalized Results for the Existence and Consistency of the MLE in the Bradley-Terry-Luce Model »
  Heejong Bong · Alessandro Rinaldo
- 2022 Oral: Generalized Results for the Existence and Consistency of the MLE in the Bradley-Terry-Luce Model »
  Heejong Bong · Alessandro Rinaldo
- 2021 Workshop: Workshop on Distribution-Free Uncertainty Quantification »
  Anastasios Angelopoulos · Stephen Bates · Yixuan Li · Aaditya Ramdas · Ryan Tibshirani
- 2021 Poster: Off-Policy Confidence Sequences »
  Nikos Karampatziakis · Paul Mineiro · Aaditya Ramdas
- 2021 Spotlight: Off-Policy Confidence Sequences »
  Nikos Karampatziakis · Paul Mineiro · Aaditya Ramdas
- 2021 Poster: Distribution-Free Calibration Guarantees for Histogram Binning without Sample Splitting »
  Chirag Gupta · Aaditya Ramdas
- 2021 Spotlight: Distribution-Free Calibration Guarantees for Histogram Binning without Sample Splitting »
  Chirag Gupta · Aaditya Ramdas
- 2020 : "Uncertainty Quantification Using Martingales for Misspecified Gaussian Processes" »
  Aaditya Ramdas
- 2020 Poster: Online Control of the False Coverage Rate and False Sign Rate »
  Asaf Weinstein · Aaditya Ramdas
- 2020 Poster: Familywise Error Rate Control by Interactive Unmasking »
  Boyan Duan · Aaditya Ramdas · Larry Wasserman
- 2019 Poster: Uniform Convergence Rate of the Kernel Density Estimator Adaptive to Intrinsic Volume Dimension »
  Jisu Kim · Jaehyeok Shin · Alessandro Rinaldo · Larry Wasserman
- 2019 Oral: Uniform Convergence Rate of the Kernel Density Estimator Adaptive to Intrinsic Volume Dimension »
  Jisu Kim · Jaehyeok Shin · Alessandro Rinaldo · Larry Wasserman