We consider the experimental design problem in an online setting, an important practical task for reducing the variance of estimates in randomized experiments, which in turn yields greater precision and improved decision making. In this work, we present algorithms that build on recent advances in online discrepancy minimization and accommodate both arbitrary treatment probabilities and multiple treatments. The proposed algorithms are computationally efficient, minimize covariate imbalance, and incorporate randomization, which provides robustness to misspecification. We provide worst-case bounds on the expected mean squared error of the causal estimate and show that the proposed estimator is no worse than an implicit ridge regression; these bounds are within a logarithmic factor of the best known results for offline experimental design. We conclude with a detailed simulation study showing favorable results relative to complete randomization as well as to offline methods for experimental design whose time complexities exceed that of our algorithm, which depends only linearly on the number of observations, by polynomial factors.
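To illustrate the flavor of online discrepancy-minimizing assignment described in the abstract, the sketch below biases each unit's randomized treatment coin toward the arm that reduces a running covariate imbalance, in the spirit of self-balancing-walk algorithms. The function name, the parameter `alpha`, and the exact update rule are illustrative assumptions for this sketch, not the paper's algorithm.

```python
import numpy as np

def online_balanced_assignment(X, alpha=5.0, seed=0):
    """Assign arriving units to treatment (+1) or control (-1),
    tilting each randomized coin to shrink covariate imbalance.

    A minimal sketch of self-balancing-walk-style online
    discrepancy minimization; `alpha` and the update rule are
    assumptions for illustration, not the paper's exact method."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])             # running covariate imbalance
    z = np.empty(X.shape[0], dtype=int)  # assignments
    for t, x in enumerate(X):
        # Treatment probability shrinks when x aligns with the current
        # imbalance; clipping keeps it a valid probability in [0, 1].
        p = np.clip(0.5 * (1.0 - (w @ x) / alpha), 0.0, 1.0)
        z[t] = 1 if rng.random() < p else -1
        w += z[t] * x                    # update imbalance online
    return z

# Usage: each unit is processed once as it arrives (linear time),
# and randomization is retained because each assignment is a coin flip.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm covariates
z = online_balanced_assignment(X)
```

The key design point, as in the abstract, is that the assignment remains randomized (robustness to misspecification) while the coin is tilted to keep the covariate imbalance `w` small (variance reduction), and each unit is handled in O(d) time.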
Author Information
David Arbour (Adobe Research)
Drew Dimmery (University of Vienna)
Tung Mai (Adobe Research)
Anup Rao (Adobe Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Online Balanced Experimental Design
  Tue. Jul 19th through Wed the 20th, Room Hall E #602
More from the Same Authors
- 2021: Coresets for Classification – Simplified and Strengthened
  Anup Rao · Tung Mai · Cameron Musco
- 2022 Poster: One-Pass Algorithms for MAP Inference of Nonsymmetric Determinantal Point Processes
  Aravind Reddy · Ryan A. Rossi · Zhao Song · Anup Rao · Tung Mai · Nedim Lipka · Gang Wu · Eunyee Koh · Nesreen K Ahmed
- 2022 Spotlight: One-Pass Algorithms for MAP Inference of Nonsymmetric Determinantal Point Processes
  Aravind Reddy · Ryan A. Rossi · Zhao Song · Anup Rao · Tung Mai · Nedim Lipka · Gang Wu · Eunyee Koh · Nesreen K Ahmed
- 2021 Poster: Permutation Weighting
  David Arbour · Drew Dimmery · Arjun Sondhi
- 2021 Spotlight: Permutation Weighting
  David Arbour · Drew Dimmery · Arjun Sondhi
- 2021 Poster: Asymptotics of Ridge Regression in Convolutional Models
  Mojtaba Sahraee-Ardakan · Tung Mai · Anup Rao · Ryan A. Rossi · Sundeep Rangan · Alyson Fletcher
- 2021 Spotlight: Asymptotics of Ridge Regression in Convolutional Models
  Mojtaba Sahraee-Ardakan · Tung Mai · Anup Rao · Ryan A. Rossi · Sundeep Rangan · Alyson Fletcher
- 2021 Poster: Fundamental Tradeoffs in Distributionally Adversarial Training
  Mohammad Mehrabi · Adel Javanmard · Ryan A. Rossi · Anup Rao · Tung Mai
- 2021 Spotlight: Fundamental Tradeoffs in Distributionally Adversarial Training
  Mohammad Mehrabi · Adel Javanmard · Ryan A. Rossi · Anup Rao · Tung Mai