Independence testing is a classical statistical problem that has been extensively studied in the batch setting, where the sample size is fixed before collecting data. However, practitioners often prefer procedures that adapt to the complexity of the problem at hand, rather than fixing the sample size in advance. Ideally, such procedures should (a) stop earlier on easy tasks (and later on harder tasks), hence making better use of available resources, and (b) continuously monitor the data and efficiently incorporate statistical evidence as new data arrive, while controlling the false alarm rate. Classical batch tests are not tailored to streaming data: valid inference after peeking at the data requires correcting for multiple testing, which results in low power. Following the principle of testing by betting, we design sequential kernelized independence tests that overcome these shortcomings. We exemplify our broad framework using bets inspired by kernelized dependence measures, e.g., the Hilbert-Schmidt independence criterion. Our test remains valid under non-i.i.d., time-varying settings. We demonstrate the power of our approaches on both simulated and real data.
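To illustrate the testing-by-betting principle described in the abstract, here is a minimal toy sketch of a sequential test. It uses a simple sign-agreement payoff rather than the paper's HSIC-inspired bets (the payoff, the fixed bet fraction `bet`, and the assumption that both streams are centered at zero are all simplifications introduced here for illustration). A gambler's wealth multiplies by `1 + bet * payoff` at each step; under the null of independence the payoff has zero mean, so the wealth process is a nonnegative martingale and Ville's inequality bounds the false alarm rate of the stopping rule "reject when wealth reaches 1/alpha" by alpha.

```python
import numpy as np

def sequential_independence_test(xs, ys, alpha=0.05, bet=0.2):
    """Toy test by betting (NOT the paper's kernelized bets).

    Wealth starts at 1 and multiplies by (1 + bet * payoff) per pair.
    payoff = sign(x) * sign(y) has mean zero under independence when
    each stream is (marginally) symmetric around 0, so wealth is a
    nonnegative supermartingale under the null; by Ville's inequality,
    P(wealth ever >= 1/alpha) <= alpha.
    """
    wealth = 1.0
    threshold = 1.0 / alpha
    for t, (x, y) in enumerate(zip(xs, ys), start=1):
        payoff = np.sign(x) * np.sign(y)  # in {-1, 0, +1}
        wealth *= 1.0 + bet * payoff      # bet < 1 keeps wealth > 0
        if wealth >= threshold:
            return t, wealth              # reject H0 at time t
    return None, wealth                   # evidence never sufficed

# Dependent streams: wealth should grow and trigger an early rejection.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = 0.8 * x + 0.6 * rng.standard_normal(2000)
t_reject, w = sequential_independence_test(x, y)
```

Under dependence the sign-agreement payoff has positive mean, so the wealth grows exponentially and the test stops early; under independence it can be monitored forever without inflating the false alarm rate, which is exactly the anytime-validity the abstract contrasts with batch tests.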
Author Information
Aleksandr Podkopaev (Carnegie Mellon University)
Patrick Bloebaum (Amazon Web Services)
Shiva Kasiviswanathan (Amazon)
Aaditya Ramdas (Carnegie Mellon University)
More from the Same Authors
-
2023 : Interventional and Counterfactual Inference with Diffusion Models »
Patrick Chao · Patrick Bloebaum · Shiva Kasiviswanathan -
2023 Poster: Fully-Adaptive Composition in Differential Privacy »
Justin Whitehouse · Aaditya Ramdas · Ryan Rogers · Steven Wu -
2023 Poster: Thompson Sampling with Diffusion Generative Prior »
Yu-Guan Hsieh · Shiva Kasiviswanathan · Branislav Kveton · Patrick Bloebaum -
2023 Oral: Nonparametric Extensions of Randomized Response for Private Confidence Sets »
Ian Waudby-Smith · Steven Wu · Aaditya Ramdas -
2023 Poster: Online Platt Scaling with Calibeating »
Chirag Gupta · Aaditya Ramdas -
2023 Poster: Nonparametric Extensions of Randomized Response for Private Confidence Sets »
Ian Waudby-Smith · Steven Wu · Aaditya Ramdas -
2023 Poster: Sequential Changepoint Detection via Backward Confidence Sequences »
Shubhanshu Shekhar · Aaditya Ramdas -
2022 Workshop: Workshop on Distribution-Free Uncertainty Quantification »
Anastasios Angelopoulos · Stephen Bates · Sharon Li · Ryan Tibshirani · Aaditya Ramdas -
2022 Poster: On Measuring Causal Contributions via do-interventions »
Yonghan Jung · Shiva Kasiviswanathan · Jin Tian · Dominik Janzing · Patrick Bloebaum · Elias Bareinboim -
2022 Poster: Causal structure-based root cause analysis of outliers »
Kailash Budhathoki · Lenon Minorics · Patrick Bloebaum · Dominik Janzing -
2022 Spotlight: Causal structure-based root cause analysis of outliers »
Kailash Budhathoki · Lenon Minorics · Patrick Bloebaum · Dominik Janzing -
2022 Spotlight: On Measuring Causal Contributions via do-interventions »
Yonghan Jung · Shiva Kasiviswanathan · Jin Tian · Dominik Janzing · Patrick Bloebaum · Elias Bareinboim -
2021 Workshop: Workshop on Distribution-Free Uncertainty Quantification »
Anastasios Angelopoulos · Stephen Bates · Sharon Li · Aaditya Ramdas · Ryan Tibshirani