We often want to infer user traits when personalizing interventions. Approaches like inverse RL can learn traits, formalized as parameters of a Markov Decision Process (MDP), but are data-intensive. Instead of inferring traits for individual users, we study the relationship between RL worlds and the set of user traits. We argue that understanding the breakdown of "user types" within a world, i.e., broad sets of traits that result in the same behavior, helps rapidly personalize interventions. We show that seemingly different RL worlds admit the same set of user types, and we formalize this observation as an equivalence relation defined on worlds. We show that these equivalence classes capture many different worlds, and we argue that their richness allows us to transfer insights on intervention design between toy and real worlds.
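The notion of a "user type" can be made concrete with a small sketch (our own illustrative construction, not code from the paper): here a trait is a discount factor gamma, the world is a hypothetical chain MDP with a small reward nearby and a large reward far away, and a user type is the set of gamma values that induce the same optimal behavior. The chain length and reward magnitudes below are arbitrary choices for illustration.

```python
def optimal_policy(gamma, n=7, small=0.3, large=1.0, sweeps=200):
    """Greedy policy in a toy chain MDP, as a function of the trait gamma.

    States 0..n-1; states 0 and n-1 are terminal. Stepping into state 0
    pays `small`, stepping into state n-1 pays `large`; other moves pay 0.
    """
    def reward(s):
        if s == 0:
            return small
        if s == n - 1:
            return large
        return 0.0

    # Value iteration (terminal values stay fixed at 0).
    V = [0.0] * n
    for _ in range(sweeps):
        for s in range(1, n - 1):
            V[s] = max(reward(s - 1) + gamma * V[s - 1],
                       reward(s + 1) + gamma * V[s + 1])

    # Greedy action per non-terminal state: 0 = step left, 1 = step right.
    return tuple(
        int(reward(s + 1) + gamma * V[s + 1] > reward(s - 1) + gamma * V[s - 1])
        for s in range(1, n - 1)
    )

# Group trait values by the behavior they induce: each group is one
# "user type" -- an equivalence class of traits within this world.
user_types = {}
for gamma in [round(0.05 * i, 2) for i in range(1, 20)]:
    user_types.setdefault(optimal_policy(gamma), []).append(gamma)

for policy, gammas in user_types.items():
    print(policy, gammas)
```

Impatient users (low gamma) head for the small nearby reward while patient ones head for the large distant reward, so a continuum of traits collapses into a handful of types; a second world that induces the same partition of traits would be equivalent in the sense sketched in the abstract.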
Author Information
Lars L. Ankile (Harvard University)
Brian Ham (Harvard University)
Kevin Mao
Eura Shin (Harvard University)
Siddharth Swaroop (Harvard University)
Finale Doshi-Velez (Harvard University)

Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc at the University of Cambridge as a Marshall Scholar, her PhD at MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability. Selected Additional Shinies: BECA recipient; AFOSR YIP and NSF CAREER recipient; Sloan Fellow; IEEE AI Top 10 to Watch.
Weiwei Pan (Harvard University)
More from the Same Authors
2021 : Promises and Pitfalls of Black-Box Concept Learning Models »
Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan
2021 : Prediction-focused Mixture Models »
Abhishek Sharma · Sanjana Narayanan · Catherine Zeng · Finale Doshi-Velez
2021 : Online structural kernel selection for mobile health »
Eura Shin · Predrag Klasnja · Susan Murphy · Finale Doshi-Velez
2021 : Interpretable learning-to-defer for sequential decision-making »
Shalmali Joshi · Sonali Parbhoo · Finale Doshi-Velez
2021 : On formalizing causal off-policy sequential decision-making »
Sonali Parbhoo · Shalmali Joshi · Finale Doshi-Velez
2022 : Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare »
Shengpu Tang · Maggie Makar · Michael Sjoding · Finale Doshi-Velez · Jenna Wiens
2022 : From Soft Trees to Hard Trees: Gains and Losses »
Xin Zeng · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
2022 : Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry »
Mark Penrod · Harrison Termotto · Varshini Reddy · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
2023 : Why do universal adversarial attacks work on large language models?: Geometry might be the answer »
Varshini Subhash · Anna Bialas · Siddharth Swaroop · Weiwei Pan · Finale Doshi-Velez
2023 : Implications of Gaussian process kernel mismatch for out-of-distribution data »
Beau Coker · Finale Doshi-Velez
2023 : Inverse Transition Learning for Characterizing Near-Optimal Dynamics in Offline Reinforcement Learning »
Leo Benac · Sonali Parbhoo · Finale Doshi-Velez
2023 : Discovering User Types: Characterization of User Traits by Task-Specific Behaviors in Reinforcement Learning »
Lars L. Ankile · Brian Ham · Kevin Mao · Eura Shin · Siddharth Swaroop · Finale Doshi-Velez · Weiwei Pan
2023 : Adaptive interventions for both accuracy and time in AI-assisted human decision making »
Siddharth Swaroop · Zana Buçinca · Krzysztof Gajos · Finale Doshi-Velez
2023 : SAP-sLDA: An Interpretable Interface for Exploring Unstructured Text »
Charumathi Badrinath · Weiwei Pan · Finale Doshi-Velez
2023 : Signature Activation: A Sparse Signal View for Holistic Saliency »
Jose Tello Ayala · Akl Fahed · Weiwei Pan · Eugene Pomerantsev · Patrick Ellinor · Anthony Philippakis · Finale Doshi-Velez
2023 : Implications of kernel mismatch for OOD data »
Beau Coker · Finale Doshi-Velez
2023 : Soft prompting might be a bug, not a feature »
Luke Bailey · Gustaf Ahdritz · Anat Kleiman · Siddharth Swaroop · Finale Doshi-Velez · Weiwei Pan
2023 : Memory Maps to Understand Models »
Dharmesh Tailor · Paul Chang · Siddharth Swaroop · Eric Nalisnick · Arno Solin · Emtiyaz Khan
2023 : Bayesian Inverse Transition Learning for Offline Settings »
Leo Benac · Sonali Parbhoo · Finale Doshi-Velez
2023 Poster: The Unintended Consequences of Discount Regularization: Improving Regularization in Certainty Equivalence Reinforcement Learning »
Sarah Rathnam · Sonali Parbhoo · Weiwei Pan · Susan Murphy · Finale Doshi-Velez
2023 Poster: Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables »
Yaniv Yacoby · Weiwei Pan · Finale Doshi-Velez
2022 : Responsible Decision-Making in Batch RL Settings »
Finale Doshi-Velez
2021 : RL Explainability & Interpretability Panel »
Ofra Amir · Finale Doshi-Velez · Alan Fern · Zachary Lipton · Omer Gottesman · Niranjani Prasad
2021 : [01:50 - 02:35 PM UTC] Invited Talk 3: Interpretability in High Dimensions: Concept Bottlenecks and Beyond »
Finale Doshi-Velez
2021 : Invited Oral 1: Q&A »
Siddharth Swaroop
2021 Poster: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement »
Andrew Ross · Finale Doshi-Velez
2021 Oral: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement »
Andrew Ross · Finale Doshi-Velez
2021 Poster: State Relevance for Off-Policy Evaluation »
Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez
2021 Spotlight: State Relevance for Off-Policy Evaluation »
Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez
2020 : Contributed Talk: Continual Deep Learning by Functional Regularisation of Memorable Past »
Siddharth Swaroop
2020 : Keynote #2 Finale Doshi-Velez »
Finale Doshi-Velez
2020 Poster: Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions »
Omer Gottesman · Joseph Futoma · Yao Liu · Sonali Parbhoo · Leo Celi · Emma Brunskill · Finale Doshi-Velez
2019 Poster: Combining parametric and nonparametric models for off-policy evaluation »
Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez
2019 Oral: Combining parametric and nonparametric models for off-policy evaluation »
Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez
2018 Poster: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning »
Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
2018 Poster: Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors »
Soumya Ghosh · Jiayu Yao · Finale Doshi-Velez
2018 Oral: Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors »
Soumya Ghosh · Jiayu Yao · Finale Doshi-Velez
2018 Oral: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning »
Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
2017 Tutorial: Interpretable Machine Learning »
Been Kim · Finale Doshi-Velez