Modeling the time series of high-dimensional, longitudinal data is important for predicting patient disease progression. However, existing neural-network-based approaches that learn representations of patient state, while very flexible, are susceptible to overfitting. We propose a deep generative model that makes use of a novel attention-based neural architecture inspired by the physics of how treatments affect disease state. The result is a scalable and accurate model of high-dimensional patient biomarkers as they vary over time. Our proposed model yields significant improvements in generalization and, on real-world clinical data, provides interpretable insights into the dynamics of cancer progression.
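The abstract describes a latent state-space model in which an attention mechanism, inspired by pharmacodynamics, determines how treatments move the patient's disease state, which is then emitted as high-dimensional biomarkers. The following is a highly simplified sketch of that general idea, not the paper's actual architecture: all dimensions, parameter names, and the specific mixture-of-mechanisms form are hypothetical, and a real model would learn these parameters and include inference over the latent states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: latent state, treatment, biomarker, mechanism count.
DZ, DU, DX, K = 4, 2, 6, 3

# Randomly initialized parameters for illustration (a trained model learns these).
A = [rng.normal(scale=0.1, size=(DZ, DZ)) for _ in range(K)]  # per-mechanism state transitions
B = [rng.normal(scale=0.1, size=(DZ, DU)) for _ in range(K)]  # per-mechanism treatment effects
W = rng.normal(size=(K, DU))                                  # attention projection over treatments
C = rng.normal(size=(DX, DZ))                                 # linear emission to biomarkers

def softmax(v):
    e = np.exp(v - v.max())  # subtract max for numerical stability
    return e / e.sum()

def step(z, u):
    """One transition: attention over mechanisms decides how treatment u moves state z."""
    attn = softmax(W @ u)  # (K,) weights over mechanism components
    return sum(a * (A_k @ z + B_k @ u) for a, A_k, B_k in zip(attn, A, B))

def rollout(z0, treatments):
    """Simulate a latent trajectory and the mean of the emitted biomarkers."""
    z, states, obs = z0, [], []
    for u in treatments:
        z = step(z, u)
        states.append(z)
        obs.append(C @ z)  # biomarker mean at this time step
    return np.stack(states), np.stack(obs)

z0 = rng.normal(size=DZ)
treatments = rng.normal(size=(10, DU))  # 10 time steps of treatment covariates
states, biomarkers = rollout(z0, treatments)
print(states.shape, biomarkers.shape)  # (10, 4) (10, 6)
```

The interpretability claim in the abstract corresponds, in a sketch like this, to inspecting the attention weights: they indicate which transition mechanism the model believes a given treatment engages at each time step.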
Author Information
Zeshan Hussain (MIT)
Rahul G. Krishnan (Microsoft Research)
David Sontag (Massachusetts Institute of Technology)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Neural Pharmacodynamic State Space Modeling »
  Wed. Jul 21st 02:40 -- 02:45 AM
More from the Same Authors
- 2022 : Evaluating Robustness to Dataset Shift via Parametric Robustness Sets »
  Michael Oberst · Nikolaj Thams · David Sontag
- 2022 : Evaluating Robustness to Dataset Shift via Parametric Robustness Sets »
  Nikolaj Thams · Michael Oberst · David Sontag
- 2022 Poster: Sample Efficient Learning of Predictors that Complement Humans »
  Mohammad-Amin Charusaie · Hussein Mozannar · David Sontag · Samira Samadi
- 2022 Poster: Co-training Improves Prompt-based Learning for Large Language Models »
  Hunter Lang · Monica Agrawal · Yoon Kim · David Sontag
- 2022 Spotlight: Sample Efficient Learning of Predictors that Complement Humans »
  Mohammad-Amin Charusaie · Hussein Mozannar · David Sontag · Samira Samadi
- 2022 Spotlight: Co-training Improves Prompt-based Learning for Large Language Models »
  Hunter Lang · Monica Agrawal · Yoon Kim · David Sontag
- 2021 Poster: Regularizing towards Causal Invariance: Linear Models with Proxies »
  Michael Oberst · Nikolaj Thams · Jonas Peters · David Sontag
- 2021 Poster: Graph Cuts Always Find a Global Optimum for Potts Models (With a Catch) »
  Hunter Lang · David Sontag · Aravindan Vijayaraghavan
- 2021 Spotlight: Regularizing towards Causal Invariance: Linear Models with Proxies »
  Michael Oberst · Nikolaj Thams · Jonas Peters · David Sontag
- 2021 Oral: Graph Cuts Always Find a Global Optimum for Potts Models (With a Catch) »
  Hunter Lang · David Sontag · Aravindan Vijayaraghavan
- 2020 Poster: Estimation of Bounds on Potential Outcomes For Decision Making »
  Maggie Makar · Fredrik Johansson · John Guttag · David Sontag
- 2020 Poster: Empirical Study of the Benefits of Overparameterization in Learning Latent Variable Models »
  Rares-Darius Buhai · Yoni Halpern · Yoon Kim · Andrej Risteski · David Sontag
- 2020 Poster: Consistent Estimators for Learning to Defer to an Expert »
  Hussein Mozannar · David Sontag
- 2019 Poster: Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models »
  Michael Oberst · David Sontag
- 2019 Oral: Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models »
  Michael Oberst · David Sontag
- 2018 Poster: Semi-Amortized Variational Autoencoders »
  Yoon Kim · Sam Wiseman · Andrew Miller · David Sontag · Alexander Rush
- 2018 Oral: Semi-Amortized Variational Autoencoders »
  Yoon Kim · Sam Wiseman · Andrew Miller · David Sontag · Alexander Rush
- 2017 Poster: Estimating individual treatment effect: generalization bounds and algorithms »
  Uri Shalit · Fredrik D Johansson · David Sontag
- 2017 Talk: Estimating individual treatment effect: generalization bounds and algorithms »
  Uri Shalit · Fredrik D Johansson · David Sontag
- 2017 Poster: Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation »
  Yacine Jernite · Anna Choromanska · David Sontag
- 2017 Talk: Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation »
  Yacine Jernite · Anna Choromanska · David Sontag