Self-supervised learning is crucial for clinical imaging applications, given the lack of explicit labels in healthcare. However, conventional approaches that rely on precise vision-language alignment are not always feasible for complex clinical imaging modalities such as cardiac magnetic resonance (CMR). CMR provides a comprehensive visualization of cardiac anatomy, physiology, and microstructure, making it challenging to interpret. Additionally, CMR reports synthesize information from sequences of images and multiple views, so the alignment between a study and its diagnosis report can be weak. To overcome these challenges, we propose CMRformer, a multimodal learning framework that jointly learns from sequences of CMR images and the associated cardiologist's reports. Moreover, a major obstacle to advancing CMR research is the lack of large, publicly available datasets. To bridge this gap, we collected a large CMR dataset consisting of 13,787 studies from clinical cases. Using the proposed CMRformer and our collected dataset, we achieve strong performance on real-world clinical tasks such as CMR image retrieval and diagnosis report retrieval. Furthermore, the learned representations prove practically useful for downstream applications such as disease classification. Our work could expedite progress in CMR research and lead to more accurate and effective diagnosis and treatment.
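The abstract does not specify the training objective, so the following is a minimal sketch, assuming a CLIP-style symmetric contrastive loss that aligns a study-level embedding of a CMR image sequence with an embedding of its report. The module names, dimensions, encoder choices, and the InfoNCE objective are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the CMRformer implementation) of contrastive alignment
# between a CMR image-sequence encoder and a diagnosis-report encoder.
# All names, dimensions, and the symmetric InfoNCE loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SequenceImageEncoder(nn.Module):
    """Encodes a study as a sequence of per-frame features with a small transformer."""

    def __init__(self, frame_dim=512, embed_dim=256, num_layers=2, num_heads=4):
        super().__init__()
        self.proj_in = nn.Linear(frame_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.proj_out = nn.Linear(embed_dim, embed_dim)

    def forward(self, frames):  # frames: (batch, num_frames, frame_dim)
        x = self.encoder(self.proj_in(frames))
        return self.proj_out(x.mean(dim=1))  # mean-pool over frames


class ReportEncoder(nn.Module):
    """Encodes a tokenized report by averaging learned token embeddings."""

    def __init__(self, vocab_size=30522, embed_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        return self.proj(self.embed(token_ids).mean(dim=1))


def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE: matched study/report pairs score higher than mismatched ones."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Toy batch: 8 studies, each with 12 pre-extracted frame features and a 64-token report.
    frames = torch.randn(8, 12, 512)
    tokens = torch.randint(0, 30522, (8, 64))
    loss = contrastive_loss(SequenceImageEncoder()(frames), ReportEncoder()(tokens))
    print(loss.item())
```

Under this kind of objective, image-to-report and report-to-image retrieval reduce to nearest-neighbor search in the shared embedding space, and the pooled study embedding can be reused as a feature for downstream disease classification.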
Author Information
Jielin Qiu (Carnegie Mellon University)
Peide Huang (Carnegie Mellon University)
Makiya Nakashima (Cleveland Clinic)
Jaehyun Lee (Cleveland Clinic)
Jiacheng Zhu (Carnegie Mellon University)
Wilson Tang (Cleveland Clinic)
Pohao Chen (Cleveland Clinic)
Christopher Nguyen (Cleveland Clinic)
Byung-Hak Kim (AKASA)
Debbie Kwon (Cleveland Clinic)
Douglas Weber (Carnegie Mellon University)
Ding Zhao (Carnegie Mellon University)
David Chen (Cleveland Clinic)
More from the Same Authors
- 2020 : Deep Claim: Payer Response Prediction from Claims Data with Deep Learning »
  Byung-Hak Kim
- 2022 : Group Distributionally Robust Reinforcement Learning with Hierarchical Latent Variables »
  Mengdi Xu · Peide Huang · Visak Kumar · Jielin Qiu · Chao Fang · Kuan-Hui Lee · Xuewei Qi · Henry Lam · Bo Li · Ding Zhao
- 2022 : Paper 22: Multimodal Unsupervised Car Segmentation via Adaptive Aerial Image-to-Image Translation »
  Haohong Lin · Zhepeng Cen · Peide Huang · Hanjiang Hu
- 2022 : Paper 2: SeasonDepth: Cross-Season Monocular Depth Prediction Dataset and Benchmark under Multiple Environments »
  Ding Zhao · Hitesh Arora · Jiacheng Zhu · Zuxin Liu · Wenhao Ding
- 2022 : Paper 10: CausalAF: Causal Autoregressive Flow for Safety-Critical Scenes Generation »
  Wenhao Ding · Haohong Lin · Bo Li · Ding Zhao · Hitesh Arora
- 2023 : DiffScene: Diffusion-Based Safety-Critical Scenario Generation for Autonomous Vehicles »
  Chejian Xu · Ding Zhao · Alberto Sangiovanni Vincentelli · Bo Li
- 2023 : Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation »
  Wenhao Ding · Laixi Shi · Yuejie Chi · Ding Zhao
- 2023 : Learning from Sparse Offline Datasets via Conservative Density Estimation »
  Zhepeng Cen · Zuxin Liu · Zitong Wang · Yihang Yao · Henry Lam · Ding Zhao
- 2023 : Offline Reinforcement Learning with Imbalanced Datasets »
  Li Jiang · Sijie Cheng · Jielin Qiu · Victor Chan · Ding Zhao
- 2023 : Semantically Adversarial Scene Generation with Explicit Knowledge Guidance for Autonomous Driving »
  Wenhao Ding · Haohong Lin · Bo Li · Ding Zhao
- 2023 : Learning Shared Safety Constraints from Multi-task Demonstrations »
  Konwoo Kim · Gokul Swamy · Zuxin Liu · Ding Zhao · Sanjiban Choudhury · Steven Wu
- 2023 : Visual-based Policy Learning with Latent Language Encoding »
  Jielin Qiu · Mengdi Xu · William Han · Bo Li · Ding Zhao
- 2023 : Can Brain Signals Reveal Inner Alignment with Human Languages? »
  Jielin Qiu · William Han · Jiacheng Zhu · Mengdi Xu · Douglas Weber · Bo Li · Ding Zhao
- 2023 : Robustness Verification for Perception Models against Camera Motion Perturbations »
  Hanjiang Hu · Changliu Liu · Ding Zhao
- 2023 Poster: Constrained Decision Transformer for Offline Safe Reinforcement Learning »
  Zuxin Liu · Zijian Guo · Yihang Yao · Zhepeng Cen · Wenhao Yu · Tingnan Zhang · Ding Zhao
- 2023 Poster: Towards Robust and Safe Reinforcement Learning with Benign Off-policy Data »
  Zuxin Liu · Zijian Guo · Zhepeng Cen · Huan Zhang · Yihang Yao · Hanjiang Hu · Ding Zhao
- 2023 Poster: Bayesian Reparameterization of Reward-Conditioned Reinforcement Learning with Energy-based Models »
  Wenhao Ding · Tong Che · Ding Zhao · Marco Pavone
- 2023 Poster: Interpolation for Robust Learning: Data Augmentation on Wasserstein Geodesics »
  Jiacheng Zhu · Jielin Qiu · Aritra Guha · Zhuolin Yang · XuanLong Nguyen · Bo Li · Ding Zhao
- 2022 : Paper 15: On the Robustness of Safe Reinforcement Learning under Observational Perturbations »
  Zuxin Liu · Zhepeng Cen · Huan Zhang · Jie Tan · Bo Li · Ding Zhao
- 2022 : Paper 16: Constrained Model-based Reinforcement Learning via Robust Planning »
  Zuxin Liu · Ding Zhao
- 2022 Poster: Constrained Variational Policy Optimization for Safe Reinforcement Learning »
  Zuxin Liu · Zhepeng Cen · Vladislav Isenbaev · Wei Liu · Steven Wu · Bo Li · Ding Zhao
- 2022 Spotlight: Constrained Variational Policy Optimization for Safe Reinforcement Learning »
  Zuxin Liu · Zhepeng Cen · Vladislav Isenbaev · Wei Liu · Steven Wu · Bo Li · Ding Zhao