Position: When AI Decides Who Gets an Organ: Multi-Agentic AI Systems in Transplant Medicine Risk Amplifying Disparities Without Targeted Explainability and Deployment Strategies
Abstract
Agentic AI systems, particularly those built on large language models (LLMs) and deployed as autonomous, role-specialized agents, are rapidly emerging in clinical decision-making. This position paper argues that without equity and explainability as core design constraints, such systems will exacerbate healthcare disparities. Using empirical evidence from a multi-agent simulation of a liver transplant selection committee, we demonstrate that even high-performing agents can systematically disadvantage patients based on sex, ethnicity, and socioeconomic status. These disparities arise from agents’ reliance on non-clinical proxy variables (insurance type, education level, area deprivation index) and are compounded by the absence of case-level explanations and temporally grounded reasoning. We further contend that without fairness-aware deployment strategies, such systems cannot be reliably audited or ethically integrated into real-world care. In response, we propose a technical roadmap comprising subgroup-sensitive learning objectives, counterfactual reasoning modules, clinician-in-the-loop governance, and deployment protocols that address the digital divide. We urge the machine learning community to center explainability and health equity in the development and deployment of agentic AI for medicine, especially in high-stakes domains where algorithmic decisions may determine who lives and who does not.