EMBGUARD: Constructing Hazard-Aware Guardrails for Safe Planning in Embodied Agents
Abstract
MLLM-powered embodied agents deployed in real-world environments inevitably encounter physical hazards. However, existing approaches lack explicit mechanisms for identifying hazards and reasoning about action-conditioned risks, leading agents to either overlook risky interactions or flag benign ones as risky. To address this, we propose EMBGUARD, the first MLLM-based safety guardrail for embodied agents designed to decouple physical risk reasoning from the agent policy. By evaluating a (visual observation, action) pair, EMBGUARD identifies hazardous configurations and provides natural-language explanations of the potential risks. Alongside EMBGUARD, we contribute EMBHAZARD, a training dataset of 17K action-conditioned pairs, and EMBGUARDTEST, a benchmark of 189 manually curated real-world scenarios spanning seven physical risk categories. Through compositional variation of hazards and actions, we generate diverse risky and benign scenarios that agents may encounter during planning. Despite its compact model sizes (2B and 4B parameters), EMBGUARD achieves performance competitive with proprietary MLLMs (e.g., GPT-5.1, Gemini-2.5-Pro) while significantly reducing the false-positive rates that hinder real-time deployment. We make the code, data, and models publicly available at https://anonymous.4open.science/r/EMBGuard-742D.