Metis: Learning to Jailbreak LLMs via Self-Evolving Metacognitive Policy Optimization
Huilin Zhou ⋅ Jian Zhao ⋅ Yilu Zhong ⋅ Zhen Liang ⋅ Xiuyuan Chen ⋅ Yuchen Yuan ⋅ Tianle Zhang ⋅ Chi Zhang ⋅ Lan Zhang ⋅ Xuelong Li
Abstract
Red teaming is critical for uncovering vulnerabilities in Large Language Models (LLMs). While automated methods have improved scalability, existing approaches often rely on static heuristics or stochastic search, rendering them brittle against advanced safety alignment. To address this, we introduce \textbf{Metis}, a framework that reformulates jailbreaking as inference-time policy optimization within an adversarial Partially Observable Markov Decision Process (POMDP). Metis employs a self-evolving metacognitive loop to perform causal diagnosis of a target's defense logic and leverages structured feedback as a semantic gradient to refine its policy, offering enhanced interpretability through transparent reasoning traces. Extensive evaluations across 10 diverse models demonstrate that Metis establishes a new state-of-the-art with an average Attack Success Rate (ASR) of 89.2\%, maintaining high efficacy on resilient frontier models (e.g., 76.0\% on o1 and 78.0\% on GPT-5-chat) where traditional baselines exhibit substantial performance degradation. By replacing redundant exploration with directed optimization, Metis reduces token costs by an average of 8.2$\times$ (and up to 11.4$\times$). Our analysis reveals that current defenses remain systematically vulnerable to internally steered, closed-loop reasoning trajectories, highlighting a critical need for next-generation defenses capable of reasoning about safety dynamically during inference.
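To make the closed loop described above concrete, the sketch below illustrates one plausible act-observe-diagnose-refine cycle. This is a minimal illustration under stated assumptions, not the authors' implementation: all identifiers (\texttt{attacker}, \texttt{target}, \texttt{is\_jailbroken}, \texttt{metacognitive\_attack}) are hypothetical, and the attacker model is assumed to handle both the causal diagnosis of a refusal and the directed rewrite of the prompt.

\begin{verbatim}
# Hypothetical sketch of a metacognitive jailbreak loop, as described in
# the abstract. `attacker` and `target` are assumed to expose a
# .generate(prompt) -> str method wrapping calls to two LLM endpoints;
# none of these names come from the paper itself.

def is_jailbroken(response: str, goal: str) -> bool:
    # Placeholder success judge; a real system would use an LLM-based
    # or classifier-based evaluator of attack success.
    return "I can't" not in response and "I cannot" not in response

def metacognitive_attack(attacker, target, goal: str, max_turns: int = 10):
    """Closed-loop attack: act, observe, diagnose, refine."""
    prompt = attacker.generate(
        f"Write an initial jailbreak prompt for the goal: {goal}"
    )
    for _ in range(max_turns):
        response = target.generate(prompt)   # observation in the POMDP
        if is_jailbroken(response, goal):
            return prompt, response
        # Causal diagnosis: ask the attacker *why* the defense fired.
        diagnosis = attacker.generate(
            f"The target refused.\nPrompt: {prompt}\nResponse: {response}\n"
            "Diagnose which defense mechanism was triggered and why."
        )
        # Semantic gradient: use the diagnosis as structured feedback to
        # rewrite the prompt in a directed (rather than random) way.
        prompt = attacker.generate(
            f"Goal: {goal}\nDiagnosis: {diagnosis}\n"
            "Rewrite the prompt to circumvent the diagnosed defense."
        )
    return None  # attack failed within the turn budget
\end{verbatim}

In this reading, the diagnosis string plays the role of the "semantic gradient": it directs each rewrite toward the specific defense that fired, rather than sampling perturbations at random.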