Principled RL for Flow Matching Emerges from Chunk-level Policy Optimization
Yifu Luo ⋅ Haoyuan Sun ⋅ Xinhao Hu ⋅ Penghui Du ⋅ Keyu Fan ⋅ Bo Li ⋅ Sinan Du ⋅ Xu Wan ⋅ Zhiyu Chen ⋅ Bo Xia ⋅ Tiantian Zhang ⋅ Yongzhe Chang ⋅ Kai Wu ⋅ Kun Gai ⋅ Xueqian Wang
Abstract
Recent progress in post-training flow matching models for text-to-image (T2I) generation with Group Relative Policy Optimization (GRPO) has demonstrated strong potential. However, this approach is hindered by a critical limitation: inaccurate advantage attribution. In this work, we argue that aggregating consecutive timesteps into a coherent "chunk" and shifting the policy optimization paradigm from GRPO's step level to the chunk level can effectively mitigate the negative impact of this issue. Building on this insight, we propose Group Chunking Policy Optimization (GCPO), the first chunk-level reinforcement learning approach for post-training flow matching. Extensive experiments demonstrate that GCPO achieves superior performance on both standard T2I benchmarks and preference alignment, with up to $43\%$ additional gains over GRPO, highlighting the promise of chunk-level policy optimization.
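The abstract does not specify GCPO's objective, so the following is only a minimal illustrative sketch of the chunk-level idea it describes: summing per-step importance log-ratios over chunks of consecutive denoising timesteps so that the group-relative advantage is applied once per chunk rather than once per step. The PPO-style clipped form, all function names, and the hyperparameters (`chunk_size`, `clip_eps`) are assumptions, not details from the paper.

```python
import torch

def group_relative_advantage(rewards: torch.Tensor) -> torch.Tensor:
    """GRPO-style advantage: standardize rewards within a sampled group."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def gcpo_loss(step_log_ratios: torch.Tensor,  # (G, T): per-step log pi_theta - log pi_old
              rewards: torch.Tensor,          # (G,): scalar reward per trajectory
              chunk_size: int = 4,
              clip_eps: float = 0.2) -> torch.Tensor:
    """Illustrative chunk-level clipped objective (assumed form, not the
    paper's exact loss): one importance ratio per chunk of consecutive
    denoising timesteps instead of one per timestep."""
    G, T = step_log_ratios.shape
    adv = group_relative_advantage(rewards)  # (G,) shared across a trajectory
    chunk_losses = []
    for start in range(0, T, chunk_size):
        # Sum log-ratios over the chunk -> a single chunk-level ratio,
        # so the advantage is attributed at chunk granularity.
        ratio = step_log_ratios[:, start:start + chunk_size].sum(dim=1).exp()
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
        chunk_losses.append(-torch.min(ratio * adv, clipped * adv))
    return torch.stack(chunk_losses).mean()

# Hypothetical usage: a group of 8 trajectories over 16 denoising steps.
loss = gcpo_loss(torch.zeros(8, 16, requires_grad=True), torch.randn(8))
```

Setting `chunk_size = 1` recovers a step-level GRPO-style objective, which is why the chunk level can be viewed as a strict generalization of the step level under these assumptions.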