Where Rectified Flows Leak: Characterizing Membership Signals Along the Interpolation Path
Thomas Sesmat ⋅ Gabriel Meseguer-Brocal ⋅ Geoffroy Peeters
Abstract
Understanding what generative models retain from training data remains challenging, with implications for copyright and privacy. This question becomes particularly relevant as Rectified Flows power increasingly deployed systems. We analyze the interpolation path $X_\lambda = (1-\lambda)X_0 + \lambda X_1$ that defines Rectified Flow training. We show that train-test distinguishability follows a bell-shaped curve over $\lambda$, with a maximum whose location we derive in closed form under Gaussian assumptions. This signal accumulates during training while validation metrics remain stable. We validate these predictions on both audio and image data, showing that the bell-shaped structure is universal, while the peak-location prediction holds when our assumptions are satisfied. As a proof of concept, we implement a membership inference attack achieving 0.91 AUC by exploiting this $\lambda$-resolved structure.
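To make the $\lambda$-resolved signal concrete, the sketch below estimates a per-$\lambda$ velocity-matching loss for a single sample under a trained Rectified Flow; the gap between this curve for training versus held-out samples is what the abstract describes as bell-shaped. This is a minimal illustration of the general idea, not the paper's implementation: the interface `model(x_lam, lam)` predicting the velocity $X_1 - X_0$, and the function and parameter names, are assumptions.

```python
import torch

def lambda_resolved_losses(model, x1, lambdas, n_noise=8):
    """Estimate the velocity-matching loss at each interpolation level lambda
    for one data sample x1. Assumed interface: model(x_lam, lam) returns the
    predicted velocity (a sketch, not the paper's code)."""
    losses = []
    for lam in lambdas:
        errs = []
        for _ in range(n_noise):
            x0 = torch.randn_like(x1)              # noise endpoint X_0
            x_lam = (1 - lam) * x0 + lam * x1      # interpolation path X_lambda
            v_target = x1 - x0                     # Rectified Flow velocity target
            v_pred = model(x_lam, torch.tensor(lam))
            errs.append(((v_pred - v_target) ** 2).mean())
        losses.append(torch.stack(errs).mean())
    # One loss per lambda; comparing these curves between training and
    # held-out samples exposes the lambda-resolved membership signal.
    return torch.stack(losses)
```

A membership score in the spirit of the abstract's attack could then weight each $\lambda$'s loss by its distinguishability, concentrating mass near the predicted peak rather than averaging uniformly over the path.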