Latent Space Robust Optimization of Neural Processes with Aligned Stratified Order-Statistic Loss Reduction
Qi Tao ⋅ Jiarong Wen ⋅ Jing Yang ⋅ Guanlin Wu ⋅ Kaiyu Zhang ⋅ Yiqin Lv ⋅ Wumei Du ⋅ Xingxing Liang ⋅ Cheems Wang
Abstract
Importance-Weighted Neural Processes (IWNPs) provide a principled framework for probabilistic meta-learning by using multi-particle latent representations to tightly approximate the marginal log-likelihood of task data. However, this work reveals that the standard optimization of IWNPs suffers from the Matthew effect in the latent space, where high-likelihood particles dominate the gradient signals. The neglect of lower-likelihood regions leads to poor tail-risk generation and unstable fast adaptation. While robust objectives such as $\text{CVaR}_\alpha$ can mitigate these risks, they often entail a trade-off that degrades average-case performance. This work proposes \underline{O}rder-\underline{S}tatistics Aligned \underline{N}eural \underline{P}rocesses (OS-NPs) to achieve latent space robust optimization without sacrificing average-case performance. Specifically, we stratify multiple inference particles into disjoint difficulty bins based on order statistics and derive a regularized worst-case optimization framework for OS-NPs. Our method aligns the reduction of stratified order-statistic losses in IWNPs and admits a computationally efficient implementation pipeline. Extensive experiments demonstrate that OS-NPs constitute a stable and reliable probabilistic meta-learning approach that significantly enhances tail-risk robustness while maintaining or even improving average performance.
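To make the stratification concrete, below is a minimal PyTorch sketch of one plausible reading of the objective described above: per-particle losses are sorted (order statistics), split into disjoint difficulty bins, and combined through a regularized worst-case weighting that interpolates between the average bin loss and the hardest bin. The function name `os_np_loss`, its interface, and the interpolation scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def os_np_loss(log_weights: torch.Tensor,
               num_bins: int = 4,
               worst_case_weight: float = 0.5) -> torch.Tensor:
    """Hypothetical stratified order-statistic loss over inference particles.

    Args:
        log_weights: (K,) per-particle log importance weights for one task
            (assumed interface; the paper's exact quantities may differ).
        num_bins: number of disjoint difficulty bins.
        worst_case_weight: interpolation factor toward the hardest bin;
            0 recovers the plain average, 1 is pure worst-case.
    """
    # Per-particle losses: negate log weights, so low-likelihood
    # (harder) particles incur higher loss.
    per_particle_loss = -log_weights                      # shape (K,)

    # Order statistics: sort losses ascending, from easy to hard particles.
    sorted_loss, _ = torch.sort(per_particle_loss)        # shape (K,)

    # Stratify the sorted particles into disjoint difficulty bins.
    bins = torch.chunk(sorted_loss, num_bins)             # ~K/num_bins each
    bin_means = torch.stack([b.mean() for b in bins])     # shape (num_bins,)

    # Regularized worst-case combination: blend the mean over all bins
    # with the highest-loss (hardest) bin, so tail particles keep
    # contributing gradient signal without discarding the average case.
    avg_loss = bin_means.mean()
    worst_loss = bin_means.max()
    return (1 - worst_case_weight) * avg_loss + worst_case_weight * worst_loss
```

In this sketch, setting `worst_case_weight` between 0 and 1 plays the role of the regularizer: it prevents the objective from collapsing into either the Matthew-effect-prone average (which high-likelihood particles dominate) or a pure worst-case criterion that would trade away average-case performance.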