

Poster

Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms

Ming Yang · Xiyuan Wei · Tianbao Yang · Yiming Ying

Hall C 4-9 #1700
[ Paper PDF ] [ Poster ]
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

Many machine learning tasks, such as reinforcement learning, AUC maximization, and meta-learning, can be formulated as a stochastic compositional optimization (SCO) problem, where the objective function involves a nested composition associated with an expectation. Although many studies have been devoted to the convergence behavior of SCO algorithms, there is little work on understanding their generalization, that is, how learning algorithms built from training data behave on future test examples. In this paper, we provide a stability and generalization analysis of stochastic compositional gradient descent algorithms within the framework of statistical learning theory. First, we introduce a stability concept called compositional uniform stability and establish its quantitative relation to generalization for SCO problems. Then, we establish compositional uniform stability results for two notable stochastic compositional gradient descent algorithms, namely SCGD and SCSC. Finally, we derive dimension-independent excess risk bounds for SCGD and SCSC by balancing stability results against optimization errors. To the best of our knowledge, these are the first known results on the stability and generalization analysis of stochastic compositional gradient descent algorithms.
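For readers unfamiliar with the setting, a schematic form of the SCO problem and of the SCGD update may help. The notation below is an illustrative sketch (sample indices, step-size schedules, and the exact SCSC correction follow the paper and its references), not a restatement of the authors' formulation:

\[
\min_{w \in \mathcal{W}} \; F(w) \;=\; \mathbb{E}_{\nu}\!\left[ f_{\nu}\!\left( \mathbb{E}_{\omega}\!\left[ g_{\omega}(w) \right] \right) \right],
\]

where the inner map $g_{\omega}$ and the outer map $f_{\nu}$ are each accessed only through random samples. SCGD maintains a moving-average estimate $y_t$ of the inner expectation and takes a stochastic chain-rule step:

\[
y_{t+1} = (1-\beta_t)\, y_t + \beta_t\, g_{\omega_t}(w_t), \qquad
w_{t+1} = w_t - \eta_t\, \nabla g_{\omega_t}(w_t)^{\top} \nabla f_{\nu_t}(y_{t+1}).
\]

SCSC modifies the $y_t$ recursion with a correction term of the form $g_{\omega_t}(w_t) - g_{\omega_t}(w_{t-1})$ so that the inner estimate better tracks the moving iterate; the compositional uniform stability bounds in the paper are derived for both recursions.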
