Oral
Dropout Training, Data-dependent Regularization, and Generalization Bounds
Wenlong Mou · Yuchen Zhou · Jun Gao · Liwei Wang

Wed Jul 11th 02:20 -- 02:30 PM @ A6

We study generalization guarantees for dropout training. We first propose a general framework for learning procedures with random perturbations on model parameters. The generalization error is bounded by the sum of two offset Rademacher complexities: the main term is the Rademacher complexity of the hypothesis class with a negative offset induced by the perturbation variance, which characterizes the data-dependent regularization effect of the random perturbation; the auxiliary term is the offset Rademacher complexity of the variance class, which controls the degree to which this regularization effect can be weakened. For neural networks, we estimate upper and lower bounds on the variance induced by truthful dropout, a variant of dropout that we propose to ensure unbiased output and to fit into our framework; the variance bounds exhibit a connection to adaptive regularization methods. Applying the framework to ReLU networks with one hidden layer, we derive a generalization upper bound with no assumptions on parameter norms or the data distribution, achieving an $O(1/n)$ fast rate and adaptivity to the geometry of the data points at the same time.
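The abstract does not define truthful dropout precisely, but the unbiasedness property it requires is the same one standard "inverted" dropout achieves: surviving units are rescaled by the inverse keep probability so the perturbed output equals the unperturbed output in expectation. A minimal sketch of that unbiased-scaling idea (illustrative only, not the paper's exact construction):

```python
import numpy as np

def inverted_dropout(x, drop_prob, rng):
    """Zero each coordinate independently with probability drop_prob,
    rescaling survivors by 1/(1 - drop_prob) so that
    E[output] = x, i.e. the perturbation is unbiased."""
    mask = rng.random(x.shape) >= drop_prob
    return mask * x / (1.0 - drop_prob)

rng = np.random.default_rng(0)
x = np.ones(4)
# Average many independent dropout draws: the empirical mean
# concentrates around x, confirming unbiasedness.
samples = np.stack([inverted_dropout(x, 0.5, rng) for _ in range(20000)])
print(samples.mean(axis=0))  # approximately [1. 1. 1. 1.]
```

The per-coordinate variance of this perturbation is $x_i^2 \, p/(1-p)$, which is the kind of data-dependent quantity the paper's variance class is built to control.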

Author Information

Wenlong Mou (UC Berkeley)
Yuchen Zhou (University of Wisconsin, Madison)
Jun Gao (Peking University)
Liwei Wang (Peking University)
