Spotlight
Tue Jul 19 10:45 AM -- 10:50 AM (PDT) @ Ballroom 1 & 2
PoF: Post-Training of Feature Extractor for Improving Generalization
Ikuro Sato · Yamada Ryota · Masayuki Tanaka · Nakamasa Inoue · Rei Kawakami

The local shape of the loss landscape near a minimum, especially its flatness, has been intensively investigated as an important factor in the generalization of deep models. We developed a training algorithm called PoF: Post-Training of Feature Extractor, which updates the feature extractor part of an already-trained deep model to search for a flatter minimum. Its characteristics are two-fold: 1) the feature extractor is trained under parameter perturbations in the higher-layer parameter space, based on observations that suggest flattening the higher-layer parameter space, and 2) the perturbation range is determined in a data-driven manner, aiming to reduce the part of the test loss caused by positive loss curvature. We provide a theoretical analysis showing that the proposed algorithm implicitly reduces the target Hessian components as well as the loss. Experimental results show that PoF improved model performance over baseline methods on both CIFAR-10 and CIFAR-100 with only 10 epochs of post-training, and on SVHN with 50 epochs of post-training.
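To see why training under perturbations of the higher-layer parameters penalizes curvature, consider a standard second-order expansion (a generic argument, not the paper's exact analysis). For a zero-mean isotropic perturbation $\varepsilon$ with $\mathrm{Cov}(\varepsilon) = \rho^2 I$ applied to the higher-layer parameters $\theta_h$,

$$\mathbb{E}_{\varepsilon}\left[L(\theta_h + \varepsilon)\right] \approx L(\theta_h) + \tfrac{1}{2}\,\mathbb{E}_{\varepsilon}\left[\varepsilon^\top H\,\varepsilon\right] = L(\theta_h) + \tfrac{\rho^2}{2}\,\mathrm{tr}(H),$$

where $H$ is the Hessian of the loss with respect to $\theta_h$. Minimizing the perturbed loss therefore implicitly suppresses the positive Hessian components along with the loss itself.

The sketch below illustrates the corresponding training loop. It is a minimal PoF-style step written in PyTorch, not the authors' reference implementation: the names `extractor`, `head`, and `pof_step` are hypothetical, and a fixed radius `rho` stands in for the paper's data-driven perturbation range.

```python
import torch
import torch.nn.functional as F

def pof_step(extractor, head, x, y, optimizer, rho=0.05):
    """One PoF-style post-training step (illustrative, not the authors' code).

    Updates only the feature extractor, evaluated under a random
    perturbation of the head (higher-layer) parameters. `rho` is a
    fixed stand-in for the paper's data-driven perturbation range.
    """
    # Perturb the head parameters in place and remember the noise.
    eps_list = []
    with torch.no_grad():
        for p in head.parameters():
            eps = rho * torch.randn_like(p)
            p.add_(eps)
            eps_list.append(eps)

    # Loss under the perturbed head; the optimizer holds only the
    # extractor's parameters, so only the extractor is updated.
    loss = F.cross_entropy(head(extractor(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Restore the original head parameters.
    with torch.no_grad():
        for p, eps in zip(head.parameters(), eps_list):
            p.sub_(eps)
    return loss.item()


# Usage sketch: post-train a split, already-trained model for a few epochs.
# extractor, head = ...  # feature extractor and higher-layer head
# opt = torch.optim.SGD(extractor.parameters(), lr=1e-3, momentum=0.9)
# for x, y in loader:
#     pof_step(extractor, head, x, y, opt)
```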