Oral
Breaking Inter-Layer Co-Adaptation by Classifier Anonymization
Ikuro Sato · Kohta Ishikawa · Guoqing Liu · Masayuki Tanaka

Wed Jun 12th 05:05 -- 05:10 PM @ Hall A

This study addresses the issue of co-adaptation between a feature extractor and a classifier in a neural network. Naïve joint optimization of a feature extractor and a classifier often produces situations in which an excessively complex feature distribution, adapted to one very specific classifier, degrades test performance. We introduce a method called Feature-extractor Optimization through Classifier Anonymization (FOCA), which is designed to avoid explicit co-adaptation between the feature extractor and any particular classifier by using many randomly generated, weak classifiers during optimization. We put forth a mathematical proposition stating that, under special conditions, FOCA features form a point-like distribution within each class while remaining class-separable. Real-data experiments under more general conditions provide supporting evidence.
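For intuition, below is a minimal PyTorch sketch of a FOCA-style update loop. It is not the authors' implementation: the names (extractor, make_weak_classifier, foca_step), the hyperparameters, and the choice to generate each weak classifier by a few gradient steps on the current batch of features are illustrative assumptions; the paper specifies how weak classifiers are actually generated. The key idea shown is that only the feature extractor is updated, averaged over many freshly generated, frozen classifiers.

    # Hypothetical FOCA-style sketch (not the authors' code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    feature_dim, num_classes = 64, 10

    # Toy feature extractor; any backbone could stand in here.
    extractor = nn.Sequential(nn.Flatten(), nn.Linear(784, feature_dim), nn.ReLU())
    opt_f = torch.optim.SGD(extractor.parameters(), lr=0.1)

    def make_weak_classifier(feats, labels, steps=3, lr=0.5):
        """Assumption: a 'weak' classifier is a randomly initialized linear
        layer fit for only a few gradient steps on detached features."""
        clf = nn.Linear(feature_dim, num_classes)
        opt_c = torch.optim.SGD(clf.parameters(), lr=lr)
        for _ in range(steps):
            opt_c.zero_grad()
            F.cross_entropy(clf(feats.detach()), labels).backward()
            opt_c.step()
        return clf

    def foca_step(x, y, num_classifiers=8):
        feats = extractor(x)
        loss = 0.0
        for _ in range(num_classifiers):
            clf = make_weak_classifier(feats, y)
            for p in clf.parameters():
                p.requires_grad_(False)  # freeze: gradients flow to extractor only
            loss = loss + F.cross_entropy(clf(feats), y)
        opt_f.zero_grad()
        (loss / num_classifiers).backward()  # average loss over weak classifiers
        opt_f.step()

    # Toy usage with random data standing in for a real dataset.
    x = torch.randn(32, 1, 28, 28)
    y = torch.randint(0, num_classes, (32,))
    foca_step(x, y)

Because each weak classifier is discarded after one use, the extractor cannot co-adapt to any single classifier; it must produce features that are separable under many classifiers at once, which is the anonymization effect the abstract describes.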

Author Information

Ikuro Sato (DENSO IT Laboratory)
Kohta Ishikawa (DENSO IT Laboratory)
Guoqing Liu (DENSO IT Laboratory)
Masayuki Tanaka (National Institute of Advanced Industrial Science and Technology, Japan)
