Effective lifelong learning across diverse tasks requires the transfer of diverse knowledge, yet transferring irrelevant knowledge may lead to interference and catastrophic forgetting. In deep networks, transferring the appropriate granularity of knowledge is as important as the transfer mechanism, and it must be driven by the relationships among tasks. We first show that the lifelong learning performance of several current deep learning architectures can be significantly improved by transfer at the appropriate layers. We then develop an expectation-maximization (EM) method that automatically selects the appropriate transfer configuration while optimizing the task network weights. This EM-based selective transfer is highly effective, balancing transfer performance across all tasks against the risk of catastrophic forgetting, as we demonstrate with three algorithms in several lifelong object classification scenarios.
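The abstract's EM procedure alternates between scoring candidate layer-transfer configurations and updating the task network under them. The following is a minimal sketch of that alternation, not the authors' implementation: the candidate set `configs` and the callables `loss_fn` and `update_weights` are illustrative assumptions supplied by a surrounding training loop.

    # Hypothetical sketch of EM over discrete layer-transfer configurations.
    # `configs`, `loss_fn`, and `update_weights` are assumed names, not the
    # paper's API.
    import numpy as np

    def em_select_transfer(configs, loss_fn, update_weights, n_iters=10):
        # Start from a uniform posterior over candidate configurations.
        resp = np.full(len(configs), 1.0 / len(configs))
        for _ in range(n_iters):
            # E-step: convert current task losses into responsibilities
            # via a numerically stable soft-min over configurations.
            losses = np.array([loss_fn(c) for c in configs])
            scores = -losses - np.max(-losses)
            resp = np.exp(scores)
            resp /= resp.sum()
            # M-step: update the task-network weights, weighting each
            # configuration's contribution by its responsibility.
            update_weights(resp)
        # Return the most probable transfer configuration and the posterior.
        return configs[int(np.argmax(resp))], resp

In use, `loss_fn` would evaluate the current task's loss under one candidate configuration of shared versus task-specific layers, and `update_weights` would take one responsibility-weighted gradient step on the network parameters.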
Author Information
Seungwon Lee (University of Pennsylvania)
Sima Behpour (Carnegie Mellon University)
Eric Eaton (University of Pennsylvania)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Sharing Less is More: Lifelong Learning in Deep Networks with Selective Layer Transfer »
  Thu. Jul 22nd 04:00 -- 06:00 PM Room Virtual
More from the Same Authors
- 2020: Panel Discussion »
  Eric Eaton · Martha White · Doina Precup · Irina Rish · Harm van Seijen
- 2019 Poster: Active Learning for Probabilistic Structured Prediction of Cuts and Matchings »
  Sima Behpour · Anqi Liu · Brian Ziebart
- 2019 Oral: Active Learning for Probabilistic Structured Prediction of Cuts and Matchings »
  Sima Behpour · Anqi Liu · Brian Ziebart
- 2018 Poster: Efficient and Consistent Adversarial Bipartite Matching »
  Rizal Fathony · Sima Behpour · Xinhua Zhang · Brian Ziebart
- 2018 Oral: Efficient and Consistent Adversarial Bipartite Matching »
  Rizal Fathony · Sima Behpour · Xinhua Zhang · Brian Ziebart