Continual Learning in Neural Networks: on Catastrophic Forgetting and Beyond (in Russian)
Slides:
Speaker: Polina Kirichenko, New York University

Learning new tasks continually without forgetting, on a constantly changing data distribution, is essential for real-world problems but remains challenging for modern deep learning. Deep learning models suffer from catastrophic forgetting: when presented with a sequence of tasks, deep neural networks can successfully learn the new tasks, but performance on the old tasks degrades. In this talk, I will present an overview of continual learning algorithms, including well-established methods as well as recent state-of-the-art approaches. We will talk about several continual learning scenarios (task-, class-, and domain-incremental learning), review the most common approaches to alleviating forgetting, and discuss other challenges in the field beyond catastrophic forgetting (including forward and backward transfer, learning on continuously drifting data, and continua…
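To make the phenomenon concrete, here is a minimal sketch (not from the talk, and not any particular method discussed in it) of catastrophic forgetting: a small MLP is trained on a synthetic task A, then trained only on a shifted task B, and its accuracy on task A is measured before and after. The data, model size, and offsets are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(offset):
    # Two Gaussian blobs per class; `offset` shifts the input distribution
    # so that task A and task B differ (a domain-incremental-style setup).
    x0 = torch.randn(500, 2) + torch.tensor([offset, 0.0])
    x1 = torch.randn(500, 2) + torch.tensor([offset + 3.0, 3.0])
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(500, dtype=torch.long),
                   torch.ones(500, dtype=torch.long)])
    return x, y

def train(model, x, y, epochs=200):
    # Plain sequential fine-tuning, with no continual learning mitigation.
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(offset=0.0)   # task A
xb, yb = make_task(offset=-6.0)  # task B: same labels, shifted inputs

train(model, xa, ya)
print("task A accuracy after training on A:", accuracy(model, xa, ya))

train(model, xb, yb)  # keep training on B only; no replay or regularization
print("task A accuracy after training on B:", accuracy(model, xa, ya))
print("task B accuracy after training on B:", accuracy(model, xb, yb))
```

In this sketch, accuracy on task A is typically high after the first stage and drops sharply after fine-tuning on task B alone, which is the degradation on old tasks described in the abstract; the methods surveyed in the talk (e.g., replay- or regularization-based approaches) aim to prevent exactly this.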