Self-Damaging Contrastive Learning Explained

What do compressed neural networks forget? This paper shows how to use those lessons to improve contrastive self-supervised learning and the representation of minority examples in unlabeled datasets.

Paper Links:
SDCLR:
Overcoming the Simplicity Bias:

Chapters
0:00 Paper Title
0:03 What Do Compressed Networks Forget?
2:04 Long Tail of Unlabeled Data
2:43 SDCLR Algorithm Overview
4:40 Experiments
9:00 Interesting Improvement
9:25 Forgetting through Contrastive Learning
11:07 Improved Saliency Maps
11:34 The Simplicity Bias

Thanks for watching! Please Subscribe!
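The core idea behind SDCLR is to contrast a dense encoder against a pruned ("self-damaged") copy of itself: samples the pruned model forgets tend to be rare long-tail examples, and the disagreement between the two branches implicitly up-weights them. Here is a minimal NumPy sketch of that mechanism, assuming a toy linear encoder and global magnitude pruning; the function names and shapes are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (global magnitude pruning)."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

def encode(x, w):
    """Toy linear encoder followed by L2 normalization of embeddings."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Toy batch: two augmented views of 4 images, one shared weight matrix.
x1 = rng.normal(size=(4, 8))
x2 = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 4))

w_pruned = magnitude_prune(w, sparsity=0.5)  # the self-damaged branch

z1 = encode(x1, w)         # dense branch embeds view 1
z2 = encode(x2, w_pruned)  # pruned branch embeds view 2

# Per-sample agreement between branches; low agreement marks examples
# the pruned model "forgets", which the contrastive loss then emphasizes.
agreement = np.sum(z1 * z2, axis=1)
```

In the actual method both branches share weights each step before the mask is applied, and the standard contrastive (NT-Xent) loss is computed across the two branches' outputs; the sketch above only shows how pruning creates the disagreement signal.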