STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
A super cool method that drastically improves model accuracy without using additional task-specific annotated data.

0:00 Intro
3:07 Task augmentation + self-training
5:13 Intermediate fine-tuning
6:09 Task augmentation setup
10:49 Overgeneration & filtering
12:17 Self-training algorithm
16:15 Results
20:23 My thoughts

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning

Abstract

Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available. To address this shortcoming, we propose STraTA, which stands for Self-Training with Task Augmentation, an approach that builds on two key ideas for effective use of unlabeled data.
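Since the chapters above walk through a self-training algorithm, here is a minimal sketch of the generic self-training loop: fit a model on the labeled seed set, pseudo-label the unlabeled pool, keep confident predictions, and retrain on the union. scikit-learn's LogisticRegression stands in for the task-augmented language model the paper actually fine-tunes, and the `self_train`, `rounds`, and `threshold` names are illustrative assumptions, not the paper's settings.

```python
# Minimal self-training sketch. A LogisticRegression classifier is a
# stand-in for the task-augmented base model; in STraTA this would be
# a pre-trained LM fine-tuned on synthesized auxiliary-task data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(x_labeled, y_labeled, x_unlabeled, rounds=5, threshold=0.9):
    """Iteratively pseudo-label the unlabeled pool and retrain on the union."""
    x_train, y_train = x_labeled.copy(), y_labeled.copy()
    for _ in range(rounds):
        model = LogisticRegression(max_iter=1000).fit(x_train, y_train)
        if len(x_unlabeled) == 0:
            break
        probs = model.predict_proba(x_unlabeled)
        # Keep only predictions the current model is confident about
        # (an illustrative assumption; see the note below).
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo_labels = probs[confident].argmax(axis=1)
        # Grow the training set with pseudo-labeled examples, then repeat.
        x_train = np.vstack([x_train, x_unlabeled[confident]])
        y_train = np.concatenate([y_train, pseudo_labels])
        x_unlabeled = x_unlabeled[~confident]
    # Final fit on everything accumulated so far.
    return LogisticRegression(max_iter=1000).fit(x_train, y_train)
```

Note that the confidence threshold here is the common textbook variant of self-training; the paper instead argues for fine-tuning on a broad distribution of pseudo-labeled data rather than only the most confident predictions.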