Multitask Prompted Training Enables Zero-Shot Task Generalization (Explained)
Can zero-shot generalization instead be directly induced by explicit multitask learning? Watch the video to find out!

0:00 Intro
2:14 Prompted training format
5:52 Measuring generalization to unseen tasks
8:45 Held-out tasks
10:45 The future of NLP
11:48 Model
12:17 Experiment results

Abstract: Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks. It has been hypothesized that this is a consequence of implicit multitask learning in language model training. Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping general natural language tasks into a human-readable prompted form. […]
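The "prompted training format" referenced in the abstract and at 2:14 means converting each raw dataset example into a natural-language (input, target) text pair before multitask fine-tuning. Below is a minimal sketch of that idea in Python; the `apply_template` helper, the template strings, and the toy NLI example are illustrative assumptions, not the paper's actual templating system, which uses a richer Jinja-based template language with many templates per dataset.

```python
# Minimal sketch: map a raw dataset example into a prompted
# (input, target) text pair, in the spirit of the paper's approach.
# The template format and apply_template helper are assumptions
# for illustration, not the paper's actual implementation.

def apply_template(template: str, example: dict) -> str:
    """Fill {field} placeholders in a template with values from the example."""
    return template.format(**example)

# A toy NLI example and one possible prompt template for it.
example = {
    "premise": "A cat sleeps on the sofa.",
    "hypothesis": "An animal is resting.",
    "label": "entailment",
}

input_template = (
    'Given that "{premise}", is it true that "{hypothesis}"? '
    "Answer entailment, neutral, or contradiction."
)
target_template = "{label}"

prompted_input = apply_template(input_template, example)
prompted_target = apply_template(target_template, example)

print(prompted_input)   # natural-language task description plus the instance
print(prompted_target)  # "entailment"
```

Because every task is rendered as plain text this way, a single text-to-text model can be fine-tuned on a mixture of many prompted datasets at once, and evaluated zero-shot on held-out tasks it never saw during training.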