Deep Models and Shaping their Development
Percy Liang, Stanford University
Machine Learning Advances and Applications Seminar
Date and Time: Monday, March 8, 2021, 3:00pm to 4:00pm

Abstract: Models in deep learning are wild beasts: they devour raw data and are powerful but hard to control. This talk explores two approaches to taming them. First, I will introduce concept bottleneck networks, in which a deep neural network makes a prediction via interpretable, high-level concepts. We show that such models can obtain accuracy comparable to standard models, while offering the unique ability for a human to perform test-time interventions on the concepts. Second, I will introduce prefix-tuning, which allows one to harness the power of pretrained language models (e.g., GPT-2) for text generation tasks. The key idea is to learn a continuous, task-specific prefix that primes the language model for the task at hand. Prefix-tuning obtains accuracy comparable to fine-tuning, while updating only a small fraction of the model's parameters.
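To make the first idea concrete, below is a minimal sketch of a concept bottleneck model in PyTorch. It is not the speaker's code: the dimensions, names (N_CONCEPTS, N_CLASSES, concept_head), and the dictionary-based intervention interface are illustrative assumptions. The essential property it demonstrates is that the label prediction depends on the input only through the interpretable concepts, which is what makes test-time intervention possible.

```python
import torch
import torch.nn as nn

N_CONCEPTS = 8   # hypothetical number of human-interpretable concepts
N_CLASSES = 4    # hypothetical number of output classes

class ConceptBottleneckModel(nn.Module):
    def __init__(self, backbone_dim=512):
        super().__init__()
        # x -> c: map backbone features to interpretable concepts
        self.concept_head = nn.Sequential(
            nn.Linear(backbone_dim, 128),
            nn.ReLU(),
            nn.Linear(128, N_CONCEPTS),
            nn.Sigmoid(),  # each concept is predicted as a probability
        )
        # c -> y: the final prediction sees the input only via the concepts
        self.label_head = nn.Linear(N_CONCEPTS, N_CLASSES)

    def forward(self, features, interventions=None):
        concepts = self.concept_head(features)
        if interventions is not None:
            # Test-time intervention: a human overwrites selected concept
            # values; `interventions` maps concept index -> corrected value.
            concepts = concepts.clone()
            for idx, value in interventions.items():
                concepts[:, idx] = value
        return self.label_head(concepts), concepts
```

In use, `features` would come from any pretrained backbone; at test time one can correct a mispredicted concept, e.g. `model(features, interventions={3: 1.0})`, and observe how the label prediction changes in response.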
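The second idea can likewise be sketched with a frozen GPT-2, assuming the HuggingFace transformers library. For brevity this version prepends learned continuous vectors at the embedding layer only, which is a simplification closer to prompt-tuning; the prefix-tuning described in the talk inserts prefix activations at every layer. The point it illustrates is the same: the language model stays frozen, and the prefix is the only trainable, task-specific component.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

class PrefixTunedGPT2(nn.Module):
    def __init__(self, prefix_len=10):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained("gpt2")
        for p in self.lm.parameters():
            p.requires_grad = False  # the pretrained LM stays frozen
        embed_dim = self.lm.config.n_embd
        # The only trainable parameters: a continuous task-specific prefix
        self.prefix = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, input_ids, labels=None):
        tok_embeds = self.lm.transformer.wte(input_ids)
        batch = input_ids.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        embeds = torch.cat([prefix, tok_embeds], dim=1)
        if labels is not None:
            # Mask the prefix positions out of the loss (-100 is ignored)
            pad = torch.full((batch, self.prefix.size(0)), -100,
                             dtype=labels.dtype, device=labels.device)
            labels = torch.cat([pad, labels], dim=1)
        return self.lm(inputs_embeds=embeds, labels=labels)
```

Training updates only `self.prefix`, so a separate small prefix can be stored per task while a single copy of GPT-2 is shared across all of them.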