Fine-tuning LLMs with PEFT and LoRA
LoRA Colab: Blog Post: LoRA Paper:

In this video I look at how to use PEFT to fine-tune any decoder-style GPT model. It goes through the basics of LoRA fine-tuning and how to upload the result to the Hugging Face Hub.

My Links:
Twitter
LinkedIn
Github:

00:00 Intro
00:04 Problems with finetuning
00:48 Introducing PEFT
01:11 PEFT other cool techniques
01:51 LoRA Diagram
03:25 Hugging Face PEFT Library
04:06 Code Walkthrough
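As a rough illustration of the idea behind the LoRA diagram section, here is a minimal NumPy sketch (not the video's code; all shapes and the rank/alpha values are hypothetical): instead of updating a full weight matrix W, LoRA learns two small low-rank factors B and A, and the effective weight is W plus a scaled B @ A.

```python
import numpy as np

# Hypothetical dimensions: d x k frozen weight, rank r << min(d, k),
# alpha is the LoRA scaling hyperparameter.
d, k, r, alpha = 64, 64, 4, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))         # frozen pretrained weight (not trained)
A = rng.normal(size=(r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                # initialised to zero, so training starts exactly at W

# Effective weight used at forward time: W + (alpha / r) * (B @ A)
W_effective = W + (alpha / r) * (B @ A)

# Trainable parameters shrink from d*k to r*(d + k):
full_params = d * k        # 4096
lora_params = r * (d + k)  # 512
```

Because B starts at zero, W_effective initially equals W, and only the small A and B matrices need gradients during fine-tuning, which is the memory saving PEFT exploits.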