Fine-tuning Llama 2 on Your Own Dataset: Train an LLM for Your Use Case with QLoRA on a Single GPU
Full text tutorial (requires MLExpert Pro):
Learn how to fine-tune the Llama 2 7B base model on a custom dataset (using a single T4 GPU). We'll use the QLoRA technique to train an LLM for text summarization of conversations between support agents and customers over Twitter.
Discord:
Prepare for the Machine Learning interview:
Subscribe:
GitHub repository:
Join this channel to get access to the perks and support my work:
00:00 When to Fine-tune an LLM
00:30 Fine-tuning vs Retrieval Augmented Generation (Custom Knowledge Base)
03:38 Text Summarization (our example)
04:14 Text Tutorial on
04:47 Dataset Selection
05:36 Choose a M
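As background for the QLoRA technique mentioned above: LoRA adapts a frozen weight matrix W by adding a scaled low-rank product (alpha / r) * B @ A, so only the small A and B matrices are trained; QLoRA additionally keeps W quantized to 4 bits. The sketch below is a toy, pure-Python illustration of that low-rank update (the shapes, values, and helper names are made up for the demo, not taken from the video or any library):

```python
# Toy illustration of the low-rank update at the heart of (Q)LoRA.
# Real Llama 2 projection matrices are e.g. 4096 x 4096; here d = 4.

def matmul(X, Y):
    """Plain-Python matrix multiply for small demo matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), the LoRA-adapted weight."""
    scale = alpha / r
    delta = matmul(B, A)                       # d x r times r x d -> d x d
    return [[w + scale * d_ for w, d_ in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

d = 4        # toy model dimension
r = 1        # LoRA rank (r << d)
alpha = 2    # LoRA scaling factor

W = [[0.0] * d for _ in range(d)]              # frozen base weight (toy zeros)
A = [[1.0, 0.0, 0.0, 0.0]]                     # r x d trainable matrix
B = [[0.5], [0.0], [0.0], [0.0]]               # d x r trainable matrix

W_adapted = lora_update(W, A, B, alpha, r)

# Trainable parameters: 2*d*r for LoRA vs d*d for a full fine-tune.
print(2 * d * r, d * d)    # 8 vs 16 here; the gap grows quickly with d
print(W_adapted[0][0])     # 0.0 + (alpha/r) * (0.5 * 1.0) = 1.0
```

This is why the 7B model fits on a single T4: the frozen base weights stay in 4-bit precision and only the tiny adapter matrices receive gradients.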