Efficient Fine Tuning for Llama v2 7b on a Single GPU
The first problem you're likely to encounter when fine-tuning an LLM is the host out-of-memory error. The problem is even harder when fine-tuning the 7B-parameter Llama 2 model, which requires considerably more memory than smaller models. In this talk, Piero Molino and Travis Addair from the open-source Ludwig project show you how to tackle this problem. The good news is that, with an optimized LLM training framework like Ludwig, you can bring the host memory overhead back down to a reasonable level, even when training on multiple GPUs. In this hands-on workshop, we'll discuss the unique challenges of fine-tuning LLMs and show how to tackle them with open-source tools through a demo. By the end of this session, attendees will understand:
- How to fine-tune LLMs like Llama 2 7B on a single GPU
- Techniques like parameter-efficient fine-tuning and quantization, and how they can help
- How to train a 7B-parameter model on a single T4 GPU with QLoRA (see the sketch below)
- How to deploy tuned models
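To make the QLoRA idea concrete, here is a minimal sketch of the recipe: load the base model with 4-bit quantization, then train only small LoRA adapters on top. This uses the Hugging Face transformers, peft, and bitsandbytes libraries rather than the Ludwig workflow shown in the demo, and the model name and hyperparameters are illustrative assumptions, not the exact settings from the session.

```python
# QLoRA sketch: 4-bit quantized base model + LoRA adapters, small enough
# to fit Llama 2 7B on a single 16 GB T4. Assumes transformers, peft,
# bitsandbytes, and accelerate are installed and that you have access
# to the meta-llama/Llama-2-7b-hf weights (illustrative choice).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"

# Load the base weights in 4-bit NF4 precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # T4 GPUs do not support bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Freeze the quantized weights and attach LoRA adapters: only the small
# low-rank matrices are trainable, so gradients and optimizer state stay tiny.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 7B parameters
```

From here the model can be trained with a standard Hugging Face training loop; Ludwig wraps these same steps (quantization, adapters, training, and serving) behind a declarative configuration, which is what the demo walks through.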