Learn to efficiently fine-tune and customize large language models using PEFT techniques with NVIDIA NeMo Framework.
Overview
This NVIDIA workshop provides AI practitioners, researchers, and developers with practical skills in customizing Large Language Models (LLMs) efficiently. Participants will explore various Parameter-Efficient Fine-Tuning (PEFT) techniques, including LoRA, and learn to apply them using the NVIDIA NeMo Framework. The course covers optimizing LLM performance and resource utilization, alongside deployment and inference strategies for fine-tuned models. It's an instructor-led, hands-on experience designed to equip learners with the knowledge to adapt LLMs for specific tasks without extensive computational resources.
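To make the LoRA technique mentioned above concrete, here is a minimal dependency-free sketch of the core idea: rather than updating a full weight matrix, LoRA trains two small low-rank factors whose product is added to the frozen weights. This is a simplified illustration in plain Python (the function names and the tiny matrices are invented for this example); in practice, the NeMo Framework applies this inside transformer layers for you.

```python
# LoRA sketch: instead of fine-tuning a full weight matrix W (d_out x d_in),
# train two small factors B (d_out x r) and A (r x d_in) with rank r << d_in,
# so the effective weight is W + (alpha / r) * (B @ A).
# Matrices are lists of rows to keep the sketch dependency-free.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, where r is the LoRA rank."""
    r = len(A)              # A has shape (r, d_in)
    scale = alpha / r
    delta = matmul(B, A)    # low-rank update, shape (d_out, d_in)
    return [[w + scale * d for w, d in zip(wrow, drow)]
            for wrow, drow in zip(W, delta)]

# Frozen base weight (2 x 2) and rank-1 trainable adapters.
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]           # (r=1, d_in=2), trainable
B = [[0.5], [0.25]]        # (d_out=2, r=1), trainable
W_eff = lora_effective_weight(W, A, B, alpha=1.0)
# B @ A = [[0.5, 1.0], [0.25, 0.5]], so
# W_eff = [[1.5, 1.0], [0.25, 1.5]]
```

Because only A and B are trained, the number of trainable parameters drops from d_out * d_in to r * (d_out + d_in), which is why PEFT methods like LoRA fit on far more modest hardware than full fine-tuning.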
Learning Outcomes
Explore various PEFT techniques, including LoRA, and apply them efficiently to LLMs.
Understand the impact of different PEFT methods on model performance and resource utilization.
Gain practical experience in fine-tuning LLMs using the NVIDIA NeMo Framework.
Optimize LLM deployment and inference for customized models.