Master advanced fine-tuning techniques for LLMs, including SFT, LoRA, and RLHF, to build high-performance generative AI models.
Overview
This intermediate-level course, offered by Google Cloud on Coursera, delves into advanced fine-tuning methodologies for Large Language Models (LLMs). Students will gain practical skills in preparing and augmenting datasets for fine-tuning and in implementing techniques such as Supervised Fine-tuning (SFT) and Parameter-Efficient Fine-tuning (PEFT) with LoRA. The curriculum also covers the principles of Reinforcement Learning from Human Feedback (RLHF), comprehensive model evaluation, and strategies for responsible AI deployment. Designed for developers, AI engineers, and data scientists, the course teaches learners to optimize LLM performance and customize models for specific, real-world generative AI applications.
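To give a concrete sense of what PEFT with LoRA looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers and peft libraries. The model name, rank, and target modules are illustrative assumptions, and the course's own labs may rely on different tooling (for example, Vertex AI).

```python
# Minimal LoRA fine-tuning setup sketch (illustrative only; not the course's
# prescribed stack). Assumes the Hugging Face `transformers` and `peft`
# libraries; the base model and hyperparameters below are arbitrary choices.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model_name = "google/gemma-2b"  # hypothetical base model (requires access)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA adds small low-rank adapter matrices to selected weight matrices,
# so only a tiny fraction of the parameters is trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

The wrapped model can then be trained with an ordinary training loop or trainer; only the adapter weights are updated, which is what makes the approach parameter-efficient.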
Instructor
Google Cloud Training
Google Cloud Team
Google Cloud Training provides comprehensive educational content and certifications that help individuals and organizations use Google Cloud technologies effectively.
Learning Outcomes
Prepare and augment diverse datasets for LLM fine-tuning (a minimal data-preparation sketch follows this list).
Implement Supervised Fine-tuning (SFT) techniques for custom model behavior.
Apply Parameter-Efficient Fine-tuning (PEFT) methods, including LoRA, to adapt LLMs efficiently.
Evaluate and compare the performance of fine-tuned LLMs using appropriate metrics.
Deploy fine-tuned LLMs and integrate them into generative AI applications.
Understand and apply responsible AI principles in the context of LLM fine-tuning.
Explore the principles of Reinforcement Learning from Human Feedback (RLHF).
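The sketch below illustrates the kind of dataset preparation referenced in the first outcome: formatting raw question/answer pairs into a JSONL file of prompt/response records for supervised fine-tuning. The field names and prompt template are assumptions for illustration, not a schema prescribed by the course.

```python
# Minimal sketch: write instruction-tuning examples as JSONL, one
# prompt/response record per line. Field names and the prompt template
# are assumptions; real pipelines follow whatever schema their
# fine-tuning tooling expects.
import json

raw_examples = [
    {
        "question": "What is LoRA?",
        "answer": "A parameter-efficient fine-tuning method that trains small low-rank adapters.",
    },
]

with open("sft_train.jsonl", "w", encoding="utf-8") as f:
    for ex in raw_examples:
        record = {
            "prompt": f"Answer the question.\n\nQuestion: {ex['question']}\nAnswer:",
            "response": " " + ex["answer"],
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```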