Learn to build and deploy generative AI applications using large language models on AWS, mastering prompt engineering and fine-tuning.
Overview
This comprehensive course teaches developers and data scientists to build and deploy generative AI applications with Large Language Models (LLMs) on AWS. Students will explore transformer architectures and attention mechanisms and master prompt engineering techniques. The curriculum also covers fine-tuning pre-trained LLMs for specific tasks and leveraging NVIDIA NeMo and Amazon SageMaker for efficient development and deployment. The course is ideal for those with a basic understanding of Python and machine learning.
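To give a flavor of the deployment workflow the course covers, here is a minimal sketch using the SageMaker Python SDK with a JumpStart-hosted model. It is not taken from the course materials: the model ID, instance type, and request payload fields are illustrative assumptions, and running it provisions billable AWS infrastructure.

```python
# Minimal sketch (illustrative, not from the course): deploy a JumpStart-hosted
# LLM to a SageMaker real-time endpoint and send it a prompt.
from sagemaker.jumpstart.model import JumpStartModel

# Hypothetical choice of a JumpStart text-generation model.
model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")

# Deploying provisions a GPU endpoint and incurs cost until it is deleted.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

# Prompt engineering in its simplest form: send an instruction as plain text.
# The "inputs"/"parameters" payload shape assumes a Hugging Face LLM container.
response = predictor.predict({
    "inputs": "Summarize the benefits of attention mechanisms in two sentences.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.7},
})
print(response)

# Tear down the endpoint and model when finished to stop charges.
predictor.delete_endpoint()
predictor.delete_model()
```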
Learning Outcomes
Understand the fundamentals of generative AI and LLMs.
Explore transformer architectures and attention mechanisms (see the sketch after this list).
Master prompt engineering techniques for effective LLM interaction.
Learn to fine-tune pre-trained LLMs for specific tasks.
Build and deploy generative AI applications on AWS using SageMaker.
Utilize NVIDIA NeMo for efficient LLM development.
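For readers new to the attention mechanism referenced above, the following is a minimal, framework-free sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, which is the core operation inside transformer layers. The shapes and random inputs are purely illustrative.

```python
# Minimal sketch of scaled dot-product attention for a single head.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                  # weighted sum of values

# Toy example: 3 tokens with embedding dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)      # -> (3, 4)
```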