Learn Generative AI, LLMs, and their application on AWS, covering prompt engineering, RAG, and model deployment.
Overview
This comprehensive course provides a deep dive into Generative AI and Large Language Models (LLMs), focusing on their practical application within the Amazon Web Services (AWS) ecosystem. Students will gain a foundational understanding of GenAI concepts, explore various LLM use cases, and learn key techniques such as prompt engineering, Retrieval Augmented Generation (RAG), and fine-tuning. The course also covers deploying custom LLMs using AWS services such as Amazon SageMaker and Amazon Bedrock, through a hands-on learning approach. It is designed for developers, data scientists, and AI/ML enthusiasts eager to build and deploy advanced AI solutions on the cloud.
Instructor
Chris Fregly
Principal Specialist SA, AI/ML at AWS
Chris Fregly is a Principal Specialist Solutions Architect for AI/ML at AWS, specializing in helping customers design and implement scalable machine learning solutions on the cloud.
Learning Outcomes
Explain Generative AI, Large Language Models (LLMs), and their applications.
Identify Generative AI and LLM use cases and design considerations.
Describe foundation models (FMs) and how to apply them using Amazon Bedrock.
Apply Prompt Engineering techniques to enhance LLM performance.
Implement a Retrieval Augmented Generation (RAG) architecture for LLMs.
Build, train, and deploy custom LLMs and FMs using Amazon SageMaker.