Master practical LLM and RAG evaluation techniques using open-source tools with this applied course.
Overview
This free online course from Evidently AI is a hands-on guide to testing and evaluating Large Language Model (LLM) and Retrieval Augmented Generation (RAG) applications. You will learn practical methodologies, metrics, and open-source Python tools to make your LLM-powered systems robust and reliable. The course covers evaluating raw LLM responses, assessing RAG systems, and productionizing evaluation pipelines, making it a good fit for data scientists, ML engineers, and LLM developers.
Instructor
Elena Samuylova
Co-founder and CEO at Evidently AI
Learning Outcomes
Master key metrics and methodologies for LLM evaluation.
Implement practical strategies for evaluating raw LLM responses.
Assess the performance of Retrieval Augmented Generation (RAG) applications.
Utilize open-source Python tools for building evaluation pipelines.
Understand how to productionize and monitor LLM evaluation processes.
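To give a flavor of the pipeline-building outcome above, here is a minimal sketch of a rule-based response evaluator in plain Python. This is an illustration only: every name is hypothetical, and it is not the course's actual tooling or the Evidently API, which the course itself introduces.

```python
# Toy LLM response evaluator: applies simple deterministic checks to one
# response, the kind of building block an evaluation pipeline scores in bulk.
# All names are illustrative, not taken from the course or any library.

def evaluate_response(question: str, response: str, must_include: list[str]) -> dict:
    """Score a single LLM response with deterministic checks."""
    text = response.lower()
    hits = [kw for kw in must_include if kw.lower() in text]
    return {
        "non_empty": len(response.strip()) > 0,        # basic sanity check
        "length_ok": len(response) <= 500,             # guard against rambling
        # fraction of required keywords present in the answer
        "keyword_coverage": len(hits) / max(len(must_include), 1),
    }

result = evaluate_response(
    question="What is RAG?",
    response="RAG combines retrieval with generation to ground LLM answers.",
    must_include=["retrieval", "generation"],
)
print(result)  # all checks pass; keyword_coverage is 1.0
```

In practice such deterministic checks are combined with model-based and LLM-as-judge metrics, run over whole datasets, and monitored in production, which is what the course walks through with open-source tools.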