Master generative AI and LLMs, from fundamentals to advanced deployment for developers and administrators.
Overview
This learning path equips developers and administrators to navigate the rapidly evolving landscape of generative AI and large language models (LLMs). Students gain hands-on expertise in building, deploying, and managing generative AI applications through NVIDIA's Deep Learning Institute (DLI) workshops. Key topics include LLM fundamentals, transformer architecture, prompt engineering, Retrieval Augmented Generation (RAG) application development, model optimization with NVIDIA TensorRT-LLM, and enterprise-scale deployment using NVIDIA NIM and Kubernetes. The path combines theoretical understanding with practical, project-based learning, preparing learners for real-world AI challenges in both development and operational roles.
Learning Outcomes
Understand core concepts of generative AI and LLMs, including transformer architecture.
Apply effective prompt engineering techniques for various LLM applications.
Fine-tune large language models for specific tasks and datasets.
Develop Retrieval Augmented Generation (RAG)-based LLM applications using frameworks like LangChain.
Optimize LLM performance and inference using NVIDIA TensorRT-LLM.
Deploy and manage generative AI solutions at enterprise scale using NVIDIA platforms.
Implement MLOps practices for efficient AI model serving and management.
Utilize NVIDIA NIM for streamlined deployment of generative AI models.
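The RAG pattern mentioned in the outcomes above can be sketched in a few lines: retrieve the documents most similar to a query, then prepend them as context to the LLM prompt. This is a minimal, dependency-free illustration of the idea, not course material; the toy documents, the bag-of-words "embedding", and all function names are illustrative stand-ins for a real embedding model and framework such as LangChain.

```python
from collections import Counter
import math

# Toy document store (illustrative data only)
DOCS = [
    "NVIDIA TensorRT-LLM optimizes LLM inference on GPUs.",
    "Retrieval Augmented Generation grounds LLM answers in retrieved documents.",
    "Kubernetes orchestrates containerized workloads at enterprise scale.",
]

def embed(text):
    """Bag-of-words vector -- a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("What does Retrieval Augmented Generation do?")
```

In a production system the bag-of-words step would be replaced by a dense embedding model and a vector database, and `build_prompt`'s output would be sent to an LLM endpoint; the retrieval-then-augment flow is the same.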