Learn to build, deploy, and manage generative AI applications on Databricks, covering LLMs, RAG, prompt engineering, and fine-tuning.
Overview
This hands-on course teaches developers and data scientists how to build, deploy, and manage robust generative AI applications on the Databricks Lakehouse Platform. Participants learn foundational LLM concepts, prompt engineering, Retrieval Augmented Generation (RAG) techniques, and fine-tuning strategies. The curriculum also covers model evaluation and deployment with key open-source tools such as MLflow, LangChain, and Hugging Face, all integrated within the Databricks ecosystem. Practical labs reinforce each topic and build the skills needed to develop end-to-end GenAI solutions.
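To give a flavor of the RAG workflow mentioned above, the sketch below shows the core pattern in plain Python: embed a small set of documents, retrieve the chunks most similar to a question, and inject them into the prompt sent to an LLM. This is a minimal, framework-agnostic illustration, not course code; the embedding model name, documents, and question are assumptions chosen for the example.

```python
# Minimal sketch of the RAG pattern: embed, retrieve, augment the prompt.
# Assumes the sentence-transformers package is installed; model and data are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Databricks Model Serving exposes models behind a REST endpoint.",
    "MLflow tracks experiments, parameters, metrics, and model artifacts.",
    "Vector search retrieves document chunks semantically similar to a query.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec  # vectors are normalized, so dot product = cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

question = "How do I track experiments on Databricks?"
context = "\n".join(retrieve(question))

# The retrieved context is prepended to the prompt before calling an LLM
# (for example, a model served on Databricks or loaded from Hugging Face).
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The course builds on this pattern with LangChain components and Databricks-managed vector search rather than the hand-rolled retrieval shown here.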
Learning Outcomes
Develop and deploy generative AI applications on the Databricks Lakehouse Platform.
Apply prompt engineering techniques to optimize LLM interactions.
Implement Retrieval Augmented Generation (RAG) to ground LLM responses in external knowledge.
Fine-tune large language models for domain-specific tasks.
Evaluate and manage generative AI models using MLflow (a minimal tracking sketch follows this list).
Utilize LangChain and Hugging Face with Databricks for GenAI workflows.
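As a small taste of the MLflow workflow referenced above, the sketch below logs a prompt-engineering run with the standard MLflow tracking API, which is pre-installed on Databricks ML runtimes. The prompt template, model identifier, metric name, and score are illustrative placeholders, not material from the course.

```python
# Minimal sketch of tracking a prompt-engineering experiment with MLflow.
import mlflow

prompt_template = "Summarize the following support ticket in two sentences:\n{ticket}"

with mlflow.start_run(run_name="prompt-v1"):
    mlflow.log_param("model", "databricks-hosted-llm")       # assumed model identifier
    mlflow.log_text(prompt_template, "prompt_template.txt")  # store the prompt as an artifact
    # In practice the score would come from an evaluation set or mlflow.evaluate();
    # a fixed number is used here only to keep the sketch self-contained.
    mlflow.log_metric("answer_relevance", 0.82)
```

Runs logged this way appear in the Databricks experiment UI, where different prompt versions and their metrics can be compared side by side.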