This course covers generative AI engineering on Azure Databricks, using Spark to explore, fine-tune, evaluate, and integrate advanced language models. It teaches how to implement techniques like retrieval-augmented generation (RAG) and multi-stage reasoning, as well as how to fine-tune large language models for specific tasks and evaluate their performance.
Students will also learn about responsible AI practices for deploying AI solutions and how to manage models in production using LLMOps (Large Language Model Operations) on Azure Databricks.
Apart from public, instructor-led classes, we also offer private in-house trainings for organizations based on their needs. Call us at +852 2116 3328 or email us at [email protected] for more details.
This course is designed for data scientists, machine learning engineers, and other AI practitioners who want to build generative AI applications using Azure Databricks. It is intended for professionals familiar with fundamental AI concepts and the Azure Databricks platform.
Large Language Models (LLMs) have revolutionized various industries by enabling advanced natural language processing (NLP) capabilities. These language models are utilized in a wide array of applications, including text summarization, sentiment analysis, language translation, zero-shot classification, and few-shot learning.
Learning objectives
In this module, you learn how to:
Retrieval Augmented Generation (RAG) is an advanced technique in natural language processing that enhances the capabilities of generative models by integrating external information retrieval mechanisms. When you use both generative models and retrieval systems, RAG dynamically fetches relevant information from external data sources to augment the generation process, leading to more accurate and contextually relevant outputs.
Learning objectives
In this module, you learn how to:
Multi-stage reasoning systems break down complex problems into multiple stages or steps, with each stage focusing on a specific reasoning task. The output of one stage serves as the input for the next, allowing for a more structured and systematic approach to problem-solving.
Learning objectives
In this module, you learn how to:
Fine-tuning uses Large Language Models’ (LLMs) general knowledge to improve performance on specific tasks, allowing organizations to create specialized models that are more accurate and relevant while saving resources and time compared to training from scratch.
Learning objectives
In this module, you learn how to:
In this module, you explore Large Language Model evaluation using various metrics and approaches, learn about evaluation challenges and best practices, and discover automated evaluation techniques including LLM-as-a-judge methods.
Learning objectives
In this module, you learn how to:
When working with Large Language Models (LLMs) in Azure Databricks, it’s important to understand the responsible AI principles for implementation, ethical considerations, and how to mitigate risks. Based on identified risks, learn how to implement key security tooling for language models.
Learning objectives
In this module, you learn how to:
Streamline the implementation of Large Language Models (LLMs) with LLMOps (LLM Operations) in Azure Databricks. Learn how to deploy and manage LLMs throughout their lifecycle using Azure Databricks.
Learning objectives
In this module, you learn how to:
Before starting this module, you should be familiar with fundamental AI concepts and Azure Databricks. Consider completing the Get started with artificial intelligence learning path and the Explore Azure Databricks module first.