Implement Generative AI engineering with Azure Databricks

  • Home
  • /
  • Courses
  • /
  • Implement Generative AI engineering with Azure Databricks
Course ID: DP-3028
Exam Code: -
Duration: 1 Day
Private in-house training

Apart from public, instructor-led classes, we also offer private in-house trainings for organizations based on their needs. Call us at +852 2116 3328 or email us at [email protected] for more details.

What are the skills covered
  • Get started with language models in Azure Databricks’
  • Implement Retrieval Augmented Generation (RAG) with Azure Databricks
  • Implement multi-stage reasoning in Azure Databricks
  • Fine-tune language models with Azure Databricks
  • Evaluate language models with Azure Databricks
  • Review responsible AI principles for language models in Azure Databricks
  • Implement LLMOps in Azure Databricks
Who should attend this course

This course is designed for data scientists, machine learning engineers, and other AI practitioners who want to build generative AI applications using Azure Databricks. It is intended for professionals familiar with fundamental AI concepts and the Azure Databricks platform.

Course Modules

Module 1: Get started with language models in Azure Databricks

Large Language Models (LLMs) have revolutionized various industries by enabling advanced natural language processing (NLP) capabilities. These language models are utilized in a wide array of applications, including text summarization, sentiment analysis, language translation, zero-shot classification, and few-shot learning.

Learning objectives

In this module, you learn how to:

  • Describe Generative AI.
  • Describe Large Language Models (LLMs).
  • Identify key components of LLM applications.
  • Use LLMs for Natural Language Processing (NLP) tasks.

 

Module 2: Implement Retrieval Augmented Generation (RAG) with Azure Databricks

Retrieval Augmented Generation (RAG) is an advanced technique in natural language processing that enhances the capabilities of generative models by integrating external information retrieval mechanisms. When you use both generative models and retrieval systems, RAG dynamically fetches relevant information from external data sources to augment the generation process, leading to more accurate and contextually relevant outputs.

Learning objectives

In this module, you learn how to:

  • Set up a RAG workflow.
  • Prepare your data for RAG.
  • Retrieve relevant documents with vector search.
  • Improve model accuracy by reranking your search results.

 

Module 3: Implement multi-stage reasoning in Azure Databricks

Multi-stage reasoning systems break down complex problems into multiple stages or steps, with each stage focusing on a specific reasoning task. The output of one stage serves as the input for the next, allowing for a more structured and systematic approach to problem-solving.

Learning objectives

In this module, you learn how to:

  • Identify the need for multi-stage reasoning systems.
  • Describe a multi-stage reasoning workflow.
  • Implement multi-stage reasoning with libraries like LangChain, LlamaIndex, Haystack, and the DSPy framework.

 

Module 4: Fine-tune language models with Azure Databricks

Fine-tuning uses Large Language Models’ (LLMs) general knowledge to improve performance on specific tasks, allowing organizations to create specialized models that are more accurate and relevant while saving resources and time compared to training from scratch.

Learning objectives

In this module, you learn how to:

  • Understand when to use fine-tuning.
  • Prepare your data for fine-tuning.
  • Fine-tune an Azure OpenAI model.

 

Module 5: Evaluate language models with Azure Databricks

In this module, you explore Large Language Model evaluation using various metrics and approaches, learn about evaluation challenges and best practices, and discover automated evaluation techniques including LLM-as-a-judge methods.

Learning objectives

In this module, you learn how to:

  • Evaluate LLM evaluation models
  • Describe the relationship between LLM evaluation and AI system evaluation
  • Describe standard LLM evaluation metrics like accuracy, perplexity, and toxicity
  • Describe LLM-as-a-judge for evaluation

 

Module 6: Review responsible AI principles for language models in Azure Databricks

When working with Large Language Models (LLMs) in Azure Databricks, it’s important to understand the responsible AI principles for implementation, ethical considerations, and how to mitigate risks. Based on identified risks, learn how to implement key security tooling for language models.

Learning objectives

In this module, you learn how to:

  • Describe the responsible AI principles for implementation of language models.
  • Identify the ethical considerations for language models.
  • Mitigate the risks associated with language models.
  • Implement key security tooling for language models.

 

Module 7: Implement LLMOps in Azure Databricks

Streamline the implementation of Large Language Models (LLMs) with LLMOps (LLM Operations) in Azure Databricks. Learn how to deploy and manage LLMs throughout their lifecycle using Azure Databricks.

Learning objectives

In this module, you learn how to:

  • Describe the LLM lifecycle overview.
  • Identify the model deployment option that best fits your needs.
  • Use MLflow and Unity Catalog to implement LLMops.
Prerequisites

Before starting this module, you should be familiar with fundamental AI concepts and Azure Databricks. Consider completing the Get started with artificial intelligence learning path and the Explore Azure Databricks module first.

Search for a course