This course is the entry point for learning Advanced Data Engineering with Databricks.
Below, we describe each of the four four-hour modules included in this course.
Databricks Streaming and Lakeflow Declarative Pipelines
This course provides a comprehensive understanding of Spark Structured Streaming and Delta Lake, including streaming computation models, configuring streaming reads, and maintaining data quality in a streaming environment.
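For a flavor of the configuration involved, here is a minimal PySpark sketch of an incremental streaming read from one Delta table into another. The table names, checkpoint path, and option values are illustrative assumptions, not taken from the course materials.

```python
# Minimal sketch of an incremental Delta Lake streaming read/write.
# Assumes a Databricks (or Delta-enabled) Spark session; table names,
# checkpoint path, and option values are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read new rows from a Delta table as they arrive; maxFilesPerTrigger
# caps how many files each micro-batch processes.
events = (
    spark.readStream
    .format("delta")
    .option("maxFilesPerTrigger", 1000)
    .table("bronze.events")
)

# Append to a downstream table; the checkpoint records progress so the
# query resumes exactly where it left off after a restart.
query = (
    events.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/silver_events")
    .outputMode("append")
    .toTable("silver.events")
)
```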
Databricks Data Privacy
This content is intended for data engineers, as well as customers, partners, and employees who perform data engineering tasks with Databricks. It provides the knowledge and skills needed to carry out these activities effectively on the Databricks platform.
Databricks Performance Optimization
In this course, you’ll learn how to optimize workloads and physical data layout with Spark and Delta Lake, and how to analyze the Spark UI to assess performance and debug applications. We’ll cover topics such as streaming, liquid clustering, data skipping, caching, Photon, and more.
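As a taste of one of these techniques, here is a minimal sketch of enabling liquid clustering on a Delta table on Databricks; the table and column names are illustrative assumptions.

```python
# Minimal sketch of liquid clustering on Databricks; table and column
# names are illustrative. CLUSTER BY replaces Hive-style partitioning
# as the physical-layout hint for this table. Assumes an existing
# Databricks Spark session named `spark`.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales.orders (
        order_id    BIGINT,
        customer_id BIGINT,
        order_ts    TIMESTAMP,
        amount      DOUBLE
    )
    CLUSTER BY (customer_id)
""")

# OPTIMIZE incrementally reclusters files; queries filtering on
# customer_id can then skip whole files using min/max statistics.
spark.sql("OPTIMIZE sales.orders")
```

Unlike Hive-style partitioning, the clustering key can be changed later without rewriting the whole table, which is why liquid clustering suits columns whose query patterns evolve.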
Automated Deployment with Databricks Asset Bundles
This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps concepts, DataOps, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles apply to data engineering pipelines.
The course then focuses on continuous deployment within the CI/CD process, examining tools such as the Databricks REST API, SDK, and CLI for project deployment. You will learn about Databricks Asset Bundles (DABs) and how they fit into the CI/CD process, diving into their key components, folder structure, and how they streamline deployment across target environments in Databricks. You will also learn how to add variables and how to modify, validate, deploy, and run Databricks Asset Bundles across multiple environments with different configurations using the Databricks CLI, as sketched below.
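To illustrate, here is a minimal databricks.yml sketch with two targets and one variable; the bundle name, workspace hosts, variable, and the my_job resource key referenced in the comments are all illustrative assumptions.

```yaml
# Minimal databricks.yml sketch; all names and hosts are illustrative.
bundle:
  name: my_project

variables:
  catalog:
    description: Catalog that pipeline outputs are written to
    default: dev

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://adb-1111111111111111.11.azuredatabricks.net
  prod:
    mode: production
    workspace:
      host: https://adb-2222222222222222.22.azuredatabricks.net
    variables:
      catalog: prod

# Typical CLI flow:
#   databricks bundle validate           # check the configuration
#   databricks bundle deploy -t dev      # deploy to the dev target
#   databricks bundle run my_job -t prod # run a job defined under resources
```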
Finally, the course introduces Visual Studio Code as an Integrated Development Environment (IDE) for building, testing, and deploying Databricks Asset Bundles locally, optimizing your development process. The course concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Databricks Asset Bundles.
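As a sketch of such automation, the following GitHub Actions workflow deploys a bundle on every push to main. The databricks/setup-cli action and the secret names are assumptions; adapt them to your repository and authentication setup.

```yaml
# Sketch of a GitHub Actions workflow that deploys a bundle on push to
# main. The databricks/setup-cli action and the secret names are
# assumptions, not course materials.
name: deploy-bundle

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Installs the Databricks CLI (assumed action; pin a version in practice)
      - uses: databricks/setup-cli@main

      - name: Validate and deploy to the prod target
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
        run: |
          databricks bundle validate
          databricks bundle deploy -t prod
```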
By the end of this course, you will be equipped to automate Databricks project deployments with Databricks Asset Bundles, improving efficiency through DevOps practices.
Apart from public instructor-led classes, we also offer private in-house training for organizations based on their needs. Call us at +852 2116 3328 or email us at [email protected] for more details.
Prerequisites
• Strong knowledge of the Databricks platform, including experience with Databricks Workspaces, Apache Spark, Delta Lake, the Medallion Architecture, Unity Catalog, Lakeflow Declarative Pipelines, and Workflows, and in particular experience leveraging expectations with Lakeflow Declarative Pipelines.
• Experience in data ingestion and transformation, with proficiency in PySpark for data processing and DataFrame manipulation. Candidates should also have experience writing intermediate-level SQL queries for data analysis and transformation.
• Proficiency in Python programming, including the ability to design and implement functions and classes, and experience with creating, importing, and utilizing Python packages.
• Familiarity with DevOps practices, particularly continuous integration and continuous delivery/deployment (CI/CD) principles.
• A basic understanding of Git version control.
• Completion of the prerequisite course DevOps Essentials for Data Engineering.