Comparison of Fabric, Azure Databricks and Synapse Analytics

Microsoft Fabric vs Databricks vs Synapse

Microsoft Fabric is an all-in-one SaaS analytics platform with integrated BI.
Databricks is a Spark-based platform mainly used for large-scale data engineering and machine learning.
Synapse is an enterprise analytics service combining SQL data warehousing and big data processing.

Platform | Description
Microsoft Fabric | An all-in-one SaaS data platform that integrates data engineering, data science, warehousing, real-time analytics, and BI.
Azure Databricks | A Spark-based analytics and AI platform optimized for large-scale data engineering and machine learning.
Azure Synapse Analytics | An analytics service combining data warehousing and big data analytics.

Architecture

  1. Microsoft Fabric: Fully integrated SaaS platform built around OneLake.
    single data lake, unified workspace, built-in Power BI
  2. Databricks: Spark-native architecture optimized for big data processing.
    Delta Lake, Spark clusters, ML workloads
  3. Synapse: Hybrid analytics platform integrating SQL data warehouse and big data tools.

Main Use Cases

Platform | Best For
Fabric | End-to-end analytics platform
Databricks | Advanced data engineering & ML
Synapse | Enterprise data warehouse

Introducing Fabric

What is Fabric?

Microsoft Fabric is an all-in-one, SaaS (Software as a Service) analytics platform. It combines data movement, data engineering, data science, and business intelligence into a single, unified environment. It is built on OneLake, which is like “OneDrive for data.”

Why use Microsoft Fabric?

Traditional data stacks are “fragmented”: you might use Azure Data Factory to move data, Databricks to clean it, a data lake to store it, and Power BI to visualize it.

Fabric fixes this by putting everything in one place.

  • Unified Data: Every tool (SQL, Spark, Power BI) uses the same copy of data in OneLake. No more duplicating data.
  • SaaS Simplicity: You don’t need to manage servers, clusters, or storage accounts. It’s all managed by Microsoft.
  • Direct Lake Mode: Power BI can read data directly from the lake without “importing” it, making reports incredibly fast.
  • Cost Efficiency: You pay for one pool of “Compute Capacity” and share it across all your teams.

What can Microsoft Fabric do?

It handles the entire data journey:

  • Ingest: Pull data from anywhere (SQL, AWS, Web).
  • Store: Store massive amounts of data in an open format (Delta Parquet).
  • Process: Clean and transform data using Python/Spark or SQL.
  • Analyze: Run complex queries and train Machine Learning models.
  • Visualize: Build real-time dashboards in Power BI.

Components of Fabric

Fabric is divided into “Experiences” based on what you need to do:

Component (Experience) | Role | Purpose
Data Factory | Data Engineer | ETL, Pipelines, and Dataflows.
Synapse Data Engineering | Spark Developer | High-scale processing using Notebooks.
Synapse Data Warehouse | SQL Developer | Professional-grade SQL storage.
Synapse Data Science | Data Scientist | Building AI and ML models.
Real-Time Intelligence | IoT / App Dev | High-speed streaming data.
Power BI | Business Analyst | Visualizing data for the business.
Data Activator | Operations | Automatic alerts (e.g., “Email me if sales drop”).

Step-by-Step: Getting Started

If you were a Data Engineer working on a project today, your workflow would look like this:

Step 1: Create a Workspace

  • Log in to Fabric. Create a Workspace.
  • Ensure it has a Fabric Capacity license attached.

Step 2: The Lakehouse (Bronze)

  • Create a Lakehouse called “Company_Data_Lake”.
  • This creates a folder in OneLake where your raw files will live.

Step 3: Data Pipeline (Ingestion)

  • Use Data Factory to create a pipeline.
  • Data Pipeline | Source: SQL Server | Destination: Lakehouse Files
  • This pulls your raw data into the “Files” folder.

Step 4: Notebook (Silver/Gold)

  • Open a Notebook. Use PySpark to clean the data.
  • Example: Remove duplicates, fix date formats (see the sketch after this list).
  • Save the result as a Table (Delta format).
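
A minimal PySpark sketch of this cleaning step (the file path, column name, and table name are illustrative assumptions):

from pyspark.sql import functions as F

# Read the raw files that the pipeline landed in the Lakehouse "Files" area
raw_df = spark.read.format("csv").option("header", "true").load("Files/raw/sales/")

clean_df = (
    raw_df
    .dropDuplicates()                                                  # remove duplicate rows
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))  # fix date formats
)

# Save the result as a Delta table in the Lakehouse
clean_df.write.format("delta").mode("overwrite").saveAsTable("sales_clean")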

Step 5: Power BI (Visualization)

  • Switch to your SQL Analytics Endpoint.
  • Click New Report. Power BI will use Direct Lake to show your data instantly.

Appendix

Microsoft Fabric fundamentals documentation

Logging in Azure Databricks with Python

In Azure Databricks, logging is crucial for monitoring, debugging, and auditing your notebooks, jobs, and applications. Since Databricks runs on a distributed architecture and utilizes standard Python, you can use familiar Python logging tools, along with features specific to the Databricks environment like Spark logging and MLflow tracking.

Python’s logging module provides a versatile logging system for messages of different severity levels and controls their presentation. To get started with the logging module, you need to import it to your program first, as shown below:

import logging

logging.debug("A debug message")
logging.info("An info message")
logging.warning("A warning message")
logging.error("An error message")
logging.critical("A critical message")

Log levels in Python

Log levels define the severity of the event that is being logged. For example, a message logged at the INFO level indicates a normal and expected event, while one logged at the ERROR level signifies that an unexpected error has occurred.

Each log level in Python is associated with a number (from 10 to 50) and has a corresponding module-level method in the logging module as demonstrated in the previous example. The available log levels in the logging module are listed below in increasing order of severity:

Logging Level Quick Reference

Level | Meaning | When to Use
DEBUG (10) | Detailed debugging info | Development
INFO (20) | Normal messages | Job progress
WARNING (30) | Minor issue | Non-critical issues
ERROR (40) | Failure | Recoverable errors
CRITICAL (50) | System failure | Stop the job

It’s important to always use the most appropriate log level so that you can quickly find the information you need. For instance, logging a message at the WARNING level will help you find potential problems that need to be investigated, while logging at the ERROR or CRITICAL level helps you discover problems that need to be rectified immediately.

By default, the logging module will only produce records for events that have been logged at a severity level of WARNING and above.

Logging basic configuration

Be sure to place the call to logging.basicConfig() before any logging methods such as info(), warning(), and others are used. It should also be called only once, as it is a one-off configuration facility. If it is called multiple times, only the first call will have an effect.

logging.basicConfig() example

import logging
from datetime import datetime

## build a run id from the current date and time
run_id = datetime.now().strftime("%Y%m%d_%H%M%S")

## write output to a log file (here a DBFS path)
log_file = f"/dbfs/tmp/my_pipeline/logs/run_{run_id}.log"

## configure the root logger
logging.basicConfig(
    filename=log_file,
    level=logging.INFO,
    format="%(asctime)s — %(levelname)s — %(message)s",
)

## create (or retrieve) a named logger that you will use to write log messages.
logger = logging.getLogger("pipeline")
## The string "pipeline" is just a name for the logger. It can be anything:
## "etl"
## "my_app"
## "sales_job"
## "abc123"

Simple logging example

import logging
from datetime import datetime

run_id = datetime.now().strftime("%Y%m%d_%H%M%S")
log_file = f"/dbfs/tmp/my_pipeline/logs/run_{run_id}.log"

logging.basicConfig(
    filename=log_file,
    level=logging.INFO,
    format="%(asctime)s — %(levelname)s — %(message)s",
)
logger = logging.getLogger("pipeline")

logger.info("=== Pipeline Started ===")

try:
    logger.info("Step 1: Read data")
    df = spark.read.csv(...)
    
    logger.info("Step 2: Transform")
    df2 = df.filter(...)
    
    logger.info("Step 3: Write output")
    df2.write.format("delta").save(...)

    logger.info("=== Pipeline Completed Successfully ===")

except Exception as e:
    logger.error(f"Pipeline Failed: {e}")
    raise

The log output looks like this:

2025-11-21 15:05:01,112 – INFO – === Pipeline Started ===
2025-11-21 15:05:01,213 – INFO – Step 1: Read data
2025-11-21 15:05:01,315 – INFO – Step 2: Transform
2025-11-21 15:05:01,417 – INFO – Step 3: Write output
2025-11-21 15:05:01,519 – INFO – === Pipeline Completed Successfully ===

Log Rotation (Daily Files)

Log Rotation means: Your log file does NOT grow forever. Instead, it automatically creates a new log file each day (or each hour, week, etc.), and keeps only a certain number of old files.

This prevents:

  • huge log files
  • storage overflow
  • long-term disk growth
  • difficult debugging
  • slow I/O

It is very common in production systems (Databricks, Linux, app servers, databases). Without log rotation, a single file grows without bound. With daily rotation you get files like:

my_log.log (today)
my_log.log.2025-11-24 (yesterday)
my_log.log.2025-11-23
my_log.log.2025-11-22

Python code that does Log Rotation

import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "/dbfs/Volumes/logs/my_log.log",
    when="midnight",   # rotate every day
    interval=1,        # 1 day
    backupCount=30     # keep last 30 days
)

formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)

logger = logging.getLogger("rotating_logger")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Log rotation enabled")
Hourly Log Rotation
handler = TimedRotatingFileHandler(
    "/dbfs/tmp/mylogs/hourly.log",
    when="H",       # rotate every hour
    interval=1,
    backupCount=24  # keep 24 hours
)

Size-Based Log Rotation

from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    "/dbfs/tmp/mylogs/size.log",
    maxBytes=5 * 1024 * 1024,  # 5 MB
    backupCount=5              # keep 5 old files
)

Logging to Unity Catalog Volume (BEST PRACTICE)

Create a volume first (once):

CREATE VOLUME IF NOT EXISTS my_catalog.my_schema.logs;

Use it in your code (get_logger here is a reusable helper, for example the one defined in the template below):

from logger_setup import get_logger

logger = get_logger(
    app_name="customer_etl",
    log_path="/Volumes/my_catalog/my_schema/logs/customer_etl"
)

logger.info("Starting ETL pipeline")

Databricks ETL Logging Template (Production-Ready)

Features

  • Writes logs to file
  • Uses daily rotation (keeps 30 days)
  • Logs INFO, ERROR, stack traces
  • Works in notebooks + Jobs
  • Fully reusable

1. Create the logger (ready for copy & paste)

File version

import logging
from logging.handlers import TimedRotatingFileHandler

def get_logger(name="etl"):
    log_path = "/dbfs/tmp/logs/pipeline.log"   # or a UC Volume

    handler = TimedRotatingFileHandler(
        log_path,
        when="midnight",
        interval=1,
        backupCount=30
    )

    formatter = logging.Formatter(
        "%(asctime)s - %(levelname)s - %(name)s - %(message)s"
    )
    handler.setFormatter(formatter)

    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)

    # Prevent duplicate handlers in notebook re-runs
    if not logger.handlers:
        logger.addHandler(handler)

    return logger

logger = get_logger("my_pipeline")
logger.info("Logger initialized")

Unity Catalog Volume version

# If the folder doesn't exist, create it:
dbutils.fs.mkdirs("/Volumes/my_catalog/my_schema/logs")

# Create a logger pointing to the UC Volume
import logging
from logging.handlers import TimedRotatingFileHandler

def get_logger(name="etl"):
    log_path = "/Volumes/my_catalog/my_schema/logs/pipeline.log"

    handler = TimedRotatingFileHandler(
        filename=log_path,
        when="midnight",
        interval=1,
        backupCount=30           # keep last 30 days
    )

    formatter = logging.Formatter(
        "%(asctime)s - %(levelname)s - %(name)s - %(message)s"
    )
    handler.setFormatter(formatter)

    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)

    # Prevent duplicate handlers when re-running notebook cells
    if not logger.handlers:
        logger.addHandler(handler)

    return logger

logger = get_logger("my_pipeline")
logger.info("Logger initialized")

2. Use the logger inside your ETL

logger.info("=== ETL START ===")

try:
    logger.info("Step 1: Read data")
    df = spark.read.csv("/mnt/raw/data.csv")

    logger.info("Step 2: Transform")
    df2 = df.filter("value > 0")

    logger.info("Step 3: Write output")
    df2.write.format("delta").mode("overwrite").save("/mnt/curated/data")

    logger.info("=== ETL COMPLETED ===")

except Exception as e:
    logger.error(f"ETL FAILED: {e}", exc_info=True)
    raise

Resulting log file (example)

The output file looks like this:

2025-11-21 15:12:01,233 - INFO - my_pipeline - === ETL START ===
2025-11-21 15:12:01,415 - INFO - my_pipeline - Step 1: Read data
2025-11-21 15:12:01,512 - INFO - my_pipeline - Step 2: Transform
2025-11-21 15:12:01,660 - INFO - my_pipeline - Step 3: Write output
2025-11-21 15:12:01,780 - INFO - my_pipeline - === ETL COMPLETED ===

If error:

2025-11-21 15:15:44,812 - ERROR - my_pipeline - ETL FAILED: File not found
Traceback (most recent call last):
  ...

Best Practices for Logging in Python

Logs let developers monitor, debug, and identify patterns that can inform product decisions. To get that value, make sure the logs you generate are informative, actionable, and scalable.

  1. Avoid the root logger
    Create a dedicated logger for each module or component in an application.
  2. Centralize your logging configuration
    Keep all logging configuration code in a single Python module.
  3. Use correct log levels
  4. Write meaningful log messages
  5. Prefer lazy %-style formatting over f-strings in log calls
  6. Log in a structured format (JSON) (see the sketch after this list)
  7. Include timestamps and ensure consistent formatting
  8. Keep sensitive information out of logs
  9. Rotate your log files
  10. Centralize your logs in one place
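
To illustrate item 6, here is a minimal sketch of structured (JSON) logging using only the standard library; the field names and the logger name are illustrative assumptions, not part of the original article.

import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line (field names are illustrative)."""
    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record, "%Y-%m-%d %H:%M:%S"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())

json_logger = logging.getLogger("structured_demo")
json_logger.setLevel(logging.INFO)
json_logger.addHandler(handler)

json_logger.info("Step 1: Read data")
# {"timestamp": "2025-11-21 15:05:01", "level": "INFO", "logger": "structured_demo", "message": "Step 1: Read data"}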

Conclusion

To achieve the best logging practices, it is important to use appropriate log levels and message formats, and implement proper error handling and exception logging. Additionally, you should consider implementing log rotation and retention policies to ensure that your logs are properly managed and archived.

Technical Interview Questions and Answers

The Job Interview is the single most critical step in the job hunting process. It is the definitive arena where all of your abilities must integrate. The interview itself is not a skill you possess; it is the moment you deploy your Integrated Skill Set, blending:

  1. Hard Skills (Technical Mastery): Demonstrating not just knowledge of advanced topics, but the depth of your expertise and how you previously applied it to solve complex, real-world problems.
  2. Soft Skills (Communication & Presence): Clearly articulating strategy, managing complexity under pressure, and exhibiting the leadership presence expected of senior-level and expert-level candidates.
  3. Contextual Skills (Business Acumen): Framing your solutions within the company’s business goals and culture, showing that you understand the strategic impact of your work.

This Integrated Skill represents your first real opportunity to sell your strategic value to the employer.

What is Azure Data Factory?

Azure Data Factory is a cloud-based data integration service used to create data-driven workflows for orchestrating and automating data movement and transformation across different data stores and compute services.

Key capabilities

  • Data ingestion
  • Data orchestration
  • Data transformation
  • Scheduling and monitoring

What are the core components of ADF?

The main components are:

Component | Purpose
Pipeline | Logical grouping of activities
Activity | Single task in a pipeline
Dataset | Data structure pointing to data
Linked Service | Connection to external resources
Trigger | Schedule or event that starts a pipeline
Integration Runtime | Compute infrastructure used to run activities

What is a Pipeline?

A pipeline is a logical grouping of activities that together perform a task such as data ingestion or transformation.

What is Integration Runtime (IR)?

Integration Runtime is the compute infrastructure used by ADF to perform data integration tasks.

Types:

  • Azure IR: Fully managed compute in Azure
  • Self-hosted IR: Runs on on-premises machines
  • Azure-SSIS IR: Used to run SSIS packages

What is a Linked Service?

A linked service defines the connection information needed for ADF to connect to external resources, for example:

  • Azure SQL Database
  • Data Lake
  • Databricks
  • On-premises databases

What is a Dataset?

A dataset represents the structure of data within a data store.

Difference between Pipeline and Data Flow

Feature | Pipeline | Data Flow
Purpose | Orchestration | Data transformation
Compute | Orchestration engine | Spark cluster
UI | Activity workflow | Visual transformation

How do you handle incremental loading?

Solution 1: Watermark column
Track a last_modified column, store MAX(last_modified) after each run, and on the next run load only rows WHERE last_modified > last_run_time (a PySpark sketch of this pattern follows below).

Solution 2: Change Data Capture (CDC)
Read changes from the source transaction log or use change tracking.

Solution 3: File-based incremental load
Partition folders by date and load only the new partitions.
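
For illustration, here is a minimal PySpark sketch of the watermark pattern (Solution 1); the table names, the watermark control table, and the column names are assumptions made for this example only.

from pyspark.sql import functions as F

# Hypothetical names used only for this sketch
SOURCE_TABLE = "source_db.orders"
TARGET_TABLE = "curated.orders"
WATERMARK_TABLE = "etl_control.watermarks"

# 1. Read the watermark stored by the previous run
last_run_time = (
    spark.table(WATERMARK_TABLE)
    .filter(F.col("table_name") == "orders")
    .agg(F.max("last_run_time"))
    .collect()[0][0]
)

# 2. Load only rows modified after the watermark
incremental_df = spark.table(SOURCE_TABLE).filter(F.col("last_modified") > F.lit(last_run_time))

# 3. Append the new rows; the new watermark is the max last_modified just loaded
incremental_df.write.format("delta").mode("append").saveAsTable(TARGET_TABLE)
new_watermark = incremental_df.agg(F.max("last_modified")).collect()[0][0]
# (Persisting new_watermark back to WATERMARK_TABLE is left out of this sketch.)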

How do you implement error handling?

  • Try-Catch pattern in the pipeline design
  • Failure (on-failure) dependency paths
  • Retry policies on activities

Activity
├ success → next step
└ failure → error handling pipeline

What are triggers in ADF?

Triggers are used to automatically start pipelines.

Trigger | Purpose
Schedule trigger | Time-based execution
Tumbling window | Time-based incremental processing
Event trigger | Storage events

What is a Tumbling Window Trigger?

A tumbling window trigger runs pipelines at fixed time intervals and processes data in discrete time windows.

How do you parameterize pipelines?

ADF supports parameters at:

  • Linked Service
  • Pipeline
  • Dataset

What are common ADF performance optimizations?

Parallel copy: increase the degree of copy parallelism on the Copy activity
Staging: use blob storage staging for large copies
Partitioning: split large tables into partitions for parallel reads

How do you monitor pipelines?

Monitoring options:

  • ADF Monitor UI
  • Azure Monitor
  • Log Analytics

Describe the data storage options available in Databricks.

Databricks offers several ways to store data. First, there’s the Databricks File System for storing and managing files. Then, there’s Delta Lake, an open-source storage layer that adds ACID transactions to Apache Spark, making it more reliable. Databricks also integrates with cloud storage services like AWS S3, Azure Blob Storage, and Google Cloud Storage. Plus, you can connect to a range of external databases, both relational and NoSQL, using JDBC.

What is Databricks Delta (Delta Lakehouse) and how does it enhance the capabilities of Azure Databricks?

Databricks Delta, now known as Delta Lake, is an open-source storage layer that brings ACID transactions to Apache Spark and big data workloads. It enhances Azure Databricks by providing features like:

  • ACID transactions for data reliability and consistency.
  • Scalable metadata handling for large tables.
  • Time travel for data versioning and historical data analysis (see the sketch after this list).
  • Schema enforcement and evolution.
  • Improved performance with data skipping and Z-ordering.
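
As a small illustration of the time-travel feature, here is a sketch; the table name, version number, and timestamp are assumptions for the example:

# Read the current version of a Delta table
current_df = spark.read.table("catalog.schema.orders")

# Time travel by version number
previous_df = spark.read.option("versionAsOf", 0).table("catalog.schema.orders")

# Time travel by timestamp
as_of_df = spark.read.option("timestampAsOf", "2025-01-01").table("catalog.schema.orders")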

Are there any alternative solutions similar to the Delta Lakehouse?

There are several alternative technologies that provide Delta Lake-style Lakehouse capabilities (ACID transactions, schema enforcement, time travel, scalable storage, and a SQL engine), such as:

  • Apache Iceberg
  • Apache Hudi
  • Snowflake (Iceberg Tables / Unistore)
  • BigQuery + BigLake
  • AWS Redshift + Lake Formation + Apache Iceberg
  • Microsoft Fabric (OneLake + Delta/DQ/DLTS)

What is a Delta Lake Table?

Delta Lake tables are tables that store data in the Delta format. Delta Lake is an extension to existing data lakes that adds a transaction log and ACID guarantees on top of the stored files.

What is Delta Live Table?

Delta Live Tables (DLT) is a framework in Azure Databricks for building reliable, automated, and scalable data pipelines using Delta Lake tables.

It simplifies ETL development by managing data dependencies, orchestration, quality checks, and monitoring automatically.
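
As a quick illustration, here is a minimal sketch of a DLT pipeline definition; the dataset names, source path, and expectation rule are illustrative assumptions, not part of the original article.

import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders loaded as-is (Bronze)")
def orders_bronze():
    return spark.read.format("json").load("/Volumes/my_catalog/raw/orders/")

@dlt.table(comment="Cleaned orders (Silver)")
@dlt.expect_or_drop("valid_amount", "amount > 0")   # rows failing the expectation are dropped
def orders_silver():
    return dlt.read("orders_bronze").filter(F.col("order_id").isNotNull())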

Explain how you can use Databricks to implement a Medallion Architecture (Bronze, Silver, Gold).

  1. Bronze Layer (Raw Data): Ingest raw data from various sources into the Bronze layer. This data is stored as-is, without any transformation.
  2. Silver Layer (Cleaned Data, also known as the Enriched layer): Clean and enrich the data from the Bronze layer. Apply transformations, data cleansing, and filtering to create more refined datasets.
  3. Gold Layer (Aggregated Data, also known as the Curated layer): Aggregate and further transform the data from the Silver layer to create high-level business tables or machine learning features. This layer is used for analytics and reporting.

What Is Z-Order (Databricks / Delta Lake)?

Z-Ordering in Databricks (specifically for Delta Lake tables) is an optimization technique designed to co-locate related information into the same set of data files on disk.

OPTIMIZE mytable
ZORDER BY (col1, col2);

What Is Liquid Clustering (Databricks)?

Liquid Clustering is Databricks’ next-generation data layout optimization for Delta Lake.
It replaces (and is far superior to) Z-Order.

-- At creation time:
CREATE TABLE sales
CLUSTER BY (customer_id, event_date)
AS SELECT * FROM source;

-- For existing tables:
ALTER TABLE sales
CLUSTER BY (customer_id, event_date);

-- Trigger the actual clustering:
OPTIMIZE sales;

-- Remove clustering:
ALTER TABLE sales
CLUSTER BY NONE;
What are DataFrame, RDD, and Dataset in Azure Databricks?

A DataFrame is a distributed, table-like data structure used by Databricks at runtime. The data is organized into rows and named columns (a two-dimensional layout) for easier access and query optimization.

RDD, Resilient Distributed Dataset, is a fault-tolerant, immutable collection of elements partitioned across the nodes of a cluster. RDDs are the basic building blocks that power all of Spark’s computations.

Dataset is an extension of the DataFrame API that provides compile-time type safety and object-oriented programming benefits.

What is caching and what are its types?

A cache is a temporary storage that holds frequently accessed data, aiming to reduce latency and enhance speed. Caching involves the process of storing data in cache memory.

What is Spark Cache / Persist (Memory Cache)

This is the standard Apache Spark feature. It stores data in the JVM Heap (Memory).

What is Databricks Disk Cache (Local Disk)

This is a Databricks-specific optimization. It automatically stores copies of remote files (Parquet/Delta) on the local NVMe SSDs of the worker nodes.
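
As a quick illustration (assuming a cluster where the setting is not already on), the disk cache can be enabled with a Spark configuration flag:

# Enable the Databricks disk (IO) cache for Parquet/Delta reads on this cluster
spark.conf.set("spark.databricks.io.cache.enabled", "true")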

Comparing .cache() and .persist()

both .cache() and .persist() are used to save intermediate results to avoid re-computing the entire lineage. The fundamental difference is that .cache() is a specific, pre-configured version of .persist().

.cache(): This is a shorthand for .persist(StorageLevel.MEMORY_ONLY). It tries to store your data in the JVM heap as deserialized objects.

.persist(): This is the more flexible version. It allows you to pass a StorageLevel to decide exactly how and where the data should be stored (RAM, Disk, or both).
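
A minimal sketch of the difference, using a throwaway DataFrame created just for this example:

from pyspark import StorageLevel

df = spark.range(1_000_000)   # hypothetical DataFrame used only for this sketch

# .cache() is the pre-configured shorthand
df.cache()
df.count()        # the first action materializes the cache
df.unpersist()    # release it before choosing a different storage level

# .persist() lets you pick the storage level explicitly, e.g. spill to disk when memory is full
df.persist(StorageLevel.MEMORY_AND_DISK)
df.count()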

How do you optimize Databricks?

When optimizing Databricks workloads, I focus on several layers. First, I optimize data layout using partitioning and Z-ordering on Delta tables. Second, I improve Spark performance by using broadcast joins, filtering data early, and caching intermediate results. Third, I tune cluster resources such as autoscaling and Photon engine. Finally, I run Delta maintenance commands like OPTIMIZE and VACUUM to manage small files and improve query performance.

Data Layout Optimization:
Partitioning:
CREATE TABLE sales
USING DELTA
PARTITIONED BY (date)
AS SELECT * FROM raw_sales;  -- source table name is illustrative

Z-Ordering:
OPTIMIZE sales
ZORDER BY (customer_id);

Liquid Clustering: see the CLUSTER BY examples in the Liquid Clustering section above.

What is Photon Engine?

Photon is a high-performance query engine built in C++ that accelerates SQL queries and data processing workloads in Azure Databricks. It improves performance by using vectorized processing and optimized execution for modern hardware.

How would you secure and manage secrets in Azure Databricks when connecting to external data sources?

  1. Use Azure Key Vault to store and manage secrets securely.
  2. Integrate Azure Key Vault with Azure Databricks using Databricks-backed or Azure Key Vault-backed secret scopes.
  3. Access secrets in notebooks and jobs using the dbutils.secrets API:
    dbutils.secrets.get(scope="<scope-name>", key="<key-name>")
  4. Ensure that secret access policies are strictly controlled and audited.

Scenario: You need to implement a data governance strategy in Azure Databricks. What steps would you take?

  • Data Classification: Classify data based on sensitivity and compliance requirements.
  • Access Controls: Implement role-based access control (RBAC) using Azure Active Directory.
  • Data Lineage: Use tools like Databricks Lineage to track data transformations and movement.
  • Audit Logs: Enable and monitor audit logs to track access and changes to data.
  • Compliance Policies: Implement Azure Policies and Azure Purview for data governance and compliance monitoring.

Scenario: You need to optimize a Spark job that has a large number of shuffle operations causing performance issues. What techniques would you use?

  1. Repartitioning: Repartition the data to balance the workload across nodes and reduce skew.
  2. Broadcast Joins: Use broadcast joins for small datasets to avoid shuffle operations (see the sketch after this list).
  3. Caching: Cache intermediate results to reduce the need for recomputation.
  4. Shuffle Partitions: Increase the number of shuffle partitions to distribute the workload more evenly.
  5. Skew Handling: Identify and handle skewed data by adding salt keys or custom partitioning strategies.
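
For item 2, a minimal PySpark sketch of a broadcast join; the table and column names are assumptions for the example:

from pyspark.sql import functions as F

# Hypothetical tables used only for this sketch
large_df = spark.table("sales.transactions")     # large fact table
small_df = spark.table("sales.country_codes")    # small dimension table

# Broadcasting the small side ships it to every executor,
# so the join runs map-side and avoids shuffling the large table.
joined_df = large_df.join(F.broadcast(small_df), on="country_code", how="left")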

Scenario: You need to migrate an on-premises Hadoop workload to Azure Databricks. Describe your migration strategy.

  • Assessment: Evaluate the existing Hadoop workloads and identify components to be migrated.
  • Data Transfer: Use Azure Data Factory or Azure Databricks to transfer data from on-premises HDFS to ADLS.
  • Code Migration: Convert Hadoop jobs (e.g., MapReduce, Hive) to Spark jobs and test them in Databricks.
  • Optimization: Optimize the Spark jobs for performance and cost-efficiency.
  • Validation: Validate the migrated workloads to ensure they produce the same results as on-premises.
  • Deployment: Deploy the migrated workloads to production and monitor their performance.

What are Synapse SQL Workspaces and how are they used?

Synapse SQL Workspaces are the environments within Azure Synapse Analytics where users can perform data querying and management tasks. They include:

  • Provisioned SQL Pools: Used for large-scale, high-performance data warehousing. Users can create and manage databases, tables, and indexes, and run complex queries.
  • On-Demand SQL Pools: Allow users to query data directly from Azure Data Lake without creating a dedicated data warehouse. It is ideal for interactive and exploratory queries.

What is the difference between On-Demand SQL Pool and Provisioned SQL Pool?

The primary difference between On-Demand SQL Pool and Provisioned SQL Pool lies in their usage and scalability:

  • On-Demand SQL Pool: Allows users to query data stored in Azure Data Lake without requiring a dedicated resource allocation. It is best for ad-hoc queries and does not incur costs when not in use. It scales automatically based on query demand.
  • Provisioned SQL Pool: Provides a dedicated set of resources for running data warehousing workloads. It is optimized for performance and can handle large-scale data operations. Costs are incurred based on the provisioned resources and are suitable for predictable, high-throughput workloads.

How does Azure Synapse Analytics handle data integration?

Azure Synapse Analytics handles data integration through Synapse Pipelines, which is a data integration service built on Azure Data Factory. It enables users to:

  • Ingest Data: Extract data from various sources, including relational databases, non-relational data stores, and cloud-based services.
  • Transform Data: Use data flows and data wrangling to clean and transform data.
  • Orchestrate Workflows: Schedule and manage data workflows, including ETL (Extract, Transform, Load) processes.
  • Data Integration Runtime: Utilizes Azure Integration Runtime for data movement and transformation tasks.

Can you explain the concept of “Dedicated SQL Pool” in Azure Synapse Analytics?

Dedicated SQL Pool is a provisioned, high-performance relational database.

  • Data Storage: Data must be ingested and stored internally in a proprietary columnar format. It follows a Schema-on-Write approach.
  • Architecture: Uses MPP (Massively Parallel Processing) architecture. Data is sharded into 60 distributions and processed by multiple compute nodes in parallel.
  • Cost: Billed by the hour based on the provisioned DWUs. You can pause it when not in use to save costs.
  • Best For: Stable production reporting, TB/PB scale enterprise data warehousing, and high-concurrency queries needing sub-second response.

What is a Serverless SQL Pool?

A Serverless SQL Pool is an on-demand, compute-only query engine with no internal storage.

  • Data Location: It does not store data. Data remains in the Data Lake (ADLS Gen2) in open formats like Parquet, CSV, or JSON.
  • Mechanism: Uses the OPENROWSET function to query lake files directly. It follows a Schema-on-Read approach.
  • Cost: Billed per query based on data scanned (approx. $5 USD per TB). Cost is $0 if no queries are run.
  • Best For: Rapid data discovery, building a Logical Data Warehouse, and ad-hoc data validation.

What is a Synapse Spark Pool, and when would you use it?

It is a managed Apache Spark 3 instance easily created and configured within Azure.

  • Managed Cluster: You don’t manage servers; you just select the node size and the number of nodes.
  • Auto-Scale & Auto-Pause: It automatically scales nodes based on workload and pauses after 5 minutes of inactivity to save costs.
  • Language Support: Supports PySpark (Python), Spark SQL, Scala, and .NET.

Can you talk about database locking?

Database locking is the mechanism a database uses to control concurrent access to data so that transactions stay consistent, isolated, and safe.

Locking prevents:

  • Dirty reads
  • Lost updates
  • Write conflicts
  • Race conditions

Types of Locks:

1. Shared Lock (S)

  • Used when reading data
  • Multiple readers allowed
  • No writers allowed

2. Exclusive Lock (X)

  • Used when updating or inserting
  • No one else can read or write the locked item

3. Update Lock (U) (SQL Server specific)

  • Prevents deadlocks when upgrading from Shared → Exclusive
  • Only one Update lock allowed

4. Intention Locks (IS, IX, SIX)

Used at table or page level to signal a lower-level lock is coming.

5. Row / Page / Table Locks

Based on granularity:

  • Row-level: Most common, best concurrency
  • Page-level: Several rows together
  • Table-level: When scanning or modifying large portions

DB engines automatically escalate:

Row → Page → Table
when there are too many small locks.

Can you talk on Deadlock?

A deadlock happens when:

  • Transaction A holds Lock 1 and wants Lock 2
  • Transaction B holds Lock 2 and wants Lock 1

Both wait on each other → neither can move → database detects → kills one transaction (“deadlock victim”).

Deadlocks usually involve two writers, but can also involve readers depending on the isolation level.

How to Troubleshoot Deadlocks?

A. In SQL Server: enable deadlock graph capture

If needed, set the session-level deadlock priority:
SET DEADLOCK_PRIORITY NORMAL;

Enable the deadlock trace flags:
DBCC TRACEON (1222, -1);
DBCC TRACEON (1204, -1);

B. Interpret the Deadlock Graph

You will see:

  • Processes (T1, T2…)
  • Resources (keys, pages, objects)
  • Types of locks (X, S, U, IX, etc.)
  • Which statement caused the deadlock

Look for:

  • Two queries touching the same index/rows in different order
  • A scanning query locking too many rows
  • Missed indexes
  • Query patterns that cause U → X lock upgrades

C. Identify:
  • The exact tables and indexes involved
  • The order of locking
  • The hotspot row or range
  • Rows with heavy update/contention

This will tell you what to fix.

How to Prevent Deadlocks (Practical + Senior-Level)

  • Always update rows in the same order
  • Keep transactions short
  • Use appropriate indexes
  • Use the correct isolation level
  • Avoid long reads before writes

Can you discuss database normalization and denormalization?

Normalization is the process of structuring a relational database to minimize data redundancy (duplicate data) and improve data integrity.

Normal Form | Rule Summary | Problem Solved
1NF (First) | Eliminate repeating groups; ensure all column values are atomic (indivisible). | Multi-valued columns, non-unique rows.
2NF (Second) | Be in 1NF, AND all non-key attributes must depend on the entire primary key. | Partial dependency (non-key attribute depends on part of a composite key).
3NF (Third) | Be in 2NF, AND eliminate transitive dependency (non-key attribute depends on another non-key attribute). | Redundancy due to indirect dependencies.
BCNF (Boyce-Codd) | A stricter version of 3NF; every determinant (column that determines another column) must be a candidate key. | Edge cases involving multiple candidate keys.

Denormalization is the process of intentionally introducing redundancy into a previously normalized database to improve read performance and simplify complex queries.

  • Adding Redundant Columns: Copying a value from one table to another (e.g., copying the CustomerName into the Orders table to avoid joining to the Customer table every time an order is viewed).
  • Creating Aggregate/Summary Tables: Storing pre-calculated totals, averages, or counts to avoid running expensive aggregate functions at query time (e.g., a table that stores the daily sales total).
  • Merging Tables: Combining two tables that are frequently joined into a single, wider table.

Markdown in a Databricks Notebook

Databricks Notebook Markdown is a special version of the Markdown language built directly into Databricks notebooks. It allows you to add richly formatted text, images, links, and even mathematical equations to your notebooks, turning them from just code scripts into interactive documents and reports.

Think of it as a way to provide context, explanation, and structure to your code cells, making your analysis reproducible and understandable by others (and your future self!).

Why is it Important?

Using Markdown cells effectively transforms your workflow:

  1. Documentation: Explain the purpose of the analysis, the meaning of a complex transformation, or the interpretation of a result.
  2. Structure: Create sections, headings, and tables of contents to organize long notebooks.
  3. Clarity: Add lists, tables, and links to data sources or external references.
  4. Communication: Share findings with non-technical stakeholders by narrating the story of your data directly alongside the code that generated it.

Key Features and Syntax with Examples

1. Headers (for Structure)

Use # to create different levels of headings.

%md
# Title (H1)
## Section 1 (H2)
### Subsection 1.1 (H3)
#### This is a H4 Header

Title (H1)

Section 1 (H2)

Subsection 1.1 (H3)

This is a H4 Header

2. Emphasis (Bold and Italic)

%md
*This text will be italic*
_This will also be italic_

**This text will be bold**
__This will also be bold__

**_You can combine them_**

This text will be italic This will also be italic

This text will be bold This will also be bold

You can combine them

3. Lists (Ordered and Unordered)

Unordered List:
%md
- Item 1
- Item 2
  - Sub-item 2.1
  - Sub-item 2.2
  • Item 1
  • Item 2
    • Sub-item 2.1
    • Sub-item 2.2
Ordered List:
%md
1. First item
2. Second item
   1. Sub-item 2.1
3. Third item
  1. First item
  2. Second item
    1. Sub-item 2.1
  3. Third item

4. Links and Images

link
%md
[Databricks Website](https://databricks.com)

Mainri Inc. website

Image
%md
![mainri Inc. Logo](https://mainri.ca/wp-content/uploads/2024/08/Logo-15-trans.png)
mainri Inc. Logo

5. Tables

%md
| Column 1 Header | Column 2 Header | Column 3 Header |
|-----------------|-----------------|-----------------|
| Row 1, Col 1    | Row 1, Col 2    | Row 1, Col 3    |
| Row 2, Col 1    | Row 2, Col 2    | Row 2, Col 3    |
| *Italic Cell*   | **Bold Cell**   | Normal Cell     |
Column 1 Header | Column 2 Header | Column 3 Header
Row 1, Col 1 | Row 1, Col 2 | Row 1, Col 3
Row 2, Col 1 | Row 2, Col 2 | Row 2, Col 3
Italic Cell | Bold Cell | Normal Cell

6. Code Syntax Highlighting (A Powerful Feature)

%md
```python
df = spark.read.table("samples.nyctaxi.trips")
display(df)
```

```sql
SELECT * FROM samples.nyctaxi.trips LIMIT 10;
```

```scala
val df = spark.table("samples.nyctaxi.trips")
display(df)
```

7. Mathematical Equations (LaTeX)

%md


$$
f(x) = \sum_{i=0}^{n} \frac{x^i}{i!}
$$

Summary

Feature | Purpose | Example Syntax
Headers | Create structure and sections | ## My Section
Emphasis | Add bold/italic emphasis | **bold**, *italic*
Lists | Create bulleted or numbered lists | - Item or 1. Item
Tables | Organize data in a grid | | Header |
Links/Images | Add references and visuals | [Text](URL)
Code Blocks | Display syntax-highlighted code | ```python ... ```
Math (LaTeX) | Render mathematical formulas | $$E = mc^2$$

In essence, Databricks Notebook Markdown is the narrative glue that binds your code, data, and insights together, making your notebooks powerful tools for both analysis and communication.

Comparison of Microsoft Fabric, Azure Synapse Analytics (ASA), Azure Data Factory (ADF), and Azure Databricks (ADB)

Today, data engineers have a wide array of tools and platforms at their disposal for data engineering projects. Popular choices include Microsoft Fabric, Azure Synapse Analytics (ASA), Azure Data Factory (ADF), and Azure Databricks (ADB). It’s common to wonder which one is the best fit for your specific needs.

Side by Side comparison

Here's a concise comparison of Microsoft Fabric, Azure Synapse Analytics, Azure Data Factory (ADF), and Azure Databricks (ADB) based on their key features, use cases, and differences:

Feature | Microsoft Fabric | Azure Synapse Analytics | Azure Data Factory (ADF) | Azure Databricks (ADB)
Type | Unified SaaS analytics platform | Integrated analytics service | Cloud ETL/ELT service | Apache Spark-based analytics platform
Primary Use Case | End-to-end analytics (Data Engineering, Warehousing, BI, Real-Time) | Large-scale data warehousing & analytics | Data integration & orchestration | Big Data processing, ML, AI, advanced analytics
Data Integration | Built-in Data Factory capabilities | Synapse Pipelines (similar to ADF) | Hybrid ETL/ELT pipelines | Limited (relies on Delta Lake, ADF, or custom code)
Data Warehousing | OneLake (Delta-Parquet based) | Dedicated SQL pools (MPP) | Not applicable | Can integrate with Synapse/Delta Lake
Big Data Processing | Spark-based (Fabric Spark) | Spark pools (serverless/dedicated) | No (orchestration only) | Optimized Spark clusters (Delta Lake)
Real-Time Analytics | Yes (Real-Time Hub) | Yes (Synapse Real-Time Analytics) | No | Yes (Structured Streaming)
Business Intelligence | Power BI (deeply integrated) | Power BI integration | No | Limited (via dashboards or Power BI)
Machine Learning | Basic ML integration | ML in Spark pools | No | Full ML/DL support (MLflow, AutoML)
Pricing Model | Capacity-based (Fabric SKUs) | Pay-as-you-go (serverless) or dedicated | Activity-based | DBU-based (compute + storage)
Open Source Support | Limited (Delta-Parquet) | Limited (Spark, SQL) | No | Full (Spark, Python, R, ML frameworks)
Governance | Centralized (OneLake, Purview) | Workspace-level | Limited | Workspace-level (Unity Catalog)

Key Differences

  • Fabric vs Synapse: Fabric is a fully managed SaaS (simpler, less configurable), while Synapse offers more control (dedicated SQL pools, Spark clusters).
  • ADF vs Synapse Pipelines: Synapse Pipelines = ADF inside Synapse (same engine).
  • ADB vs Fabric Spark: ADB has better ML & open-source support, while Fabric Spark is simpler & integrated with Power BI.

When to Use Which

  1. Microsoft Fabric
    • Best for end-to-end analytics in a unified SaaS platform (no infrastructure management).
    • Combines data engineering, warehousing, real-time, and BI in one place.
    • Good for Power BI-centric organizations.
  2. Azure Synapse Analytics
    • Best for large-scale data warehousing with SQL & Spark processing.
    • Hybrid of ETL (Synapse Pipelines), SQL Pools, and Spark analytics.
    • More flexible than Fabric (supports open formats like Parquet, CSV).
  3. Azure Data Factory (ADF)
    • Best for orchestrating ETL/ELT workflows (no compute/storage of its own).
    • Used for data movement, transformations, and scheduling.
    • Often paired with Synapse or Databricks.
  4. Azure Databricks (ADB)
    • Best for advanced analytics, AI/ML, and big data processing with Spark.
    • Optimized for Delta Lake (ACID transactions on data lakes).
    • Preferred for data science teams needing MLflow, AutoML, etc.

Which One Should You Choose?

  • For a fully integrated Microsoft-centric solution → Fabric
  • For large-scale data warehousing + analytics → Synapse
  • For ETL/data movement → ADF (or Synapse Pipelines)
  • For advanced Spark-based analytics & ML → Databricks

Data Quality Framework (DQX)

Data quality is more critical than ever in today’s data-driven world. Organizations are generating and collecting vast amounts of data, and the ability to trust and leverage this information is paramount for success. Poor data quality can have severe negative impacts, ranging from flawed decision-making to regulatory non-compliance and significant financial losses.

Key Dimensions of Data Quality (DAMA-DMBOK or ISO 8000 Standards)

A robust DQX evaluates data across multiple dimensions (a small PySpark illustration follows this list):

  • Accuracy: Data correctly represents real-world values.
  • Completeness: No missing or null values where expected.
  • Consistency: Data is uniform across systems and over time.
  • Timeliness: Data is up-to-date and available when needed.
  • Validity: Data conforms to defined business rules (e.g., format, range).
  • Uniqueness: No unintended duplicates.
  • Integrity: Relationships between datasets are maintained.
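
As a simple illustration of measuring two of these dimensions (completeness and uniqueness) in PySpark, assuming a hypothetical customers table:

from pyspark.sql import functions as F

df = spark.table("catalog.schema.customers")   # hypothetical input table
total_rows = df.count()

# Completeness: share of non-null values per column
completeness = df.select(
    *[(F.count(c) / F.lit(total_rows)).alias(f"{c}_completeness") for c in df.columns]
)
completeness.show()

# Uniqueness: number of customer_id values that appear more than once
duplicates = df.groupBy("customer_id").count().filter(F.col("count") > 1)
print("Duplicate customer_id values:", duplicates.count())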

What is Data Quality Framework (DQX)

Data Quality Framework (DQX) is an open-source framework from Databricks Labs designed to simplify and automate data quality checks for PySpark workloads on both batch and streaming data.

DQX provides a structured approach to assessing, monitoring, and improving the quality of data within an organization. It helps you define, validate, and enforce data quality rules across your data pipelines, ensuring that data is accurate, consistent, complete, reliable, and fit for its intended use, so it can be used confidently for analytics, reporting, compliance, and decision-making.

This article explores how the DQX framework helps improve data reliability, reduce data errors, and enforce compliance with data quality standards. We will go step by step through setting up and using the DQX framework in a Databricks notebook, with code snippets that implement data quality checks.

DQX usage in the Lakehouse Architecture

In the Lakehouse architecture, new data validation should happen as data enters the curated layer, to ensure bad data is not propagated to the subsequent layers. With DQX, you can implement the Dead-Letter pattern to quarantine invalid data and re-ingest it after curation to ensure data quality constraints are met. Data quality can be monitored in real time between layers, and the quarantine process can be automated.

Credits: https://databrickslabs.github.io/dqx/docs/motivation/

Components of a Data Quality Framework

A DQX typically includes:

A. Data Quality Assessment

  • Profiling: Analyze data to identify anomalies (e.g., outliers, nulls).
  • Metrics & KPIs: Define measurable standards (e.g., % completeness, error rates).
  • Benchmarking: Compare against industry standards or past performance.

B. Data Quality Rules & Standards

  • Define validation rules (e.g., “Email must follow RFC 5322 format”).
  • Implement checks at the point of entry (e.g., form validation) and during processing.

C. Governance & Roles

  • Assign data stewards responsible for quality.
  • Establish accountability (e.g., who fixes issues? Who approves changes?).

D. Monitoring & Improvement

  • Automated checks: Use tools like Great Expectations, Talend, or custom scripts.
  • Root Cause Analysis (RCA): Identify why errors occur (e.g., system glitches, human input).
  • Continuous Improvement: Iterative fixes (e.g., process changes, user training).

E. Tools & Technology

  • Data Quality Tools: Informatica DQ, IBM InfoSphere, Ataccama, or open-source (Apache Griffin).
  • Metadata Management: Track data lineage and quality scores.
  • AI/ML: Anomaly detection (e.g., identifying drift in datasets).

F. Culture & Training

  • Promote data literacy across teams.
  • Encourage reporting of data issues without blame.

Using Databricks DQX Framework in a Notebook

Step by Step Implementing DQX

Step 1: Install the DQX Library

Install it in a notebook using %pip (it can also be installed as a tool via the Databricks Labs CLI):

%pip install databricks-labs-dqx

# Restart the kernel after the package is installed in the notebook:
# in a separate cell run:
dbutils.library.restartPython()

Step 2: Initialize the Environment and read input data

Set up the necessary environment for running the Databricks DQX framework, including:

Importing the key components from the Databricks DQX library.
  • DQProfiler: Used for profiling the input data to understand its structure and generate summary statistics.
  • DQGenerator: Generates data quality rules based on the profiles.
  • DQEngine: Executes the defined data quality checks.
  • WorkspaceClient: Handles communication with the Databricks workspace.

Import the libraries:
from databricks.labs.dqx.profiler.profiler import DQProfiler
from databricks.labs.dqx.profiler.generator import DQGenerator
from databricks.labs.dqx.engine import DQEngine
from databricks.sdk import WorkspaceClient

Load the input data that you want to evaluate:
# Read the input data from a Delta table
input_df = spark.read.table("catalog.schema.table")

Establish a connection to the Databricks workspace:
# Initialize the WorkspaceClient to interact with the Databricks workspace
ws = WorkspaceClient()

# Initialize a DQProfiler instance with the workspace client
profiler = DQProfiler(ws)

Profile the data for quality:
# Profile the input DataFrame to get summary statistics and data profiles

summary_stats, profiles = profiler.profile(input_df)

Generate DQX quality rules/checks:
# generate DQX quality rules/checks
generator = DQGenerator(ws)
checks = generator.generate_dq_rules(profiles)  # with default level "error"

dq_engine = DQEngine(ws)

Save the checks in an arbitrary workspace location:
# save checks in arbitrary workspace location
dq_engine.save_checks_in_workspace_file(checks, workspace_path="/Shared/App1/checks.yml")

Generate DLT expectations:

# generate DLT expectations
# Note: DQDltGenerator must also be imported from the databricks-labs-dqx package
dlt_generator = DQDltGenerator(ws)

dlt_expectations = dlt_generator.generate_dlt_rules(profiles, language="SQL")
print(dlt_expectations)

dlt_expectations = dlt_generator.generate_dlt_rules(profiles, language="Python")
print(dlt_expectations)

dlt_expectations = dlt_generator.generate_dlt_rules(profiles, language="Python_Dict")
print(dlt_expectations)

The profiler samples 30% of the data (sample ratio = 0.3) and limits the input to 1000 records by default.

Profiling a Table

Tables can be loaded and profiled using profile_table.

from databricks.labs.dqx.profiler.profiler import DQProfiler
from databricks.sdk import WorkspaceClient

# Profile a single table directly
ws = WorkspaceClient()
profiler = DQProfiler(ws)

# Profile a specific table with custom options
summary_stats, profiles = profiler.profile_table(
    table="catalog1.schema1.table1",
    columns=["col1", "col2", "col3"],  # specify columns to profile
    options={
        "sample_fraction": 0.1,  # sample 10% of data
        "limit": 500,            # limit to 500 records
        "remove_outliers": True, # enable outlier detection
        "num_sigmas": 2.5       # use 2.5 standard deviations for outliers
    }
)

print("Summary Statistics:", summary_stats)
print("Generated Profiles:", profiles)

Profiling Multiple Tables

The profiler can discover and profile multiple tables in Unity Catalog. Tables can be passed explicitly as a list or be included/excluded using regex patterns.

from databricks.labs.dqx.profiler.profiler import DQProfiler
from databricks.sdk import WorkspaceClient

ws = WorkspaceClient()
profiler = DQProfiler(ws)

# Profile several tables by name:
results = profiler.profile_tables(
    tables=["main.data.table_001", "main.data.table_002"]
)

# Process results for each table
for summary_stats, profiles in results:
    print(f"Table statistics: {summary_stats}")
    print(f"Generated profiles: {profiles}")

# Include tables matching specific patterns
results = profiler.profile_tables(
    patterns=["$main.*", "$data.*"]
)

# Process results for each table
for summary_stats, profiles in results:
    print(f"Table statistics: {summary_stats}")
    print(f"Generated profiles: {profiles}")

# Exclude tables matching specific patterns
results = profiler.profile_tables(
    patterns=["$sys.*", ".*_tmp"],
    exclude_matched=True
)

# Process results for each table
for summary_stats, profiles in results:
    print(f"Table statistics: {summary_stats}")
    print(f"Generated profiles: {profiles}")

Profiling Options

The profiler supports extensive configuration options to customize the profiling behavior.

from databricks.labs.dqx.profiler.profiler import DQProfiler
from databricks.sdk import WorkspaceClient

# Custom profiling options
custom_options = {
    # Sampling options
    "sample_fraction": 0.2,   # Sample 20% of the data
    "sample_seed": 42,        # Seed for reproducible sampling
    "limit": 2000,            # Limit to 2000 records after sampling

    # Outlier detection options
    "remove_outliers": True,               # Enable outlier detection for min/max rules
    "outlier_columns": ["price", "age"],   # Only detect outliers in specific columns
    "num_sigmas": 2.5,                     # Use 2.5 standard deviations for outlier detection

    # Null value handling
    "max_null_ratio": 0.05,   # Generate is_not_null rule if <5% nulls

    # String handling
    "trim_strings": True,      # Trim whitespace from strings before analysis
    "max_empty_ratio": 0.02,   # Generate is_not_null_or_empty if <2% empty strings

    # Distinct value analysis
    "distinct_ratio": 0.01,   # Generate is_in rule if <1% distinct values
    "max_in_count": 20,       # Maximum items in is_in rule list

    # Value rounding
    "round": True,   # Round min/max values for cleaner rules
}

ws = WorkspaceClient()
profiler = DQProfiler(ws)

# Apply custom options to profiling
summary_stats, profiles = profiler.profile(input_df, options=custom_options)

# Apply custom options when profiling tables
tables = [
    "dqx.demo.test_table_001",
    "dqx.demo.test_table_002",
    "dqx.demo.test_table_003",  # profiled with default options
]
table_options = {
    "dqx.demo.test_table_001": {"limit": 2000},
    "dqx.demo.test_table_002": {"limit": 5000},
}
summary_stats, profiles = profiler.profile_tables(tables=tables, options=table_options)

Understanding output

Assuming the sample data is:

customer_id | customer_name | customer_email | is_active | start_date | end_date
1 | Alice | alice@mainri.ca | 1 | 2025-01-24 | null
2 | Bob | bob_new@mainri.ca | 1 | 2025-01-24 | null
3 | Charlie | invalid_email | 1 | 2025-01-24 | null
3 | Charlie | invalid_email | 0 | 2025-01-24 | 2025-01-24

# Initialize the WorkspaceClient to interact with the Databricks workspace
ws = WorkspaceClient()

# Initialize a DQProfiler instance with the workspace client
profiler = DQProfiler(ws)

# Read the input data from a Delta table
input_df = spark.read.table("catalog.schema.table")

# Display a sample of the input data
input_df.display()

# Profile the input DataFrame to get summary statistics and data profiles
summary_stats, profiles = profiler.profile(input_df)

Upon checking the summary and profile of my input data generated, below are the results generated by DQX

print(summary_stats)

Summary of input data on all the columns in input dataframe

# Summary of input data on all the columns in input dataframe
{
  "customer_id": {
    "count": 4,
    "mean": 2.25,
    "stddev": 0.9574271077563381,
    "min": 1,
    "25%": 1,
    "50%": 2,
    "75%": 3,
    "max": 3,
    "count_non_null": 4,
    "count_null": 0
  },
  "customer_name": {
    "count": 4,
    "mean": null,
    "stddev": null,
    "min": "Alice",
    "25%": null,
    "50%": null,
    "75%": null,
    "max": "Charlie",
    "count_non_null": 4,
    "count_null": 0
  },
  "customer_email": {
    "count": 4,
    "mean": null,
    "stddev": null,
    "min": "alice@example.com",
    "25%": null,
    "50%": null,
    "75%": null,
    "max": "charlie@example.com",
    "count_non_null": 4,
    "count_null": 0
  },
  "is_active": {
    "count": 4,
    "mean": 0.75,
    "stddev": 0.5,
    "min": 0,
    "25%": 0,
    "50%": 1,
    "75%": 1,
    "max": 1,
    "count_non_null": 4,
    "count_null": 0
  },
  "start_date": {
    "count": 4,
    "count_non_null": 4,
    "count_null": 0,
    "min": "2025-01-24",
    "max": "2025-01-24",
    "mean": "2025-01-24"
  },
  "end_date": {
    "count": 4,
    "count_non_null": 1,
    "count_null": 3,
    "min": 1737676800,
    "max": 1737676800
  }
}

print(profiles)
# Default Data profile generated based on input data
DQProfile(
  name='is_not_null',
  column='customer_id',
  description=None,
  parameters=None
),
DQProfile(
  name='min_max',
  column='customer_id',
  description='Real min/max values were used',
  parameters={
    'min': 1,
    'max': 3
  }
),
DQProfile(
  name='is_not_null',
  column='customer_name',
  description=None,
  parameters=None
),
DQProfile(
  name='is_not_null',
  column='customer_email',
  description=None,
  parameters=None
),
DQProfile(
  name='is_not_null',
  column='is_active',
  description=None,
  parameters=None
),
DQProfile(
  name='is_not_null',
  column='start_date',
  description=None,
  parameters=None
)

Step 3: Understanding checks applied at data

With the below snippet, we can understand the default checks applied at input data, which generated the data profile as mentioned in previous step.

# generate DQX quality rules/checks
generator = DQGenerator(ws)
checks = generator.generate_dq_rules(profiles)

print(checks)
# Checks applied on input data
[{
  'check': {
    'function': 'is_not_null',
    'arguments': {
      'col_name': 'customer_id'
    }
  },
  'name': 'customer_id_is_null',
  'criticality': 'error'
},
{
  'check': {
    'function': 'is_in_range',
    'arguments': {
      'col_name': 'customer_id',
      'min_limit': 1,
      'max_limit': 3
    }
  },
  'name': 'customer_id_isnt_in_range',
  'criticality': 'error'
},
{
  'check': {
    'function': 'is_not_null',
    'arguments': {
      'col_name': 'customer_name'
    }
  },
  'name': 'customer_name_is_null',
  'criticality': 'error'
},
{
  'check': {
    'function': 'is_not_null',
    'arguments': {
      'col_name': 'customer_email'
    }
  },
  'name': 'customer_email_is_null',
  'criticality': 'error'
},
{
  'check': {
    'function': 'is_not_null',
    'arguments': {
      'col_name': 'is_active'
    }
  },
  'name': 'is_active_is_null',
  'criticality': 'error'
},
{
  'check': {
    'function': 'is_not_null',
    'arguments': {
      'col_name': 'start_date'
    }
  },
  'name': 'start_date_is_null',
  'criticality': 'error'
}]

Step 4: Define custom Data Quality Expectations

In addition to the automatically generated checks, you can define your own custom rules to enforce business-specific data quality requirements. This is particularly useful when your organization has unique validation criteria that aren’t covered by the default checks. By using a configuration-driven approach (e.g., YAML), you can easily maintain and update these rules without modifying your pipeline code.

For example, you might want to enforce that:

  • Customer IDs must not be null or empty.
  • Email addresses must match a specific domain format (e.g., @example.com).

# Define custom data quality expectations.
import yaml

checks_custom = yaml.safe_load("""
- check:
    arguments:
        col_name: customer_id
    function: is_not_null_and_not_empty
    criticality: error
    name: customer_id_is_null
- check:
    arguments:
        col_name: customer_email
        regex: '^[A-Za-z0-9._%+-]+@example\.com$'
    function: regex_match
    criticality: error
    name: customer_emaild_is_not_valid""")
# Validate the custom data quality checks
status = DQEngine.validate_checks(checks_custom)

# The custom checks YAML can also be passed from a workspace file path, as shown below:
status = DQEngine.validate_checks("path to yaml file in workspace")

# Assert that there are no errors in the validation status
assert not status.has_errors

Step 5: Applying the custom rules and generating results

Once your custom data quality rules have been defined and validated, the next step is to apply them to your input data. The DQEngine facilitates this by splitting your dataset into two categories:

  • Silver Data: Records that meet all quality expectations.
  • Quarantined Data: Records that fail one or more quality checks.

This approach allows you to separate valid and invalid data for further inspection and remediation. The valid records can proceed to downstream processes, while the quarantined records can be analyzed to determine the cause of failure (e.g., missing values, incorrect formats).

Here’s how you can apply the rules and generate the results:

# Create a DQEngine instance with the WorkspaceClient
dq_engine = DQEngine(WorkspaceClient())

# Apply quality checks and split the DataFrame into silver and quarantine DataFrames
silver_df, quarantine_df = dq_engine.apply_checks_by_metadata_and_split(input_df_1, checks_custom)
Quarantined data (records not matching the rules)
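
As a possible next step, both outputs can be persisted so that downstream consumers only read validated data while quarantined rows remain available for remediation. This is a minimal sketch; the catalog, schema, and table names are illustrative and not part of the original pipeline.

# Persist validated records for downstream consumption (illustrative table names)
silver_df.write.format("delta").mode("append").saveAsTable("main.silver.customers")

# Keep failed records for inspection and remediation
quarantine_df.write.format("delta").mode("append").saveAsTable("main.quarantine.customers")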

Summary

In essence, data quality is no longer just an IT concern; it’s a fundamental business imperative. In today’s complex and competitive landscape, the success of an organization hinges on its ability to leverage high-quality, trusted data for every strategic and operational decision.

A Data Quality Framework (DQX) helps organizations:

  • Establish clear quality standards
  • Implement automated checks
  • Track and resolve issues
  • Ensure trust in data

https://databrickslabs.github.io/dqx/docs/motivation

https://medium.com/@nivethanvenkat28/revolutionizing-data-quality-checks-using-databricks-dqx-f2a49d83c3c6

Unity Catalog – Table Type Comparison

In Azure Databricks Unity Catalog, you can create different types of tables depending on your storage and management needs. The main table types include Managed Tables, External Tables, Delta Tables, Foreign Tables, Streaming Tables, Delta Live Tables (which supersede the deprecated Live Tables), Feature Tables, and Hive Tables (legacy). Each table type is explained in detail below, and a side-by-side comparison is provided for clarity.

Side-by-Side Comparison Table

Managed Tables
  Storage: Databricks-managed | Location: internal Delta Lake | Ownership: Databricks | Deletion impact: deletes data & metadata | Format: Delta Lake | Use case: full lifecycle management

External Tables
  Storage: external storage | Location: specified external path | Ownership: user | Deletion impact: deletes only metadata | Format: Parquet, CSV, JSON, Delta | Use case: sharing with external tools

Delta Tables
  Storage: managed or external | Location: internal/external Delta Lake | Ownership: Databricks/user | Deletion impact: depends (managed: deletes data; external: keeps data) | Format: Delta Lake | Use case: advanced data versioning & ACID compliance

Foreign Tables
  Storage: external database | Location: external metastore (Snowflake, BigQuery) | Ownership: external provider | Deletion impact: deletes only the metadata reference | Format: Snowflake, BigQuery, Redshift, etc. | Use case: querying external DBs

Streaming Tables
  Storage: Databricks-managed | Location: internal Delta Lake | Ownership: Databricks | Deletion impact: deletes data & metadata | Format: Delta Lake | Use case: continuous data updates

Delta Live Tables (DLT)
  Storage: Databricks-managed | Location: internal Delta Lake | Ownership: Databricks | Deletion impact: deletes data & metadata | Format: Delta Lake | Use case: ETL pipelines

Feature Tables
  Storage: managed or external | Location: internal/external Delta Lake | Ownership: Databricks/user | Deletion impact: deletes metadata (but not feature versions) | Format: Delta Lake | Use case: ML feature storage

Hive Tables (Legacy)
  Storage: managed or external | Location: internal/external storage | Ownership: Databricks (legacy Hive metastore) | Deletion impact: similar to managed/external | Format: Parquet, ORC, Avro, CSV | Use case: legacy storage (pre-Unity Catalog)

Table types in detail

1. Managed Tables

Managed tables are tables where both the metadata and the data are managed by Unity Catalog. When you create a managed table, the data is stored in the default storage location associated with the catalog or schema.

Data Storage and location:

Unity Catalog manages both the metadata and the underlying data in a Databricks-managed location.

The data is stored in the Unity Catalog-managed storage location, typically internal Delta Lake storage such as DBFS or Azure Data Lake Storage.

Use Case:

Ideal for Databricks-centric workflows where you want Databricks to handle storage and metadata.

Pros & Cons:

Pros: Easy to manage, no need to worry about storage locations.

Cons: Data is tied to Databricks, making it harder to share externally.

Example:

CREATE TABLE managed_table (
    id INT,
    name STRING
);

INSERT INTO managed_table VALUES (1, 'Alice');

SELECT * FROM managed_table;

2. External Tables

External tables store metadata in Unity Catalog but keep data in an external storage location (e.g., Azure Blob Storage, ADLS, S3).

Data storage and Location:

The metadata is managed by Unity Catalog, but the actual data remains in external storage (like Azure Data Lake Storage Gen2 or an S3 bucket).

You must specify an explicit storage location (e.g., Azure Blob Storage, ADLS Gen2, S3).

Use Case:

Ideal for cross-platform data sharing or when data is managed outside Databricks.

Pros and Cons

Pros: Data is decoupled from Databricks, making it easier to share.

Cons: Requires manual management of external storage and permissions.

Preparing to create an external table

Before you can create an external table, you must create a storage credential that allows Unity Catalog to read from and write to the path on your cloud tenant, and an external location that references it.

Requirements
  • In Azure, create a service principal and grant it the Storage Blob Data Contributor role on your storage container.
  • In Azure, create a client secret for your service principal. Make a note of the client secret, the directory ID, and the application ID for the client secret.
Step 1: Create a storage credential

You can create a storage credential using the Catalog Explorer or the Unity Catalog CLI. Follow these steps to create a storage credential using Catalog Explorer.

  1. In a new browser tab, log in to Databricks.
  2. Click Catalog.
  3. Click Storage Credentials.
  4. Click Create Credential.
  5. Enter example_credential for the name of the storage credential.
  6. Set Client Secret, Directory ID, and Application ID to the values for your service principal.
  7. Optionally enter a comment for the storage credential.
  8. Click Save.
    Leave this browser open for the next steps.
Step 2: Create an external location

An external location references a storage credential and also contains a storage path on your cloud tenant. The external location allows reading from and writing to only that path and its child directories. You can create an external location from Catalog Explorer, a SQL command, or the Unity Catalog CLI (a SQL sketch follows the steps below). Follow these steps to create an external location using Catalog Explorer.

  1. Go to the browser tab where you just created a storage credential.
  2. Click Catalog.
  3. Click External Locations.
  4. Click Create location.
  5. Enter example_location for the name of the external location.
  6. Enter the storage container path that the location allows reading from and writing to.
  7. Set Storage Credential to example_credential, the storage credential you just created.
  8. Optionally enter a comment for the external location.
  9. Click Save.
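If you prefer SQL over the UI, the same external location can be created from a notebook. The sketch below is illustrative; replace the URL placeholder with your own container path and adjust names as needed.

# Run from a Python notebook cell; equivalent to the Catalog Explorer steps above
spark.sql("""
CREATE EXTERNAL LOCATION IF NOT EXISTS example_location
URL 'abfss://container@storageaccount.dfs.core.windows.net/path'
WITH (STORAGE CREDENTIAL example_credential)
COMMENT 'External location for example tables'
""")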
-- Grant access to create tables in the external location
GRANT USE CATALOG
ON example_catalog
TO `all users`;
 
GRANT USE SCHEMA
ON example_catalog.example_schema
TO `all users`;
 
GRANT CREATE EXTERNAL TABLE
ON LOCATION example_location
TO `all users`;
-- Create an example catalog and schema to contain the new table
CREATE CATALOG IF NOT EXISTS example_catalog;
USE CATALOG example_catalog;
CREATE SCHEMA IF NOT EXISTS example_schema;
USE example_schema;
-- Create a new external Unity Catalog table from an existing table
-- Replace <bucket_path> with the storage location where the table will be created
CREATE TABLE IF NOT EXISTS trips_external
LOCATION 'abfss://<bucket_path>'
AS SELECT * from samples.nyctaxi.trips;
 
-- To use a storage credential directly, add 'WITH (CREDENTIAL <credential_name>)' to the SQL statement.

Some useful Microsoft documentation to refer to:

Create an external table in Unity Catalog
Configure a managed identity for Unity Catalog
Create a Unity Catalog metastore
Manage access to external cloud services using service credentials
Create a storage credential for connecting to Azure Data Lake Storage Gen2
External locations

Example



CREATE TABLE external_table (
    id INT,
    name STRING
)
LOCATION 'abfss://container@storageaccount.dfs.core.windows.net/path/to/data';

INSERT INTO external_table VALUES (1, 'Bob');

SELECT * FROM external_table;

3. Foreign Tables

Foreign tables reference data stored in external systems (e.g., Snowflake, Redshift) without copying the data into Databricks.

Data Storage and Location

The metadata is stored in Unity Catalog, but the data resides in another metastore (e.g., an external data warehouse like Snowflake or BigQuery).

It does not point to raw files but to an external system.

Use Case:

Best for querying external databases like Snowflake, BigQuery, Redshift without moving data.

Pros and Cons

Pros: No data duplication, seamless integration with external systems.

Cons: Performance depends on the external system’s capabilities.

Example

In Unity Catalog, foreign tables are typically surfaced through Lakehouse Federation: you create a connection to the external system and a foreign catalog on top of it, and the remote tables then appear as foreign tables you can query. The sketch below is illustrative only; host, credentials, and option values depend on your Snowflake account.

-- Create a connection to Snowflake (values are placeholders)
CREATE CONNECTION IF NOT EXISTS snowflake_conn
TYPE snowflake
OPTIONS (
    host 'myaccount.snowflakecomputing.com',
    port '443',
    user 'user',
    password 'password',
    sfWarehouse 'warehouse'
);

-- Expose a Snowflake database as a foreign catalog
CREATE FOREIGN CATALOG IF NOT EXISTS snowflake_catalog
USING CONNECTION snowflake_conn
OPTIONS (database 'database');

-- Query a foreign table
SELECT * FROM snowflake_catalog.schema.table;

4. Delta Tables

Delta tables use the Delta Lake format, providing ACID transactions, scalable metadata handling, and data versioning.

Data Storage and Location

A special type of managed or external table that uses Delta Lake format.

Can be in managed storage or external storage.

Use Case:

Ideal for reliable, versioned data pipelines.

Pros and Cons

Pros: ACID compliance, time travel, schema enforcement, efficient upserts/deletes.

Cons: Slightly more complex due to Delta Lake features.

Example

CREATE TABLE delta_table (
    id INT,
    name STRING
)
USING DELTA
LOCATION 'abfss://container@storageaccount.dfs.core.windows.net/path/to/delta';

INSERT INTO delta_table VALUES (1, 'Charlie');

SELECT * FROM delta_table;

-- Time travel example
SELECT * FROM delta_table VERSION AS OF 1;

5. Feature Tables

Feature tables are used in machine learning workflows to store and manage feature data for training and inference.

Data Storage and Location

Used for machine learning (ML) feature storage with Databricks Feature Store.

Can be managed or external.

Use Case:

Ideal for managing and sharing features across ML models and teams.

Pros and Cons:

Pros: Centralized feature management, versioning, and lineage tracking.

Example:

from databricks.feature_store import FeatureStoreClient
from pyspark.sql.types import StructType, StructField, IntegerType, FloatType

fs = FeatureStoreClient()

# Register the feature table (the schema is passed as a StructType)
fs.create_table(
    name="feature_table",
    primary_keys=["id"],
    schema=StructType([
        StructField("id", IntegerType()),
        StructField("feature1", FloatType()),
        StructField("feature2", FloatType()),
    ]),
    description="Example feature table"
)

# df is a Spark DataFrame with columns id, feature1, feature2 (prepared elsewhere)
fs.write_table("feature_table", df, mode="overwrite")
features = fs.read_table("feature_table")
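
To consume these features for model training, the Feature Store client can assemble a training set by joining the registered features onto a label DataFrame. This is a hedged sketch: label_df and the label column are placeholders and not part of the original example.

from databricks.feature_store import FeatureLookup

# label_df is assumed to be a Spark DataFrame with the key column 'id' and a 'label' column
training_set = fs.create_training_set(
    df=label_df,
    feature_lookups=[
        FeatureLookup(table_name="feature_table", lookup_key="id",
                      feature_names=["feature1", "feature2"])
    ],
    label="label"
)
training_df = training_set.load_df()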

6. Streaming Tables

Streaming tables are designed for real-time data ingestion and processing using Structured Streaming.

Data Location:

Can be stored in managed or external storage.

Use Case:

Ideal for real-time data pipelines and streaming analytics.

Pros and Cons

Pros: Supports real-time data processing, integrates with Delta Lake for reliability.

Cons: Requires understanding of streaming concepts and infrastructure.

Example:

CREATE TABLE streaming_table (
    id INT,
    name STRING
)
USING DELTA;

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("StreamingExample").getOrCreate()

# Stream from a Delta path into the streaming table; a checkpoint location is required for the streaming write (paths are examples)
streaming_df = spark.readStream.format("delta").load("/path/to/delta")
streaming_df.writeStream.format("delta").option("checkpointLocation", "/path/to/checkpoints").outputMode("append").start("/path/to/streaming_table")

7. Delta Live Tables (DLT)

Delta Live Tables (DLT) is the modern replacement for Live Tables. It is a framework for building reliable, maintainable, and scalable ETL pipelines using Delta Lake. DLT automatically handles dependencies, orchestration, and error recovery.

Data storage and Location:

Data is stored in Delta Lake format, either in managed or external storage.

Use Case:

Building production-grade ETL pipelines for batch and streaming data.

  • DLT pipelines are defined using Python or SQL.
  • Tables are automatically materialized and can be queried like any other Delta table.

Pros and Cons

Pros:

  • Declarative pipeline definition.
  • Automatic dependency management.
  • Built-in data quality checks and error handling.
  • Supports both batch and streaming workloads.

Cons: Requires understanding of Delta Lake and ETL concepts.

Example

import dlt

@dlt.table
def live_table():
    return spark.read.format("delta").load("/path/to/source_table")
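
Because DLT also supports declarative expectations, data quality rules can be attached directly to a table definition. A minimal sketch (the source path and rule name are illustrative):

import dlt

@dlt.table(comment="Source table with a basic quality gate")
@dlt.expect_or_drop("valid_id", "id IS NOT NULL")
def clean_live_table():
    # Rows violating the expectation are dropped before the table is materialized
    return spark.read.format("delta").load("/path/to/source_table")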

8. Hive Tables (Legacy)

Hive tables are legacy tables that use the Apache Hive format. They are supported for backward compatibility.

Data storage Location:

Can be stored in managed or external storage.

Use Case:

Legacy systems or migration projects.

Pros and Cons

  • Pros: Backward compatibility with older systems.
  • Cons: Lacks modern features like ACID transactions and time travel.

Example

CREATE TABLE hive_table (
    id INT,
    name STRING
)
STORED AS PARQUET;

INSERT INTO hive_table VALUES (1, 'Dave');
SELECT * FROM hive_table;

Final Thoughts

Use Delta Live Tables for automated ETL pipelines.

Use Feature Tables for machine learning models.

Use Foreign Tables for querying external databases.

Avoid Hive Tables unless working with legacy systems.

Summary

  • Managed Tables: Fully managed by Databricks, ideal for internal workflows.
  • External Tables: Metadata managed by Databricks, data stored externally, ideal for cross-platform sharing.
  • Delta Tables: Advanced features like ACID transactions and time travel, ideal for reliable pipelines.
  • Foreign Tables: Query external systems without data duplication.
  • Streaming Tables: Designed for real-time data processing.
  • Feature Tables: Specialized for machine learning feature management.
  • Hive Tables: Legacy format, not recommended for new projects.

Each table type has its own creation syntax and usage patterns, and the choice depends on your specific use case, data storage requirements, and workflow complexity.

Please do not hesitate to contact me if you have any questions at William . chen @ mainri.ca

(remove all space from the email account 😊)

Data Migration Checklist: A Starting Point

Creating a robust data migration checklist can be challenging, particularly for those new to the process. To simplify this, we’ve compiled a core set of essential activities for effective data migration planning. This checklist, designed to support thorough preparation for data migration projects, has been successfully used across diverse migration projects over several years, including those for financial institutions (including banks), insurance companies, consulting firms, and other industries. While not exhaustive, it provides a solid foundation that can be customized with project-specific requirements.

It is available for download as a template.

Please do not hesitate to contact me if you have any questions at William . chen @ mainri.ca

(remove all space from the email account 😊)

Implementing Slowly Changing Dimension Type 2 Using Delta Lake on Databricks

Built on Apache Spark, Delta Lake provides a robust storage layer for data in Delta tables. Its features include ACID transactions, high-performance queries, schema evolution, and data versioning, among others.

Today’s focus is on how Delta Lake simplifies the management of slowly changing dimensions (SCDs).

A quick review of Slowly Changing Dimension Type 2

A quick recap of SCD Type 2 follows:

  • Storing historical dimension data with effective dates.
  • Keeping a full history of dimension changes (with start/end dates).
  • Adding new rows for dimension changes (preserving history).
# Existing Dimension data
surrokey  depID   dep	StartDate   EndDate     IsActivity
1	  1001	  IT	2019-01-01  9999-12-31  1
2	  1002	  Sales	2019-01-01  9999-12-31  1
3	  1003	  HR	2019-01-01  9999-12-31  1

# Dimension changed and new data comes 
depId dep
1002  wholesale   <--- depID is the same, dep changed from "Sales" to "wholesale"
1004  Finance     <--- new data

# the new Dimension will be:
surrokey  depID  dep        StartDate   EndDate     IsActivity
1         1001   IT         2019-01-01  9999-12-31  1   <-- no action required
2         1002   Sales      2019-01-01  2020-12-31  0   <-- mark as inactive
3         1003   HR         2019-01-01  9999-12-31  1   <-- no action required
4         1002   wholesale  2021-01-01  9999-12-31  1   <-- add updated active value
5         1004   Finance    2021-01-01  9999-12-31  1   <-- insert new data

Creating demo data

We’re creating a Delta table, dim_dep, and inserting three rows of existing dimension data.

Existing dimension data

%sql
-- Create table dim_dep
create table dim_dep (
Surrokey BIGINT GENERATED ALWAYS AS IDENTITY
, depID int
, dep string
, StartDate DATE
, EndDate DATE
, IsActivity BOOLEAN
)
using delta
location 'dbfs:/mnt/dim/'

-- Insert data (run as a separate statement/cell)
insert into dim_dep (depID, dep, StartDate, EndDate, IsActivity) values
(1001,'IT','2019-01-01', '9999-12-31' , 1),
(1002,'Sales','2019-01-01', '9999-12-31' , 1),
(1003,'HR','2019-01-01', '9999-12-31' , 1)

select * from dim_dep
Surrokey depID	dep	StartDate	EndDate	        IsActivity
1	 1001	IT	2019-01-01	9999-12-31	true
2	 1002	Sales	2019-01-01	9999-12-31	true
3	 1003	HR	2019-01-01	9999-12-31	true
%python
dbutils.fs.ls('dbfs:/mnt/dim')
path	name	size	modificationTime
Out[43]: [FileInfo(path='dbfs:/mnt/dim/_delta_log/', name='_delta_log/', size=0, modificationTime=0),
 FileInfo(path='dbfs:/mnt/dim/part-00000-5f9085db-92cc-4e2b-886d-465924de961b-c000.snappy.parquet', name='part-00000-5f9085db-92cc-4e2b-886d-465924de961b-c000.snappy.parquet', size=1858, modificationTime=1736027755000)]

Incoming source data

The incoming source data may contain new records or updated records.

Dimension changed and new data comes 
depId       dep
1002        wholesale 
1003        HR  
1004        Finance     

  • depID 1002: dep changed from “Sales” to “wholesale”, so the dim_dep table must be updated;
  • depID 1003: nothing changed, no action required;
  • depID 1004: a new record, to be inserted into dim_dep.

Assuming the data, originating from other business processes, is now stored in the data lake as CSV files.

Implementing SCD Type 2

Step 1: Read the source

%python 
df_dim_dep_source = spark.read.csv('dbfs:/FileStore/dep.csv', header=True)

df_dim_dep_source.show()
+-----+---------+
|depid|      dep|
+-----+---------+
| 1002|Wholesale|
| 1003|       HR|
| 1004|  Finance|
+-----+---------+

Step 2: Read the target

df_dim_dep_target = spark.read.format("delta").load("dbfs:/mnt/dim/")

df_dim_dep_target.show()
+--------+-----+-----+----------+----------+----------+
|Surrokey|depID|  dep| StartDate|   EndDate|IsActivity|
+--------+-----+-----+----------+----------+----------+
|       1| 1001|   IT|2019-01-01|9999-12-31|      true|
|       2| 1002|Sales|2019-01-01|9999-12-31|      true|
|       3| 1003|   HR|2019-01-01|9999-12-31|      true|
+--------+-----+-----+----------+----------+----------+

Step 3: Left outer join source to target

We left-outer-join the source DataFrame (df_dim_dep_source) with the target DataFrame (df_dim_dep_target) on source depid = target depID, restricted to target records where IsActivity = true (i.e., active records).

The intent of this join is to capture every incoming source record while matching only the active records in the target, because only those require an SCD update. The resulting DataFrame is shown below.

src = df_dim_dep_source
tar = df_dim_dep_target
df_joined = src.join (tar,\
        (src.depid == tar.depID) \
         & (tar.IsActivity == 'true')\
        ,'left') \
    .select(src['*'] \
        , tar.Surrokey.alias('tar_surrokey')\
        , tar.depID.alias('tar_depID')\
        , tar.dep.alias('tar_dep')\
        , tar.StartDate.alias('tar_StartDate')\
        , tar.EndDate.alias('tar_EndDate')\
        , tar.IsActivity.alias('tar_IsActivity')   )
    
df_joined.show()
+-----+---------+------------+---------+-------+-------------+-----------+--------------+
|depid|      dep|tar_surrokey|tar_depID|tar_dep|tar_StartDate|tar_EndDate|tar_IsActivity|
+-----+---------+------------+---------+-------+-------------+-----------+--------------+
| 1002|Wholesale|           2|     1002|  Sales|   2019-01-01| 9999-12-31|          true|
| 1003|       HR|           3|     1003|     HR|   2019-01-01| 9999-12-31|          true|
| 1004|  Finance|        null|     null|   null|         null|       null|          null|
+-----+---------+------------+---------+-------+-------------+-----------+--------------+

Step 4: Filter only the new and changed records

In this demo we have only two columns, depid and dep, but a real development environment may have many more.

Instead of comparing the columns one by one, e.g.,
src_col1 != tar_col1,
src_col2 != tar_col2,
…
src_colN != tar_colN,
we compute a hash over each side’s column combination and compare the hashes. We also cast the columns to a common data type (string) before hashing, in case the source and target data types differ.

from pyspark.sql.functions import col , xxhash64

df_filtered = df_joined.filter(\
    xxhash64(col('depid').cast('string'),col('dep').cast('string')) \
    != \
    xxhash64(col('tar_depID').cast('string'),col('tar_dep').cast('string'))\
    )
    
df_filtered.show()
+-----+---------+------------+---------+-------+-------------+-----------+--------------+
|depid|      dep|tar_surrokey|tar_depID|tar_dep|tar_StartDate|tar_EndDate|tar_IsActivity|
+-----+---------+------------+---------+-------+-------------+-----------+--------------+
| 1002|Wholesale|           2|     1002|  Sales|   2019-01-01| 9999-12-31|          true|
| 1004|  Finance|        null|     null|   null|         null|       null|          null|
+-----+---------+------------+---------+-------+-------------+-----------+--------------+

From the result, we can see:

  • The row with depid = 1003 (dep = HR) was filtered out because the source and target values are identical; no action is required.
  • The row with depid = 1002 changed from “Sales” to “Wholesale” and needs updating.
  • The row with depid = 1004 (Finance) is a brand-new record and needs to be inserted into the target dimension table.

Step 5: Stage the changed and new records with merge_key = depid

From the discussion above, we know that depid=1002 needs updating and depid=1004 is a new record. We create a new column, merge_key, which drives the upsert: for this first set of staged records, merge_key holds the source depid, so during the merge each row is matched against the target on depID.

Add a new column – “merge_key”

df_inserting = df_filtered.withColumn('merge_key', col('depid'))

df_inserting.show()
+-----+---------+------------+---------+-------+-------------+-----------+--------------+---------+
|depid|      dep|tar_surrokey|tar_depID|tar_dep|tar_StartDate|tar_EndDate|tar_IsActivity|merge_key|
+-----+---------+------------+---------+-------+-------------+-----------+--------------+---------+
| 1002|Wholesale|           2|     1002|  Sales|   2019-01-01| 9999-12-31|          true|     1002|
| 1004|  Finance|        null|     null|   null|         null|       null|          null|     1004|
+-----+---------+------------+---------+-------+-------------+-----------+--------------+---------+
During the merge, the 1002 row will match the existing active record (Sales) and close it, while the 1004 row has no match in the target and will be inserted as a brand-new record.

Step 6: Stage the new versions of changed records with merge_key = 'None'

from pyspark.sql.functions import lit
df_updating = df_filtered.filter(col('tar_depID').isNotNull()).withColumn('merge_key', lit('None'))

df_updating.show()
+-----+---------+------------+---------+-------+-------------+-----------+--------------+---------+
|depid|      dep|tar_surrokey|tar_depID|tar_dep|tar_StartDate|tar_EndDate|tar_IsActivity|merge_key|
+-----+---------+------------+---------+-------+-------------+-----------+--------------+---------+
| 1002|Wholesale|           2|     1002|  Sales|   2019-01-01| 9999-12-31|          true|     None|
+-----+---------+------------+---------+-------+-------------+-----------+--------------+---------+
The above record carries the new value (“Wholesale”) for an existing dimension member.

This DataFrame keeps the records whose tar_depID is not null, i.e., members that already exist in the target table and whose SCD columns need to change. Their merge_key is set to the literal 'None' so that, during the merge, they will not match any target row and will instead be inserted as the new active versions of those members.

Step 7: Combine the two staged sets into a single DataFrame

df_stage_final = df_updating.union(df_inserting)

df_stage_final.show()
+-----+---------+------------+---------+-------+-------------+-----------+--------------+---------+
|depid|      dep|tar_surrokey|tar_depID|tar_dep|tar_StartDate|tar_EndDate|tar_IsActivity|merge_key|
+-----+---------+------------+---------+-------+-------------+-----------+--------------+---------+
| 1002|Wholesale|           2|     1002|  Sales|   2019-01-01| 9999-12-31|          true|     None| <-- no match: inserted as the new active version
| 1002|Wholesale|           2|     1002|  Sales|   2019-01-01| 9999-12-31|          true|     1002| <-- matches and closes the existing active row
| 1004|  Finance|        null|     null|   null|         null|       null|          null|     1004| <-- no match: inserted as a brand-new record
+-----+---------+------------+---------+-------+-------------+-----------+--------------+---------+
  • Records with merge_key = 'None' do not match any target row and are inserted as the new active versions of changed members.
  • Records whose merge_key equals an existing active depID match that row and close it (set IsActivity = false and EndDate); records whose merge_key is not present in the target (e.g., 1004) are inserted as brand-new rows.

Step 8: Upserting the dim_dep Dimension Table

Before performing the upsert, let’s quickly review the existing dim_dep table and the incoming source data.

# Existing dim_dep table
spark.read.table('dim_dep').show()
+--------+-----+-----+----------+----------+----------+
|Surrokey|depID|  dep| StartDate|   EndDate|IsActivity|
+--------+-----+-----+----------+----------+----------+
|       1| 1001|   IT|2019-01-01|9999-12-31|      true|
|       2| 1002|Sales|2019-01-01|9999-12-31|      true|
|       3| 1003|   HR|2019-01-01|9999-12-31|      true|
+--------+-----+-----+----------+----------+----------+

# incoming updated source data
spark.read.csv('dbfs:/FileStore/dep_src.csv', header=True).show()
+-----+---------+
|depid|      dep|
+-----+---------+
| 1002|Wholesale|
| 1003|       HR|
| 1004|  Finance|
+-----+---------+

Implementing an SCD Type 2 UpSert on the dim_dep Dimension Table

from delta.tables import DeltaTable
from pyspark.sql.functions import current_date, to_date, lit

# define the source DataFrame
src = df_stage_final  # this is a DataFrame object

# Load the target Delta table
tar = DeltaTable.forPath(spark, "dbfs:/mnt/dim")  # target Dimension table


# Performing the UpSert
tar.alias("tar").merge(
    src.alias("src"),
    condition="tar.depID = src.merge_key AND tar.IsActivity = 'true'"
).whenMatchedUpdate(
    set={
        "IsActivity": "'false'",        # close the old active row
        "EndDate": "current_date()"
    }
).whenNotMatchedInsert(
    values={
        "depID": "src.depid",
        "dep": "src.dep",
        "StartDate": "current_date()",
        "EndDate": "to_date('9999-12-31', 'yyyy-MM-dd')",
        "IsActivity": "'true'"          # new rows start as active
    }
).execute()

all done!

Validating the result

spark.read.table('dim_dep').sort(['depID','Surrokey']).show()
+--------+-----+---------+----------+----------+----------+
|Surrokey|depID|      dep| StartDate|   EndDate|IsActivity|
+--------+-----+---------+----------+----------+----------+
|       1| 1001|       IT|2019-01-01|9999-12-31|      true|
|       2| 1002|    Sales|2019-01-01|2020-01-05|     false| <-- closed (inactive)
|       4| 1002|Wholesale|2020-01-05|9999-12-31|      true| <-- new active version
|       3| 1003|       HR|2019-01-01|9999-12-31|      true|
|       5| 1004|  Finance|2020-01-05|9999-12-31|      true| <-- new record appended
+--------+-----+---------+----------+----------+----------+
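
As an optional sanity check (a small sketch; the version number is illustrative), the Delta transaction log records the MERGE and enables time travel back to the pre-merge state of dim_dep:

# Inspect the table history to confirm the MERGE operation was recorded
spark.sql("DESCRIBE HISTORY dim_dep").select("version", "timestamp", "operation").show(truncate=False)

# Time travel: view dim_dep as it looked before the merge
spark.sql("SELECT * FROM dim_dep VERSION AS OF 0").show()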

Conclusion

We demonstrated how to implement Slowly Changing Dimension (SCD) Type 2 using Delta Lake, a storage layer that turns a data lake into a reliable, high-performance, and scalable repository. With this approach, organizations can preserve full dimension history and make informed decisions with confidence.

Please do not hesitate to contact me if you have any questions at William . chen @ mainri.ca

(remove all space from the email account 😊)