The Alter Row transformation in Azure Data Factory (ADF) sets row-level policies on the rows flowing through a mapping data flow: each row can be marked for insert, update, delete, or upsert based on a condition you define for each action. The sink then applies those changes to the destination dataset, so the transformation works with sinks that support row-level operations, such as databases and Delta Lake. You configure it inside the mapping data flow, map input columns to target columns, and use expressions for the conditional logic. It is especially useful for incremental data loads and for keeping a destination consistent with its source, and proper partitioning helps it perform well.
What is the Alter Row Transformation?
The Alter Row Transformation is used to set row-level policies for data being written to a sink. This transformation is particularly useful when you are working with slowly changing dimensions (SCD) or when you need to synchronize data between source and sink systems.
Key Features
- Define Row Actions:
- Insert: Add new rows.
- Update: Modify existing rows.
- Delete: Remove rows.
- Upsert: Insert or update rows.
- No Action: Ignore rows.
- Condition-Based Rules:
- Define rules using expressions for each action (see the example expressions after this list).
- Works with Supported Sinks:
- SQL Database, Delta Lake, and more.
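To make the condition-based rules concrete, here is one way the expression for each action could look in the data flow expression language. The column names used here (lookupId, sourceHash, targetHash, isDeleted) are hypothetical; in a real flow they would come from an earlier lookup or derived column transformation.

```
Insert if:  isNull(lookupId)                          // row does not exist in the target yet
Update if:  !isNull(lookupId) && sourceHash != targetHash
Delete if:  isDeleted == true()
Upsert if:  true()                                    // treat every row as insert-or-update
```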
How Does the Alter Row Transformation Work?
- Input Data: The transformation takes input data from a previous transformation in the data flow.
- Define Conditions: You define conditions for each action (insert, update, delete, upsert) using expressions.
- Output to Sink: The transformation passes the data to the sink, where the specified actions are performed based on the conditions.
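As a rough sketch, the alter row step in the generated data flow script chains an input stream to an output stream with one predicate per action. The stream names and conditions below are placeholders rather than the exact script ADF produces; the resulting stream (SetRowPolicies) is what you then connect to the sink.

```
SourceData alterRow(upsertIf(true()),
    deleteIf(isDeleted == true())) ~> SetRowPolicies
```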
Preparing test data
We will focus on the core concepts of the Alter Row transformation, using the following sample data.
| id | CustID | Product | Quantity | Amount |
|----|--------|---------|----------|--------|
| 1  | C1     | A       | 2        | 20     |
| 2  | C1     | B       | 3        | 30     |
| 3  | C2     | C       | 1        | 10     |
| 4  | C1     | A       | 2        | 20     |
| 5  | C3     | A       | 3        | 30     |
| 6  | C2     | B       | 1        | 10     |
| 7  | C3     | C       | 2        | 20     |
| 8  | C1     | C       | 3        | 30     |
| 9  | C1     | A       | 2        | 20     |
| 10 | C2     | A       | 1        | 30     |
| 11 | C3     | C       | 3        | 10     |
Use Alter Row Transformation
Step 1: Create Data Flow
Create a Data Flow, add a source transformation and configure it.
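If you look at the data flow script behind the source (or script it by hand), the projection for the sample data above might look roughly like this; the stream name SourceOrders is a placeholder and the dataset and linked-service details are omitted.

```
source(output(
        id as integer,
        CustID as string,
        Product as string,
        Quantity as integer,
        Amount as integer
    ),
    allowSchemaDrift: true,
    validateSchema: false) ~> SourceOrders
```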

Preview the source data:

Step 2: Add Alter Row Transformation

The Alter Row condition has four options:
- Insert if
- Update if
- Delete if
- Upsert if

Use the data flow expression builder to build each condition.
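The conditions themselves depend on your scenario. As a hypothetical example for this sample data, you could delete rows whose Quantity is zero and upsert everything else; the delete rule is listed first so it takes precedence when both conditions match.

```
Delete if:  Quantity == 0
Upsert if:  true()
```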

Preview its output.
We must pay attention to the action order: actions are processed in the order defined, so if a row satisfies more than one condition, the action listed first is the one applied.

Step 3: Add Sink transformation
Add a Sink transformation and configure it.
Currently, the Sink transformation supports two kinds of datasets: inline datasets and dataset objects. Supported stores include databases, Blob Storage, ADLS, and Delta Lake (inline dataset only); the detailed list is in the Microsoft documentation.
Inline datasets are recommended when you use flexible schemas, one-off sink instances, or parameterized sinks. If your sink is heavily parameterized, inline datasets let you avoid creating a “dummy” dataset object. Inline datasets are based on Spark, and their properties are native to data flows.
Dataset objects are reusable entities that can be used in other data flows and activities such as Copy.
For this demo, we are using a Delta inline dataset.
When the alter row policy allows delete, update, or upsert, we have to set the key columns (primary key) in the sink settings.
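For reference, an inline Delta sink with delete, update, and upsert enabled and a key column set could look roughly like the script below. The property names and paths are illustrative (based on the Delta inline dataset options), and AlterRow1 stands in for whatever your alter row transformation's output stream is called.

```
AlterRow1 sink(
    format: 'delta',
    fileSystem: 'demo-container',
    folderPath: 'alter-row-demo',
    insertable: true,
    updateable: true,
    upsertable: true,
    deletable: true,
    keys: ['id']) ~> DeltaSink
```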

Use Data Flow in Pipeline
We have completed the data flow; it is now ready to be used in a pipeline.
Create a pipeline
Create a pipeline and configure the data flow.



Let's change the source data:

Execute the pipeline again and check the Delta table result:

Conclusion
Notes
- Actions are processed in the order defined.
- Test rules with Data Preview.
- Primary key: Ensure that your sink has key columns defined, as they are required for update, delete, and upsert operations.
By using the Alter Row Transformation in ADF, you can efficiently manage data changes and ensure that your sink systems are always up-to-date with the latest data from your sources. This transformation is a powerful tool for data engineers working on ETL/ELT pipelines in Azure.
Please do not hesitate to contact me if you have any questions at William . Chen @ mainri.ca (remove all spaces from the email address).