Spark: RDD, DataFrame, Dataset, Transformations and Actions

In Apache Spark, RDD, DataFrame, and Dataset are the core data structures used to handle distributed data. Each of these offers different levels of abstraction and optimization.

Basic Overview

Definition

  • RDD: Low-level abstraction for distributed data (objects). RDDs (Resilient Distributed Datasets) are the fundamental data structure in Spark, representing an immutable distributed collection of objects. They provide a low-level interface to Spark, allowing developers to perform operations directly on data in a functional programming style.
  • DataFrame: High-level abstraction for structured data (a table). DataFrames are similar to tables in relational databases and are used to represent structured data. They allow you to perform complex queries and operations on structured datasets with a SQL-like syntax.
  • Dataset: Combines RDD and DataFrame, adding type safety (Scala/Java). Datasets are an extension of DataFrames that provide the benefits of both RDDs and DataFrames, with compile-time type safety. They are particularly useful in Scala and Java for ensuring data types at compile time.
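To make the three abstractions concrete, here is a minimal Scala sketch that builds the same data as an RDD, a DataFrame, and a Dataset. The Person case class, app name, and sample values are illustrative assumptions, not from any particular codebase:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical record type reused by the sketches in this section.
case class Person(name: String, age: Int)

object AbstractionsDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("abstractions-demo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // RDD: a distributed collection of plain objects, no schema attached.
    val rdd = spark.sparkContext.parallelize(Seq(Person("Ann", 34), Person("Bo", 17)))

    // DataFrame: the same data as rows with named columns (a Dataset[Row]).
    val df = rdd.toDF()

    // Dataset: named columns plus compile-time typing as Person.
    val ds = df.as[Person]

    ds.show()
    spark.stop()
  }
}
```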
API

  • RDD: Functional (map, filter, etc.). The RDD API provides functional programming constructs, expressing transformations and actions through functions like map, filter, and reduce.
  • DataFrame: SQL-like API plus relational functions. The DataFrame API includes both SQL-like queries and relational operators, making it more user-friendly and easier to work with structured data.
  • Dataset: Typed API plus relational functions (strong typing in Scala/Java). The Dataset API combines typed operations with relational functions, allowing for a more expressive and type-safe programming model in Scala/Java.
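The difference in style is easiest to see side by side. A hedged sketch, reusing spark, rdd, df, and ds from the example above (the column and field names are the assumed Person fields):

```scala
// Assumes `spark`, `rdd`, `df`, `ds`, and `import spark.implicits._`
// from the previous sketch.

// RDD API: functional operations over whole objects.
val adultNamesRdd = rdd.filter(p => p.age >= 18).map(_.name)

// DataFrame API: SQL-like and column-name based; names are checked only at runtime.
val adultNamesDf = df.filter($"age" >= 18).select("name")

// Dataset API: typed lambdas plus relational operators, checked at compile time.
val adultNamesDs = ds.filter(_.age >= 18).map(_.name)
```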
Data Structure

  • RDD: Collection of elements (e.g., objects, arrays). RDDs are essentially collections of objects of any type, primitive or complex, so users can work with varied data without a predefined schema.
  • DataFrame: Distributed table with named columns. Each column has a name and a type; this structured format makes it easier to work with large datasets and perform operations on them.
  • Dataset: Typed distributed table with named columns. Datasets also represent data in a structured format, but they enforce types at compile time, enhancing safety and performance, especially in statically typed languages like Scala and Java.
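One way to see the structural difference is to convert an untyped collection into a table. A small sketch, again assuming the spark session from the first example:

```scala
// An RDD can hold arbitrary JVM objects, here plain tuples, with no schema.
val pairs = spark.sparkContext.parallelize(Seq(("a", 1), ("b", 2)))

// Converting to a DataFrame attaches a name and a type to each column.
val pairsDf = pairs.toDF("letter", "count")
pairsDf.printSchema()
// root
//  |-- letter: string (nullable = true)
//  |-- count: integer (nullable = false)
```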
Schema

  • RDD: No schema.
  • DataFrame: Defined schema (column names and types).
  • Dataset: Schema with compile-time type checking (Scala/Java).
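The practical consequence of compile-time checking: with a DataFrame, a misspelled column name surfaces only when the query runs, while a Dataset catches it during compilation. A sketch using df and ds from the first example (the misspelling agee is deliberately left commented out):

```scala
// DataFrame: a misspelled column compiles fine and fails only at runtime,
// typically with an AnalysisException during query analysis.
// df.select("agee")

// Dataset: the same mistake is rejected by the Scala compiler.
// ds.map(_.agee)   // does not compile: value agee is not a member of Person

// The valid versions:
val ages      = df.select("age")
val agesTyped = ds.map(_.age)
```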
Performance

  • RDD: Less optimized; does not benefit from the Catalyst optimizer or the Tungsten execution engine.
  • DataFrame: Highly optimized via Catalyst and Tungsten.
  • Dataset: Highly optimized via Catalyst and Tungsten.
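You can inspect what the Catalyst optimizer does to a DataFrame query with explain, which prints the query plan. A sketch reusing df from the first example:

```scala
// Catalyst rewrites the logical plan (e.g., pruning unused columns and
// pushing filters down); Tungsten then executes the physical plan.
df.filter($"age" >= 18)
  .select("name")
  .explain()
```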

Transformations and Actions

Transformations and Actions are two fundamental operations used to manipulate distributed data collections like RDDs, DataFrames, and Datasets.

High-Level Differences

  • Transformations: These are lazy operations that define a new RDD, DataFrame, or Dataset by applying a function to an existing one. They do not execute immediately; instead, Spark builds a DAG (Directed Acyclic Graph) of all transformations.
  • Actions: These are eager operations that trigger execution by forcing Spark to compute and return results or perform output operations. Actions break the laziness and execute the accumulated transformations (see the sketch after this list).
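A minimal sketch of this laziness, assuming the spark session from the earlier examples; no job runs until the count on the last line:

```scala
val nums    = spark.sparkContext.parallelize(1 to 1000000)
val squares = nums.map(n => n.toLong * n)   // transformation: only recorded in the DAG
val evens   = squares.filter(_ % 2 == 0)    // transformation: still nothing executed
val total   = evens.count()                 // action: triggers the whole computation
println(s"even squares: $total")
```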

Key Differences Between Transformations and Actions

Definition

  • Transformations: Operations that define a new dataset based on an existing one, but do not execute immediately.
  • Actions: Operations that trigger the execution of transformations and return results or perform output.

Lazy Evaluation

  • Transformations: Yes; they are lazily evaluated and only run when an action is called.
  • Actions: No; actions are eager and immediately compute the result by triggering the entire execution plan.

Execution Trigger

  • Transformations: Do not trigger computation immediately; Spark builds a DAG of transformations so it can optimize execution.
  • Actions: Trigger the execution of the transformations, causing Spark to run jobs and return or output data.

Return Type

  • Transformations: Return a new RDD, DataFrame, or Dataset (still a “recipe”, not materialized data).
  • Actions: Return a result to the driver program (such as a value) or write data to an external storage system.

Example Operations

  • Transformations: map, filter, flatMap, join, groupBy, select, agg, orderBy.
  • Actions: count, collect, first, take, reduce, saveAsTextFile, foreach.
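Putting the two together, here is a hedged word-count sketch; the input path input.txt and output path counts-out are hypothetical, and the spark session from the first example is assumed:

```scala
val counts = spark.sparkContext.textFile("input.txt") // transformation: nothing read yet
  .flatMap(_.split("\\s+"))                           // transformation
  .filter(_.nonEmpty)                                 // transformation
  .map(word => (word, 1))                             // transformation
  .reduceByKey(_ + _)                                 // transformation: still just a recipe

counts.take(5).foreach(println)     // action: runs a job, returns 5 pairs to the driver
counts.saveAsTextFile("counts-out") // action: runs again (unless cached) and writes out
```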

Conclusion

  • Transformations are used to define what to do with the data but don’t execute until an action triggers them.
  • Actions are used to retrieve results or perform output, forcing Spark to execute the transformations.