Pentaho Data Integration
Concepts & Terminology

Understanding the key concepts & lingo ..



Introduction

The Data Integration perspective allows you to create two basic workflow types:

Transformations

Transformations describe the data flows of an ETL process, such as reading from a source, transforming the data, and loading it into a target location.

Jobs

Jobs coordinate ETL activities, such as defining the flow and dependencies that determine the order in which transformations run, or preparing for execution by checking conditions such as "Is my source file available?" or "Does a table exist in my database?"

Transformation

Transformations are the workhorses of the ETL process. They consist of:

Steps

provide you with a wide range of functionality, ranging from reading text files to implementing slowly changing dimensions.

Steps are executed in parallel.

Hops

help you define the flow of data in the stream. A hop represents a row buffer between one Step's output and the next Step's input, as illustrated in the Transformation below: data flows from the Text file input step to Filter rows, then to Sort rows, and finally to Table output.


Steps

There are some key characteristics of Steps:

  • Step names must be unique in a single Transformation

  • Virtually all Steps read and write rows of data (the exception being Generate Rows)

  • Most Steps can have multiple outgoing hops, which can be configured to either copy or distribute the data. Copy sends every row to all of the target Steps; Distribute sends rows to the target Steps in a round-robin fashion (illustrated in the sketch below).

  • Each Step runs in its own thread. For performance tuning, it is possible to run multiple copies of a Step, each in its own thread.

  • All Steps are executed in parallel, so it’s not possible to define an order of execution.

In addition to Steps and Hops, Notes enable you to document the Transformation.
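The copy/distribute behaviour can be pictured with plain Java collections. The sketch below is purely illustrative and uses made-up names (RowDispatchSketch, the hops list); it is not part of the PDI API. Copy hands a clone of each row to every outgoing hop, while distribute cycles through the hops round-robin.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual model only -- not the PDI API. It illustrates how a step with
// several outgoing hops can either copy each row to every hop or distribute
// rows across the hops in round-robin fashion.
public class RowDispatchSketch {

    // Copy: every outgoing hop receives its own copy of each row.
    static void copy(List<List<Object[]>> hops, Object[] row) {
        for (List<Object[]> hop : hops) {
            hop.add(row.clone());
        }
    }

    // Distribute: rows are spread over the hops one at a time (round-robin).
    static int next = 0;
    static void distribute(List<List<Object[]>> hops, Object[] row) {
        hops.get(next).add(row);
        next = (next + 1) % hops.size();
    }

    public static void main(String[] args) {
        List<List<Object[]>> hops = new ArrayList<>();
        hops.add(new ArrayList<>());
        hops.add(new ArrayList<>());

        for (int i = 0; i < 4; i++) {
            distribute(hops, new Object[]{"row-" + i});
        }
        // With distribute, each of the two hops ends up with 2 of the 4 rows;
        // with copy, each hop would have received all 4.
        System.out.println(hops.get(0).size() + " / " + hops.get(1).size());
    }
}
```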

Parallelism

When a transformation starts, all steps start at the same time. Each hop acts as a buffer between steps, generally holding a row set of 10,000 rows.

Data begins to flow once the first step has initialized: it reads the first row sets and writes them into the hop (the 10k buffer). The next step reads those row sets while the first step continues reading and writing, and in turn writes its own output into the hop feeding the following step, and so on. The buffer size can be set on the Miscellaneous tab of the Transformation properties panel.

Adjusting the Queue Size

When trying to optimize performance, you may want to adjust the input/output queue size, especially if you have a lot of RAM available. The queue size is configured as the "Nr of rows in rowset" setting in the transformation properties and applies to all steps in the transformation. Increasing it can allow the opening steps of a transformation to finish more quickly, freeing up CPU time for the subsequent steps.
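Conceptually, a hop behaves like a bounded blocking queue sitting between two step threads. The sketch below is a simplified model, not PDI's actual implementation: rowSetSize plays the role of "Nr of rows in rowset", the producer thread stands in for the upstream step and the consumer for the downstream step. When the queue is full the producer blocks, which is why a larger buffer can let fast opening steps run ahead of slower downstream ones.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Simplified model of a hop between two steps -- not PDI internals.
// The queue capacity corresponds to the "Nr of rows in rowset" setting.
public class HopBufferSketch {
    public static void main(String[] args) throws InterruptedException {
        int rowSetSize = 10_000;                       // default-style buffer size
        BlockingQueue<Object[]> hop = new ArrayBlockingQueue<>(rowSetSize);

        // Upstream step: writes rows into the hop, blocking when the buffer is full.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 50_000; i++) {
                    hop.put(new Object[]{i, "row " + i});
                }
                hop.put(new Object[0]);                // empty row marks end of stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Downstream step: reads rows from the hop as they become available.
        Thread consumer = new Thread(() -> {
            try {
                long count = 0;
                while (true) {
                    Object[] row = hop.take();
                    if (row.length == 0) break;        // end-of-stream marker
                    count++;
                }
                System.out.println("rows consumed: " + count);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```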

Data Types

PDI data types map internally to Java data types, so the Java behavior of these data types applies to the associated fields, parameters, and variables used in your transformations and jobs.

The following table describes these mappings:

| PDI Data Type | Java Data Type | Description | Example |
| --- | --- | --- | --- |
| BigNumber | BigDecimal | An arbitrary, unlimited-precision number. | 3.141592653589793238462643383279502884197169399375105820974944 |
| Binary | Byte[] | An array of bytes that can contain any type of binary data. | An image file or a compressed file stored as Binary data |
| Boolean | Boolean | A boolean value, true or false. | true or false |
| Date | Date | A date-time value with millisecond precision. | 2023-10-20T10:48:51.123 |
| Hierarchical (EE Plugin 9.5+) | BinaryTree | Data items that are related to each other by hierarchical relationships. | A family tree |
| Integer | Long | A signed 64-bit integer. | 42 |
| Internet Address | InetAddress | An Internet Protocol (IP) address. | 192.168.0.1 |
| Number | Double | A double-precision floating-point value (64 bits). | 2.7182818284590452353602874713526624977572470936999 |
| String | String | Variable, unlimited-length text encoded in UTF-8 (Unicode). | "Hello world!" |
| Timestamp | Timestamp | Allows the specification of fractional seconds to a precision of nanoseconds. | 2023-10-20T10:48:51.123456789 |
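Because each PDI type is backed by a standard Java class, the Java semantics (precision, range, formatting) are what ultimately apply to field values. The snippet below simply instantiates the Java classes from the table above as an illustration of the mappings; it is not PDI code, and the Hierarchical type's BinaryTree is omitted because it belongs to the EE plugin.

```java
import java.math.BigDecimal;
import java.net.InetAddress;
import java.sql.Timestamp;
import java.util.Date;

// Plain Java values of the types that back the PDI data types listed above.
public class PdiTypeMappings {
    public static void main(String[] args) throws Exception {
        BigDecimal bigNumber = new BigDecimal("3.14159265358979323846264338327950288419716939937510582097");
        byte[] binary       = {0x01, 0x02, 0x03};             // e.g. image or compressed file contents
        Boolean bool        = Boolean.TRUE;
        Date date           = new Date();                      // millisecond precision
        Long integer        = 42L;                             // PDI Integer is a signed 64-bit long
        InetAddress address = InetAddress.getByName("192.168.0.1");
        Double number       = 2.718281828459045;               // 64-bit double precision
        String text         = "Hello world!";
        Timestamp timestamp = Timestamp.valueOf("2023-10-20 10:48:51.123456789"); // nanosecond precision

        System.out.println(bigNumber + " " + binary.length + " " + bool + " " + date + " " + integer
                + " " + address.getHostAddress() + " " + number + " " + text + " " + timestamp);
    }
}
```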

Jobs

In a PDI process, Jobs orchestrate other jobs and transformations in a coordinated way to realize the overall business process:

Job Entries

Represent the different tasks or processes that need to be executed as part of the job. Job entries can include Transformations, shell scripts, database operations, file operations, and more. Each job entry performs a specific task and can be configured with various options and parameters.

Job entries are executed sequentially.
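In contrast to the parallel execution of transformation steps, the sequential nature of job entries can be modelled as a simple loop in which each entry reports success or failure and a failure stops the job. The Entry interface and entry names below are invented for this sketch and are not PDI classes.

```java
import java.util.List;

// Rough conceptual model of sequential job-entry execution -- not the PDI API.
public class JobSketch {

    // Hypothetical job entry: performs a task and reports success or failure.
    interface Entry {
        String name();
        boolean execute();   // true = success, false = failure
    }

    static Entry entry(String name, boolean succeeds) {
        return new Entry() {
            public String name() { return name; }
            public boolean execute() { return succeeds; }
        };
    }

    public static void main(String[] args) {
        // Job entries run one after another; this sketch stops at the first failure.
        List<Entry> entries = List.of(
                entry("Check source file exists", true),
                entry("Run load transformation", true),
                entry("Send success mail", true));

        for (Entry e : entries) {
            System.out.println("Running entry: " + e.name());
            if (!e.execute()) {
                System.out.println("Entry failed, stopping job: " + e.name());
                return;
            }
        }
        System.out.println("Job finished successfully.");
    }
}
```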
