Hierarchical Data Type

Handling Hierarchical Data Types - JSON & YAML ..


A hierarchical data type is a data type that represents a hierarchical structure of data, where each data element has a parent-child relationship with other data elements. A hierarchical data type can be used to store and query data that is organized in a tree-like fashion, such as organizational charts, file systems, or taxonomies.

A hierarchical data type has some advantages, such as compactness, depth-first ordering, and support for arbitrary insertions and deletions. However, it also has some limitations, such as the need for application logic to maintain the tree structure, the difficulty of handling multiple parents or complex relationships, and the lack of standardization across different database systems.

A common example is employees and managers: both are employees of a company. A manager can have employees they manage, and can in turn have a manager themselves.
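
For illustration, here is a minimal JSON sketch of such a parent-child structure (the names and fields are hypothetical):

```json
{
  "name": "Avery",
  "title": "Director",
  "reports": [
    {
      "name": "Blake",
      "title": "Manager",
      "reports": [
        { "name": "Casey", "title": "Engineer", "reports": [] }
      ]
    }
  ]
}
```

Every node has the same shape, and the parent-child relationship is expressed by nesting under reports.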

Hierarchical Data Type (HDT) is a new data type in PDI for handling structured, complex, or nested data based on the JSON and YAML (v10.1 release) formats.

There are 7 new plugins/steps:

• Hierarchical JSON Input - loads JSON data from a file or a previous step into an HDT field.

• Hierarchical JSON Output - converts an HDT field from a previous step into a JSON-formatted string.

• Hierarchical YAML Input - loads YAML data from a file or a previous step into an HDT field.

• Hierarchical YAML Output - converts an HDT field from a previous step into a YAML-formatted string.

• Extract to Rows - parses an HDT field and puts the extracted values into the PDI stream as regular row fields.

• Modify values from a single row - modifies values inside an HDT field using fields from the same row.

• Modify values from grouped rows - modifies values inside an HDT field using fields from a group of rows.


As part of the Pentaho Data Integration & Analytics plugin release journey to decouple plugins from the core Pentaho Server, Pentaho EE 9.5 GA is releasing new plugins and enhancements to its existing plugin collection.

This section is for Reference only. The plugin has already been downloaded and installed.

  1. Log into the 'Pentaho Support Portal' (https://support.pentaho.com/hc/en-us/articles/17591496360589-Pentaho-EE-Marketplace-Plugins-Release).

  2. Select the Pentaho version.

  3. Download selected plugin(s).

  4. Extract the HDT plugin.

```bash
cd ~/Downloads
unzip hierarchical-datatype-plugin-10.1.0.0-317-dist.zip
```

  5. Install the HDT plugin.

```bash
cd ~/Downloads/hierarchical-datatype-plugin-10.1.0.0-317-dist/hierarchical-datatype-plugin-10.1.0.0-317
./install.sh
```

  6. Accept the License Agreement -> Next.

  7. Browse to the ../data-integration/plugins directory.

  8. Click 'Next' and accept the overwrite warning.

  9. Restart Pentaho Data Integration and check for the Hierarchical folder.
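
To confirm the install, you can check that the plugin folder landed under the PDI plugins directory (the PDI install path below is an assumption; adjust it to your environment):

```bash
# List any hierarchical plugin folders under the PDI plugins directory
ls ~/Pentaho/design-tools/data-integration/plugins | grep -i hierarchical
```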

The following Labs highlight some of the Use Cases.

The Extract to rows step is obvious .. working in combination with the Hierarchical JSON Input step, you can filter and extract specific row(s).

  1. Open the following transformation:

~/Workshop--Pentaho-Data-Integration/Module 3

You can use the Hierarchical JSON Input step to load JSON data into PDI from a file or a previous step.

Filters let you load only the desired data, and the data can be split on a hierarchical data path using wildcards.

Source tab

  2. Double-click on the Hierarchical JSON Input step to see how it's configured.

| Option/Field | Description |
| --- | --- |
| From file | Select to specify the file path and name of the JSON file you want to load into PDI. |
| File name | File path and name of the JSON file to load. |
| From field | Select to use an incoming field as the JSON file path. |
| Field with file name | The incoming field containing the JSON file path. |


Output

The Split rows across path option is especially useful when loading JSON array objects within large JSON files.

When you use the Split rows across path field, you must specify all filter paths rooted at the split path. If you do not use the Split rows across path field, a normal HDT extraction path is used.

  3. Click on the Output tab.

| Field | Description |
| --- | --- |
| Output field | Specify the field name for the output column. |
| Split rows across path | Specify the JSON path to be parsed. |

In this example, suppose this JSON file contained other hierarchies based on business units, salary, managers, etc .. The split path $.employees[*] references all the employees entries; the syntax references the path from the root ($) to employees.
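
As a sketch, assume a source file shaped like the following (values are hypothetical). With Split rows across path set to $.employees[*], each element of the employees array becomes its own output row:

```json
{
  "company": "Acme",
  "employees": [
    {
      "firstName": "Jane",
      "lastName": "Doe",
      "address": { "city": "Orlando", "state": "FL" }
    },
    {
      "firstName": "John",
      "lastName": "Smith",
      "address": { "city": "Dallas", "state": "TX" }
    }
  ]
}
```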


Filters

Use the Path field (Optional) to specify the filters to apply while using the Split rows across path option to fetch a subset of a JSON file.

  4. Click on the Filters tab.

Pretty straightforward .. just filtering for firstName, lastName & address in employees.
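
For example, the three filters could be specified as full paths rooted at the split path (this exact configuration is an assumption based on the sample data above, not taken from the lab files):

```
$.employees[*].firstName
$.employees[*].lastName
$.employees[*].address
```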

You can use the Extract to rows step to parse hierarchical data type fields coming from a previous step and put the values into the PDI stream. This step supports wildcards for arrays and for string keys. After parsing the data, a data type is assigned to the data.

  5. Double-click on the Extract to rows step to see how it's configured.

| Option | Description |
| --- | --- |
| Step name | Specifies the unique name of the Extract to rows step on the canvas. You can customize the name or leave it as the default. |
| Source hierarchical field | Specifies the hierarchical input field name from the previous step, which will be used to extract the data. |
| Pass through fields | Select to add the input fields to the output fields. |

Fields

| Field | Description |
| --- | --- |
| Hierarchical data path | Complete path of the field name in the hierarchical field source. |
| Output field name | Name of the field that maps to the corresponding field in the hierarchical input source. |
| Type | Data type of the generated output field. |
| Path field name | (Optional) Adds the hierarchical path as a new output field with the specified name. |

So, from the employees data stream field, firstName, lastName and address are extracted.

The address path is written to another data stream field: address_Path.
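
For reference, a plausible Fields grid for this extraction (the paths and types are assumptions based on the sample data above):

| Hierarchical data path | Output field name | Type | Path field name |
| --- | --- | --- | --- |
| $.firstName | firstName | String | |
| $.lastName | lastName | String | |
| $.address | address | Hierarchical | address_Path |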

Use the Hierarchical JSON output step to convert hierarchical data from a previous step into JSON format.

  6. Double-click on the Hierarchical JSON Output step to see how it's configured.

| Field | Description |
| --- | --- |
| Input hierarchical field | Specifies the hierarchical input field name from a previous step, which is formatted to JSON. |
| Output field | Specifies the step output field to contain the generated JSON output. |

Options

| Option | Description |
| --- | --- |
| Pass output to servlet | Select to return the data using a web service instead of passing it to output rows. |
| Pretty print? | Select to format the output JSON data. |

The employees data stream field is formatted as a JSON object and written to the JSON Employee Details data stream field.

  7. RUN the transformation and 'Preview data'.

The JSON output can be consumed further downstream.
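
With Pretty print? selected, each row's JSON Employee Details field would carry a formatted object roughly like this (values are hypothetical, following the sample data above):

```json
{
  "firstName": "Jane",
  "lastName": "Doe",
  "address": {
    "city": "Orlando",
    "state": "FL"
  }
}
```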

