Complex Flat File Stage Datastage Example Programs

InfoSphere DataStage is also flexible about meta data. It can cope with the situation where meta data is not fully defined.

The Complex Flat File stage supports multiple outputs. An output link specifies the data you are extracting, which is a stream of rows to be read. When using the Complex Flat File stage to process a large number of columns (for example, more than 300), use only one output link in your job.

You can define part of your schema and specify that, if your job encounters extra columns that are not defined in the metadata when it runs, it will adopt these extra columns and propagate them through the rest of the job.

This is known as Runtime Column Propagation (RCP).

RCP can be enabled for a project via the Administrator client, and set for individual links on the Columns tab of the Output page for most stages, or on the General tab of the Output page for Transformer stages.

You should always ensure that runtime column propagation is turned on if you want to use schema files to define column metadata.

When a DataStage job runs, the set of columns can change from one stage to the next, and without RCP we can end up defining and carrying columns a stage does not actually need.

If we only want to define the columns required by the target, we can do this by enabling RCP.

With RCP enabled, only the columns we define are operated on explicitly, while the remaining columns are still propagated through to the target.

RCP is especially useful in reusable jobs, where different metadata comes into the picture at run time.

Using RCP With Sequential Stages

Runtime column propagation (RCP) allows DataStage to be flexible about the columns you define in a job.

If RCP is enabled for a project, you can just define the columns you are interested in using in a job, but ask DataStage to propagate the other columns through the various stages.

So such columns can be extracted from the data source and end up on your data target without explicitly being operated on in between.

Sequential files, unlike most other data sources, do not have inherent column definitions, and so DataStage cannot always tell where there are extra columns that need propagating.

You can only use RCP on sequential files if you have used the Schema File property to specify a schema which describes all the columns in the sequential file.

You need to specify the same schema file for any similar stages in the job where you want to propagate columns. Stages that will require a schema file are:

  • Sequential File
  • File Set
  • External Source
  • External Target
  • Column Import
  • Column Export
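
As a sketch, a schema file for a simple comma-delimited sequential file might look like the following. The column names and properties are illustrative, not taken from any particular job:

```
// Orchestrate schema file (illustrative names and properties): describes a
// comma-delimited sequential file so RCP can propagate every column.
record
  {final_delim=end, delim=',', quote=double}
(
  CustomerId: int32;
  CustomerName: string[max=30];
  Balance: nullable decimal[10,2];
)
```

The same schema file would be set in the Schema File property of every stage in the job that should propagate these columns.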

DataStage Tutorials Overview

Welcome to DataStage Tutorials. The objective of these tutorials is to gain an understanding of the IBM DataStage tool. In these tutorials, we will cover topics such as DataStage architecture, job sequencing in DataStage, and containers and joins in DataStage.

In addition, we will cover common interview questions and issues in DataStage.

DataStage Overview

DataStage is a GUI-based ETL tool used to build data warehouse and data mart applications.

DataStage has three types of jobs:

1. Server Jobs

2. Parallel Jobs

3. Mainframe Jobs

New Features In DataStage

Introduction

DataStage continues to enhance its capabilities for managing data quality and data integration solutions. DataStage 8.0 introduced many new features to make development and maintenance of projects easier. These enhancements include data quality management, new connectivity methods, and an implementation of slowly changing dimensions.

What is IBM Information Server?

IBM Information Server consists of the following components: WebSphere DataStage and QualityStage, WebSphere Information Analyzer, Federation Server, and Business Glossary, along with common administration, logging, and reporting. These components are designed to provide much more efficient ways to manage metadata and develop ETL solutions, and they can be deployed based on client need.

Top ten features

- The Metadata Server

With the Hawk release, DataStage gained common administration, logging, and reporting, which improves the metadata reporting available compared to prior releases.

- Quality Stage

Data quality is highly critical for data integration projects. In earlier releases, quality tooling such as MetaStage added a lot of additional overhead in installation, training, and implementation. With the new release of QualityStage, integration projects that use standardization, matching, and survivorship to improve quality become more accessible and easier to build. Developers are also able to design jobs with data transformation stages and data quality stages in the same session; the designer is called the DataStage and QualityStage Designer in the current release, reflecting this combined usage.

- Frictionless Connectivity and Connection Objects

Managing connection information, and propagating it between different environments, has traditionally added development and maintenance overhead; in earlier releases, the development team could spend considerable time resolving database connectivity issues. The new connection objects in DataStage 8 make connecting to remote databases easier, ensure reusability, and reduce the risk of data issues caused by wrong connection information.

- Parallel job range lookup

It is always valuable to have different options for lookup access, and looking up over a range is the better option when a data range is available, as it improves performance. Range lookup has been merged into the existing lookup form and is easy to use.

- SCD

Data warehouse developers have needed to build complex jobs to implement slowly changing dimensions. With the Slowly Changing Dimension stage introduced in DataStage 8, the following can be done easily: surrogate key generation, the slowly changing dimension processing itself, and updates passed to in-memory lookups. That's it for me with DBMS-generated keys; I'm only doing the keys in the ETL job from now on! DataStage server jobs have the hash file lookup, where you can read and write to it at the same time; parallel jobs now get the updateable lookup.
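
As an illustration, the Type 2 pattern this stage automates can be sketched in plain Python. All field names and the sample data below are invented for the example; this is not DataStage code:

```python
# Hedged sketch of Type 2 slowly-changing-dimension maintenance: when a
# tracked attribute changes, expire the current dimension row and insert
# a new current row with a fresh surrogate key.
def apply_scd2(dimension, incoming, next_key, today):
    for row in incoming:
        current = next((d for d in dimension
                        if d["cust_id"] == row["cust_id"] and d["current"]),
                       None)
        if current is None or current["city"] != row["city"]:
            if current is not None:
                current["current"] = False        # expire the old version
                current["end_date"] = today
            dimension.append({"sk": next_key, "cust_id": row["cust_id"],
                              "city": row["city"], "start_date": today,
                              "end_date": None, "current": True})
            next_key += 1
    return next_key

dim = [{"sk": 1, "cust_id": "C1", "city": "Leeds",
        "start_date": "2013-01-01", "end_date": None, "current": True}]
incoming = [{"cust_id": "C1", "city": "York"},   # changed attribute
            {"cust_id": "C2", "city": "Hull"}]   # brand-new customer
next_sk = apply_scd2(dim, incoming, next_key=2, today="2013-04-27")
print(len(dim), next_sk)   # prints "3 4": one expired row plus two new rows
```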

- Collaboration

This new feature allows a developer to open a job that is already open by another developer; the second copy is read-only. This reduces wait time when a job is locked by another user. New enhancements also allow you to unlock a job associated with a disconnected session from the web console more easily than in prior releases.

- Session Disconnection

With this feature an administrator can disconnect sessions and unlock jobs.

- Improved SQL Builder

This feature reduces the effort spent synchronizing the SQL select list with the DataStage column list, which helps prevent column mismatches. In addition, in the ODBC Connector you can build complex queries through the GUI, including adding columns and a WHERE clause to the statement.

- Improved job startup times

With this enhancement, when many small parallel jobs are invoked they have less impact on long-running DataStage jobs. Connectivity and resource allocation for parallel jobs have improved, and load is balanced based on job requirements.

- Common logging

With this new feature, DataStage has introduced common logging of job logs, which makes searching the logs easier. DataStage has also introduced time-based and record-based job monitoring.

Change Data Capture

These are add-on products (at an additional fee) that attach themselves to source databases and perform change data capture. Most source system database owners I've come across don't like you playing with their production transactional database and will not let you near it with a ten-foot pole, but I guess there are exceptions:

-Oracle

-Microsoft SQL Server

-DB2 for z/OS

-IMS

There are three ways to get incremental feeds on the Information Server: the CDC products for DataStage, the Replication Server (renamed Information Integrator: Replication Edition, does DB2 replication very well) and the change data capture functions within DataStage jobs such as the parallel CDC stage.

Removed Functions

These are the functions that are not in DataStage 8:

-dssearch command line function

-dsjob '-import'

-Version Control tool

-Released jobs

-Oracle 8i native database stages

-ClickPack

The loss of the Version Control tool is not a big deal as the import/export functions have been improved. Building a release file as an export in version 8 is easier than building it in the Version Control tool in version 7.

Database Connectivity

The common connection objects functionality means the very wide range of DataStage database connections are now available across Information Server products.

Latest supported databases for version 8:

-DB2 8.1, 8.2 and 9.1

-Oracle 9i, 10g, 10gR2 (not Oracle 8)

-SQL Server 2005 plus stored procedures.

-Teradata v2r5.1, v2r6.0, v2r6.1 (DB server) / 8.1 (TTU) plus Teradata Parallel Transport (TPT) and stored procedures and macro support, reject links for bulk loads, restart capability for parallel bulk loads.

-Sybase ASE 15, Sybase IQ 11.5, 12.5, 12.7

-Informix 10 (IDS)

-SAS 612, 8.1, 9.1 and 9.1.3

-IBM WS MQ 6.1, WS MB 5.1

-Netezza v3.1

-ODBC 3.5 standard and level 3 compliant

-UniData 6 and UniVerse ?

-Red Brick ?

New Stages

A new stage from the IBM software family, new stages from new partners, and the convergence of QualityStage functions into DataStage. Apart from the SCD stage, these all come at an additional cost.

-WebSphere Federation and Classic Federation

-Netezza Enterprise Stage

-SFTP Enterprise Stage

-iWay Enterprise Stage

-Slowly Changing Dimension: for type 1 and type 2 SCDs.

- Six QualityStage stages

New Functions Existing Stages

-Complex Flat File Stage: Multi Format File (MFF) support in addition to the existing COBOL file support.

-Surrogate Key Generator: the key source is a new feature of this stage; it is maintained via an integrated state file or a DBMS sequence.

-Lookup Stage: Range Look-up is a new function, equivalent to the BETWEEN operator. Lookup against a range of values was difficult to implement in previous DataStage versions. With this functionality in the Lookup stage, comparing a source column to a range of two lookup columns, or a lookup column to a range of two source columns, can be easily implemented.
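
As an illustrative sketch (plain Python, not DataStage code), a range lookup matches each source value against reference rows whose low/high columns bound it; the column names below are made up for the example:

```python
# Illustrative sketch of a range lookup: each source value is matched
# against reference rows whose low/high columns bound it,
# i.e. low <= value <= high (the BETWEEN condition).
reference = [
    {"low": 0,   "high": 49,  "band": "small"},
    {"low": 50,  "high": 199, "band": "medium"},
    {"low": 200, "high": 999, "band": "large"},
]

def range_lookup(value, ref_rows):
    for row in ref_rows:
        if row["low"] <= value <= row["high"]:
            return row["band"]
    return None  # no matching range: such rows would go to a reject link

print(range_lookup(75, reference))   # prints "medium"
```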

-Transformer Stage: new surrogate key functions Initialize() and GetNextKey().
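
The state-file behaviour behind these functions can be sketched in Python. Initialize() and GetNextKey() are the real Transformer functions, but the class and file name here are hypothetical and this is only an illustration of the idea, not the product's implementation:

```python
# Illustrative sketch of a state-file key source, mirroring what
# Initialize() and GetNextKey() do in the Transformer stage.
import os

class SurrogateKeySource:
    def __init__(self, state_file):
        self.state_file = state_file
        self.next_key = None

    def initialize(self):
        # Resume from the last issued key in the state file (or start at 1).
        if os.path.exists(self.state_file):
            with open(self.state_file) as f:
                self.next_key = int(f.read().strip()) + 1
        else:
            self.next_key = 1

    def get_next_key(self):
        key = self.next_key
        self.next_key += 1
        # Persist the last issued key so a later run continues from here.
        with open(self.state_file, "w") as f:
            f.write(str(key))
        return key

src = SurrogateKeySource("keys.state")
src.initialize()
print([src.get_next_key() for _ in range(3)])   # fresh run: [1, 2, 3]
```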

-Enterprise FTP Stage: you can now choose between FTP and SFTP transfer.

-Secure FTP (SFTP): select this option if you want to transfer files between computers over a secure channel. Secure FTP (SFTP) uses the SSH (Secure Shell) protected channel for data transfer between computers over a nonsecure network such as a TCP/IP network. Before you can use SFTP to transfer files, you should configure the SSH connection without any passphrase for RSA authentication.

New Database Connector Functions

This is a big area of improvement.

LOB/BLOB/CLOB Data: pictures, documents, and other objects of any size can now be moved between databases. A connector can transfer large objects (LOBs) using inline or reference methods. However, a connector is the only stage type that handles the reference method, so another connector is needed later in the job to transfer the LOB inline.

Reject Links: the connector has its own reject-handling function, which eliminates the need to add a Modify or Transformer stage for capturing SQL errors or for aborting jobs. A threshold for terminating the job run can be specified as either a number of rows or a percentage of rows rejected.

Schema Reconciliation: Connector has a schema reconciliation function that automatically compares DataStage schemas to external-resource schemas such as a database. Schemas include data types, attributes and field lengths. Based on the reconciliation rules that you specify, runtime errors or extra transformation on mismatched schemas can be avoided.

Improved SQL Builder that supports more database types.

The connector is the best stage type to use for your database because it gives the maximum parallel performance and offers more features than the other database stage types.

Test Button: the Test button on connectors allows developers to test database connections without having to view the data or run the job.

Connectors are for accessing external data sources and can be used to read, write, look up and filter data or simply to test the database connectivity during job design.

Drag and drop your configured database connections onto jobs.

Before and after SQL defined per job or per node with a failure handling option. Neater than previous versions.

DataStage 8 gives you access to the latest versions of databases that DataStage 7 may never get. Extra functions on all connectors include improved reject handling, LOB support, and easier stage configuration.

Database Repository

Note the database compatibility for the Metadata Server repository is the latest versions of the three DBMS engines. DB2 is an optional extra in the bundle if you don't want to use an existing database.

-IBM Information Server does not support the Database Partitioning Feature (DPF) for use in the repository layer

-DB2 Restricted Enterprise Edition 9 is included with IBM Information Server and is an optional part of the installation; however, its use is restricted to hosting the IBM Information Server repository layer, and it cannot be used for other applications

  • Oracle 10g
  • SQL Server 2005

Enterprise Packs

Different enterprise packs are available in version 8. These packs are:

-SAP BW Pack

-BAPI: (Staging Business API) loads from any source to BW.

-OpenHub: extract data from BW.

-SAP R/3 Pack

-ABAP: (Advanced Business Application Programming) auto-generate ABAP, Extraction Object Builder, SQL Builder, load and execute ABAP from DataStage, CPI-C data transfer, FTP data transfer, ABAP syntax check, background execution of ABAP.

-IDoc: create source system, IDoc listener for extract, receive IDocs, send IDocs.

-BAPI: BAPI explorer; import/export tables, parameters, activation; call and commit BAPI.

-Siebel Pack

-EIM: (Enterprise Integration Manager) interface tables

-Business Component: access business views via Siebel Java Data Bean

-Direct Access: use a metadata browser to select data to extract

-Hierarchy: for extracts from Siebel to SAP BW.

-Oracle Applications Pack

-Oracle flex fields: extract using enhanced processing techniques.

-Oracle reference data structures: simplified access using the Hierarchy Access component.

-Metadata browser and importer

-DataStage Pack for PeopleSoft Enterprise

-Import business metadata via a metadata browser.

-Extract data from PeopleSoft tables and trees.

-JD Edwards Pack

-Standard ODBC calls

-Pre-joined database tables via business views

Code Packs

These packs can be used by server and/or parallel jobs to interact with other coding languages. This lets you access programming modules or functions within a job:

-Java Pack: produce or consume rows for DataStage parallel or server jobs using a Java transformer.

-Web Service Pack: Access web services operations in a Server job transformer or Server routine.

-XML Pack: Read, write or transform XML files in parallel or server jobs.

The DataStage stages, custom stages, transformer functions, and routines will usually be faster at transforming data than these packs; however, the packs are useful for reusing existing code.

Database OPEN and CLOSE Commands

The native parallel database stages provide options for specifying OPEN and CLOSE commands. These options allow commands (including SQL) to be sent to the database before (OPEN) or after (CLOSE) all rows are read/written/loaded to the database. OPEN and CLOSE are not offered by plug-in database stages.

For example, the OPEN command could be used to create a temporary table, and the CLOSE command could be used to select all rows from the temporary table and insert into a final target table.
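
Sketched as SQL, the OPEN and CLOSE properties for a stage that loads a scratch table might look like the following. The table names are hypothetical and the exact CREATE TABLE syntax varies by database:

```sql
-- OPEN command: runs before any rows are written. It creates the
-- scratch table that the stage itself then loads rows into.
-- (DB2-style syntax; adjust for your database.)
CREATE TABLE orders_tmp AS (SELECT * FROM orders_final) WITH NO DATA;

-- CLOSE command: runs after all rows have been loaded. It moves the
-- rows into the final target table and drops the scratch table.
INSERT INTO orders_final SELECT * FROM orders_tmp;
DROP TABLE orders_tmp;
```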

As another example, the OPEN command can be used to create a target table, including database-specific options (tablespace, logging, constraints, etc.) not possible with the "Create" option. In general, don't let EE generate target tables unless they are used for temporary storage; there are few options for specifying CREATE TABLE options, and doing so may violate data-management (DBA) policies.

It is important to understand the implications of specifying a user-defined OPEN and CLOSE command. For example, when reading from DB2, a default OPEN statement places a shared lock on the source. When specifying a user-defined OPEN command, this lock is not sent – and should be specified explicitly if appropriate.

Further details are outlined in the respective database sections of the Orchestrate Operators Reference which is part of the Orchestrate OEM documentation.

Data Stage Designer

DataStage Designer is used to design ETL jobs. Some of the functionality provided is detailed below:

-Create DS jobs

-Create and use parameters within jobs

-Insert and link stages

-Configure stage and job properties

-Load and save table definitions

-Save and compile DS jobs

-Run jobs

Logging-In to DS Designer

The ‘Attach to Project’ window is used to log in to DataStage.

DS Log On Window

Note: Do not use the ‘Omit’ option while working in the UNIX environment. This option is used for Windows authentication and should not be used when DataStage is run on UNIX.

The Data Stage Job

Starting Data Stage

The screen below displays when the user successfully logs-in.

DS Job Selection

Select a ‘New Parallel Job’ from the new job window.

Note: Options to choose from ‘Existing’ jobs or from ‘Recent’ jobs are available from the tabs of the same names.

DataStage EE Canvas

A typical DS Enterprise Edition canvas looks like the example below.

DS Canvas--Typical Data Stage Parallel Job

DS Stages and Usage

DataStage stages are divided into two categories:

1. Active Stages

2. Passive Stages

Active Stages: Active stages model the flow of data and provide mechanisms for combining data streams, aggregating data, and converting data from one data type to another.

Ex: Transformer, Aggregator, Sort, Remove Duplicates, Switch, etc.

Passive Stages: A passive stage handles access to databases for the extraction or writing of data.

Ex: Sequential File, File Set, Data Set, DB2, Oracle, and Hash File stages

Conclusion

The look and feel of the DataStage and QualityStage canvas remains the same, but the new functionalities are major enhancements over the previous version. Data Connection Objects, Parameter Sets, Range Look-up, and the Slowly Changing Dimension stage are all designed to simplify design, help cut implementation effort, and reduce cost. Advanced Find provides a good way to do impact analysis, an important step in project management, and Resource Estimation is as important for project planning. Meanwhile, the Performance Analysis tool is another useful feature that can be used throughout the lifecycle of a job: by knowing what causes a performance bottleneck, production support groups can better cope with ever-shrinking batch windows.

While Advanced Find will not perform a Replace function and SQL Builder will not let us build complex SQL, all the changes in version 8 have a positive impact on job development, production support, and project management. Combined with the features offered in Information Server, existing customers who are looking to upgrade, as well as new DataStage clients, will benefit from the new enhancements.
