IBM InfoSphere DataStage



IBM InfoSphere DataStage is an ETL tool and part of the IBM Information Platforms Solutions suite and IBM InfoSphere. It uses a graphical notation to construct data integration solutions and is available in various versions such as the Server Edition and the Enterprise Edition.

DataStage is a data extraction and transformation program for Windows NT/2000 servers that is used to pull data from legacy databases, flat files and relational databases and convert it into data marts and data warehouses. Formerly a product of Ascential Software Corporation, which IBM acquired in 2005, DataStage became a core component of the IBM WebSphere Data Integration suite.

DataStage originated at VMark[1], a spin-off from Prime Computer that developed two notable products: the UniVerse database and the DataStage ETL tool.


The first VMark ETL prototype was built by Lee Scheffler in the first half of 1996[1].

Peter Weyman was VMark VP of Strategy and identified the ETL market as an opportunity. He appointed Lee Scheffler as the architect and conceived the product brand name "Stage" to signify modularity and component-orientation[2].

This tag was used to name DataStage and was subsequently reused in the related products QualityStage, ProfileStage, MetaStage and AuditStage.

Lee Scheffler presented the DataStage product overview to the board of VMark in June 1996 and it was approved for development.

The product was in alpha testing in October, beta testing in November and was generally available in January 1997.

VMark acquired UniData in October 1997 and renamed itself to Ardent Software[3]. In 1999 Ardent Software was acquired by Informix[4] the database software vendor.

In April 2001 IBM acquired Informix and took just the database business, leaving the data integration tools to be spun off as an independent software company called Ascential Software[5].

In November 2001, Ascential Software Corp. of Westboro, Mass. acquired privately held Torrent Systems Inc. of Cambridge, Mass. for $46 million in cash.

Ascential announced a commitment to integrate Orchestrate's parallel processing capabilities directly into the DataStage XE platform[6].

In March 2005 IBM acquired Ascential Software[7] and made DataStage part of the WebSphere family as WebSphere DataStage.

In 2006 the product was released as part of the IBM Information Server under the Information Management family but was still known as WebSphere DataStage.

In 2008 the suite was renamed to InfoSphere Information Server and the product was renamed to InfoSphere DataStage[8].

•Enterprise Edition: the name given to the version of DataStage with a parallel processing architecture and parallel ETL jobs.

•Server Edition: the name of the original version of DataStage representing Server Jobs. Early DataStage versions only contained Server Jobs. DataStage 5 added Sequence Jobs and DataStage 6 added Parallel Jobs via Enterprise Edition.

•MVS Edition: mainframe jobs, developed on a Windows or Unix/Linux platform and transferred to the mainframe as compiled mainframe jobs.

•DataStage for PeopleSoft: a server edition with prebuilt PeopleSoft EPM jobs under an OEM arrangement with PeopleSoft and Oracle Corporation.

•DataStage TX: for processing complex transactions and messages, formerly known as Mercator.

•DataStage SOA: Real Time Integration pack can turn server or parallel jobs into SOA services.




Friday, August 22, 2008

Processing a header and trailer text file in DataStage


Example case

The input file in our scenario is a stream of records representing invoices. It is a text file in a header-trailer format and has the following structure:

- A header starts with a letter H which is followed by an invoice number, a customer number, a date and invoice currency

- Every item starts with a letter I which is followed by product ID, product age, quantity and net value

- And the trailer starts with a T and contains two values which fulfill the role of a checksum: the total number of invoice lines and the total net value.
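The trailer checksum described above can be verified before loading. The following Python sketch illustrates the idea; the sample lines and field layout are hypothetical, not taken from the actual extract:

```python
# Hypothetical sample in the header/items/trailer layout described above.
sample = [
    "H000123450000077 20080801PLN",   # header: invoice no, customer, date, currency
    "IPROD01  010 2 100,50",          # item: product ID, product age, qty, net value
    "IPROD02  005 1 49,50",
    "T2 150,00",                      # trailer: item line count, total net value
]

def check_trailer(lines):
    """Return True if the trailer matches the item lines (count and net total)."""
    items = [l for l in lines if l.startswith("I")]
    trailer = next(l for l in lines if l.startswith("T"))
    count, total = trailer[1:].split()
    net = sum(float(l.split()[-1].replace(",", ".")) for l in items)
    return len(items) == int(count) and abs(net - float(total.replace(",", "."))) < 0.005

print(check_trailer(sample))  # True for the sample above
```

A file failing this check would typically be rejected before the transformers run.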

Header and trailer input text file in DataStage:







Solution
DataStage job design which solves the problem of loading an extract structured into headers and items:







Detailed illustration of each component of the job

TrsfGetKinds transformer - the record type is extracted from each input line and written into a kind column.

The transformer reads the first character of each line to mark records and divide them into kinds:
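Outside DataStage, the same classification step can be sketched in Python (the function name and pair-based output are assumptions for illustration):

```python
def add_kind_column(lines):
    """Mimic TrsfGetKinds: emit (kind, data_line) pairs, where kind is the
    first character of each line: H for headers, I for items, T for trailers."""
    for line in lines:
        kind = line[:1]
        if kind not in ("H", "I", "T"):
            raise ValueError(f"unrecognised record: {line!r}")
        yield kind, line

# Usage: list(add_kind_column(["H...", "I...", "T..."]))
```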





TrsfAssignHeaders transformer assigns an invoice header to each line of the input flow. The invoice header is an 8-character key which is extracted from the data line using the following formula: L02.data_line[1,8]
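The stage-variable logic amounts to remembering the last header key seen and stamping it onto every subsequent row. A minimal Python sketch, where the key is taken from the formula above (1-based start 1, length 8) and everything else is assumed:

```python
def assign_headers(lines):
    """Attach the current invoice header key to every input line.

    The key is the first 8 characters of the most recent header line,
    matching the transformer formula data_line[1,8].
    """
    invoice_key = None  # plays the role of the stage variable
    for line in lines:
        if line.startswith("H"):
            invoice_key = line[:8]
        yield invoice_key, line
```

Because the input is read in order, every item and trailer row inherits the key of the header that precedes it.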

A DataStage transformer which assigns headers using a stage variable:




TrsfReformat transformer splits the data flow into invoice headers and invoice lines. An InvoiceLineID stage variable is used to populate a sequence number for each invoice line. In order to load the net value correctly, the ereplace function replaces commas with dots in this numeric field.
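In Python terms, this step routes keyed rows to two outputs, numbers the item rows per invoice, and converts the decimal comma the way ereplace does. The field positions and row shape below are assumptions, and trailer rows are simply dropped in this sketch:

```python
def split_headers_and_lines(keyed_rows):
    """Split (invoice_key, data_line) pairs into header and item outputs.

    Item rows get a per-invoice sequence number (the InvoiceLineID stage
    variable) and a net value with the decimal comma replaced by a dot,
    as ereplace(net_value, ",", ".") does in the job.
    """
    headers, items = [], []
    line_id = 0
    for invoice_key, data in keyed_rows:
        if data.startswith("H"):
            line_id = 0                      # restart numbering per invoice
            headers.append((invoice_key, data))
        elif data.startswith("I"):
            line_id += 1
            net_value = data.split()[-1].replace(",", ".")
            items.append((invoice_key, line_id, net_value))
    return headers, items
```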


The following transformer splits the flow into headers and items and reformats records into a corresponding structure:



Target Oracle tables with correctly loaded invoice headers and lines:









Processing a header and trailer text file

Goal

Process a text file which contains records arranged in blocks consisting of a header record, details (items, body) and a trailer. The aim is to normalize records and load them into a relational database structure.

Scenario overview and details

Typically, header and item processing needs to be implemented when processing files that originate from mainframe systems, as well as EDI transmission files, SWIFT schemas and EPIC files.

The input file in our scenario is a stream of records representing invoices. Each invoice consists of:

- A header containing invoice number, dates, customer reference and other details

- One or more items representing ordered products, including an item number, quantity, price and value

- A trailer which contains summary information for all the items

The records are distinguished by the first character in each line: H stands for headers, I for items and T for the trailers.
The lines within a section of the input file are fixed-length (so, for example, all headers have the same number of characters, but header and item lengths differ).
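Because each record type is fixed-length, fields can be cut out by position once the type is known. A sketch of that idea in Python, with made-up column offsets (the real offsets depend on the extract definition):

```python
# Hypothetical column layouts, keyed by the record-type character.
LAYOUTS = {
    "H": {"invoice_no": (1, 9), "customer": (9, 16), "date": (16, 24), "currency": (24, 27)},
    "I": {"product_id": (1, 9), "quantity": (9, 12), "net_value": (12, 20)},
    "T": {"line_count": (1, 4), "net_total": (4, 12)},
}

def parse_fixed(line: str) -> dict:
    """Slice a fixed-length record according to its first-character type."""
    layout = LAYOUTS[line[0]]
    return {name: line[start:end].strip() for name, (start, end) in layout.items()}
```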


Input data:

Text file in a header-trailer format. The file presented in our example is divided into headers, items and trailers.

- A header starts with a letter H and then contains an invoice number, a customer number, a date and invoice currency

- Every item starts with a letter I which is followed by product ID, product age, quantity and net value

- The trailer starts with a T and contains two values which act as a checksum: the total number of invoice lines and a total net value.


Sample header and trailer input text file:







Output data:

INVC_HEADER and INVC_LINE relational tables: one with invoice headers and the other with the invoice lines

rejects_080822.txt - a text file with loading errors (part of the desired output)
