IBM InfoSphere DataStage



IBM InfoSphere DataStage is an ETL tool and part of the IBM Information Platforms Solutions suite and IBM InfoSphere. It uses a graphical notation to construct data integration solutions and is available in various versions such as the Server Edition and the Enterprise Edition.

DataStage began as a data extraction and transformation program for Windows NT/2000 servers, used to pull data from legacy databases, flat files and relational databases and convert it into data marts and data warehouses. Formerly a product of Ascential Software Corporation, which IBM acquired in 2005, DataStage became a core component of the IBM WebSphere Data Integration suite.

DataStage originated at VMark[1], a spin-off from Prime Computer that developed two notable products: the UniVerse database and the DataStage ETL tool.


The first VMark ETL prototype was built by Lee Scheffler in the first half of 1996[1].

Peter Weyman was VMark VP of Strategy and identified the ETL market as an opportunity. He appointed Lee Scheffler as the architect and conceived the product brand name "Stage" to signify modularity and component-orientation[2].

This suffix was used to name DataStage and was subsequently reused in the related products QualityStage, ProfileStage, MetaStage and AuditStage.

Lee Scheffler presented the DataStage product overview to the board of VMark in June 1996 and it was approved for development.

The product was in alpha testing in October, beta testing in November and was generally available in January 1997.

VMark acquired UniData in October 1997 and renamed itself Ardent Software[3]. In 1999 Ardent Software was acquired by Informix[4], the database software vendor.

In April 2001 IBM acquired Informix but took only the database business, leaving the data integration tools to be spun off as an independent software company called Ascential Software[5].

In November 2001, Ascential Software Corp. of Westboro, Mass. acquired privately held Torrent Systems Inc. of Cambridge, Mass. for $46 million in cash.

Ascential announced a commitment to integrate Orchestrate's parallel processing capabilities directly into the DataStageXE platform[6].

In March 2005 IBM acquired Ascential Software[7] and made DataStage part of the WebSphere family as WebSphere DataStage.

In 2006 the product was released as part of the IBM Information Server under the Information Management family but was still known as WebSphere DataStage.

In 2008 the suite was renamed to InfoSphere Information Server and the product was renamed to InfoSphere DataStage[8].

•Enterprise Edition: the name given to the version of DataStage with a parallel processing architecture and parallel ETL jobs.

•Server Edition: the name of the original version of DataStage representing Server Jobs. Early DataStage versions only contained Server Jobs. DataStage 5 added Sequence Jobs and DataStage 6 added Parallel Jobs via Enterprise Edition.

•MVS Edition: mainframe jobs, developed on a Windows or Unix/Linux platform and transferred to the mainframe as compiled mainframe jobs.

•DataStage for PeopleSoft: a server edition with prebuilt PeopleSoft EPM jobs, under an OEM arrangement with PeopleSoft and Oracle Corporation.

•DataStage TX: for processing complex transactions and messages, formerly known as Mercator.

•DataStage SOA: the Real Time Integration pack, which can turn server or parallel jobs into SOA services.




Monday, November 2, 2009

DataStage folder structure

The following outlines an example of the folder structure used in the project.
General conventions used:

Folders beginning with zz exist in the development environment only; they are not migrated to SIT, UAT or PROD. zz folders are used for:
1) Load profile data
2) Copy production data to development
3) Sandbox for individual developers' play jobs
4) Obsolete, to temporarily hold jobs before they are deleted

Top-level folders are not numbered:
1) _COMMON is used for common objects. The underscore ensures it lists at the top.
2) Each run stream has a top-level folder, in uppercase, e.g. ALFA, BPS, CMPF.

Second-level folders are numbered by high-level processing step:
00 Sequences. Only sequences called from Autosys are included here. Sub-sequences (if any) are included in the appropriate sub-category.
10 Preprocess
20 Extract and Validate
30 Load Staging
40 Transform
50 Target Delta
60 Load GDW
70 Reserved (possibly for Reconciliation)
80 Post process
90 Utility (not yet in use)
99 Obsolete. Used to mark jobs that have already been promoted to production as obsolete. A job is moved to this folder, with a change history comment explaining why it was made obsolete, and is then promoted to production; in so doing the job is moved out of its original folder.

Note that not all folders will necessarily contain objects.

Third-level folders are created as required.

A similar folder structure is used for Shared Containers and Routines.
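The conventions above can be sketched as a directory tree. This is purely illustrative: DataStage repository folders are not filesystem directories, and the project root name and the single example stream ALFA are assumptions for the sketch.

```shell
#!/bin/sh
# Illustrative sketch of the folder convention described above.
# "datastage_project" is a hypothetical root; ALFA is the example
# run stream from the text, and zzSandbox is a dev-only zz folder.
PROJECT=datastage_project

for STREAM in _COMMON ALFA; do
  for STEP in "00 Sequences" "10 Preprocess" "20 Extract and Validate" \
              "30 Load Staging" "40 Transform" "50 Target Delta" \
              "60 Load GDW" "70 Reserved" "80 Post process" \
              "90 Utility" "99 Obsolete"; do
    mkdir -p "$PROJECT/$STREAM/$STEP"
  done
done

# zz folders stay in development only and are never migrated.
mkdir -p "$PROJECT/zzSandbox"

ls "$PROJECT"
```

Numbering the second-level folders in steps of ten keeps them sorted in processing order while leaving room (70, 90) for categories added later.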
