IBM InfoSphere DataStage



IBM InfoSphere DataStage is an ETL tool and part of the IBM Information Platforms Solutions suite and the IBM InfoSphere family. It uses a graphical notation to construct data integration solutions and is available in several editions, such as the Server Edition and the Enterprise Edition.

DataStage is a data extraction and transformation program for Windows NT/2000 servers that pulls data from legacy databases, flat files and relational databases and loads it into data marts and data warehouses. Formerly a product of Ascential Software Corporation, which IBM acquired in 2005, DataStage became a core component of the IBM WebSphere Data Integration suite.

DataStage originated at VMark[1], a spin-off from Prime Computer that developed two notable products: the UniVerse database and the DataStage ETL tool.


The first VMark ETL prototype was built by Lee Scheffler in the first half of 1996[1].

Peter Weyman was VMark VP of Strategy and identified the ETL market as an opportunity. He appointed Lee Scheffler as the architect and conceived the product brand name "Stage" to signify modularity and component-orientation[2].

This tag was used to name DataStage and was subsequently reused in the related products QualityStage, ProfileStage, MetaStage and AuditStage.

Lee Scheffler presented the DataStage product overview to the board of VMark in June 1996 and it was approved for development.

The product was in alpha testing in October 1996, in beta testing in November 1996, and became generally available in January 1997.

VMark acquired UniData in October 1997 and renamed itself to Ardent Software[3]. In 1999 Ardent Software was acquired by Informix[4] the database software vendor.

In April 2001 IBM acquired Informix but took only the database business, leaving the data integration tools to be spun off as an independent software company called Ascential Software[5].

In November 2001, Ascential Software Corp. of Westboro, Mass. acquired privately held Torrent Systems Inc. of Cambridge, Mass. for $46 million in cash.

Ascential announced a commitment to integrate the parallel processing capabilities of Torrent's Orchestrate product directly into the DataStageXE platform[6].

In March 2005 IBM acquired Ascential Software[7] and made DataStage part of the WebSphere family as WebSphere DataStage.

In 2006 the product was released as part of the IBM Information Server under the Information Management family but was still known as WebSphere DataStage.

In 2008 the suite was renamed to InfoSphere Information Server and the product was renamed to InfoSphere DataStage[8].

•Enterprise Edition: the name given to the version of DataStage that had a parallel processing architecture and ran parallel ETL jobs.

•Server Edition: the name of the original version of DataStage, representing Server Jobs. Early DataStage versions contained only Server Jobs; DataStage 5 added Sequence Jobs, and DataStage 6 added Parallel Jobs via Enterprise Edition.

•MVS Edition: mainframe jobs, developed on a Windows or Unix/Linux platform and transferred to the mainframe as compiled mainframe jobs.

•DataStage for PeopleSoft: a server edition with prebuilt PeopleSoft EPM jobs under an OEM arrangement with PeopleSoft and Oracle Corporation.

•DataStage TX: for processing complex transactions and messages, formerly known as Mercator.

•DataStage SOA: the Real Time Integration pack, which can turn server or parallel jobs into SOA services.




Monday, November 2, 2009

Use cases highlighting T-ETL

The following four use-case scenarios are designed to highlight the potential benefits of using WebSphere DataStage and WebSphere Federation Server together to consolidate data. In each case, a data consolidation scenario using a WebSphere DataStage job is presented first; it is then developed further by showing how WebSphere Federation Server can be used in conjunction with WebSphere DataStage to reduce both runtime and resource consumption. It is also shown how the original WebSphere DataStage job can be modified to take advantage of the capabilities of WebSphere Federation Server. The end of this section highlights the traits a WebSphere DataStage job should possess in order to benefit from this optimization. A sketch of the underlying idea appears below.
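To make the pattern concrete, here is a minimal sketch of the idea in Python, assuming a DB-API connection through pyodbc and a hypothetical ODBC DSN ("FEDDB") pointing at the federated DB2 database. The nicknames (CUSTOMER_NN, ORDERS_NN) and column names are illustrative and are not taken from the scenarios themselves.

```python
# A minimal sketch of the T-ETL pattern, not the article's actual jobs.
# Assumes a DB-API connection via pyodbc and a hypothetical DSN ("FEDDB")
# for the federated DB2 database; nicknames and columns are illustrative.
import pyodbc

conn = pyodbc.connect("DSN=FEDDB")
cur = conn.cursor()

# Plain ETL would read CUSTOMER and ORDERS separately over the network and
# join them inside the ETL engine. With federation, a single SQL statement
# over the nicknames lets the federation server evaluate the join, filter,
# and aggregation at (or near) the remote sources, so only the consolidated
# result set travels to the DataStage job.
cur.execute("""
    SELECT c.CUSTKEY, c.NAME, SUM(o.TOTALPRICE) AS TOTAL_SPEND
    FROM   CUSTOMER_NN c
    JOIN   ORDERS_NN   o ON o.CUSTKEY = c.CUSTKEY
    WHERE  o.ORDERDATE >= ?
    GROUP  BY c.CUSTKEY, c.NAME
""", ("2009-01-01",))

for custkey, name, total_spend in cur.fetchall():
    pass  # rows would feed the downstream load stage of the job
```

The modified jobs in the scenarios follow the same principle: transformation work is pushed into the federated SQL so that less data has to cross the network into the ETL engine.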

Figure 2 illustrates the configuration used to test the use-case scenarios.

Figure 2. Configuration used for use-case scenarios
Depending on the configuration of the job, the data is sourced from a number of different UNIX systems. Similarly, the target may be located on one or more UNIX systems. The WebSphere DataStage DB2 UDB API stage is used to access the DB2 sources, the targets, and the WebSphere Federation Server. All source and target databases are non-partitioned. IBM Information Server (which includes both WebSphere DataStage Enterprise Edition V8.0 and WebSphere Federation Server V9.0) is installed on a dual-CPU Windows Server 2003 machine. Each job within the four use-case scenarios is a parallel job designed to take full advantage of WebSphere DataStage's parallel processing capabilities. Since there are two CPUs on the WebSphere DataStage server, the degree of parallelism used was two.

The use cases refer to the following tables of a hypothetical parts-delivery business. The tables are physically located on one or more source systems, depending on the scenario (a schema sketch follows the list):

A CUSTOMER table with one row per distinct customer key. Each row contains (among other things) the name and account balance of this customer, as well as a designation of the market segment this customer is part of.

An ORDERS table with one row per distinct order key. Each row also contains the key of the customer that placed the order, the total value of the order, the date the order was placed, and a code describing its priority. There are typically several orders in the database for each customer, but some customers have no orders, or have not placed orders for a long time.

A LINEITEM table with one row for each item that is part of an order. Each row contains the order key of the order it is part of. Typically, an order contains several line items. Each line item row references a particular part key and includes the quantity ordered, the date the parts were shipped, and the shipping method used.

A STOCK table that links part keys and supplier keys and keeps track of the number of parts on hand at each supplier.
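For reference, one possible rendering of this schema is sketched below. The column names and types are assumptions based on the prose descriptions above (the article gives no DDL), and sqlite3 is used only so the sketch runs self-contained; in the scenarios the real tables live in DB2 databases on the source systems.

```python
# A possible schema sketch for the four tables; column names and types are
# assumptions, since the article describes the tables only in prose.
# sqlite3 (standard library) is used so the sketch is runnable on its own.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CUSTOMER (          -- one row per distinct customer key
    CUSTKEY    INTEGER PRIMARY KEY,
    NAME       VARCHAR(25),
    ACCTBAL    DECIMAL(15,2),    -- account balance
    MKTSEGMENT CHAR(10)          -- market segment designation
);
CREATE TABLE ORDERS (            -- one row per distinct order key
    ORDERKEY      INTEGER PRIMARY KEY,
    CUSTKEY       INTEGER REFERENCES CUSTOMER(CUSTKEY),
    TOTALPRICE    DECIMAL(15,2), -- total value of the order
    ORDERDATE     DATE,
    ORDERPRIORITY CHAR(15)       -- priority code
);
CREATE TABLE LINEITEM (          -- one row per item within an order
    ORDERKEY INTEGER REFERENCES ORDERS(ORDERKEY),
    PARTKEY  INTEGER,
    QUANTITY DECIMAL(15,2),
    SHIPDATE DATE,
    SHIPMODE CHAR(10)
);
CREATE TABLE STOCK (             -- links part keys and supplier keys
    PARTKEY  INTEGER,
    SUPPKEY  INTEGER,
    ON_HAND  INTEGER,            -- parts on hand at this supplier
    PRIMARY KEY (PARTKEY, SUPPKEY)
);
""")
```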