Teradata Best Practices with Informatica PowerCenter 7.1.
Informatica Confidential. Do not duplicate. Revision: 10/23/2008
Introduction

This document discusses configuration and how-tos using PowerCenter 7.1.2 and NCR's Teradata RDBMS. It covers Teradata basics and also describes some "tweaks" which experience has shown may be necessary to deal adequately with some of the "common" practices one may encounter at a Teradata account. The Teradata documentation (especially the MultiLoad, FastLoad and Tpump references) is highly recommended reading material, as is the "External Loader" section of the PowerCenter Server Manager Guide.

Additional Information: All Teradata documentation can be downloaded from the NCR web site (http://www.info.ncr.com/Teradata/eTeradata-BrowseBy.cfm); it is also available on the Informatica Tech Support website (tsspider.informatica.com/Docs/page1.html). There is a nice Teradata FAQ in the Informatica Tech Support knowledge base (it contains a section on how to handle "timestamp" columns). Finally, there is a "Teradata Forum" that provides a wealth of sometimes useful information (http://www.teradataforum.com).

Teradata Basics

Teradata is a relational database management system from NCR. It offers high performance for very large database tables because of its highly parallel architecture, and it is a major player in the retail space. While Teradata can run on other platforms, it is predominantly found on NCR hardware (which runs NCR's version of Unix). It is very fast and very scalable.

Teradata Hardware

The NCR computers on which Teradata runs support both MPP (Massively Parallel Processing) and SMP (Symmetric Multi-Processing); each MPP "node" (or semi-autonomous processing unit) can support SMP. Teradata can be configured to communicate directly with a mainframe's I/O channel, which is known as "channel attached". Alternatively, it can be "network attached", that is, configured to communicate via TCP/IP over a LAN. Since PowerCenter runs on Unix, most of the time you will be dealing with a "network attached" configuration. However, once in a while a client will want to use their existing "channel attached" configuration under the auspices of better performance. Do not assume that "channel attached" is always faster than "network attached": similar performance has been observed across a channel attachment and a 100MB LAN. In addition, "channel attached" requires an extra sequential data move, because the data must first be moved from the PowerCenter server to the mainframe before it crosses the mainframe channel to Teradata.

Teradata Software

In the Teradata world, there are Teradata Director Program Ids (TDPIDs), databases and users. The TDPID is simply the name one uses to connect from a Teradata client to a Teradata server (think of an Oracle "tnsnames.ora" entry). Teradata also treats databases and users somewhat synonymously: a user has a userid, a password and space to store tables, while a database is basically a user without a login and password (or, put the other way, a user is a database with a userid and password). Teradata AMPs are Access Module Processors; think of AMPs as Teradata's parallel database engines. Although they are strictly software ("virtual processors" in NCR terminology), Teradata folks often use AMP and hardware "node" interchangeably, because in the "old days" an AMP was a piece of hardware.
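The database-vs-user distinction can be illustrated with two lines of BTEQ DDL. The sketch below only generates the script; the object names (sales_db, etl_user), the password and the PERM sizes are hypothetical, not taken from this document.

```shell
# Write a small BTEQ script showing that a "database" and a "user" are almost
# the same object in Teradata; only the user gets a password and can log on.
# Run it later with:  bteq < /tmp/db_vs_user.bteq
cat > /tmp/db_vs_user.bteq <<'EOF'
CREATE DATABASE sales_db FROM dbc AS PERM = 1e9;
CREATE USER etl_user FROM dbc AS PASSWORD = etl_pass, PERM = 1e9;
EOF
echo "wrote /tmp/db_vs_user.bteq"
```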
Client Configuration Basics for Teradata

The client side configuration is wholly contained in the "hosts" file (/etc/hosts on Unix or winnt\system32\drivers\etc\hosts on Windows). Teradata uses a naming nomenclature in the "hosts" file: the name of the Teradata instance (i.e. the tdpid, or Teradata Director Program Id) is indicated by the letters and numbers that precede the string "cop1" in a hosts file entry. For example:

    127.0.0.1       localhost
    192.168.80.113  curly  demo1099cop1  pcop1

This tells the client that references to "localhost" should go to IP address 127.0.0.1, and that when a client tool references the instance "demo1099" (or the instance "p"), it is located on the server "curly" (IP address 192.168.80.113). That is, the client looks in the "hosts" file to map <name>cop1 (or cop2, etc.) to an IP address, and then attempts to establish a connection with Teradata at that IP address. There is no tie here to any kind of database-server-specific information (this is not similar to Oracle's instance id: tdpid <> Oracle instance id!!!). You can really call a server whatever you want; Teradata does not care. It simply takes the name you specify and looks it up in the "hosts" file. The tdpid is used strictly to define the name a client uses to connect to a server; hence, one should never have to deal with the server side.

In fact, most Teradata systems have many nodes, and each node has its own IP address. Sometimes you'll see multiple entries in a hosts file with similar tdpids:

    127.0.0.1       localhost
    192.168.80.113  curly_1  demo1099cop1  pcop1
    192.168.80.114  curly_2  pcop2
    192.168.80.115  curly_3  pcop3
    192.168.80.116  curly_4  pcop4

This setup allows load balancing of clients among multiple Teradata nodes: if it takes too long for the node specified with the "cop1" suffix (i.e. curly_1) to respond to a client request to connect to "p", then the client will automatically attempt to connect to the node with the "cop2" suffix (i.e. curly_2), and so forth. Without the multiple hosts file entries, every client will connect to one node, and eventually that node will be doing more than its "fair share" of client processing.

Informatica / Teradata touch points

Informatica PowerCenter 7.1.2 accesses Teradata through various Teradata tools. Each is described below, along with how it is configured within PowerCenter.

ODBC: Teradata provides 32-bit ODBC drivers for Windows and Unix platforms. ODBC is Teradata's recommended SQL interface for their partners, and Teradata's ODBC is on a performance par with Teradata's SQL CLI. PowerCenter Designer uses Teradata's ODBC to import Source and Target tables. ODBC is good for sourcing and lookups, but do not use ODBC to write to Teradata unless you are writing very small data sets (and even then you should probably use Tpump, defined later, instead), because Teradata's ODBC is optimized for query access and is not optimized for writing data. If possible, use the ODBC driver from Teradata's TTU7 release (or above) of their client software, because this version supports "array reads"; tests have shown these "new" drivers can be 20%-30% faster than the "old" drivers. If you are having performance problems sourcing through ODBC, you can instead use a Command task with a shell script that calls BTEQ to run SQL into an intermediate work table, which can then be sourced by PowerCenter.
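A hedged sketch of such a pre-session step follows. All object names, the logon string and the file paths are hypothetical stand-ins; bteq itself must be on the server's PATH and is not executed here.

```shell
# Generate a BTEQ script that materializes a work table and exports it to a
# flat file PowerCenter can source. A PowerCenter Command task would then run
#   bteq < /tmp/mk_work_table.bteq
cat > /tmp/mk_work_table.bteq <<'EOF'
.LOGON demo1099/infatest,secret;
CREATE TABLE infatest.WRK_ORDERS AS
  ( SELECT o.* FROM infatest.TD_ORDERS o
    WHERE o.ORDER_DATE > DATE '2005-01-01' )
WITH DATA;
.EXPORT REPORT FILE = /tmp/wrk_orders.out;
SELECT * FROM infatest.WRK_ORDERS;
.EXPORT RESET;
.LOGOFF;
EOF
echo "wrote /tmp/mk_work_table.bteq"
```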
ODBC Windows:

(On Windows, the Teradata driver is configured through the standard ODBC DSN setup; the configuration screens appeared here in the original document.)

ODBC Unix

When the PowerCenter server is running on Unix, ODBC is required to read (both sourcing and lookups) from Teradata. As with all Unix ODBC drivers, the key to configuring the Unix ODBC driver is adding the appropriate entries to the ".odbc.ini" file. To correctly configure the ".odbc.ini" file, there must be an entry under [ODBC Data Sources] that points to the Teradata ODBC driver shared library (tdata.sl on HP-UX; the standard shared library extension on other flavors of Unix). The following example shows the required entries from an actual ".odbc.ini" file (note that the path to the driver may be different on each computer):

    [ODBC Data Sources]
    dBase=MERANT 3.60 dBase Driver
    Oracle8=MERANT 3.60 Oracle 8 Driver
    Text=MERANT 3.60 Text Driver
    Sybase11=MERANT 3.60 Sybase 11 Driver
    Informix=MERANT 3.60 Informix Driver
    DB2=MERANT 3.60 DB2 Driver
    MS_SQLServer7=MERANT SQLServer driver
    TeraTest=tdata.sl

    [TeraTest]
    Driver=/usr/odbc/drivers/tdata.sl
    Description=Teradata Test System
    DBCName=148.162.247.34

Similar to the client "hosts" file setup, one can specify multiple IP addresses for the DBCName to balance the client load across multiple Teradata nodes. Consult with the Teradata administrator for the exact details (or copy the entries from the PC client's "hosts" file; see the Client Configuration Basics section).

Important note: Make sure that the Merant ODBC path precedes the Teradata ODBC path information in the PATH and SHLIB_PATH (or LD_LIBRARY_PATH, etc.) environment variables. This is because both sets of ODBC software use some of the same file names, and PowerCenter should use the Merant files because this is the software that has been certified.

Teradata external loaders

PowerCenter 7.1.2 supports 4 different Teradata external loaders: Tpump, FastLoad, MultiLoad and Teradata Warehouse Builder (TWB). Each is described in its own section below. All of the Teradata loader connections require a value for the TDPID attribute; refer to the first section of this document to understand how to correctly enter this value. The actual Teradata loader executables (tpump, fastload, mload, tbuild) must be accessible by the PowerCenter Server, generally via the path statement. All of these loaders require:

    - a load file (can be configured to be a stream/pipe; autogenerated by PowerCenter)
    - a control file of commands telling the loader what to do (PowerCenter autogenerates this)

All of these loaders will also produce a log file. This log file will be the means to debug the loader if something goes wrong; as these are external loaders, PowerCenter will only receive back from the loader whether it ran successfully or not. By default, the input file, control file and log file will be created in $PMTargetFileDir of the PowerCenter Server executing the workflow.
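Since the loaders are invoked as external programs, a quick preflight check that the executables are actually reachable via the PATH can save debugging time. A minimal sketch, using the executable names listed above:

```shell
# Report which Teradata loader executables are visible on the PATH of the
# account that runs the PowerCenter Server.
check_loaders() {
  for exe in tpump fastload mload tbuild; do
    if command -v "$exe" >/dev/null 2>&1; then
      echo "found: $exe"
    else
      echo "MISSING: $exe"
    fi
  done
}
check_loaders
```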
Any of these loaders can be used by configuring the target in the PowerCenter session to be a "File Writer" and then choosing the appropriate loader.

The auto-generated control file can be overridden:

1) Click the "Pencil" icon next to the loader connection name, then click the down arrow.
2) Scroll to the bottom of the connection attribute list and click the value next to the "Control File Content Override" attribute.
3) Click the Generate button and change the control file as you wish.

The changed control file is stored in the repository.
Most of the loaders also use some combination of internal Work, Error and Log tables. By default, these will be in the same database as the target table. All of these can now be overridden in the attributes of the connection.

To land the input flat file that the loaders need to disk, the "is staged" attribute must be checked. If the "is staged" attribute is not set, then the file will be piped/streamed to the loader; that is, the loader reads not a physical file, but rather a named pipe. If one selects the non-staged mode for a loader, one should also set the "checkpoint" property to 0. This effectively turns off "checkpoint" processing, which is used for recovery/restart of FastLoad and MultiLoad sessions; if one is not using a physical file as input, the recovery/restart mechanism of the loaders does not work anyway. Not only does checkpoint processing impact performance (it is not free, and we want to eliminate as much unnecessary overhead as possible), but a non-zero checkpoint value will sometimes cause seemingly random errors and session failures when used with named pipe input (as is the case with "streaming" mode).

Teradata loader Requirements for PowerCenter servers on Unix

All Teradata load utilities require a "non-null" standard output and standard error to run properly. "Standard output" (stdout) and "standard error" (stderr) are Unix conventions that determine the default location for a program to write output and error information. When you start the pmserver without explicitly defining stdout and stderr, these both point to the current terminal session. If you log out of Unix, Unix redirects stdout and stderr to /dev/null (i.e. a placeholder that throws out anything written to it), and Teradata loader sessions will then fail because they do not permit stdout and stderr to be /dev/null. Therefore, you must start pmserver as follows ("cd" to the PowerCenter installation directory):

    ./pmserver ./pmserver.cfg > ./pmserver.out 2>&1

This starts the pmserver using the "pmserver.cfg" config file and points stdout and stderr to the file "pmserver.out". Important note: There are no spaces in the token "2>&1". It tells Unix to point stderr to the same place stdout is pointing. In this way, stderr and stdout will be defined even after the terminal session logs out.

As an alternative to this method, one can start the pmserver normally (i.e. ./pmserver) and specify the console output file name in the pmserver.cfg file. With the following entry in the pmserver.cfg file, information written to "standard output" and "standard error" will go to the file specified:

    ConsoleOutputFilename=<FILE_NAME>

Partitioned Loading

With PowerCenter v7.x, if one sets a "round robin" partition point on the target definition and sets each target instance to be loaded using the same loader connection instance, then PowerCenter automatically writes all data to the first partition and only starts one instance of FastLoad or MultiLoad. You will know you are getting this behavior if you see the following entry in the session log:

    MAPPING> DBG_21684 Target [TD_INVENTORY] does not support multiple partitions. All data will be routed to the first partition.

If you do not see this message, then chances are the session fails with the following error:

    WRITER_1_*_1> WRT_8240 Error: The external loader [Teradata Mload Loader] does not support partitioned sessions.
    WRITER_1_*_1> Thu Jun 16 11:58:21 2005
    WRITER_1_*_1> WRT_8068 Writer initialization failed. Writer terminating.
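The effect of the "2>&1" token can be demonstrated with any program standing in for pmserver; a minimal sketch:

```shell
# Stand-in for pmserver: writes one line to stdout and one to stderr.
standin() {
  echo "normal output"
  echo "error output" >&2
}
# "> file 2>&1" first sends stdout to the file, then points stderr at the
# same place, so both streams remain defined after the terminal goes away.
standin > /tmp/pmserver_demo.out 2>&1
cat /tmp/pmserver_demo.out
```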
Tpump

Tpump is an external loader that supports inserts, updates, upserts, deletes and data-driven updates. It is often used to "trickle load" a table. As stated earlier, it will be a faster way to update a table than ODBC, but it will not be as fast as the other loaders. Multiple Tpumps can execute simultaneously against the same table, because Tpump does not use many resources, nor does it require table-level locks.
MultiLoad

This is a sophisticated bulk load utility and is the primary method PowerCenter uses to load/update mass quantities of data into Teradata. It is very fast (millions of rows in a few minutes), but it can be resource intensive and will take a table lock. MultiLoad supports insert, update, upsert, delete and data-driven operations in PowerCenter. Unlike bulk load utilities from other vendors, it allows load jobs to be restarted without having to redo all of the prior work, and you can also use variables and embed conditional logic in MultiLoad scripts. However, for the types of problems normally encountered during a POC (loading null values into a column that does not support nulls, incorrectly formatted date columns), the error recovery mechanisms tend to get in the way. To learn about MultiLoad's sophisticated error recovery, read the MultiLoad manual. To learn how to work around the recovery mechanisms and restart a failed MultiLoad script from scratch, read this section.

Cleaning up after a failed MultiLoad

MultiLoad supports sophisticated error recovery. When a MultiLoad fails for any reason, it puts the target table into the "MultiLoad" state, and one cannot simply re-run the same MultiLoad. (Upon successful completion, the target table is returned to the "normal", non-"MultiLoad", state.) MultiLoad also queries the target table's MultiLoad log table to see if it contains any errors; if a MultiLoad log table exists for the target table, MultiLoad will report an error, and you also will not be able to rerun your MultiLoad job. To recover from a failed MultiLoad, one must "release" the target table from the "MultiLoad" state and also drop the MultiLoad log table. One can do this using BTEQ or QueryMan to issue the following commands:

    drop table mldlog_<table name>;
    release mload <table name>;

Here is the actual text from a BTEQ session which cleans up a failed load to the table "td_test" owned by the user "infatest":

    BTEQ -- Enter your DBC/SQL request or BTEQ command:
    drop table infatest.mldlog_td_test;
    *** Table has been dropped.
    *** Total elapsed time was 1 second.

    BTEQ -- Enter your DBC/SQL request or BTEQ command:
    release mload infatest.td_test;
    *** Mload has been released.
    *** Total elapsed time was 1 second.

Note: The "drop table" command assumes that you are recovering from a MultiLoad script generated by PowerCenter (PowerCenter always names the MultiLoad log table "mldlog_<table name>"). If you are working with a hand-coded MultiLoad script, the name of the MultiLoad log table could be anything.

Using one instance of MultiLoad to load multiple tables

MultiLoad is a big consumer of resources on a Teradata system. Some systems will have hard limits on the number of concurrent MultiLoad sessions allowed; other times, it is just expensive. Therefore, a prospect may ask that PowerCenter use a single instance of MultiLoad to load multiple tables (or to load both inserts and updates into the same target table). By default, PowerCenter will start an instance of MultiLoad for every target file. Sometimes this is illegal (if the multiple instances target the same table); other times it is just expensive. (Note: this should not be an issue with Tpump, because Tpump is not as resource intensive as MultiLoad, and multiple concurrent instances of Tpump can target the same table.)

To make this happen, we are back to heavy editing of the generated MultiLoad control file. Integrated support cannot be used, because each input file is processed sequentially and this causes problems when combined with PowerCenter's integrated named pipes and streaming. Also remember, a single instance of MultiLoad can target at most 5 tables; therefore, do not combine more than 5 target files into a common file. Here is the workaround:

1) Use a dummy session (i.e. set test rows to 1 and target a test database) to generate MultiLoad control files for each of the targets.
2) Merge the multiple control files (one per target table) into a single control file (one for all target tables).
3) Configure the session to call MultiLoad from a post-session script using the control file created in step 2.

Details on "merging" the control files:

1) There is a single log file for each instance of MultiLoad; therefore, you do not have to change or add anything in the "LOGFILE" statement. However, you might want to change the name of the log table, since it now spans multiple tables; or, better yet, name it something different so PowerCenter cannot overwrite it.
2) Copy the work and error table delete statements into the common control file.
3) Modify the "BEGIN MLOAD" statement to specify all the tables that the MultiLoad will be hitting.
4) Copy the "Layout" sections into the common control file and give each a unique name. Organize the file such that all the Layout sections are grouped together.
5) Copy the "DML" sections into the common control file and give each a unique name. Organize the file such that all the DML sections are grouped together.
6) Copy the "Import" statements into the common control file and modify them to reference the unique names created for the LAYOUT and DML sections in steps 4) and 5). Organize the file such that all the Import sections are grouped together.
7) Run "chmod -w" on the newly minted control file so PowerCenter doesn't overwrite it.
8) It's just that easy!!!

Here is an example of a control file merged from two default control files:

    .LOGTABLE infatest.mldlog_TD_TEST;
    .LOGON demo1099/infatest;

    DROP TABLE infatest.WT_TD_TEST;
    DROP TABLE infatest.ET_TD_TEST;
    DROP TABLE infatest.UV_TD_TEST;
    DROP TABLE infatest.WT_TD_CUSTOMERS;
    DROP TABLE infatest.ET_TD_CUSTOMERS;
    DROP TABLE infatest.UV_TD_CUSTOMERS;

    .ROUTE MESSAGES WITH ECHO TO FILE c:\LOGS\TgtFiles\td_test.out.ldrlog;

    .BEGIN IMPORT MLOAD
        TABLES infatest.TD_TEST, infatest.TD_CUSTOMERS
        ERRLIMIT 1
        CHECKPOINT 10000
        TENACITY 10000
        SESSIONS 1
        SLEEP 6;

    .DATEFORM ANSIDATE;
    /* Begin Layout Section */
    .Layout InputFileLayout1;
        .Field  CUST_KEY        1  CHAR( 12) NULLIF CUST_KEY  = '*';
        .Field  CUST_NAME      13  CHAR( 20) NULLIF CUST_NAME = '*';
        .Field  CUST_DATE      33  CHAR( 10) NULLIF CUST_DATE = '*';
        .Field  CUST_DATEdd    33  CHAR(  2);
        .Field  CUST_DATEmm    36  CHAR(  2);
        .Field  CUST_DATEyyyy  39  CHAR(  4);
        .Field  CUST_DATEtd    CUST_DATEyyyy||'/'||CUST_DATEmm||'/'||CUST_DATEdd
                                          NULLIF CUST_DATE = '*';
        .Filler EOL_PAD        43  CHAR(  2);

    .Layout InputFileLayout2;
        .Field  CUSTOMER_KEY    1  CHAR( 12) NULLIF CUSTOMER_KEY = '*';
        .Field  CUSTOMER_ID    13  CHAR( 12) NULLIF CUSTOMER_ID  = '*';
        .Field  COMPANY        25  CHAR( 50) NULLIF COMPANY      = '*';
        .Field  FIRST_NAME     75  CHAR( 30) NULLIF FIRST_NAME   = '*';
        .Field  LAST_NAME     105  CHAR( 30) NULLIF LAST_NAME    = '*';
        .Field  ADDRESS1      135  CHAR( 72) NULLIF ADDRESS1     = '*';
        .Field  ADDRESS2      207  CHAR( 72) NULLIF ADDRESS2     = '*';
        .Field  CITY          279  CHAR( 30) NULLIF CITY         = '*';
        .Field  STATE         309  CHAR(  2) NULLIF STATE        = '*';
        .Field  POSTAL_CODE   311  CHAR( 10) NULLIF POSTAL_CODE  = '*';
        .Field  PHONE         321  CHAR( 30) NULLIF PHONE        = '*';
        .Field  EMAIL         351  CHAR( 30) NULLIF EMAIL        = '*';
        .Field  REC_STATUS    381  CHAR(  1) NULLIF REC_STATUS   = '*';
        .Filler EOL_PAD       382  CHAR(  2);
    /* End Layout Section */

    /* Begin DML Section */
    .DML Label tagDML1;
    INSERT INTO infatest.TD_TEST
    ( CUST_KEY, CUST_NAME, CUST_DATE )
    VALUES
    ( :CUST_KEY, :CUST_NAME, :CUST_DATEtd );

    .DML Label tagDML2;
    INSERT INTO infatest.TD_CUSTOMERS
    ( CUSTOMER_KEY, CUSTOMER_ID, COMPANY, FIRST_NAME, LAST_NAME,
      ADDRESS1, ADDRESS2, CITY, STATE, POSTAL_CODE, PHONE, EMAIL, REC_STATUS )
    VALUES
    ( :CUSTOMER_KEY, :CUSTOMER_ID, :COMPANY, :FIRST_NAME, :LAST_NAME,
      :ADDRESS1, :ADDRESS2, :CITY, :STATE, :POSTAL_CODE, :PHONE, :EMAIL, :REC_STATUS );
    /* End DML Section */

    /* Begin Import Section */
    .Import Infile c:\LOGS\TgtFiles\td_test.out
        Layout InputFileLayout1
        Format Unformat
        Apply tagDML1;
    .Import Infile c:\LOGS\TgtFiles\td_customers.out
        Layout InputFileLayout2
        Format Unformat
        Apply tagDML2;
    /* End Import Section */

    .END MLOAD;
    .LOGOFF;
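The merge-and-protect steps (2 and 7) of the workaround above can be scripted. In this sketch the two .ctl names are hypothetical stand-ins for the files the dummy session generated; the LAYOUT/DML renaming of steps 4-6 still has to be done by hand before the merged file is usable.

```shell
cd /tmp
# Stand-ins for the two control files PowerCenter generated.
printf '.LOGTABLE infatest.mldlog_TD_TEST;\n'      > td_test.ctl
printf '.LOGTABLE infatest.mldlog_TD_CUSTOMERS;\n' > td_customers.ctl
rm -f merged_mload.ctl
cat td_test.ctl td_customers.ctl > merged_mload.ctl   # step 2: merge
chmod a-w merged_mload.ctl                            # step 7: write-protect
wc -l merged_mload.ctl
```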
Multiple workflows that MultiLoad to the same table

Since MultiLoad puts a lock on the table, we require that all MultiLoad sessions handle wait events so they do not try to access the table simultaneously. Also, any log files should be given unique names for the same reason.

FastLoad

As the name suggests, this is a very fast utility to load data into Teradata; in fact, it is the fastest method to load data into Teradata. However, there is one major restriction: the target table must be empty.
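Because of the empty-table restriction, FastLoad sessions are commonly paired with a pre-session step that empties the target first. A sketch (hypothetical object names; the script is only generated here, a pre-session command would run it through bteq):

```shell
# Generate a BTEQ script that empties the FastLoad target using Teradata's
# unconditional DELETE ... ALL form. Run later with:
#   bteq < /tmp/empty_target.bteq
cat > /tmp/empty_target.bteq <<'EOF'
.LOGON demo1099/infatest,secret;
DELETE FROM infatest.TD_STAGE_ORDERS ALL;
.LOGOFF;
EOF
echo "wrote /tmp/empty_target.bteq"
```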
Teradata Warehouse Builder (TWB)

Teradata Warehouse Builder (TWB) is a single utility that was intended to replace FastLoad, MultiLoad, Tpump and FastExport. It was to support a single scripting environment with different "modes", where each "mode" roughly equates to one of the legacy utilities. It also was to support parallel loading (i.e. multiple instances of a TWB client could run and load the same table at the same time, something the legacy loaders cannot do); its ability to support parallel load clients makes some things quite a bit easier. Unfortunately, TWB has never been formally released (never went "GA"). According to NCR, the release was delayed primarily because of issues with the mainframe version. PowerCenter supports TWB; NCR/Teradata does not. If you find a prospect willing to use TWB, please do.
PowerCenter 7.1.2 PAM for Teradata

PowerCenter uses the following components of the Teradata Tools and Utilities (TTU): the Teradata ODBC driver, FastLoad, MultiLoad, and Tpump. The Teradata Tools and Utilities (TTU) package was previously called Teradata Utilities Foundation (TUF). The version numbers of the individual Teradata Client components vary with each release (e.g. TTU 6.1 contains MultiLoad 3.1; TTU7 contains the 3.02 ODBC driver). The supported Teradata ODBC driver is v3.02.0, which is also the minimum version; Teradata has made many fixes to this driver, so if you are using it, please contact NCR support for the latest maintenance release.

The Teradata Client/Teradata RDBMS pairings listed here represent our understanding based on Teradata's documentation. Compatibility between a particular version of the Teradata RDBMS and the Teradata Client software is determined by Teradata, not Informatica.

    Teradata Server Version   Client Software & Version   Platform      Source   Target
    V2R4.1                    TUF 6.0, TUF 6.2            Unix and NT   x        x
    V2R5.x                    TTU 6.1, TTU 7              Unix and NT   x        x
Future Support of Teradata:

FastExport

FastExport is just as its name implies: a utility to extract data from Teradata very quickly. It should extract large tables used for lookups quicker than ODBC. It will be supported in PowerCenter 7.1.3 for Teradata sources.

PushDown

This is also commonly referred to as ELT. The Zeus release of PowerCenter will have the ability to generate SQL that executes in the Teradata database server, replacing certain transformations that would normally run in the PowerCenter Server. This is critical to Teradata, because it is common for the Source and the Target to reside in the same database.
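For reference, a minimal FastExport script of the kind described above might look as follows. This is a sketch with hypothetical object names, following the general shape of the Teradata FastExport reference; the script is only generated here and would be run with "fexp < script".

```shell
cat > /tmp/extract_customers.fx <<'EOF'
.LOGTABLE infatest.fxlog_TD_CUSTOMERS;
.LOGON demo1099/infatest,secret;
.BEGIN EXPORT SESSIONS 4;
.EXPORT OUTFILE /tmp/td_customers.dat;
SELECT CUSTOMER_KEY, COMPANY, CITY FROM infatest.TD_CUSTOMERS;
.END EXPORT;
.LOGOFF;
EOF
echo "wrote /tmp/extract_customers.fx"
```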