What is Teradata?

Teradata is a Relational Database Management System (RDBMS). It has been designed to run the world's largest commercial databases.

• Preferred solution for enterprise data warehousing
• Executes on UNIX MP-RAS and Windows 2000 operating systems
• Compliant with ANSI industry standards
• Runs on a single node or on multiple nodes
• It is a "database server"
• Uses parallelism to manage terabytes of data
• Capable of supporting many concurrent users from various client platforms

Teradata – A Brief History
1979 – Teradata Corp founded in Los Angeles, California; development begins on a massively parallel computer
1982 – YNET technology is patented
1984 – Teradata markets the first database computer, the DBC/1012; first system purchased by Wells Fargo Bank of California; total revenue for the year: $3 million
1987 – First public offering of stock
1989 – Teradata and NCR partner on the next generation of the DBC
1991 – NCR Corporation is acquired by AT&T; Teradata revenues at $280 million

1992 – Teradata is merged into NCR
1996 – AT&T spins off NCR Corp. with the Teradata product
1997 – The Teradata Database becomes the industry leader in data warehousing
2000 – First 100+ terabyte system in production
2002 – Teradata V2R5 released 12/2002; a major release including features such as PPI, roles and profiles, multi-value compression, and more
2003 – Teradata V2R5.1 released 12/2003; includes UDFs, BLOBs, CLOBs, and more
2005 – Teradata V2R6 released; COLLECT STATISTICS enhancements
2007 – Teradata TD12 released; query rewrite
2009 – Teradata TD13 released; scalar subqueries, NoPI tables
Ongoing development – TD14; temporal feature

How large is a Trillion?

1 Kilobyte = 10^3  = 1,000 bytes
1 Megabyte = 10^6  = 1,000,000 bytes
1 Gigabyte = 10^9  = 1,000,000,000 bytes
1 Terabyte = 10^12 = 1,000,000,000,000 bytes
1 Petabyte = 10^15 = 1,000,000,000,000,000 bytes


Differences between the Teradata RDBMS and other RDBMSs:

Teradata RDBMS                                        | Other RDBMS
1. Supports unconditional parallelism                 | Supports conditional parallelism
2. Designed for DSS and data warehouse systems        | Designed for OLTP systems
3. Shared-nothing architecture                        | Shared-everything architecture
4. Supports terabytes of data                         | Supports gigabytes of data
5. Indexes used for better storage and fast retrieval | Indexes used for fast retrieval only
6. Handles billions of rows of data                   | Handles millions of rows of data

Teradata in the Enterprise
Large-capacity database machine: The Teradata Database handles the large data storage requirements needed to process large amounts of detail data for decision support. This includes terabytes of detailed data stored in billions of rows, and thousands of Millions of Instructions Per Second (MIPS) to process that data.

Parallel processing: Parallel processing is the key feature that makes the Teradata RDBMS faster than other relational systems.

Single data store: The Teradata RDBMS can be accessed by network-attached and channel-attached systems. It also supports the requirements of many diverse clients.

Scalable growth: The Teradata RDBMS allows expansion without sacrificing performance.

Data integrity: The Teradata RDBMS ensures that transactions either complete or roll back to a stable state if a fault occurs.

Fault tolerance: The Teradata RDBMS automatically detects and recovers from hardware failures.

SQL: The Teradata RDBMS supports SQL as a standard access language that permits customers to control their data.

Teradata Architecture and Components

The BYNET
At the most elementary level, you can look at the BYNET as a bus that loosely couples all the SMP nodes in a multinode system. However, this view does an injustice to the BYNET, because the capabilities of the network range far beyond those of a simple system bus. The BYNET also possesses high-speed logic arrays that provide bidirectional broadcast, multicast, and point-to-point communication and merge functions.

The BYNET software also provides a standard TCP/IP interface for communication among the SMP nodes. The following figure shows how the BYNET connects individual SMP nodes to create an MPP system.

A multinode system has at least two BYNETs. This creates a fault-tolerant environment and enhances interprocessor communication. Load-balancing software optimizes the transmission of messages over the BYNETs. If one BYNET should fail, the second can handle the traffic.

The total bandwidth for each network link to a processor node is ten megabytes. The total throughput available for each node is 20 megabytes, because each node has two network links, and the bandwidth is linearly scalable. For example, a 16-node system has 320 megabytes of bandwidth for point-to-point connections. The total available broadcast bandwidth for any size system is 20 megabytes.

Boardless BYNET
Single-node SMP systems use Boardless BYNET (or virtual BYNET) software to simulate the BYNET hardware driver.

Parallel Database Extensions
Parallel Database Extensions (PDE) software is an interface layer on top of the operating system. Both SMP and MPP machines run a set of software processes called vprocs on a node under the PDE software layer. The PDE provides the ability to:

• Execute vprocs
• Run the Teradata RDBMS in a parallel environment
• Apply a flexible priority scheduler to Teradata RDBMS sessions
• Debug the operating system kernel and the Teradata RDBMS using resident debugging facilities

The PDE also enables an MPP system to:
• Take advantage of hardware features such as the BYNET and shared disk arrays
• Process user applications written for the underlying operating system on non-Trusted Parallel Application (non-TPA) nodes and disks different from those configured for the parallel database

PDE can be started, reset, and stopped on Windows systems using the Teradata MultiTool utility, and on UNIX MP-RAS systems using the xctl utility.

Virtual Processors
The versatility of the Teradata RDBMS is based on virtual processors (vprocs) that eliminate dependency on specialized physical processors. Vprocs are a set of software processes that run on a node under the Teradata Parallel Database Extensions (PDE) within the multitasking environment of the operating system. The two types of vprocs are:

PE: The PE performs session control and dispatching tasks as well as parsing functions.
AMP: The AMP performs database functions to retrieve and update data on the virtual disks (vdisks).

Multiple vprocs can run on an SMP platform or a node. Each vproc is a separate, independent copy of the processor software, isolated from other vprocs but sharing some of the physical resources of the node, such as memory and CPUs. The maximum number of vprocs per node can be as high as 128, and a single system can support a maximum of 16,384 vprocs.

Vprocs, and the tasks running under them, communicate using unique-address messaging, as if they were physically isolated from one another. This message communication is done using the Boardless BYNET Driver software on single-node platforms, or BYNET hardware and BYNET Driver software on multinode platforms.

Parsing Engine
A Parsing Engine (PE) is a virtual processor (vproc) that manages the dialogue between a client application and the Teradata Database once a valid session has been established. Each PE can support a maximum of 120 sessions.

The PE handles an incoming request in the following manner:
1. The Session Control component verifies the request for session authorization (user names and passwords), and either allows or disallows the request.
2. The Parser interprets the SQL statement received from the application, verifies SQL requests for proper syntax, and evaluates them semantically. It consults the Data Dictionary to ensure that all objects exist and that the user has authority to access them.
3. The Optimizer is cost-based and develops the least expensive plan (in terms of time) to return the requested response set. Processing alternatives are evaluated and the fastest alternative is chosen. This alternative is converted into executable steps, to be performed by the AMPs, which are then passed to the Dispatcher.
4. The Dispatcher controls the sequence in which the steps are executed and passes the steps received from the Optimizer onto the BYNET for execution

by the AMPs. After the AMPs process the steps, the PE receives their responses over the BYNET. The Dispatcher then builds a response message and sends it back to the user.

Access Module Processor (AMP)
The AMP is a vproc in the Teradata Database's shared-nothing architecture that is responsible for managing a portion of the database. Each AMP manages some portion of each table on the system. The AMPs retrieve and perform all database management functions on the required rows from a table. AMPs also do the physical work associated with generating an answer set (output), including sorting, aggregating, formatting, and converting.

An AMP accesses data from its single associated vdisk, which is made up of multiple ranks of disks. An AMP responds to Parser/Optimizer steps transmitted across the BYNET by selecting data from, or storing data to, its disks. For some requests, the AMPs may redistribute a copy of the data to other AMPs.

The Database Manager subsystem resides on each AMP. This subsystem will:
• Lock databases and tables
• Create, modify, or delete definitions of tables
• Insert, delete, or modify rows within the tables
• Retrieve information from definitions and tables
• Return responses to the Dispatcher

Teradata Director Program
The Teradata Director Program (TDP) is a Teradata-supplied program that must run on any client system that will be channel-attached to the Teradata RDBMS. The TDP manages the session traffic between the Call-Level Interface and the RDBMS. Functions of the TDP include the following:
• Session initiation and termination
• Logging, verification, recovery, and restart
• Physical input to and output from the Teradata server, including session balancing and queue maintenance
• Security

The Call-Level Interface (CLI) is a library of routines that resides on the client side. Client application programs use these routines to perform operations such as logging on and off, submitting SQL queries, and receiving responses which contain the answer set. These routines are 98% the same in a network-attached environment as they are in a channel-attached environment.

The Micro Teradata Director Program (MTDP) is a Teradata-supplied program that must be linked to any application that will be network-attached to the Teradata RDBMS. The MTDP performs many of the functions of the channel-based TDP, including session management. The MTDP does not control session balancing across PEs; Connect and Assign Servers that run on the Teradata system handle this activity.

The Micro Operating System Interface (MOSI) is a library of routines providing operating system independence for clients accessing the RDBMS. By using MOSI, only one version of the MTDP is needed to run on all network-attached platforms.

The Teradata ODBC (Open Database Connectivity) and JDBC (Java) drivers use open standards-based ODBC or JDBC interfaces to provide client applications access to Teradata across LAN-based environments.

Trusted Parallel Applications
The PDE provides a series of parallel operating system services to a special class of tasks called a Trusted Parallel Application (TPA).

On an SMP or MPP system, the TPA is the Teradata RDBMS. TPA services include:
• Facilities to manage parallel execution of the TPA on multiple nodes
• Dynamic distribution of execution processes
• Coordination of all execution threads, whether on the same or on different nodes
• Balancing of the TPA workload within a clique
• Resident debugging facilities in addition to kernel and application debuggers

Node: Teradata Architecture (figure)

Teradata MPP Architecture (figure)

• BYNET interconnect – fully scalable bandwidth
• Nodes – incrementally scalable to 1,024 nodes; Windows or UNIX
• Storage – independent I/O; scales per node
• Connectivity – fully scalable; channel (ESCON/FICON); LAN, WAN
• Server management – one console to view the entire system

Shared-Nothing Architecture
• "Virtual processors" (vprocs) do the work
• Two types:
  o AMP: owns and operates on the data
  o PE: handles SQL and external interaction
• Configure multiple vprocs per hardware node
  o Take full advantage of SMP CPU and memory
• Each vproc has many threads of execution
  o Many operations executing concurrently
  o Each thread can do work for any user or transaction

• Software is equivalent regardless of configuration
  o No user changes as the system grows from a small SMP to a huge MPP
• Delivers linear scalability
  o Maximizes utilization of SMP resources
  o To any size configuration
  o Allows flexible configurations
  o Incremental upgrades

SMP vs. MPP
A Teradata Database system contains one or more nodes. A node is a term for a processing unit under the control of a single operating system; the node is where the processing occurs for the Teradata Database. There are two types of Teradata Database systems:
• Symmetric multiprocessing (SMP) – An SMP Teradata Database has a single node that contains multiple CPUs sharing a memory pool.
• Massively parallel processing (MPP) – Multiple SMP nodes working together comprise a larger, MPP implementation of a Teradata Database. The nodes are connected using the BYNET, which allows multiple virtual processors on multiple nodes to communicate with each other.

Benefits of Teradata: Shared Nothing – Dividing the Data
• Data is automatically distributed to the AMPs via hashing
• Even distribution results in scalable performance
• Primary Index (PI) column(s) are hashed
• The hash is always the same for the same value
• No partitioning or repartitioning is ever required
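Teradata exposes this hashing chain through the built-in HASHROW, HASHBUCKET, and HASHAMP functions. A minimal sketch, assuming the employee table defined later in these notes (its unique primary index is employee_number):

SELECT employee_number
      ,HASHROW (employee_number)                         AS row_hash
      ,HASHAMP (HASHBUCKET (HASHROW (employee_number)))  AS amp_number
FROM employee;

Rows whose primary index values hash to the same bucket always land on the same AMP, which is why an even spread of PI values gives even data distribution.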

The main component of the shared-nothing architecture is that each AMP manages its own dedicated portion of the system's disk space (called the vdisk), and this space is not shared with other AMPs. Each AMP uses system resources independently of the other AMPs, so they can all work in parallel for high overall system performance. The Teradata Database virtual processors, or vprocs (the PEs and AMPs), share the components of the nodes (memory and CPU).

Space Allocation
• Space allocation is entirely dynamic
  o No tablespaces, journal spaces, or any pre-allocation
  o Spool (temp) and tables share the space pool; no fixed reserved allocations
• If no cylinder is free, partial cylinders are combined
  o Dynamic and automatic
  o Background compaction based on a tunable threshold
• Quotas control disk space utilization
  o Increase a quota (a trivial online command, sketched below) to allow a user to use more space
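The quota itself is ordinary DDL. A minimal sketch with a hypothetical database name and sizes; the owning database and byte values depend on your site:

CREATE DATABASE sandbox FROM dbc AS PERM = 10E9, SPOOL = 20E9;

/* later, grow the permanent-space quota online */
MODIFY DATABASE sandbox AS PERM = 20E9;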

Data Management – Bottom Line
• No reorgs
  o There is not even a reorg utility
• No index rebuilds
• No re-partitioning
• No detailed space management
• Easy database and table definition
• Minimum ongoing maintenance
  o All performed automatically

Optimizer – Parallelization
• Cost-based optimizer
  o Parallel-aware
• Rewrites are built in and cost-based
• Parallelism is automatic
• Parallelism is unconditional
• Each query step is fully parallelized
• No single-threaded operations
  o Scans, index access, joins, sorts, aggregations, inserts, updates, deletes

Traditional "Conditional Parallelism" vs. Teradata's unconditional parallelism (figure)

Data Recovery and Protection

Locks
Locks may be applied at three levels (an explicit-locking sketch follows this list):
• Database locks: apply to all tables and views in the database
• Table locks: apply to all rows in the table or view
• Row hash locks: apply to a group of one or more rows in a table
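A lock can also be requested explicitly with Teradata's LOCKING modifier. A minimal sketch against the employee table used later in these notes:

LOCKING TABLE employee FOR ACCESS
SELECT last_name, salary_amount
FROM employee;

The ACCESS lock lets the query read while other sessions hold write locks, at the cost of possibly reading stale data, as described below.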

The four types of locks are described below.

Exclusive
Exclusive locks are applied to databases or tables, never to rows. They are the most restrictive type of lock. With an exclusive lock, no other user can access the database or table. An exclusive lock on a database or table prevents other users from obtaining any lock on the locked object. Exclusive locks are used when a Data Definition Language (DDL) command is executed (i.e., CREATE TABLE).

Write
Write locks enable users to modify data while maintaining data consistency. While the data has a write lock on it, other users can only obtain an access lock. During this time, all other locks are held in a queue until the write lock is released.

Read
Read locks are used to ensure consistency during read operations. Several users may hold concurrent read locks on the same data, during which time no data modification is permitted. Read locks prevent other users from obtaining the following locks on the locked data: exclusive locks and write locks.

Access
Access locks can be specified by users unconcerned about data consistency. The use of an access lock allows for reading data while modifications are in process. Access locks are designed for decision support on tables that are updated only by small, single-row changes. Access locks are sometimes called "stale read" locks, because you may get stale data that has not been updated. Access locks prevent other users from obtaining only the following lock on the locked data: exclusive locks.

RAID 1 – Hardware Data Protection
RAID 1 is a data protection scheme that uses mirrored pairs of disks to protect data from a single drive failure. RAID 1 requires double the number of disks, because every drive has an identical mirrored copy. Recovery with RAID 1 is faster than with RAID 5. The highest level of data protection is RAID 1 with fallback.

RAID 5 – Hardware Data Protection
RAID 5 uses a data parity scheme to provide data protection. RAID 5 uses the concept of a rank, which is a set of disks working together. Note that the disks in a rank are not directly cabled to each other. With a rank of 4 disks, if a disk fails, any missing data block may be reconstructed using the other 3 disks: if one of the disk drives in the rank becomes unavailable, the system uses the parity byte to calculate the missing data from the down drive, so the system can remain operational.

Disk Allocation in Teradata
The operating system, PDE, and the Teradata Database do not recognize the

physical disk hardware. This technique enables the use of RAID technology to provide data availability without affecting the operating system. Each software component recognizes and interacts with different components of the data storage environment:

Operating system: Recognizes a logical unit (LUN). The operating system recognizes the LUN as its "disk" and is not aware that it is actually writing to spaces on multiple disk drives.

PDE: Translates LUNs into vdisks using slices (in UNIX) or partitions (in Microsoft Windows and Linux), in conjunction with the Teradata Parallel Upgrade Tool.

Teradata Database: Recognizes a virtual disk (vdisk). Using vdisks instead of direct connections to physical disk drives enables the use of RAID technology with the Teradata Database.

Pdisks: User Data Space
Space on the physical disk drives is organized into LUNs. After a LUN is created, it is divided into partitions. In UNIX systems, a LUN consists of one partition, which is further divided into slices:
o A boot slice (a very small slice, taking up only 35 sectors)
o User slices for storing data. These user slices are called "pdisks" in the Teradata Database.

In summary, pdisks are the user slices (UNIX), partitions (Microsoft Windows), or partitions (Linux), and are used for storage of the tables in a database. A LUN may have one or more pdisks.

Vdisks
The pdisks (user slices or partitions, depending on the operating system) are assigned to an AMP through the software; no cabling is involved. The combined space on the pdisks is considered the AMP's vdisk. Each AMP in the system is assigned one vdisk. An AMP manages only its own vdisk (the disk space assigned to it), not the vdisk of any other AMP. All AMPs then work in parallel, each processing its portion of the data. Although numerous configurations are possible, generally all pdisks from a rank (RAID 5) or mirrored pair (RAID 1) are assigned to the same AMP for optimal performance. The AMP has no control over the physical disks or ranks that compose the vdisk.

Fallback
Fallback provides data protection at the table level by automatically storing a

copy of each permanent data row of a table on a different, or "fallback," AMP. If an AMP fails, the Teradata Database can access the fallback copy and continue operation. If you cluster your AMPs, fallback also provides for automatic recovery of the down AMP once you bring it back online. The benefits are:
• Permits access to table data when an AMP is offline.
• Adds a level of data protection beyond disk array RAID.
• Automatically applies changes to the offline AMP when it is back online.
The disadvantage of fallback is that this method doubles the storage space and the I/O (on inserts, updates, and deletes) for tables.

Clique
• A clique is a collection of nodes with shared access to the same disk arrays. Each multi-node system has at least one clique.
• Nodes and disks are interconnected via shared buses and thus can communicate directly.
• Nodes are interconnected via the BYNET.
• The shared access allows the system to continue operating during a node failure.
While the shared access is defined to the configuration, it is not actively used when the

system is up and running: on a running system, each rank of disks is addressed by exactly one node. If a node fails and then resets:
o The Teradata Database restarts across all the nodes.
o The BYNET redistributes the vprocs of the failed node to the other nodes within the clique. The vprocs remain operational and can access stored data.
o The Teradata Database recovers.
o Processing continues while the node is being repaired.

Clustering
Clustering provides data protection at the system level. A cluster is a logical group of AMPs that provide fallback capability. Teradata recommends a cluster size of 2. If an AMP fails, the remaining AMPs in the same cluster do their own work plus the work of the down AMP.

Although AMPs are virtual processes and cannot experience a hardware failure, they can be "down" if the AMP cannot get to the data on the disk array. If two disks in a rank go down, which is the only situation where an AMP will stay down, the AMP will be unable to access its data.

AMP Clustering and Fallback
If the primary AMP fails, the system can still access data on the fallback AMP. The following figure illustrates eight AMPs grouped into two clusters of four AMPs each. In this configuration, if AMP 3 (or its vdisk) fails and stays offline, its data remains available on AMPs 1, 2, and 4. Even if AMPs 3 and 5 fail simultaneously and remain offline, the data for each remains available on the other AMPs in its cluster. This ensures that one copy of a row is available if one or more hardware or software failures occur within an entire array or an entire node.

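Fallback protection is chosen per table at DDL time. A minimal sketch (the orders table and its columns are hypothetical):

CREATE TABLE orders, FALLBACK
  (order_id    INTEGER NOT NULL
  ,order_total DECIMAL(10,2))
UNIQUE PRIMARY INDEX (order_id);

/* fallback can be removed later if the protection is not worth the doubled space and I/O */
ALTER TABLE orders, NO FALLBACK;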
Down-AMP Recovery Journal
The Down-AMP Recovery Journal provides automatic data recovery on fallback-protected data tables when a clustered AMP is out of service: it automatically captures changes to fallback-protected tables from the other AMPs in the cluster. This journal consists of two system files stored in user DBC: DBC.ChangedRowJournal and DBC.OrdSysChngTable.

Each time a change is made to a fallback-protected row that has a copy residing on a down AMP, the Down-AMP Recovery Journal stores the table ID and row ID of the committed changes. When the AMP comes back online, the Teradata Database opens the Down-AMP Recovery Journal to update, or roll forward, any changes made while the AMP was down. The recovery operation uses fallback rows to replace primary rows, and primary rows to replace fallback rows. The journal ensures that the information on the fallback AMP and on the primary AMP is identical. Once

the transfer of information is complete and verified, the Down-AMP Recovery Journal is discarded automatically.

Transient Journal
The Teradata Database system offers a variety of methods to protect data. Some data protection methods require that you set options when you create tables, such as specifying fallback. Other methods are automatically activated when particular events occur in the system. Each data protection technique offers different types of advantages under different circumstances. The following list describes a few of the automatic data protection methods:
• The Transient Journal (TJ) automatically protects data by storing the image of an existing row before a change is made. It enables the snapshot to be copied back to, or a new row to be deleted from, the data table if a transaction fails or is aborted. The TJ protects against failures that may occur during transaction processing. To safeguard the integrity of your data, the TJ stores:
  o A snapshot of a row before an UPDATE or DELETE
  o The row ID after an INSERT
  o A control record for each CREATE and DROP statement
  o Control records for certain operations
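A sketch of the Transient Journal at work. BT and ET are the Teradata-mode abbreviations for BEGIN and END TRANSACTION, and statement names vary with session mode (ANSI vs. Teradata), so treat this as illustrative:

BT;
UPDATE employee
SET salary_amount = salary_amount * 1.05
WHERE department_number = 401;

ROLLBACK;  /* the before-images held in the TJ restore the original rows; ABORT is the Teradata-mode equivalent */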

Permanent Journal
• Is active continuously
• Is available for tables or databases
• Can contain "before" images, which permit rollback; "after" images, which permit rollforward; or both before and after images
• Provides rollforward recovery
• Provides rollback recovery
• Provides full recovery of non-fallback tables
• Reduces the need for frequent, full-table archives

Teradata Storage and Retrieval Architecture

Request Processing
1. The SQL request is sent from the client to the appropriate component on the node:
   a. Channel-attached client: the request is sent to the Channel Driver (through the TDP).
   b. Network-attached client: the request is sent to the Teradata Gateway (through CLIv2 or ODBC).
2. The request is passed to the PE(s).
3. The PEs parse the request into AMP steps.
4. The PE Dispatcher sends the steps to the AMPs over the BYNET.
5. The AMPs perform operations on the data on the vdisks.
6. The response is sent back to the PEs over the BYNET.
7. The PE Dispatcher receives the response.
8. The response is returned to the client (channel-attached or network-attached).

Parsing Engine Request Processing
The SQL parser handles all incoming SQL requests. It processes an incoming request as follows:

Stage 1: The Parser looks in the Request cache to determine if the request is already there.
IF the request is in the Request cache, THEN the Parser reuses the plastic steps found in the cache and passes them to gncApply (go to step 8 after checking access rights, step 4).
IF the request is not in the Request cache, THEN the Parser begins processing the request with the Syntaxer.

Stage 2: The Syntaxer checks the syntax of the incoming request.
IF there are no errors, THEN the Syntaxer converts the request to a parse tree and passes it to the Resolver.
IF there are errors, THEN the Syntaxer passes an error message back to the requestor and stops.

Stage 3: The Resolver adds information from the Data Dictionary (or a cached copy of the information) to convert database, table, view, stored procedure, and macro names to internal identifiers.

Stage 4: The Security module checks access rights.
IF the access rights are valid, THEN the Security module passes the request to the Optimizer.
IF the access rights are not valid, THEN the Security module aborts the request, passes an error message, and stops.

Stage 5: The Optimizer determines the most effective way to implement the SQL request.

Stage 6: The Optimizer scans the request to determine where locks should be placed, then passes the optimized parse tree to the Generator.

Stage 7: The Generator transforms the optimized parse tree into plastic steps and passes them to gncApply. Plastic steps are directives to the database management system that do not contain data values.

Stage 8: gncApply takes the plastic steps produced by the Generator and transforms them into concrete steps. Concrete steps are directives to the AMPs that contain any needed user- or session-specific values and any needed data parcels.

Stage 9: gncApply passes the concrete steps to the Dispatcher.

The Dispatcher
The Dispatcher controls the sequence in which steps are executed. It also passes the steps to the BYNET to be distributed to the AMP database management software, as follows:

Stage 1: The Dispatcher receives concrete steps from gncApply.
Stage 2: The Dispatcher places the first step on the BYNET, tells the BYNET whether the step is for one AMP, several AMPs, or all AMPs, and waits for a completion response.
Stage 3: The Dispatcher receives a completion response from all expected AMPs and places the next step on the BYNET. It continues to do this until all the AMP steps associated with a request are done.

Whenever possible, the Teradata RDBMS performs steps in parallel to enhance performance. If there are no dependencies between a step and the following step, the following step can be dispatched before the first step completes, and the two will execute in parallel. If there is a dependency (for example, the following step requires as input data that is produced by the first step), then the following step cannot be dispatched until the first step completes.

The AMPs

The AMPs are responsible for obtaining the rows required to process the requests (assuming that the AMPs are processing a SELECT statement). The BYNET system controls the transmission of messages to and from the AMPs. An AMP step can be sent to one of the following:
• One AMP
• A selected set of AMPs, called a dynamic BYNET group
• All AMPs in the system

Teradata SQL Reference

Data Definition Language (DDL) – defines database structures (tables, views, macros, triggers, users, etc.)
CREATE / REPLACE / DROP / ALTER

Data Manipulation Language (DML) – manipulates rows and data values
SELECT / INSERT / UPDATE / DELETE

Data Control Language (DCL) – grants and revokes access rights
GRANT / REVOKE

Teradata extensions to SQL: HELP, SHOW, EXPLAIN

CREATE SET TABLE Per_DB.Employee, FALLBACK,

NO BEFORE JOURNAL,
NO AFTER JOURNAL
  (employee_number INTEGER NOT NULL,
   dept_number SMALLINT,
   first_name VARCHAR(20) NOT CASESPECIFIC,
   birth_date DATE FORMAT 'YYYY-MM-DD',
   job_code INTEGER COMPRESS,
   salary_amount DECIMAL(10,2))
UNIQUE PRIMARY INDEX (employee_number)
INDEX (dept_number);

Views
Views are pre-defined subsets of existing tables, consisting of specified columns and/or rows from the table(s). A single-table view:
• is a window into an underlying table
• allows users to read and update a subset of the underlying table
• has no data of its own

CREATE VIEW Emp_403 AS
SELECT employee_number
      ,last_name
      ,first_name
      ,department_number
      ,hire_date
FROM Employee
WHERE department_number = 403;

CREATE VIEW EmpDept AS
SELECT last_name
      ,first_name
      ,department_name
FROM Employee E INNER JOIN Department D
ON E.department_number = D.department_number;

MACRO
A MACRO is a predefined set of SQL statements which is logically stored in a database. Macros may be created for frequently occurring queries or sets of operations. Macros have many features and benefits:
• Simplify end-user access
• Control which operations may be performed by users
• May accept user-provided parameter values
• Are stored on the RDBMS, thus available to all clients
• Reduce query size, thus reducing LAN/channel traffic
• Are optimized at execution time
• May contain multiple SQL statements

To create a macro:
CREATE MACRO Customer_List AS (SELECT customer_name FROM Customer;);

To execute a macro:
EXEC Customer_List;

To replace a macro:
REPLACE MACRO Customer_List AS
(SELECT customer_name, customer_number FROM Customer;);
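Because macros may accept user-provided parameter values, a parameterized sketch follows (the macro name and filter value are illustrative; parameters are referenced with a leading colon):

CREATE MACRO Dept_Emps (dept INTEGER) AS
(SELECT last_name, first_name
 FROM employee
 WHERE department_number = :dept;);

EXEC Dept_Emps (403);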

INSERT INTO target_table SELECT * FROM source_table;

INSERT INTO birthdays
SELECT employee_number
      ,last_name
      ,first_name
      ,birthdate
FROM employee;

Temporary Tables
There are three types of temporary tables implemented in Teradata:
• Global
• Volatile
• Derived

Derived Tables
Derived tables were introduced in Teradata V2R2. Some characteristics of a derived table include:
• Local to the query: it exists for the duration of the query; when the query is done, the table is discarded.
• Incorporated into the SQL query syntax.
• No data dictionary involvement: less system overhead.
• Spool rows are discarded when the query finishes.

UPDATE T1
FROM (SELECT t2_1, MIN(t2_2) FROM T2 GROUP BY 1) AS D (D1, D2)
SET Field2 = D2
WHERE Field1 = D1;

Volatile Temporary Tables
Volatile tables have a lot of the advantages of derived tables, plus additional benefits such as:
• Local to a session: it exists throughout the entire session, not just a single query.
• It must be explicitly created using the CREATE VOLATILE TABLE syntax.
• It is discarded automatically at the end of the session.
• There is no data dictionary involvement.

Global Temporary Tables
The major difference between a global temporary table and a volatile temporary table is that the global table has a definition in the data dictionary, so the definition may be shared by many users; however, each user session may have its own instance. Attributes of a global temporary table include:
• Local to a session.
• Uses the CREATE GLOBAL TEMPORARY TABLE syntax.
• Creates and keeps the table definition in the data dictionary.
• Each user session can materialize its own local instance of the table.
• The materialized instance of the table is discarded at session end.

Example of a derived table: to get the top three selling items across all stores.

Solution:
SELECT t.prodid, t.sumsales, RANK(t.sumsales)
FROM (SELECT prodid, SUM(sales)
      FROM salestbl
      GROUP BY 1) AS t (prodid, sumsales)
QUALIFY RANK(sumsales) <= 3;

Result:
prodid   sumsales      Rank
A        170000.00     1
D        115000.00     2
C        110000.00     3

Some things to note about the above query:
• The name of the derived table is 't'.
• The SELECT statement is always in parentheses following the FROM clause.
• The table is created in spool using the inner SELECT.
• The derived column names are 'prodid' and 'sumsales'.

Derived tables are a good choice if:
• The temporary table is required for this query but no others.
• The query will be run only one time with this data.

Volatile Temporary Tables
Volatile temporary tables are similar to derived tables in that they:
• Are materialized in spool.
• Require no Data Dictionary access or transaction locks.
• Are designed for optimal performance.

They are different from derived tables in that they:
• Are local to the session, not the query.
• Can be used with multiple queries in the session.
• Are dropped manually at any time, or automatically at session end.
• Have a table definition that is kept in cache.
• Must be explicitly created with the CREATE VOLATILE TABLE statement.

Example:
CREATE VOLATILE TABLE vt_deptsal, LOG
  (deptno  SMALLINT
  ,avgsal  DEC(9,2)
  ,maxsal  DEC(9,2)
  ,minsal  DEC(9,2)
  ,sumsal  DEC(9,2)
  ,empcnt  SMALLINT)
ON COMMIT PRESERVE ROWS;

The default is ON COMMIT DELETE ROWS, which means the data is deleted when the query is committed. In the example above, we stated ON COMMIT PRESERVE ROWS, which allows us to use the volatile table again for other queries in the session.
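A usage sketch for vt_deptsal, loading it from the employee table used in these notes and reading it back later in the same session:

INSERT INTO vt_deptsal
SELECT department_number
      ,AVG(salary_amount)
      ,MAX(salary_amount)
      ,MIN(salary_amount)
      ,SUM(salary_amount)
      ,COUNT(*)
FROM employee
GROUP BY 1;

SELECT * FROM vt_deptsal;  /* rows survive between queries because of ON COMMIT PRESERVE ROWS */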

LOG indicates that a transaction journal is maintained; LOG is the default, while NO LOG allows for better performance. Volatile tables do not survive a system restart.

Examples:
CREATE VOLATILE TABLE username.table1      (explicit)
CREATE VOLATILE TABLE table1               (implicit)
CREATE VOLATILE TABLE databasename.table1  (error if databasename is not the username)

Limitations on Volatile Tables
The following commands are not applicable to VTs:
• COLLECT/DROP/HELP STATISTICS
• CREATE/DROP INDEX
• ALTER TABLE
• GRANT/REVOKE privileges
• DELETE DATABASE/USER (does not drop VTs)

VTs may not:
• Use Access Logging
• Be loaded with the MultiLoad or FastLoad utilities
• Be renamed

VTs may be referenced in views and macros.

Example:
CREATE MACRO vt1 AS (SELECT * FROM vt_deptsal;);

Session A: EXEC vt1;
Session B: EXEC vt1;

Each session has its own materialized instance of vt_deptsal, so each session may return different results. VTs may be dropped before the session ends:

DROP TABLE vt_deptsal;

Global Temporary Tables
Global temporary tables are created using the CREATE GLOBAL TEMPORARY command. They require a base definition, which is stored in the Data Dictionary (DD). Global temporary tables are materialized by the first of the following SQL statements to access the table:
• CREATE INDEX ... ON TEMPORARY ...
• DROP INDEX ... ON TEMPORARY ...
• COLLECT STATISTICS
• DROP STATISTICS
• INSERT
• INSERT SELECT

Global temporary tables are different from volatile tables in that:
• Their base definition is permanent and kept in the DD.
• They require a privilege to materialize the table (see the list above).
• Space is charged against the user's 'temporary space' allocation.
• The user can materialize up to 32 global tables per session.
• They can survive a system restart.

Global temporary tables are similar to volatile tables because:
• Each instance of a global temporary table is local to a session.
• Materialized tables are dropped automatically at the end of the session (but the base definition remains in the DD).
• They have LOG and ON COMMIT PRESERVE/DELETE options.
• Materialized table contents are not sharable with other sessions.

Example:
CREATE GLOBAL TEMPORARY TABLE gt_deptsal
  (deptno  SMALLINT
  ,avgsal  DEC(9,2)
  ,maxsal  DEC(9,2)
  ,minsal  DEC(9,2)
  ,sumsal  DEC(9,2)
  ,empcnt  SMALLINT);

With global temporary tables, the base table definition is stored in the Data Dictionary. The ON COMMIT DELETE ROWS clause is the default, so it does not need to appear in the CREATE TABLE statement. If you want ON COMMIT PRESERVE ROWS, you must specify that in the CREATE TABLE statement.

ALTER TABLE may also be used to change the defaults.

Creating Tables Using Subqueries
Subqueries may be used to limit the column and row selection for the target table. Consider the employee table:

SHOW TABLE employee;

CREATE SET TABLE Customer_Service.employee, FALLBACK,
  NO BEFORE JOURNAL,
  NO AFTER JOURNAL
  (employee_number INTEGER,
   manager_employee_number INTEGER,
   department_number INTEGER,
   job_code INTEGER,
   last_name CHAR(20) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
   first_name VARCHAR(30) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
   hire_date DATE FORMAT 'YY/MM/DD' NOT NULL,
   birthdate DATE FORMAT 'YY/MM/DD' NOT NULL,
   salary_amount DECIMAL(10,2) NOT NULL)
UNIQUE PRIMARY INDEX (employee_number);

Example: This example uses a subquery to limit the column choices.

CREATE TABLE emp1 AS
 (SELECT employee_number
        ,department_number
        ,salary_amount
  FROM employee)
WITH NO DATA;

SHOW TABLE emp1;

CREATE SET TABLE Customer_Service.emp1, NO FALLBACK,
  NO BEFORE JOURNAL,
  NO AFTER JOURNAL
  (employee_number INTEGER,
   department_number INTEGER,
   salary_amount DECIMAL(10,2) NOT NULL)
PRIMARY INDEX (employee_number);

Note: When the subquery form of CREATE AS is used:
• Table attributes (such as FALLBACK) are not copied from the source table; they come from the standard system defaults (e.g., NO FALLBACK) unless otherwise specified.
• Secondary indexes, if present, are not copied from the source table.

The first column specified (employee_number) is created as a NUPI unless otherwise specified. There are some limitations on the use of subqueries for table creation:
• The ORDER BY clause is not allowed.
• All columns or expressions must have an assigned or defaulted name.

Renaming Columns
Columns may be renamed using the AS clause (the Teradata NAMED extension may also be used).

Example: This example changes the column names of the subset of columns used for the target table.

CREATE TABLE emp1 AS
 (SELECT employee_number AS emp
        ,department_number AS dept
        ,salary_amount AS sal
  FROM employee)
WITH NO DATA;

HELP Command
HELP DATABASE databasename;
HELP USER username;
HELP TABLE tablename;
HELP VIEW viewname;
HELP MACRO macroname;
HELP COLUMN table-or-viewname.*; (all columns) or HELP COLUMN tablename.columnname;
HELP INDEX tablename;
HELP STATISTICS tablename;
HELP JOIN INDEX join_indexname;
HELP TRIGGER triggername;

The SHOW Command
The SHOW command displays the current Data Definition Language (DDL) of a database object (e.g., Table, View, Macro, Trigger, Join Index, or Stored Procedure). It is used primarily to see how an object was created.

Command                    Returns
SHOW TABLE tablename;      CREATE TABLE statement
SHOW VIEW viewname;        CREATE VIEW statement
SHOW MACRO macroname;      CREATE MACRO statement

The EXPLAIN Command
The EXPLAIN function looks at a SQL request and responds in English with how the optimizer plans to execute it. It does not execute the statement, and it is a good way to see what database resources will be used in processing your request. EXPLAIN provides a wealth of information, including the following:
1.) An estimate of the cost of the query (in time increments).
2.) An estimate of the number of rows which will be processed.
3.) Whether individual steps within the query may execute concurrently (i.e., parallel steps).
4.) Which indexes, if any, will be used in the query.

For instance, if you see that your request will force a full-table scan on a very large table, or cause a Cartesian product join, you may decide to rewrite the request so that it executes more efficiently.

EXPLAIN SELECT * FROM department;

*** QUERY COMPLETED. 10 ROWS FOUND. 1 COLUMN RETURNED. ***

Explanation
1. First, we lock a distinct CUSTOMER_SERVICE."pseudo table" for read on a RowHash to prevent global deadlock for CUSTOMER_SERVICE.department.

2. Next, we lock CUSTOMER_SERVICE.department for read.
3. We do an all-AMPs RETRIEVE step from CUSTOMER_SERVICE.department by way of an all-rows scan with no residual conditions into Spool 1, which is built locally on the AMPs. The size of Spool 1 is estimated with low confidence to be 4 rows. The estimated time for this step is 0.15 seconds.
4. Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.15 seconds.

BETWEEN
The BETWEEN operator looks for values between the given lower limit <a> and the given upper limit <b>, as well as any values that equal either <a> or <b> (BETWEEN is inclusive).

Example: Select the name and the employee's manager number for all employees whose job codes are in the 430000 range.

SELECT first_name
      ,last_name
      ,manager_employee_number
FROM employee
WHERE job_code BETWEEN 430000 AND 439999;

IN Clause
Use the IN operator as shorthand when multiple values are to be tested. Select the name and department for all employees in either department 401 or 403. (This query may also be written using the OR operator, which we shall see shortly.)

SELECT first_name
      ,last_name
      ,department_number
FROM employee
WHERE department_number IN (401, 403);

NOT IN Clause
Use the NOT IN operator to locate rows for which a column does not match any of a set of values. Specify the set of values which disqualifies the row.

SELECT first_name
      ,last_name
      ,department_number
FROM employee
WHERE department_number NOT IN (401, 403);

Using NULL
Use NULL in a SELECT statement to define that a range of values either IS NULL or IS NOT NULL.

SELECT employee_number
      ,extension
FROM employee_phone
WHERE extension IS NULL;

LIKE Operator
The LIKE operator searches for patterns matching character data strings.

String pattern example:    Meaning:
LIKE 'JO%'                 begins with 'JO'
LIKE '%JO%'                contains 'JO' anywhere
LIKE '__HN'                contains 'HN' in the 3rd and 4th positions
LIKE '%H_'                 contains 'H' in the next-to-last position
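A complete LIKE query sketch against the employee table, using a pattern from the table above:

SELECT first_name
      ,last_name
FROM employee
WHERE last_name LIKE '%JO%';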

ADD_MONTHS
The ADD_MONTHS function allows the addition of a specified number of months to an existing date, resulting in a new date.

Query                                   Result
SELECT DATE; /* March 20, 2001 */       01/03/20
SELECT ADD_MONTHS (DATE, 2);            2001-05-20
SELECT ADD_MONTHS (DATE, 12*14);        2015-03-20
SELECT ADD_MONTHS (DATE, -3);           2000-12-20

Data Conversions Using CAST
The CAST function allows you to convert a value or expression from one data type to another.

SELECT CAST (50500.75 AS INTEGER);      Result: 50500 (truncated)
SELECT CAST (50500.75 AS DEC(6,0));     Result: 50501 (rounded)
SELECT CAST (6.74 AS DEC(2,1));         Result: 6.7 (drops precision)
SELECT CAST (6.75 AS DEC(2,1));         Result: 6.8 (rounds up to even number)

SELECT CAST (6.85 AS DEC(2,1));         Result: 6.8 (rounds down to even number)

Attributes and Functions
Attributes are characteristics which may be defined for columns, such as titles and formats. Functions are performed on columns to alter their contents in some way. Expressions are columns and/or values combined with mathematical operators (i.e., Col1 + Col2 + 3).

• Use TITLE to add a heading to your output that differs from the column or expression name.
• Use AS to specify a name for a column or expression in a SELECT statement.
• Use FORMAT to alter the display of a column or expression.
• Use TRIM to trim blank characters or binary zeroes from data.
• Use CHARACTERS to determine the number of characters in a string.

Attributes for columns and expressions include the following:

Attribute   Description                          Standard
AS          Provides a new name for a column.    ANSI
TITLE       Provides a title for a column.       Teradata extension

FORMAT      Provides formatting for a column.    Teradata extension

Functions for columns and expressions include the following:

Function    Description                                              Standard
CHARACTERS  Counts the number of characters in a column.             Teradata extension
TRIM        Trims trailing or leading blanks or binary zeroes.       ANSI

Aggregate Operators
Aggregate operators perform computations on values in a specified group. Aggregate operations ignore NULLs and produce ONLY single-line answers. The five aggregate operators are:

ANSI Standard    Teradata Supported
COUNT            COUNT
SUM              SUM
AVG              AVERAGE, AVG
MAX              MAXIMUM, MAX
MIN              MINIMUM, MIN

Example:
SELECT COUNT ( salary_amount ) (TITLE 'COUNT')

      ,SUM ( salary_amount ) (TITLE 'SUM SALARY')
      ,AVG ( salary_amount ) (TITLE 'AVG SALARY')
      ,MAX ( salary_amount ) (TITLE 'MAX SALARY')
      ,MIN ( salary_amount ) (TITLE 'MIN SALARY')
FROM employee;

Result:
COUNT   SUM SALARY   AVG SALARY   MAX SALARY   MIN SALARY
6       213750.00    35625.00     49700.00     29250.00

NOTE: If one salary amount value had been NULL, the average would have reflected an average of only five salaries. To COUNT all table rows, use COUNT (*), which counts rows regardless of the presence of NULLs.

Aggregation Using GROUP BY
To find the total amount of money spent by each department on employee salaries, we could attempt to get an answer by running a separate query against each department. GROUP BY provides the answer with a single query, regardless of how many departments there are.

SELECT department_number
      ,SUM (salary_amount)
FROM employee
GROUP BY department_number;

department_number   Sum(salary_amount)
401                 74150.00
403                 80900.00
301                 58700.00

GROUP BY and ORDER BY
GROUP BY does not imply any ordering of the output; an ORDER BY clause is needed to control the order of the output.

GROUP BY and HAVING
HAVING is just like WHERE, except that it applies to groups rather than rows. HAVING qualifies and selects only those groups that satisfy a conditional expression.

GROUP BY Summary
Here is the order of evaluation within a SQL statement if all four clauses are present (a combined sketch follows this list):
WHERE
• Eliminates some or all rows immediately, based on condition.
• Only rows which satisfy a WHERE condition are eligible for inclusion in groups.
GROUP BY
• Puts qualified rows into the desired groupings.
HAVING
• Eliminates some (or all) of the groupings, based on condition.
ORDER BY
• Sorts the final groups for output. (ORDER BY is not implied by GROUP BY.)
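Putting the four clauses together, a minimal sketch against the employee table (the filter and threshold values are illustrative):

SELECT department_number
      ,SUM (salary_amount)
FROM employee
WHERE salary_amount > 0
GROUP BY department_number
HAVING SUM (salary_amount) > 60000
ORDER BY department_number;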

Using WITH...BY
The WITH...BY clause is a Teradata extension that creates subtotal lines for a detailed list. It differs from GROUP BY in that detail lines are not eliminated. The WITH...BY clause allows subtotal "breaks" on more than one column and generates an automatic sort on all "BY" columns.

SELECT last_name AS NAME
      ,salary_amount AS SALARY
      ,department_number AS DEPT
FROM employee
WHERE employee_number BETWEEN 1003 AND 1008
WITH SUM(salary) (TITLE 'Dept Total')
    ,AVG(salary) (TITLE 'Dept Avg ') BY DEPT;

Result:
NAME        SALARY      DEPT
Stein       29450.00    301
Kanieski    29250.00    301
            ----------
Dept Total  58700.00
Dept Avg    29350.00

Johnson     36300.00    401
Trader      37850.00    401
            ----------
Dept Total  74150.00
Dept Avg    37075.00

CHARACTERS Function
The CHARACTERS function is a Teradata-specific function which counts the number of characters in a string. To find all employees who have more than five characters in their first name:

Solution:
SELECT first_name
FROM employee
WHERE CHARACTERS (first_name) > 5;

TRIM Function
Use the TRIM function to suppress leading and/or trailing blanks in a CHAR column, or leading and/or trailing binary zeroes in a BYTE or VARBYTE column. It is particularly useful for working with VARCHAR fields, where the size of the string can vary from row to row. TRIM is most useful when performing string concatenations.

There are several variations of the TRIM function:
TRIM ([expression])                  trims leading and trailing blanks/binary zeroes
TRIM (BOTH FROM [expression])        trims leading and trailing blanks/binary zeroes
TRIM (TRAILING FROM [expression])    trims trailing blanks/binary zeroes
TRIM (LEADING FROM [expression])     trims leading blanks/binary zeroes

Solution 1:
SELECT first_name
      ,last_name (TITLE 'last')
FROM employee
WHERE CHAR (TRIM (TRAILING FROM last_name)) = 4;

Solution 2:
SELECT first_name
      ,last_name (TITLE 'last')
FROM employee
WHERE CHAR (TRIM (last_name)) = 4;

TRIM with Concatenation
The || (double pipe) symbol is the concatenation operator that creates a new string from the combination of the first string followed by the second.

Example 1: Concatenation of literals without the TRIM function:

SELECT 'Jones' || ', ' || 'Mary' AS Name;

Name
------------------------------
Jones, Mary

TRIM with Other Characters

Example 1:
SELECT TRIM(BOTH '?' FROM '??????PAUL??????') AS Trim_String;

Trim_String
----------------
PAUL

Example 2:
SELECT TRIM(LEADING '?' FROM '??????PAUL??????') AS Trim_String;

Trim_String
----------------
PAUL??????

Example 3:
SELECT TRIM(TRAILING '?' FROM '??????PAUL??????') AS Trim_String;

Trim_String
----------------
??????PAUL

FORMAT Phrase
The FORMAT phrase can be used to format column output and override the default format. For example:

SELECT salary_amount (FORMAT '$$$,$$9.99')
FROM employee
WHERE employee_number = 1004;

Some examples:
FORMAT '999999'     Data: 08777     Result: 008777
FORMAT 'ZZZZZ9'     Data: 08777     Result: 8777
FORMAT '999-9999'   Data: 6495252   Result: 649-5252
FORMAT 'X(3)'       Data: 'Smith'   Result: Smi
FORMAT '$$9.99'     Data: 85.65     Result: $85.65
FORMAT '999.99'     Data: 85.65     Result: 085.65
FORMAT 'X(3)'       Data: 85.65     Result: Error

String Functions
Several functions are available for working with strings in SQL. Also, the concatenation operator is provided for combining strings. The string functions and the concatenation operator are listed here.

String Operator   Description
||                Concatenates (combines) character strings together.
SUBSTRING         Obtains a section of a character string.
INDEX             Locates a character position in a string.

TRIM              Trims blanks from a string.
UPPER             Converts a string to uppercase.

SELECT SUBSTRING ('catalog' FROM 5 FOR 3);    Result: 'log'
SELECT SUBSTR ('catalog', 5, 3);              Result: 'log'

SUBSTRING                            Result            SUBSTR Result
SUBSTRING('catalog' FROM 5 FOR 4)    'log'             'log'
SUBSTRING('catalog' FROM 0 FOR 3)    'ca'              'ca'
SUBSTRING('catalog' FROM -1 FOR 3)   'c'               'c'
SUBSTRING('catalog' FROM 8 FOR 3)    0-length string   0-length string
SUBSTRING('catalog' FROM 1 FOR 0)    0-length string   0-length string
SUBSTRING('catalog' FROM 5 FOR -2)   error             error
SUBSTRING('catalog' FROM 0)          'catalog'         'catalog'

concatenation of any string with a null produces a null result. Aditya Enclave. The COALESCE Function allows values to be substituted for nulls. the result is null.'x')) FROM tblx. ph-8374187525 Page 62 . Result is: 'ab' If either column contains a null. Niligiri Block. returns 3 returns 1 SELECT INDEX ('Adams'. (The COALESCE function is described in more detail in Level 3 Module 6.) Example: Assume col1 = 'a'. Ameerpet. col2 = 'b' SELECT col1 | | col2 From tblx.SUBSTRING(‗catalog‘ FROM 10) SUBSTRING(‗catalog‘ FROM -1) SUBSTRING(‗catalog‘ FROM 3) 0 length string 0 length string 0 length string 0 length string ‗talog‘ ‗talog‘ COALESCE Function Normally. #306. Result is: 'ax' INDEX Function The INDEX function locates a character position in a string. col2 = null SELECT col1 | | (COALESCE (col2. 't'). SELECT INDEX ('cat'. Hyderabad. Visualpath. 'a'). Solution: Assume col1 = 'a'.

SELECT INDEX ('dog', 'e');        returns 0

DATE Formats

SYNTAX                      RESULT
FORMAT 'YYYY/MM/DD'         1996/03/27
FORMAT 'DDbMMMbYYYY'        27 Mar 1996
FORMAT 'mmmBdd,Byyyy'       Mar 27, 1996
FORMAT 'DD.MM.YYYY'         27.03.1996

SELECT last_name
      ,first_name
      ,hire_date (FORMAT 'mmmBdd,Byyyy')
FROM employee
ORDER BY last_name;

last_name    first_name    hire_date
Johnson      Darlene       Oct 15, 1976
Kanieski     Carol         Feb 01, 1977
Ryan         Loretta       Oct 15, 1976

Extracting Portions of DATEs

The EXTRACT function allows for easy extraction of the year, month and day from any DATE data type. The following examples demonstrate its usage.

Query                                     Result
SELECT DATE;  /* March 20, 2001 */        01/03/20 (Default format)
SELECT EXTRACT(YEAR FROM DATE);           2001
SELECT EXTRACT(MONTH FROM DATE);          03
SELECT EXTRACT(DAY FROM DATE);            20

Date arithmetic may be applied to the date prior to the extraction. Added values always represent days.

Query                                     Result
SELECT EXTRACT(YEAR FROM DATE + 365);     2002
SELECT EXTRACT(MONTH FROM DATE + 30);     04
SELECT EXTRACT(DAY FROM DATE + 12);       01

Extracting From Current Time

The EXTRACT function may also be applied against the current time. It permits extraction of hours, minutes and seconds.

Query                                     Result
SELECT TIME;  /* 2:42 PM */               14:42:32 (Default format)
SELECT EXTRACT(HOUR FROM TIME);           14
SELECT EXTRACT(MINUTE FROM TIME);         42
SELECT EXTRACT(SECOND FROM TIME);         32

Set Operators

The following are graphic representations of the three set operators: INTERSECT, UNION and EXCEPT.
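EXTRACT is not limited to the current DATE; it works on any stored DATE column. As a sketch using the employee table from the earlier examples, this finds everyone hired in October of any year:

SELECT last_name
      ,hire_date
FROM employee
WHERE EXTRACT(MONTH FROM hire_date) = 10;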

The INTERSECT operator returns rows from multiple sets which share some criteria in common.

SELECT manager_employee_number FROM employee
INTERSECT
SELECT manager_employee_number FROM department
ORDER BY 1;

Results
manager_employee_number
801
1003
1005
1011

The UNION operator returns all rows from multiple sets, displaying duplicate rows only once.

SELECT first_name
      ,last_name
      ,'employee' (TITLE 'employee//type')
FROM employee
WHERE manager_employee_number = 1019
UNION
SELECT first_name
      ,last_name
      ,' manager '
FROM employee
WHERE employee_number = 1019
ORDER BY 2;

The EXCEPT operator subtracts the contents of one set from the contents of another.

SELECT manager_employee_number FROM department
EXCEPT
SELECT manager_employee_number FROM employee
ORDER BY 1;

Result
manager_employee_number
1016
1099

NOTE: Using the Teradata keyword ALL in conjunction with the UNION operator allows duplicate rows to remain in the result set.

What is a Trigger?

A trigger is an object in a database, like a macro or view. A trigger is created with a CREATE TRIGGER statement and defines events that will happen when some other event, called a triggering event, occurs. A trigger consists of one or more SQL statements which are associated with a table and which are executed when the trigger is 'fired'.

In summary, a Trigger is:

• One or more stored SQL statements associated with a table.
• An event driven procedure attached to a table.
• An object in a database, like tables, views and macros.

Many of the DDL commands which apply to other database objects also apply to triggers. All of the following statements are valid with triggers:

• CREATE TRIGGER
• DROP TRIGGER
• SHOW TRIGGER
• ALTER TRIGGER
• RENAME TRIGGER
• REPLACE TRIGGER
• HELP TRIGGER

Triggers may not be used in conjunction with:

• The FastLoad utility
• The MultiLoad utility
• Updatable Cursors (Stored Procedures or Preprocessor)
• Join Indexes

To use the FastLoad or MultiLoad utilities, or to create stored procedures with updatable cursors (covered in a later module), you must first disable any triggers defined on the affected tables via an ALTER TRIGGER command (see the sketch at the end of this section). Join indexes are never permitted on tables which have defined triggers.

You can drop all Triggers using:

• DELETE DATABASE
• DELETE USER

Privileges are required to CREATE and DROP Triggers:

• GRANT CREATE TRIGGER
• GRANT DROP TRIGGER
• REVOKE CREATE TRIGGER
• REVOKE DROP TRIGGER

These new privileges have been created in the Data Dictionary/Directory.

Note: The Teradata implementation of triggers was updated with Release V2R5.1 (January 2004) to conform to the ANSI specification. The changes are fully demonstrated in Level 6, Module 15 of this SQL Web-based training. In the current module (Module 3), notation will be provided to indicate which features are no longer supported in V2R5.1.

Triggered and Triggering Statements

A trigger is said to 'fire' when the triggering event occurs and various conditions are met. When a trigger fires, it causes other events, called triggered events, to occur. A triggered event consists of one or more triggered statements.

A triggering statement is an SQL statement which causes a trigger to fire. It is the 'launching' statement. Triggering statements may be any of the following:

• INSERT
• UPDATE
• DELETE
• INSERT SELECT

A triggered statement is the statement (or statements) which are executed as a result of firing the trigger. Triggered statements may be any of these:

• INSERT
• UPDATE
• DELETE
• INSERT SELECT
• ABORT/ROLLBACK
• EXEC (macro)

A macro may only contain the approved DML statements. Triggered statements may never be any of these:

• BEGIN TRANSACTION
• CHECKPOINT
• COMMIT
• END TRANSACTION
• SELECT

You can do transaction processing in a triggered statement without using Begin Transaction/End Transaction (BTET). We will see how to do this later.
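As a sketch of how these DDL commands are used in practice (the trigger name trig1 matches the example in the next section; remember that triggers must be disabled before running FastLoad or MultiLoad against the table):

SHOW TRIGGER trig1;              /* display the CREATE TRIGGER text */
ALTER TRIGGER trig1 DISABLED;    /* disable before a utility load */
ALTER TRIGGER trig1 ENABLED;     /* re-enable afterward */
DROP TRIGGER trig1;              /* remove the trigger entirely */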


Trigger Types

There are two types of triggers: ROW triggers and STATEMENT triggers.

ROW triggers:
• fire once for each row affected by the triggering statement.
• reference OLD and NEW rows of the subject table.
• permit only simple inserts, rollbacks, or macros containing them in a triggered statement.

STATEMENT triggers:
• fire once per statement.
• reference OLD_TABLE and NEW_TABLE subject tables.

Example 1
CREATE TABLE tab1 (a INT, b INT, c INT);
CREATE TABLE tab2 (d INT, e INT, f INT);
CREATE TABLE tab3 (g INT, h INT, i INT);

Example 2
CREATE TRIGGER trig1 AFTER INSERT ON tab1
REFERENCING NEW_TABLE AS newtable
FOR EACH STATEMENT
(INSERT INTO tab2
   SELECT a + 10, b + 10, c FROM newtable;);

Example 3
CREATE TRIGGER trig2 AFTER INSERT ON tab2
REFERENCING NEW_TABLE AS newtable
FOR EACH STATEMENT
(INSERT INTO tab3
   SELECT d + 100, e + 100, f FROM newtable;);

Example 4
INSERT INTO tab1 VALUES (1,2,3);

SELECT * FROM tab1;

a           b           c
----------- ----------- -----------
1           2           3

SELECT * FROM tab2;

d           e           f
----------- ----------- -----------
11          12          3

SELECT * FROM tab3;

g           h           i
----------- ----------- -----------
111         112         3

Inserting one row into tab1 fires trig1, which inserts (11, 12, 3) into tab2; that insert in turn fires trig2, which inserts (111, 112, 3) into tab3.

RANDOM Function

The RANDOM function may be used to generate a random number within a specified range.

RANDOM (Lower limit, Upper limit) returns a random number between the lower and upper limits, inclusive. Both limits must be specified; otherwise a random number between 0 and approximately 4 billion is generated.

Consider the department table, which consists of nine rows.

SELECT department_number FROM department;

department_number
-----------------
501
301
201
600
100
402
403
302
401

Limitations On Use Of RANDOM

• RANDOM is non-ANSI standard.
• RANDOM may be used in a SELECT list or a WHERE clause, but not both.
• RANDOM may be used in Updating, Inserting or Deleting rows.
• RANDOM may not be used with aggregate or OLAP functions.
• RANDOM cannot be referenced by numeric position in a GROUP BY or ORDER BY clause.
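As an illustration, RANDOM can appear in the SELECT list to tag each department row with a random number between 1 and 9 inclusive (the values returned will differ on every execution):

SELECT department_number
      ,RANDOM(1, 9)
FROM department;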

Join Processing: Inner Join

Suppose we need to display employee number, last name, and department name for all employees. The employee number and last name come from the employee table. The department name comes from the department table. A join, by definition, is necessary whenever data is needed from more than one table or view.

In order to perform a join, we need to find a column that both tables have in common. Fortunately, both tables have a department number column, which may be used to join the rows of both tables.

Solution
SELECT employee.employee_number
      ,employee.last_name
      ,department.department_name
FROM employee
INNER JOIN department
ON employee.department_number = department.department_number;

employee_number   last_name   department_name
1006              Stein       research and development
1008              Kanieski    research and development
1005              Ryan        education
1004              Johnson     customer support
1007              Villegas    education
1003              Trader      customer support

We fully qualified every column referenced in our SELECT statement to include the table that the column is in (e.g., employee.employee_number). It is only necessary to qualify columns that have identical names in both tables (i.e., department_number). The ON clause is used to define the join condition used to link the two tables.

Cross Joins

A Cross Join is a join that requires no join condition (Cross Join syntax does not allow an ON clause). Each participating row of one table is joined with each participating row of another table. The WHERE clause restricts which rows participate from either table.

SELECT e.employee_number
      ,d.department_number
FROM employee e
CROSS JOIN department d
WHERE e.employee_number = 1008;

employee_number   department_number
1008              301
1008              501
1008              402
1008              201

1008              302
1008              600
1008              401
1008              100
1008              403

The employee table has 26 rows. The department table has 9 rows. Without the WHERE clause, we would expect 26 x 9 = 234 rows in our result set. With the constraint that the employee_number must equal 1008 (which only matches one row in the employee table), we now get 1 x 9 = 9 rows in our result set. This result shows employee 1008 associated with each department, which is not meaningful output. Cross Joins by themselves often do not produce meaningful results.

Self Joins

A self join occurs when a table is joined to itself. Which employees share the surname Brown, and to whom do they report?

SELECT emp.first_name (TITLE 'Emp//First Name')
      ,emp.last_name (TITLE 'Emp//Last Name')
      ,mgr.first_name (TITLE 'Mgr//First Name')
      ,mgr.last_name (TITLE 'Mgr//Last Name')
FROM employee emp
INNER JOIN employee mgr
ON emp.manager_employee_number = mgr.employee_number
WHERE emp.last_name = 'Brown';

Results
Emp First Name   Emp Last Name   Mgr First Name   Mgr Last Name
Allen            Brown           Loretta          Ryan
Alan             Brown           James            Trader

Join Processing

Rows must be on the same AMP to be joined.
• If necessary, the system creates spool copies of one or both rows and moves them to a common AMP.
• Join processing NEVER moves or changes the original table rows.

Typical kinds of joins are:
• Merge Join
• Product Join
• Nested Join
• Exclusion Join

The Optimizer chooses the best join strategy based on:

• Available Indexes
• Demographics (Collected STATISTICS or Dynamic Sample)

EXPLAIN shows what kind of join a query uses.

Join Redistribution

The Primary Index is the major consideration used by the Optimizer in determining how to join two tables and deciding which rows to move. Three general scenarios may occur when two tables are to be Merge Joined:

1. The Join column(s) is the Primary Index of both tables (best case).
2. The Join column is the Primary Index of one of the tables.
3. The Join column is not a Primary Index of either table (worst case).

Nested Joins

This is a special join case. It is the only join that doesn't always use all of the AMPs, it is the most efficient join in terms of system resources, and it is the best choice for OLTP applications. To choose a Nested Join, the Optimizer must have:

• An equality value for a unique index (UPI or USI) on Table1.
• A join on a column of that single row to any index on Table2.

The system retrieves the single row from Table1, then hashes the join column value to access the matching Table2 row(s).
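To see which join plan the Optimizer chose for a particular query, prefix the query with EXPLAIN. A sketch using the employee and department tables from the earlier examples:

EXPLAIN
SELECT employee.last_name
      ,department.department_name
FROM employee
INNER JOIN department
ON employee.department_number = department.department_number;

The plan text names the join method (for example, a merge join step) and shows whether rows are redistributed or duplicated into spool first.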

Utilities

BTEQ

Steps for submitting SQL in BTEQ's Batch Mode:

1. Invoke BTEQ
2. Type in the input file name
3. Type in the location and output file name

C:/>BTEQ < BatchScript.txt > Output.txt

BTEQ is invoked and takes instructions from a file called BatchScript.txt. The output file is called Output.txt.

Using BTEQ Conditional Logic

Below is a BTEQ batch script example. The initial steps of the script will establish the logon, the database, and then delete all the rows from the Employee_Table. If the table does not exist, the BTEQ conditional logic will instruct Teradata to create it. However, if the table already exists, then Teradata will move forward and insert data.

BatchScript.txt File:

.RUN FILE = mylogon.txt                 (Logon to Teradata)
DATABASE SQL_Class;                     (Make the default database SQL_Class)
DELETE FROM Employee_Table;             (Delete all the records from the Employee_Table)
.IF ERRORCODE = 0 THEN .GOTO INSEMPS    (BTEQ conditional logic that will check to ensure that the

delete worked or if the table even existed)

/* ERRORCODE is a reserved word that contains the outcome status for every SQL statement executed in BTEQ. A zero (0) indicates that the statement worked. */

.LABEL INSEMPS
INSERT INTO Employee_Table (1232578, 'Chambers', 'Mandee', 48850.00, 400);
INSERT INTO Employee_Table (1256349, 'Harrison', 'Herbert', 54500.00, 100);
.QUIT

The Label INSEMPS provides code so the BTEQ logic can go directly to inserting records into the Employee_Table.

Using BTEQ to Export Data

BTEQ allows data to be exported directly from Teradata to a file on a mainframe or network-attached computer. In addition, the BTEQ export function has several export formats that a user can choose depending on the desired output. Generally, users will export data to a flat file format that is composed of a variety of characteristics. These characteristics include: field mode, indicator mode, record mode, or DIF mode. Below is an expanded explanation of the different mode options.

Format of the EXPORT command:

.EXPORT <mode> {FILE | DDNAME} = <filename> [, LIMIT=n]

Record Mode (also called DATA mode): This is set by .EXPORT DATA. This will bring data back as a flat file. Each parcel will contain a complete record. Since it is not a report, there are no headers or white space between the data contained in each column, and the data is written to the file (e.g., a disk drive file) in native format. For example, this means that INTEGER data is written as a 4-byte binary field. Therefore, it cannot be read and understood using a normal text editor.

Field Mode (also called REPORT mode): This is set by .EXPORT REPORT. This is the default mode for BTEQ and brings the data back as if
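A minimal Record Mode export might look like the following sketch (the logon file, output file name, and LIMIT value are illustrative; .EXPORT RESET closes the export file):

.RUN FILE = mylogon.txt
.EXPORT DATA FILE = emp.dat, LIMIT=1000
SELECT employee_number
      ,last_name
FROM employee;
.EXPORT RESET
.QUIT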

it was a standard SQL SELECT statement. The output of this BTEQ export would return the column headers for the fields, white space, and expanded packed or binary data (for humans to read), and it can be understood using a text editor.

Indicator Mode: This is set by .EXPORT INDICDATA. This mode writes the data in data mode, but also provides host operating systems with the means of recognizing missing or unknown data (NULL) fields. This is important if the data is to be loaded into another Relational Database System (RDBMS).

The issue is that there is no standard character defined to represent either a numeric or character NULL. Generally, every system uses a zero for a numeric NULL and a space or blank for a character NULL. As mentioned earlier, when a Teradata column containing a NULL is exported as DATA, it is no longer a NULL, but a zero or space. If this data is simply loaded into another RDBMS, it is no longer NULL. Therefore, it becomes imperative that you be consistent: when data is exported as DATA, it must be imported as DATA, and the same is true for INDICDATA. Being consistent is our only responsibility.

To remedy this situation, INDICDATA puts a bitmap at the front of every record written to the disk. This bitmap contains one bit per field/column. When a Teradata column contains a NULL, the bit for that field is turned on by setting it to a "1". Likewise, if the data is not NULL, the bit remains a zero. Therefore, the loading utility reads these bits as indicators of NULL data and identifies the column(s) as NULL when data is loaded back into the table. On a network-attached system, this internal processing is automatic.

However, computers allocate data in bytes, not bits. So, if even one bit is needed, a minimum of eight bits (one byte) are allocated. Therefore, on a mainframe system, you must account for these bits when defining the LRECL in the Job Control Language (JCL). For every eight fields, the LRECL becomes 1 byte longer and must be added; otherwise, your length is too short and the job will end with an error. To determine the correct length, the following information is important. Since both DATA and INDICDATA store each column on disk in native format with known lengths and characteristics, they are the fastest method of transferring data.

In other words, INDICDATA mode gives the host computer the ability to allocate bits in the form of a byte. This means that from one to eight columns being referenced in the SELECT will add one byte to the length of the record. When selecting nine to sixteen columns, the output record will be two bytes longer; for nine columns selected, 2 bytes are added even though only nine bits are needed. When executing on non-mainframe systems, the record length is automatically maintained. However, when exporting to a mainframe, the JCL (LRECL) must account for this additional length.

DIF Mode: Known as Data Interchange Format, which allows users to export data from Teradata to be directly utilized for spreadsheet applications like Excel, FoxPro and Lotus.

The optional LIMIT is to tell BTEQ to stop returning rows after a specific number (n) of rows. This might be handy in a test environment to stop BTEQ before the end of transferring rows to the file.

Determining Out Record Lengths

Some hosts, such as IBM mainframes, require the correct LRECL (Logical Record Length) parameter in the JCL, and will abort if the value is incorrect. The following discusses how to figure out the record lengths. There are three issues involving record lengths:

• Fixed columns
• Variable columns
• NULL indicators

Fixed Length Columns: For fixed length columns you merely count the length of the column. The lengths are:

INTEGER         4 bytes

SMALLINT        2 bytes
BYTEINT         1 byte
CHAR(10)        10 bytes
CHAR(4)         4 bytes
DATE            4 bytes
DECIMAL(7,2)    4 bytes   (packed data, total digits / 2 + 1)
DECIMAL(12,2)   8 bytes

Variable Columns: Variable length columns should be calculated as the maximum value plus two. The two extra bytes hold the binary length of the field. In reality you can save much space because trailing blanks are not kept. The logical record will assume the maximum and add two bytes as a length field per column.

VARCHAR(8)      10 bytes
VARCHAR(10)     12 bytes

Indicator Columns: As explained earlier, the indicators utilize a single bit for each field. If your record has 8 fields (which require 8 bits), then you add one extra byte to the total length of all the fields. If your record has 9-16 fields, then add two bytes.

BTEQ Return Codes

Return codes are two-digit values that BTEQ returns to the user after completing each job or task. The value of the return code indicates the completion status of the job or task as follows:

Return Code   Description
00            Job completed with no errors.
02            User alert to log on to the Teradata DBS.
04            Warning error.
08            User error.
12            Severe internal error.

You can over-ride the standard error codes at the time you terminate BTEQ. This might be handy for debug purposes. The error code or "return code" can be any number you specify when terminating BTEQ (for example, the commands .QUIT 15 and .EXIT 15 both end the session and return a code of 15).

Fast Export

An Introduction to FastExport

Why it is called "FAST" Export

FastExport is known for its lightning speed when it comes to exporting vast amounts of data from Teradata and transferring the data into flat files on either a mainframe or network-attached computer. Part of this speed is achieved because FastExport takes full advantage of Teradata's parallelism. As the demand to store data increases, so does the requirement for tools to export massive amounts of data. This is the reason why FastExport (FEXP) is brilliant by design. A good rule of thumb is that if you have more than half a million rows of data to export to either a flat file format or with NULL indicators, then FastExport is the best choice to accomplish this task.

Keep in mind that FastExport is designed as a one-way utility; the sole purpose of FastExport is to move data out of Teradata. It does this by harnessing the parallelism that Teradata provides. FastExport is extremely attractive for exporting data because it takes full advantage of multiple sessions, which leverages Teradata parallelism. FastExport can also export from multiple tables during a single operation. In addition, FastExport utilizes the Support Environment, which provides a job restart capability from a checkpoint if an error occurs during the process of executing an export job. FastExport also has the ability to accept OUTMOD routines, which provide the user the capability to write, select, validate, and preprocess the exported data.

How FastExport Works

When FastExport is invoked, the utility logs onto the Teradata database, retrieves the rows that are specified in the SELECT statement, and puts them into SPOOL. From there, it must build blocks to send back to the client. In comparison, BTEQ starts sending rows immediately for storage into a file.

If the output data is sorted, FastExport may be required to redistribute the selected data two times across the AMP processors in order to build the blocks in the correct sequence. Remember, a lot of rows fit into a 64K block, and both the rows and the blocks must be sequenced. While all of this redistribution is occurring, BTEQ continues to send rows, and FastExport gets behind in the processing. However, when FastExport starts sending the rows back a block at a time, it quickly overtakes and passes BTEQ's row-at-a-time processing.

The other advantage is that if BTEQ terminates abnormally, all of your rows (which are in SPOOL) are discarded. You must rerun the BTEQ script from the beginning. However, if FastExport terminates abnormally, all the selected rows are in worktables and it can continue sending them where it left off. Pretty smart and very fast! Also, like BTEQ, it can output multiple files in a single run.

Additionally, if there is a requirement to manipulate the data before storing it on the computer's hard drive, an OUTMOD routine can be written to modify the result set after it is sent back to the client on either the mainframe or LAN. Just like the BASF commercial states, "We don't make the products you buy, we make the products you buy better". FastExport is designed off the same premise: it does not make the SQL SELECT statement faster, but it does take the SQL SELECT statement and process the request with lightning fast parallel processing!

FastExport Fundamentals

#1: FastExport EXPORTS data from Teradata. The reason they call it FastExport is because it takes data off of Teradata (Exports Data). FastExport does not import data into Teradata.

#2: FastExport only supports the SELECT statement. The only DML statement that FastExport understands is SELECT. You SELECT the data you want exported and FastExport will take care of the rest.

#3: Choose FastExport over BTEQ when exporting half a million rows or more. When a large amount of data is being exported, FastExport is recommended over BTEQ Export. FastExport will work with less data, but the speed may not be much faster than BTEQ.

#4: FastExport supports multiple SELECT statements and multiple tables in a single run. You can have multiple SELECT statements with FastExport, and each SELECT can join information from up to 64 tables.

#5: FastExport supports conditional logic, conditional expressions, arithmetic calculations, and data conversions. FastExport is flexible and supports all of the above.

#6: FastExport does NOT support error files or error limits. FastExport does not record particular error types in a table. The FastExport utility will terminate after a certain number of errors have been encountered.

#7: FastExport supports user-written routines, INMODs and OUTMODs. FastExport allows you to write INMOD and OUTMOD routines so you can select, validate and preprocess the exported data.

Maximum of 15 Loads

The Teradata RDBMS will only support a maximum of 15 simultaneous FastLoad, MultiLoad, or FastExport utility jobs. This maximum value is determined and configured by the DBS Control record. It can be set from 0 to 15; when Teradata is initially installed, this value is set at 5. The reason for this limitation is that FastLoad, MultiLoad, and FastExport all use large blocks to transfer data. If more than 15 simultaneous jobs were supported, a saturation point could be reached on the availability of resources. This limitation should be viewed as a safety control feature. If the maximum number of utilities on the Teradata system is reached and another job attempts to run, that job does not start; Teradata does an excellent job of protecting system resources by queuing up additional FastLoad, MultiLoad, and FastExport jobs that are attempting to connect. BTEQ Export does not have this restriction. A tip for

remembering how the load limit applies is this: "If the name of the load utility contains either the word 'Fast' or the word 'Load', then there can be only a total of fifteen of them running at any one time." Because BTEQ does not have this load limitation, BTEQ is an alternate choice for exporting data if too many load jobs are already running. However, FastExport is clearly the better choice when exporting large volumes of data.

A FastExport in its Simplest Form

The hobby of racecar driving can be extremely frustrating, challenging, and rewarding all at the same time. I always remember my driving instructor coaching me during a practice session in a new car around a road course racetrack. He said to me, "Before you can learn to run, you need to learn how to walk." This same philosophy can be applied when working with FastExport. If FastExport is broken into steps, then several things that appear to be complicated are really very simple. With this being stated, FastExport can be broken into the following steps:

• Logging onto Teradata
• Retrieving the rows you specify in your SELECT statement
• Exporting the data to the specified file or OUTMOD routine
• Logging off of Teradata

.LOGTABLE sql01.SWA_Log;          /* Creates the required logtable */
.LOGON demo/usr01,demopwd;        /* Logon to Teradata */
.BEGIN EXPORT SESSIONS 12;        /* Begin the Export and set the number of sessions on Teradata */
.EXPORT OUTFILE Student.txt
   MODE RECORD FORMAT TEXT;       /* Defines the output file name; specifies the output mode and format (LAN only) */
SELECT ... ;                      /* The SELECT defines the columns used to create the export file */
.END EXPORT;                      /* Finish the Export Job and Write to File */
.LOGOFF;                          /* End the Export and logoff Teradata */

NOTE: The selected columns for the export are being converted to character types. This will simplify the importing process into a different database.

FastExport Modes and Formats

FastExport Modes

FastExport has two modes: RECORD or INDICATOR. In the mainframe world, only use RECORD mode. In the UNIX or LAN environment, RECORD mode is the default, but you can use INDICATOR mode if desired. The difference between the two modes is that INDICATOR mode will set the indicator bits to 1 for column values containing NULLs.

Both modes return data in a client internal format with variable-length records. Each individual record has a value for all of the columns specified by the SELECT statement. All variable-length columns are preceded by a two-byte control value indicating the length of the column data. NULL columns have a value that is appropriate for the column data type. Remember, INDICATOR mode will set bit flags that identify the columns that have a null value.

FastExport Formats

FastExport has many possible formats in the UNIX or LAN environment. The FORMAT statement specifies the format for each record being exported, which are:
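The script's SELECT body is not reproduced in the source. A hypothetical example consistent with the NOTE above (the table and column names are invented for illustration; each column is cast to a character type to simplify a later import):

SELECT CAST(student_id AS CHAR(11))
      ,CAST(last_name  AS CHAR(20))
      ,CAST(grade_pt   AS CHAR(8))
FROM sql01.Student_Table;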

• FASTLOAD
• BINARY
• TEXT
• UNFORMAT

The default FORMAT is FASTLOAD in a UNIX or LAN environment.

• FASTLOAD format is a two-byte integer, followed by the data, followed by an end-of-record marker. It is called FASTLOAD because the data is exported in a format ready for FASTLOAD.
• BINARY format is a two-byte integer, followed by data.
• TEXT is an arbitrary number of bytes followed by an end-of-record marker.
• UNFORMAT is exported as it is received from CLIv2 without any client modifications.

FastLoad

FastLoad Has Two Phases

Teradata is famous for its end-to-end use of parallel processing. Both the data and the tasks are divided up among the AMPs. Then each AMP tackles its own portion of the task with regard to its portion of the data. This same "divide and conquer" mentality also expedites the load process. FastLoad divides its job into two phases, both designed for speed. They have no fancy names but are typically known simply as Phase 1 and Phase 2. Sometimes they are referred to as the Acquisition Phase and the Application Phase.

PHASE 1: Acquisition

The primary function of Phase 1 is to transfer data from the host computer to the Access Module Processors (AMPs) as quickly as possible. For the sake of speed, the Parsing Engine of Teradata does not take the time to hash each row of data based on the Primary Index. That will be done later. Instead, it does the following:

When the Parsing Engine (PE) receives the INSERT command, it uses one session to parse the SQL just once. The PE is the Teradata software processor responsible for parsing syntax and generating a plan to execute the request. It then opens a Teradata session from the FastLoad client directly to the AMPs. By default, one session is created for each AMP. Therefore, on large systems, it is normally a good idea to limit the number of sessions using the SESSIONS command. This capability is shown below.

Simultaneously, all but one of the client sessions begins loading raw data in 64K blocks for transfer to an AMP. The first priority of Phase 1 is to get the data onto the AMPs as fast as possible. To accomplish this, the rows are packed, unhashed, into large blocks and sent to the AMPs without any concern for which AMP gets the block. The result is that data rows arrive on different AMPs than those on which they would live, had they been hashed.

So how do the rows get to the correct AMPs where they will permanently reside? Following the receipt of every data block, each AMP hashes its rows based on the Primary Index and redistributes them to the proper AMP. At this point, the rows are written to a worktable on the AMP but remain unsorted until Phase 1 is complete.

Phase 1 can be compared loosely to the preferred method of transfer used in the parcel shipping industry today. How do the key players in this industry handle a parcel? When the shipping company receives a parcel, that parcel is not immediately sent to its final destination. Instead, it is often sent to a shipping hub in a seemingly unrelated city. Then, from that hub, it is sent to the destination city. FastLoad's Phase 1 uses the AMPs in much the same way that the shipper uses its hubs. First, all the data blocks in the load get rushed randomly to any AMP. This just gets them to a "hub" somewhere in Teradata country. Second, each AMP forwards them to their true destination. This is like the shipping parcel being sent from a hub city to its destination city!

PHASE 2: Application

Following the scenario described above, the shipping vendor must do more than get a parcel to the destination city. Once the packages arrive at the destination city, they must then be sorted by street and zip code, placed onto local trucks, and driven to their final, local destinations.

Similarly, FastLoad's Phase 2 is mission critical for getting every row of data to its final address (i.e., where it will be stored on disk). In this phase, each AMP sorts the rows in its worktable. Then it writes the rows into the table space on disks where they will permanently reside. Rows of a table are stored on the disks in data blocks. The AMP uses the block size as defined when the target table was created. If the table is Fallback protected, then the Fallback copy will be loaded after the Primary table has finished loading. This enables the Primary table to become accessible as soon as possible. FastLoad is so ingenious, no wonder it is the darling of the Teradata load utilities!

Steps to Write a FastLoad Script

Step One: Before logging onto Teradata, it is important to specify how many sessions you need. The syntax is [SESSIONS {n}].

Step Two: Next, you LOGON to the Teradata system. You will quickly see that the utility commands in FastLoad are similar to those in BTEQ. FastLoad commands were designed from the underlying commands in BTEQ. However, unlike BTEQ, most of the FastLoad commands do not allow a dot ["."] in front of them and therefore need a semicolon. At this point we chose to have Teradata tell us which version of FastLoad is being used for the load. Why would we recommend this? We do because as FastLoad's capabilities get enhanced with newer versions, the syntax of the scripts may have to be revisited.

Step Three: If the input file is not a FastLoad format, before you describe the INPUT FILE structure in the DEFINE statement, you must first set the RECORD layout type for the file being passed by FastLoad. We have used VARTEXT in our example with a comma delimiter. The other options are FASTLOAD, TEXT, UNFORMATTED or VARTEXT. You need to know this about your input file ahead of time.

Step Four: Next comes the DEFINE statement. FastLoad must know the structure and the name of the flat file to be used as the input FILE, or source file, for the load.

Step Five: FastLoad makes no assumptions from the DROP TABLE statements with regard to what you want loaded. In the BEGIN LOADING statement, the script must name the target table and the two error tables for the load. Did you notice that there is no CREATE TABLE statement for the error tables in this script? FastLoad will automatically create them for you once you name them in the script. In this instance, they are named "Emp_Err1" and "Emp_Err2". Phase 1 uses "Emp_Err1" because it comes first, and Phase 2 uses "Emp_Err2". The names are arbitrary, of course. You may call them whatever you like; however, they must be unique within a database, so using a combination of your userid and target table name helps insure this uniqueness between multiple FastLoad jobs occurring in the same database.

In the old days, children were always told to focus on the three "R's" in grade school ("reading, 'riting, and 'rithmatic"). There are two very different, yet equally important, R's to consider whenever you run FastLoad. They are RERUN and RESTART. RERUN means that the job is capable of running all the processing again from the beginning of the load. RESTART means that the job is capable of running the processing again from the point where it left off when the job was interrupted.

In the BEGIN LOADING statement we have also included the optional CHECKPOINT parameter. We included [CHECKPOINT 100000]. Although not required, this optional parameter performs a vital task with regard to the load. When CHECKPOINT is requested, it allows FastLoad to resume loading from the first row following the last successful CHECKPOINT. We will learn more about CHECKPOINT in the section on Restarting FastLoad.

Step Six: FastLoad focuses on its task of loading data blocks to AMPs like little Yorkshire terriers do when playing with a ball! It will not stop unless you tell it to stop. Therefore, it will not proceed to Phase 2 without the END LOADING command.

In reality, this provides a very valuable capability for FastLoad. Since the table must be empty at the start of the job, it prevents loading rows as they arrive from different time zones. However, to accomplish this processing, simply omit the END LOADING on the load job. Then, you can run the same FastLoad multiple times and continue loading the worktables until the last file is received. Then run the last FastLoad job with an END LOADING, and you have partitioned your load jobs into smaller segments instead of one huge job. This makes FastLoad even faster! Of course, to make this work,

FastLoad must be restartable. Therefore, you cannot use the DROP or CREATE commands within the script. Additionally, if you need to drop or create tables, do it in a separate job using BTEQ.

Step Seven: All that goes up must come down, and all the sessions must LOGOFF. This will be the last utility command in your script. At this point the table lock is released and, if there are no rows in the error tables, they are dropped automatically. However, if a single row is in one of them, you are responsible to check it, take the appropriate action, and drop the table manually.

Converting Data Types with FastLoad

Converting data is easy. Just define the input data types in the input file. Then, FastLoad will compare that to the column definitions in the Data Dictionary and convert the data for you! But the cardinal rule is that only one data type conversion is allowed per column. In the example below, notice how the columns in the input file are converted from one data type to another simply by redefining the data type in the CREATE TABLE statement.

FastLoad allows six kinds of data conversions. Here is a chart that displays them:

IN FASTLOAD YOU MAY CONVERT
CHARACTER DATA       TO   NUMERIC DATA
FIXED LENGTH DATA    TO   VARIABLE LENGTH DATA
CHARACTER DATA       TO   DATE
INTEGERS             TO   DECIMALS
DECIMALS             TO   INTEGERS
DATE                 TO   CHARACTER DATA
NUMERIC DATA         TO   CHARACTER DATA

Figure 4-4
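Putting Steps One through Seven together, a minimal sketch of a complete FastLoad script might look like this. The logon values, column layout, and input file name are illustrative, and the target and error table names follow the RESTART example later in this section; the input file is assumed to be comma-delimited VARTEXT:

SESSIONS 4;
SHOW VERSIONS;                             /* report the FastLoad version in use */
LOGON demo/usr01,demopwd;
SET RECORD VARTEXT ",";                    /* comma-delimited input file */
DEFINE dept_number      (VARCHAR(6)),
       department_name  (VARCHAR(30))
FILE = dept.txt;                           /* the input flat file */
BEGIN LOADING SQL01.Department
   ERRORFILES SQL01.Dept_Err1, SQL01.Dept_Err2
   CHECKPOINT 100000;
INSERT INTO SQL01.Department
VALUES (:dept_number, :department_name);
END LOADING;                               /* omit this line to partition the load across multiple runs */
LOGOFF;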

When we said that converting data is easy, we meant that it is easy for the user. It is actually quite resource intensive, thus increasing the amount of time needed for the load. Therefore, if speed is important, keep the number of columns being converted to a minimum!

When You Cannot RESTART FastLoad

There are two types of FastLoad scripts: those that you can restart and those that you cannot without modifying the script. If any of the following conditions are true of the FastLoad script that you are dealing with, it is NOT restartable:

• The Error Tables are DROPPED
• The Target Table is DROPPED
• The Target Table is CREATED

Why might you have to RESTART a FastLoad job, anyway? Perhaps you might experience a system reset or some glitch that stops the job half way through it. Maybe the mainframe went down. Well, it is not really a big deal, because FastLoad is so lightning-fast that you could probably just RERUN the job for small data loads. However, when you are loading a billion rows, this is not a good idea because it wastes time. But what if the normal load takes 4 hours, and the glitch occurs when you already have two thirds of the data rows loaded? In that case, you might want to make sure that the job is totally restartable, and the most common way to deal with these situations is simply to RESTART the job. Let's see how this is done.

When You Can RESTART FastLoad

If all of the following conditions are true, then FastLoad is ALWAYS restartable:

• The Error Tables are NOT DROPPED in the script
• The Target Table is NOT DROPPED in the script
• The Target Table is NOT CREATED in the script
• You have defined a checkpoint

So, if you need to drop or create tables, do it in a separate job using BTEQ. Let's go back to the script we just reviewed above and see how we can break it into the two parts necessary to make it fully RESTARTABLE. Imagine that you have a table whose data changes so much that you typically drop it monthly and build it again. It is broken up below.

STEP ONE: Run the following SQL statements in Queryman or BTEQ before you start FastLoad:

DROP TABLE SQL01.Department;        (drops the target table)
DROP TABLE SQL01.Dept_Err1;         (drops the error tables)
DROP TABLE SQL01.Dept_Err2;
CREATE TABLE SQL01.Department ... ; (creates the Department target table in the SQL01 database in Teradata)

Figure 4-6

First, you ensure that the target table and error tables, if they existed previously, are blown away. (Had there been no errors in the error tables, they would have been dropped automatically.) If these tables did not exist, you have not lost anything. Next, you create the empty table structure needed to receive a FastLoad.

STEP TWO: Run the FastLoad script. This is the portion of the earlier script that carries out these vital steps:

• Defines the structure of the flat file
• Tells FastLoad where to load the data and store the errors
• Specifies the checkpoint so a RESTART will not go back to row one
• Loads the data

If these conditions are met, all you need do is resubmit the FastLoad job and it starts loading data again with the next record after the last checkpoint.

What Happens When FastLoad Finishes

You Receive an Outcome Status

The most important thing to do is verify that FastLoad completed successfully. This is accomplished by looking at the last output in the report and making sure that it is a return code or status code of zero (0). Any other value indicates that something wasn't perfect and needs to be fixed.

When running FastLoad, you realistically have two choices once it is started. The first choice is to get it to run to a successful completion; the last is to rerun it from the beginning. As you can imagine, the best course of action is normally to get it to finish successfully via a restart. The locks will not be removed and the error tables will not be dropped without a successful completion. This is because FastLoad assumes that it will need them for its restart. Likewise, if the FastLoad job aborts in Phase 2, the lock on the target table will not be released either. In that case, you can simply submit a script with only the BEGIN LOADING and END LOADING statements; it will then restart right into Phase 2. Now, if you did not request a checkpoint, the output message will normally indicate how many records were loaded. You may optionally use the RECORD command to manually restart on the next record after the one indicated in the message.

You Receive a Status Report

What happens when FastLoad finishes running? You can expect to see a summary report on the success of the load. Following is an example of such a report:

Line 1:   TOTAL RECORDS READ    = 1000000
Line 2:   TOTAL ERRORFILE1      = 50
Line 3:   TOTAL ERRORFILE2      = 0
Line 4:   TOTAL INSERTS APPLIED = 999950
Line 5:   TOTAL DUPLICATE ROWS  = 0

Figure 4-7

The first line displays the total number of records read from the input file. The second line tells us that there were fifty rows with constraint violations, so they were not loaded; corresponding to this, fifty entries were made in the first error table. Line 3 shows that there were zero entries into the second error table, indicating that there were no duplicate Unique Primary Index violations. Line 4 shows that 999950 rows were successfully loaded into the empty target table. Finally, there were no duplicate rows. Had there been any duplicate rows, the duplicates would only have been counted; they are not stored in the error tables anywhere. Therefore, the number of rows in lines 2 through 5 should always total the number of records read in line 1.

Note on duplicate rows: Whenever FastLoad experiences a restart, there will normally be duplicate rows that are counted. This is due to the fact that an error seldom occurs on a checkpoint (a quiet or quiescent point when nothing is happening within FastLoad). Therefore, when a restart occurs, the first row after the checkpoint and some of the consecutive rows are sent a second time. These will be caught as duplicate rows after the sort. This restart logic is the reason that FastLoad will not load duplicate rows into a MULTISET table; it assumes they are duplicates because of this logic.

You Can Troubleshoot

In the example above, we know that the load was not entirely successful. One million records were read. Were all of them loaded? Not really. But that is not enough: now we need to troubleshoot in order to identify the errors and correct them. FastLoad generates two error tables that will enable us to find the culprits. Both error tables contain the same three columns; they just track different types of errors.

The first error table, which we named Errorfile1, contains just three columns:

• The column ErrorCode contains the Teradata FastLoad code number corresponding to a translation or constraint error.
• The second column, named ErrorField, specifies which column in the table contained the error.
• The third column, DataParcel, contains the row with the problem.

As a user, you can select from either error table. To check errors in Errorfile1 you would use this syntax:
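The figure with the exact statement is not reproduced in the source, but a simple query in this spirit (using the error table name from the example) would list the error codes and the columns that caused them:

SELECT ErrorCode
      ,ErrorField
FROM Errorfile1
ORDER BY ErrorCode;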

To check errors in Errorfile2 you would use similar syntax. The definition of the second error table is exactly the same as the target table, with all the same columns and data types. Corrected rows may be inserted to the target table using another utility that does not require an empty table.

Restarting with CHECKPOINT

Sometimes you may need to restart FastLoad. If the FastLoad script requests a CHECKPOINT (other than 0), then it is restartable from the last successful checkpoint. Should an error occur that requires the load to restart, FastLoad will merely go back to the last successfully reported checkpoint prior to the error. It will then restart from the record immediately following that checkpoint and start building the next block of data to load.

Here are the two options. Suppose Phase 1 halts prematurely; the Data Acquisition phase is incomplete. Simply resubmit the FastLoad script, and FastLoad will begin from RECORD 1 or the first record past the last checkpoint. If you wish to manually specify where FastLoad should restart, use the RECORD command. To specify where a restart will start from, locate the last successful checkpoint record by referring to the SYSADMIN.FASTLOG table.

How the CHECKPOINT Option Works

The CHECKPOINT option defines the points in a load job where the FastLoad utility pauses to record that Teradata has processed a specified number of rows. When the parameter "CHECKPOINT [n]" is included in the BEGIN LOADING clause, the system will stop loading momentarily at increments of [n] rows. At each CHECKPOINT, the AMPs will all pause and make sure that everything is loading smoothly. Then FastLoad sends a checkpoint report (entry) to the SYSADMIN.Fastlog table. This log contains a list of all currently running FastLoad jobs and the last successfully reached checkpoint for each job.

If CHECKPOINT wasn't specified, then CHECKPOINT defaults to 100,000. Normally, it is not necessary to use the RECORD command; let FastLoad automatically determine where to restart from. However, you can perform a manual restart using the RECORD statement. If the output print file shows that checkpoint 100000 occurred, use something like the following command: RECORD 100001;. This statement will skip records 1 through 100000 and resume on record 100001.

Restarting without CHECKPOINT (i.e., CHECKPOINT 0)

When a failure occurs and the FastLoad script did not utilize the CHECKPOINT (i.e., CHECKPOINT 0), one procedure is to DROP the target table and error tables and rerun the job. In this case, FastLoad will always restart from the very first row. Here are some other options available to you:

Resubmit the job again and hope there is enough PERM space for all the rows already sent to the unsorted target table plus all the rows that are going to be sent again to the same target table. As you can imagine, this is not the most efficient way, since it processes many of the same rows twice; these rows will be rejected as duplicates. Other than using extra space, it works.

If the interruption occurs in Phase 2, the Data Acquisition phase has already completed. We know that the error is in the Application Phase. In this case, resubmit the FastLoad script with only the BEGIN and END LOADING statements. This will restart in Phase 2 with the sort and the building of the target table.

Using INMODs with FastLoad

When you find that FastLoad does not read the file type you have, or you wish to control the access for any reason, then it might be desirable to use an INMOD. An INMOD (Input Module) is fully compatible with FastLoad in either mainframe or LAN environments, providing that the appropriate programming languages are used. INMODs replace the normal mainframe DDNAME or LAN defined FILE name with the following statement: DEFINE INMOD=<INMOD-name>. For a more in-depth discussion of INMODs, see the chapter of this book titled "INMOD Processing".

MultiLoad

Why it is called "Multi" Load

If we were going to be stranded on an island with a Teradata Data Warehouse and we could only take along one Teradata load utility, clearly, MultiLoad would be our choice. MultiLoad has the capability to load multiple tables at one time from either a LAN or Channel environment. This is in stark contrast to its fleet-footed cousin, FastLoad, which can only load one table at a time. And it gets better, yet!

This feature-rich utility can perform multiple types of DML tasks, including INSERT, UPDATE, DELETE and UPSERT, on up to five (5) empty or populated target tables at a time. These DML functions may be run either solo or in combinations, against one or more tables. For these reasons, MultiLoad is the utility of choice when it comes to loading populated tables in the batch environment. As the volume of data being loaded or updated in a single block increases, the performance of MultiLoad improves. MultiLoad shines when it can impact more than one row in every data block. In other words, MultiLoad looks at massive amounts of data and says, "Bring it on!"

Leo Tolstoy once said, "All happy families resemble each other." Like happy families, the Teradata load utilities resemble each other, although they may have some differences. You are going to be pleased to find that you do not have to learn all new commands and concepts for each load utility. MultiLoad has many similarities to FastLoad. It has even more commands in common with TPump. The similarities will be evident as you work with them. Where there are some quirky differences, we will point them out for you.

Two MultiLoad Modes: IMPORT and DELETE

MultiLoad provides two types of operations via modes: IMPORT and DELETE. In MultiLoad IMPORT mode, you have the freedom to "mix and match" up to twenty (20) INSERTs, UPDATEs or DELETEs on up to five target tables. The execution of the DML statements is not mandatory for all rows in a table. Instead, their execution hinges upon the conditions contained in the APPLY clause of the script. Once again, MultiLoad demonstrates its user-friendly flexibility. For UPDATEs or DELETEs to be

successful in IMPORT mode, they must reference the Primary Index in the WHERE clause.

The MultiLoad DELETE mode is used to perform a global (all-AMP) delete on just one table. The reason to use .BEGIN DELETE MLOAD is that it bypasses the Transient Journal (TJ) and can be RESTARTed if an error causes it to terminate prior to finishing. When performing in DELETE mode, the DELETE SQL statement cannot reference the Primary Index in the WHERE clause. This is due to the fact that a Primary Index access is to a specific AMP; remember, this is a global operation.

The other factor that makes a DELETE mode operation so good is that it examines an entire block of rows at a time. Once all the eligible rows have been removed, the block is written one time and a checkpoint is written. So, if a restart is necessary, it simply starts deleting rows from the next block without a checkpoint. This is a smart way to continue. Remember, when using the TJ, all deleted rows are put back into the table from the TJ as a rollback, and a rollback can take longer to finish than the delete. MultiLoad does not do a rollback; it does a restart.

The Purpose of DELETE MLOAD

In the above diagram, monthly data is being stored in a quarterly table. To keep the contents limited to four months, monthly data is rotated in and out.

At the end of every month, the oldest month of data is removed and the new month is added. The cycle is "add a month, delete a month, add a month, delete a month." In our illustration, that means that January data must be deleted to make room for May's data.

Here is a question for you: what if there was another way to accomplish this same goal without consuming all of the extra resources a journaled delete requires? To illustrate, let's consider the following scenario. Suppose you have Table A that contains 12 billion rows. You want to delete a range of rows based on a date and then load in fresh data to replace those rows. Normally, the process is to perform a MultiLoad DELETE to DELETE FROM Table_A WHERE <datecolumn> < '2002-02-01'. The final step would be to INSERT the new rows for May using MultiLoad IMPORT.
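A minimal sketch of the DELETE-mode step (the logon string is a placeholder, and <datecolumn> stands for the real date column, as in the scenario above):

    .LOGTABLE SQL01.CDW_Log;
    .LOGON TDATA/SQL01,<password>;
    .BEGIN DELETE MLOAD TABLES SQL01.Table_A;
    DELETE FROM SQL01.Table_A
    WHERE  <datecolumn> < '2002-02-01';
    .END MLOAD;
    .LOGOFF;

Notice that the WHERE clause deliberately avoids the Primary Index, and that no LAYOUT or .IMPORT is needed, because no input data is being acquired in DELETE mode.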

Block and Tackle Approach

MultiLoad never loses sight of the fact that it is designed for functionality, speed, and the ability to restart. It tackles the proverbial I/O bottleneck problem like FastLoad does, by assembling data rows into 64K blocks and writing them to disk on the AMPs. This is much faster than writing data one row at a time like BTEQ. Fallback table rows are written after the base table has been loaded. This allows users to access the base table immediately upon completion of the MultiLoad while fallback rows are being loaded in the background; the benefit is reduced time to access the data.

Amazingly, MultiLoad has full RESTART capability in all of its five phases of operation. Once again, this demonstrates its tremendous flexibility as a load utility. Is it pure magic? No, but it almost seems so. MultiLoad makes effective use of two error tables to save different types of errors, and a LOGTABLE that stores built-in checkpoint information for restarting. This is why MultiLoad does not use the Transient Journal, thus averting time-consuming rollbacks when a job halts prematurely.

Here is a key difference to note between MultiLoad and FastLoad. Sometimes an AMP (Access Module Processor) fails, and the system administrators say that the AMP is "down" or "offline." When using FastLoad, you must restart the AMP to restart the job. MultiLoad, however, can RESTART when an AMP fails, if the table is fallback protected. At the same time, you can use the AMPCHECK option to make it work like FastLoad if you want.

MultiLoad Imposes Limits

Rule #1: Unique Secondary Indexes are not supported on a Target Table. Like FastLoad, MultiLoad does not support Unique Secondary Indexes (USIs). But unlike FastLoad, it does support the use of Non-Unique Secondary Indexes (NUSIs), because the index subtable row is on the same AMP as the data row. MultiLoad uses every AMP independently and in parallel; if two AMPs must communicate, they are not independent. A USI requires the AMPs to communicate with each other, so a NUSI (same AMP) is fine, but a USI (different AMP) is not.

Rule #2: Referential Integrity is not supported. MultiLoad will not load data into tables that are defined with Referential Integrity (RI). Like a USI, this would require the AMPs to communicate with each other; it is a multi-AMP operation and to a different table. RI constraints must be dropped from the target table prior to using MultiLoad.

Rule #3: Triggers are not supported at load time. Triggers cause actions on related tables based upon what happens in a target table. Again, this is a multi-AMP operation and to a different table. Disable all Triggers prior to using MultiLoad (a sketch follows this list).

Rule #4: No concatenation of input files is allowed. MultiLoad does not want you to do this because it could impact a restart if the files were concatenated in a different sequence or data was deleted between runs.

Rule #5: The host will not process aggregates, arithmetic functions or exponentiation. If you need data conversions or math, you might be better off using an INMOD to prepare the data prior to loading it.
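For Rule #3, a hedged one-liner, assuming a trigger named Employee_Trig exists on the target table (the trigger name is a placeholder):

    ALTER TRIGGER Employee_Trig DISABLED;

After the load completes, the same statement with ENABLED puts the trigger back in force.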

Error Tables, Work Tables and Log Tables

Besides the target table(s), MultiLoad requires the use of four special tables in order to function. They consist of two error tables (per target table), one worktable (per target table), and one log table. In essence, the Error Tables will be used to store any conversion, constraint or uniqueness violations that occur during a load. Work Tables are used to receive and sort data and SQL on each AMP prior to storing them permanently to disk. A Log Table (also called the "Logtable") is used to store successful checkpoints during load processing in case a RESTART is needed.

HINT: Sometimes a company wants all of these load support tables to be housed in a particular database. When these tables are to be stored in any database other than the user's own default database, you must give them a qualified name (<databasename>.<tablename>) in the script, or use the DATABASE command to change the current database.

Do not underestimate the value of these tables. They are vital to the operation of MultiLoad; without them, a MultiLoad job cannot run. Now that you have had the "executive summary," let's look at each type of table individually.

Two Error Tables: Here is another place where FastLoad and MultiLoad are similar. Both require the use of two error tables per target table, and MultiLoad will automatically create these tables. Rows are inserted into them only when errors occur during the load process. The first error table is the Acquisition Error Table (ET). It contains all translation and constraint errors that may occur while the data is being acquired from the source(s). The second is the Uniqueness Violation (UV) table, which stores rows with duplicate values for Unique Primary Indexes (UPIs). Since a UPI must be unique, MultiLoad can only load one occurrence of a value into a table; any duplicate value will be stored in the UV error table. For example, you might see a UPI error that shows a second employee number "99." If the name for employee "99" is Kara Morgan, you will be glad that the row did not load, since Kara Morgan is already in the Employee table. However, if the name showed up as David Jackson, then you know that further investigation is needed, because employee numbers must be unique. In either case, each error table does the following:

- Identifies errors
- Provides some detail about the errors
- Stores the actual offending row for debugging

For more details on how these error tables can help you, see the subsection in this chapter titled "Troubleshooting MultiLoad Errors."

You have the option to name these tables in the MultiLoad script (shown later); these names are optional. It does not matter what you name them, but MultiLoad will not accept error table names that are the same as target table names. If you do not name them, they default to ET_<target_table_name> and UV_<target_table_name>. It is recommended that you standardize on a naming convention to make it easier for everyone on your team. Where will you find these tables in the load script? The Logtable is generally identified immediately prior to the .LOGON command, while worktables and error tables can be named in the BEGIN MLOAD statement.
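A minimal sketch of naming them explicitly in the BEGIN statement (the database and table names are the ones used in the full example later in this chapter):

    .BEGIN IMPORT MLOAD
        TABLES SQL01.Employee_Dept
        WORKTABLES WORKDB.CDW_WT
        ERRORTABLES WORKDB.CDW_ET
                    WORKDB.CDW_UV;

Note that there is no comma between the names of the two error tables; they are a pair.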

Work Table(s): MultiLoad will automatically create one worktable for each target table. This means that in IMPORT mode you could have one or more worktables; in DELETE mode you will only have one worktable, since that mode only works on one target table. The purpose of worktables is to hold two things:

- The Data Manipulation Language (DML) tasks
- The input data that is ready to APPLY to the AMPs

The worktables are created in a database using PERM space, and they can become very large. If the script uses multiple SQL statements for a single data record, the data is sent to the AMP once for each SQL statement. This replication guarantees fast performance and that no SQL statement will ever be done more than once, so it is very important. However, there is no such thing as a free lunch: the cost is space. Remember, the efficiency of the MultiLoad run is in your hands. Later, you will see that using a FILLER field can help reduce this disk space by not sending unneeded data to an AMP.

Log Table: MultiLoad requires a LOGTABLE. This table keeps a record of the results from each phase of the load so that MultiLoad knows the proper point from which to RESTART. There is one LOGTABLE for each run. Since MultiLoad will not resubmit a command that has been run previously, it will use the LOGTABLE to determine the last successfully completed step.

MultiLoad Has Five IMPORT Phases

MultiLoad IMPORT has five phases, but don't be fazed by this! Here is the short list:

Phase 1: Preliminary Phase
Phase 2: DML Transaction Phase
Phase 3: Acquisition Phase
Phase 4: Application Phase
Phase 5: Cleanup Phase

Let's take a look at each phase and see what it contributes to the overall load process of this magnificent utility. Should you memorize every detail about each phase? Probably not. But it is important to know the essence of each phase, because sometimes a load fails. When it does, you need to know in which phase it broke down, since the method for fixing the error to RESTART may vary depending on the phase. And if you can picture what MultiLoad actually does in each phase, you will likely write better scripts that run more efficiently.

Phase 1: Preliminary Phase

The ancient oriental proverb says, "Measure one thousand times; cut once." MultiLoad uses Phase 1 to conduct several preliminary set-up activities whose goal is to provide a smooth and successful climate for running your load. All the preliminary steps are automated; no user intervention is required in this phase.

The first task is to be sure that the SQL syntax and MultiLoad commands are valid. After all, why try to run a script when the system will just find out during the load process that the statements are not usable? MultiLoad knows that it is much better to identify any syntax errors right up front.

Second, all MultiLoad sessions with Teradata need to be established. The default is the number of available AMPs. Every AMP plays an essential role in the MultiLoad process: the AMPs receive the data blocks, hash each row, and send the rows to the correct AMP, where they are stored in worktable blocks on disk. Each session loads the data to Teradata across the network or channel. The general rule of thumb for the number of sessions to use on smaller systems is the number of AMPs plus two more. For larger systems with hundreds of AMP processors, the SESSIONS option is available to lower the default; Teradata will quickly establish this number, using a factor of 16 as the basis for the number of sessions to create.

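As a hedged sketch, capping the session count rides on the .BEGIN MLOAD statement (the table name and count here are placeholders):

    .BEGIN IMPORT MLOAD
        TABLES SQL01.Employee_Dept
        SESSIONS 32;

On a system with hundreds of AMPs, a cap like this trades a little load speed for resources left available to other users.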
What about the extra two sessions? Well, the first one is a control session to handle the SQL and logging, and the second is a back-up or alternate for logging. Remember, these sessions are running on your poor little computer as well as on Teradata. If you specify too few sessions, it may impair performance and increase the time it takes to complete load jobs. On the other hand, too many sessions will reduce the resources available for other important database activities. You may have to use some trial and error to find what works best on your system configuration.

Third, the required support tables are created. They are the following (Figure 5-2):

Type of Table   Table Details
ERRORTABLES     MultiLoad requires two error tables per target table. The first error table contains constraint violations, while the second error table stores Unique Primary Index violations.
WORKTABLES      Work Tables hold two things: the DML tasks requested and the input data that is ready to APPLY to the AMPs.
LOGTABLE        The LOGTABLE keeps a record of the results from each phase of the load so that MultiLoad knows the proper point from which to RESTART.

The final task of the Preliminary Phase is to apply utility locks to the target tables. Initially, access locks are placed on all target tables, allowing other users to read or write to the table for the time being. However, this lock does prevent the opportunity for a user to request an exclusive lock. Although these locks will still allow the MultiLoad user to drop the table, no one else may DROP or ALTER a target table while it is locked for loading. This leads us to Phase 2.

Phase 2: DML Transaction Phase

In Phase 2, all of the SQL Data Manipulation Language (DML) statements are sent ahead to Teradata. Teradata's Parsing Engine (PE) parses the DML and generates a step-by-step plan to execute the request. This execution plan is then communicated to each AMP and stored in the appropriate worktable for each target table. In other words, each AMP is going to work off the same page. MultiLoad allows the use of multiple DML functions, so later a match tag is assigned to each DML request that will match it with the appropriate rows of input data.

Phase 3: Acquisition Phase

With the proper set-up complete and the PE's plan stored on each AMP, MultiLoad is now ready to receive the INPUT data. This is where it gets interesting! MultiLoad now acquires the data in large, unsorted 64K blocks from the host and sends it to the AMPs. At this point, Teradata does not care which AMP receives a data block; the blocks are simply sent, one after the other, to the next AMP in line. For their part, the AMPs hash each row on the primary index and send it over the BYNET to the proper AMP where it will ultimately be used. It is like a game of cards: you take the cards that you have been dealt, keep some, and give some away. In the same way, the AMPs keep some data rows from the blocks and give some away.

But the row does not get inserted into its target table just yet. The receiving AMP must first do some preparation before that happens. Don't you have to get ready before company arrives at your house? The AMP puts all of the hashed rows it has received from other AMPs into the worktables, where it assembles them against the SQL. Why? Because once the rows are reblocked, they can be sorted into the proper order for storage in the target table. Of course, during the Acquisition Phase the actual input data is also stored in the worktable so that it may be applied in Phase 4, the Application Phase.

The match tags work somewhat like a student who receives a letter from the university in the summer listing his courses, professors' names, and classroom locations for the upcoming semester. The letter is a "match tag" for the student to his school schedule, although it will not be used for several months. Similarly, the match tags will not actually be used until the data has already been acquired and is about to be applied from the worktable. This matching tag for SQL and data is the reason that the data is replicated for each SQL statement using the same data record.

Note that there is no Acquisition Phase when you perform a MultiLoad DELETE task, since no data is being acquired. Finally, the utility places a load lock on each target table in preparation for the Application Phase.

Phase 4: Application Phase

The purpose of this phase is to write, or APPLY, the specified changes to both the target tables and NUSI subtables. Every hash-sequence sorted block from Phase 3 and each block of the base table is read only once, to reduce I/O operations and gain speed. Then, all matching rows in the base block are inserted, updated or deleted before the entire block is written back to disk, one time. This is why the match tags are so important. Remember, when sending the data, the host has already attached some sequence information and five (5) match tags to each data row. Once the data is on the AMPs, it is married up to the SQL for execution: the match tags join the data with the proper SQL statement based on the DML label. In addition to associating each row with the correct DML statement, match tags also guarantee that no row will be updated more than once, even when a RESTART occurs. They guarantee that the correct operation is performed for the rows and blocks, with no duplicate operations.

What happens when several tables are being updated simultaneously? In this case, all of the updates are scripted as a multi-statement request, which means that Teradata views them as a single transaction. If there is a failure at any point of the load process, MultiLoad merely needs to be RESTARTed from the point where it failed; no rollback is required. This permits MultiLoad to avoid starting again from the very beginning. Any errors will be written to the proper error table, and each time a table block is written to disk successfully, a record is inserted into the LOGTABLE. MultiLoad also allows for the existence of NUSI processing during a load.

Phase 5: Clean Up Phase

Those of you reading these paragraphs who have young children or teenagers will certainly appreciate this final phase: MultiLoad actually cleans up after itself! MultiLoad believes the adage, "All is well that ends well." The utility looks at the final Error Code (&SYSRC). If the last error code is zero (0), all of the job steps have ended successfully (i.e., all has certainly ended well). This being the case, all empty error tables, worktables and the log table are dropped, and all locks, both Teradata and MultiLoad, are released.

The statistics for the job are generated for output (SYSPRINT) and the system count variables are set. Last, each MultiLoad session is logged off. So what happens if the final error code is not zero? Stay tuned: restarting MultiLoad is a topic that will be covered later in this chapter.

A Simple MultiLoad IMPORT Script

MultiLoad can be somewhat intimidating to the new user because there are many commands and phases. In reality, the load scripts are understandable when you think through what the IMPORT mode does:

- Setting up a Logtable
- Logging onto Teradata
- Identifying the Target, Work and Error tables
- Defining the INPUT flat file
- Defining the DML activities to occur
- Naming the IMPORT file
- Telling MultiLoad to use a particular LAYOUT
- Telling the system to start loading
- Finishing loading and logging off of Teradata

Step One: Setting up a Logtable and Logging onto Teradata. MultiLoad requires you to specify a log table right at the outset with the .LOGTABLE command. We have called it CDW_Log. Once you name the Logtable, it will be automatically created for you. The Logtable may be placed in the same database as the target table, or it may be placed in another database. Immediately after this, you log onto Teradata using the .LOGON command. The order of these two commands is interchangeable, but it is recommended to define the Logtable first and then to log on; if you reverse the order, Teradata will give a warning message. Notice that the commands in MultiLoad require a dot in front of the command key word.

Step Two: Identifying the Target, Work and Error tables. In this step of the script you must tell Teradata which tables to use. To do this, you use the .BEGIN IMPORT MLOAD command, prefacing the names of these tables with the sub-commands TABLES, WORKTABLES and ERRORTABLES. All you must do is name the tables and specify what database they are in. Keep in mind that you get to name and locate these tables, but the names are optional: if the WORKTABLES and ERRORTABLES are not specifically named, the script will still execute and build these tables in the default database for the user. For a target table named Employee_Dept, the worktable would be WT_EMPLOYEE_DEPT1 and the two error tables would be called ET_EMPLOYEE_DEPT1 and UV_EMPLOYEE_DEPT1, respectively. Sometimes, large Teradata systems have a work database with a lot of extra PERM space; this is where all of the logtables and worktables are normally created. One customer calls this database CORP_WORK. You can use a DATABASE command to point all table creations to it, or qualify the names of these tables individually.

Step Three: Defining the INPUT flat file record structure. MultiLoad is going to need to know the structure of the INPUT flat file. Use the .LAYOUT command to name the layout, then list the fields and their data types used in your SQL as .FIELD entries. The LAYOUT name will be referenced later in the .IMPORT command. Did you notice that an asterisk is placed between the column name and its data type? This means to automatically calculate the starting byte of the field in the record, based on the previous field's length. If you are listing fields in order and need to skip a few bytes in the record, you can either use a .FILLER to position the cursor to the next field, or replace the "*" with an explicit position; for example, the "*" on the Dept_No field could have been replaced with the number 132 (CHAR(11)+CHAR(20)+CHAR(100)+1). When you use this technique, the .FILLER is not needed. If the input file is created with INDICATORS, that is specified in the LAYOUT. Also, if the input record fields are exactly the same as the table, .TABLE can be used to automatically define all the .FIELDs for you.

Step Four: Defining the DML activities to occur. The .DML LABEL names and defines the SQL that is to execute. It is like setting up executable code in a programming language, but using SQL. In our example, MultiLoad is being told to INSERT a row into the SQL01.Employee_Dept table. The VALUES come from the data in each FIELD, because each is preceded by a colon (:). Are you allowed to use multiple labels in a script? Sure! But remember this: every label must be referenced in an APPLY clause of the .IMPORT command.

Step Five: Naming the INPUT file and its format type. This step is vital! Using the .IMPORT command, we have identified the INFILE data as being contained in a file called "CDW_Join_Export.txt". Then we list the FORMAT type as TEXT. Next, we reference the LAYOUT named FILEIN to describe the fields in the record. Finally, we tell MultiLoad to APPLY the DML LABEL called INSERTS, that is, to INSERT the data rows into the target table; this is still a sub-component of the .IMPORT command. Notice that the .IMPORT goes on for four lines of information. This is possible because the command continues until MultiLoad finds the semi-colon that defines its end, and that is how it tells one operation from another. The semi-colon is very important: without it, MultiLoad would have attempted to process the END LOADING as part of the IMPORT, and it wouldn't work. If the script is to run on a mainframe, the INFILE name is actually the name of a JCL Data Definition (DD) statement that contains the real name of the file.

Step Six: Finishing loading and logging off of Teradata. This is the closing ceremony for the load. MultiLoad wraps things up, closes the curtains, and logs off of the Teradata system. Then, pat yourself on the back.

Important note: do you think such a script is restartable? It is. Since it does not DROP any tables, it is completely capable of being restarted if an error occurs. Compare this to the error-treatment script shown later, which DROPs its error tables before beginning. A minimal sketch of the full script, assembled from the steps above, follows.

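Here is such a sketch; the password is a placeholder, while the table, layout, label and file names all come from the steps above:

    .LOGTABLE SQL01.CDW_Log;
    .LOGON TDATA/SQL01,<password>;
    .BEGIN IMPORT MLOAD TABLES SQL01.Employee_Dept;
    .LAYOUT FILEIN;
    .FIELD Employee_No * CHAR(11);
    .FIELD Last_Name   * CHAR(20);
    .FIELD First_Name  * CHAR(14);
    .FIELD Dept_No     * CHAR(6);
    .FIELD Dept_Name   * CHAR(20);
    .DML LABEL INSERTS;
    INSERT INTO SQL01.Employee_Dept
    ( Employee_No, Last_Name, First_Name, Dept_No, Dept_Name )
    VALUES
    ( :Employee_No, :Last_Name, :First_Name, :Dept_No, :Dept_Name );
    .IMPORT INFILE CDW_Join_Export.txt
        FORMAT TEXT
        LAYOUT FILEIN
        APPLY INSERTS;
    .END MLOAD;
    .LOGOFF;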
Error Treatment Options for the .DML LABEL Command

MultiLoad allows you to tailor how it deals with different types of errors that it encounters during the load process, to fit your needs (Figure 5-9). In IMPORT mode, you may specify as many as five distinct error-treatment options for one .DML statement. For example, if there is more than one instance of a row, do you want MultiLoad to IGNORE the duplicate row, or to MARK it (list it) in an error table? If you do not specify IGNORE, then MultiLoad will MARK, or record, all of the errors; MARK is the default for all operations.

Imagine you have a standard INSERT load that you know will end up recording about 20,000 duplicate row errors. You do not need to see all of those errors. Using the syntax "IGNORE DUPLICATE INSERT ROWS" will keep them out of the error table. By ignoring those errors, you gain three benefits:

1. The error table is not filled up needlessly.
2. MultiLoad runs much faster, since it is not conducting a duplicate row check.
3. You are not distracted by errors you fully expect to occur.

When doing an UPSERT, there are two rules to remember:

- The DO INSERT FOR MISSING UPDATE ROWS option is mandatory. It tells MultiLoad to insert a row from the data source if that row does not exist in the target table because the UPDATE did not find it.
- The default is IGNORE MISSING UPDATE ROWS. When doing an UPSERT, you anticipate that some rows are missing (otherwise, why do an UPSERT?), so this keeps those rows out of your error table.

The table that follows shows you, in more detail, how flexible your options are:

ERROR TREATMENT OPTIONS IN DETAIL

DML LABEL OPTION                    WHAT IT DOES
MARK DUPLICATE INSERT ROWS          Logs an entry for every duplicate INSERT row in the UV error table. Use this when you want to know about the duplicates.
IGNORE DUPLICATE INSERT ROWS        Tells MultiLoad to IGNORE duplicate INSERT rows because you do not want to see them.
MARK DUPLICATE UPDATE ROWS          Logs the existence of every duplicate UPDATE row.
IGNORE DUPLICATE UPDATE ROWS        Eliminates the listing of duplicate UPDATE row errors.
MARK MISSING UPDATE ROWS            Ensures a listing of data rows that had to be INSERTed since there was no row to UPDATE.
IGNORE MISSING UPDATE ROWS          Tells MultiLoad NOT to list missing UPDATE rows as errors. This is a good option when doing an UPSERT, since the UPSERT will INSERT a new row; it keeps these rows out of your error table.
MARK MISSING DELETE ROWS            Makes a note in the ET error table that a row to be deleted is missing.
IGNORE MISSING DELETE ROWS          Says, "Do not tell me that a row to be deleted is missing."
DO INSERT for MISSING UPDATE ROWS   Tells MultiLoad that if the row to be updated does not exist in the target table, then INSERT the entire row from the data source. This is required to accomplish an UPSERT.

An IMPORT Script with Error Treatment Options

The command .DML LABEL names any DML options (INSERT, UPDATE or DELETE) that immediately follow it in the script. Each label must be given a name; in IMPORT mode, the label will be referenced for use in the APPLY phase when certain conditions are met. Here is the previous script again, this time with an error-treatment option, and with the work table and error tables placed in a separate work database:

    /* Setup the MultiLoad Logtable, Logon Statements */
    .LOGTABLE SQL01.CDW_Log;
    .LOGON TDATA/SQL01,<password>;
    /* Drop Error Tables */
    DROP TABLE WORKDB.CDW_ET;
    DROP TABLE WORKDB.CDW_UV;
    DATABASE SQL01;
    /* Begin Import and Define Work and Error Tables */
    .BEGIN IMPORT MLOAD
        TABLES Employee_Dept
        WORKTABLES WORKDB.CDW_WT
        ERRORTABLES WORKDB.CDW_ET
                    WORKDB.CDW_UV;
    /* Define Layout of Input File */
    .LAYOUT FILEIN;
    .FIELD Employee_No * CHAR(11);
    .FIELD Last_Name   * CHAR(20);
    .FIELD First_Name  * CHAR(14);
    .FIELD Dept_No     * CHAR(6);
    .FIELD Dept_Name   * CHAR(20);
    /* Begin INSERT Process on Table */
    .DML LABEL INSERTS
        IGNORE DUPLICATE INSERT ROWS;
    INSERT INTO SQL01.Employee_Dept
    ( Employee_No, Last_Name, First_Name, Dept_No, Dept_Name )
    VALUES
    ( :Employee_No, :Last_Name, :First_Name, :Dept_No, :Dept_Name );
    /* Specify IMPORT File and Apply Parameters */
    .IMPORT INFILE CDW_Join_Export.txt
        FORMAT TEXT
        LAYOUT FILEIN
        APPLY INSERTS;
    .END MLOAD;
    .LOGOFF;

Notes from the original figure: the DROP statements clear existing error tables in the work database; the DATABASE command specifies where to find the target table; there is no comma between the names of the two error tables (they are a pair); the dots before each FIELD command and the semi-colons after each FIELD definition are required; IGNORE DUPLICATE INSERT ROWS tells MultiLoad NOT to list duplicate INSERT rows in the error table; the VALUES are listed in order, each preceded by a colon; and .END MLOAD ends MultiLoad and logs off of Teradata. Because this version DROPs tables at the start, it is not restartable as written.

An UPSERT Sample Script

The following sample script is provided to demonstrate how to do an UPSERT, that is, to UPDATE a table and, if a row from the data source does not exist in the target table, then INSERT a new row. In this instance we are loading the Student_Profile table with new data for the next semester. The clause DO INSERT FOR MISSING UPDATE ROWS indicates an UPSERT. The DML statements that follow this option must be in the order of a single UPDATE statement followed by a single INSERT statement, and notice that the option is placed AFTER the LABEL identification and immediately BEFORE the DML functions.

    /* Setup Logtable, Logon Statements */
    .LOGTABLE SQL01.CDW_Log;
    .LOGON CDW/SQL01,<password>;
    DATABASE SQL01;
    /* The .BEGIN IMPORT MLOAD statement and the LAYOUT for the
       all-character flat file are not legible in this copy of the
       original figure; they name the target, work and error tables
       and define the input record, as in the earlier scripts */
    /* Begin INSERT and UPDATE Process on Table */
    .DML LABEL UPSERTER
        DO INSERT FOR MISSING UPDATE ROWS;
    /* Without the above DO, one of these two statements is
       guaranteed to fail on this same table. If the UPDATE fails
       because the row is missing, MultiLoad corrects by doing
       the INSERT */
    UPDATE SQL01.Student_Profile
       SET Last_Name  = :Last_Name
          ,First_Name = :First_Name
          ,Class_Code = :Class_Code
          ,Grade_Pt   = :Grade_Pt
     WHERE Student_ID = :Student_ID;
    INSERT INTO SQL01.Student_Profile
    VALUES ( :Student_ID
            ,:Last_Name
            ,:First_Name
            ,:Class_Code
            ,:Grade_Pt );
    /* Specify IMPORT File and Apply Parameters */
    .IMPORT INFILE CDW_EXPORT.DAT
        LAYOUT FILEIN
        APPLY UPSERTER;
    .END MLOAD;
    .LOGOFF;

We recommend placing comma separators in front of the following column or value, as above, for easier debugging.

Troubleshooting MultiLoad Errors - More on the Error Tables

The output statistics in the above example indicate that the load was entirely successful. But that is not always the case, and then we need to troubleshoot in order to identify the errors and correct them. Earlier on, we noted that MultiLoad generates two error tables: the Acquisition Error table and the Application Error table. MultiLoad error tables not only list the errors they encounter, they also have the capability to STORE those errors. Do you remember the MARK and IGNORE parameters? This is where they come into play. MARK will ensure that the error rows, along with some details about the errors, are stored in the error table; IGNORE does neither, and it is as if the error never occurred.

For the most part, the Acquisition error table logs errors that occur during that processing phase: translation and constraint errors that arise while the data is being acquired from the source(s). The Application error table lists Unique Primary Index violations, field overflow errors on non-PI columns, and constraint errors that occur in the APPLY phase. You may select from these tables to discover the problem and research the issues.

THREE COLUMNS SPECIFIC TO THE ACQUISITION ERROR TABLE (Figure 5-19)

ErrorCode    System code that identifies the error.
ErrorField   Name of the column in the target table where the error happened; left blank if the offending column cannot be identified.
HostData     The data row that contains the error.

THREE COLUMNS SPECIFIC TO THE APPLICATION ERROR TABLE

Uniqueness      Contains a value that disallows duplicate row errors in this table.
DBCErrorCode    System code that identifies the error.
DBCErrorField   Name of the column in the target table where the error happened; left blank if the offending column cannot be identified. NOTE: A copy of the target table columns immediately follows this column.

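To research reported problems, you can query these tables directly. A small sketch, assuming the default error-table name from the earlier example (ET_EMPLOYEE_DEPT1; substitute your own name):

    SELECT   ErrorCode, ErrorField, COUNT(*) AS Err_Count
    FROM     ET_EMPLOYEE_DEPT1
    GROUP BY ErrorCode, ErrorField
    ORDER BY Err_Count DESC;

Each distinct ErrorCode/ErrorField pair points at one kind of translation or constraint problem, and the HostData column of the offending rows can then be inspected for debugging.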
RESTARTing MultiLoad

Who hasn't experienced a failure at some time when attempting a load? Don't take it personally! Failures can and do occur on the host or on Teradata (DBC) for many reasons. MultiLoad has the impressive ability to RESTART from failures in either environment; in fact, it requires almost no effort to continue or resubmit the load job. Here are the factors that determine how it works.

First, the Logtable is essential for restarts. Remember, MultiLoad uses neither the Transient Journal nor rollbacks during a failure. That is why you must designate a Logtable at the beginning of your script. MultiLoad either restarts by itself or waits for the user to resubmit the job; it then checks the Restart Logtable and automatically resumes the load process from the last successful CHECKPOINT before the failure occurred, taking over right where it left off.

Second, suppose Teradata experiences a reset while MultiLoad is running. In this case, the host program will restart MultiLoad after Teradata is back up and running. You do not have to do a thing!

Third, if a host mainframe or network client fails during a MultiLoad, or the job is aborted, you may simply resubmit the script without changing a thing. MultiLoad will find out where it stopped and start again from that very spot.

Fourth, if MultiLoad halts during the Application Phase, it must be resubmitted and allowed to run until complete.

Fifth, during the Acquisition Phase the CHECKPOINT (n) you stipulated in the .BEGIN MLOAD clause will be enacted, and the results are stored in the Logtable. During the Application Phase, CHECKPOINTs are logged each time a data block is successfully written to its target table.

HINT: The default for CHECKPOINT is 15 minutes. If you specify the CHECKPOINT as 60 or less, minutes are assumed; if you specify it at 61 or above, a number of records is assumed.

RELEASE MLOAD - When You DON'T Want to Restart MultiLoad

What if a failure occurs but you do not want to RESTART MultiLoad? Since MultiLoad has already updated the table headers, it assumes that it still "owns" the target tables, and therefore it limits access to them. So what is a user to do? Well, there is good news and bad news. The good news is that you may use the RELEASE MLOAD command to release the locks and roll back the job. The bad news is that if you have been loading multiple millions of rows, the rollback may take a lot of time. For this reason, most customers would rather just go ahead and RESTART.

Before V2R3: In the earlier days of Teradata, it was NOT possible to use RELEASE MLOAD if one of the following three conditions was true:

- In IMPORT mode, MultiLoad had reached the end of the Acquisition Phase. This is sometimes referred to as the "point of no return."
- In DELETE mode, Teradata had received the DELETE statement; that is that mode's point of no return.
- The job halted in the Apply Phase; in that case, you had to RESTART the job.

With and since V2R3: The advent of V2R3 brought new possibilities with regard to using the RELEASE MLOAD command. It can NOW be used in the APPLY Phase, if:

- You are running Teradata V2R3 or a later version
- You use the correct syntax: RELEASE MLOAD <target-table> IN APPLY
- The load script has NOT been modified in any way
- The target tables either must be empty, or must have no Fallback, no NUSIs, and no Permanent Journals

You should be very cautious using the RELEASE command: it could potentially leave your table half updated. It is handy for a test environment, but please don't get too reliant on it for production runs. Loads should be allowed to finish, to guarantee data integrity.
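Applied to the table from the earlier examples, the command looks like this (run from a separate SQL session, such as BTEQ):

    RELEASE MLOAD SQL01.Employee_Dept;             /* before the point of no return */
    RELEASE MLOAD SQL01.Employee_Dept IN APPLY;    /* V2R3 and later, in the Apply Phase */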

MultiLoad and INMODs

INMODs, or Input Modules, may be called by MultiLoad in either mainframe or LAN environments, provided the appropriate programming languages are used. INMODs are user-written routines whose purpose is to read data from one or more sources and then convey it to a load utility, here MultiLoad, for loading into Teradata. They allow MultiLoad to focus solely on loading data, by doing data validation or data conversion before the data is ever touched by MultiLoad. INMODs replace the normal MVS DDNAME or LAN file name with the following statement:

    .IMPORT INMOD=<INMOD-name>

You will find a more detailed discussion on how to write INMODs for MultiLoad in "Teradata Utilities: Breaking The Barriers".

How MultiLoad Compares with FastLoad

Function                                   FastLoad                MultiLoad
Error Tables must be defined               Yes                     Optional; 2 error tables exist for each target table and are automatically assigned
Work Tables must be defined                No                      Optional; 1 work table exists for each target table and is automatically assigned
Logtable must be defined                   No                      Yes
Allows Referential Integrity               No                      No
Allows Unique Secondary Indexes            No                      No
Allows Non-Unique Secondary Indexes        No                      Yes
Allows Triggers                            No                      No
Loads a maximum of n number of tables      One                     Five
DML Statements Supported                   INSERT                  INSERT, UPDATE, DELETE and "UPSERT"
DDL Statements Supported                   CREATE and DROP TABLE   DROP TABLE
Transfers data in 64K blocks               Yes                     Yes
Number of Phases                           Two                     Five
Is RESTARTable                             Yes                     Yes, in all 5 phases (auto CHECKPOINT)
Stores UPI Violation Rows                  Yes                     Yes
Allows use of Aggregates, Arithmetic
calculations or Conditional Exponentiation No                      No
Allows Data Conversion                     Yes, 1 per column       Yes
NULLIF function                            Yes                     Yes

T-Pump: An Introduction to TPump

The chemistry of relationships is very interesting. Frederick Buechner once stated, "My assumption is that the story of any one of us is in some measure the story of us all." In this chapter, you will find that TPump has similarities with the rest of the family of Teradata utilities. But this newer utility has been designed with fewer limitations and many distinguishing abilities that the other load utilities do not have. Let's look in more detail at the many facets of this amazing load tool.

Why It Is Called "TPump"

TPump is the shortened name for the load utility Teradata Parallel Data Pump. To understand the name, you must know how the load utilities move data. Both FastLoad and MultiLoad assemble massive volumes of data rows into 64K blocks and then move those blocks. Picture in your mind the way that huge ice blocks used to be floated down long rivers to large cities prior to the advent of refrigeration; there they were cut up and distributed to the people. TPump does NOT move data in large blocks. Instead, it loads data one row at a time, using row hash locks. Because it locks at this level, and not at the table level like MultiLoad, TPump can make many simultaneous, or concurrent, updates on a table.

Envision TPump as the water pump on a well. Pumping in a very slow, gentle manner results in a steady trickle of water that could be pumped into a cup. But strong and steady pumping results in a powerful stream of water that would require a larger container. TPump, like the water pump, may allow either a trickle-feed of data to flow into the warehouse or a strong and steady stream. In essence, you may "throttle" the flow of data based upon your system and business user requirements. TPump is THE PUMP!

TPump is also the Swiss Army(TM) knife of the Teradata load utilities. Do you remember the first Swiss Army(TM) knife you ever owned? Aside from its original intent as a compact survival tool, this knife has thrilled generations with its multiple capabilities. Just as the knife was designed for small tasks, TPump was developed to handle batch loads with low volumes. And just as the Swiss Army(TM) knife easily fits in your pocket when you are loaded down with gear, TPump is a perfect fit when you have a large, busy system with few resources to spare.

TPump Has Many Unbelievable Abilities

Just in Time: Transactional systems, such as those implemented for ATM machines or Point-of-Sale terminals, are known for their tremendous speed in executing transactions. But how soon can you get the information pertaining to those transactions into the data warehouse? Can you afford to wait until a nightly batch load? If not, then TPump may be the utility that you are looking for! TPump allows the user to accomplish near real-time updates from source systems into the Teradata data warehouse.

Throttle-switch Capability: What about the throttle capability that was mentioned above? With TPump you may stipulate how many updates may occur per minute. This is also called the statement rate. In fact, you may change the statement rate during the job, "throttling up" the rate with a higher number, or "throttling down" the number of updates with a lower one. An example: you might want to throttle up the rate during the period from 12:00 noon to 1:30 PM, when most of the users have gone to lunch. You could then lower the rate when they return and begin running their business queries. This way, you need not have such clearly defined load windows as the other utilities require. You can have TPump running in the background all the time and just control its flow rate.

DML Functions: Like MultiLoad, TPump does DML functions, including INSERT, UPDATE and DELETE. These can be run solo, or in combination with one another. Note that it also supports UPSERTs, like MultiLoad. But here is one place that TPump differs vastly from the other utilities: FastLoad can only load one table and MultiLoad can load up to five tables, while TPump can load more than 60 tables at a time! And the number of concurrent instances in such situations is unlimited, not 15.

How could you use this ability? Well, imagine partitioning a huge table horizontally into multiple smaller tables and then performing various DML functions on all of them in parallel. The possibilities are endless. Keep in mind that TPump places no limit on the number of sessions that may be established; well, OK, maybe your computer does, but Teradata does not care. I cannot imagine my laptop running 20 TPump jobs, but the point stands.

More benefits: Just when you think you have pulled out all of the options on a Swiss Army(TM) knife, there always seems to be just one more blade or tool you had not noticed. Similar to the knife, TPump always seems to have another advantage in its list of capabilities. Here are several that relate to TPump requirements for target tables. TPump allows both Unique and Non-Unique Secondary Indexes (USIs and NUSIs), unlike MultiLoad, which allows just NUSIs, and FastLoad, which allows neither. As to the existence of Triggers, TPump says, "No problem!" Referential Integrity is allowed and need not be dropped. Tables allowing duplicate rows (MULTISET tables) are allowed. Like MultiLoad, TPump allows the target tables either to be empty or to be populated with data rows.

Support Environment compatibility: The Support Environment (SE) works in tandem with TPump to enable the operator to have even more control over the TPump load environment. The SE coordinates TPump activities, assists in managing the acquisition of files, and aids in the processing of conditions for loads. The Support Environment also aids in the execution of DML and DDL that occur in Teradata, outside of the load utility. Think of ways you might use this ability in your data warehouse environment.

Stopping without Repercussions: Finally, this utility can be stopped at any time, and all of the locks may be dropped with no ill consequences. It is not just a compact survival tool; it just might become one of the most valuable survival tools for businesses in today's data warehouse environment. Is this too good to be true? Are there no limits to this load utility?

TPump Has Some Limits

TPump has rightfully earned its place as a superstar in the family of Teradata load utilities, and it does not like to steal any thunder from the others. But this does not mean that it has no limits. It has a few that we will list here for you:

Rule #1: No concatenation of input data files is allowed. TPump is not designed to support this.

Rule #2: TPump will not process aggregates, arithmetic functions or exponentiation. If you need data conversions or math, you might consider using an INMOD to prepare the data prior to loading it.

Rule #3: The use of the SELECT function is not allowed. You may not use SELECT in your SQL statements.

Rule #4: No more than four IMPORT commands may be used in a single load task. This means that, at most, four files can be directly read in a single run.

Rule #5: Dates before 1900 or after 1999 must be represented by the yyyy format for the year portion of the date, not the default format of yy. Any dates using the default yy format for the year are taken to mean 20th-century years. This must be specified when you create the table.

Rule #6: On some network-attached systems, the maximum file size when using TPump is 2GB. This is true for a computer running under a 32-bit operating system.

Rule #7: TPump performance will be diminished if Access Logging is used. The reason for this is that TPump uses normal SQL to accomplish its tasks. Besides the extra overhead incurred, if you use Access Logging for successful table updates, then Teradata will make an entry in the Access Log table for each operation. This creates the potential for row hash conflicts between the Access Log and the target tables.

A Simple TPump Script - A Look at the Basics

Here are the main steps of a TPump IMPORT script:

- Setting up a Logtable and logging onto Teradata
- Beginning the load process, adding parameters, and naming the error table
- Defining the INPUT flat file
- Defining the DML activities to occur
- Naming the IMPORT file and defining its FORMAT
- Telling TPump to use a particular LAYOUT
- Telling the system to start loading data rows
- Finishing loading and logging off of Teradata

Much of the TPump command structure should look quite familiar to you; it is quite similar to MultiLoad. The following script assumes the existence of a Student_Names table in the SQL01 database. You may use pre-existing target tables when running TPump, or TPump may create the tables for you; in most instances you will use existing tables. In this example, the Student_Names table is being loaded with new data from the university's registrar. It will be used as an associative table for linking various tables in the data warehouse. Here is the surviving skeleton of the script (the password in logon.txt is a placeholder):

    /* This script inserts rows into a table called
       Student_Names from a single file */
    .LOGTABLE WORK_DB.LOG_PUMP;
    .RUN FILE C:\mydir\logon.txt;
    /* The logon.txt file contains: .logon TDATA/SQL01,<password> */
    DATABASE SQL01;
    .BEGIN LOAD
        ERRLIMIT 5
        CHECKPOINT 1
        SESSIONS 64
        TENACITY 2
        PACK 40
        RATE 1000
        ERRORTABLE SQL01.ERR_PUMP;
    /* ... LAYOUT, DML LABEL and IMPORT sections: only their margin
       annotations survive in this copy; a reconstructed sketch
       follows below ... */
    .END LOAD;
    .LOGOFF;

The .LOGTABLE command sets up the Logtable, the .RUN FILE command logs on to Teradata, the DATABASE command specifies the default database, the .BEGIN LOAD clause specifies the optional parameters and names the error table, and .END LOAD and .LOGOFF tell TPump to stop loading and log off all sessions.

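Reconstructed from the step descriptions that follow, the missing middle of the script plausibly looked like this; the field sizes are assumptions, while the layout, label, filler and file names all come from the walkthrough:

    .LAYOUT FILELAYOUT;
    .FIELD  Student_ID * CHAR(11);     /* size assumed */
    .FIELD  Last_Name  * CHAR(20);     /* size assumed */
    .FILLER more_junk  * CHAR(20);     /* moves the pointer to the start of First_Name */
    .FIELD  First_Name * CHAR(14);     /* size assumed */
    .DML LABEL INSREC;
    INSERT INTO SQL01.Student_Names
    ( Student_ID, Last_Name, First_Name )
    VALUES
    ( :Student_ID, :Last_Name, :First_Name );
    .IMPORT INFILE CDW_Export.txt
        FORMAT TEXT
        LAYOUT FILELAYOUT
        APPLY INSREC;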
Step One: Setting up a Logtable and Logging onto Teradata. First, you define the Logtable using the .LOGTABLE command. We have named it LOG_PUMP in the WORK_DB database. The Logtable is automatically created for you. It may be placed in any database by qualifying the table name with the name of the database, using syntax like this: <databasename>.<tablename>. Next, the connection is made to Teradata. Notice that the commands in TPump, like those in MultiLoad, require a dot in front of the command key word.

Step Two: Beginning the load process, adding parameters, and naming the error table. Here, the script reveals the parameters requested by the user to assist in managing the load for smooth operation. It also names the one error table, calling it SQL01.ERR_PUMP. Now let's look at each parameter:

- ERRLIMIT 5 says that the job should terminate after encountering five errors. You may set the limit that is tolerable for the load.
- CHECKPOINT 1 tells TPump to pause and evaluate the progress of the load in increments of one minute. If the factor is between 1 and 60, it refers to minutes; if it is over 60, it refers to the number of rows at which the checkpointing should occur.
- SESSIONS 64 tells TPump to establish 64 sessions with Teradata.
- TENACITY 2 says that if there is any problem establishing sessions, TPump should keep on trying for a period of two hours.
- PACK 40 tells TPump to "pack" 40 data rows and load them at one time.
- RATE 1000 means that 1,000 data rows will be sent per minute.

Step Three: Defining the INPUT flat file structure. TPump, like MultiLoad, needs to know the structure of the INPUT flat file record. You use the .LAYOUT command to name the layout. Following that, you list the columns and data types of the INPUT file using the .FIELD, .FILLER or .TABLE commands. Did you notice that an asterisk is placed between the column name and its data type? This means to automatically calculate the next byte in the record; it is used to designate the starting location for this data based on the previous field's length. If you are listing fields in order and need to skip a few bytes in the record, you can either use a .FILLER with the correct number of bytes as character to position the cursor to the next field, or replace the "*" with a number that equals the lengths of all previous fields added together plus 1 extra byte; when you use this technique, the .FILLER is not needed. In our example, the more_junk field moves the field pointer to the start of the First_Name data, so the layout begins with Student_ID, continues on to load Last_Name, and finishes when First_Name is loaded.

Step Four: Defining the DML activities to occur. At this point, the .DML LABEL names and defines the SQL that is to execute. It also names the columns receiving data and defines the sequence in which the VALUES are to be arranged. The data values coming in from the record are named in the VALUES with a colon prior to each name. This provides the PE with information on what substitution is to take place in the SQL. Each LABEL used must also be referenced in an APPLY clause of the .IMPORT clause.

Step Five: Naming the INPUT file and defining its FORMAT. Using the .IMPORT INFILE command, we have identified the INPUT data file as "CDW_Export.txt". The file was created using the TEXT format.

Step Six: Associating the data with its description. Next, we told the IMPORT command to use the LAYOUT called "FILELAYOUT."

Step Seven: Telling TPump to start loading. We told TPump to APPLY the DML LABEL called INSREC, that is, to INSERT the data rows into the target table.

Step Eight: Finishing loading and logging off of Teradata. The .END LOAD command tells TPump to finish the load process. Finally, TPump logs off of the Teradata system.

TPump Script with Error Treatment Options

This script is much like the first one, but it specifies three error-treatment options with the .DML LABEL, and the input file is all variable-character data with a comma delimiter:

    /* Setup the TPump Logtable, Logon Statements and Database Default */
    .LOGTABLE SQL01.LOG_PUMP;
    .LOGON CDW/SQL01,<password>;
    DATABASE SQL01;
    /* Begin Load and Define TPump Parameters and Error Tables */
    .BEGIN LOAD
        ERRLIMIT 5
        CHECKPOINT 1
        SESSIONS 1
        TENACITY 2
        PACK 40
        RATE 1000
        ERRORTABLE SQL01.ERR_PUMP;
    /* Define Layout of Input File */
    .LAYOUT FILELAYOUT;
    .FIELD Student_ID * VARCHAR(11);
    .FIELD Last_Name  * VARCHAR(20);
    .FIELD First_Name * VARCHAR(14);
    .FIELD Class_Code * VARCHAR(2);
    .FIELD Grade_Pt   * VARCHAR(8);
    /* Name the DML Label and specify 3 error treatment options */
    .DML LABEL INSREC
        IGNORE DUPLICATE ROWS
        IGNORE MISSING ROWS
        IGNORE EXTRA ROWS;
    INSERT INTO Student_Profile4
    ( Student_ID, Last_Name, First_Name, Class_Code, Grade_Pt )
    VALUES
    ( :Student_ID, :Last_Name, :First_Name, :Class_Code, :Grade_Pt );
    /* Specify IMPORT File and Apply Parameters */
    .IMPORT INFILE CDW_Export.txt
        FORMAT VARTEXT ','
        LAYOUT FILELAYOUT
        APPLY INSREC;
    .END LOAD;
    .LOGOFF;

Notes preserved from the original figure: TPump has only ONE error table per target table; comma separators are placed in front of the following column or value for easier debugging; the FORMAT VARTEXT clause names the delimiter (a comma in quotes) between the fields of the input record; the dots before the FIELD commands and the semi-colons after each FIELD definition are required; and a colon always precedes the VALUEs.

Hyderabad. Niligiri Block. Ameerpet.Visualpath. #306. Aditya Enclave. ph-8374187525 Page 134 .

A TPump UPSERT Sample Script

The UPSERT sample script performs these actions, in order (a sketch of such a script follows below):

- Sets up a Logtable and then logs on to Teradata.
- Begins the load process, specifying multiple parameters to aid in load management, and names the error table; TPump has only ONE error table per target table.
- Defines the LAYOUT for the 1st INPUT file; the layout also has the indicators for NULL data.
- Names the 1st DML Label and specifies 2 error treatment options.
- Tells TPump to INSERT a row into the target table and defines the row format, listing, in order, the VALUES to be INSERTed; a colon always precedes values.
- Names the Import File as UPSERT-FILE.DAT; the file type is FASTLOAD. The file name is under Windows, so the "-" in the name is fine.
- Tells TPump to stop loading (.END LOAD) and logs off all sessions (.LOGOFF).
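The script itself is not reproduced in these notes, so the following is only a minimal sketch consistent with the description above. The Logtable, error table, and column names are illustrative assumptions, and the pair of error treatment options shown is just one possibility; the UPSERT itself is coded with the MultiLoad-style DO INSERT FOR MISSING UPDATE ROWS clause, which the NOTE below confirms TPump shares with MultiLoad:

.LOGTABLE SQL01.LOG_UPSERT;        /* illustrative name */
.LOGON CDW/SQL01;
DATABASE SQL01;

.BEGIN LOAD
     ERRLIMIT 5
     SESSIONS 1
     PACK 20
     ERRORTABLE SQL01.ERR_UPSERT;  /* still only ONE error table */

/* INDICATORS: the FASTLOAD-format file carries a NULL bitmap per row */
.LAYOUT FILELAYOUT2 INDICATORS;
.FIELD Student_ID * INTEGER;
.FIELD Grade_Pt   * DECIMAL(5,2);

/* The UPSERT pair: if the UPDATE finds no row to change, the INSERT
   runs instead. The two error treatment options are examples only */
.DML LABEL UPSERTER
     DO INSERT FOR MISSING UPDATE ROWS
     MARK EXTRA UPDATE ROWS;
UPDATE Student_Profile4
   SET Grade_Pt = :Grade_Pt
 WHERE Student_ID = :Student_ID;
INSERT INTO Student_Profile4 (Student_ID, Grade_Pt)
VALUES (:Student_ID, :Grade_Pt);

.IMPORT INFILE UPSERT-FILE.DAT
     FORMAT FASTLOAD
     LAYOUT FILELAYOUT2
     APPLY UPSERTER;

.END LOAD;
.LOGOFF;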

NOTE: The above UPSERT uses the same syntax as MultiLoad, and this continues to work. However, there might soon be another way to accomplish this task. NCR has built an UPSERT, and we have tested the newer statement form, without success. We are not sure if this will be a future technique for coding a TPump UPSERT, or if it is handled internally. For now, use the original coding technique.

Monitoring TPump

TPump comes with a monitoring tool called the TPump Monitor. This tool allows you to check the status of TPump jobs as they run and to change (remember "throttle up" and "throttle down"?) the statement rate on the fly. Key to this monitor is the "SysAdmin.TpumpStatusTbl" table in the Data Dictionary Directory. If your Database Administrator creates this table, TPump will update it on a minute-by-minute basis while it is running, and you may update the table to change the statement rate for an IMPORT. If you want TPump to run unmonitored, then the table is not needed. A monitor program can also be started from the UNIX command line. Below is a chart that shows the Views and Macros used to access the "SysAdmin.TpumpStatusTbl" table. Queries may be written against the Views; the Macros may be executed.

Views and Macros used to access the SysAdmin.TpumpStatusTbl table:

View  | SysAdmin.TPumpStatus
View  | SysAdmin.TPumpStatusX
Macro | SysAdmin.TPumpUpdateSelect
Macro | TPumpMacro.UserUpdateSelect
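For example, checking on running TPump jobs can be as simple as querying one of the views. A minimal sketch, assuming your DBA has created SysAdmin.TpumpStatusTbl (and with it the views above):

SELECT *
FROM SysAdmin.TPumpStatus;   /* status rows that TPump refreshes
                                about once a minute while running */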

Handling Errors in TPump Using the Error Table

One Error Table: Unlike FastLoad and MultiLoad, TPump uses only ONE Error Table per target table, not two. If you name the table, TPump will create it automatically; if you specify nothing, TPump will assume the default name. Entries are made to this table whenever errors occur during the load process.

The error table does the following:
- Identifies errors
- Provides some detail about the errors
- Stores a portion of the actual offending row for debugging

When compared to the error tables in MultiLoad, the TPump error table is most similar to the MultiLoad Acquisition error table. Like that table, it stores information about errors that take place while TPump is trying to acquire data. It is the errors that occur when the data is being moved, such as data translation problems, that TPump will want to report on. It will also want to report any difficulties building valid Primary Indexes. Remember, TPump has less tolerance for errors than FastLoad or MultiLoad.

Like MultiLoad, TPump offers the option to either MARK errors (include them in the error table) or IGNORE errors (pay no attention to them whatsoever). These options are listed in the .DML LABEL sections of the script and apply ONLY to the DML functions in that LABEL. The general default is to MARK; if you specify nothing, TPump will assume that default. When doing an UPSERT, this default does not apply.

COLUMNS IN THE TPUMP ERROR TABLE

ImportSeq  | Sequence number that identifies the IMPORT command where the error occurred
DMLSeq     | Sequence number for the DML statement involved with the error
SMTSeq     | Sequence number of the DML statement being carried out when the error was discovered
ApplySeq   | Sequence number that tells which APPLY clause was running when the error occurred
SourceSeq  | The number of the data row in the client file that was being built when the error took place
DataSeq    | Identifies the INPUT data source where the error row came from
ErrorCode  | System code that identifies the error
ErrorMsg   | Generic description of the error
ErrorField | Number of the column in the target table where the error happened; left blank if the offending column cannot be identified. This is different from MultiLoad, which supplies the column name.
HostData   | The data row that contains the error, limited to the first 63,728 bytes related to the error

Common Error Codes and What They Mean

TPump users often encounter three error codes that pertain to:
- Missing data rows
- Duplicate data rows
- Extra data rows

Become familiar with these error codes and what they mean. This could save you time getting to the root of some common errors you could see in your future!

#1: Error 2816: Failed to insert duplicate row into TPump Target Table.

Nothing is wrong when you see this error. In fact, it can be a very good thing: it means that TPump is notifying you that it discovered a DUPLICATE row. This error jumps to life when one of the following options has been stipulated in the .DML LABEL:

MARK DUPLICATE INSERT ROWS
MARK DUPLICATE UPDATE ROWS

Note that the original row will be inserted into the target table, but the duplicate row will not.

#2: Error 2817: Activity count greater than ONE for TPump UPDATE/DELETE.

Sometimes you want to know if there were too many "successes." This is the case when there are EXTRA rows when TPump is attempting an UPDATE or DELETE. TPump will log this error whenever it sees an activity count greater than one for such extra rows, provided you have specified either of these options in a .DML LABEL:

MARK EXTRA UPDATE ROWS
MARK EXTRA DELETE ROWS

At the same time, the associated UPDATE or DELETE will be performed.

#3: Error 2818: Activity count zero for TPump UPDATE or DELETE.

Sometimes you want to know if a data row that was supposed to be updated or deleted wasn't! That is when you want to know that the activity count was zero, indicating that the UPDATE or DELETE did not occur. To see this error, you must have used one of the following parameters:

MARK MISSING UPDATE ROWS
MARK MISSING DELETE ROWS
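To make the three codes concrete, here is a hedged sketch of a .DML LABEL that would surface all three conditions in the error table; the label, table, and column names are ours, not from the course material:

.DML LABEL UPDREC
     MARK DUPLICATE UPDATE ROWS   /* surfaces 2816 */
     MARK EXTRA UPDATE ROWS       /* surfaces 2817: more than one row qualified */
     MARK MISSING UPDATE ROWS;    /* surfaces 2818: no row qualified */
UPDATE Student_Profile4
   SET Grade_Pt = :Grade_Pt
 WHERE Student_ID = :Student_ID;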

.BEGIN LOAD Parameters UNIQUE to TPump

MACRODB <databasename>: This parameter identifies a database that will contain any macros utilized by TPump. Remember, TPump does not run the SQL statements by itself; it places them into Macros and executes those Macros for efficiency.

NOMONITOR: Use this parameter when you wish to keep TPump from checking either statement rates or update status information for the TPump Monitor application.

PACK (n): Use this to state the number of statements TPump will "pack" into a multiple-statement request. Multistatement requests improve efficiency in either a network or channel environment because they use fewer sends and receives between the application and Teradata.

RATE: This refers to the Statement Rate. It sets the initial maximum number of statements that will be sent per minute. A zero, or no number at all, means that the rate is unlimited. If the Statement Rate specified is less than the PACK number, then TPump will send requests that are smaller than the PACK number.

ROBUST ON/OFF: ROBUST defines how TPump will conduct a RESTART; ON is the default. ROBUST ON means that one row is written to the Logtable for every SQL transaction. If you specify ROBUST OFF, you are telling TPump to utilize "simple" RESTART logic: just start from the last successful CHECKPOINT. Be aware that if some statements are reprocessed, such as those processed after the last CHECKPOINT, then you may end up with extra rows in your error tables. Why? Because some of the statements in the original run may already have found errors, in which case they would have recorded those errors in an error table. The downside of running TPump in ROBUST mode is that it incurs additional, and possibly unneeded, overhead.
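As a hedged sketch, these parameters could appear together in a .BEGIN LOAD clause like the following; the database name, table name, and numeric values are illustrative only:

.BEGIN LOAD
     SESSIONS 4
     PACK 20                  /* up to 20 statements per multi-statement request */
     RATE 600                 /* initial maximum of 600 statements per minute */
     ROBUST ON                /* one Logtable row per SQL transaction for RESTARTs */
     MACRODB WorkDB           /* TPump-built macros are stored in database WorkDB */
     ERRORTABLE SQL01.ERR_PUMP;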

TPump and MultiLoad Comparison Chart

Function | MultiLoad | TPump
Error Tables must be defined | Optional, 2 per target table | Optional, 1 per target table
Work Tables must be defined | Yes | No
Logtable must be defined | Yes | Yes
Allows Referential Integrity | No | Yes
Allows Unique Secondary Indexes | No | Yes
Allows Non-Unique Secondary Indexes | Yes | Yes
Allows Triggers | No | Yes
Loads a maximum of n number of tables | Five | 60
Maximum Concurrent Load Instances | 15 | Unlimited
Locks at this level | Table | Row Hash
DML Statements Supported | INSERT, UPDATE, DELETE, "UPSERT" | INSERT, UPDATE, DELETE, "UPSERT"
How DML Statements are Performed | Runs actual DML commands | Compiles DML into MACROS and executes
DDL Statements Supported | All | All
Transfers data in 64K blocks | Yes | No, moves data at row level
RESTARTable | Yes | Yes
Stores UPI Violation Rows | Yes, with MARK option | Yes, with MARK option
Allows use of Aggregated, Arithmetic calculations or Conditional Exponentiation | No | Yes
Allows Data Conversion | Yes | Yes
Performance Improvement | As data volumes increase | By using multistatement requests
Table Access During Load | Uses WRITE lock on tables in Application Phase | Allows simultaneous READ and WRITE access due to Row Hash Locking
Effects of Stopping the Load | Consequences | No repercussions
Resource Consumption | Hogs available resources | Allows consumption management via Parameters

Some important commands:

ABORT: Abort any and all active running requests and transactions, but do not exit BTEQ.
DEFAULTS: Reset all BTEQ Format command options to their defaults. This will utilize the default configurations.
LOGOFF: End the current session or sessions, but do not exit BTEQ.
LOGON: Starts a BTEQ session. Every user, application, or utility must LOGON to Teradata to establish a session.
QUIT: End the current session or sessions and exit BTEQ.
SESSIONS: Specifies the number of sessions to use with the next LOGON command.
ERROROUT: Write error messages to a specific output file.
EXPORT: Open a file with a specific format to transfer information directly from the Teradata database.
FORMAT: Enable/inhibit the page-oriented format command options.
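As a hedged illustration of several of these commands working together (the IP address, user, and file name are ours, not from the course):

.SESSIONS 2
.LOGON 127.0.0.1/dbc,dbc
.EXPORT REPORT FILE = emp_report.txt
SELECT * FROM tmp.employee_table;
.EXPORT RESET
.LOGOFF
.QUIT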


IMPORT: Open a file with a specific format to import information into Teradata.
INDICDATA: One of multiple data mode options for data selected from Teradata. The modes are INDICDATA, FIELD, or RECORD mode.
QUIET: Limit BTEQ output displays to all error messages and request processing statistics.
REPEAT: Submit the next request a certain amount of times.
RUN: Execute Teradata SQL requests and BTEQ commands directly from a specified run file.
ABORT: Abort any active transactions and requests.
ERRORLEVEL: Assign severity levels to particular error numbers.
EXIT: End the current session or sessions and exit BTEQ.
GOTO: Skip all intervening commands and resume after branching forward to the specified label.
HANG: Pause BTEQ processing for a specific amount of time.
IF…THEN: Test a stated condition, and then resume processing based on the test results.
LABEL: Marks a line of code as the destination of a branch; the GOTO command will always GO directly TO a particular label.
MAXERROR: Specifies a maximum allowable error severity level.
RECORDMODE: One of multiple data mode options for data selected from Teradata (INDICDATA, FIELD, or RECORD).
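A small hedged sketch of BTEQ control flow using several of these commands; the logic and file names are illustrative only:

.RUN FILE = mylogon.txt
SELECT COUNT(*) FROM tmp.t1;
.IF ERRORCODE <> 0 THEN .GOTO DONE
.HANG 10
.REPEAT 2
SELECT DATE;
.LABEL DONE
.QUIT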


SEPARATOR: Specifies a character string or specific width of blank characters separating columns of a report.
SUPPRESS: Replace each and every consecutively repeated value with completely-blank character strings.

FastExport commands:

ACCEPT: Allows the value of utility variables to be accepted directly from a file or from environmental variables.
LOGON: The LOGON command or string used to connect sessions established through the FastExport utility.
LOGTABLE: FastExport utilizes this to specify a restart log table, which holds the FastExport checkpoint information.
RUN FILE: Used to point to a file that FastExport is to use as standard input. This will invoke the specified external file as the current source of utility and Teradata SQL commands.
SET: Assigns a data type and value to a variable.
FIELD: Constitutes a field in the input record section that provides data values for the SELECT statement.
FILLER: Specifies a field in the input record that will not be sent to Teradata for processing. It is part of the input record only, alongside the FIELDs that provide data values for the SELECT statement.
LAYOUT: Specifies the data layout for a file. It contains a sequence of FIELD and FILLER commands. This is used to describe the import file that can optionally provide data values for the SELECT.
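A hedged FastExport fragment showing LAYOUT, FIELD, and FILLER together (the positions and names are ours); the FILLER column is read from the input record but never sent to Teradata:

.LAYOUT Parm_Layout;
.FIELD  YY   1 CHAR(8);   /* sent to Teradata for use in the SELECT */
.FILLER gap  * CHAR(2);   /* read from the record, not sent to Teradata */
.FIELD  ZZ   * CHAR(8);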

FastLoad commands:

BEGIN LOADING: This identifies and locks the FastLoad target table for the duration of the load. It also identifies the two error tables to be used for the load. CHECKPOINT and INDICATORS are subordinate commands in the BEGIN LOADING clause of the script. CHECKPOINT, which will be discussed below in detail, is not the default for FastLoad; it must be specified in the script. INDICATORS is a keyword related to how FastLoad handles nulls in the input file. It identifies columns with nulls and uses a bitmap at the beginning of each row to show which fields contain a null instead of data. When the INDICATORS option is on, FastLoad looks at each bit to identify the null column. The INDICATORS option does not work with VARTEXT.

DEFINE: This names the Input file and describes the columns in that file and the data types for those columns.

DELETE: Deletes all the rows of a table. Remember, this is not a good thing to do within a FastLoad script, since it cancels the ability to restart: upon restart, the script will fail because the table is locked. This will only work in the initial run of the script.

DROP TABLE: Drops a table and its data. It is used in FastLoad to drop previous Target and error tables.

ERRLIMIT: Specifies the maximum number of rejected ROWS allowed in error table 1 (Phase I). ERRLIMIT provides you with a safety valve: you may specify a particular number of error rows beyond which FastLoad will immediately proceed to the abort. This provides the option to restart the FastLoad, or to scrub the input data more before loading it. This handy command can be a lifesaver when you are not sure how corrupt the data in the Input file is; the more corrupt it is, the greater the clean-up effort required after the load finishes. At the same time, the rows placed in the error table are not in the data table; getting them there becomes your responsibility.

HELP: Designed for online use, the HELP command provides a list of all possible FastLoad commands along with brief, but pertinent, tips for using them.

HELP TABLE: Builds the table columns list for use in the FastLoad DEFINE statement when the data matches the CREATE TABLE statement exactly. In real life this does not happen very often.

INSERT: This is FastLoad's favorite command! It inserts rows into the target table.

SLEEP: Working in conjunction with TENACITY, the SLEEP command specifies the number of minutes to wait before retrying to logon and establish all sessions. The default is 6 minutes. For example, suppose that Teradata sessions are already maxed-out when your job is set to run. If TENACITY were set at 4 and SLEEP at 10, then FastLoad would attempt to logon every 10 minutes for up to 4 hours. If there were no success by that time, all efforts to logon would cease.

TENACITY: Specifies the amount of time, in hours, to retry to obtain a loader slot or to establish all requested sessions to logon. The default for FastLoad is "no tenacity", meaning that it will not retry at all. Sometimes there are too many sessions already established with Teradata for a FastLoad to obtain the number of sessions it requested, or all of the loader slots are currently used. If several FastLoad jobs are executed at the same time, we recommend setting TENACITY to 4, meaning that the system will continue trying to logon for the number of sessions requested for up to four hours.

MultiLoad commands:

.BEGIN [IMPORT] MLOAD / .BEGIN DELETE MLOAD
Task: This command communicates directly with Teradata to specify whether the MultiLoad mode is going to be IMPORT or DELETE. Note that the word IMPORT is optional in the syntax because it is the DEFAULT, but DELETE is required. We recommend using the word IMPORT to make the coding consistent and easier for others to read. Any parameters for the load, such as error limits or checkpoints, will be included under the .BEGIN command. It is important to know which commands or parameters are optional since, if you do not include them, MultiLoad may supply defaults that may impact your load.

.END MLOAD
Task: This instructs MultiLoad to finish the APPLY operations with the changes to the designated databases and tables.

.DML LABEL
Task: The DML LABEL defines treatment options and labels for the application (APPLY) of data for the INSERT, UPDATE, UPSERT and DELETE operations. A LABEL is simply a name for a requested SQL activity. The LABEL is defined first, and then referenced later in the APPLY clause.

.FIELD
Task: This defines a column of the data source record that will be sent to the Teradata database via SQL. This command is used with the LAYOUT command. When writing the script, you must include a FIELD for each data field you need in SQL.

BTEQ scripts:

Simple script:

.RUN FILE = mylogon.txt
DATABASE tmp;

CREATE TABLE Employee_Table
    (Employee_No INTEGER
    ,First_name  CHAR(20)
    ,Last_name   CHAR(20)
    ,Salary      DECIMAL(8,2)
    ,Dept_No     SMALLINT)
UNIQUE PRIMARY INDEX (Employee_No);

DELETE FROM Employee_Table;

.IF ERRORCODE = 0 THEN .GOTO INSEMPS
/* ERRORCODE is a reserved word that contains the outcome status for
   every SQL statement executed in BTEQ. A zero (0) indicates that
   statement worked. */

.LABEL INSEMPS
INSERT INTO Employee_Table (1232578, 'Chambers', 'Mandee', 48850.00, 100);
INSERT INTO Employee_Table (1256349, 'Harrison', 'Herbert', 54500.00, 400);
.QUIT

BTEQ export script (exporting from the database to a parameter file):

.RUN FILE = mylogon.txt   /* mylogon.txt holds the .LOGON string:
                             127.0.0.1/user, then the password */
DATABASE tmp;
.EXPORT INDICDATA FILE = sample1ex.txt
SELECT * FROM employee_table;
.EXPORT RESET
.LOGOFF
.EXIT

BTEQ import script (importing a parameter file into a database table):

.RUN FILE = mylogon.txt
DATABASE tmp;
.IMPORT INDICDATA FILE = sample1ex.txt
.QUIET ON
.REPEAT *
USING eno (INTEGER), f_name (CHAR(20)), l_name (CHAR(20)),
      sal (DECIMAL(8,2)), deptno (SMALLINT)
INSERT INTO employee_table
    (employee_no, first_name, last_name, salary, dept_no)
VALUES (:eno, :f_name, :l_name, :sal, :deptno);
.QUIT

FastExport scripts:

Data (created in BTEQ beforehand; "ct" and "ins" abbreviate CREATE TABLE and INSERT):

ct t1 (x1 INT, y1 CHAR(10), z1 DECIMAL(9,4));
ins t1 (1, 'teradata', 600.0000);
ins t1 (2, 'Netezza',  500.0000);
ins t1 (3, 'Netezza',  600.0000);
ins t1 (4, 'DB2',      500.0000);
ins t1 (5, 'Netezza',  600.0000);

FastExport using the SET command:

.LOGTABLE tmp.RestartLog1_fxp;
.LOGON 127.0.0.1/dbc,dbc;
DATABASE tmp;
.SET YY TO 'Netezza';
.SET ZZ TO 600;
.BEGIN EXPORT SESSIONS 4;
.EXPORT OUTFILE FXP_DEF.OUT;
SELECT x1, y1, z1
FROM   T1
WHERE  y1 = '&YY'
AND    z1 = &ZZ
ORDER BY 1;
.END EXPORT;
.LOGOFF;

FastExport using the ACCEPT command:

.ACCEPT YY, ZZ FROM FILE parmfile.txt;
.LOGTABLE tmp.RestartLog1_fxp;
.LOGON 127.0.0.1/dbc,dbc;
DATABASE tmp;
.BEGIN EXPORT SESSIONS 4;
.EXPORT OUTFILE FXP_DEF_ACCEPT.OUT;
SELECT x1, y1, z1
FROM   T1
WHERE  y1 = '&YY'
AND    z1 = &ZZ
ORDER BY 1;
.END EXPORT;
.LOGOFF;

FastExport using the LAYOUT command:

.LOGTABLE tmp.RestartLog1_fxp;
.LOGON 127.0.0.1/dbc,dbc;
DATABASE tmp;
.BEGIN EXPORT SESSIONS 4;
.LAYOUT Record_Layout;
.FIELD YY 1 CHAR(8);
.FIELD ZZ * CHAR(8);
.IMPORT INFILE 'fexplaydatafile.txt' LAYOUT Record_Layout FORMAT TEXT;
.EXPORT OUTFILE FXP_DEF_LAYOUT.OUT;
SELECT x1, y1, z1
FROM   T1
WHERE  y1 = :YY
AND    z1 = :ZZ
ORDER BY 1;
.END EXPORT;
.LOGOFF;

FastLoad script:

SESSIONS 8;
TENACITY 4;
SLEEP 3;
ERRLIMIT 1000;
LOGON 127.0.0.1/dbc,dbc;
DEFINE empno (INTEGER)
      ,ename (VARCHAR(10))
      ,job   (CHAR(10))
      ,sal   (INTEGER)
      ,loc   (CHAR(10))
FILE = myfexpload.txt;
BEGIN LOADING tmp.emp_table
      ERRORFILES tmp.emp_err1, tmp.emp_err2;
INSERT INTO tmp.emp_table VALUES (:empno, :ename, :job, :sal, :loc);
END LOADING;
LOGOFF;

FastLoad optimized script (HELP TABLE builds the column list, so no full DEFINE is needed):

LOGON 127.0.0.1/dbc,dbc;
DEFINE FILE = FXP_rec_text;
HELP TABLE TMP.T1;
BEGIN LOADING TMP.T1
      ERRORFILES TMP.T1_1, TMP.T1_2;
INSERT INTO TMP.T1.*;
END LOADING;
LOGOFF;

MultiLoad script using VARTEXT mode:

.LOGTABLE tmp.t1_log;
.LOGON 127.0.0.1/dbc,dbc;
.BEGIN IMPORT MLOAD
       TABLES tmp.t1
       WORKTABLES tmp.t1_wrk
       ERRORTABLES tmp.t1_er1 tmp.t1_er2;
.LAYOUT internal;
.FIELD x1 * VARCHAR(10);
.FIELD y1 * VARCHAR(20);
.FIELD z1 * VARCHAR(10);
.DML LABEL tdmload;
INSERT INTO tmp.t1 (x1, y1, z1) VALUES (:x1, :y1, :z1);
.IMPORT INFILE md.txt
        FORMAT VARTEXT ','
        LAYOUT internal
        APPLY tdmload;
.END MLOAD;
.LOGOFF;

MultiLoad script using TEXT mode:

.LOGTABLE tmp.t1_log;
.LOGON 127.0.0.1/dbc,dbc;
.BEGIN IMPORT MLOAD
       TABLES tmp.t1
       WORKTABLES tmp.t1_wrk
       ERRORTABLES tmp.t1_er1 tmp.t1_er2;
.LAYOUT internal;
.FIELD x1 1  INTEGER;
.FIELD y1 13 VARCHAR(20);
.FIELD z1 26 DECIMAL(9,4);
.DML LABEL tdmload;
INSERT INTO tmp.t1 (x1, y1, z1) VALUES (:x1, :y1, :z1);
.IMPORT INFILE md.txt
        FORMAT TEXT
        LAYOUT internal
        APPLY tdmload;
.END MLOAD;
.LOGOFF;

