1) I have a table named product as shown below:

product_id  product_name
1           AAA
1           BBB
1           CCC
2           PPP
2           QQQ
2           RRR

Now my output should be:

product_id  product_name_1  product_name_2  product_name_3
1           AAA             BBB             CCC
2           PPP             QQQ             RRR
A) SEL PRODUCT_ID,
       MIN(CASE WHEN PRODRANK = 1 THEN PRODUCT_NAME END) AS PRODUCT_NAME_1,
       MIN(CASE WHEN PRODRANK = 2 THEN PRODUCT_NAME END) AS PRODUCT_NAME_2,
       MIN(CASE WHEN PRODRANK = 3 THEN PRODUCT_NAME END) AS PRODUCT_NAME_3
FROM (SEL PRODUCT_ID, PRODUCT_NAME,
           ROW_NUMBER() OVER (PARTITION BY PRODUCT_ID ORDER BY PRODUCT_NAME ASC) AS PRODRANK
      FROM PRODUCT) A
GROUP BY 1;
2) I have an employee table with EMPID, EMPNAME, DEPTID, SAL and want to fetch the maximum and
minimum salary for each dept id along with the emp name. Can anyone help with this? The result should contain
EMPNAME, DEPTID, SAL.
A) SEL empname, deptid, sal FROM emp QUALIFY ROW_NUMBER() OVER (PARTITION BY deptid ORDER BY sal ASC) = 1
UNION ALL
SEL empname, deptid, sal FROM emp QUALIFY ROW_NUMBER() OVER (PARTITION BY deptid ORDER BY sal DESC) = 1;
The first branch returns the minimum-salary row per dept and the second returns the maximum-salary row.
3) I have a table with emp id, emp name, dept id and sal, where dept id is a NUSI. For SEL * FROM EMP WHERE
DEPTID = 100, can anyone explain how it will fetch the records?
A) Whenever a NUSI is created, a subtable is created on each AMP with the fields rowid and secondary index value.
For a NUSI the subtable is AMP-local, i.e. it holds only the index entries for the rows that live on that particular AMP.
When someone queries on the NUSI, all the AMPs search their own subtable for the given secondary index value.
The AMPs that find matching entries in their subtable retrieve the actual data rows, and the PE consolidates the
records and returns them to the user.
Since NUSI access is an all-AMP operation, the optimizer decides whether the NUSI access or a full table scan
will be more efficient (a rough sketch follows below).
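Relating to 3), here is a rough sketch of how such a NUSI would typically be created and probed; the EMP table and DEPTID column come from the question, everything else is an assumption, and the EXPLAIN output shows which access path the optimizer actually picks:

CREATE INDEX (DEPTID) ON EMP;                     -- non-unique secondary index on DEPTID
COLLECT STATISTICS ON EMP COLUMN (DEPTID);        -- helps the optimizer weigh NUSI access vs a full table scan
EXPLAIN SELECT * FROM EMP WHERE DEPTID = 100;     -- the plan shows either NUSI subtable access or a scan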
4) Why do hash joins usually perform better than merge joins?
In a merge join, the rows to be joined must be present on the same AMP. If the rows to be joined are not on the same
AMP, Teradata will either redistribute the data or duplicate it in spool to make that happen, based on the row hash of
the columns involved in the join/WHERE clause. A hash join takes place if one or both of the tables can fit completely
inside an AMP's memory; the AMP chooses to hold the small table in memory for joins happening on row hash.
Usually the optimizer will first identify the smaller table and sort it by the join column's row hash sequence. If the
smaller table is really small and can fit in memory, the performance will be best; otherwise the sorted smaller table
will be duplicated to all the AMPs. Then the larger table is processed one row at a time, with a binary search against
the smaller table for each matching record. Whereas in a merge join, when the columns being joined are non-indexed
columns, Teradata will redistribute the table rows into spool and sort them by hash code so that matching data lies on
the same AMP and the join can happen on the redistributed data.

5) Under which circumstances does the optimizer choose a product join?
A) A product join compares all rows of the first table with all rows of the second table. This join is generally chosen
by the optimizer whenever there is an inequality condition in the join; hash values cannot be compared for greater than
or less than, which results in a product join. A product join uses spool files for redistribution or duplication of
tables, which is not desirable, so it is advisable to avoid product joins as far as possible.

6) In a BTEQ we have 2 INSERT, 2 DELETE and 2 UPDATE statements. When the BTEQ is restarted I need to run the BTEQ
from after the DELETE statement (that is, there is no need to run the INSERT and DELETE statements again). What is the
logic for the above requirement?
A) Assuming usage of a simple restart table, the logic below can be used. If the UPDATE statement fails, the script is
restarted from the UPDATE statement; if the INSERT or DELETE statement fails, the script is restarted from the INSERT
statement. Note that the SQL below does not take any further care of INSERT or DELETE statement failures beyond exiting.

bteq << EOF
.LOGON tdpid/user,password
SELECT * FROM RESTART_TABLE WHERE STATUS='RESTART';
.IF ACTIVITYCOUNT = 1 THEN .GOTO UPDTDML
.IF ACTIVITYCOUNT = 0 THEN .GOTO INSDML
.LABEL INSDML
INSERT STATEMENT;
.IF ERRORCODE <> 0 THEN .EXIT
.LABEL DELDML
DELETE STATEMENT;
.IF ERRORCODE <> 0 THEN .EXIT
.IF ERRORCODE = 0 THEN .GOTO REST
.LABEL REST
DEL FROM RESTART_TABLE;
INSERT INTO RESTART_TABLE ('RESTART');
.LABEL UPDTDML
UPDATE STATEMENT;
.IF ERRORCODE <> 0 THEN .EXIT
.IF ERRORCODE = 0 THEN .GOTO REST1
.LABEL REST1
DEL FROM RESTART_TABLE;
EOF

8) What is DYNAMIC SQL in TD?
A) Dynamic SQL is SQL that is built and invoked inside a stored procedure at run time. EXECUTE IMMEDIATE or CALL can be
used to invoke dynamic SQL (a hedged sketch appears after question 9 below).

9) What is the structure of the UV table in MLOAD?
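To make the dynamic SQL answer in 8) concrete, here is a minimal sketch using the CALL-based mechanism (DBC.SysExecSQL); the procedure name, parameter and table name are hypothetical:

REPLACE PROCEDURE purge_table (IN tbl_name VARCHAR(60))
BEGIN
  -- the SQL text is assembled at run time from the parameter and then executed
  CALL DBC.SysExecSQL('DELETE FROM ' || tbl_name);
END;

CALL purge_table('sandbox_db.stage_orders');   -- deletes all rows from the named (hypothetical) table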

10) What happens if a query fails in the dispatcher?

11) What is an identity column in TD?
A) Identity columns are system generated values. Generally these columns are used for unique values. Some restrictions
are: 1. Only one column can be specified as an identity column. 2. The "Default" and "Always with cycle" options on
identity columns may result in duplicates. 3. Join indexes, PPIs and hash indexes cannot have identity columns as indexes.

12) sel a.t1, b.t1 from emp a left outer join dept b on a.id = b.id and b.deptno = 10;
sel a.t1, b.t1 from emp a left outer join dept b on a.id = b.id where b.deptno = 10;
What is the difference between the above 2 queries?
A) This is a good question. If we look at the explain plans, we would be able to understand the difference. In the first
query, the condition given in the AND clause is applied along with the left outer join process, so it is a left outer
join on the id column in which table b contributes only its deptno = 10 rows; this can be seen in the explain plan as
"by way of a RowHash match scan with a condition". In the second query, the condition given in the WHERE clause is
applied after the left outer join process, so it is a plain left outer join on the id column and the WHERE condition is
applied after the join; this can be seen in the explain plan as "by way of a RowHash match scan with no residual
conditions".

13) What is the difference between PI and PPI? How do you implement PPI?
A) PI determines the distribution and retrieval of data for a table in TD. PI is mandatory in TD: if a primary index is
not defined, either the PK, a unique column or the first column is assigned to be the PI. PPI is used to store the data
in partitions; along with the row hash value, partition numbers are also stored. While creating the table, the
PARTITION BY clause is used to partition the data, and either RANGE_N or CASE_N can be used (a hedged sketch appears
after question 16 below).

14) What are referential constraints? How do you implement RI in Teradata?
A) Referential Integrity is based upon how primary and foreign keys are defined. A referenced column must be a Primary
Key, or must be Unique and Not Null. Because of this rule, the FK can be created using a CREATE or ALTER statement.

15) How many macros can we create inside a macro?
A) We can't create a macro inside a macro.

16) What is the use of PI (primary index) and SI (secondary index)?
A) Data can be accessed in Teradata in 3 ways: 1. primary index, 2. secondary index, 3. full table scan.
PRIMARY INDEX: this is the mandatory index; Teradata distributes data across all AMPs based on it, so both access and
retrieval are possible through the primary index. The variants are 1) UNIQUE PRIMARY INDEX, 2) NON UNIQUE PRIMARY
INDEX, 3) PARTITIONED PRIMARY INDEX, 4) NO PRIMARY INDEX (from Teradata 13).
SECONDARY INDEX: only retrieval is possible through a secondary index; it does not distribute the data. Usage scenario:
there is a primary index on the table, but we are using columns other than the primary index columns in the WHERE
clause for frequently retrieved data. In this situation the system does a full table scan; to avoid this, create an SI
on the table.
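To illustrate 13), here is a minimal sketch of a PPI declaration; the SALES table, its columns and the date range are assumptions made for the example:

CREATE TABLE sales
( sale_id INTEGER
, sale_dt DATE
, amount  DECIMAL(12,2)
)
PRIMARY INDEX (sale_id)
PARTITION BY RANGE_N (sale_dt BETWEEN DATE '2023-01-01' AND DATE '2023-12-31'
                      EACH INTERVAL '1' MONTH);

The PI (sale_id) still drives row distribution across the AMPs; the PARTITION BY clause only groups the rows on each AMP, and the partition number is stored along with the row hash.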

17) Give some points about Teradata Viewpoint.
A) TD Viewpoint is a framework in which applications are displayed. This web based framework can be used to manage and
monitor Teradata systems using a browser.

18) If a query is NOT WRITTEN PROPERLY, what recommendations can you give to the developer?
A) In TD, the explain plan is like a gift to every developer. When generating the explain plan, TD looks for the stats
on the columns used in the SQL. Check the recommendations for statistics, particularly on the columns used on both
sides of the join. Confidence levels should be observed (low, high or no confidence). Try to observe the plan for any
product joins or nested joins. Avoid unnecessary joins. Use secondary indexes and join indexes appropriately. Repeat
the review of the explain plan steps after tuning.

19) For one query, when we don't collect statistics on it, explain shows us LOW CONFIDENCE; when we collect statistics
it shows HIGH CONFIDENCE. What is actually going on internally?
A) Statistics collection is a concept in which TD stores demographic data about tables and columns. If stats are
collected for the columns involved in the SQL, the stats are available to TD, which in turn displays high confidence,
meaning an exact or close approximation of the number of rows processed will be displayed. If enough stats are not
available, TD prepares an estimate based on the data available and displays low confidence (a hedged COLLECT STATISTICS
sketch appears after question 23 below).

20) Difference between QUALIFY and GROUP BY?
A) GROUP BY is used to find an aggregate like SUM or COUNT for a group of values, whereas QUALIFY is used like a HAVING
clause on ordered-analytic functions such as RANK.

21) I have a table with 3 fields like id, mark1, mark2 and I would like to update a mark3 field that would hold the max
for each record (the max value of the 2 fields) in Teradata:

ID Mark1 Mark2 Mark3
1  10    20
2  20    30
3  40    10
4  50    50

I have to write an update statement so that Mark3 holds the max value of the mark1 and mark2 fields, like below:

ID Mark1 Mark2 Mark3
1  10    20    20
2  20    30    30
3  40    10    40
4  50    50    50

A) UPDATE db_name.Tab_name SET mark3 = CASE WHEN mark1 > mark2 THEN mark1 ELSE mark2 END;

22) Which utility can be used for loading data from non-Teradata systems into Teradata?
A) OLE LOAD is the only utility to transfer data from non-Teradata sources to Teradata.

23) I have a SQL question. We have the source data as below: 101, address1, address2, address3, and the output should
look like below:
101 address1
101 address2
101 address3
A) select 101, address1 from tablet1
union all
select 101, address2 from tablet1
union all
select 101, address3 from tablet1;
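Relating to 19), here is a minimal sketch of collecting statistics and then re-checking the confidence level in the plan; the ORDERS table and customer_id column are assumptions made for the example:

COLLECT STATISTICS ON orders COLUMN (customer_id);
EXPLAIN SELECT * FROM orders WHERE customer_id = 1001;
-- before the COLLECT, the retrieve step typically reads "... with low confidence";
-- after it, the same step should read "... with high confidence"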

24) I want to load 1000 records using FL; for every 100 records there is a checkpoint, and the script failed at 120
records. Now the target table contains populated data, but FL doesn't support existing data in the target table. How
can we load the data in FL?
A) I agree with your statement that using FLOAD we can't load data into a table which already contains data, but that
is valid only at the initial run of FLOAD. Here the script failed after loading 120 records and we have a checkpoint
option at 100 records, so one entry will be in the log table after the first 100 records were loaded into the table.
When we restart the script it starts from the last checkpoint, i.e. from record 101, not from record 121. The records
from 101 to 120 were already loaded into the table, so those 20 records (101 to 120) will be omitted and loaded into
error table 2 (the UV table); the remaining records from 121 to 1000 will be loaded into the target table, as the FLOAD
utility has automatic restart capability.

25) Why doesn't FLOAD support NUSI, whereas MLOAD supports NUSI?

Is the PK concept available in Teradata? If it is, how can we create a Primary Key for a table in TD?
A) PK is a logical concept and is not mandatory in TD, but PI is mandatory, as the distribution of data is completely
dependent on the PI. The choice of PI is very important, and a PI can contain up to 16 columns.

26) How do you release locks in FastLoad and MultiLoad? Example?
A) In MLOAD, for a lock in the acquisition phase: RELEASE MLOAD databasename.tablename; in the application phase:
RELEASE MLOAD databasename.tablename IN APPLY. In FLOAD, leave the section between BEGIN LOADING and END LOADING empty
(do not write any code there) and run the script; the locks will be released.

27) If a table has an access lock, is a write lock possible on the same table or not?
A) Yes, it is possible. When an access lock is present on a table, a write lock can be intended for the same table.

28) Write a query for the following data: 123.45 is the input; write a query to load only the part after the decimal
(e.g. .45) into the database. How is it possible?
A) update dbname.tbname set salary = substr(cast(salary as char(10)), index(cast(salary as char(10)), '.'));
or
update table_name set sal = sal - cast(sal as int);

29) Where can we use the delimiter in MLOAD?
A) Please let me know, I am not sure about the question, but VARTEXT is one of the input file formats supported in
MLOAD, and the delimiter is one of the options that goes with it.

30) Can we load 10 million records into a target table by using TPump?
A) Yes, we can load 10 million records into a target table using TPump, but it takes more time because it applies a row
hash lock on each row. Generally, for lower volumes of data we go for TPump; if the table doesn't have any of the
limitations of FastLoad and MultiLoad, go for FastLoad.

31) Are NULLs counted while doing an average? Example: we have a table column A with the following values:
A
-5
Null
8
3
Now what is the average of A?
A) NULLs are completely ignored while calculating averages. While counting that column alone, "sel count(a)" will give
you 3, ignoring the NULL.
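As a quick check of 31) (the table name is assumed), AVG and COUNT on a column both ignore NULLs, while COUNT(*) does not:

SELECT AVG(a) AS avg_a, COUNT(a) AS cnt_a, COUNT(*) AS cnt_rows
FROM demo_t;
-- with A holding -5, NULL, 8 and 3: AVG(a) = (-5+8+3)/3 = 2, COUNT(a) = 3, COUNT(*) = 4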

32) This is my input:

id name    gender
1  Ram     Female
2  Kumar   Female
3  sathish Female
4  Santhya Male
5  Durga   Male
6  Priya   Male

How do I change Gender from Male to Female and Female to Male? Write a SQL query for this.
A) I am not sure whether DECODE will work in Teradata 12.0, but a CASE statement can be used:
update stud set gender = case gender when 'Male' then 'Female' when 'Female' then 'Male' end;

33) There is a load to the table every hour, 24/7. Morning traffic is high, afternoon traffic is less, night traffic is
high. For this situation which utility would you use, and how do you load?
A) TPump is suggestable here. By increasing or decreasing the packet (pack) size we can handle the varying traffic.

34) How do you generate a sequence at the time of display?
A) By using CSUM (a hedged sketch appears after question 40 below).

35) How do you load multiple files to a table by using FastLoad scripts?
A) Remove the END LOADING statement from the script, replace the file name one by one in the script up to the last
file, and submit each time so that the data is appended at the AMP level. For the last file, specify the END LOADING
statement in the script and run it, so that the data moves from the AMPs into the table.

36) We are migrating an Oracle table into Teradata. The volume of data is huge and it is partitioned (year-wise list
partition). How do I simulate the same in Teradata?
A) Using PPI; a PPI can be a better option here. Similarly, the data can also be partitioned by month.

37) Please explain the parallel distribution and subtable concepts in Teradata.
A) Parallel distribution: this is the key concept of Teradata, storing data in parallel. As said above, data is stored
in parallel across the available AMPs, as Teradata uses the PI to distribute the data.
Subtable: this table is created when a unique secondary index is created on the table. It holds the rowid, the index
data and the hash value for the data, and it is used to fetch the data through the secondary index.

38) My table contains columns like deptno, sal, empname, and I want output such as:

deptno subtotalofdept totalsal
10     3700           3700
20     3400           7100

A) select csum(sal, deptno) from employee;

39) On which column will you take the primary index?
A) Generally, a PI on a unique column would be recommended. This is because a unique primary index helps TD to
distribute the rows evenly among the AMPs.

40) What is the default join strategy in Teradata?
A) The Teradata cost-based optimizer decides the join strategy based on the optimum path. The common strategies
followed are the Merge, Hash and Nested joins.
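Relating to 34), here is a minimal sketch of CSUM generating a sequence number at display time; the EMPLOYEE table and its columns are assumptions made for the example:

SELECT empname, sal,
       CSUM(1, empname) AS seq_no   -- running count 1, 2, 3, ... in empname order
FROM employee;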

41) Can you load the same data into multiple tables using MultiLoad? How will the loading process be: serial or
parallel?
A) Yes, it is possible. Use two DML statements and two IMPORT statements, use different layouts, and use a single BEGIN
statement. The loading would be parallel.

42) There is a table with 4 columns; 3 columns have already been loaded with 5 million records and the 4th column is
empty. Now I have got 5 million records of data which have to be loaded into the 4th column. How can I load this data
quickly into the 4th column without using UPDATE?

43) Why does FLOAD not support multiset tables?
A) FLOAD will allow a multiset table, but there is no advantage in loading a multiset table using FLOAD because FLOAD
does not insert duplicate rows. If we try to load a multiset table using FLOAD with duplicate rows, FLOAD inserts the
distinct rows into the target table and the duplicate row count is displayed under the second error table.

Pseudo table locks, example: suppose a request from user_1 locks table rows on AMP3, while user_2 locks the table rows
on AMP4 first. When the user_1 request attempts to lock table rows on AMP4 (or when the user_2 request attempts to lock
table rows on AMP3), a global deadlock occurs. Pseudo table locks prevent such deadlocks from occurring.

44) Why does MLOAD need work tables?
A) Work tables are used to receive and sort data and SQL on each AMP prior to storing them permanently on disk.
MultiLoad will automatically create one work table for each target table. This means that in IMPORT mode you could have
one or more work tables; in DELETE mode you will only have one work table, since that mode only works on one target
table. The purpose of the work tables is to hold two things: 1. the Data Manipulation Language (DML) tasks, and 2. the
input data that is ready to APPLY to the AMPs.

45) Can I use a "drop" statement in the utility "fload"?
A) YES, but you have to declare it outside of the FLOAD block, meaning it should not come between BEGIN LOADING and END
LOADING. FLOAD also supports DELETE, CREATE and DROP statements, which we have to declare outside the FLOAD block;
inside the FLOAD block we can give only INSERT (a hedged script sketch appears after question 46 below).

46) In which phase of the Active Data Warehouse evolution do you use data to determine what will happen?
A) In the predicting phase. Analysts utilize the system to leverage information to predict what will happen next in the
business, to proactively manage the organization's strategy.
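Relating to 45), here is a minimal FastLoad script sketch showing DROP and DELETE placed outside the BEGIN LOADING ... END LOADING block, with only the INSERT inside it; the database, table, column and file names are all assumptions:

LOGON tdpid/user,password;
DROP TABLE stage_db.tgt_err1;
DROP TABLE stage_db.tgt_err2;
DELETE FROM stage_db.tgt;            /* DDL and DELETE are allowed here, outside the load block */
SET RECORD VARTEXT ",";
DEFINE col1 (VARCHAR(20)), col2 (VARCHAR(20)) FILE = datafile.txt;
BEGIN LOADING stage_db.tgt ERRORFILES stage_db.tgt_err1, stage_db.tgt_err2;
INSERT INTO stage_db.tgt (col1, col2) VALUES (:col1, :col2);   /* only INSERT between BEGIN and END LOADING */
END LOADING;
LOGOFF;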