This course is current as of Teradata Database 14.00.

Welcome to DATA MAINTENANCE, a subset of Teradata SQL Advanced. Click on a module name or the right arrow at the bottom of the page to get started.

Modules:
1: Compression
2: Statistics
3: CREATE TABLE AS
4: Error Tables
5: ANSI Merge
Post-Test

Release-specific information is marked with the icons: Teradata Database 14.10, Teradata Database 15.00, Teradata Database 15.10.

Module 1: Compression

Topics: Objectives, COMPRESS Options, Impact of MVC, Multi-Value Compression, ALTER TABLE and Compression, Algorithmic Compression, Block-Level Compression, Compression Comparison, Lab, Solutions

Objectives
After completing this module, you should be able to:
- Implement column compression on a table using the ALTER TABLE command.
- Recognize the benefits of column compression implemented with ALTER TABLE.
- Implement multi-value column compression for table columns.
- Recognize certain benefits, considerations and limitations of multi-value column compression.

COMPRESS Options
Teradata Database offers 3 compression options that can be used separately or together:

1. Multi-Value Compression (or Value List Compression)
A feature that allows up to 255 distinct values (plus NULL) to be compressed per column. This reduces storage cost and improves performance because there is less physical data to retrieve during scan-oriented queries.

2. Algorithmic Compression (ALC) or Column Level Compression
Allows users to apply a compression algorithm to data at the column level in a row. Compression/decompression is done by specifying a UDF function.

3. Block Level Compression (BLC)
Compression is performed by Teradata Database at the file system level on whole data blocks before the data blocks are actually written to or read from storage devices.

You can specify multi-value compression and algorithmic compression in either order.

System Storage Costs
Compression reduces storage costs by storing more 'logical' data using fewer 'physical' resources. In general, compression causes physical rows to be smaller, consequently permitting more rows per data block and thus fewer data blocks for the table. The amount of storage reduction is a function of the following factors:
- The number of values compressed.
- The size of the values compressed.
- The percentage of rows in the table with these values.

System Performance
System performance can be expected to improve as a result of value compression. Because each data block can hold more rows, fewer blocks need to be read to resolve a query, and thus fewer physical I/Os can be expected to take place. Also, because the data remains compressed while in memory, more rows can be available in cache for processing per I/O.

Compression Transparency
Compression is transparent to all user applications, utilities, ETL tools, ad hoc queries and views.

Multi-Value Compression (MVC) has three variations in syntax:

1. COMPRESS
Nulls are compressed.

2. COMPRESS NULL
Nulls are compressed. This is equivalent to option 1.

3. COMPRESS <constant>, ..., <constant>
Nulls and the specified value(s) are compressed.

The last of these three options allows up to 255 values to be compressed for a single column, and the maximum number of characters in a compressed value is 510.

An example of single-value column compression:

(customer_id   INTEGER
,account_type  CHAR(10) COMPRESS 'SAVINGS')

Things to notice in this example:
- Both nulls and the string 'SAVINGS' will be compressed when the row is stored.
- The value 'SAVINGS' will live in the table header for the account_data table on each AMP in the system.
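For context, a complete DDL statement using a column definition like this might look as follows. This is a minimal sketch: the account_data table name, the account_number column, and the primary index choice are assumptions for illustration (note that the compressed column cannot be part of the primary index):

CREATE TABLE account_data
  (customer_id    INTEGER
  ,account_number INTEGER NOT NULL
  ,account_type   CHAR(10) COMPRESS 'SAVINGS')
UNIQUE PRIMARY INDEX (account_number);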
An example of multiple-value column compression:

(customer_id   INTEGER
,account_type  CHAR(10) COMPRESS ('SAVINGS', 'CHECKING', 'CD', 'MUTUAL FUND'))

Things to notice in this example:
- Each of the four specified strings is now compressed when the row is stored.
- Nulls are also compressed.
- Each of these values is written to the table header on each AMP. No space is taken in the physical row for these values, which can significantly reduce the amount of space needed to contain the table.

Examples of highly compressible values
Any of the following should be considered candidates for compression when the frequency of their occurrence is high:
- Nulls
- Zeroes
- Spaces
- Default values

Compression is case-sensitive, so be aware of the case used for any character data (e.g., 'Teradata' is different from 'teradata').

Suggested Application Columns For Compression
Any column with high-frequency values, or with a relatively small number of distinct values, should be considered a candidate for compression. The following is a list of possible candidate columns:
- State
- City
- County
- Automobile Make
- Credit Card Type
- Account Type
- First Name
- Last Name

Limitations
Multi-value compression is not supported for:
- Any column that is a member of the primary index column set for a table.
- Row-level security constraint columns.
- Partitioning columns.
- Parent or child RI columns.
- Derived data columns.
- Identity columns.

A maximum of 255 values (plus NULL) may be compressed per column.
The maximum size of a compressed value is 510 bytes.
The aggregate of all compressed values may not exceed the maximum size of a table header (1 MB).
Columns defined with non-matching COMPRESS attributes cannot participate in fast path INSERT ... SELECT operations.

Non-compressible Data Types
Multi-value compression is not supported for columns defined with the following data types:
- BLOB
- CLOB
- Geospatial
- Period
- Structured, ARRAY/VARRAY, Period, and Geospatial UDTs
- JSON
- XML

The SQL ALTER TABLE command supports adding, changing, or deleting column compression on one or more existing columns of a table, whether the table is loaded with data or is empty. Here are some examples using ALTER TABLE to implement multi-value compression:

ALTER TABLE Table1 ADD Col1 CHAR(10) COMPRESS;

If the column Col1 exists and is nullable, then the column will now be a compressible column with NULL as the compressed value. Note that we use ADD even if the column already exists. There is no MODIFY for columns. If the column Col1 does not exist, then the column is added to the table and the column will be a compressible column with NULL as the compressed value.

ALTER TABLE Table1 ADD Col1 CHAR(10) COMPRESS NULL;

Same as the previous example.

ALTER TABLE Table1 ADD Col1 CHAR(10) COMPRESS 'Savings';

The column will be compressed for nulls and for the constant value 'Savings'. If the column is already compressed, the constant value 'Savings' will replace the existing compressed value or list of values.

ALTER TABLE Table1 ADD Col1 CHAR(10) COMPRESS (NULL, 'Savings', 'Checking');

The column will be compressed on the specified compress list. If the column is already compressed, the new compress list will replace the existing compressed value or list of values. Nulls will be compressed in either case.

Here are some examples using MVC on a numeric column. The first statement compresses the single value zero:
- The column will be compressed for one value, zero.
- Nulls will also be compressed if the column is nullable (the default is nullable).

ALTER TABLE Table1 ADD Col2 COMPRESS (0, 100, 1000);
- Adds compressed values 100 and 1000.
- Note that the value zero must be restated if this follows the previous ALTER statement.

ALTER TABLE Table1 ADD Col2 COMPRESS (NULL, 0, 100, 1000, 10000);
- Adds compressed value 10000.
- Only 10000 is added to the list, since NULL, 0, 100 and 1000 are already compressed in the prior example.

ALTER TABLE Table1 ADD Col2 COMPRESS (NULL, 0, 100);
- Reduces the compression list to null and two values.
- Values 1000 and 10000 are no longer compressed.

The final statement in the sequence removes compression from the column:
- All compressed values are now disabled.
- The column is now uncompressed.
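After a sequence of ALTER TABLE statements like the ones above, you can confirm the compress list currently in effect by displaying the table definition (using the same Table1 name as in these examples):

SHOW TABLE Table1;

The COMPRESS clause in the returned DDL reflects the full list of values currently compressed for each column.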
How is compression physically implemented via the ALTER TABLE command?
- The table will actually be rebuilt at the time of the execution of the ALTER TABLE statement.
- The space overhead requirement for rebuilding a table is around 2 MB. The table is rebuilt one cylinder at a time, so no matter how big or small a table is, the overhead remains the same.
- A full duplicate copy of the table is never created nor required.
- The ALTER process is restartable via checkpoints in the Transient Journal.
- No rollback process is possible. If a restore of the original table is needed, this can be accomplished by:
  - Re-ALTERing the table with the original compression specifications, or
  - An archive of the original table followed by a restore.

The ALTER TABLE ADD <column name> syntax allows certain other attributes to be included in the same statement as the COMPRESS attribute. There are exceptions to this rule:
- A column CONSTRAINT cannot be defined at the same time.
- A COMPRESS modification with a NULL value in the compress list is not allowed in conjunction with a NOT NULL attribute change.
- Altering a non-compressible column to a compressible column is not allowed if changing the column to an identity column at the same time. These changes may be implemented separately.

Additional Considerations
- Compressing columns on which secondary indexes are defined is allowed, unless the index is either the PI or a Referential Constraint.
- An Exclusive lock is required on the table being compressed.

Algorithmic Compression (ALC), or Column Level Compression, allows users to apply a compression algorithm to data at the column level in a row. Compression/decompression is done by specifying a UDF function. One common use is to compress two-byte Unicode into one byte when the data is Latin (ASCII). Teradata Database provides some UDFs to do compression for UNICODE and LATIN data columns. User-defined ALC routines can also be used with BLOB and/or CLOB columns. Users can create and apply their own compression/decompression algorithms to columns.

Here is an example of the compression of a UNICODE column using Algorithmic Compression:

CREATE TABLE products
  (product_id  INTEGER NOT NULL
  ,category    CHAR(20) NOT NULL
  ,description VARCHAR(500) CHARACTER SET UNICODE
     COMPRESS USING TD_SYSFNLIB.TransUnicodeToUTF8
     DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode);

The description column is defined with a character set of Unicode, which means that every character uses 2 bytes of space. Use a Teradata Database-supplied function to save space for common ASCII Latin 7-bit (USA) characters. The TransUnicodeToUTF8 function compresses all non-null values in the description column. Use this function if a Unicode column contains mostly ASCII Latin 7-bit (USA) characters.

With Block Level Compression (BLC), Teradata Database compresses data at the file system level on whole data blocks before the data blocks are actually written to or read from storage devices.

Block Level Compression can be set for a table in 3 ways:
1. When loading data, use the query band option:
   SET QUERY_BAND = 'BLOCKCOMPRESSION=YES/NO;' FOR SESSION;
2. The Ferret utility has commands.
3. DBSControl settings in the field group "Compression".
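As an illustration of the first method, the query band is set for the session and the subsequent load is written with block-level compression applied. This is a sketch only; tgt_table and src_table are placeholder names:

SET QUERY_BAND = 'BLOCKCOMPRESSION=YES;' FOR SESSION;
INSERT INTO tgt_table SELECT * FROM src_table;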
There is a CPU cost to compress and decompress whole data blocks, but this is generally considered an acceptable tradeoff, since CPU cost keeps decreasing while I/O cost remains high.

Compression Comparison

Multi-Value Compression (MVC)
- Analysis: Easy to apply to well-understood data columns and values. You need to analyze the data for common values.
- Performance impact: No or minimal CPU usage. Reduced I/O.
- Applicability: Replaces common values. Works for a wide variety of data and situations.

Algorithmic Compression (ALC)
- Analysis: Use Teradata Database-supplied algorithms or user-defined compression algorithms. You need to analyze the CPU cost of the algorithm used.
- Performance impact: CPU cycles are used to apply the algorithm.
- Applicability: Industry data, UNICODE, Latin. Automatically invoked for values not replaced by MVC.

Block Level Compression (BLC)
- Analysis: Set once and forget. No data analysis required; you can turn it on for all data, or turn it on or off for each table.
- Performance impact: CPU cycles are used to compress and decompress whole blocks. Automatically combined with MVC and ALC.
- Applicability: All data.

You can choose any combination, or all three, on a column/table.

Lab
If you have not set up your lab server connection, click on the Lab Setup button at the bottom of the page to get instructions. You will need these instructions to log on to Teradata Database. If you experience problems connecting to the lab server, contact Training.Support@Teradata.com. For this set of lab questions you may need information from the Database Info document. Click on the Next button at the bottom of the page to see the answers.

1.) Display the table definition for the Customer_Service.accounts table.
2.) How many distinct values of the column 'city' are found in the accounts table, and how many occurrences are there for each value?
3.) How much space is taken by this table currently?
4.) Create the table Accounts in your database with compression specified for the following 'city' values: 'Culver City', 'Hermosa Beach', 'Los Angeles', 'Santa Monica'.
5.) Populate your table with the rows from Customer_Service.accounts.
6.) See how much space your table requires compared to the uncompressed version in the Customer_Service database, as seen in lab #3.
7.) How many distinct cities, states and zip codes are contained in the accounts table? Use a single query to answer this question.

Solution 1

SHOW TABLE Customer_Service.accounts;

Solution 2

SELECT city, COUNT(*)
FROM Customer_Service.accounts
GROUP BY 1
ORDER BY 1;

Module 2: Statistics

Objectives
After completing this module, you should be able to:
- Collect statistics using a sample of a table.

The COLLECT STATISTICS command permits collection of statistical information about the distribution of values in one or more columns of a table. Teradata Database's cost-based optimizer can only choose the best query plans if it has an accurate statistical picture of the tables involved. If you haven't collected statistics, the optimizer relies on random AMP samples to determine the size and distribution of a table. Random AMP samples are often adequate for large, well-distributed tables. But statistics are especially helpful when a random AMP sample doesn't capture the true distribution of values in the table, as in the case of small or unevenly distributed tables.

The COLLECT STATISTICS command of SQL is used to collect statistics on a column or on an index.
Each execution of the command may specify either one specific column, a set of columns, or one specific index (which may consist of multiple columns).

Example 1
Collect statistics on the department_number column of the employee table. The keyword STATISTICS may be abbreviated to either STATS or STAT, so these two SQL commands will produce the same collection:

COLLECT STATISTICS COLUMN (department_number) ON employee;
COLLECT STATS COLUMN (department_number) ON employee;

Example 2
Create a unique index on the network_id column of the employee table, then collect statistics on the index. Assign the name emp_network_ix to the index.

CREATE UNIQUE INDEX emp_network_ix (network_id) ON employee;
COLLECT STATS UNIQUE INDEX (network_id) ON employee;

Example 3
Create a value-ordered index on the salary_amt column of the employee table, then collect statistics on the index. Assign the name emp_sal_ix to the index.

CREATE INDEX emp_sal_ix (salary_amt) ORDER BY VALUES (salary_amt) ON employee;
COLLECT STATS INDEX (salary_amt) ORDER BY VALUES (salary_amt) ON employee;

Statistics may be captured on a combination of columns, with or without a multi-column index. This collection only makes sense if you use these columns together in a WHERE clause as selection conditions, or in an ON clause as join conditions. If these columns are also used individually for query conditions, you may collect statistics on them individually as well.

Example 4

COLLECT STATISTICS COLUMN (employee_number, department_number) ON Employee;

Or, if there is an index on the columns:

Example 5

CREATE INDEX emp_dept_ix (employee_number, department_number) ON Employee;
COLLECT STATISTICS COLUMN (employee_number, department_number) ON Employee;
COLLECT STATISTICS INDEX (employee_number, department_number) ON Employee;
COLLECT STATISTICS INDEX emp_dept_ix ON Employee;

Example 6
You can collect statistics on multiple indexes of a table with a single SQL statement. This is not only easier to write, but it can be a performance boost: Teradata Database can use one scan of the table for multiple statistics collections.

Example 7 (Teradata Database 14.10)
Beginning with Teradata Database 14.10, you can collect statistics on a single-table expression. This is especially valuable if you have queries with WHERE clauses using this expression. You can also give a name to a statistics collection. In this example, the collection is named stats_month_price:

COLLECT STATISTICS
  COLUMN (EXTRACT(MONTH FROM o_orderdate), o_totalprice) AS stats_month_price
  ON orders;

A full STATISTICS collection can be time-consuming and CPU-intensive. But there is an option to collect only the quick SUMMARY statistics.

SUMMARY statistics give the optimizer only object-level demographics such as row counts, AMP sampling estimates, average row size, average block size, and so on. If you specify SUMMARY, you cannot specify USING options, column references, or explicit COLUMN or INDEX references. SUMMARY statistics are collected along with any other statistics collection, but you can also collect them by themselves. It is recommended to have (at minimum) SUMMARY stats for every table in regular use for queries. You cannot specify SUMMARY for volatile tables.

Collecting statistics can be time and resource consuming because it scans and sorts all of the data to compute the number of occurrences of each distinct value. Another option to reduce resource usage is to use SAMPLE:

COLLECT STATS USING SAMPLE COLUMN (department_number) ON employee;

SAMPLE collects on a percentage of the data instead of all the data. The drawback is that the results may not be as accurate as a full collection, which in turn may affect the quality of plans chosen by the optimizer.
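As mentioned above, SUMMARY statistics can also be collected on their own. A minimal sketch, using the employee table from the earlier examples:

COLLECT SUMMARY STATISTICS ON employee;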
Some points to consider:
- Re-collection collects statistics using the same mode (full scan or sampled) as specified in the last collection.
- The system does not store both sampled and complete statistics for the same index/column set. Only one set of statistics is stored for a given index or column.
- Sampled statistics are more accurate for data that is uniformly distributed.
- Sampling is useful for very large tables with a reasonable distribution of data.
- Sampling should not be considered for highly skewed data, as the optimizer needs to be aware of such skew.
- Sampling is more appropriate for indexes than non-indexed column(s).
- Full statistics are collected for the PARTITION column.
- Without a specified percent, the system determines the appropriate sample size to generate accurate statistics. If the table is small and the system determines that the cost of collecting stats is small (under 0.5 sec of CPU time), full statistics are collected unless a specific PERCENT is specified.

To request a specific sample size of rows on which to collect statistics, use this syntax:

COLLECT STATISTICS USING SAMPLE 10 PERCENT COLUMN (department_number) ON employee;

To request that the system-sized sample be used (for example, after a collection specified at 10 percent you want to return to letting the system decide), use this syntax:

COLLECT STATISTICS USING SYSTEM SAMPLE COLUMN (department_number) ON employee;

To request that full statistics be collected (for example, after you have collected a sample but now you want to collect on the entire table), use this syntax:

COLLECT STATISTICS USING NO SAMPLE COLUMN (department_number) ON employee;

Beginning with Teradata Database 14.10, the sampling is intelligent. When you use USING SYSTEM SAMPLE, the database can automatically determine the appropriate sample percentage to benefit from sampled statistics, based on the statistics history. If appropriate, the system can even automatically downgrade a full statistics collection to a sampled statistics collection.

You can (and should) collect statistics on the system-derived PARTITION column for row-partitioned (PPI) or column-partitioned tables. The optimizer uses PARTITION statistics to estimate the cost of various operations including:
- Static partition elimination
- Dynamic partition elimination

The optimizer can use this information to better estimate the query cost when there are a significant number of empty partitions. This is a very fast collection, as only the cylinder indexes need to be analyzed.

When storing collected statistics, the default width of the values (e.g., MinValue, ModeValue, MaxValue) stored in the intervals is 16 bytes. Teradata Database never truncates numeric values for single-column statistics; however, sometimes character data can be truncated. This is especially apparent with multi-column statistics. You can override this limit with the MAXVALUELENGTH n option:

COLLECT STATISTICS USING MAXVALUELENGTH n COLUMN (o_customer, o_customerlocation) ON orders;

When is this helpful?
- If the first column in a multi-column stats collection is larger than 16 bytes. For example, if the first column in a multi-column stats collection is CHAR(20), then no values from the other columns will be recorded in the histogram intervals.
- If a large number of the values in a CHARACTER column start with a common string, for example 'State Department of', there would be no differentiation for these particular values in the histograms, because the differentiation comes after the 16-byte limit. This can result in the optimizer expecting skew where there is none.
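To inspect what is actually stored in the histogram intervals for a statistic (including the MinValue, ModeValue and MaxValue fields mentioned above), you can display the statistic's detail. A sketch using the orders example above:

SHOW STATISTICS VALUES COLUMN (o_customer, o_customerlocation) ON orders;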
The default number of intervals in a statistics histogram is 250, which means that each interval holds information about roughly 0.4% of the data. You can increase this number. The larger the number of intervals, the finer the granularity of the statistical data in the histogram.

A finer granularity enables better single-table and join selectivity estimates for non-uniform data. But be selective about when you use this feature! The larger the number of intervals, the larger the size of the histogram, which can increase query parsing time.

To change the number of intervals (you can choose any integer from 0 to 500), use the MAXINTERVALS n option:

COLLECT STATISTICS USING MAXINTERVALS n COLUMN (o_customer, o_customerlocation) ON orders;

If you want to return to using the system default, use SYSTEM MAXINTERVALS:

COLLECT STATISTICS USING SYSTEM MAXINTERVALS COLUMN (o_customer, o_customerlocation) ON orders;

With the Teradata Database 14.10 statistics enhancements, there is a THRESHOLD option. Using the THRESHOLD option, you can tell the database where your limits are for whether or not to skip a statistics collection. This can reduce the use of system resources and shorten batch windows.

This tells the system not to recollect if the statistics are younger than 7 days:

COLLECT STATISTICS USING THRESHOLD 7 DAYS COLUMN (department_number) ON employee;

This tells the system not to recollect if the statistics have not changed by more than 10 percent:

COLLECT STATISTICS USING THRESHOLD 10 PERCENT COLUMN (department_number) ON employee;

The percent and days options can be ANDed together. This tells the system not to recollect if the statistics are younger than 7 days and the statistics have not changed by more than 10 percent:

COLLECT STATISTICS USING THRESHOLD 10 PERCENT AND THRESHOLD 7 DAYS COLUMN (department_number) ON employee;

Unless you specify both a time threshold (in DAYS) and a percent threshold, the system will default its own value for the one you didn't specify. For example, in the first example above, the percent threshold defaults to the number specified at the system level (the default is 0, which means that threshold is disabled; your DBA will need to set this to a non-zero integer value to use it).

System Threshold
If you want to use the THRESHOLD option but simply default to system levels, you can use SYSTEM THRESHOLD. You can use the system level for either PERCENT or DAYS.

No Threshold
Specifying NO THRESHOLD does one of the following:
- Does not apply any thresholds to the collection of statistics.
- Removes the existing threshold before collecting the statistics.
You can specify NO THRESHOLD for either PERCENT or DAYS.

Beginning with Teradata Database 14.10, there are options to automate your statistics collection. The database can intelligently make statistics recommendations, like the following:
- when to COLLECT statistics
- when to skip a collection
- when to collect on an expression
- when a SAMPLE is sufficient

Limits On Collecting Statistics
- You can combine up to 64 columns in a joint non-index collection set.
- A maximum of 32 multi-column non-index sets per table can be collected and maintained.
- A maximum of 512 column sets/indexes can be collected implicitly, without any explicit column or index specified. Implicit collection is done without referencing any columns or indexes explicitly; it re-collects statistics for all columns or combinations of columns which currently have collected statistics.

When To Use SAMPLE
Sampled statistics should be considered when collecting statistics on very large tables, where resource consumption from the collection process is a performance concern. Sampled statistics should not be used on small tables, and sampling should not be used as a wholesale replacement for existing full-scan collections, because sampling can degrade the quality of the resulting statistics and the subsequent query plans chosen by the Optimizer.

Sampled statistics are more accurate for data that is uniformly distributed. Columns or indexes that are unique or nearly unique are uniformly distributed. Sampling should not be considered for data that is highly skewed, because the Optimizer needs to be fully aware of such skew.

Sampling is generally more appropriate for indexes than for non-indexed columns. For indexes, the scanning techniques employed during sampling can take advantage of the hashed organization of the data to improve the accuracy of the resulting statistics.
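The USING options described in this module can also be combined in a single request. The following is a rough sketch only (the exact rules for combining options may vary by release), reusing the employee example:

COLLECT STATISTICS
  USING SAMPLE 10 PERCENT
    AND THRESHOLD 7 DAYS
  COLUMN (department_number) ON employee;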
Lab
If you have not set up your lab server connection, click on the Lab Setup button at the bottom of the page to get instructions. You will need these instructions to log on to Teradata Database. If you experience problems connecting to the lab server, contact Training.Support@Teradata.com. For this set of lab questions you may need information from the Database Info document. Click on the Next button at the bottom of the page to see the answers.

1.) Create a copy of the Customer_Service.emp_phone_2000 table in your database, with data. Add statistics to the following columns and index, using a separate COLLECT STATISTICS command for each:
employee_number (index)
area_code, phone_number (multi-column)
Do a HELP STATISTICS on the table and note the values returned.

2.) Resubmit the same COLLECT STATISTICS statements, this time with the USING SYSTEM SAMPLE clause added. Do a HELP STATISTICS on the table and note the values returned. Are they the same or have they changed, and why?

3.) Drop all statistics on the emp_phone_2000 table using a single DROP STATISTICS command. Do a HELP STATISTICS on the table to confirm that the statistics are dropped. Resubmit the COLLECT STATISTICS statements again with the USING SYSTEM SAMPLE clause added. Do a HELP STATISTICS on the table and note the values returned. Are they the same or have they changed, and why?

Solution 1

CREATE TABLE Emp_Phone_2000 AS Customer_Service.Emp_Phone_2000 WITH DATA;

COLLECT STATISTICS
  INDEX (employee_number)
  COLUMN (area_code, phone_number)
ON Emp_Phone_2000;

OR

COLLECT STATISTICS INDEX (employee_number) ON Emp_Phone_2000;
COLLECT STATISTICS COLUMN (area_code) ON Emp_Phone_2000;
COLLECT STATISTICS COLUMN (area_code, phone_number) ON Emp_Phone_2000;

Solution 2

COLLECT STATISTICS USING SYSTEM SAMPLE
  INDEX (employee_number)
  COLUMN (area_code, phone_number)
ON Emp_Phone_2000;

Module 3: CREATE TABLE AS

Topics: Objectives, Copy WITH DATA, Special Cases, Lab, Solutions

Objectives
After completing this module, you should be able to:
- Recognize the benefits and limitations of copying a table.
- Realize the benefits of using this feature in conjunction with both the DATA and NO DATA options.
- Copy statistics from a source to a target table as part of a CREATE TABLE AS operation.

The CREATE TABLE AS statement creates a copy of an existing table, or a subset of existing source tables. It also copies the data values from the source table(s) to the target table if the statement includes the WITH DATA clause.

What is not copied?
- Referential integrity
- Triggers

CREATE TABLE AS ... WITH DATA AND STATISTICS
You can also copy statistics from the source table to the target table if the statement includes the AND STATISTICS clause. Statistics refers to the data demographics used by the Optimizer in query planning. Collecting statistics ensures that the Optimizer has the most accurate information in order to create the best access and join plans.

CREATE TABLE AS ... WITH NO DATA AND STATISTICS
CREATE TABLE AS can also copy zeroed statistics from the source table to the target table when the statement includes the WITH NO DATA clause along with the AND STATISTICS clause. This is done primarily to designate the appropriate columns for future statistics collection. Subsequently, a single COLLECT STATS command on the table will refresh the stats for all pre-designated collectible columns.
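Sketches of the basic forms discussed above (New_Table and Existing_Table are placeholder names):

CREATE TABLE New_Table AS Existing_Table WITH DATA;
CREATE TABLE New_Table AS Existing_Table WITH DATA AND STATISTICS;
CREATE TABLE New_Table AS Existing_Table WITH NO DATA AND STATISTICS;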
Why is copying statistics valuable?
Copying statistics as well as data reduces the time and effort needed to make a new table query-ready. Since collecting statistics requires a complete scan of the table, copying statistics from a source table can substantially save time and resources.

You can copy table structure, table data, and table statistics. You can also copy table structure, table data, and column statistics using the subquery variant of the CREATE TABLE AS command:

CREATE TABLE New_Table AS (SELECT * FROM Existing_Table) WITH DATA AND STATISTICS;

When using the subquery variant, it is important to adhere to the requirement that the cardinality (number of rows) of the source and target tables must be identical, since a subquery can alter the number of rows copied. If the cardinality rule is violated, statistics cannot be copied.

The syntax for the STATISTICS portion is below. Note that you can use 'STATISTICS', 'STATS', or even simply 'STAT':

AND [NO] {STATISTICS | STATS | STAT}

When copying zeroed statistics, you have similar options: you can copy table structure, no table data, and zero-valued table statistics, or do the same using the subquery variant of the CREATE TABLE AS command.

There are some requirements for the AND STATISTICS feature to be used correctly. When does the copy not work?

CREATE TABLE WITH DATA AND STATS

Case 1: MULTISET Source and SET Target
Statistics will not be copied when the following three situations are true:
- The source table is a multiset table.
- The target table is a set table.
- There are no unique columns in the source table.

Why not? In this situation it is possible that the source table (MULTISET, no unique constraints) has duplicate rows. But since the target table is a SET table, duplicate rows will not be inserted. This means that the statistics would not be valid, so Teradata Database does not copy them.

Case 2: SET Target and mismatched CASESPECIFIC column properties
Statistics will not be copied when the following two situations are true:
- The target table is a set table.
- NOT CASESPECIFIC is specified for any column in the target table but is not specified in the source table.

Why not? In this situation, a possibility exists that a row in the source table is duplicated across all columns with the exception of a single column with the CASESPECIFIC attribute. In this case, the same value will appear as two different values based on the case of each entry. When copied to a NOT CASESPECIFIC column, an attempt will be made to copy these rows to the target table as two duplicate rows. Since only one of the duplicates can be copied to a set table, the cardinality of the table will change, thereby invalidating the statistical profile.

Case 3: Mismatched UPPERCASE column in Target
Statistics will not be copied when a column has the UPPERCASE attribute in the target table but not in the source table.

This is a similar situation to Case 2. A possibility exists that a row in the source table is duplicated across all columns with the exception of a single column. In this case, the same value will appear as two different values based on the case of each entry. When copied to an UPPERCASE column, Teradata Database will convert that column's value to uppercase letters and attempt to insert these rows into the target table as two duplicate rows. Since only one of the duplicates can be copied to a set table, the cardinality of the table will change, thereby invalidating the statistical profile.
When does the copy not work?

CREATE TABLE WITH NO DATA AND STATS
In most cases, creating a table with no data but with statistics results in zero-valued statistics being copied for the collectable columns. The restrictions which result from using the WITH DATA option do not apply, since having zero-valued statistics does not conflict with any of the data in the target table. The current timestamp is recorded as the last time the stats were updated.

Lab
If you have not set up your lab server connection, click on the Lab Setup button at the bottom of the page to get instructions. You will need these instructions to log on to Teradata Database. If you experience problems connecting to the lab server, contact Training.Support@Teradata.com. For this set of lab questions you may need information from the Database Info document. Click on the Next button at the bottom of the page to see the answers.

1.) Using the following instructions, create a copy of the emp_phone_2000 table currently in your database from the last lab:
- Use the CREATE TABLE AS statement and the WITH DATA AND STATISTICS option in creating the table.
- Call the new table emp_phone_new.
- If you do not currently have the emp_phone_2000 table in your database, first do Lab #1 of the previous module.
Do a HELP STATISTICS on the table and note the values returned. Then, DROP emp_phone_new.

2.) Create the emp_phone_new table again with the same SQL statement, but modify it to copy NO DATA. Do a HELP STATISTICS on the table and note the message returned. Load the emp_phone_new table with INSERT SELECT using the data from the original emp_phone_2000 table. Submit a COLLECT STATISTICS ON emp_phone_new and do a HELP STATISTICS on the table. Are there statistics collected on the table? Why or why not? Now, DROP emp_phone_new.

3.) Using the following instructions, create a copy of the employee table currently in your database from the prior exercises.

Solution 1

CREATE TABLE emp_phone_new AS emp_phone_2000 WITH DATA AND STATISTICS;
HELP STATISTICS emp_phone_new;

Solution 2

HELP STATISTICS emp_phone_new;

Module 4: Error Tables

Topics: Objectives, Complex Error Handling, LOGGING ERRORS Clause, LOGGING ERRORS Syntax, What is Logged?, Administration, Error Table Usage, Limitations and Considerations, Lab, Solutions

Objectives
After completing this module, you should be able to:
- Recognize the benefits and limitations of using error tables in a load operation.
- Utilize error tables as part of a recovery operation.
- Create, drop and view error tables.

Used With INSERT SELECT or MERGE INTO
Teradata Database provides complex error handling capabilities in conjunction with the MERGE INTO and INSERT ... SELECT statements. This is accomplished through the use of SQL-based error tables. Errors arising from these bulk insert operations are logged to an error table while the INSERT ... SELECT or MERGE INTO continues to run instead of aborting.

Compared with FastLoad and MultiLoad
Error tables increase the flexibility in creating load strategies by allowing the use of SQL for batch updates which contain errors. This also provides error reporting similar to the application load utilities (FastLoad, MultiLoad) while overcoming their inherent restrictions on having Unique Secondary Indexes (USIs), Join or Hash Indexes, Referential Indexes (RIs) and triggers on the target tables.

Compared with TPump
TPump also does SQL-based loads and therefore does not carry these restrictions, but it can serve a different purpose. For example, when logged-on sessions are near capacity, INSERT ... SELECT or MERGE INTO may be preferable to TPump since they only require a single session. Conversely, when the order of updates matters, TPump would be the appropriate choice.
Unlike with the standard load utilities, error tables must be manually created and dropped.

The LOGGING ERRORS clause may be added to MERGE INTO or INSERT ... SELECT bulk SQL statements.

Example
Here is an example of an INSERT SELECT operation using the error table feature:

INSERT INTO tgt
SELECT * FROM src
LOGGING ERRORS WITH LIMIT OF 100;

Error rows are logged a row at a time, just like with the client load utilities. Unlike the client load utilities, error row insertions are not transient journaled, since they are not rolled back when a request is aborted. Performance is typically two to six times faster than the logging of errors by the load utilities; however, it may still take considerable time to log thousands of errors.

Normally, when the ERRORS clause is specified for an INSERT SELECT or MERGE INTO, the request will be committed after logging the errors. Exceptions to this occur with any of the following:
- The error limit is reached.
- An error that cannot be handled is detected.
- Referential Integrity (RI) violation.
- Unique Secondary Index (USI) violation.
In these cases, the request will be aborted and rolled back, and the logged error rows currently in the table are preserved.

The full syntax for the LOGGING clause is as follows:

LOGGING [ALL] ERRORS [error limit option]

Where error limit option is:

{WITH NO LIMIT | WITH LIMIT OF n}

ALL - the keyword is optional and has no effect on functionality.
WITH LIMIT OF - the clause may be specified to impose a limit on the number of errors that are allowed to be inserted in the error table before the transaction aborts. The error limit value is specified as a range between 1 and 16,000,000. A default limit of 10 is used if the clause is not specified. If no error limit is desired, a WITH NO LIMIT clause may be specified.

In the example on the previous page, the transaction will abort after 100 errors are detected and logged:

INSERT INTO tgt
SELECT * FROM src
LOGGING ERRORS WITH LIMIT OF 100;

The following typical errors are logged when a LOGGING ERRORS clause is added to an INSERT SELECT or MERGE INTO statement:

2700 - Referential constraint violation: invalid foreign key value.
2728 - Referential constraint violation: illegal parent key update.
2801 - Duplicate unique prime key error.
2802 - Duplicate rows arising from an INSERT-SELECT into a SET table are logged in ANSI mode and ignored in Teradata mode.
2803 - Secondary index uniqueness violation.
3604 - NOT NULL violation. This error will be logged when the execution plan does not spool the source rows prior to the row merge operation.
5317 - Check constraint violation.
7887 - Same target row being updated by multiple source rows.

Handling of RI and USI Violations
RI or USI violations cause the request to be aborted and rolled back. However, the violations are still logged for this type of abort.

Errors that do not allow useful recovery information are not logged in the error table. These errors typically occur during processing of the input data before the row merge process begins. They cause the request to abort and roll back before any supported error conditions can be logged. Examples of such errors are:
- User Defined Type (UDT), User Defined Function (UDF), and Table Function errors.
- Identity column errors.
- Sanity check errors, such as a version change or a non-existent table.
- Down-AMP request against a non-fallback table.
- Errors arising from building spool rows.
- Out of permanent or spool space errors.
- Join Index maintenance errors.

An error table must first be created before specifying LOGGING ERRORS for a request:

CREATE ERROR TABLE [[<database>.]<error table>] FOR [<database>.]<data table>;
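For example, to create the error table for the placeholder tgt table used in the load above, giving it the ET_ name used later in this module:

CREATE ERROR TABLE ET_tgt FOR tgt;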
Error tables have the following properties:
- In addition to the target table contents, the error table contains 13 additional error-related columns.
- Access rights required for the CREATE ERROR TABLE statement are the same as for the CREATE TABLE statement.
- It has the same PI as the data table, minus the uniqueness attribute if it is specified; i.e., a UPI on the data table is converted to a NUPI on the error table. Partitioning, if any, of the PI for the data table is not carried over to the error table.
- Only the data types of the data table columns are copied over - column attributes are not.
- Secondary indexes defined on the base table are not copied over. Secondary indexes may be added later to facilitate analysis of data captured in the error table, but they cannot exist prior to running a LOGGING ERRORS request.
- An error table is a multiset table, to avoid the expense of duplicate row checks.
- An error table has the same fallback properties as the data table.
- Other properties that apply to table creation, such as journaling, checksum, data block size, etc., are set to the usual defaults applied to CREATE TABLE statements.

Example
Create the target table first. Then create the referencing error table.

An error table consists of all the columns from the data row, followed by thirteen fixed columns, each prefixed with 'ETC_' (for Error Table Column) to distinguish it from data table columns. These include the following Error Table Columns:

ETC_DBQL_QID: The Database Query Logging (DBQL) query ID. The system sets up a time-based query ID regardless of whether DBQL is enabled, and increments it for each new request. The query ID is used to uniquely identify all error rows for a particular request.

ETC_DMLType: Denotes the type of request. The value is 'I', 'U', or 'D' for Insert, Update, or Delete. 'U' indicates a MERGE INTO update error; 'I' denotes either an INSERT SELECT error or a MERGE INTO insert error. The 'D' value will be used in a future release.

ETC_ErrorCode: The DBC error code.

ETC_TableId: Identifies the target table for the load.

ETC_FieldId: Stores the ID of the column that caused an error condition, as found in DBC.TVFields.

ETC_RITableId: Identifies the other table involved in an RI violation. For child insert errors it identifies the referenced parent table, and for parent-update errors, the referencing child table.

ETC_RIFieldId: Identifies the field in the table associated with an RI violation. For a parent table, the field would be the missing UPI or USI key value referenced by an inserted child row. For a child table, it would be the foreign key field that referenced the UPI or USI key in a deleted parent row.

ETC_ErrSeq: The error sequence number; it starts with a value of 1 for the first error of a given request and increments by 1 for each subsequent error.

ETC_IndexNumber: Contains the index ID that caused a USI or RI violation; null otherwise.

An error table must first be dropped before its data table can be dropped. This restriction prevents orphaned error tables from being left behind in the system. To drop an error table, use either of the statements shown in the sketch below.

If a database containing an error table is deleted via DELETE DATABASE, the error table is dropped regardless of where its data table resides. A database containing a data table with an error table residing in another database cannot be dropped.
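The two drop forms would look roughly like this (a sketch reusing the tgt and ET_tgt placeholder names from the earlier example):

DROP ERROR TABLE FOR tgt;
DROP TABLE ET_tgt;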
Access rights required for DROP ERROR TABLE statements are the same as those for DROP TABLE statements.

Error table structure and column information may be displayed with SHOW and HELP requests, just as for any other table.

Information on data tables and their error tables may be retrieved by querying one of three system views:
- DBC.ErrorTblsV (all data and error tables)
- DBC.ErrorTblsVX (data and error tables accessible to the requesting user)
- DBC.Tables3VX (use this to look up the database, table, and column names associated with the table IDs and column IDs stored in error table rows)

Error row retrieval is simpler if deletion of obsolete error table rows is done before each new load; all rows in the error table would then be relevant to the new load. If several batch loads need to be done before error rows can be examined, error rows for a particular load will then have to be isolated from those related to other loads. There are three ways to identify and retrieve error rows for a particular load. In order of preference, they are:
Method #1 - WHERE conditions using the query ID in the response
Method #2 - Query ID extracted from DBQL
Method #3 - Timestamp in the WHERE clause

Method #1 (WHERE conditions using the query ID in the response)
Retrieve the error rows from the error table using the query ID in the WHERE condition. The query ID is included in the warning or error response to the bulk SQL request when errors are logged. It is also output as part of the WARNING message from the SQL load operation.
1. Perform load operation A.
2. Retrieve errors for A.
3. Perform load operation B, for example:
   MERGE INTO tgt USING src s ON tgt.c1 = s.c1 ...
4. Retrieve errors for B.

Method #2 (Query ID extracted)
If the query ID is not saved or captured, it may be extracted from DBC.DBQLogTbl if DBQL is enabled:

SELECT querytext, starttime, QueryID (FORMAT '-Z(17)9')
FROM DBC.DBQLogTbl;

The QueryID returned for the load request can then be used in the WHERE clause against the error table.

Method #3 (Timestamp in the WHERE clause)
If the query ID is not available because the query output is not saved and DBQL is disabled, then the ETC_TimeStamp value in the error table may be used to associate error rows with the approximate times of the different loads.

If there are logged errors and the request is aborted, make the necessary changes before resubmitting the load. This may involve deleting or updating some rows in the staging table, or inserting or deleting rows in parent or child tables in the case of RI violations.

If there are logged errors and the request is committed, examine the logged errors and determine if any rows need to be corrected and reloaded. Correction and reload may be done with an alternate table cloned from the error table and with removal of the ETC_ columns.

Example

CREATE TABLE src2 AS
  (SELECT * FROM ET_tgt WHERE etc_dbql_qid = 123456789012345678)
WITH DATA;

After making the necessary corrections, drop all the ETC_ columns in the alternate table and re-load the corrected data from the alternate table:

ALTER TABLE src2 DROP ETC_DBQL_QID, ... ;

The following limitations for error tables apply:
- Only INSERT SELECT and ANSI MERGE INTO requests are supported. The earlier V2R5-style MERGE INTO is not supported.
- You cannot alter the target table's data structure once an error table is created.
- The target table specified in the bulk SQL request with a LOGGING ERRORS clause must be a permanent table. It may be a queue table, but it cannot be a view.
- LOGGING ERRORS allows LOB columns in the target table. However, LOBs in the source table will not be copied over to the error table; instead, the corresponding LOB columns in the error table will have null values.
To maintain compatibility between data tables and their error tables, the following operations are not allowed:
- An ALTER TABLE request on an error table.
- An ALTER TABLE request that adds or drops a data table column.
- An ALTER TABLE request that changes the primary index (PI) columns of a data table. Altering the partitioning of a data table's partitioned primary index (PPI) is allowed.
- An ALTER TABLE request that changes the fallback protection of a data table.

Triggers and join or hash indexes cannot be defined on error tables.
Complex error handling may not be used in multi-statement requests.

Lab
If you have not set up your lab server connection, click on the Lab Setup button at the bottom of the page to get instructions. You will need these instructions to log on to Teradata Database. If you experience problems connecting to the lab server, contact Training.Support@Teradata.com. For this set of lab questions you may need information from the Database Info document. Click on the Next button at the bottom of the page to see the answers.

1.) Create a populated copy of the employee table from the Customer_Service database, and also its associated error table.

2.) Now attempt to load the same employee rows a second time using INSERT SELECT from the Customer_Service.employee table. Select a count of rows in the ET_employee table. Can you explain the count?

3.) To make the second set of rows unique, update all salary amounts in the employee table to zero. Attempt to load the employee rows again as before. What happens? Change the INSERT SELECT to fix the problem and resubmit. If another failure occurs, you can use the error table to find the cause. Select a random row from the error table - use employee 1002. What is the reported error code for that row? Use the error table (and the DBC views described earlier) to find out what caused the error.

Solution 1
N/A

Solution 2
No rows were inserted into the employee table (or the ET_employee table) due to duplicate row violations. These violations are only logged in the error table when using ANSI session mode.

Module 5: ANSI Merge

Topics: Objectives, ANSI MERGE Overview, Example - MERGE as UPSERT, Example - MERGE as INSERT, MERGE as UPDATE, ON Clause Restrictions, MERGE vs. MultiLoad, Summary, Lab, Solutions

Objectives
After completing this module, you should be able to:
- Recognize the benefits and limitations of using MERGE in a load operation.
- Compare and contrast the advantages of using MERGE instead of MultiLoad.

Merge is ANSI-standard SQL syntax that can perform bulk operations on tables using an extract, load and transform (ELT) approach. These operations merge data from a source table into a target table, performing massive inserts, updates and upserts. A MERGE statement merges a source row set into a primary-indexed target table based on whether any target rows satisfy a specified matching condition with the source row.

Why use the MERGE function instead of UPDATE ELSE INSERT, an UPDATE join, or an INSERT-SELECT? Because the MERGE is often more performant: it does the UPSERT with one pass of the data instead of two passes, and it handles indexes smoothly.

Why use the MERGE function instead of MultiLoad? MultiLoad (or TPT UPDATE) can also do an UPSERT with one pass of the data. But whereas the MultiLoad protocol does not accept USIs, RI, triggers or join indexes, MERGE is regular SQL and therefore does not have those restrictions.

ANSI MERGE can be used in place of the following operations:
- Bulk UPDATE/INSERT (UPSERT) operations.
- Update operations on a USI.
- Update operations on a foreign key column.
- Conditional INSERT SELECT operations.
- Conventional join updates.

How does MERGE Work?
If the source and target rows satisfy the matching condition, then the merge operation updates or deletes, based on the WHEN MATCHED THEN UPDATE or WHEN MATCHED THEN DELETE clause. If the source and target rows do not satisfy the matching condition, then the merge operation inserts, based on the WHEN NOT MATCHED THEN INSERT clause.

At its most basic level, the MERGE syntax is:

MERGE INTO target_table
USING source_table
ON a1 = b1
WHEN MATCHED THEN UPDATE SET a2 = b2
WHEN NOT MATCHED THEN INSERT VALUES (b1, b2);

Things to Note:
- INTO is optional.
- The source table can also be a subquery.
- UPDATE SET can be abbreviated UPD SET.
- a1 must be the primary index of the target table (and if the target_table is PPI, then the partitioning columns must be involved in the matching condition).
- b2 can be an expression.

An UPSERT is the classic use of ANSI Merge.

Example

MERGE INTO Employee
USING New_Employee newemp
ON newemp.Employee_Number = Employee.Employee_Number
WHEN MATCHED THEN UPDATE
  SET Employee_Name = newemp.Employee_Name
WHEN NOT MATCHED THEN INSERT
  VALUES (newemp.Employee_Number, newemp.Employee_Name, newemp.Salary_Amount, newemp.Department_Number);

Things to Note:
- The VALUES clause implies that the structure of the source table (New_Employee) does not need to match the structure of the target table (Employee).
- The ON condition must reference the Primary Index of the target table.
- The Primary Index value in the insert must match the ON value.
- The WHEN MATCHED and WHEN NOT MATCHED clauses imply an UPSERT is to be performed.

You can also do an INSERT using Merge.

Example
The following merge statement inserts rows from a source table into a target table. No PI matches are expected:

... WHEN NOT MATCHED THEN INSERT
    VALUES (Orders3.order_number, Orders3.invoice_number, ... );

Only insert rows are expected, and this may also be accomplished with an INSERT ... SELECT. How do you choose? Performance. If a USI exists on the target table, the merge example will outperform the INSERT ... SELECT. However, if a USI does not exist on the target table, then the INSERT ... SELECT could be slightly faster.

In yet another form of merge, the syntax for the update-only merge is just a variation of the insert-only syntax. In the following example, the WHEN NOT MATCHED THEN INSERT clause is removed and only the WHEN MATCHED THEN UPDATE clause is referenced.

Example

MERGE INTO Employee
USING New_Employee NewEmp
ON NewEmp.Employee_Number = Employee.Employee_Number
WHEN MATCHED THEN UPDATE
  SET Salary_Amount = NewEmp.Salary_Amount;

In this merge statement, note that:
- The merge update can perform significantly better than its equivalent SQL update. This performance advantage grows even greater when USI maintenance is involved.
- Whereas duplicate rows are silently discarded for any insert-select into a SET table, they are not discarded for bulk updates.
- The LOGGING ERRORS clause is optional, but recommended. You will need to create an error table before executing the merge. Without the error table reference, any failures would abort the merge and all the updates would be rolled back. But with the error table enabled, qualifying failures would be logged and the merge would continue.
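A rough sketch of the update-only merge above with error logging enabled (the ET_Employee error table name and the WITH NO LIMIT choice are assumptions for illustration):

CREATE ERROR TABLE ET_Employee FOR Employee;

MERGE INTO Employee
USING New_Employee NewEmp
ON NewEmp.Employee_Number = Employee.Employee_Number
WHEN MATCHED THEN UPDATE
  SET Salary_Amount = NewEmp.Salary_Amount
LOGGING ERRORS WITH NO LIMIT;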
The ON clause always needs to have an equality constraint on the primary index column, but additional conditions do not have this restriction. This rule also applies to partitioning columns.

Example
A MERGE request whose primary ON clause condition is an inequality (for example, a1 > b1) is not valid. A request that uses the same inequality as a secondary condition is still valid, because any operator is valid for a secondary condition.

The primary index condition in the ON clause should not have any expression on the target table primary index. This rule also applies to partitioning columns.

Example
Consider the following target and source table definitions:

CREATE TABLE t1 (a1 INTEGER, b1 INTEGER, c1 INTEGER)
PRIMARY INDEX (a1)
PARTITION BY c1;

CREATE TABLE t2 (a2 INTEGER, b2 INTEGER, c2 INTEGER);

A MERGE request is not valid if the primary condition in its ON clause specifies expressions such as a1 + 10 and c1 * b1 on the primary index a1 and the partitioning column c1 of target table t1, respectively. However, if the primary index, or the partitioning column set, or both are specified in a secondary condition, this restriction does not apply.

The primary index condition should always be conjunctive (using AND) with the additional conditions in the ON clause. The additional condition terms can be disjunctive (using OR) amongst themselves.

The primary index expression used in the INSERT clause should always match the ON clause primary index condition. This rule applies when the MERGE statement is an UPDATE/INSERT or INSERT only. This rule also applies to partitioning columns.

Example
The following INSERT clause is valid when the ON clause specifies a1 = b2 + 1, because b2 + 1 matches the ON clause specification:

INSERT VALUES (b2 + 1, b2, c2);

A MERGE statement whose INSERT clause does not match the ON clause specification is invalid.

If non-deterministic functions are used in the primary index condition and the MERGE statement is an UPDATE/INSERT or INSERT only, the statement results in a parser error.

Example
The following MERGE fragment is invalid:

ON a1 = Non_Deterministic_UDF(b2)
...
INSERT VALUES (Non_Deterministic_UDF(b2), b2, c2);

The statement results in a parser error. Also, if the user-defined function is marked as deterministic even though it is non-deterministic in nature, the statement may cause an AMP internal error during execution and the request would be aborted.

In many cases, the MERGE statement can be considered a strong alternative to MultiLoad for data loading. The most significant differences between MERGE and MultiLoad are:

- Use of Transient Journal - MultiLoad's APPLY phase does not do Transient Journal processing, but all ANSI MERGE requests do process a Transient Journal. The Transient Journal records are written every time a modified data block containing updated rows and/or newly inserted rows is written to disk. Thus, the overhead of transient journal processing increases as a function of the number of data blocks modified.

- No Restartability - MultiLoad is restartable, while an ANSI MERGE request is not. To be able to restart, MultiLoad must write a checkpoint row for every data block that is committed, in order to keep track of the last row that was updated or inserted. Such checkpointing is not required for ANSI MERGE requests, because an erring request will be aborted and rolled back (assuming you do not specify any error handling).

- No Fallback Requirement - A significant benefit of ANSI MERGE requests over MultiLoad is that MERGE source tables need not be defined as fallback when merging rows into a target table defined with fallback. In MultiLoad, the system turns the staging table into fallback if the target table is defined with fallback. This causes an extra write to disk for MultiLoad to construct the fallback table. For an ANSI MERGE load, you can use FastLoad to directly load the staging table, which need not be a fallback table. This avoids the extra write step needed to prepare the staging table as a fallback table.
- Indexes - With MultiLoad, the target table must not have any join indexes, referential integrity, or triggers. A MERGE request's target table can have any of these.

Summary
- ANSI MERGE provides capability for bulk processing and loads.
- ANSI MERGE allows a true bulk upsert operation with a standard SQL query.
- The capabilities of ANSI MERGE make it an excellent alternative to MultiLoad in some cases.
- Unlike MultiLoad, ANSI MERGE supports target tables with triggers, Unique Secondary Indexes, Join Indexes, Hash Indexes, and Referential Integrity.
- For tables with secondary indexes, ANSI MERGE may be faster than an INSERT ... SELECT.

Lab
If you have not set up your lab server connection, click on the Lab Setup button at the bottom of the page to get instructions. You will need these instructions to log on to Teradata Database. If you experience problems connecting to the lab server, contact Training.Support@Teradata.com. For this set of lab questions you may need information from the Database Info document. Click on the Next button at the bottom of the page to see the answers.

1.) Create a populated copy of the department table from the Customer_Service database.

2.) Create and populate a staging table that will be used to update the department table:

CREATE TABLE stage_department
  (department_number       SMALLINT
  ,department_name         CHAR(30) NOT NULL
  ,budget_amount           DECIMAL(10,2)
  ,manager_employee_number INTEGER)
PRIMARY INDEX (department_number);

INSERT INTO stage_department VALUES (402, 'CUSTOMER SUPPORT', ... );
INSERT INTO stage_department VALUES (201, 'TECHNICAL OPERATIONS', ... );
INSERT INTO stage_department VALUES (501, 'RESEARCH AND DEVELOPMENT', ... );
INSERT INTO stage_department VALUES (602, 'INSTRUMENTATION', 226000, 1016);
INSERT INTO stage_department VALUES (603, 'MAINTENANCE', 932000, 1005);
INSERT INTO stage_department VALUES (604, 'PUBLIC RELATIONS', 300000, 1012);

Solution 1
N/A

Solution 2
N/A

Solution 3

MERGE INTO department
USING stage_department s
...
WHEN NOT MATCHED THEN INSERT
  VALUES (s.department_number, s.department_name, s.budget_amount, s.manager_employee_number);
