QUERY PROCESSING and OPTIMIZATION

How are queries processed in SQL?

SQL is a nonprocedural language: we specify what we need without specifying how to get it. With high-level database query languages such as SQL and QUEL, a special component of the DBMS called the query processor takes care of arranging the underlying access routines to satisfy a given query.

[Figure: Simplified database system environment: users/programmers submit application programs and queries to the DBMS software; its query-processing software and data-access software operate on the stored database definition (metadata) and the database itself.]

Agenda
I.   Query Processing and Optimization: Why?
II.  Steps of Processing
III. Methods of Optimization
     - Heuristic (Logical Transformations): Transformation Rules, Heuristic Optimization Guidelines
     - Cost Based (Physical Execution Costs): Data Storage/Access Refresher, Catalog & Costs
IV.  What All This Means To YOU?

How are queries processed? A query is processed in four general steps:
1. Scanning and Parsing
2. Query Optimization, or planning the execution strategy
3. Query Code Generation (interpreted or compiled)
4. Execution in the runtime database processor

[Figure: Relational query processing pipeline: Query → Scanning/Parsing/Validating → intermediate form of query (query tree) → Query Optimizer (consulting the Catalog) → Execution Plan → Query Code Generator → compiled/executable query code → Execution in the Runtime Database Processor.]

1. Query Recognition: Scanning is the process of identifying the tokens in the query; token examples are SQL keywords, table names, and attribute names. The tokenized representation is suitable for processing by the parser. The parser checks the tokenized representation for correct syntax according to the rules of the language grammar. This representation may be in a tree form.

Query Recognition  Validating. the output (intermediate form of query) is called the Canonical Query Tree. checks are made to determine if columns and tables identified in the query exist in the database.  .1. If the query passes the recognition checks.


2. Query Optimization: The goal of the query optimizer is to find an efficient strategy for executing the query using the access routines. Optimization typically takes one of two forms: heuristic optimization or cost-based optimization.

2. Query Optimization
 

For any given query, there may be a number of different ways to execute it. Each operation in the query (SELECT, JOIN, etc.) can be implemented using one or more different access routines. For example, an access routine that employs an index to retrieve some rows would be more efficient than one that performs a full table scan. The query optimizer chooses among these alternatives to determine the execution plan.


3. Query Code Generator

Once the query optimizer has determined the execution plan (the specific ordering of access routines), the code generator writes out the actual access routines to be executed. With an interactive session, the query code is interpreted and passed directly to the runtime database processor for execution. It is also possible to compile the access routines and store them for later execution.

Access routines are algorithms that are used to access and aggregate data in a database. An RDBMS may have a collection of general access routines that can be combined to implement a query execution plan. We are interested in access routines for selection, projection, join, and set operations such as union, intersection, set difference, Cartesian product, etc.


4. Execution in the Runtime Database Processor: At this point, the query has been scanned, parsed, planned, and (possibly) compiled. The runtime database processor then executes the access routines against the database. The results are returned to the application that made the query in the first place. Any runtime errors are also returned.

Query Processing & Optimization
What is Query Processing? The steps required to transform a high-level SQL query into a correct and "efficient" strategy for execution and retrieval.
What is Query Optimization? The activity of choosing a single "efficient" execution strategy (from hundreds) as determined by database catalog statistics.

Example: R(A, B, C), S(C, D, E)

SELECT B, D
FROM   R, S
WHERE  R.A = 'c' AND R.C = S.C AND S.E = 2

R:  A  B  C        S:  C   D  E        Answer:  B  D
    a  1  10           10  x   2                2  x
    b  1  20           20  y   2
    c  2  10           30  z   2
    d  2  35           40  x   1
    e  3  45           50  y   3

But this is your intelligent way of finding the answer; how does the system execute the query?

How to execute the query? Basic idea:
- Do the Cartesian product R x S.
- Select tuples.
- Do the projection.

R x S:  R.A  R.B  R.C  S.C  S.D  S.E
        a    1    10   10   x    2
        a    1    10   20   y    2
        ...
        c    2    10   10   x    2     <- Got one!
        ...

Problem: A Cartesian product R x S may be LARGE:
- need to create and examine n x m tuples, where n = |R| and m = |S|
- for example, n = m = 1000 => 10^6 records
=> need more efficient evaluation methods
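For reference, here is that naive strategy spelled out as a minimal Python sketch over the example relations above (purely illustrative; the variable names are made up):

    R = [("a", 1, 10), ("b", 1, 20), ("c", 2, 10), ("d", 2, 35), ("e", 3, 45)]   # (A, B, C)
    S = [(10, "x", 2), (20, "y", 2), (30, "z", 2), (40, "x", 1), (50, "y", 3)]   # (C, D, E)

    product = [(r, s) for r in R for s in S]                        # R x S: |R| * |S| pairs
    selected = [(r, s) for (r, s) in product
                if r[0] == "c" and r[2] == s[0] and s[2] == 2]      # apply the WHERE predicate
    answer = [(r[1], s[1]) for (r, s) in selected]                  # project on B and D

    print(answer)                                                   # [(2, 'x')]

The intermediate list materializes all n x m pairs, which is exactly the blow-up the following slides try to avoid.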

Relational Algebra: used to describe logical plans. Ex: the original (canonical) logical query plan for the example query maps the SQL clauses as follows:

    SELECT B, D   ->  Π B,D
    FROM R, S     ->  R x S
    WHERE ...     ->  σ

i.e.  Π B,D [ σ R.A='c' ∧ S.E=2 ∧ R.C=S.C (R x S) ]

Improved logical query plan (Plan II):

Π B,D [ σ R.A='c' (R)  ⋈  σ S.E=2 (S) ]        (natural join on C)

Evaluating Plan II on the example data:

σ A='c' (R) = { (c, 2, 10) }
σ E=2 (S)  = { (10, x, 2), (20, y, 2), (30, z, 2) }

Joining these on C and applying Π B,D gives the answer (2, x).

Physical Query Plan: a detailed description of how to execute the query: the order of execution steps, how relations are accessed, and the algorithms used to implement the operations. For example:
(1) Use the R.A index to select tuples of R with R.A = 'c'.
(2) For each R.C value found, use the index on S.C to find matching S tuples.
(3) Eliminate S tuples with S.E ≠ 2.
(4) Join the matching R and S tuples, project on attributes B and D, and place the result in the output.

[Figure: Executing the physical plan: index I1 on R.A locates the R tuple <c, 2, 10>; index I2 on S.C locates the matching S tuple <10, x, 2>; the check E = 2 succeeds, so <2, x> is output; the scan then moves on to the next qualifying R tuple.]
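A small Python sketch of that plan, with dictionaries standing in for the indexes I1 and I2 (an illustrative model, not a real storage engine):

    from collections import defaultdict

    R = [("a", 1, 10), ("b", 1, 20), ("c", 2, 10), ("d", 2, 35), ("e", 3, 45)]   # (A, B, C)
    S = [(10, "x", 2), (20, "y", 2), (30, "z", 2), (40, "x", 1), (50, "y", 3)]   # (C, D, E)

    I1 = defaultdict(list)          # index on R.A
    for r in R:
        I1[r[0]].append(r)
    I2 = defaultdict(list)          # index on S.C
    for s in S:
        I2[s[0]].append(s)

    result = []
    for r in I1["c"]:                        # (1) R.A index lookup: R.A = 'c'
        for s in I2.get(r[2], []):           # (2) S.C index lookup for each R.C value found
            if s[2] == 2:                    # (3) keep only S.E = 2
                result.append((r[1], s[1]))  # (4) project on B and D
    print(result)                            # [(2, 'x')]

Only the tuples reachable through the indexes are touched, instead of all |R| * |S| combinations.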

Physical operators: the principal methods for executing the operations of relational algebra; the building blocks of physical query plans. Major strategies: scanning tables, indexing, hashing, sorting.

Questions for Query Optimization
- Which relational algebra expression, equivalent to the given query, will lead to the most efficient solution plan?
- For each algebraic operator, what algorithm (of several available) do we use to compute that operator?
- How do operations pass data (main memory buffer, disk buffer, …)?
- Will this plan minimize resource usage (CPU / response time / disk)?

Overview of Query Execution:
SQL query → parse → parse tree → convert → logical query plan → apply laws → "improved" l.q.p. → estimate result sizes (using statistics) → l.q.p. + sizes → consider physical plans → {P1, P2, …} → estimate costs → {(P1, C1), (P2, C2), …} → pick best → Pi → execute Pi → answer

Processing Steps

Three Major Steps of Processing
(1) Query Decomposition
    - Analysis
    - Derive Relational Algebra Tree
    - Normalization
(2) Query Optimization
    - Heuristic: improve and refine the relational algebra tree to create equivalent Logical Query Plans
    - Cost Based: use database statistics to estimate the physical costs of the logical operators in the LQP to create Physical Execution Plans
(3) Query Execution

Query Decomposition: ANALYSIS
- Lexical: Is it even valid SQL?
- Syntactic: Do the relations/attributes exist and are the operations valid?
- Result is an internal tree representation of the SQL query (parse tree), e.g. a <Query> node with children for the SELECT <select_list> (attributes or *), the FROM <from_list>, …

Query Decomposition (cont…)
RELATIONAL ALGEBRA TREE
- Root: the desired result of the query
- Leaf: base relations of the query
- Non-Leaf: intermediate relation created from a relational algebra operation
NORMALIZATION
- Convert the WHERE clause into a more easily manipulated form:
  - Conjunctive Normal Form (CNF): (a ∨ b) ∧ [(c ∨ d) ∧ e] ∧ f  (more efficient)
  - Disjunctive Normal Form (DNF): (… ∧ …) ∨ (… ∧ …) ∨ …

Query Processing: Who needs it? A motivating example: identify all managers who work in a London branch.

SELECT *
FROM   Staff s, Branch b
WHERE  s.branchNo = b.branchNo AND s.position = 'Manager' AND b.city = 'London';

This results in these equivalent relational algebra statements:
(1) σ (position='Manager') ∧ (city='London') ∧ (Staff.branchNo=Branch.branchNo) (Staff x Branch)
(2) σ (position='Manager') ∧ (city='London') (Staff ⋈ Staff.branchNo=Branch.branchNo Branch)
(3) [σ (position='Manager') (Staff)] ⋈ Staff.branchNo=Branch.branchNo [σ (city='London') (Branch)]

A Motivating Example (cont…) Assume:
- 1000 tuples in Staff, ~50 Managers
- 50 tuples in Branch, ~5 London branches
- No indexes or sort keys
- All temporary results are written back to disk (memory is small)
- Tuples are accessed one at a time (not in blocks)

Motivating Example: Query 1 (Bad)
σ (position='Manager') ∧ (city='London') ∧ (Staff.branchNo=Branch.branchNo) (Staff x Branch)
- Requires (1000+50) disk accesses to read from the Staff and Branch relations
- Creates a temporary relation of the Cartesian product: (1000*50) tuples
- Requires (1000*50) disk accesses to write that temporary relation and (1000*50) more to read it back and test the predicate
Total Work = (1000+50) + 2*(1000*50) = 101,050 I/O operations

Motivating Example: Query 2 (Better)
σ (position='Manager') ∧ (city='London') (Staff ⋈ Staff.branchNo=Branch.branchNo Branch)
- Again requires (1000+50) disk accesses to read from Staff and Branch
- Joins Staff and Branch on branchNo, producing 1000 tuples (1 employee : 1 branch)
- Requires (1000) disk accesses to write the joined relation and (1000) more to read it back and check the predicate
Total Work = (1000+50) + 2*(1000) = 3050 I/O operations: a 3300% improvement over Query 1

Motivating Example: Query 3 (Best)
[σ (position='Manager') (Staff)] ⋈ Staff.branchNo=Branch.branchNo [σ (city='London') (Branch)]
- Read the Staff relation to determine 'Managers' (1000 reads)
- Create a 50-tuple relation (50 writes)
- Read the Branch relation to determine 'London' branches (50 reads)
- Create a 5-tuple relation (5 writes)
- Join the reduced relations and check the predicate (50 + 5 reads)
Total Work = 1000 + 2*(50) + 5 + (50 + 5) = 1160 I/O operations: an 8700% improvement over Query 1
Consider if the Staff and Branch relations were 10x the size. 100x?!
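The three totals can be reproduced with a few lines of arithmetic (simply mirroring the formulas above):

    staff, branch = 1000, 50        # tuples in Staff and Branch
    managers, london = 50, 5        # tuples surviving the two selections

    q1 = (staff + branch) + 2 * (staff * branch)                 # Cartesian product, then select
    q2 = (staff + branch) + 2 * staff                            # join on branchNo, then select
    q3 = staff + 2 * managers + london + (managers + london)     # select both inputs, then join

    print(q1, q2, q3)                                            # 101050 3050 1160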

Heuristic Optimization
GOAL: use relational algebra equivalence rules to improve the expected performance of a given query tree.
Consider the example given earlier:
- Join followed by Selection (~3050 disk reads)
- Selection followed by Join (~1160 disk reads)

Relational Algebra Transformations
Cascade of Selection:
(1) σ p∧q∧r (R) = σ p (σ q (σ r (R)))
Commutativity of Selection Operations:
(2) σ p (σ q (R)) = σ q (σ p (R))
In a sequence of projections only the last is required:
(3) Π L Π M … Π N (R) = Π L (R)
Selections can be combined with Cartesian Products and Joins:
(4) σ p (R x S) = R ⋈ p S
(5) σ p (R ⋈ q S) = R ⋈ q∧p S
Note: the above is an incomplete list! For a complete list see the text.

More Relational Algebra Transformations
Join and Cartesian Product operations are commutative and associative:
(6) R x S = S x R
(7) R x (S x T) = (R x S) x T
(8) R ⋈ p S = S ⋈ p R
(9) (R ⋈ p S) ⋈ q T = R ⋈ p (S ⋈ q T)
Selection distributes over joins:
If predicate p involves attributes of R only:
(10) σ p (R ⋈ q S) = σ p (R) ⋈ q S
If predicate p involves only attributes of R and q involves only attributes of S:
(11) σ p∧q (R ⋈ r S) = σ p (R) ⋈ r σ q (S)

Optimization Uses The Following Heuristics
- Break apart conjunctive selections into a sequence of simpler selections (preparatory step for the next heuristic).
- Move σ down the query tree for the earliest possible execution (reduce the number of tuples processed).
- Break apart lists of projection attributes and move them as far down the tree as possible; create new projections where possible (reduce tuple widths early).
- Replace σ-x pairs by ⋈ (avoid large intermediate results).
- Perform the joins with the smallest expected result first.

Heuristic Optimization Example
"What are the ticket numbers of the pilots flying to France on 01-01-06?"

SELECT p.ticketno
FROM   Flight f, Passenger p, Crew c
WHERE  f.flightNo = p.flightNo AND f.flightNo = c.flightNo
       AND f.date = '01-01-06' AND f.to = 'FRA'
       AND p.name = c.name AND c.job = 'Pilot'

Canonical Relational Algebra Expression:

Heuristic Optimization (Step 1)

Heuristic Optimization (Step 2)

Heuristic Optimization (Step 3)

Heuristic Optimization (Step 4)

Heuristic Optimization (Step 5)

Heuristic Optimization (Step 6)
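Putting the heuristics together on the pilot query, one plausible final logical plan (an assumed derivation for illustration; the step figures show the actual intermediate trees) is:

    Canonical:  Π ticketno [ σ all WHERE predicates (Flight x Passenger x Crew) ]

    After breaking apart the conjunctive selection, pushing selections and projections down,
    and replacing σ-x pairs with joins:

    Π ticketno [
        ( σ date='01-01-06' ∧ to='FRA' (Flight) )
          ⋈ Flight.flightNo = Passenger.flightNo
        Passenger
          ⋈ Passenger.flightNo = Crew.flightNo ∧ Passenger.name = Crew.name
        ( σ job='Pilot' (Crew) )
    ]

with additional projections (e.g. Π flightNo over the reduced Flight, Π name, flightNo over the reduced Crew) inserted below the joins where possible.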

Physical Execution Plan
- We have identified "optimal" Logical Query Plans
  - Every heuristic is not always the "best" transform
  - Heuristic analysis reduces the search space for cost evaluation but does not necessarily reduce costs
- Annotate Logical Query Plan operators with physical operations (1 : *)
  - Binary vs. linear search for Selection?
  - Nested-Loop Join vs. Sort-Merge Join?
  - Pipelining vs. Materialization?
- How does the optimizer determine the "cheapest" plan?

Physical Searching

Physical Storage: Record Placement
Types of Records:
- Variable Length
- Fixed Length
Record Separation:
- Fixed-length records don't need it
- If needed, indicate records with a special marker and give record lengths or offsets

Record Separation
- Unspanned: records must stay within a block; simpler, but wastes space
- Spanned: records are spread across multiple blocks; require a pointer at the end of the block to the next block with that record; essential if record size > block size

Record Separation
- Mixed Record Types (Clustering): different record types within the same block. Why cluster? Frequently accessed records are in the same block. Has performance downsides if there are many frequently run queries with different orderings.
- Split Records: put the fixed-length parts in one place and the variable-length parts in another block.

Record Separation
- Sequencing: order records in sequential blocks based on a key
- Indirection: a record address is a combination of various physical identifiers or an arbitrary bit string; very flexible but can be costly

Accessing Data
What is an index? A data structure that allows the DBMS to quickly locate particular records or tuples that meet specific conditions.
Types of indices:
- Primary Index
- Secondary Index
- Dense Index
- Sparse Index / Clustering Index
- Multilevel Indices

Accessing Data
- Primary Index: an index on the attribute that determines the sequencing of the table; guarantees that the index is unique
- Secondary Index: an index on any other attribute; does not guarantee a unique index

Accessing Data
- Dense Index: every value of the indexed attribute appears in the index; can tell whether a record exists without accessing the data file; better access to overflow records
- Clustering Index: each index entry can correspond to many records

[Figure: Dense index: a sequential index with one entry per record (keys 10, 20, 30, …), each entry pointing directly to the corresponding data record.]

Accessing Data
- Sparse Index: many values of the indexed attribute don't appear in the index; less index space per record; can keep more of the index in memory; better for insertions
- Multilevel Indices: build an index on an index (Level 2 Index -> Level 1 Index -> Data File)

[Figure: Sparse index: index entries only for the first key of each block (10, 30, 50, …), each pointing to the block that holds that key and the records that follow it.]
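A toy Python contrast of the two lookups over a sorted file (the block layout and keys are invented for illustration):

    import bisect

    blocks = [[10, 20], [30, 40], [50, 60], [70, 80], [90, 100]]      # sorted data file, 2 records per block

    dense = {key: b for b, blk in enumerate(blocks) for key in blk}   # dense: one entry per record
    sparse_keys = [blk[0] for blk in blocks]                          # sparse: one entry per block

    def lookup_dense(key):
        b = dense.get(key)                              # existence is known without touching the data file
        return blocks[b] if b is not None else None

    def lookup_sparse(key):
        b = bisect.bisect_right(sparse_keys, key) - 1   # last block whose first key <= key
        return blocks[b] if b >= 0 and key in blocks[b] else None

    print(lookup_dense(60), lookup_sparse(60))          # [50, 60] [50, 60]

The sparse variant keeps only one entry per block (so more of it fits in memory) but must always fetch a data block to confirm whether the key exists.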

B+ Tree
- Use a tree model to hold data or indices
- Maintain a balanced tree and aim for a "bushy", shallow tree
[Figure: example B+ tree: an internal node with keys such as 100, 120, 150, 180 above leaf nodes such as (3, 5, 11), (30, 35), (100, 101, 110), (120, 130), (150, 156, 179), (180, 200).]

B+ Tree: Rules
- If the root is not a leaf, it must have at least two children
- For a tree of order n, each node must have between n/2 and n pointers and children
- For a tree of order n, the number of key values in a leaf node must be between (n-1)/2 and (n-1)

B+ Tree (cont…): Rules
- The number of key values contained in a non-leaf node is 1 less than the number of pointers
- The tree must always be balanced; that is, every path from the root node to a leaf must have the same length
- Leaf nodes are linked in order of key values
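A minimal lookup sketch consistent with these rules, over a toy two-level tree (keys, fan-out, and node layout are invented for illustration; real leaves also carry record pointers and sibling links):

    import bisect

    # Internal node = (keys, children); leaf = sorted list of keys.
    leaves = [[3, 5, 11], [30, 35], [100, 101, 110], [120, 130], [150, 156, 179], [180, 200]]
    root = ([30, 100, 120, 150, 180], leaves)    # child i covers keys below keys[i]; the last child covers the rest

    def bptree_lookup(node, key):
        keys, children = node
        child = children[bisect.bisect_right(keys, key)]   # pick the child whose range contains the key
        if isinstance(child, tuple):                        # another internal node: keep descending
            return bptree_lookup(child, key)
        return key in child                                 # leaf: check membership

    print(bptree_lookup(root, 156), bptree_lookup(root, 157))   # True False

Because the tree is balanced and shallow, every lookup touches the same small number of nodes.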

Hashing
- Calculates the address of the page in which the record is to be stored, based on one or more fields
- Each hash value points to a bucket
- The hash function should evenly distribute the records throughout the file
- A good hash will assign roughly equal numbers of keys to each bucket
- Keep keys sorted within buckets
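A tiny Python sketch of hashing records into buckets (the hash function, bucket count, and records are invented for illustration):

    NUM_BUCKETS = 4
    buckets = [[] for _ in range(NUM_BUCKETS)]

    def h(key):                                  # hash function: should spread keys evenly
        return hash(key) % NUM_BUCKETS

    for record in [(10, "x"), (20, "y"), (35, "z"), (45, "w")]:
        buckets[h(record[0])].append(record)     # the hash of the key selects the bucket (page)
    buckets = [sorted(b) for b in buckets]       # keep keys sorted within each bucket

    def probe(key):
        return [r for r in buckets[h(key)] if r[0] == key]   # only one bucket is examined

    print(probe(35))                             # [(35, 'z')]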

[Figure: Hashing: key → h(key) → bucket of records.]

Hashing: Types of hashing: Extensible Hashing
- Pro: handles growing files; less wasted space; no full reorganizations
- Con: uses indirection; directory doubles in size

Hashing: Types of hashing: Linear Hashing
- Pro: handles growing files; less wasted space; no full reorganizations; no indirection like extensible hashing
- Con: still has overflow chains

Indexing vs. Hashing
- Hashing is good for: probes given a specific key, e.g. SELECT * FROM R WHERE R.A = 5
- Indexing is good for: range searches, e.g. SELECT * FROM R WHERE R.A > 5

Cost Model

Disks and Files
- The DBMS stores information on ("hard") disks. This has major implications for DBMS design!
- READ: transfer data from disk to main memory (RAM).
- WRITE: transfer data from RAM to disk.
- Both are high-cost operations relative to in-memory operations, so they must be planned carefully!

Parameters for Estimation
- M: # of available main memory buffers (estimate)
- Kept as statistics for each relation R:
  - T(R): # of tuples in R
  - B(R): # of blocks needed to hold all tuples of R
  - V(R, A): # of distinct values for attribute R.A  (= SELECT COUNT(DISTINCT A) FROM R)
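To make the statistics concrete, a small sketch that computes them for an in-memory relation (the tuples-per-block figure is an assumed parameter):

    import math

    def stats(R, attr, tuples_per_block=100):
        T = len(R)                               # T(R): number of tuples
        B = math.ceil(T / tuples_per_block)      # B(R): blocks needed to hold R
        V = len({t[attr] for t in R})            # V(R, A): distinct values of attribute A
        return T, B, V

    R = [("a", 1, 10), ("b", 1, 20), ("c", 2, 10), ("d", 2, 35), ("e", 3, 45)]
    print(stats(R, attr=1))                      # (5, 1, 3): T(R)=5, B(R)=1, V(R, B)=3

A real DBMS keeps these numbers in the catalog and refreshes them periodically rather than recomputing them per query.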

Cost of Scanning a Relation
Normally we assume relation R to be clustered, that is, stored in blocks exclusively used for representing R. For example, consider a clustered-file organization of the relations DEPT(Name, …) and EMP(Name, Dname, …):

In such a clustered file the blocks interleave tuples of both relations, e.g. DEPT: Toy …, EMP: Ann …, EMP: Bob …, DEPT: Sales …, EMP: John …, EMP: Ken …, and so on. Relation EMP might be considered clustered; relation DEPT probably not. For a clustered relation R it is sufficient to read (approximately) B(R) blocks for a full scan. If relation R is not clustered, most tuples are probably in different blocks => input cost approximately T(R).

Classification of Physical Operators: by applicability and cost
- One-pass methods: if at least one argument relation fits in main memory
- Two-pass methods: if memory is not sufficient for one pass; process relations twice, storing intermediate results on disk
- Multi-pass: generalization of two-pass for HUGE relations

Implementing Selection

How to evaluate σ C(R)?

Sufficient to examine one tuple at a time => easy to evaluate in one pass:
- Read each block of R using one input buffer.
- Output records that satisfy condition C.

If R clustered, cost = B(R); else T(R).

Projection π A(R) is handled in a similar manner.

Index-Based Selection
 

Consider the selection σ A='c' (R).

If there is an index on R.A, we can locate tuples t with t.A='c' directly. What is the cost?

How many tuples are selected?
Estimate: T(R)/V(R,A) on the average. If A is a primary key, V(R,A) = T(R) => 1 disk I/O.

Index-Based Selection (cont.)

The index is clustering if tuples with A='c' are stored in consecutive blocks (for any 'c').

[Figure: a clustering index on A: index entries point into runs of consecutive blocks that hold tuples with the same A value (e.g. 10, 10, 10, 20, 20, …).]

g. A) index => is an   estimate for the number of block accesses • Further simplifications: Ignore.A) of all R tuples to satisfy A='c’. Apply same estimate to data blocks accessible through a clustering  B(R) /V (R.Selection using a clustering index  We estimate a fraction T(R)/V(R. – cost of reading the (few) index blocks – unfilled room left intentionally in blocks – … .. e.

Selection Example: Consider σ A=0 (R) when T(R) = 20,000, B(R) = 1000, and there's an index on R.A. (Times shown assume 15 ms per disk I/O.)

A simple scan of R:
- if R not clustered: cost = T(R) = 20,000  (5 min)
- if R clustered: cost = B(R) = 1000  (15 sec)
If V(R,A) = 100 and the index is …
- clustering: cost = B(R)/V(R,A) = 10  (0.15 sec)
- not clustering: cost = T(R)/V(R,A) = 200  (3 sec)
If V(R,A) = 20,000 (i.e. A is a key): cost = 1  (15 ms)
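The same estimates can be reproduced with a few lines of arithmetic (15 ms per disk I/O, as in the slide):

    T, B, io_ms = 20_000, 1_000, 15              # T(R), B(R), and the assumed cost per disk I/O

    plans = {
        "scan, R not clustered":             T,          # read every tuple
        "scan, R clustered":                 B,          # read every block
        "index, V(R,A)=100, clustering":     B // 100,   # B(R) / V(R,A)
        "index, V(R,A)=100, not clustering": T // 100,   # T(R) / V(R,A)
        "index, A is a key":                 1,
    }
    for plan, ios in plans.items():
        print(f"{plan:36s} {ios:6d} I/Os  ~ {ios * io_ms / 1000:g} s")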

Processing of Joins
Consider the natural join R(X,Y) ⋈ S(Y,Z); general joins are handled rather similarly, possibly with additional selections (for complex join conditions).
Assumptions:
- Y = join attributes common to R and S
- S is the smaller of the relations: B(S) ≤ B(R)

One-Pass Join
Requirement: B(S) < M, i.e. S fits in memory.
- Read entire S into memory.
- Build a dictionary (balanced tree, hash table) using the join attributes of the tuples as the search key.
- Read each block of R (using one buffer); for each tuple t, find the matching tuples in the dictionary and output their join.
I/O cost ≤ B(S) + B(R)
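A minimal in-memory sketch of the one-pass join (a Python dict plays the role of the dictionary built over S; it assumes S fits in memory):

    from collections import defaultdict

    def one_pass_join(R, S, r_key, s_key):
        table = defaultdict(list)                # build phase: dictionary over the smaller relation S
        for s in S:
            table[s[s_key]].append(s)
        for r in R:                              # probe phase: a single pass over R
            for s in table.get(r[r_key], []):
                yield r + s

    R = [("a", 1, 10), ("c", 2, 10), ("e", 3, 45)]     # (A, B, C), join attribute C
    S = [(10, "x", 2), (45, "y", 3)]                   # (C, D, E)
    print(list(one_pass_join(R, S, r_key=2, s_key=0)))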

What If Memory Is Insufficient?
Basic join strategy: "nested-loop" join
- A "1+n pass" operation: one relation is read once, the other repeatedly
- No memory limitations: can be used for relations of any size

Nested-loop join (conceptually):

    for each tuple s ∈ S do
        for each tuple r ∈ R do
            if r.Y = s.Y then output join of r and s

Cost (like for a Cartesian product): T(S) * (1 + T(R)) = T(S) + T(S)T(R)

If R and S are clustered, we can apply block-based nested-loop join:

    for each chunk of M-1 blocks of S do
        Read the blocks into memory.
        Insert their tuples into a dictionary using the join attributes.
        for each block b of R do
            Read b into memory.
            for each tuple r in b do
                Find matching tuples in the dictionary and output their join with r.
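A Python sketch of this block-based variant, with blocks modeled as lists of tuples and M as the assumed number of buffers (illustrative only):

    from collections import defaultdict

    def block_nested_loop_join(R_blocks, S_blocks, r_key, s_key, M):
        for start in range(0, len(S_blocks), M - 1):          # S is the outer-loop relation
            chunk = S_blocks[start:start + M - 1]             # load M-1 blocks of S at a time
            table = defaultdict(list)
            for block in chunk:
                for s in block:
                    table[s[s_key]].append(s)
            for block in R_blocks:                            # R is read once per chunk
                for r in block:
                    for s in table.get(r[r_key], []):
                        yield r + s

    R_blocks = [[("a", 1, 10), ("c", 2, 10)], [("e", 3, 45)]]
    S_blocks = [[(10, "x", 2)], [(45, "y", 3)]]
    print(list(block_nested_loop_join(R_blocks, S_blocks, r_key=2, s_key=0, M=3)))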

Cost of Block-Based Nested-Loop Join
Consider R(X,Y) ⋈ S(Y,Z) when B(R) = 1000, B(S) = 500, and M = 101.
- Use 100 buffers for loading S => 500/100 = 5 chunks
- Total I/O cost = 5 x (100 + 1000) = 5500 blocks
- With R as the outer-loop relation the I/O cost would be 6000; in general, using the smaller relation in the outer loop gives an advantage of B(R) - B(S) operations

Analysis of Nested-Loop Join
- B(S)/(M-1) outer-loop iterations, each reading M-1 + B(R) blocks
- Total cost = B(S) + B(S)B(R)/(M-1), or approximately B(S)B(R)/M blocks
- Not the best method, but sometimes the only choice
Next: more efficient join algorithms

Sort-Based Two-Pass Join
Idea: Joining relations R and S on attribute Y is rather easy if the relations are sorted on Y, provided not too many tuples join for any single value of the join attributes. (E.g. if π Y(R) = π Y(S) = {y}, all tuples match, and we may need to resort to nested-loop join.)
If the relations are not sorted already, they have to be sorted (with two-phase multi-way merge sort, since they do not fit in memory).

Sort-Based Two-Pass Join
1. Sort R with the join attributes Y as the sort key.
2. Do the same for relation S.
3. Merge the sorted relations, using 1 buffer for the current input block of each relation:
   - skip tuples whose Y-value y is not present in both R and S
   - read in the blocks of both R and S for all tuples whose Y value is y
   - output all possible joins of the matching tuples r ∈ R and s ∈ S
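A simplified in-memory sketch of the merge step (step 3); a real implementation streams sorted runs from disk, but the matching logic is the same:

    def sort_merge_join(R, S, r_key, s_key):
        R = sorted(R, key=lambda t: t[r_key])        # stands in for pass 1: sorting on Y
        S = sorted(S, key=lambda t: t[s_key])
        i = j = 0
        while i < len(R) and j < len(S):             # pass 2: merge
            rv, sv = R[i][r_key], S[j][s_key]
            if rv < sv:
                i += 1
            elif rv > sv:
                j += 1
            else:                                    # same Y value: join every matching pair
                i2, j2 = i, j
                while i2 < len(R) and R[i2][r_key] == rv:
                    i2 += 1
                while j2 < len(S) and S[j2][s_key] == rv:
                    j2 += 1
                for r in R[i:i2]:
                    for s in S[j:j2]:
                        yield r + s
                i, j = i2, j2

    R = [("e", 3, 45), ("a", 1, 10), ("c", 2, 10)]
    S = [(45, "y", 3), (10, "x", 2), (50, "z", 1)]
    print(list(sort_merge_join(R, S, r_key=2, s_key=0)))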

[Figure: Example: joining R(X,Y) and S(Y,Z), both sorted on Y, in main memory. For a Y value such as c that appears in both relations, the matching R tuples (2,c), (3,c), … and S tuples (c,2), (c,3), (c,4) are read in and all combinations (2,c,2), (2,c,3), (2,c,4), (3,c,2), … are output; tuples whose Y value occurs in only one relation, such as (1,a) or (b,1), are skipped.]

Analysis of Sort-Based Two-Phase Join
Consider R(X,Y) ⋈ S(Y,Z) when B(R) = 1000, B(S) = 500, and M = 101.
- Remember two-phase multiway merge sort: each block is read + written + read + written once => 4 x (B(R) + B(S)) = 6000 disk I/Os
- Merge of the sorted relations for the join: B(R) + B(S) = 1500 disk I/Os
- Total I/O cost = 5 x (B(R) + B(S)) = 7500
Seems big, but for large R and S this is much better than the B(R)B(S)/M of block-based nested-loop join.

Analysis of Sort-Based Two-Phase Join
Limitations? Sorting requires max(B(R), B(S)) ≤ M², i.e. M ≥ √max{B(R), B(S)}.
Variation: perform only phase I of the sorting (building of sorted sublists) and merge all of them (can handle at most M) for the join:
- I/O cost = 3 x (B(R) + B(S))
- requires the union of R and S to fit in at most M sublists, each at most M blocks long => works if B(R) + B(S) ≤ M², i.e. M ≥ √(B(R) + B(S))

Two-Phase Join with Hashing
Idea: If the relations do not fit in memory, first hash the tuples of each relation into buckets, then join the tuples in each pair of buckets. For a join on attributes Y, use Y as the hash key.
Hash Phase, for each relation R and S:
- Use 1 input buffer and M-1 output buffers as hash buckets.
- Read each block and hash its tuples.
- When an output buffer gets full, write it to disk as the next block of that bucket.

Y = s.Y => h(r. ….Two-Phase Join with Hashing  The hashing phase produces buckets (sequences of blocks) R1.Y) = h(s. SM-1 Tuples r ∈ R and s ∈ S join iff r. RM-1 and S1.Y) => r occurs in bucket Ri and s occurs in bucket Si for the same i  . ….

Hash-Join: The Join Phase
For each i = 1, …, M-1, perform a one-pass join between buckets Ri and Si; the smaller one has to fit in M-1 main memory buffers.
The average size of bucket Ri is approximately B(R)/M, and B(S)/M for bucket Si => approximate memory requirement: min(B(R), B(S)) < M², i.e. M > √min{B(R), B(S)}.
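A compact sketch of the two phases in Python (in-memory lists stand in for the disk-resident buckets, and the bucket count plays the role of the M-1 output buffers):

    from collections import defaultdict

    def hash_join(R, S, r_key, s_key, num_buckets):
        # Hash phase: route every tuple to a bucket by hashing its join attribute.
        Rb = [[] for _ in range(num_buckets)]
        Sb = [[] for _ in range(num_buckets)]
        for r in R:
            Rb[hash(r[r_key]) % num_buckets].append(r)
        for s in S:
            Sb[hash(s[s_key]) % num_buckets].append(s)
        # Join phase: one-pass join of each bucket pair (matching tuples always share a bucket index).
        for Ri, Si in zip(Rb, Sb):
            table = defaultdict(list)
            for s in Si:
                table[s[s_key]].append(s)
            for r in Ri:
                for s in table.get(r[r_key], []):
                    yield r + s

    R = [("a", 1, 10), ("c", 2, 10), ("e", 3, 45)]
    S = [(10, "x", 2), (45, "y", 3)]
    print(list(hash_join(R, S, r_key=2, s_key=0, num_buckets=4)))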

Cost of Hash-Join
Consider R(X,Y) ⋈ S(Y,Z) when B(R) = 1000, B(S) = 500, and M = 101.
- Hashing: 100 buckets for both R and S, with average sizes 1000/100 = 10 and 500/100 = 5
- I/O cost 4500 blocks:
  - hashing phase: 2x1000 + 2x500 = 3000 blocks
  - join phase: 1000 + 500 (in total for the 100 one-pass joins)
In general: cost = 3(B(R) + B(S))

Index-Based Join
Still consider R(X,Y) ⋈ S(Y,Z). Assume there's an index on S.Y.
We can compute the join by:
- reading each tuple t of R,
- locating the matching tuples of S by an index lookup on t.Y, and
- outputting their join with tuple t.
Efficiency depends on many factors.

Cost of Index-Based Join
- Cost of scanning R: B(R) if R is clustered, T(R) if not.
- On the average, T(S)/V(S,Y) matching tuples are found by each index lookup. Cost of loading them (total for all tuples of R):
  - T(R)T(S)/V(S,Y), if the index is not clustered
  - T(R)B(S)/V(S,Y), if the index is clustered
- The cost of loading tuples of S dominates.

Example: Cost of Index-Join
Again R(X,Y) ⋈ S(Y,Z) with B(R) = 1000, B(S) = 500, T(R) = 10,000, T(S) = 5000, and V(S,Y) = 100.
Assume R is clustered and the index on S.Y is clustering:
I/O cost = 1000 + 10,000 x 500/100 = 51,000 blocks. Often not this bad…

Index-Join is useful … when |R| << |S| and V(S,Y) is large (i.e. the index on S.Y is selective).
For example, if Y is a primary key of S, each of the T(R) index lookups locates at most one record of relation S => at most T(R) input operations to load blocks of S => total cost only
- B(R) + T(R), if R is clustered, and
- T(R) + T(R) = 2T(R), if R is not clustered.

Joins Using a Sorted Index
Still consider R(X,Y) ⋈ S(Y,Z). Assume there's a sorted index (a B-tree or a sorted sequential index) on both R.Y and S.Y.
- Scan both indexes in increasing order of Y, like a merge-join but without the need to sort first.
- If the indexes are dense, we can skip nonmatching tuples without loading them; very efficient.
Details are left to the exercises.

Thank you for your time. Questions? Comments?
