
Rytas

12/12/04

1. Preface

An optimal binary search tree (OBST) is a special kind of advanced tree. The goal is to reduce the expected cost of searching the BST; note that an OBST does not necessarily have the lowest height. The construction needs three tables, recording probabilities, costs, and roots.

2. Premise

We are given n keys k1, k2, …, kn in sorted order (so that k1 < k2 < … < kn), and we wish to build a binary search tree from these keys. For each key ki we have a probability pi that a search will be for ki; the data keys become the internal nodes of the tree. Some searches, however, may be for values not among the ki, so we also have n+1 "dummy keys" d0, d1, …, dn representing values not equal to any ki: d0 represents all values less than k1, dn represents all values greater than kn, and for i = 1, 2, …, n-1 the dummy key di represents all values between ki and ki+1. The dummy keys are the leaves (external nodes) of the tree.

3. Formula & Proof

Every search falls into one of two cases: it is either a success (it finds some key ki) or a failure (it ends at some dummy key di). It follows immediately that the probabilities sum to 1, which gives the first statement:

    sum(i=1..n) pi + sum(i=0..n) qi = 1
      (success)        (failure)

Because we have probabilities of searches for each key and each dummy key, we can determine the expected cost of a search in a given binary search tree T. Let us assume that the actual cost of a search is the number of nodes examined, i.e., the depth in T of the node found by the search, plus 1. Then the expected cost of a search in T is (the second statement):

    E[search cost in T] = sum(i=1..n) pi * (depthT(ki) + 1) + sum(i=0..n) qi * (depthT(di) + 1)
                        = 1 + sum(i=1..n) pi * depthT(ki) + sum(i=0..n) qi * depthT(di)

where depthT denotes a node's depth in the tree T.
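The second statement can be checked numerically. The sketch below (a plain-Python illustration; the depth tables encode the tree that Figure (a) below describes, with k2 at the root, k1 and k4 as its children, and k3, k5 below k4) evaluates the expected cost directly:

```python
# Expected search cost: E = sum over all nodes of probability * (depth + 1).
# Probabilities are the running example's; depths are those of tree (a).
p = {1: 0.15, 2: 0.10, 3: 0.05, 4: 0.10, 5: 0.20}           # key probabilities p_i
q = {0: 0.05, 1: 0.10, 2: 0.05, 3: 0.05, 4: 0.05, 5: 0.10}  # dummy probabilities q_i

key_depth   = {1: 1, 2: 0, 3: 2, 4: 1, 5: 2}                # depth of k_i in tree (a)
dummy_depth = {0: 2, 1: 2, 2: 3, 3: 3, 4: 3, 5: 3}          # depth of d_i in tree (a)

expected_cost = (sum(p[i] * (key_depth[i] + 1) for i in p) +
                 sum(q[i] * (dummy_depth[i] + 1) for i in q))
print(round(expected_cost, 2))  # prints 2.8
```

This reproduces the 2.80 total that the node-by-node table below arrives at.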

Figure (a) and Figure (b): two BSTs over the keys k1…k5. In (a), the root k2 has children k1 and k4, and k4 has children k3 and k5. In (b), the root k2 has children k1 and k5, k5 has left child k4, and k4 has left child k3; the dummy keys d0…d5 hang off the appropriate leaves. Both trees use the probabilities:

    i     0     1     2     3     4     5
    pi         0.15  0.10  0.05  0.10  0.20
    qi   0.05  0.10  0.05  0.05  0.05  0.10

By Figure (a), we can calculate the expected search cost node by node, using Cost = Probability * (Depth + 1):

    Node   Depth   Probability   Cost
    k1     1       0.15          0.30
    k2     0       0.10          0.10
    k3     2       0.05          0.15
    k4     1       0.10          0.20
    k5     2       0.20          0.60
    d0     2       0.05          0.15
    d1     2       0.10          0.30
    d2     3       0.05          0.20
    d3     3       0.05          0.20
    d4     3       0.05          0.20
    d5     3       0.10          0.40

Summing the column, Figure (a) costs 0.30 + 0.10 + 0.15 + 0.20 + 0.60 + 0.15 + 0.30 + 0.20 + 0.20 + 0.20 + 0.40 = 2.80. Figure (b), on the other hand, costs 2.75, and that tree really is optimal. Notice that the height of (b) is greater than that of (a), and that the key k5 has the greatest search probability of any key, yet the root of the OBST shown is k2. (The lowest expected cost of any BST with k5 at the root is 2.85.)

Step 1: The structure of an OBST

We start with an observation about subtrees. Consider any subtree of a BST. It must contain keys in a contiguous range ki, …, kj, for some 1 <= i <= j <= n. In addition, a subtree that contains keys ki, …, kj must also have as its leaves the dummy keys di-1, …, dj.

We use optimal substructure to show that we can construct an optimal solution to the problem from optimal solutions to subproblems. Given keys ki, …, kj, one of these keys, say kr (i <= r <= j), will be the root of an optimal subtree containing these keys. The left subtree of the root kr will contain the keys ki, …, kr-1 and the dummy keys di-1, …, dr-1, and the right subtree will contain the keys kr+1, …, kj and the dummy keys dr, …, dj. As long as we examine all candidate roots kr, where i <= r <= j, and we determine all optimal binary search trees containing ki, …, kr-1 and those containing kr+1, …, kj, we are guaranteed to find an OBST.

There is one detail worth noting about "empty" subtrees. Suppose that in a subtree with keys ki, …, kj, we select ki as the root. By the above argument, ki's left subtree contains the keys ki, …, ki-1. It is natural to interpret this sequence as containing no keys; it has no actual keys, but it does contain the single dummy key di-1. Symmetrically, if we select kj as the root, then kj's right subtree contains the keys kj+1, …, kj; this right subtree contains no actual keys, but it does contain the dummy key dj.

Step 2: A recursive solution

We are ready to define the value of an optimal solution recursively. We pick as our subproblem domain finding an OBST containing the keys ki, …, kj, where i >= 1, j <= n, and j >= i-1. (It is when j = i-1 that there are no actual keys; we have just the dummy key di-1.) Let us define e[i,j] as the expected cost of searching an OBST containing the keys ki, …, kj. Ultimately, we wish to compute e[1,n].

The easy case occurs when j = i-1. Then we have just the dummy key di-1, and the expected search cost is e[i,i-1] = qi-1. When j >= i, we need to select a root kr from among ki, …, kj and then make an OBST with keys ki, …, kr-1 its left subtree and an OBST with keys kr+1, …, kj its right subtree. What happens to the expected search cost of a subtree when it becomes a subtree of a node? The answer is that the depth of each node in the subtree increases by 1.

By the second statement, the expected search cost of this subtree increases by the sum of all the probabilities in the subtree. For a subtree with keys ki, …, kj, let us denote this sum of probabilities as

    w(i,j) = sum(l=i..j) pl + sum(l=i-1..j) ql

Thus, if kr is the root of an optimal subtree containing keys ki, …, kj, we have

    e[i,j] = pr + (e[i,r-1] + w(i,r-1)) + (e[r+1,j] + w(r+1,j))

Noting that w(i,j) = w(i,r-1) + pr + w(r+1,j), we can rewrite this as

    e[i,j] = e[i,r-1] + e[r+1,j] + w(i,j)

The recursive equation above assumes that we know which node kr to use as the root. We choose the root that gives the lowest expected search cost, giving us our final recursive formulation:

    e[i,j] = qi-1                                                    if j = i-1
    e[i,j] = min{ e[i,r-1] + e[r+1,j] + w(i,j) : i <= r <= j }       if i <= j
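As a small worked instance of this recurrence, the sketch below computes e[1,2] for the running example's probabilities by trying both candidate roots (variable names here are ad hoc):

```python
# Evaluate e[i,j] = min over r of e[i,r-1] + e[r+1,j] + w(i,j)
# for the two-key subproblem {k1, k2} of the running example.
q0, q1, q2 = 0.05, 0.10, 0.05
p1, p2 = 0.15, 0.10

w12 = p1 + p2 + q0 + q1 + q2     # w(1,2) = 0.45
w11 = p1 + q0 + q1               # w(1,1) = 0.30
w22 = p2 + q1 + q2               # w(2,2) = 0.25

e10, e32 = q0, q2                # empty ranges: e[i,i-1] = q_{i-1}
e11 = q0 + q1 + w11              # single-key tree rooted at k1: 0.45
e22 = q1 + q2 + w22              # single-key tree rooted at k2: 0.40

cand_r1 = e10 + e22 + w12        # k1 as root: 0.90
cand_r2 = e11 + e32 + w12        # k2 as root: 0.95
e12 = min(cand_r1, cand_r2)      # optimal cost for keys {k1, k2}
print(round(e12, 2))             # prints 0.9
```

So for the subproblem {k1, k2} alone, k1 is the better root, even though k2 roots the full optimal tree.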

The e[i,j] values give the expected search costs in OBSTs. To help us keep track of the structure of an OBST, we define root[i,j], for 1 <= i <= j <= n, to be the index r for which kr is the root of an OBST containing keys ki, …, kj.

Step 3: Computing the expected search cost of an OBST

We store the e[i,j] values in a table e[1..n+1, 0..n]. The first index needs to run to n+1 rather than n because, in order to have a subtree containing only the dummy key dn, we will need to compute and store e[n+1, n]. The second index needs to start from 0 because, in order to have a subtree containing only the dummy key d0, we will need to compute and store e[1, 0]. We will use only the entries e[i,j] for which j >= i-1. We also use a table root[i,j] for recording the root of the subtree containing keys ki, …, kj; this table uses only the entries for which 1 <= i <= j <= n.

We will need one other table for efficiency. Rather than compute the value of w(i,j) from scratch every time we are computing e[i,j], we store these values in a table w[1..n+1, 0..n]. For the base case, we compute w[i,i-1] = qi-1 for 1 <= i <= n+1. For j >= i, we compute:

    w[i,j] = w[i,j-1] + pj + qj
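A minimal sketch of filling the w table with this O(1)-per-entry recurrence (probabilities are the running example's; p[0] is an unused placeholder so that list indices match the 1-based notation):

```python
# Fill w[i][j] = w[i][j-1] + p[j] + q[j] instead of re-summing probabilities.
p = [0.0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
n = 5

w = [[0.0] * (n + 1) for _ in range(n + 2)]   # rows 1..n+1, cols 0..n used
for i in range(1, n + 2):
    w[i][i - 1] = q[i - 1]                    # base case: only dummy key d_{i-1}
for length in range(1, n + 1):
    for i in range(1, n - length + 2):
        j = i + length - 1
        w[i][j] = w[i][j - 1] + p[j] + q[j]

print(round(w[1][5], 2))   # weight of the whole key range; prints 1.0
```

As a sanity check, w[1,n] must equal the total probability mass, which is 1 by the first statement.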

OPTIMAL-BST(p, q, n)
  for i = 1 to n+1
      do e[i, i-1] = q_{i-1}
         w[i, i-1] = q_{i-1}
  for l = 1 to n
      do for i = 1 to n-l+1
             do j = i+l-1
                e[i, j] = INFINITY
                w[i, j] = w[i, j-1] + p_j + q_j
                for r = i to j
                    do t = e[i, r-1] + e[r+1, j] + w[i, j]
                       if t < e[i, j]
                          then e[i, j] = t
                               root[i, j] = r
  return e and root
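Assuming nothing beyond the pseudocode above, a direct Python transcription might look like this (tables are padded with an unused slot so that indices stay 1-based):

```python
import math

def optimal_bst(p, q, n):
    """OPTIMAL-BST: p[1..n] key probabilities, q[0..n] dummy-key
    probabilities. Returns the (e, root) tables; e[1][n] is the
    expected cost of an optimal tree."""
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):          # length of the key range
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = math.inf
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):       # try every candidate root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r

    return e, root

# Running example (p[0] is a 1-indexing placeholder).
p = [0.0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst(p, q, 5)
print(round(e[1][5], 2))  # prints 2.75
```

Note that r = 2 and r = 4 both achieve cost 2.75 for the full problem; with strict `<` and floating-point rounding, either may be recorded as root[1][5], and both yield an optimal tree.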

The tables e[i,j], w[i,j], and root[i,j] computed by OPTIMAL-BST:

    e       j=0   j=1   j=2   j=3   j=4   j=5
    i=1    0.05  0.45  0.90  1.25  1.75  2.75
    i=2          0.10  0.40  0.70  1.20  2.00
    i=3                0.05  0.25  0.60  1.30
    i=4                      0.05  0.30  0.90
    i=5                            0.05  0.50
    i=6                                  0.10

    w       j=0   j=1   j=2   j=3   j=4   j=5
    i=1    0.05  0.30  0.45  0.55  0.70  1.00
    i=2          0.10  0.25  0.35  0.50  0.80
    i=3                0.05  0.15  0.30  0.60
    i=4                      0.05  0.20  0.50
    i=5                            0.05  0.35
    i=6                                  0.10

    root    j=1   j=2   j=3   j=4   j=5
    i=1      1     1     2     2     2
    i=2            2     2     2     4
    i=3                  3     4     5
    i=4                        4     5
    i=5                              5
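Given a root table, the shape of the OBST can be read off recursively: the subtree for keys ki…kj is rooted at k_r with r = root[i,j], its left subtree covers ki…kr-1, its right subtree covers kr+1…kj, and an empty range (i > j) stands for the dummy key dj. A sketch, with the root values transcribed from the table above:

```python
# Walk the root table to print the optimal tree's structure.
# root[(i, j)] = index r of the key at the root of the optimal
# subtree for keys k_i..k_j (values from the table above).
root = {(1, 1): 1, (2, 2): 2, (3, 3): 3, (4, 4): 4, (5, 5): 5,
        (1, 2): 1, (2, 3): 2, (3, 4): 4, (4, 5): 5,
        (1, 3): 2, (2, 4): 2, (3, 5): 5,
        (1, 4): 2, (2, 5): 4,
        (1, 5): 2}

def describe(i, j, parent, side, out):
    if i > j:                                   # empty range -> dummy key d_j
        out.append(f"d{j} is the {side} child of k{parent}")
        return
    r = root[(i, j)]
    out.append(f"k{r} is the root" if parent is None
               else f"k{r} is the {side} child of k{parent}")
    describe(i, r - 1, r, "left", out)          # keys k_i .. k_{r-1}
    describe(r + 1, j, r, "right", out)         # keys k_{r+1} .. k_j

lines = []
describe(1, 5, None, None, lines)
print("\n".join(lines))
```

Running this reproduces exactly the tree of Figure (b): k2 at the root, k1 its left child, k5 its right child, k4 the left child of k5, and k3 the left child of k4, with the six dummy keys as leaves.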

Advanced Proof-1

All keys (including data keys and dummy keys) have probability weights that sum to 1. Because the probability of ki is pi and that of di is qi, we can write this as:

    sum(i=1..n) pi + sum(i=0..n) qi = 1    ……formula (1)

Advanced Proof-2

We first focus on the probability weight of just part of the full tree, not all of it. That is, suppose we have the data ki, …, kj, for some 1 <= i <= j <= n. Their total weight is

    w[i,j] = sum(l=i..j) pl + sum(l=i-1..j) ql

and by exploiting the recursive structure we can build the whole weight table with

    w[i,j] = w[i,j-1] + pj + qj

Advanced Proof-3

Finally, we come to our topic: the cost. We define the recursive structure's cost e[i,j] for the keys ki, …, kj, 1 <= i <= j <= n, which is, of course, the quantity we want to make optimal.

Advanced Proof-4

The final cost formula: if kr is chosen as the root of the subtree for ki, …, kj, then

    e[i,j] = pr + (e[i,r-1] + w[i,r-1]) + (e[r+1,j] + w[r+1,j])

Noting that pr + w[i,r-1] + w[r+1,j] = w[i,j], this simplifies to

    e[i,j] = e[i,r-1] + e[r+1,j] + w[i,j]

Taking the minimal value over all candidate roots r gives each entry, and we use this to build the cost table.

P.S. If j = i-1, the sequence has no actual key, only the dummy key di-1, so neither a weight nor a cost calculation is needed: e[i,i-1] = qi-1.

Exercise

Determine the cost and structure of an optimal binary search tree for a set of n = 7 keys with the following probabilities:

    i     0     1     2     3     4     5     6     7
    pi         0.04  0.06  0.08  0.02  0.10  0.12  0.14
    qi   0.06  0.06  0.06  0.06  0.05  0.05  0.05  0.05
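To check an answer to the exercise, the OPTIMAL-BST procedure can be run on these probabilities (a sketch; the function is repeated here so the snippet is self-contained):

```python
import math

def optimal_bst(p, q, n):
    """DP from the pseudocode above; returns the cost table e and the
    root table. e[1][n] is the optimal expected search cost."""
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = math.inf
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

# Exercise data (p[0] is a 1-indexing placeholder).
p = [0.0, 0.04, 0.06, 0.08, 0.02, 0.10, 0.12, 0.14]
q = [0.06, 0.06, 0.06, 0.06, 0.05, 0.05, 0.05, 0.05]
e, root = optimal_bst(p, q, 7)
print("optimal cost:", round(e[1][7], 2), " root: k%d" % root[1][7])
```

Work the tables by hand first, then compare against the printed result.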
