
Algorithm Course

Search Algorithms

A search algorithm is a method of locating a specific
item of information in a larger collection of data.

Why Search?

Everyday life – we are always looking for something: yellow
pages, universities, hospitals, etc.

World Wide Web – different searching mechanisms

Databases – used to search for a record
Sequential search
• Linear search

Sorted array search
• Binary search

Hashing
• Hashing functions

Recursive structures search
• Binary search tree

Multidimensional search
Linear Search

This is a very simple algorithm.

It uses a loop to sequentially step through an array,
starting with the first element.

It compares each element with the value being searched
for (the key) and stops when that value is found or the
end of the array is reached.
Linear Search Example

Algorithm Pseudo Code:

found = false
position = -1
index = 0
while index < number of elements and found = false
    if list[index] = search value
        found = true
        position = index
    end if
    index = index + 1
end while
return position
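
For concreteness, here is a minimal Python version of the same loop (the function name and the -1 "not found" convention are illustrative):

def linear_search(items, key):
    # Step through the array from the first element,
    # comparing each element with the search key.
    for index, value in enumerate(items):
        if value == key:
            return index   # found: stop and report the position
    return -1              # reached the end of the array: not found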
Linear Search Tradeoffs

Benefits:
• Easy algorithm to understand
• Array can be in any order

Disadvantages:
• Inefficient (slow): may have to examine every element
Efficiency of a Sequential Search of an Array

In the best case, you will locate the desired item first in
the array. You will have made only one comparison, so the
search is O(1).

In the worst case, you will search the entire array: either
the desired item will be found at the end of the array, or
not at all. In either event you have made n comparisons for
an array of n elements, so the worst case is O(n).

In the average case, you will look at about one half of the
elements in the array. This is O(n/2), which is just O(n).
Binary Search
Requires array elements to be in sorted order.

1. Divide the array into three sections:
   • the middle element
   • the elements on one side of the middle element
   • the elements on the other side of the middle element

2. If the middle element is the correct value, done. Otherwise,
   go to step 1 using only the half of the array that may
   contain the correct value.

3. Continue steps 1 and 2 until either the value is found or
   there are no more elements to examine.
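
A minimal Python sketch of these steps (the function name and the -1 "not found" convention are illustrative):

def binary_search(items, key):
    # items must already be in sorted order
    low, high = 0, len(items) - 1
    while low <= high:                 # elements remain to examine
        mid = (low + high) // 2        # middle element
        if items[mid] == key:
            return mid                 # correct value: done
        elif items[mid] < key:
            low = mid + 1              # keep the upper half
        else:
            high = mid - 1             # keep the lower half
    return -1                          # value is absent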

How a Binary Search Works
Always look at the center value.
Each time you get to discard half of the remaining list.

Binary Search Example
Example 1. Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.

Step 1 (middle element is 19 > 6):   remaining part is [-1 5 6 18]
Step 2 (middle element is 5 < 6):    remaining part is [6 18]
Step 3 (middle element is 6 == 6):   found

Example 2. Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.

Step 1 (middle element is 19 < 103):  remaining part is [25 46 78 102 114]
Step 2 (middle element is 78 < 103):  remaining part is [102 114]
Step 3 (middle element is 102 < 103): remaining part is [114]
Step 4 (middle element is 114 > 103): nothing remains
Step 5: the searched value is absent
Complexity Analysis

A huge advantage of this algorithm is that its complexity
depends on the array size logarithmically in the worst case.

In practice this means that the algorithm will do at most
log2(n) iterations, which is a very small number even for
big arrays.

On every step the size of the searched part is reduced by
half. The algorithm stops when there are no elements left
to search.
Linear vs Binary Search
When compared to linear search, whose worst-case behavior is
n iterations, we see that binary search is substantially
faster as n grows large.

For example, to search a list of one million items takes as
many as one million iterations with linear search, but never
more than twenty iterations with binary search.

However, a binary search can only be performed if the list is
in sorted order.
Hashing

• An important and widely useful technique for implementing dictionaries
• Constant time per operation (on the average)
• Worst-case time proportional to the size of the set for each operation
Basic Idea
• Use a hash function to map keys into positions in
a hash table

Ideally
• If element e in array A has key k and h is the hash
function, then e is stored in position h(k) of array A.
• To search for e, compute h(k) to locate its position in
array A. If no element is there, the dictionary does not
contain e.
Diagram: array A alongside the hash table h. For example,
for e = 5, h(5) = 4, so 5 is stored at A(4). In general
A(h(e)) = e; that is, h(e) gives the index of e in array A.
Analysis (Ideal Case, Unrealistic)
• O(b) time to initialize the hash table (b = number of
positions or buckets in the hash table)
• O(1) time to perform insert, remove, search
• Works for implementing dictionaries, but many applications
have key ranges that are too large to have a 1-1 mapping
between buckets and keys!

Example:
• Suppose keys can take on values from 0 .. 65,535 (2-byte
unsigned integers)
• Expect ≈ 1,000 records at any given time
• Impractical to use a hash table with 65,536 slots!
Hash Functions
If the key range is too large:
• use a hash table with fewer buckets, and
• a hash function which maps multiple keys to the same bucket:
h(k1) = β = h(k2): k1 and k2 have a collision at slot β

Popular hash function: hashing by division
h(k) = k % D, where D is the number of buckets in the hash table
(% is the MOD operator: the remainder of division)

Example: hash table with 11 buckets, h(k) = k % 11
80 → 3 (80 % 11 = 3), 40 → 7, 65 → 10
58 → 3: collision!
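
A quick Python illustration of hashing by division (the function name is illustrative):

def h(k, D=11):
    # hashing by division: map key k into one of D buckets
    return k % D

for k in (80, 40, 65, 58):
    print(k, "->", h(k))   # 58 -> 3 collides with 80 -> 3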
Hashing

Diagram: a hash function h maps the universe U of keys into
the slots 0 .. m–1 of the table. The actual keys k1 .. k5 are
a small subset of U; here k2 and k5 collide, since
h(k2) = h(k5).
Example: hash function h(k) = k % 4, keys {2, 4, 1, 5, 7}:
2 % 4 = 2, 4 % 4 = 0, 1 % 4 = 1, 5 % 4 = 1, 7 % 4 = 3
Keys 1 and 5 collide at bucket 1!

The colliding records can be stored outside the table (open
hashing), or one of them can be stored at another slot in the
table (closed hashing).
Collision Resolution Policies
• Two classes:
  – Open hashing, also called separate chaining
  – Closed hashing, also called open addressing

• The difference has to do with whether collisions are
stored outside the table (open hashing) or whether a
collision results in storing one of the records at
another slot in the table (closed hashing)
Methods of Resolution
• Open hashing: chaining
  – Store all elements that hash to the same slot in a
    linked list.
  – Store a pointer to the head of the linked list in the
    hash table slot.

• Closed hashing: open addressing
  – All elements are stored in the hash table itself.
  – When collisions occur, use a systematic (consistent)
    procedure to store elements in free slots of the table.
Open Hashing
• Each bucket in the hash table is the head of a linked list.

• All elements that hash to a particular bucket are placed on
that bucket's linked list.

• Records within a bucket can be ordered by order of
insertion or by key value order.
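
A compact Python sketch of this scheme (the class and method names are illustrative; plain lists stand in for the linked lists):

class ChainedHashTable:
    # Open hashing (separate chaining): each bucket holds the
    # list of all keys that hash to it, in insertion order.
    def __init__(self, num_buckets=11):
        self.buckets = [[] for _ in range(num_buckets)]

    def _hash(self, key):
        return key % len(self.buckets)   # hashing by division

    def insert(self, key):
        bucket = self.buckets[self._hash(key)]
        if key not in bucket:
            bucket.append(key)

    def search(self, key):
        return key in self.buckets[self._hash(key)]

    def remove(self, key):
        bucket = self.buckets[self._hash(key)]
        if key in bucket:
            bucket.remove(key)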
Collision Resolution by Open Hashing

Diagram: keys k1 and k4 collide (h(k1) = h(k4)), as do k2,
k5, and k6 (h(k2) = h(k5) = h(k6)) and k3 and k7
(h(k3) = h(k7)); k8 hashes to a slot of its own. Each bucket
heads a linked list of the keys that hash to it: k1 → k4,
k5 → k2 → k6, k7 → k3, and k8 alone.
Open Hashing: Analysis
• Open hashing is most appropriate when the hash table is
kept in main memory, implemented with a standard in-memory
linked list.

• We hope that the number of elements per bucket is roughly
equal, so that the lists will be short.

• If there are n elements in the set, then each bucket will
have roughly n/D elements, where D is the number of buckets
in the hash table.

• If we can estimate n and choose D to be roughly as large,
then the average bucket will have only one or two members.
Open Hashing: Analysis
Average time per operation:

• D buckets, n elements → an average of n/D elements per
bucket

• insert, search, and remove operations take O(1 + n/D)
time each

• If we can choose D to be about n, constant time
Closed Hashing
To search for key k:
• Examine slot h(k). Examining a slot is known as a probe.
• If slot h(k) contains key k, the search is successful. If
the slot contains NIL, the search is unsuccessful.
• There's a third possibility: slot h(k) contains a key that
is not k.
  – Compute the index of some other slot, based on k and
    which probe we are on.
  – Keep probing until we either find key k or we find a
    slot holding NIL.
Advantage: avoids pointers, so the memory they would take
can be used for a larger table.
Closed Hashing
• Associated with closed hashing is a rehash strategy:
"If we try to place x in bucket h(x) and find it
occupied, find alternative locations h1(x), h2(x), etc.
Try each in order; if none is empty, the table is full."

• In general, the collision resolution strategy is to
generate a sequence of hash table slots (the probe
sequence) that can hold the record; test each slot
until an empty one is found (probing).
Computing Probe Sequences
Three common schemes:
• Linear probing
• Quadratic probing
• Double hashing

The simplest rehash strategy is called linear probing:

hi(x) = (h(x) + i) % D

or equivalently h(k, i) = (h(k) + i) mod m, where k is the
key, i is the probe number, and h is the auxiliary hash
function. For example:

h1(d) = (h(d) + 1) % D
h2(d) = (h(d) + 2) % D
h3(d) = (h(d) + 3) % D
Example: Linear (Closed) Hashing
D = 8; keys a, b, c, d have hash values h(a) = 3, h(b) = 0,
h(c) = 4, h(d) = 3.

Where do we insert d? Slot 3 is already filled (by a).
Probe sequence using linear probing:
h1(d) = (h(d) + 1) % 8 = 4 % 8 = 4   (filled by c)
h2(d) = (h(d) + 2) % 8 = 5 % 8 = 5   (empty: d goes here)
(the sequence would continue 6, 7, ... if needed)

Resulting table:
slot: 0  1  2  3  4  5  6  7
key:  b  -  -  a  c  d  -  -

A probe sequence that runs past the last slot wraps around
to the beginning of the table: from slot 7 it continues
0, 1, 2, ...
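
A small Python sketch of insertion with linear probing (the names are illustrative; None marks an empty slot):

def insert_linear(table, key, home):
    # Probe home, home+1, ... (mod len(table)) until an empty
    # slot is found; the sequence wraps around the table end.
    m = len(table)
    for i in range(m):
        j = (home + i) % m
        if table[j] is None:
            table[j] = key
            return j
    raise RuntimeError("table is full")

table = [None] * 8
for key, home in (("b", 0), ("a", 3), ("c", 4), ("d", 3)):
    insert_linear(table, key, home)
# table is now ['b', None, None, 'a', 'c', 'd', None, None]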
Example: insert 1052, with h(k) = k % 11

Table before the insertion:
slot: 0     1     2     3  4  5  6  7     8     9     10
key:  1001  9537  3016  -  -  -  -  9874  2009  9875  -

h(1052) = 1052 % 11 = 7       (occupied)
h1(1052) = (7 + 1) % 11 = 8   (occupied)
h2(1052) = (7 + 2) % 11 = 9   (occupied)
h3(1052) = (7 + 3) % 11 = 10  (empty: 1052 goes here)

If the next element has home bucket 0, 1, or 2, it ends up in
bucket 3; only a record whose home position is 3 will stay
there. But only records hashing to 4 will end up in bucket 4
(p = 1/11); the same holds for buckets 5 and 6.
Example: Linear Probing
• h'(x) = x mod 13
• h(x, i) = (h'(x) + i) mod 13

• Insert keys 18, 41, 22, 44, 59, 32, 31, 73, in this order:

slot: 0  1  2   3  4  5   6   7   8   9   10  11  12
key:  -  -  41  -  -  18  44  59  32  22  31  73  -
Pseudo-code for Search

Hash-Search(T, k)
1. i ← 0
2. repeat j ← h(k, i)
3.     if T[j] = k
4.         then return j
5.     i ← i + 1
6. until T[j] = NIL or i = m
7. return NIL
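
The same search written in Python (names are illustrative; None plays the role of NIL):

def hash_search(table, key, h):
    # Follow the probe sequence h(key, 0), h(key, 1), ...
    # until the key or an empty slot (None) is found.
    m = len(table)
    for i in range(m):
        j = h(key, i)
        if table[j] == key:
            return j        # successful search: return the slot
        if table[j] is None:
            break           # empty slot: the key cannot be present
    return None             # unsuccessful search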
Linear Probing
• Suffers from primary clustering:
  – Long runs of occupied slots build up.
  – Long runs tend to get longer, since an empty slot
    preceded by i full slots gets filled next with
    probability (i + 1)/m.
  – Hence, average search and insertion times increase.
Quadratic Probing

• h(k, i) = (h(k) + c1·i + c2·i²) mod m, where k is the key,
i is the probe number, h is the auxiliary hash function, and
c1 and c2 (c2 ≠ 0) are constants.

• The initial probe position is T[h(k)]; later probe
positions are offset by amounts that depend on a quadratic
function of the probe number i.

• Must constrain c1, c2, and m to ensure that we get a full
permutation of 0, 1, …, m–1.

• Can suffer from secondary clustering:
  – If two keys have the same initial probe position, then
    their probe sequences are the same.
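
One well-known choice satisfying that constraint is c1 = c2 = 1/2 with m a power of 2; a small illustrative Python check (the function name is mine):

def quadratic_probes(key, m, h):
    # c1 = c2 = 1/2 gives offsets i/2 + i*i/2 = i*(i+1)/2,
    # which is always an integer
    for i in range(m):
        yield (h(key) + i * (i + 1) // 2) % m

# list(quadratic_probes(0, 8, lambda k: k)) ->
# [0, 1, 3, 6, 2, 7, 5, 4]   (a full permutation of 0..7)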
Double Hashing
• h(k, i) = (h1(k) + i·h2(k)) mod m, where k is the key, i is
the probe number, and h1 and h2 are auxiliary hash functions.
• Two auxiliary hash functions:
  – h1 gives the initial probe; h2 gives the remaining probes.
• Must have h2(k) relatively prime to m, so that the probe
sequence is a full permutation of 0, 1, …, m–1:
  – choose m to be a power of 2 and have h2(k) always
    return an odd number, or
  – let m be prime, and have 1 < h2(k) < m.
• Θ(m²) different probe sequences:
  – one for each possible combination of h1(k) and h2(k)
  – close to the ideal of uniform hashing.
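
A brief Python sketch of the double-hashing probe sequence, taking m prime (the particular h1 and h2 are illustrative choices):

def double_hash_probes(key, m):
    # m prime guarantees 1 <= h2 < m is relatively prime to m
    h1 = key % m
    h2 = 1 + key % (m - 1)
    for i in range(m):
        yield (h1 + i * h2) % m

# list(double_hash_probes(1052, 11)) ->
# [7, 10, 2, 5, 8, 0, 3, 6, 9, 1, 4]   (a full permutation of 0..10)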
Performance Analysis - Worst Case

• Initialization: O(b), where b is the number of buckets

• Insert and search: O(n), where n is the number of elements
in the table; in the worst case all n key values have the
same home bucket

• No better than a linear list for maintaining a dictionary!
Performance Analysis - Average Case

• The expected cost of hashing is a function of how full the
table is: the load factor α = n/b

• Average costs under linear hashing (probing) are:
  – Insertion: (1/2)(1 + 1/(1 - α)²)
  – Deletion: (1/2)(1 + 1/(1 - α))