CS 373 Lecture 6: Hash Tables Fall 2002
Aitch, Are, Ay, Bee, Cue, Dee, Double U, Ee, Ef, El, Em, En, Ess, Ex, Eye, Gee, Jay, Kay, Oh, Pea, Sea, Tea, Vee, Wye, You, Zee
— Sidney Harris, “The Alphabet in Alphabetical Order”
6 Hash Tables (September 24)
6.1 Introduction
A hash table is a data structure for storing a set of items, so that we can quickly determine whether an item is or is not in the set. The basic idea is to pick a hash function $h$ that maps every possible item $x$ to a small integer $h(x)$. Then we store $x$ in slot $h(x)$ in an array. The array is the hash table.

Let's be a little more specific. We want to store a set of $n$ items. Each item is an element of some finite¹ set $U$ called the universe; we use $u$ to denote the size of the universe, which is just the number of items in $U$. A hash table is an array $T[1..m]$, where $m$ is another positive integer, which we call the table size. Typically, $m$ is much smaller than $u$. A hash function is a function

$$h : U \to \{0, 1, \ldots, m-1\}$$

that maps each possible item in $U$ to a slot in the hash table. We say that an item $x$ hashes to the slot $T[h(x)]$.

Of course, if $u = m$, then we can always just use the trivial hash function $h(x) = x$; in other words, use the item itself as the index into the table. This is called a direct access table (or more commonly, an array). In most applications, though, the universe of possible keys is orders of magnitude too large for this approach to be practical. Even when it is possible to allocate enough memory, we usually need to store only a small fraction of the universe. Rather than wasting lots of space, we should make $m$ roughly equal to $n$, the number of items in the set we want to maintain.

What we'd like is for every item in our set to hash to a different position in the array. Unfortunately, unless $m = u$, this is too much to hope for, so we have to deal with collisions. We say that two items $x$ and $y$ collide if they have the same hash value: $h(x) = h(y)$. Since we obviously can't store two items in the same slot of an array, we need to describe some methods for resolving collisions. The two most common methods are called chaining and open addressing.
¹ This finiteness assumption is necessary for several of the technical details to work out, but can be ignored in practice. To hash elements from an infinite universe (for example, the positive integers), pretend that the universe is actually finite but very very large. In fact, in real practice, the universe actually is finite but very very large. For example, on most modern computers, there are only $2^{64}$ integers (unless you use a big integer package like GMP, in which case the number of integers is closer to $2^{2^{32}}$).
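To make these definitions concrete, here is a minimal Python sketch; the modular hash function and the sample items are illustrations of mine, not from the notes:

```python
m = 10                  # table size, much smaller than the universe
T = [None] * m          # the hash table: one slot per hash value

def h(x):
    # A simple hash function mapping small integers to {0, 1, ..., m-1}.
    return x % m

for x in [42, 17, 93]:  # store a small set of n = 3 items
    T[h(x)] = x         # item x hashes to slot T[h(x)]

print(h(42), h(17), h(93))  # 2 7 3 -- three distinct slots
print(h(42) == h(52))       # True: 42 and 52 collide under h
```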
6.2 Chaining
In a chained hash table, each entry $T[i]$ is not just a single item, but rather (a pointer to) a linked list of all the items that hash to $T[i]$. Let $\ell(x)$ denote the length of the list $T[h(x)]$. To see if an item $x$ is in the hash table, we scan the entire list $T[h(x)]$. The worst-case time required to search for $x$ is $O(1)$ to compute $h(x)$ plus $O(1)$ for every element in $T[h(x)]$, or $O(1 + \ell(x))$ overall. Inserting and deleting $x$ also take $O(1 + \ell(x))$ time.
[Figure: A chained hash table with load factor 1, storing the letters of ALGORITHMS.]
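The following Python sketch shows how chaining works, using Python lists as the chains and Python's built-in hash in place of $h$; the class and method names are mine, not from the notes:

```python
class ChainedHashTable:
    """A chained hash table: each slot holds a list of colliding items."""

    def __init__(self, m):
        self.m = m
        self.table = [[] for _ in range(m)]  # T[i] is the chain for hash value i

    def _h(self, x):
        return hash(x) % self.m  # built-in hash stands in for h

    def search(self, x):
        # O(1) to compute h(x), plus O(1) per chain element: O(1 + l(x)) total.
        return x in self.table[self._h(x)]

    def insert(self, x):
        chain = self.table[self._h(x)]
        if x not in chain:
            chain.append(x)

    def delete(self, x):
        chain = self.table[self._h(x)]
        if x in chain:
            chain.remove(x)
```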
In the worst case, every item would be hashed to the same value, so we'd get just one long list of $n$ items. In principle, for any deterministic hashing scheme, a malicious adversary can always present a set of items with exactly this property. In order to defeat such malicious behavior, we'd like to use a hash function that is as random as possible. Choosing a truly random hash function is completely impractical, but since there are several heuristics for producing hash functions that behave close to randomly (on real data), we will analyze the performance as though our hash function were completely random. More formally, we make the following assumption.

Simple uniform hashing assumption: If $x \ne y$ then $\Pr[h(x) = h(y)] = 1/m$.
Let's compute the expected value of $\ell(x)$ under the simple uniform hashing assumption; this will immediately imply a bound on the expected time to search for an item $x$. To be concrete, let's suppose that $x$ is not already stored in the hash table. For all items $x$ and $y$, we define the indicator variable

$$C_{x,y} = \bigl[h(x) = h(y)\bigr].$$

(In case you've forgotten the bracket notation, $C_{x,y} = 1$ if $h(x) = h(y)$ and $C_{x,y} = 0$ if $h(x) \ne h(y)$.) Since the length of $T[h(x)]$ is precisely equal to the number of items that collide with $x$, we have

$$\ell(x) = \sum_y C_{x,y}.$$

We can rewrite the simple uniform hashing assumption as follows:

$$x \ne y \implies \mathrm{E}[C_{x,y}] = \frac{1}{m}.$$

Now we just have to grind through the definitions:

$$\mathrm{E}[\ell(x)] = \sum_y \mathrm{E}[C_{x,y}] = \sum_y \frac{1}{m} = \frac{n}{m}.$$

We call this fraction $n/m$ the load factor of the hash table. Since the load factor shows up everywhere, we will give it its own symbol $\alpha$:

$$\alpha = \frac{n}{m}.$$
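As a quick empirical check of this calculation, the following Python sketch (the numbers are mine, chosen for illustration) simulates simple uniform hashing by assigning each of $n$ items a uniformly random slot, then samples $\ell(x)$ for a new item $x$:

```python
import random

n, m = 1000, 500        # load factor alpha = n/m = 2
lengths = [0] * m       # lengths[i] = length of chain T[i]
for _ in range(n):
    # Simple uniform hashing: each item lands in a uniformly random slot.
    lengths[random.randrange(m)] += 1

# A new item x hashes to a uniformly random slot, so l(x) is the
# length of a random chain; average many samples of it.
samples = [lengths[random.randrange(m)] for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to alpha = 2.0
```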
Our analysis implies that the expected time for an unsuccessful search in a chained hash table is $\Theta(1 + \alpha)$. As long as the number of items $n$ is only a constant factor bigger than the table size $m$, the search time is a constant. A similar analysis gives the same expected time bound² for a successful search.

Obviously, linked lists are not the only data structure we could use to store the chains; any data structure that can store a set of items will work. For example, if the universe $U$ has a total ordering, we can store each chain in a balanced binary search tree. This reduces the worst-case time for a search to $O(1 + \log \ell(x))$, and under the simple uniform hashing assumption, the expected time for a search is $O(1 + \log \alpha)$.

Another possibility is to keep the overflow lists in hash tables! Specifically, for each $T[i]$, we maintain a hash table $T_i$ containing all the items with hash value $i$. To keep things efficient, we make sure the load factor of each secondary hash table is always a constant less than 1; this can be done with only constant amortized overhead.³ Since the load factor is constant, a search in any secondary table always takes $O(1)$ expected time, so the total expected time to search in the top-level hash table is also $O(1)$.
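Here is a minimal sketch of this two-level idea in Python, where built-in dicts stand in for the secondary tables $T_i$ (they already keep their own load factor below a constant, rebuilding with amortized overhead); the class name and structure are my illustration, not from the notes:

```python
class TwoLevelHashTable:
    """Hash table whose 'chains' are themselves small hash tables."""

    def __init__(self, m):
        self.m = m
        # buckets[i] plays the role of the secondary table T_i.
        self.buckets = [dict() for _ in range(m)]

    def _h(self, x):
        return hash(x) % self.m

    def search(self, x):
        # O(1) expected in the secondary table, so O(1) expected overall.
        return x in self.buckets[self._h(x)]

    def insert(self, x):
        self.buckets[self._h(x)][x] = True

    def delete(self, x):
        self.buckets[self._h(x)].pop(x, None)
```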
6.3 Open Addressing
Another method we can use to resolve collisions is called open addressing. Here, rather than building secondary data structures, we resolve collisions by looking elsewhere in the table. Specifically, we have a sequence of hash functions $h_0, h_1, h_2, \ldots, h_{m-1}$, such that for any item $x$, the probe sequence $\langle h_0(x), h_1(x), \ldots, h_{m-1}(x)\rangle$ is a permutation of $\langle 0, 1, 2, \ldots, m-1\rangle$. In other words, different hash functions in the sequence always map $x$ to different locations in the hash table.

We search for $x$ using the following algorithm, which returns the array index $i$ if $T[i] = x$, 'absent' if $x$ is not in the table but there is an empty slot, and 'full' if $x$ is not in the table and there are no empty slots.
OpenAddressSearch(x):
    for i ← 0 to m − 1
        if T[h_i(x)] = x
            return h_i(x)
        else if T[h_i(x)] = ∅
            return 'absent'
    return 'full'

The algorithm for inserting a new item into the table is similar; only the second-to-last line is changed to T[h_i(x)] ← x. Notice that for an open-addressed hash table, the load factor is never bigger than 1.
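As a runnable sketch, here is the same logic in Python, using linear probing, $h_i(x) = (h(x) + i) \bmod m$, as one concrete probe sequence; the notes leave the probe sequence abstract, so this choice (and the class name) is mine:

```python
EMPTY = None  # marker for an empty slot

class OpenAddressHashTable:
    def __init__(self, m):
        self.m = m
        self.T = [EMPTY] * m

    def _probe(self, x, i):
        # Linear probing: h_i(x) = (h(x) + i) mod m. As i runs from 0
        # to m-1 this visits every slot exactly once, so the probe
        # sequence is a permutation of <0, 1, ..., m-1>.
        return (hash(x) + i) % self.m

    def search(self, x):
        for i in range(self.m):
            j = self._probe(x, i)
            if self.T[j] == x:
                return j         # found x at array index j
            if self.T[j] is EMPTY:
                return 'absent'  # an empty slot ends the search
        return 'full'

    def insert(self, x):
        # Same loop; storing x in the first empty slot replaces
        # the 'absent' case, exactly as described above.
        for i in range(self.m):
            j = self._probe(x, i)
            if self.T[j] == x:
                return j
            if self.T[j] is EMPTY:
                self.T[j] = x
                return j
        return 'full'
```

Note that linear probing generates only $m$ distinct probe sequences rather than all $m!$ permutations, so it falls well short of the assumption stated next; it is just the simplest concrete example.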
Just as with chaining, we'd like the sequence of hash values to be random, and for purposes of analysis, there is a stronger uniform hashing assumption that gives us constant expected search and insertion time.

Strong uniform hashing assumption: For any item $x$, the probe sequence $\langle h_0(x), h_1(x), \ldots, h_{m-1}(x)\rangle$ is equally likely to be any permutation of the set $\{0, 1, 2, \ldots, m-1\}$.
² ...but with smaller constants hidden in the $O()$; see p. 225 of CLR for details.

³ This means that a single insertion or deletion may take more than constant time, but the total time to handle any sequence of $k$ insertions or deletions, for any $k$, is $O(k)$ time. We'll discuss amortized running times after the first midterm. This particular result will be an easy homework problem.