
SYLLABUS

Algorithms, Abstract Data Type, The Running Times Of a Program, Good Programming Practice,
The data type, Implementation of lists, Array implementation of list, Pointer implementation of
list, Doubly-linked lists, Stack, Queues, Mapping.

Sets, An ADT with union, intersection and difference, Bit vector, implementation of sets, Link-list
implementation of sets, The data dictionary.

Efficiency of algorithms, Analysis of recursive programs, Solving recurrence equation, Divide and
conquer algorithms, Dynamic programming, Greedy algorithm, Prim’s algorithm, Kruskal’s
algorithm, Dijkstra’s method, Backtracking.

Basic terminology, Implementation of tree, An Array Representation of Trees, Representation of
Trees by List of Children, Binary trees, The internal sorting model, Bubble sort, Insertion sort,
Selection sort, Heap sort, Divide and conquer, Merge sort, Quick sort, Binary search.

A Model of External Computation, External sorting, Characteristics of External Sorting, Criteria
for Developing an External Sorting Algorithm, Important Uses of External Sorting, Merge Sort--A
Digression, Top-Down Strategy, Bottom-Up Strategy, Storing Information in Files, Hashed Files,
Indexed Files.

The Issues in Memory, Garbage Collection Algorithms For Equal-Sized Block, Collection in Place,
Buddy system, Distribution of blocks, Allocation blocks, Returning blocks to available storage,
Storage compaction and Compaction problem

Introduction to NP Problem, Polynomial-time, Abstract Problems, Encoding, NP-Completeness and
Reducibility, NP-Completeness, Circuit Satisfiability, NP-Complete Problems, The Vertex-cover
Problem, The Subset-sum Problem, The Hamiltonian-cycle Problem, The Traveling-salesman
Problem.

1 www.arihantinfo.com
TABLE OF CONTENTS

UNIT 1
Basics of Algorithms
1.1 Algorithm
1.2 Abstract Data Type
1.3 The Running Times Of a Program
1.4 Good Programming Practice

UNIT 2
Basic Data Type
2.1 The data type “list”
2.2 Static and Dynamic Memory Allocation
2.3 Pointers
2.4 Implementation of lists
2.5 Linear Linked List
2.6 Array implementation of list
2.7 Pointer implementation of list
2.8 Doubly-linked lists
2.9 Stack
2.10 Queues
2.11 Mapping

UNIT 3
Basic Operations and Sets
3.1 Sets
3.2 An ADT with union, intersection and difference
3.3 Bit vector implementation of sets
3.4 Link-list implementation of sets
3.5 The data dictionary

UNIT 4
Algorithms Analysis Techniques
4.1 Efficiency of algorithms
4.2 Analysis of recursive programs
4.3 Solving recurrence equation

UNIT 5
Algorithms Design Technique
5.1 Divide and conquer algorithms
5.2 Dynamic programming
5.3 Greedy algorithm
5.4 Minimum-cost spanning trees
5.5 Minimum Spanning Tree
5.6 Prim’s Algorithm
5.7 Kruskal’s Algorithm
5.8 Shortest Paths
5.9 Dijkstra’s Algorithm
5.10 Backtracking

UNIT 6

Trees and Sorting
6.1 Basic terminology
6.2 Implementation of tree
6.3 An Array Representation of Trees
6.4 Representation of Trees by List of Children
6.5 Binary trees
6.6 The internal sorting model
6.7 Bubble sort
6.8 Insertion sort
6.9 Selection sort
6.10 Heap sort
6.11 Divide and conquer
6.11.1 Merge sort
6.11.2 Quick sort
6.11.3 Binary search

UNIT 7
Algorithms for External Storage
7.1 A Model of External Computation
7.2 External sorting
7.3 Characteristics of External Sorting
7.4 Criteria for Developing an External Sorting Algorithm
7.5 Important Uses of External Sorting
7.6 Merge Sort--A Digression
7.7 Top-Down Strategy
7.8 Bottom-Up Strategy
7.9 Storing Information in Files
7.10 Hashed Files
7.11 Indexed Files

UNIT 8
Memory Management
8.1 The Issues in Memory
8.2 Garbage Collection Algorithms For Equal-Sized Block
8.3 Collection in Place
8.4 Buddy system
8.5 Distribution of blocks
8.6 Allocation blocks
8.7 Returning blocks to available storage
8.8 Storage compaction and Compaction problem

UNIT 9
NP Complete Problem
9.1 Introduction
9.2 Polynomial-time
9.3 Abstract Problems
9.4 Encoding
9.5 NP-Completeness and Reducibility
9.6 NP-Completeness
9.7 Circuit Satisfiability
9.8 NP-Complete Problems
9.9 The Vertex-cover Problem
9.10 The Subset-sum Problem
9.11 The Hamiltonian-cycle Problem
9.12 The Traveling-salesman Problem

UNIT 1

Basics of Algorithms

1.1 Algorithm
1.2 Abstract Data Type
1.3 The Running Times Of a Program
1.4 Good Programming Practice

1.1 Introduction to Algorithms

Computer Science can be defined as the study of algorithms. This study encompasses four distinct
areas:

• Machines for executing algorithms.


• Languages for describing algorithms.
• Foundations of algorithms.
• Analysis of algorithms.

It can be easily seen that "algorithm" is a fundamental notion in computer science. So it deserves
a precise definition. The dictionary's definition, "any mechanical or recursive computational
procedure," is not entirely satisfying since these terms are not basic enough.

Definition: An algorithm is a finite set of instructions which, if followed, accomplishes a
particular task. In addition, every algorithm must satisfy the following criteria:

1. Input: there are zero or more quantities which are externally supplied;
2. Output: at least one quantity is produced;
3. Definiteness: each instruction must be clear and unambiguous;
4. Finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm
will terminate after a finite number of steps;
5. Effectiveness: every instruction must be sufficiently basic that a person using only pencil
and paper can in principle carry it out. It is not enough that each operation is definite, but
it must also be feasible.
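The five criteria can be illustrated with Euclid's greatest-common-divisor algorithm, sketched here in C++ (the example and the function name are illustrative additions, not part of the original text):

```cpp
#include <cassert>

// Euclid's algorithm, annotated against the five criteria above:
// input (two non-negative integers, not both zero), output (one integer),
// definiteness (each step is unambiguous), finiteness (b strictly
// decreases, so the loop must stop), and effectiveness (only a remainder
// operation, feasible with pencil and paper).
int gcd(int a, int b)
{
    while (b != 0)          // finiteness: b shrinks on every iteration
    {
        int r = a % b;      // effectiveness: one basic operation
        a = b;
        b = r;
    }
    return a;               // output: at least one quantity is produced
}
```

For instance, gcd(12, 18) terminates after two remainder steps, satisfying criterion 4.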

In formal computer science, one distinguishes between an algorithm and a program. A
program does not necessarily satisfy the fourth condition. One important example of such a
program for a computer is its operating system, which never terminates (except for system
crashes) but continues in a wait loop until more jobs are entered.

An algorithm can be described in several ways. One way is to use a graphical form of
notation such as a flowchart. This form places each processing step in a "box" and uses arrows to
indicate the next step. Different shaped boxes stand for different kinds of operations.

If a computer is merely a means to an end, then the means may be an algorithm but the
end is the transformation of data. That is why we often hear a computer referred to as a data
processing machine. Raw data is input and algorithms are used to transform it into refined data.
So, instead of saying that computer science is the study of algorithms, alternatively, we might say
that computer science is the study of data.

1.2 Abstract Data Type:

An ADT is a kind of data abstraction in which a type's internal form is hidden behind a set
of access functions. Values of the type are created and inspected only by calls to the access
functions. This allows the implementation of the type to be changed without requiring any
changes outside the module in which it is defined. Objects and ADTs are both forms of data
abstraction, but objects are not ADTs: objects use procedural abstraction (methods), not type
abstraction.
A classic example of an ADT is a stack data type for which functions might be provided to
create an empty stack, to push values onto a stack and to pop values from a stack.
Abstract data types are central to object-oriented programming where every class is an
ADT.
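The classic stack example just mentioned can be sketched as follows, assuming integer elements and a fixed capacity (the class and member names are illustrative, not from the text):

```cpp
#include <cassert>

// A minimal stack ADT: the array and top index are private, so callers
// can create and inspect values only through the access functions, and
// the internal representation could be changed without affecting them.
class IntStack
{
private:
    static const int Capacity = 100;
    int Data[Capacity];
    int Top;                         // index of the next free slot
public:
    IntStack() : Top(0) { }          // create an empty stack
    bool Empty() const { return Top == 0; }
    bool Push(int Value)
    {
        if (Top == Capacity)
            return false;            // stack full
        Data[Top++] = Value;
        return true;
    }
    bool Pop(int & Value)
    {
        if (Top == 0)
            return false;            // stack empty
        Value = Data[--Top];
        return true;
    }
};
```

A caller sees only create, push, pop and empty; replacing the array with a linked representation would require no changes outside this class.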

1.3 The Running Time of a Program


When solving a problem we are frequently faced with a choice among algorithms. On what
basis should we choose? There are two often-contradictory goals.
1. We would like an algorithm that is easy to understand, code, and debug.
2. We would like an algorithm that makes efficient use of the computer's resources;
especially, one that runs as fast as possible.
Measuring the running time of a program: The running time of a program depends on factors
such as:
1. The input to the program.
2. The quality of the code generated by the compiler used to create the object program.
3. The nature and speed of the instructions on the machine used to execute the program.
4. The time complexity of the algorithm underlying the program.

Calculating the Running Time of a Program

When finding the running time of a program that has deeply nested loops, it is a good idea
to know how summations work and why we use them: a summation describes exactly what a
loop computes, contributing one term per iteration.

Some important properties of summations are worth reviewing, since they make
summations easier to work with. In particular, the usual way to analyze a nested loop is to write
its cost as a summation and then manipulate that summation into a standard closed form, such
as 1 + 2 + ... + n = n(n+1)/2.

From such closed forms, simple rules for the running time of common loop patterns follow.
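As a concrete illustration of the loop/summation correspondence (the function name is an illustrative addition, not from the text):

```cpp
#include <cassert>

// A doubly nested loop whose inner body runs i times on the i-th pass of
// the outer loop performs 1 + 2 + ... + n = n(n+1)/2 units of work in
// total, which is how the summation formula yields an O(n^2) bound.
long CountSteps(int n)
{
    long steps = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i; j++)
            steps++;             // one unit of work per inner iteration
    return steps;
}
```

For n = 10 the total is 10·11/2 = 55 steps, matching the closed form.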

A good way to get a feel for how Big Oh works is to look at the following table of common growth rates.

Expression    Name
O(1)          constant
O(log n)      logarithmic
O(log^2 n)    log squared
O(n)          linear
O(n log n)    n log n
O(n^2)        quadratic
O(n^3)        cubic
O(2^n)        exponential

Table: The Names of Common Big Oh Expressions

1.4 Good Programming Practice:

There are a substantial number of ideas we should bear in mind when designing an algorithm
and implementing it as a program. These ideas often appear platitudinous, because by and large
they can be appreciated only through their successful use in real problems, rather than by
development of theory.

1. Plan the design of a program. This strategy of sketch-then-detail tends to produce a
more organized final program that is easier to debug and maintain.
2. Encapsulate. Use procedures and ADT’s to place the code for each principal operation
and type of data in one place in the program listing. Then, if changes become
necessary, the section of code requiring change will be localized.
3. Use or modify an existing program. One of the chief inefficiencies in the
programming process is that usually a project is tackled as if it were the first program
ever written. One should first look for an existing program that does all or a part of the
task.
4. Program at the command level. A well-designed operating system will allow us to
connect a network of available programs together without writing any programs at all,
except for one list of operating system commands. To make commands composable, it
is generally necessary that each behaves as a filter, a program with one input file and
one output file.

UNIT 2

Basic Data Type

2.1 The data type “list”


2.2 Static and Dynamic Memory Allocation
2.3 Pointers
2.4 Linear Linked List
2.4.1 Array implementation of list
2.4.2 Pointer implementation of list
2.4.3 Doubly-linked lists
2.5 Stack
2.6 Queues
2.7 Mapping

2.1 The Data Type “List”


Using ADT’s allows the data in a specific piece of code to be hidden from other pieces of
code that don't need and shouldn't have access to it. This is often called modular programming or
encapsulation. The idea behind the ADT is to hide the actual implementation of the code, leaving
only its abstraction visible. In order to do this, one needs to find a clear division between the
linked list code and the surrounding code. When the linked list code is removed, what remains is
its logical abstraction. This separation makes the code less dependent on any one platform. Thus,
programming using the ADT method is usually considered a necessity for cross-platform
development as it makes maintenance and management of the application code much easier.
The ADT concept is developed here via the example of how the doubly linked list
Application Programming Interface (API) was created. The doubly linked list (DLL) can also be
built as an independent C module and compiled into a larger program.
Figure: An abstract data type (ADT).

2.2 Static and Dynamic Memory Allocation: -


In static memory allocation memory is allocated at compile time. If we declare a stack
through an array of 100 elements (all integers) then the statement will be:
int stk[100];
This declaration would typically be used if up to 100 records are to be stored in memory.
The moment we make this declaration, 200 bytes (assuming 2-byte integers) are reserved in
memory for storing 100 integers. However, when we actually run the program we might be
interested in storing only 60 records; even in this case all 200 bytes remain reserved, which
results in wastage of memory. There may also be cases when we need to store more than 100
records; in that case the array would fall short in size.
We can overcome these problems by allocating memory at run time (instead of compile
time); this is called Dynamic Memory Allocation. It is done by using the standard library
functions malloc() and calloc(). The syntax for allocating memory through malloc() is:
p = (datatype *) malloc(n * sizeof(datatype));
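A small sketch of this syntax in use (the function name AllocateRecords is an illustrative addition, not from the text):

```cpp
#include <cstdlib>
#include <cassert>

// Run-time allocation: space for n integers is requested only when the
// program runs, so no memory is wasted on records that are never stored.
// (AllocateRecords is an illustrative name, not from the text.)
int * AllocateRecords(int n)
{
    return (int *) malloc(n * sizeof(int));   // NULL if the request fails
}
```

For example, AllocateRecords(60) reserves room for exactly 60 records instead of a fixed 100; the space should later be released with free().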

2.3 Pointers
A pointer is a variable that gives the address (location, link or reference) of some other
variable. Like other variables, a pointer cannot be used before declaration. We can declare a
pointer variable as:
datatype *p;
This declaration specifies that p is a pointer which will store the address of a variable of a
particular datatype. '*' stands for 'value at address'. Thus if we declare int *p; it means that
the value at the address contained in p is an integer.
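A brief illustration of the 'value at address' meaning of * (the function name is an illustrative addition, not from the text):

```cpp
#include <cassert>

// p holds the address of an int; *p reads or writes the value stored at
// that address, so the change is visible to the caller's variable.
int DoublePointedValue(int *p)
{
    *p = *p * 2;     // "value at the address contained in p"
    return *p;
}
```

Calling DoublePointedValue(&x) doubles x itself, because the function works through x's address rather than a copy.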
Implementation of Lists: -
In the implementation of lists we shall describe data structures that can be used to
represent lists. We shall consider array, pointer and cursor implementations of a list.
Linked Linear List
We saw in previous chapters how the static representation of a linear ordered list through
an array leads to wastage of memory and, in some cases, overflow. Now we do not want to assign
memory to any linear list in advance; instead we want to allocate memory to elements as they are
inserted in the list. This requires dynamic allocation of memory, which can be achieved by using
the malloc() or calloc() function.
But the memory assigned to the elements will not be contiguous, which is the requirement
for a linear ordered list that the array representation provided. How could we achieve the same
ordering without contiguity?
We have to consider a logically ordered list: the elements are stored at different memory
locations, but each element holds the address of its logical successor in the list, so together they
form a logical list, as in the figure.

A1 -> A2 -> A3 -> A4 -> ... -> An

Figure: Logical List

We can understand this concept through a real-life example. Suppose there is a list of 8
friends, x1, x2, ..., x8. Each friend resides at a different location in the city. x1 knows the address
of x2, x2 knows the address of x3, and so on; x7 has the address of x8. If one wants to go to the
house of x8 without knowing its address, one goes to x1, then to x2, and so on, as in the figure
below.
The concept of a linked list is like a family dispersed across a city, but still bound together.
From the above discussion it is clear that a linked list is a collection of elements called nodes,
each of which stores two items of information:
• An element of the list
• A link, i.e. the address of another node

x1 [Add. of x2] -> x2 [Add. of x3] -> x3 -> ... -> x8 [NULL]

Figure

The link can be provided through a pointer that indicates the location of the node containing the
successor of this list element. The NULL in the last node indicates that this is the last node in the
list.
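The node structure just described can be sketched as follows (the struct and function names are illustrative additions, not from the text):

```cpp
#include <cstddef>
#include <cassert>

// A minimal node: one field for the element and one link holding the
// address of the logical successor; NULL marks the last node.
struct ListNode
{
    int Element;
    ListNode *Next;
};

// Traverse the list by following the links until NULL is reached,
// counting the nodes visited along the way.
int Length(ListNode *head)
{
    int count = 0;
    for (ListNode *p = head; p != NULL; p = p->Next)
        count++;
    return count;
}
```

Traversal of any operation on such a list works the same way: start at the first node and follow Next links until NULL.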

2.4 Linked List

2.4.1 Array Implementation of List: -


This type of implementation of a linked list is sometimes useful. It suffers from the usual
array problem of having to estimate the needed size of the array in advance. There are ways
around this problem (such as using a vector instead of an array, allocating space for the array
dynamically, or even allocating space for additional arrays as needed), but they have their own
complications. We will assume here that the maximum size needed can be predicted in advance.
The array will hold a fixed number of records (structures), each of which represents a node.
In this implementation, a linked list object contains an array of nodes, perhaps called
NodeArray, and four integer fields labelled Avail, Front, Rear, and Count. Each record in
NodeArray contains an Info field and an integer Next field. Avail, Front, Rear, and each Next field
all contain fake pointers: what we use as a fake pointer to a node is the array index of that node.
To represent the NULL pointer we can use an illegal array index, such as -1. We will not start our
list with a dummy node, though it is possible to do so by making a few small changes. The picture
of an empty list is as follows:

The idea is that Front points to the first node of the list. Since it is -1, our imitation NULL
pointer, this means that we have an empty list. The Count of zero also indicates the same thing.
Rear is -1, indicating that there is no node at the rear of the list (as the list is empty). The Avail
field is intended to point to a list of free nodes. The programmer must arrange to manage free
space within the array with this method! In our picture above, all of the nodes are free and are
linked together in a list, with Avail containing zero and thus pointing at the first free node,
NodeArray[0]. That node has a Next value of 1, showing that it points to NodeArray[1], and so on.
Finally, NodeArray[4].Next contains our imitation NULL pointer to mark the end of the list.
After items have been dynamically inserted into the list and perhaps items have been
deleted as well, we might end up with a picture such as the following.

Can you read what is on this list? All you do is follow the (fake) pointers. Since Front contains 1,
the first node contains the string "Ann". This node contains a pointer to node 0, so the second
node contains "Sam". This node contains a pointer to node 4, so the third node contains "Mary".
Finally, this node has -1 in the Next field, marking the end of the list. So the three data items in
the list are "Ann", "Sam", and "Mary". If you need a quick way to get to the last node on the list,
you use the 4 found in the Rear field. The list of available (free) nodes contains node 2 and node 3.
Note that such a list object always contains two linked lists: the list of data nodes and the list of
available (free) nodes. It is left as an exercise to the reader to implement a linked list class using
this array-based scheme.
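As a start on that exercise, here is a hedged sketch of the scheme, assuming five integer nodes (the struct and function names are illustrative; a full class would also need operations that return nodes to the Avail list):

```cpp
#include <cassert>

// Array-based ("cursor") linked list: array indices play the role of
// pointers and -1 imitates NULL, exactly as described in the text.
const int MaxNodes = 5;
const int NullIndex = -1;

struct CursorList
{
    int Info[MaxNodes];
    int Next[MaxNodes];
    int Avail, Front, Rear, Count;

    CursorList() : Avail(0), Front(NullIndex), Rear(NullIndex), Count(0)
    {
        // Link every node into the free list, as in the empty-list picture.
        for (int i = 0; i < MaxNodes - 1; i++)
            Next[i] = i + 1;
        Next[MaxNodes - 1] = NullIndex;
    }

    bool InsertFront(int Item)
    {
        if (Avail == NullIndex)
            return false;          // no free nodes left
        int node = Avail;          // take a node from the free list
        Avail = Next[node];
        Info[node] = Item;
        Next[node] = Front;        // fake pointer to the old first node
        Front = node;
        if (Count == 0)
            Rear = node;           // first insertion also sets Rear
        Count++;
        return true;
    }
};
```

Note that the object always holds two lists at once: the data list reachable from Front and the free list reachable from Avail.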

2.4.2 Pointer Implementation of List: -


It is very common to implement a linked list using pointers and either structures or
objects for the nodes. We will choose objects for the nodes and set up the class ListNodeClass as
shown below.
class ListNodeClass
{
private:
    ItemType Info;
    ListNodeClass * Next;
public:
    // First, the constructor:
    ListNodeClass(const ItemType & Item, ListNodeClass * NextPtr = NULL):
        Info(Item), Next(NextPtr)
    {
    }
    void GetInfo(ItemType & TheInfo) const;
    friend class ListClass;   // very convenient to allow this
};

typedef ListNodeClass * ListNodePtr;


Note that the two data fields are private, as usual. The type ItemType would have to be set
up to handle whatever data we want to hold in the list. The Next field is a pointer to another
ListNodeClass object, since the * refers to whatever Next points to.
The most important function is the constructor. It takes a data item and a pointer as its
parameters and uses them to fill in the fields of the object being constructed. Note that the pointer
parameter has a default value of NULL. Recall that Info(Item), Next(NextPtr) means to copy
Item into Info and NextPtr into Next. The GetInfo function simply sends back a copy of the data
from the object.
The ListNodeClass is also told that it has a friend class named ListClass. Functions of the
friend class have direct access to the private data fields (Info and Next) of ListNodeClass objects. It
is not necessary to allow this direct access, but it is easier than the alternative: using get and set
class functions every time a ListClass function wants to look at or change the private fields of a
ListNodeClass object. Some people refuse to use friend classes, however, because they give more
access than is really necessary, thus partially defeating the information hiding that classes provide.
Finally, note that the above code sets up ListNodePtr as a convenient type name for a
pointer to a node. This means that we won't have to use ListNodeClass * from here on in the rest
of the code, but can instead use ListNodePtr.
Next, let's set up a class declaration for list objects. Each list will have three private data
fields: a pointer to the first node on the list, a pointer to the last node on the list, and a count of
how many data nodes are in the list.
The Front pointer will point to what is called a dummy node. This is a node containing no data. It
is easier to code the insertion and deletion operations if the first node on the list contains no data.
Otherwise, the code to insert a new first node or to delete the current first node must be different
from the code to handle the normal cases.
The following is the class declaration for ListClass. Helping functions, which are only used
by other ListClass functions, are made private so that they cannot be accessed otherwise. (This is
another instance of information hiding.) There is a comment that the three data fields are
sometimes made into protected fields. The purpose of this is that if we ever created a subclass
(using inheritance), functions of the subclass would be able to directly access these fields. With
private fields, the fields would be inherited, but not directly accessible by the subclass. Since we
do not plan to inherit a new class from ListClass, this is not a concern.
class ListClass
{
private:
    ListNodePtr GetNode(const ItemType & Item,
                        ListNodePtr NextPtr = NULL);
    void FreeNode(ListNodePtr NodePtr);
    void ClearList(void);
    // Next 3 are sometimes made into protected fields:
    ListNodePtr Front, Rear;
    int Count;
public:
    // constructor:
    ListClass(void);
    // destructor:
    ~ListClass(void);
    int NumItems(void) const;
    bool Empty(void) const;
    void InsertFront(const ItemType & Item);
    void InsertRear(const ItemType & Item);
    void InsertInOrder(const ItemType & Item);
    ItemType RemoveFront(void);
    ListNodePtr Find(const ItemType & Item) const;
};
The public functions above provide the usual linked list operations. The above class
declaration thus shows that, from an abstract data type point of view, the list operations are list
creation, list destruction, getting the number of items in the list, checking if the list is empty,
inserting an item at the front of the list, inserting an item at the rear of a list, inserting an item
into an ordered list, removing an item from the front of a list, and finding an item in the list.
Certainly other operations are possible. For example, why not have an operation to remove the
item from the rear of a list? (One practical reason not to do so is that it is not easy to adjust the
Rear pointer after deleting the last node and to get a NULL into the Next field of the new last node.
The only way to do so appears to be to traverse the list from the first node to the last, before
removing the final node, in order to get a pointer to the next-to-last node!)
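To make that point concrete, here is a hedged sketch of such a remove-at-rear operation on a simplified singly linked node type (int data, no dummy node; SNode and RemoveRear are illustrative names, not part of the text's ListClass):

```cpp
#include <cstddef>
#include <cassert>

// Removing at the rear of a singly linked list: nothing points backward,
// so the whole list must be walked to find the next-to-last node.
struct SNode
{
    int Info;
    SNode *Next;
};

// Delete the last node and return the (possibly new) first node.
SNode * RemoveRear(SNode *front)
{
    if (front == NULL)
        return NULL;                    // nothing to remove
    if (front->Next == NULL)
    {
        delete front;                   // one-node list becomes empty
        return NULL;
    }
    SNode *prev = front;
    while (prev->Next->Next != NULL)    // traverse to next-to-last node
        prev = prev->Next;
    delete prev->Next;
    prev->Next = NULL;                  // new last node gets the NULL link
    return front;
}
```

The traversal loop is exactly the cost the text warns about: removing at the rear is O(n), whereas removing at the front is O(1).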

A ListClass object, named List is pictured below. Functions are not shown in the objects in
order to keep the diagram from becoming cluttered, but if they were, the private functions would
be shown as completely boxed in, whereas the public functions would be shown in open boxes to
indicate that they are available for public use. Note that a ListClass object doesn't really contain
the data in the list. Rather, the data is contained in node objects that are outside of the List object
itself. The List object simply contains pointers to the first and last nodes of the list.

The complete code files for this example are given below. The last file contains a simple
test program to try out several linked list operations. This test program is straightforward and will
not be commented upon here. Read through these files to get the general idea, and then go on to
the notes that follow.
In the next to the last file above, you will note that the constructor initializes a ListClass
object to contain a Count of zero, and to have Front and Rear pointers that point to a dummy
node. Of course, this means that it has to create a dummy node. This is handled by the helping
function GetNode. In turn, GetNode uses new with the ListNodeClass constructor to dynamically
allocate a new node.
Since a ListClass object points to other objects outside of it, it is important to have a
destructor to clean up the space used by these ListNodeClass objects. The destructor uses the
ClearList function to reclaim the space used by the data nodes and then calls FreeNode(Front) to
get rid of the dummy node. The ListClass object itself is automatically removed.
The FreeNode function is trivial in that it just uses delete to remove the node pointed to.
The ClearList function, however, is more interesting. Let's look at it in more detail since it is a
good example of a list processing function.
/* Given: Nothing.
Task: Clear out all nodes on the list except the dummy.
The list is thus set to an empty list.
Return: Nothing directly, but the implicit ListClass object is modified.
*/
void ListClass::ClearList(void)
{
    ListNodePtr Current, Temp;

    Current = Front->Next;
    while (Current != NULL)
    {
        Temp = Current;
        Current = Current->Next;
        FreeNode(Temp);
    }

    Front->Next = NULL;
    Rear = Front;
    Count = 0;
}
In essence the idea is to traverse the list, freeing up each data node as it is encountered.
The Current pointer is used to point to the successive nodes on the list. Note that it is initialized
to point where the dummy node's Next field points, that is, to point to the first data node. Then we
have a while loop. Inside of it, the Current pointer is copied into Temp, Current is advanced by
placing in it a copy of the Next field of the node we have been pointing at, and FreeNode is used to
free up the node that Temp points to. Notice that we can't simply free the node that Current
points to each time around the loop. If we did that we would lose our pointer into the linked list
and have no way to advance to the next node. The loop stops when Current is advanced to pick
up the NULL value marking the end of the list. After the loop ends, the ListClass object is adjusted
so that the dummy node's Next field is NULL, Rear points back to the dummy node, and the Count
is zero. In other words, the list object is changed into the typical picture of an empty list.
Let's look at a few more of the list manipulation functions, starting with InsertFront.
/* Given:  Item   A data item.
   Task:   To insert a new node containing Item at the front of the
           implicit ListClass object.
   Return: Nothing directly, but the implicit object is modified.
*/
void ListClass::InsertFront(const ItemType & Item)
{
    ListNodePtr NodePtr;

    NodePtr = GetNode(Item, Front->Next);
    Front->Next = NodePtr;

    if (Count == 0)
        Rear = NodePtr;
    Count++;
}

It is clear that the GetNode function is being used to create a node to hold the new data item. The
second parameter to this function is used to initialize the Next field of this new node so that it
points to the first data node. At this point we have the following picture:

To finish things off, the function adjusts the Next field of the dummy node so that it points
to the new node. The Count field for the list also needs to be incremented. That finishes things for
the example in our picture.
Unfortunately, it is not always quite that simple. There is a special case lurking in the
background. What happens if we insert at the front of an empty list? Remember that an empty list
looks like this:

The steps we previously did to insert a new node at the front of the list are still fine, but
they leave one thing out: they do not adjust the Rear pointer, currently pointing at the dummy
node, to point to the newly inserted node. That is handled by the if in the code above.
You may not have noticed it, but the ListClass functions directly access the private data
fields of the ListNodeClass objects. That would not normally be allowed, but it is legal here
because we made ListClass a friend of ListNodeClass.
Next, let's look at the InsertInOrder function. Note that it assumes that the list is already
in order.
/* Given:  Item   A data item.
   Task:   Insert Item into a new node added to the implicit ListClass
           object (assumed to already be in ascending order based on the
           Info field) so as to maintain the ascending order.
   Return: Nothing directly, but the implicit object is modified.
*/
void ListClass::InsertInOrder(const ItemType & Item)
{
    ListNodePtr Current, Last, Temp;
    bool Proceed;

    Temp = GetNode(Item);
    Current = Front->Next;
    Last = Front;
    Proceed = true;

    while (Proceed && (Current != NULL))
        if (Item > Current->Info)
        {
            Last = Current;
            Current = Current->Next;
        }
        else
            Proceed = false;

    Last->Next = Temp;
    Temp->Next = Current;

    if (Current == NULL)   // item was inserted at rear of list
        Rear = Temp;
    Count++;
}

Suppose that we have an ordered list containing 4, 6, 9, and that we want to insert 8
using the above function. Note that it moves a pair of pointers, Current and Last, through the list,
with Last always pointing to the node before the one that Current points to. Current starts out by
being initialized to point to the first data node, while Last points to the dummy node. Each time
around the while loop a check is made to see if the 8 is greater than the item contained in the
node that Current points to. If so, the pointers are advanced so that each points to the next node.
If not, then the place to insert has been found, so the loop is stopped by setting the Proceed flag to
false. The picture at the point where the insertion location has been found is shown below:

All that remains to be done is to link in the new node, increase the Count by 1, and check for any
special cases. It turns out that the only special case is when the new item ends up going at the
very end of the list. In such a case the Rear pointer must be adjusted (as shown in the code
above).
The RemoveFront function is simpler. It exits with an error message if there is no data
item to remove. Otherwise, it sets up NodePtr to point to the first data node and extracts the
value from this node's Info field. It then adjusts the Next field of the dummy node so that it points
to the second data node, skipping around the first one. The Count is decremented and the first
data node is freed up. Not surprisingly, there is a special case. This occurs when the list has only
one data node: after removing it, the Rear field must be adjusted to point to the dummy node.
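The logic just described can be sketched on a simplified list with a dummy front node (int data, bare structs; FNode, FList and this RemoveFront are illustrative stand-ins for the text's ListClass, and this version assumes the caller has checked that the list is not empty):

```cpp
#include <cstddef>
#include <cassert>

// Simplified node and list with a dummy front node, as in the text.
struct FNode
{
    int Info;
    FNode *Next;
};

struct FList
{
    FNode *Front;   // points to the dummy node
    FNode *Rear;
    int Count;
};

// Remove the first data node and return its value. Assumes Count > 0;
// the text's version instead exits with an error message when empty.
int RemoveFront(FList & L)
{
    FNode *NodePtr = L.Front->Next;   // first data node
    int Item = NodePtr->Info;         // extract the value
    L.Front->Next = NodePtr->Next;    // skip around the first data node
    L.Count--;
    if (L.Count == 0)
        L.Rear = L.Front;             // special case: list is now empty
    delete NodePtr;
    return Item;
}
```

The special case at the end is the one noted above: once the last data node is gone, Rear must fall back to the dummy node.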

2.4.3 Doubly-Linked Lists: -

A doubly linked list has two pointer fields in each node, often named Left and Right. If the
linked list is pictured as a horizontal sequence of nodes, the Right pointers are used to traverse
the lists from left to right (that is, from beginning to end). However, the Left pointers can be used
to back up to the left whenever that is desired. The following is one possible picture of a doubly
linked list. Sometimes dummy nodes are added at the front and rear as well.

One can thus traverse a doubly linked list in both the forward and backward directions.
This can make coding simpler, for example, when writing a sequential search function to find a
given node in the list, with the plan that we will use this to find the node after which to insert a
new node. With a singly linked list we needed to return both a pointer to the node found and a
pointer to the previous node. With a doubly linked list it suffices to return a pointer to the
matching node, since we can always follow its Left field to get at the previous node.

Layout of Doubly Linked List in memory

The arrows indicate to what the Prior and Next pointers point. The Current pointer can
point to any Node Struct, so it is open-ended.
In order to fulfill the first requirement, I decided to strictly adhere to the ANSI C standard,
and, with the possible exception of how one sets up one's data and uses the DLL's input/output
functions, there should be no endian (byte order) problems. The second requirement was met with
the creation of a top-level structure. There is only one of these structures per linked list. It keeps
track of the node pointers, the size of the applications data in bytes, how many nodes are in the
list, whether or not the list has been modified since it was created or loaded into memory, where
searching starts from, and what direction a search proceeds in. Figure 1 illustrates how the top-
level structure is integrated into the DLL.
typedef struct list
{
    Node *head;
    Node *tail;
    Node *current;
    Node *saved;
    size_t infosize;
    unsigned long listsize;
    DLL_Boolean modified;
    DLL_SrchOrigin search_origin;
    DLL_SrchDir search_dir;
} List;

This and the next typedef structure remain hidden from the application program. The
node pointers mentioned above are defined in the next structure, which includes the pointers to
the application's data and the pointers to the next and prior nodes. One of these structures is
created for each node in the list.

typedef struct node
{
    Info *info;
    struct node *next;
    struct node *prior;
} Node;
The last definition is a dummy typedef of the user's data. It is defined as type void so that
the DLL's functions will be able to handle any of C's or an application's data types.
typedef void Info;
As you can see, if the two structures mentioned above are hidden from the application, all
of the ugly innards of how the linked list operates will by default be hidden from the application.
Thus, we have an abstract data type.

2.5 STACK: -

A stack is an abstract data type that consists of an ordered collection of items with
two basic operations called push and pop. The push operation adds a single new item to the
stack collection. The pop operation removes a single item from the stack collection, in the
reverse of the order in which the items were added using the push operation.

We can model a stack data structure in terms of a pipe that has one end open and the
other closed. The push operation places items into the pipe from the open end. The pop
operation removes an item only from the open end. Assuming the items do not shift around in
the pipe, the items can only be removed in the reverse of the order in which they were placed into
the pipe. Although in theory the stack is like a pipe of infinite length that can hold an endless
number of items, in practice, due to the finite memory available on a computer system, the stack
has a maximum size. This is called the maximum stack size. In theory, one never needs to
track the number of elements currently in the stack collection, but in practice this count is called the
current stack size. The operation of performing a pop on an empty stack produces a special error
called a stack underflow. Since in practice a stack can only contain a finite number of elements,
an error called a stack overflow occurs when the stack is full and a push operation is performed.
A stack thus operates in LIFO (last-in, first-out) order.

Adding into stack

procedure add(item : items);
{add item to the global stack stack;
 top is the current top of stack and n is its maximum size}
begin
    if top = n then stackfull;
    top := top + 1;
    stack[top] := item;
end; {of add}

Deletion in stack

procedure delete(var item : items);
{remove the top element from the stack stack and put it in item}
begin
    if top = 0 then stackempty;
    item := stack[top];
    top := top - 1;
end; {of delete}

These two procedures are so simple that they perhaps need no more explanation. Procedure delete actually
combines the functions TOP and DELETE, stackfull and stackempty are procedures which are left unspecified since
they will depend upon the particular application. Often a stackfull condition will signal that more storage needs to be
allocated and the program re-run. Stackempty is often a meaningful condition.

2.6 QUEUE: -

A queue is an ordered list of items with two basic operations: adding new items to the
queue, and removing items from the queue. The operation of adding new items occurs
only at one end of the queue, called the rear (or back). The operation of removing items from the
queue occurs at the other end, called the front. The following is a queue that holds five
integers:
front: 23 33 -23 90 2 :rear
After adding two new items, 5 and 4, at the back, the queue looks like this:
front: 23 33 -23 90 2 5 4 :rear
Removing three items from the queue would then yield the queue
front: 90 2 5 4 :rear
The first item in the queue, the item that will be removed by the next remove operation,
is called the front of the queue. The last item is called
the rear of the queue. The number of items in the queue is called the queue size. If the number of
items in the queue is zero, an attempt at a remove
operation produces a queue underflow error. In theory there is no queue overflow,
but in practice implementations of a queue have a maximum queue size.

Addition into a queue

procedure addq(item : items);
{add item to the queue q}
begin
    if rear = n then queuefull
    else begin
        rear := rear + 1;
        q[rear] := item;
    end;
end; {of addq}

Deletion in a queue

procedure deleteq(var item : items);
{delete from the front of q and put into item}
begin
    if front = rear then queueempty
    else begin
        front := front + 1;
        item := q[front];
    end;
end; {of deleteq}

2.7 Mapping

A mapping or associative store is a function from elements of one type, called the domain
type, to elements of another type, called the range type. We express the fact that the mapping M
associates element r of the range type with element d of the domain type by M(d) = r.
Certain mappings, such as square(i) = i², can be implemented easily by giving an arithmetic
expression or other simple means of calculating M(d) from d. However, for many mappings there is
no apparent way to describe M(d) other than to store, for each d, the value of M(d).

UNIT 3

Basic Operations and Sets

3.1 Sets
3.2 An ADT with union, intersection and difference
3.3 Bit vector implementation of sets
3.4 Link-list implementation of sets
3.5 The data dictionary

3.1 Sets
A set is a collection of members (or elements); each member of a set either is itself a set or is a
primitive element called an atom. All members of a set are different, which means no set can
contain two copies of the same element.
When used as a tool in algorithm and data structure design, atoms usually are integers,
characters or strings, and all elements in any one set are usually of the same type. We shall often
assume that atoms are linearly ordered by a relation, usually denoted "<" and read "less than" or "precedes". A
linear order < on a set S satisfies two properties:
1. For any a and b in S, exactly one of a < b, a = b or b < a is true.
2. For all a, b and c in S, if a < b and b < c then a < c (transitivity).

3.2 An ADT with union, intersection and difference


A set is an unordered collection of particular objects. These objects are referred to as elements.
An example is shown as:
A = {1, 2, 3, 4, 5, 9, 10}
where A is the set and the elements are identified within the braces. Generally a set is denoted
by a capital letter while lower case letters represent the elements. For a large series with a
pattern, the set could be designated as:

N = {1, 2, 3, 4, 5, … }

The use of the symbol ∈ in set theory means that object is an element of a set. For example, a ∈ S
means that a is an element within the set S. More than one element can be identified such as a, b
∈ S which indicates that both a and b are elements of the set S. Continuing in this vein, ∉ means
that an element is not in a set. For example, a ∉ S states that a is not an element within the set S.

A subset is a set where every element belongs to another set. The symbols ⊂ and ⊃ are used to
designate subsets, depending upon the syntax. If all of the elements of V are also elements of T
then V is a subset of T and this can be written as V ⊂ T , which can be interpreted as saying V is
contained in T, or T ⊃ V, which can be used to imply that T contains V. For example,
T = {all letters in the English alphabet}
V = {all vowels}

then V⊂T

If there exists even one element of V that is not in T, then V is not a subset of T, and this
is designated as V ⊄ T.

If two sets, S and T, contain exactly the same identical elements, then they are equal and
S = T. It is also true that S ⊂ T and T ⊂ S. If on the other hand, S ⊂ T and S ≠ T then S is called a
proper subset of T. This means that there is at least one element of T, which is not an element of
S.

It is important to recognize the syntax utilized in set theory. For example, if the set S = {a,
b, c}, then a ∈ S is the correct form that means a is an element of the set S. But, using {a} ∈ S is
incorrect because the braces mean a set. S can consist of subsets that contain only one element.
For example, {a} can be used to designate a subset of S containing only a. Then, it is proper to
state that {a} ⊂ S which means that the set containing {a} is contained within S.

It is also possible to represent a set of elements in one set from another set, which has
certain properties. This is written as:

S = {x ∈ T | x has the property p}

or, if the set T is clear to the user, this can be shortened to the form:

S = {x | x has the property p}

The following example shows the set S contains all of those elements from the set of natural
numbers N, where x + 2 = 7.

S = {x ∈ N | x + 2 = 7}

Since S contains only one element, 5, this is called a unit set. Another example shows that the
definition of a particular set can be done in different fashions. In this first definition, the set E will
contain all even natural numbers:

E = {x ∈ N | x is even}
hence,
E = {2, 4, 6, 8, … }

Another way of defining this set is as follows:

E = {2n | n ∈ N }

By convention, certain letters are used to denote particular sets. For example, the
null set, ∅, is the empty set, which contains no elements; ∅ is a subset of all sets. The universal
set, U, is the set consisting of all the elements that are of interest for a problem.
N is generally used for natural values (N = {1, 2, 3, 4, ...}, sometimes 0 is considered a
natural number), and Z is used for integers (Z = {..., -3, -2, -1, 0, 1, 2, 3, ...}), and R represents all
real numbers.

The union of sets defines another set, which consists of elements from two or more sets. It
can be shown mathematically as A ∪ B whose elements are {x | x ∈ A or x ∈ B}. This operation is
often called a Boolean OR operation since the element must be in either set A or set B to be
included in the newly formed set. For example, if

A = {0, 2, 3, 5, 7, 8, 9, 11}
and
B = {1, 2, 4, 6, 7, 9, 11}
then
C = A ∪ B = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11}

A figure called a Venn diagram is often used to depict the different relationships in set theory. If
the two sets have at least one common element then they are called conjoint. If, on the other hand,
there are no common elements in the two sets then they are said to be disjoint. For example, if E =
{all male students at Ferris} and F = {all female students at Ferris}, the sets are disjoint because a
student cannot be a member of both groups. In the example above, the sets A and B are conjoint
since there is at least one common element.

[Figure: Venn diagram showing the union of two sets.]

The intersection of two sets is denoted as C = A ∩ B where the elements of this new set are
defined by {x | x ∈ A and x ∈ B}. This is sometimes called a Boolean AND operation since an
element must be in both sets to be included in the new set. If the sets are disjoint then A ∩ B = ∅.
In the example above, where:

A = {0, 2, 3, 5, 7, 8, 9, 11}
and
B = {1, 2, 4, 6, 7, 9, 11}
then
C = A ∩ B = {2, 7, 9, 11}

The complement of a set is defined by the elements that are part of the universal set but not in
the subset. If A is a subset then the complement is shown as Aᶜ or comp(A). As an example, if
U = {all male and female students at FSU}
then for the subset
A = {all male students at FSU}
the complement is
comp (A) = {all female students at FSU}

[Figure: Venn diagram showing the intersection of two sets.]

Another useful set operation is A\B, which means A minus B. The new set will
contain all of those elements of A that are not in B, or {x | x ∈ A and x ∉ B}.

[Figure: Venn diagram showing the A minus B set operation.]

Using the examples of A and B from above,

C = A\B = {0, 3, 5, 8}

Sets also follow the distributive law. Thus,

A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
and
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

While not a proof, this property is depicted in using Venn diagrams.

[Figure: Venn diagrams giving a graphical "proof" of the distributive law.]

3.3 A Bit-Vector Implementation of Sets


The best implementation of a SET ADT depends on the operations to be performed and on
the size of the set. When all sets in our domain of discourse are subsets of a small "universal set"
whose elements are the integers 1, 2, 3, ..., N for some fixed N, then we can use a bit-vector
(boolean array) implementation. A set is represented by a bit vector in which the ith bit is true if i
is an element of the set. The major advantage of this representation is that MEMBER, INSERT and
DELETE operations can be performed in constant time by directly addressing the appropriate bit.
UNION, INTERSECTION and DIFFERENCE can be performed in time proportional to the size of
the universal set.

3.4 Link-list implementation of sets: -


It should also be evident that linked lists can represent sets, where the items in the list are the
elements of the set. Unlike the bit-vector representation, the list representation uses space
proportional to the size of the set represented, not the size of the universal set.

When we have operations like INTERSECTION on sets represented by linked lists, we have several
options. If the universal set is linearly ordered, then we can represent a set by a sorted list. That
is, we assume all set members are comparable by a relation "<" and the members of a set appear
on a list in the order e1, e2, e3, ..., en, where e1 < e2 < e3 < ... < en. The advantage of a
sorted list is that we do not need to search the entire list to determine whether an element is on
the list.

3.5 The data dictionary: -

“A data dictionary is used to describe all aspects of a computer based information system: -
in particular its data and its processes, i.e. data about data.”

“To DOCUMENT for all posterity the data elements, structures, flows, stores, processes, and
external entities in the system to be designed.”

Data Structures: -

Specific arrangements of data attributes (elements) that define the organization of a single
instance of a data flow. Data structures belong to a particular data store. Think about this. For
example, you may desire to collect information about an instance of an "employee." Employees
have payroll information, perhaps work address information, and human resources information.
Because the purpose of this information is to serve several systems or data flows, you might want
to store the whole of the employee information in several data structures.

Data Elements: -

The descriptive property or characteristic of an entity. In database terms, this is an "attribute" or a


"field." In Microsoft and other vendor terms, this is a "property." Therefore, the data element is
part of a data structure, which is part of a data store. It has a source, and there may be data
entry rules you wish to enforce on this element.

Data Flows:

Represent an input of data to a process or the output of data (or information) from a process. A
data flow is also used to represent the creation, deletion, or updating of data in a file or database
(called a "data store" in the Data Flow Diagram (DFD)). Note that a data flow is not a process, but
the passing of data only. These are represented in the DFD. Each data flow is a part of a DFD
explosion of a particular process.

When we use a set in the design of an algorithm, we may not need powerful operations like union
and intersection. Often, we only need to keep a set of "current" objects, with periodic insertions
and deletions from the set. Also, from time to time we may need to know whether a particular
element is in the set. A set ADT with the operations INSERT, DELETE and MEMBER has been
given the name dictionary. We shall also include MAKENULL as a dictionary operation, to initialize
whatever data structure is used in the implementation.

UNIT 4

Algorithms Analysis Techniques

4.1 Efficiency of algorithms


4.2 Analysis of recursive programs
4.3 Solving recurrence equation

4.1 Efficiency of algorithms


1. Algorithms: an algorithm is a well-defined sequence of steps that leads to solving a certain
   problem. The steps should be:
   a. unambiguous;
   b. finitely many.
2. Computational resources: time and space.
3. Best, worst and average case performance.
4. How to compare algorithms: a machine-independent measure of efficiency.
5. Growth rate.
6. The complexity measure O( ).

Best, worst and average case

• Best performance: the item we search for is in the first position; examines one position.
• Worst performance: item not in the array or in the last position; examines all positions.
• Average performance (given that the item is in the array): examines half of the array.

Rate of Growth

We don't know how long the steps actually take; we only know it is some constant time. We can
just lump all constants together and forget about them.

What we are left with is the fact that the time in sequential search grows linearly with the input,
while in binary search it grows logarithmically much slower.

Big Oh complexity measure

Big O notation gives an asymptotic upper bound on the actual function, which describes
time/memory usage of the algorithm.

The complexity of an algorithm is O(f(N)) if there exists a constant factor K and an input size N0
such that the actual usage of time/memory by the algorithm on inputs greater than N0 is always
less than K f(N).

Big O

"Big O" refers to a way of rating the efficiency of an algorithm. It is only a rough estimate of the
actual running time of the algorithm, but it will give you an idea of the performance relative to the
size of the input data.

An algorithm is said to be of order O(expression), or simply of order expression (where
expression is some function of n, like n² and n is the size of the data), if there exist numbers p, q
and r so that the running time always lies below p·expression + q for n > r. Generally
expression is made as simple and as small as possible.

For example, the following piece of code executes the innermost code (n² - n)/2 times, and the whole
expression for the running time will be a quadratic. Since this lies below n² for all n > 100, the
algorithm is said to have order O(n²).

for i := n downto 2 do
begin
    h := 1;
    for j := 2 to i do
        if list[j] > list[h] then h := j;
    temp := list[h];
    list[h] := list[i];
    list[i] := temp;
end;

(this is the algorithm for Selection Sort)

Choosing an algorithm

Choosing an algorithm isn't a case of simply picking the fastest one and implementing it.
Especially in a competition, you will also have to balance this against the time it will take you to
code the algorithm. In fact you should use the simplest algorithm that will run in the time
provided, since you generally get no more points for a faster but more complex algorithm. It
is sometimes even worth implementing an algorithm that you know will not be fast enough to get
100%, since 90% for a slow, correct algorithm is better than 0% for a fast but broken one.

When choosing an algorithm you should also plan what type of data structures you intend
to use. The speed of some algorithms depends on what types of data structures they are combined
with.

4.2 Analysis of recursive programs


The techniques for solving recurrence equations are sometimes subtle and bear considerable
resemblance to the methods for solving differential equations. Consider the sorting program; there
the procedure mergesort takes a list of length n as input and returns a sorted list as its output.
The procedure merge(L1, L2) takes as input two sorted lists L1 and L2 and scans them each, element
by element, from the front. At each step, the smaller of the two front elements is deleted from its list
and emitted as output. The result is a single sorted list containing the elements of L1 and L2. The
time taken by merge on lists of length n/2 is O(n).
function mergesort(L : LIST; n : integer) : LIST;
{ L is a list of length n. A sorted version of L is
  returned. We assume n is a power of 2. }
var
    L1, L2 : LIST;
begin
    if n = 1 then
        return(L)
    else begin
        Break L into two halves L1 and L2, each of length n/2;
        return(merge(mergesort(L1, n/2), mergesort(L2, n/2)));
    end
end; { mergesort }
Let T(n) be the worst case running time of procedure mergesort. We can write a recurrence
(or difference) equation that upper-bounds T(n), as follows:

T(n) <= c1                   if n = 1
T(n) <= 2T(n/2) + c2·n       if n > 1

where c1 and c2 are constants.

Solving recurrence equation

There are several approaches we might take to solving a recurrence equation.
1. Guess a solution f(n) and use the recurrence to show that T(n) <= f(n). Sometimes we
guess only the form of f(n), leaving some parameters unspecified (e.g., guess f(n) = an² for
some a) and deduce suitable values for the parameters as we try to prove T(n) <= f(n) for all
n.
2. Use the recurrence itself to substitute for any T(m), m < n, on the right until all terms T(m)
for m > 1 have been replaced by formulas involving only T(1). Since T(1) is always a
constant, we have a formula for T(n) in terms of n and constants. This formula is what we
have referred to as a "closed form" for T(n).

UNIT 5

Algorithms Design Technique

5.1 Divide and conquer algorithms


5.2 Dynamic programming
5.3 Greedy algorithm
5.4 Minimum-cost spanning trees
5.5 Minimum Spanning Tree
5.6 Prim’s Algorithm
5.7 Kruskal’s Algorithm
5.8 Shortest Paths
5.9 Dijkstra's Algorithm
5.10 Backtracking

5.1 Divide-and-Conquer algorithms


This is a method of designing algorithms that (informally) proceeds as follows:
Given an instance of the problem to be solved, split this into several, smaller, sub-instances (of
the same problem) independently solve each of the sub-instances and then combine the sub-
instance solutions so as to yield a solution for the original instance. This description raises the
question:
By what methods are the sub-instances to be independently solved?
The answer to this question is central to the concept of Divide-&-Conquer algorithms and is
a key factor in gauging their efficiency.
Consider the following: We have an algorithm, alpha say, which is known to solve all
problem instances of size n in at most c n^2 steps (where c is some constant). We then discover
an algorithm, beta say, which solves the same problem by:
• Dividing an instance into 3 sub-instances of size n/2.
• Solves these 3 sub-instances.
• Combines the three sub-solutions taking d n steps to do this.
Suppose our original algorithm, alpha, is used to carry out the `solves these sub-instances' step 2.
Let
T(alpha)( n ) = Running time of alpha
T(beta)( n ) = Running time of beta
Then,
T(alpha)( n ) = c n^2 (by definition of alpha)
But

T(beta)( n ) = 3 T(alpha)( n/2 ) + d n

= (3/4)(cn^2) + dn
So if dn < (cn^2)/4 (i.e. d < cn/4) then beta is faster than alpha
In particular for all large enough n, (n > 4d/c = Constant), beta is faster than alpha.
This realisation of beta improves upon alpha by just a constant factor. But if the problem
size, n, is large enough then

n > 4d/c
n/2 > 4d/c
...
n/2^i > 4d/c

Which suggests that using beta instead of alpha for the `solves these' stage repeatedly until the
sub-sub-sub..sub-instances are of size n0 < = (4d/c) will yield a still faster algorithm.
So consider the following new algorithm for instances of size n

Procedure gamma (n : problem size) is
begin
    if n <= n0 then
        Solve problem using Algorithm alpha;
    else
        Split into 3 sub-instances of size n/2;
        Use gamma to solve each sub-instance;
        Combine the 3 sub-solutions;
    end if;
end gamma;
Let T(gamma)(n) denote the running time of this algorithm.
Let T(gamma)(n) denote the running time of this algorithm.

T(gamma)(n) = cn^2                       if n <= n0
T(gamma)(n) = 3T(gamma)(n/2) + dn        otherwise

We shall show how relations of this form can be estimated later in the course. With these methods
it can be shown that
T(gamma)(n) = O(n^{log 3}) (= O(n^{1.59...}))
This is an asymptotic improvement upon algorithms alpha and beta.
The improvement that results from applying algorithm gamma is due to the fact that it
maximises the savings achieved by beta.
The (relatively) inefficient method alpha is applied only to "small" problem sizes.
The precise form of a divide-and-conquer algorithm is characterised by:
• The threshold input size, n0, below which the problem size is not sub-divided.
• The size of sub-instances into which an instance is split.
• The number of such sub-instances.
• The algorithm used to combine sub-solutions.
It is more usual to consider the ratio of initial problem size to sub-instance size. The threshold
input size n0 is sometimes called the (recursive) base value. In summary, the generic form of a divide-and-
conquer algorithm is:
Procedure D-and-C (n : input size) is
begin
    if n <= n0 then
        Solve problem without further sub-division;
    else
        Split into r sub-instances, each of size n/k;
        for each of the r sub-instances do
            D-and-C(n/k);
        Combine the r resulting sub-solutions to produce
        the solution to the original problem;
    end if;
end D-and-C;

5.2 Dynamic Programming

The term "Dynamic Programming" (DP) refers to a collection of algorithms that can be
used to compute optimal policies given a perfect model of the environment as a Markov decision
process (MDP). Classical DP algorithms are of limited utility in reinforcement learning, both
because of their assumption of a perfect model and because of their great computational expense,
but they are still very important theoretically. DP provides an essential foundation for the
understanding of the methods presented in the rest of this section. In fact, all of these methods can
be viewed as attempts to achieve much the same effect as DP, only with less computation and
without assuming a perfect model of the environment.

Starting with this section, we usually assume that the environment is a finite MDP. That is, we
assume that its state and action sets, S and A(s) for s ∈ S, are finite, and that its dynamics are
given by a set of transition probabilities P(s' | s, a) and expected immediate rewards R(s, a, s'),
for all s ∈ S, a ∈ A(s), and s' ∈ S⁺ (S⁺ is S plus a terminal state if the problem is episodic). Although DP
ideas can be applied to problems with continuous state and action spaces, exact solutions are
possible only in special cases. A common way of obtaining approximate solutions for continuous
state and action tasks is to quantize the state and action spaces and then apply finite-state DP
methods.

The key idea of DP, and of reinforcement learning generally, is the use of value functions
to organize and structure the search for good policies. As discussed earlier, we can easily obtain
optimal policies once we have found the optimal value functions V* or Q*, which satisfy the
Bellman optimality equations:

V*(s) = max_a Σ_{s'} P(s' | s, a) [R(s, a, s') + γ V*(s')]

or

Q*(s, a) = Σ_{s'} P(s' | s, a) [R(s, a, s') + γ max_{a'} Q*(s', a')]

for all s ∈ S, a ∈ A(s), and s' ∈ S⁺, where γ is the discount rate. As we shall see, DP algorithms are obtained by turning
Bellman equations such as these into assignment statements, that is, into update rules for
improving approximations of the desired value functions.

5.3 Greedy Algorithm


An algorithm is a step-by-step recipe for solving a problem. A greedy algorithm might also
be called a "single-minded" algorithm or an algorithm that gobbles up all of its favorites first. The
idea behind a greedy algorithm is to perform a single procedure in the recipe over and over again
until it can't be done any more and see what kind of results it will produce. It may not completely
solve the problem, or, if it produces a solution, it may not be the very best one, but it is one way of
approaching the problem and sometimes yields very good (or even the best possible) results.

5.4 Minimum-Cost spanning trees

Let G = (V, E) be an undirected connected graph. A sub-graph t = (V, E1) of G is a spanning
tree of G if and only if t is a tree.

The figure above shows the complete graph on four nodes together with three of its spanning
trees.
Spanning trees have many applications. For example, they can be used to obtain an
independent set of circuit equations for an electric network. First, a spanning tree for the electric
network is obtained. Let B be the set of network edges not in the spanning tree. Adding an edge
from B to the spanning tree creates a cycle. Kirchhoff's second law is used on each cycle to obtain a
circuit equation.
Another application of spanning trees arises from the property that a spanning tree is a
minimal sub-graph G' of G such that V(G') = V(G) and G' is connected. A minimal sub-graph with
n vertices must have at least n-1 edges, and all connected graphs with n-1 edges are trees. If the
nodes of G represent cities and the edges represent possible communication links connecting two
cities, then the minimum number of links needed to connect the n cities is n-1. The spanning
trees of G represent all feasible choices.
In practical situations, the edges have weights assigned to them. These weights may
represent the cost of construction, the length of the link, and so on. Given such a weighted graph,
one would then wish to select a set of links connecting the cities that has minimum total cost or minimum total length. In either
case the links selected have to form a tree. If this is not so, then the selection of links contains a
cycle; removal of any one of the links on this cycle results in a link selection of less cost
connecting all cities. We are therefore interested in finding a spanning tree of G with minimum
cost. Since the identification of a minimum-cost spanning tree involves the selection of a subset of
the edges, this problem fits the subset paradigm.

5.5 Minimum Spanning Tree: -


Let G = (V, E) be an undirected connected graph. A subgraph t = (V, E1) of G is a spanning tree of G
if t is a tree.

Example: the figure shows the complete graph on four nodes together with three of its spanning trees.

Spanning trees have many applications. For example, they can be used to obtain an independent set of circuit equations
for an electric network. In practical situations, the edges have weights assigned to them. These weights may represent
the cost of construction, the length of the link, and so on. Given such a weighted graph, one would then wish to select
a set of links connecting the cities that has minimum total cost or minimum total length. In either case the links selected have to form a tree. We are
therefore interested in finding a spanning tree of G with minimum cost. Since the identification of a minimum spanning tree
involves the selection of a subset of the edges, this problem fits the subset paradigm.

[Figure (a): a weighted graph; Figure (b): its minimum spanning tree.]
We present two algorithms for finding a minimum spanning tree of a weighted graph: Prim’s
algorithm and Kruskal’s algorithm.

5.6 Prim’s Algorithm


A greedy method to obtain a minimum spanning tree is to build the tree edge by edge. The
next edge to include is chosen according to some optimization criterion. The simplest such
criterion is to choose an edge that results in a minimum increase in the sum of the costs of the
edges so far included. There are two possible ways to interpret this criterion. In the first, the set of
edges selected so far forms a tree. Thus if A is the set of edges selected so far, then A forms a tree.
The next edge (u,v) to be included in A is a minimum-cost edge not in A with the property that A ∪
{(u,v)} is also a tree. The following example shows that this selection criterion results in a minimum
spanning tree. The corresponding algorithm is known as Prim's algorithm.

The following example shows the working of Prim's method on the graph of the previous section. The spanning tree obtained
has a cost of 99.

[Stage figures not reproduced.] Figures (a) through (f) show the tree growing one edge at a time:
starting from the minimum-cost edge (1,6) of cost 10, Prim's method successively adds (5,6) of
cost 25, (4,5) of cost 22, (3,4) of cost 12, (2,3) of cost 16, and (2,7) of cost 14.

Having seen how Prim's method works, let us obtain an algorithm to find a minimum-cost
spanning tree using this method. The algorithm will start with a tree that includes only a minimum-
cost edge of G. Then, edges are added to this tree one by one. The next edge (i, j) to be added is
such that i is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i,
j), cost[i][j], is minimum among all edges (k, l) such that vertex k is in the tree and vertex l is not
in the tree. To determine this edge (i, j) efficiently, we associate with each vertex j not yet included
in the tree a value near[j]: a vertex in the tree such that cost[j][near[j]] is minimum among all
choices for near[j]. We define near[j] = 0 for all vertices j that are already in the tree. The next edge
to include is defined by the vertex j such that near[j] != 0 (j not already in the tree) and
cost[j][near[j]] is minimum.
Prim (E, cost, n, t)
// E is the set of edges in G. cost[n][n] is
// the cost adjacency matrix of an n-vertex
// graph such that cost[i][j] is
// either a positive real number or infinity if
// no edge (i, j) exists.
// A minimum spanning tree is computed
// and stored as a set of edges in
// the array t[n-1][2]. The final cost is
// returned.
{
    Let (k, l) be an edge of minimum cost in E;
    mincost = cost[k][l];
    t[1][1] = k; t[1][2] = l;
    for (i = 1; i <= n; i++)
        if (cost[i][l] < cost[i][k])
            near[i] = l;
        else
            near[i] = k;
    near[k] = near[l] = 0;
    for (i = 2; i <= n-1; i++)
    {   // Find n-2 additional edges for t.
        Let j be an index such that near[j] != 0 and
            cost[j][near[j]] is minimum;
        t[i][1] = j; t[i][2] = near[j];
        mincost = mincost + cost[j][near[j]];
        near[j] = 0;
        for (k = 1; k <= n; k++)   // update near[]
            if (near[k] != 0 && cost[k][near[k]] > cost[k][j])
                near[k] = j;
    }
    return (mincost);
}
The time required by algorithm Prim is O(n²), where n is the number of vertices in the
graph G.
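As a concrete illustration of the near[]-based scheme above, here is a minimal, self-contained C sketch using 0-based vertex numbering. The function name prim_cost, the fixed size NV, and the choice to grow the tree from vertex 0 are assumptions made for this example, not part of the original algorithm.

```c
#include <limits.h>

#define NV 4                 /* number of vertices (illustrative) */

/* Prim's method on a cost adjacency matrix: returns the total cost of a
   minimum spanning tree, growing the tree outward from vertex 0.
   INT_MAX denotes the absence of an edge. */
int prim_cost(int cost[NV][NV])
{
    int in_tree[NV] = {0};
    int near[NV];            /* cheapest known edge from each vertex into the tree */
    int total = 0;

    in_tree[0] = 1;
    for (int v = 1; v < NV; v++)
        near[v] = cost[0][v];

    for (int added = 1; added < NV; added++) {
        /* pick the vertex outside the tree with the cheapest connecting edge */
        int j = -1;
        for (int v = 0; v < NV; v++)
            if (!in_tree[v] && (j < 0 || near[v] < near[j]))
                j = v;
        total += near[j];
        in_tree[j] = 1;
        /* the new tree vertex j may offer cheaper edges to the rest */
        for (int v = 0; v < NV; v++)
            if (!in_tree[v] && cost[j][v] < near[v])
                near[v] = cost[j][v];
    }
    return total;
}
```

On the 4-vertex graph used in the test below, the tree 0-1, 1-2, 2-3 has cost 1 + 2 + 3 = 6.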

5.7 Kruskal’s Algorithm


There is a second possible interpretation of the optimization criterion mentioned earlier, in
which the edges of the graph are considered in nondecreasing order of cost. This interpretation is
that the set t of edges so far selected for the spanning tree be such that it is possible to complete t
into a tree. Thus t may not be a tree at all stages in the algorithm. In fact, it will generally only be
a forest, since the set of edges t can be completed into a tree if and only if there are no cycles in t. This
method is due to Kruskal.

Example: Consider the graph of figure (a). We begin with no edges selected; figure (a) shows
the current graph with no edges selected. Edge (1,6) is the first edge considered. It is included in
the spanning tree being built. This yields the graph of figure (b). Next the edge (3,4) is selected and
included in the tree (figure (c)). The next edge to be considered is (2,7). Its inclusion in the tree being
built does not create a cycle, so we get the graph of figure (d). Edge (2,3) is considered next and
included in the tree (figure (e)). Of the edges not yet considered, (7,4) has the least cost. It is
considered next. Its inclusion in the tree would result in a cycle, so this edge is discarded. Edge (5,4) is
the next edge to be added to the tree being built. This results in the configuration of figure (f). The
next edge to be considered is the edge (7,5). It is discarded, as its inclusion creates a cycle. Finally,
edge (6,5) is considered and included in the tree being built. This completes the spanning tree. The
resulting tree has cost 99.

[Stage figures not reproduced.] Figures (a) through (f) show the forest after each accepted edge:
(1,6) of cost 10, then (3,4) of cost 12, (2,7) of cost 14, (2,3) of cost 16, (4,5) of cost 22, and
finally (5,6) of cost 25.

For clarity, Kruskal's method is written out more formally in the following algorithm.
1. t = ∅;
2. while ((t has fewer than n-1 edges) && (E != ∅))
3. {
4.     Choose an edge (v,w) from E of lowest cost;
5.     Delete (v,w) from E;
6.     if ((v,w) does not create a cycle in t)
           add (v,w) to t;
7.     else
           discard (v,w);
8. }
Initially, E is the set of all edges in G. The only functions we wish to perform on this set are
(1) determining an edge with minimum cost (line 4) and (2) deleting this edge (line 5). Both these
functions can be performed efficiently if the edges in E are maintained as a sorted sequential list.
It is not essential to sort all the edges, so long as the next edge for line 4 can be determined easily.
If the edges are maintained as a min-heap, then the next edge to consider can be obtained in
O(log |E|) time. The construction of the heap itself takes O(|E|) time. To be able to perform step 6
efficiently, the vertices in G should be grouped together in such a way that one can easily
determine whether the vertices v and w are already connected by the earlier selection of edges. If
they are not, then the edge (v,w) is to be added to t. One possible grouping is to place all vertices in
the same connected component of t into a set. For example, when the edge (2,6) is to be
considered, the sets are {1,2,3}, {4,6}, and {5}. Vertices 2 and 6 are in different sets, so these sets
are combined to give {1,2,3,4,6} and {5}. The next edge to be considered is (1,4). Since vertices 1
and 4 are in the same set, the edge is rejected. The edge (3,5) connects vertices in different sets
and results in the final spanning tree.
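The grouping of vertices into sets described above is commonly implemented with a disjoint-set (union-find) structure. The sketch below is a minimal illustration; the names uf_init, uf_find, and uf_union, and the fixed capacity MAXV, are assumptions made for this example.

```c
#define MAXV 100             /* illustrative capacity */

static int parent[MAXV];

/* each vertex starts in its own singleton set */
void uf_init(int n)
{
    for (int i = 0; i < n; i++)
        parent[i] = i;
}

/* find the representative of v's set, compressing the path as we go */
int uf_find(int v)
{
    if (parent[v] != v)
        parent[v] = uf_find(parent[v]);
    return parent[v];
}

/* merge the sets containing u and v; returns 0 if they were already
   connected (so the edge (u,v) would create a cycle), 1 otherwise */
int uf_union(int u, int v)
{
    int ru = uf_find(u), rv = uf_find(v);
    if (ru == rv)
        return 0;
    parent[ru] = rv;
    return 1;
}
```

With path compression alone, the amortized cost per operation is already close to constant for practical input sizes, which keeps step 6 cheap.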

5.8 Shortest Paths


Graphs can be used to represent the highway structure of a state or country, with vertices
representing cities and edges representing sections of highway. The edges can then be assigned
weights, which may represent, for example, the distance along that section of highway. A motorist wishing to
drive from city A to B would be interested in answers to the following questions:

• Is there a path from A to B?

• If there is more than one path from A to B, which is the shortest path?
The problems defined by these questions are special cases of the path problems we study in
this section. The length of a path is now defined to be the sum of the weights of the edges on that
path. The starting vertex of the path is referred to as the source, and the last vertex as the
destination. The graphs are digraphs to allow for one-way streets. In the problem we consider, we
are given a directed graph G = (V,E), a weighting function cost for the edges of G, and a source
vertex v0. The problem is to determine the shortest paths from v0 to all the remaining vertices of
G. It is assumed that all the weights are positive.

5.9 Dijkstra’s Algorithm


This algorithm determines the lengths of the shortest paths from v to all other vertices in G.
Dijkstra (v, cost, dist, n)
// dist[j], 1 <= j <= n, is set to the length
// of the shortest path from vertex v to
// vertex j in a digraph G with n
// vertices; dist[v] is set to zero. G is
// represented by its cost adjacency matrix
// cost[n][n].
{
    for (i = 1; i <= n; i++)
    {   // initialize
        S[i] = false; dist[i] = cost[v][i];
    }
    S[v] = true; dist[v] = 0.0;   // put v in S
    for (num = 2; num <= n; num++)
    {
        // Determine n-1 paths from v.
        Choose u from among those vertices not in S such that dist[u] is minimum;
        S[u] = true;   // put u in S
        for (each w adjacent to u with S[w] = false)
            // update distances
            if (dist[w] > dist[u] + cost[u][w])
                dist[w] = dist[u] + cost[u][w];
    }
}

Example: Consider the eight-vertex digraph with the cost adjacency matrix of figure (b). The values of
dist and the vertices selected at each iteration of the for loop of line 12 of the previous algorithm, for
finding all the shortest paths from Boston, are shown in the figure. To begin with, S contains only
Boston. In the first iteration of the for loop (that is, num = 2), the city u that is not in S and whose
dist[u] is minimum is identified to be New York. In the next iteration of the for loop, the city that
enters S is Miami, since it has the smallest dist[] value from among all the nodes not in S. None
of the dist[] values are altered. The algorithm terminates when only seven of the eight vertices are in S. By
the definition of dist, the distance of the last vertex, in this case Los Angeles, is correct, as the
shortest path from Boston to Los Angeles can go through only the remaining six vertices.
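The pseudocode above can be turned into a small, self-contained C sketch. The fixed size DN, 0-based vertex numbering, the function name dijkstra, and the use of INT_MAX to mean "no edge" are all assumptions made for this example.

```c
#include <limits.h>

#define DN 5                 /* number of vertices (illustrative) */

/* single-source shortest path lengths from vertex src, following the
   dist[]/S[] scheme of the algorithm above; INT_MAX denotes "no edge" */
void dijkstra(int cost[DN][DN], int src, int dist[DN])
{
    int in_s[DN] = {0};

    for (int i = 0; i < DN; i++)
        dist[i] = cost[src][i];
    dist[src] = 0;
    in_s[src] = 1;

    for (int num = 1; num < DN; num++) {
        /* choose u not in S with minimum dist[u] */
        int u = -1;
        for (int v = 0; v < DN; v++)
            if (!in_s[v] && (u < 0 || dist[v] < dist[u]))
                u = v;
        if (dist[u] == INT_MAX)
            break;           /* remaining vertices are unreachable */
        in_s[u] = 1;
        /* relax the edges leaving u */
        for (int w = 0; w < DN; w++)
            if (!in_s[w] && cost[u][w] != INT_MAX &&
                dist[u] + cost[u][w] < dist[w])
                dist[w] = dist[u] + cost[u][w];
    }
}
```

The test below uses a small 5-vertex digraph; each chosen vertex is final because all weights are positive, mirroring the argument made for Los Angeles above.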

[Figure (a) not reproduced.] It shows a digraph on eight cities (1 = Los Angeles, 2 = San Francisco,
3 = Denver, 4 = Chicago, 5 = Boston, 6 = New York, 7 = Miami, 8 = New Orleans) with directed edges
Boston→Chicago (1500), Boston→New York (250), New York→Chicago (1000), New York→Miami (900),
New York→New Orleans (1400), Chicago→Denver (1200), Denver→San Francisco (800),
Denver→Los Angeles (1000), San Francisco→Los Angeles (300), Miami→New Orleans (1000), and
New Orleans→Los Angeles (1700).
Figure (a)

       1     2     3     4     5     6     7     8
1      0
2    300     0
3   1000   800     0
4               1200     0
5                     1500     0   250
6                     1000           0   900  1400
7                                          0        1000
8   1700                                               0

Figure (b): Cost adjacency matrix (blank entries denote the absence of an edge)

Iteration  S                Vertex    Distance
                            selected  LA[1]  SF[2]  DEN[3] CHI[4] BOS[5] NY[6]  MIA[7] NO[8]

Initial    --               --        +∞     +∞     +∞     1500   0      250    +∞     +∞
1          {5}              6         +∞     +∞     +∞     1250   0      250    1150   1650
2          {5,6}            7         +∞     +∞     +∞     1250   0      250    1150   1650
3          {5,6,7}          4         +∞     +∞     2450   1250   0      250    1150   1650
4          {5,6,7,4}        8         3350   +∞     2450   1250   0      250    1150   1650
5          {5,6,7,4,8}      3         3350   3250   2450   1250   0      250    1150   1650
6          {5,6,7,4,8,3}    2         3350   3250   2450   1250   0      250    1150   1650
           {5,6,7,4,8,3,2}

Figure

5.10 Backtracking
Backtracking is a systematic way to go through all the possible configurations of a space.
These configurations may be all possible arrangements of objects (permutations) or all possible
ways of building a collection of them (subsets). Other applications may demand enumerating all
spanning trees of a graph, all paths between two vertices, or all possible ways to partition the
vertices into color classes.
What these problems have in common is that we must generate each one of the possible
configurations exactly once. Avoiding both repetitions and missing configurations means that we
must define a systematic generation order among the possible configurations. In combinatorial
search, we represent our configurations by a vector a = (a1, a2, ..., an), where each element ai is
selected from an ordered set Si of possible candidates for position i. As shown below, this
representation is general enough to encode most any type of combinatorial object naturally.
The search procedure works by growing solutions one element at a time. At each step in
the search, we will have constructed a partial solution a = (a1, a2, ..., ak) with elements fixed for the first k elements
of the vector, where k <= n. From this partial solution, we will construct the set Sk+1 of
possible candidates for the (k+1)st position. We will then try to extend the partial solution by
adding the next element from Sk+1. So long as the extension yields a longer partial solution, we
continue to try to extend it.
However, at some point, Sk+1 might be empty, meaning that there is no legal way to extend
the current partial solution. If so, we must backtrack, and replace ak, the last item in the solution
vector, with the next candidate in Sk. It is this backtracking step that gives the procedure its name:

Backtrack(A)
    Compute S1, the set of candidate first elements of solution A.
    k = 1
    while k > 0 do
        while Sk != ∅ do                          (*advance*)
            ak = the next element from Sk
            if (a1, a2, ..., ak) is a solution, report it.
            k = k + 1
            compute Sk, the set of candidate kth elements of solution A.
        k = k - 1                                 (*backtrack*)

Backtracking constructs a tree of partial solutions, where each vertex is a partial solution.
There is an edge from x to y if node y was created by advancing from x. This tree of partial
solutions provides an alternative way to think about backtracking, for the process of constructing

the solutions corresponds exactly to doing a depth-first traversal of the backtrack tree. Viewing
backtracking as depth-first search yields a natural recursive implementation of the basic
algorithm:

Backtrack-DFS(A, k)
    if A = (a1, a2, ..., ak) is a solution, report it.
    else
        k = k + 1
        compute Sk
        while Sk != ∅ do
            ak = an element in Sk
            Sk = Sk - {ak}
            Backtrack-DFS(A, k)

Although a breadth-first search could also be used to enumerate all solutions, depth-first
search is greatly preferred because of the amount of storage required. In depth-first search, the
current state of the search is completely represented by the path from the root to the current
search node, which requires space proportional to the height of the tree. In breadth-first search,
the queue stores all the nodes at the current level, which is proportional to the width of the
search tree. For most interesting problems, the width of the tree will grow exponentially in its
height.
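As a small concrete instance of the scheme above, the following C sketch enumerates all subsets of {1, ..., n}: position k of the solution vector records whether element k+1 is in the subset, so the candidate set for every position is {0, 1}. The names (a, nsolutions, backtrack_subsets) and the counting-only "report" step are illustrative assumptions.

```c
#define MAXN 32

static int a[MAXN];          /* a[0..k-1] is the current partial solution */
static int nsolutions;       /* counts the complete subsets reported */

/* enumerate all subsets of {1,...,n}: a[k] = 1 means element k+1 is in
   the subset, a[k] = 0 means it is not */
void backtrack_subsets(int k, int n)
{
    if (k == n) {            /* the partial solution is complete: report it */
        nsolutions++;
        return;
    }
    /* the candidate set for position k is {0, 1} */
    for (int c = 0; c <= 1; c++) {
        a[k] = c;                      /* advance */
        backtrack_subsets(k + 1, n);
        /* returning here is the backtrack step: a[k] is overwritten
           with the next candidate on the following loop iteration */
    }
}
```

Running the enumeration for n = 4 visits each of the 2^4 = 16 subsets exactly once, matching the depth-first traversal of the backtrack tree described above.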

UNIT 6

Trees and Sorting

6.1 Basic terminology


6.2 Implementation of tree
6.3 An Array Representation of Trees
6.4 Representation of Trees by List of Children
6.5 Binary trees
6.6 The internal sorting model
6.7 Bubble sort
6.8 Insertion sort
6.9 Selection sort
6.10 Heap sort
6.11 Divide and conquer
6.11.1 Merge sort
6.11.2 Quick sort
6.11.3 Binary search

6.1 Basic terminology of Trees


• Trees are a way of organizing data so that branches relate the data items. Examples:
• Family Tree
• Genealogical Chart
Definition
• A tree is a finite set of one or more nodes such that:
• There is a specially designated node called the root.
• The remaining nodes are partitioned into n >= 0 disjoint sets T1, ..., Tn, where each of these
sets is a tree. T1, ..., Tn are the subtrees of the root.
Root Node
Node at the "top" of a tree - the one from which all operations on the tree commence. The
root node may not exist (a NULL tree with no nodes in it) or have 0, 1 or 2 children in a
binary tree.
Leaf Node
Node at the "bottom" of a tree - farthest from the root. Leaf nodes have no children.
Complete Tree
Tree in which each leaf is at the same distance from the root.
Height
Number of nodes which must be traversed from the root to reach a leaf of a tree.

6.2 Implementation of tree


This section defines the basic implementations for trees and discusses their capabilities for
supporting the various tree operations.

6.3 An Array Representation of Trees
A complete binary tree has a simple array representation. Suppose we number the nodes
from left to right, beginning at the top and ending at the bottom. Then we can store the various
data items in the corresponding elements of an array.
For example, a complete binary tree whose nodes are numbered 1 through n in this way can be
represented by an array A in which A[i] holds the data of node i. [Figures not reproduced.]
This in fact corresponds to the level order enumeration of the tree. Note that we only use
an initial segment of the array. Provided the array is long enough, and we know the number of
tree nodes, it doesn't matter how many unused components there are at the end.
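With the 0-based array indexing used by the C code later in this unit, the level-order numbering gives constant-time navigation between a node and its children and parent. The helper names below are illustrative:

```c
/* For a complete binary tree stored in level order in a 0-based array,
   the children and parent of the node at index i are fixed functions of i. */
int left_child(int i)  { return 2 * i + 1; }
int right_child(int i) { return 2 * i + 2; }
int parent(int i)      { return (i - 1) / 2; }   /* integer division */
```

No pointers are stored at all: the tree's shape is implicit in the indices, which is exactly the representation the heap sort in section 6.10 relies on.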
6.4 Representation of Trees by List of Children
Again, the idea is simple. A node in the tree has
• A data field
• A left child field with a pointer to another tree node
• A right child field with a pointer to another tree node
• Optionally, a parent field with a pointer to the parent node
The most important thing to remember about the linked representation is this:
• The pointer to the root node, not a node, represents a tree.
• The empty tree is simply the NULL pointer, not an empty node.
An important and useful way of representing trees is to form, for each node, a list of its children.
Like linked lists, trees are made up of nodes. A common kind of tree is a binary tree, in
which each node contains a reference to two other nodes (possibly null). These references are
referred to as the left and right subtrees. Like list nodes, tree nodes also contain cargo. A state
diagram for a tree shows each node with its cargo and its two references. [Diagram not
reproduced.] To avoid cluttering up the picture, we often omit the Nones.


The top of the tree (the node tree refers to) is called the root. In keeping with the tree
metaphor, the other nodes are called branches and the nodes at the tips with null references are
called leaves. It may seem odd that we draw the picture with the root at the top and the leaves at
the bottom, but that is not the strangest thing.

To make things worse, computer scientists mix in another metaphor the family tree. The
top node is sometimes called a parent and the nodes it refers to are its children. Nodes with the
same parent are called siblings.
Finally, there is a geometric vocabulary for talking about trees. We already mentioned left
and right, but there is also "up" (toward the parent/root) and "down" (toward the children/leaves).
Also, all of the nodes that are the same distance from the root comprise a level of the tree.
We probably don't need three metaphors for talking about trees, but there they are.
Like linked lists, trees are recursive data structures because they are defined recursively.
A tree is either:
• The empty tree, represented by None, or
• A node that contains an object reference and two tree references.
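The recursive definition above translates directly into a C structure and recursive functions over it. The names below (tree_node, cargo, make_node, tree_size) are illustrative, not a fixed API:

```c
#include <stdlib.h>

/* a node holds its cargo and references to two subtrees (possibly NULL) */
struct tree_node {
    int cargo;
    struct tree_node *left;
    struct tree_node *right;
};

struct tree_node *make_node(int cargo,
                            struct tree_node *left,
                            struct tree_node *right)
{
    struct tree_node *n = malloc(sizeof *n);
    n->cargo = cargo;
    n->left = left;
    n->right = right;
    return n;
}

/* the recursive structure makes counting the nodes a two-line recursion:
   the empty tree (NULL) has 0 nodes */
int tree_size(struct tree_node *t)
{
    if (t == NULL)
        return 0;
    return 1 + tree_size(t->left) + tree_size(t->right);
}
```

The base case handles the empty tree represented by NULL, exactly as in the definition above.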

6.5 Binary Trees


A binary tree is a finite set of elements that is either empty or is partitioned into three
disjoint subsets. The first subset contains a single element called the root of the tree. The other
two subsets are themselves binary trees, called the left and right subtrees of the original tree.
Each element of a binary tree is called a node of the tree.
A conventional method of picturing a binary tree is shown in figure. This tree consists of nine
nodes with A as its root. Its left subtree is rooted at B and its right subtree is rooted at C. This is
indicated by the two branches emanating from A to B on the left and to C on the right. The
absence of a branch indicates an empty subtree for example, the left subtree of the binary tree
rooted at C and the right subtree of the binary tree rooted at E are both empty. The binary trees
rooted at D, G, H and I have empty right and left subtrees.

            A
          /   \
         B     C
        / \     \
       D   E     F
          /     / \
         G     H   I

A Binary Tree
If A is the root of a binary tree and B is the root of its left or right subtree, then A is said to be the
father of B and B is said to be the left or right son of A. A node that has no sons (such as D, G,
H, and I of the figure) is called a leaf. Node n1 is an ancestor of node n2 (and n2 is a descendent of n1)
if n1 is either the father of n2 or the father of some ancestor of n2. For example, in the tree of the figure,
A is an ancestor of G and H is a descendent of C, but E is neither an ancestor nor a descendent of
C. A node n2 is a left descendent of node n1 if n2 is either the left son of n1 or a descendent of the
left son of n1. A right descendent may be similarly defined. Two nodes are brothers if they are left
and right sons of the same father.
The following figures illustrate some structures that are not binary trees.

[Figures (a), (b), and (c) not reproduced.]

Structures that are not binary trees

The simplest form of tree is a binary tree. A binary tree consists of


a. A node (called the root node) and
b. Left and right sub-trees.
Both the sub-trees are themselves binary trees.

A binary tree
The nodes at the lowest levels of the tree (the ones with no sub-trees) are called leaves.
In an ordered binary tree:
1. The keys of all the nodes in the left sub-tree are less than that of the root,
2. The keys of all the nodes in the right sub-tree are greater than that of the root, and
3. The left and right sub-trees are themselves ordered binary trees.
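The ordering property is what makes these trees efficient to search: each key comparison discards one entire subtree. A minimal search sketch follows (the struct and function names are illustrative, and no balancing is attempted):

```c
#include <stddef.h>

struct bst_node {
    int key;
    struct bst_node *left, *right;
};

/* search an ordered binary tree: smaller keys live in the left subtree,
   larger keys in the right, so one comparison per level suffices */
struct bst_node *bst_search(struct bst_node *t, int key)
{
    if (t == NULL || t->key == key)
        return t;                     /* found, or absent (NULL) */
    if (key < t->key)
        return bst_search(t->left, key);
    return bst_search(t->right, key);
}
```

On a reasonably balanced tree this visits O(log n) nodes; in the worst (fully skewed) case it degrades to O(n).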
Properties of Binary Trees
• The maximum number of nodes on level i is 2^(i-1), i >= 1
• The maximum number of nodes in a binary tree of depth k is 2^k - 1, k >= 1
• For any non-empty binary tree T, if n0 is the number of leaf nodes and n2 is the number of nodes
of degree 2, then n0 = n2 + 1
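The depth bound is a simple power of two, so it can be computed with a shift. A tiny illustrative helper (assuming k stays well below the bit width of int):

```c
/* maximum number of nodes in a binary tree of depth k: 2^k - 1 */
int max_nodes(int k)
{
    return (1 << k) - 1;   /* 1 << k computes 2^k */
}
```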

6.6 The internal sorting model

The simplest algorithms usually take O(n²) time to sort n objects and are only useful for
sorting short lists. One of the most popular sorting algorithms is quick sort, which takes O(n log n)
time on average. Quick sort works well for most common applications, although, in the worst
case, it can take O(n²) time. There are other methods, such as heap sort and merge sort, that take
O(n log n) time in the worst case, although their average-case behavior may not be quite as good as
that of quick sort. Merge sort, however, is well suited for external sorting. We shall consider
several other algorithms called "bin" or "bucket" sorts. These algorithms work only on special
kinds of data, such as integers chosen from a limited range, but when applicable, they are
remarkably fast, taking only O(n) time in the worst case.
We assume that the objects to be sorted are records consisting of one or more fields. One
of the fields, called the key, is of a type for which a linear ordering relationship <= is defined.
Integers, reals, and arrays of characters are common examples of such types, although we may
generally use any key type for which a "less than" or "less than or equal to" relation is defined.

The sorting problem is to arrange a sequence of records so that the values of their key fields form
a nondecreasing sequence. That is, given records r1, r2, ..., rn with key values k1, k2, ..., kn, respectively,
we must produce the same records in an order ri1, ri2, ..., rin, such that ki1 <= ki2 <= ... <= kin.
The records need not all have distinct key values, nor do we require that records with the same key
value appear in any particular order.

We shall use several criteria to evaluate the running time of an internal sorting algorithm. The
first and most common measure is the number of algorithm steps required to sort n records.
Another common measure is the number of comparisons between keys that must be made to sort
n records. This measure is often useful when a comparison between a pair of keys is a relatively
expensive operation, as when keys are long strings of characters. If the size of records is large, we
may also want to count the number of times a record must be moved. The application at hand
usually makes the appropriate cost measure evident.
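The comparison-count measure can be made concrete by routing every key comparison through a counting helper. Here it is attached to a simple insertion sort; the names (comparisons, greater, insertion_sort_counted) are illustrative:

```c
static long comparisons;     /* number of key comparisons performed */

/* every key comparison goes through this helper so it can be counted */
static int greater(int x, int y)
{
    comparisons++;
    return x > y;
}

/* ordinary insertion sort, instrumented to count key comparisons */
void insertion_sort_counted(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i], j = i - 1;
        while (j >= 0 && greater(a[j], key)) {
            a[j + 1] = a[j];         /* shift larger items right */
            j--;
        }
        a[j + 1] = key;
    }
}
```

Instrumenting a sort this way lets one check claims such as "insertion sort makes about n²/4 comparisons on random input" experimentally rather than taking them on faith.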

6.7 Bubble sort

The bubble sort is the oldest and simplest sort in use. Unfortunately, it is also the slowest. The
bubble sort works by comparing each item in the list with the item next to it, and swapping them
if required. The algorithm repeats this process until it makes a pass all the way through the list
without swapping any items (in other words, all items are in the correct order). This causes larger
values to "bubble" to the end of the list while smaller values "sink" towards the beginning of the
list.
The bubble sort is generally considered to be the most inefficient sorting algorithm in
common usage. Under best-case conditions (the list is already sorted), the bubble sort can
approach a constant O(n) level of complexity. The general case is an abysmal O(n²).
While the insertion, selection, and shell sorts also have O(n²) complexities, they are
significantly more efficient than the bubble sort.
Pros: Simplicity and ease of implementation.
Cons: Horribly inefficient.
Source Code
Below is the basic bubble sort algorithm.
void bubbleSort(int numbers[], int array_size)
{
int i, j, temp;

for (i = (array_size - 1); i >= 0; i--)


{
for (j = 1; j <= i; j++)
{
if (numbers[j-1] > numbers[j])

{
temp = numbers[j-1];
numbers[j-1] = numbers[j];
numbers[j] = temp;
}
}
}
}

• Unsorted array:
• 45 67 12 34 25 39

• Effective array of size 6:


• 45 12 34 25 39 67

• Effective array of size 5:


• 12 34 25 39 45

• Effective array of size 4:


• 12 25 34 39

• Effective array of size 3:


• 12 25 34

• Effective array of size 2:


• 12 25

• Effective array of size 1:


• 12

• Sorted array:
• 12 25 34 39 45 67

6.8 Insertion Sort


The insertion sort works just like its name suggests - it inserts each item into its proper
place in the final list. The simplest implementation of this requires two list structures - the source
list and the list into which sorted items are inserted. To save memory, most implementations use
an in-place sort that works by moving the current item past the already sorted items and
repeatedly swapping it with the preceding item until it is in place.
Like the bubble sort, the insertion sort has a complexity of O(n²). Although it has the same
complexity, the insertion sort is a little over twice as efficient as the bubble sort.
Pros: Relatively simple and easy to implement.
Cons: Inefficient for large lists.
The insertion sort is a good middle-of-the-road choice for sorting lists of a few thousand
items or less. The algorithm is significantly simpler than the shell sort, with only a small trade-off
in efficiency. At the same time, the insertion sort is over twice as fast as the bubble sort and
almost 40% faster than the selection sort. The insertion sort shouldn't be used for sorting lists
larger than a couple thousand items or repetitive sorting of lists larger than a couple hundred
items.
Source Code
Below is the basic insertion sort algorithm.

void insertionSort(int numbers[], int array_size)


{
int i, j, index;

for (i=1; i < array_size; i++)


{
index = numbers[i];
j = i;
while ((j > 0) && (numbers[j-1] > index))
{
numbers[j] = numbers[j-1];
j = j - 1;
}
numbers[j] = index;
}
}

6.9 Selection Sort


The selection sort works by selecting the smallest unsorted item remaining in the list, and
then swapping it with the item in the next position to be filled. The selection sort has a complexity
of O(n²).
Pros: Simple and easy to implement.
Cons: Inefficient for large lists, so similar to the more efficient insertion sort that the insertion
sort should be used in its place.
The selection sort is the unwanted stepchild of the O(n²) sorts. It yields a 60% performance
improvement over the bubble sort, but the insertion sort is over twice as fast as the bubble sort
and is just as easy to implement as the selection sort. In short, there really isn't any reason to use
the selection sort - use the insertion sort instead.
If you really want to use the selection sort for some reason, try to avoid sorting lists of
more than about 1,000 items with it, or repetitively sorting lists of more than a couple hundred items.
Source Code
Below is the basic selection sort algorithm.

void selectionSort(int numbers[ ], int array_size)


{
int i, j;
int min, temp;

for (i = 0; i < array_size-1; i++)
{
min = i;
for (j = i+1; j < array_size; j++)
{
if (numbers[j] < numbers[min])
min = j;
}
temp = numbers[i];
numbers[i] = numbers[min];
numbers[min] = temp;
}
}

6.10 Heap sort


The heap sort is the slowest of the O(n log n) sorting algorithms, but unlike the merge and
quick sorts it doesn't require massive recursion or multiple arrays to work. This makes it the most
attractive option for very large data sets of millions of items.
The heap sort works as it name suggests - it begins by building a heap out of the data set,
and then removing the largest item and placing it at the end of the sorted array. After removing
the largest item, it reconstructs the heap and removes the largest remaining item and places it in
the next open position from the end of the sorted array. This is repeated until there are no items
left in the heap and the sorted array is full. Elementary implementations require two arrays - one
to hold the heap and the other to hold the sorted elements.
To do an in-place sort and save the space the second array would require, the algorithm
below "cheats" by using the same array to store both the heap and the sorted array. Whenever an
item is removed from the heap, it frees up a space at the end of the array that the removed item
can be placed in.
Pros: In-place and non-recursive, making it a good choice for extremely large data sets.
Cons: Slower than the merge and quick sorts.
Source Code
Below is the basic heap sort algorithm. The siftDown() function builds and reconstructs the heap.

void siftDown(int numbers[ ], int root, int bottom);

void heapSort(int numbers[ ], int array_size)
{
  int i, temp;

  /* build the heap: sift down every non-leaf node */
  for (i = (array_size / 2) - 1; i >= 0; i--)
    siftDown(numbers, i, array_size - 1);

  /* repeatedly move the largest item to the end and re-heapify */
  for (i = array_size - 1; i >= 1; i--)
  {
    temp = numbers[0];
    numbers[0] = numbers[i];
    numbers[i] = temp;
    siftDown(numbers, 0, i - 1);
  }
}

void siftDown(int numbers[ ], int root, int bottom)
{
  int done, maxChild, temp;

  done = 0;
  /* with 0-based indexing, the children of root are 2*root + 1 and 2*root + 2 */
  while ((root * 2 + 1 <= bottom) && (!done))
  {
    if (root * 2 + 1 == bottom)
      maxChild = root * 2 + 1;
    else if (numbers[root * 2 + 1] > numbers[root * 2 + 2])
      maxChild = root * 2 + 1;
    else
      maxChild = root * 2 + 2;

    if (numbers[root] < numbers[maxChild])
    {
      temp = numbers[root];
      numbers[root] = numbers[maxChild];
      numbers[maxChild] = temp;
      root = maxChild;
    }
    else
      done = 1;
  }
}

A complete program of heap sort


#include <stdlib.h>
#include <stdio.h>
#include <time.h>

#define NUM_ITEMS 100
void heapSort(int numbers[], int array_size);
void siftDown(int numbers[], int root, int bottom);
int numbers[NUM_ITEMS];
int main()
{
int i;

//seed random number generator
srand((unsigned) time(NULL));

//fill array with random integers


for (i = 0; i < NUM_ITEMS; i++)
numbers[i] = rand();

//perform heap sort on array


heapSort(numbers, NUM_ITEMS);

printf("Done with sort.\n");

for (i = 0; i < NUM_ITEMS; i++)


printf("%i\n", numbers[i]);

return 0;
}

void heapSort(int numbers[], int array_size)
{
  int i, temp;
  for (i = (array_size / 2) - 1; i >= 0; i--)
    siftDown(numbers, i, array_size - 1);

  for (i = array_size - 1; i >= 1; i--)
  {
    temp = numbers[0];
    numbers[0] = numbers[i];
    numbers[i] = temp;
    siftDown(numbers, 0, i - 1);
  }
}

void siftDown(int numbers[], int root, int bottom)
{
  int done, maxChild, temp;

  done = 0;
  /* with 0-based indexing, the children of root are 2*root + 1 and 2*root + 2 */
  while ((root * 2 + 1 <= bottom) && (!done))
  {
    if (root * 2 + 1 == bottom)
      maxChild = root * 2 + 1;
    else if (numbers[root * 2 + 1] > numbers[root * 2 + 2])
      maxChild = root * 2 + 1;
    else
      maxChild = root * 2 + 2;

    if (numbers[root] < numbers[maxChild])
    {
      temp = numbers[root];
      numbers[root] = numbers[maxChild];
      numbers[maxChild] = temp;
      root = maxChild;
    }
    else
      done = 1;
  }
}

6.11 Divide and conquer


6.11.1 Merge sort
The merge sort splits the list to be sorted into two equal halves, and places them in
separate arrays. Each array is recursively sorted, and then merged back together to form the final
sorted list. Like most recursive sorts, the merge sort has an algorithmic complexity of O(n log n).
Elementary implementations of the merge sort make use of three arrays - one for each half
of the data set and one to store the sorted list in. The algorithm below merges within the original
array using a single auxiliary array, so only two arrays are required. There are non-recursive versions
of the merge sort, but they don't yield any significant performance enhancement over the recursive
algorithm on most machines.
Pros: Marginally faster than the heap sort for larger sets.
Cons: At least twice the memory requirements of the other sorts; recursive.
The merge sort is slightly faster than the heap sort for larger sets, but it requires twice the
memory of the heap sort because of the second array. This additional memory requirement makes
it unattractive for most purposes - the quick sort is a better choice most of the time and the heap
sort is a better choice for very large sets.
Like the quick sort, the merge sort is recursive which can make it a bad choice for
applications that run on machines with limited memory.
Source Code
Below is the basic merge sort algorithm.

void mergeSort(int numbers[], int temp[], int array_size)
{
  m_sort(numbers, temp, 0, array_size - 1);
}

void m_sort(int numbers[], int temp[], int left, int right)
{
  int mid;

  if (right > left)
  {
    mid = (right + left) / 2;
    m_sort(numbers, temp, left, mid);
    m_sort(numbers, temp, mid+1, right);
    merge(numbers, temp, left, mid+1, right);
  }
}

void merge(int numbers[], int temp[], int left, int mid, int right)
{
  int i, left_end, num_elements, tmp_pos;

  left_end = mid - 1;
  tmp_pos = left;
  num_elements = right - left + 1;

  while ((left <= left_end) && (mid <= right))
  {
    if (numbers[left] <= numbers[mid])
    {
      temp[tmp_pos] = numbers[left];
      tmp_pos = tmp_pos + 1;
      left = left + 1;
    }
    else
    {
      temp[tmp_pos] = numbers[mid];
      tmp_pos = tmp_pos + 1;
      mid = mid + 1;
    }
  }

  while (left <= left_end)
  {
    temp[tmp_pos] = numbers[left];
    left = left + 1;
    tmp_pos = tmp_pos + 1;
  }
  while (mid <= right)
  {
    temp[tmp_pos] = numbers[mid];
    mid = mid + 1;
    tmp_pos = tmp_pos + 1;
  }

  /* copy the merged run back; note i < num_elements, not <= */
  for (i = 0; i < num_elements; i++)
  {
    numbers[right] = temp[right];
    right = right - 1;
  }
}

6.11.2 Quick sort

Quick sort is a very efficient sorting algorithm invented by C.A.R. Hoare. It has two phases:
• The partition phase and
• The sort phase.
As we will see, most of the work is done in the partition phase - it works out where to divide the
work. The sort phase simply sorts the two smaller problems that are generated in the partition
phase.
This makes quick sort a good example of the divide and conquer strategy for solving
problems. (You've already seen an example of this approach in the binary search procedure.) In
quick sort, we divide the array of items to be sorted into two partitions and then call the quick
sort procedure recursively to sort the two partitions i.e. we divide the problem into two smaller
ones and conquer by solving the smaller ones. Thus the conquer part of the quick sort routine
looks like this:

quicksort( void *a, int low, int high )
{
  int pivot;
  /* Termination condition! */
  if ( high > low )
  {
    pivot = partition( a, low, high );   /* initial step - first partition */
    quicksort( a, low, pivot-1 );        /* sort left partition in the same way */
    quicksort( a, pivot+1, high );       /* sort right partition in the same way */
  }
}

For the strategy to be effective, the partition phase must ensure that all the items in one
part (the lower part) are less than all those in the other (upper) part.
To do this, we choose a pivot element and arrange that all the items in the lower part are
less than the pivot and all those in the upper part greater than it. In the most general case, we
don't know anything about the items to be sorted, so any choice of pivot element will do -
the first element is a convenient one.
As an illustration of this idea, consider a partition algorithm in which items to be sorted
are copied from the original array to a new one: items smaller than the pivot are placed to the left
of the new array and items greater than the pivot are placed on the right. In the final step, the
pivot is dropped into the remaining slot in the middle.
6.11.3 Binary search
However, if we place our items in an array and sort them in either ascending or
descending order on the key first, then we can obtain much better performance with an algorithm
called binary search.
In binary search, we first compare the key with the item in the middle position of the
array. If there's a match, we can return immediately. If the key is less than the middle key, then
the item sought must lie in the lower half of the array; if it's greater then the item sought must lie
in the upper half of the array. So we repeat the procedure on the lower (or upper) half of the array.
The FindInCollection function can now be implemented:
static void *bin_search( collection c, int low, int high, void *key ) {
  int mid, cmp;
  /* Termination check */
  if (low > high) return NULL;
  mid = (high+low)/2;
  /* memcmp returns any negative or positive value, not necessarily -1 or +1,
     so test the sign rather than switching on it */
  cmp = memcmp(ItemKey(c->items[mid]), key, c->size);
  if (cmp == 0)
    /* Match, return item found */
    return c->items[mid];
  else if (cmp > 0)
    /* key is less than mid, search lower half */
    return bin_search( c, low, mid-1, key );
  else
    /* key is greater than mid, search upper half */
    return bin_search( c, mid+1, high, key );
}

void *FindInCollection( collection c, void *key ) {
  /* Find an item in a collection
     Pre-condition:
       c is a collection created by ConsCollection
       c is sorted in ascending order of the key
       key != NULL
     Post-condition: returns an item identified by key if
       one exists, otherwise returns NULL
  */
  int low, high;
  low = 0; high = c->item_cnt-1;
  return bin_search( c, low, high, key );
}
Points to note:
a. bin_search is recursive: it determines whether the search key lies in the lower or upper
half of the array, then calls itself on the appropriate half.
b. There is a termination condition (two of them in fact!)
i. If low > high then the partition to be searched has no elements in it and
ii. If there is a match with the element in the middle of the current partition, then we
can return immediately.
c. AddToCollection will need to be modified to ensure that each item added is placed in its
correct place in the array. The procedure is simple:
i. Search the array until the correct spot to insert the new item is found,
ii. Move all the following items up one position and
iii. Insert the new item into the empty position thus created.
d. bin_search is declared static. It is a local function and is not used outside this class: if it
were not declared static, it would be exported and be available to all parts of the program.
The static declaration also allows other classes to use the same name internally.
Analysis

Each step of the algorithm divides the block of items being searched in half. We can divide a
set of n items in half at most log2 n times. Thus the running time of a binary search is
proportional to log n and we say this is an O(log n) algorithm.

Binary search requires a more complex program than our original search, and thus for small n
it may run slower than the simple linear search. However, for large n, log n is much smaller
than n, so an O(log n) algorithm is much faster than an O(n) one.

[Figure: plot of n and log n versus n.]

In the worst case, insertion may require n operations to insert into a sorted list.
1. We can find the place in the list where the new item belongs using binary search in O(log n) operations.
2. However, we have to shuffle all the following items up one place to make way for the new one. In the worst case, the new item is the first in the list, requiring n move operations for the shuffle!
A similar analysis will show that deletion is also an O(n) operation.
If our collection is static, i.e., it doesn't change very often - if at all - then we may not be
concerned with the time required to change its contents: we may be prepared for the initial
build of the collection and the occasional insertion and deletion to take some time. In
return, we will be able to use a simple data structure (an array) which has little memory
overhead.
However, if our collection is large and dynamic, i.e., items are being added and deleted
continually, then we can obtain considerably better performance using a data structure
called a tree.

Binary search requires an ordered list.

Iterative Algorithm
int find (const apvector &list, double target)
// pre: list is sorted in ascending order
//post: ITERATIVE binary search will return the index of the target element, else -1
{
  int mid;
  int first = 0;
  int last = list.length( ) - 1;
  while ( first <= last )
  {
    mid = (first + last) / 2;
    if ( list[mid] == target )
      return mid;
    if ( list[mid] > target )
      last = mid - 1;
    else first = mid + 1;
  }
  return -1;
}

Recursive Algorithm
int find (const apvector &list, double target, int first, int last)
// pre: list is sorted in ascending order
//post: RECURSIVE binary search will return the index of the target element, else -1
{
  if (first > last)
    return -1;
  int mid = (first + last) / 2;
  if (list[mid] == target)
    return mid;
  if (list[mid] < target)
    return find(list, target, mid+1, last);
  return find(list, target, first, mid-1);
}
UNIT 7

Algorithms for External Storage

7.1 A Model of External Computation


7.2 External sorting
7.3 Characteristics of External Sorting
7.4 Criteria for Developing an External Sorting Algorithm
7.5 Important Uses of External Sorting
7.6 Merge Sort--A Digression
7.7 Top-Down Strategy
7.8 Bottom-Up Strategy
7.9 Storing Information in Files
7.10 Hashed Files
7.11 Indexed Files

7.1 A Model of External Computation

In the algorithms discussed so far, we have assumed that the amount of input data is
sufficiently small to fit in main memory at the same time. But what if we want to sort all the
employees of the government by length of service or store all the information in the nation’s tax
returns? In such problems the amount of data to be processed exceeds the capacity of the main
memory. Most large computer systems have on-line external storage devices, such as disks or
mass storage devices, on which vast quantities of data can be stored. These on-line storage
devices, however, have access characteristics that are quite different from those of main memory.
A number of data structures and algorithms have been developed to utilize these devices more
effectively. This chapter discusses data structures and algorithms for sorting and retrieving
information stored in secondary memory.

Pascal, and some other languages, provides the file data type, which is intended to
represent data stored in secondary memory. Even if the language being used does not have a file
data type, the operating system undoubtedly supports the notion of files in secondary memory.
Whether we are talking about Pascal files or files maintained by the operating system directly, we
are faced with limitations on how files may be accessed. The operating system divides secondary
memory into equal-sized blocks. The size of blocks varies among operating systems, but 512 to
4096 bytes is typical.

We may regard a file as stored in a linked list of blocks, although more typically the
operating system uses a tree-like arrangement, where the blocks holding the file are leaves of the
tree, and interior nodes each hold pointers to many blocks of the file. If, for example, 4 bytes
suffice to hold the address of a block, and blocks are 4096 bytes long, then a root block can hold
pointers to up to 1024 blocks. Thus, files of up to 1024 blocks, i.e., about four million bytes,
could be represented by a root block and the blocks holding the file. Files of up to 2^20 blocks, or 2^32
bytes, could be represented by a root block pointing to 1024 blocks at an intermediate level, each
of which points to 1024 leaf blocks holding a part of the file, and so on.

The basic operation on files is to bring a single block to a buffer in main memory; a buffer
is simply a reserved area of main memory whose size is the same as the size of a block. A typical
operating system facilitates reading the blocks in the order in which they appear in the list of
blocks that holds the file. That is, we initially read the first block into the buffer for that file, and
then replace it by the second block, which is written into the same buffer, and so on.

We can now see the rationale behind the rules for reading Pascal files. Each file is stored
in a sequence of blocks, with a whole number of records in each block. (Space may be wasted, as
we avoid having one record split across block boundaries.) The read-cursor always points to one
of the records in the block that is currently in the buffer. When that cursor must move to a record
not in the buffer, it is time to read the next block of the file.

Similarly, we can view the Pascal file-writing process as one of creating a file in a buffer. As
records are “written” into the file, they are placed in the buffer for that file, in the position
immediately following any previously placed records. When the buffer cannot hold another
complete record, the buffer is copied into an available block of secondary storage and that block is
appended to the end of the list of blocks for that file. We can now regard the buffer as empty and
write more records into it.

7.2 External sorting

This term is used to refer to sorting methods that are employed when the data to be sorted is too
large to fit in primary memory.

Why Do We Need New Algorithms?


Most of the internal sorting algorithms take advantage of the fact that memory is directly
addressable. Shell sort compares elements A[i] and A[i - hk] in one time unit. Heap sort
compares elements A[i] and A[2i + 1] in one time unit. Quick sort, with median-of-three
partitioning, requires comparing A[left], A[center], and A[right] in a constant number of time
units. If the input is on a tape, it can only be accessed sequentially. Even if the data is on a disk,
there is still a practical loss of efficiency because of the delay required to spin the disk and move the disk head.
To see how slow external accesses really are, create a student file that is large, but not too big to fit
in main memory. Read the file in and sort it using an efficient algorithm. The time it takes to sort
the input is certain to be insignificant compared to the time to read the input, even though sorting
is an O(n log n) operation and reading the input is only O(n).

Model for External Sorting


The wide variety of mass storage devices makes external sorting much more device-dependent
than internal sorting. The algorithms that we will consider work on tapes, which are probably the
most restrictive storage medium. Since access to an element on tape is done by winding the tape
to the correct location, tapes can be efficiently accessed only in sequential order (in either
direction).
We will assume that we have at least three tape drives to perform the sorting. We need two
drives to do an efficient sort; the third drive simplifies matters. If only one tape drive is present,
then we are in trouble: any algorithm will require Ω(N^2) tape accesses.

The Simple Algorithm


The basic external sorting algorithm uses the Merge routine from mergesort. Suppose we have
four tapes Ta1, Ta2, Tb1, Tb2, which are two input and two output tapes. Depending on the point in
the algorithm, the a and b tapes are either input tapes or output tapes. Suppose the data is
initially on Ta1. Suppose further that the internal memory can hold (and sort) M records at a time.
A natural first step is to read M records at a time from the input tape, sort the records internally,
and then write the sorted records alternately to Tb1 and Tb2. We will call each set of sorted records
a run. When this is done, we rewind all the tapes. Suppose we have the same input as our
example for Shellsort.

Ta1 81 94 11 96 12 35 17 99 28 58 41 75 15
Ta2
Tb1
Tb2

If M = 3, then after the runs are constructed, the tapes will contain the data indicated in the
following figure.
Ta1
Ta2
Tb1 11 81 94 17 28 99 15
Tb2 12 35 96 41 58 75

Now, Tb1 and Tb2 contain a group of runs. We take the first run from each tape and merge them,
writing the result, which is a run twice as long, onto Ta1. Then we take the next run from each
tape, merge these, and write the result to Ta2. We continue this process, alternating between
Ta1 and Ta2, until either Tb1 or Tb2 is empty. At this point either both are empty or there is one run
left. In the latter case, we copy this run to the appropriate tape. We rewind all four tapes and
repeat the same steps, this time using the a tapes as input and the b tapes as output. This will
give runs of length 4M. We continue the process until we get one run of length N.
This algorithm will require ⌈log(N/M)⌉ passes, plus the initial run-constructing pass. For instance,
if we have 10 million records of 128 bytes each, and four megabytes of internal memory, then the
first pass will create 320 runs. We would then need nine more passes to complete the sort. Our
example requires ⌈log(13/3)⌉ = 3 more passes, which are shown in the following figure.

Ta1 11 12 35 81 94 96 15
Ta2 17 28 41 58 75 99
Tb1
Tb2

Ta1
Ta2
Tb1 11 12 17 28 35 41 58 75 81 94 96 99
Tb2 15

Ta1 11 12 15 17 28 35 41 58 75 81 94 96 99
Ta2
Tb1
Tb2

7.3 Characteristics of External Sorting

1. During the sort, some of the data must be stored externally. Typically the data will be
stored on tape or disk.
2. The cost of accessing data is significantly greater than either bookkeeping or comparison
costs.
3. There may be severe restrictions on access. For example, if tape is used, items must be
accessed sequentially.

7.4 Criteria for Developing an External Sorting Algorithm

1. Minimize number of times an item is accessed.


2. Access items in sequential order

7.5 Important Uses of External Sorting

1. Business applications where a "transaction" file updates a master file. Examples:
   o Updating an inventory database based on sales.
   o Updating a personnel database based on new hires, promotions, dismissals, etc.
2. Database applications:
   o Projection: The user requests a subset of the fields in a file. When a subset of the
     fields is taken, there might be duplicate records, so an external sort is used to
     remove duplicates.
   o Join: Two files are "joined" on a common field(s) to create a new file whose fields are
     the union of the fields of the two files. The two files must be sorted so that the
     "join" fields can be matched.
3. Example: Suppose one database contains information about courses and rooms and
   another database contains information about students and courses. To find out which
   classrooms are being used by CS students, one could:
   o Select from the student database records with student.major = CS.
   o Join the records selected in the previous query with the courses database using the
     common courses field.
   o Project the result on rooms to produce a listing of the rooms being used by CS students.

7.6 Merge Sort--A Digression

Merge sort is an ideal candidate for external sorting because it satisfies the two criteria for
developing an external sorting algorithm. Merge sort can be implemented either top-down or
bottom-up. The top-down strategy is typically used for internal sorting, whereas the bottom-up
strategy is typically used for external sorting.

7.7 Top-Down Strategy

The top-down strategy works by:


1. Dividing the data in half
2. Sorting each half
3. Merging the two halves
Example
Try using this strategy on the following set of characters (ignore the blanks):
a sorting example
Code
This section presents code for implementing top-down mergesort using arrays.

Merge code for arrays


/* a[1..M] and b[1..N] are sorted; the sentinels placed in a[M+1]
   and b[N+1] guarantee neither index runs off the end */
i = 1;
j = 1;
a[M+1] = INT_MAX;
b[N+1] = INT_MAX;
for (k = 1; k <= M+N; k++)
  c[k] = (a[i] < b[j]) ? a[i++] : b[j++];

Array Mergesort
mergesort(int a[], int left, int right)
{
  int i, j, k, mid;

  if (right > left) {
    mid = (right + left) / 2;
    mergesort(a, left, mid);
    mergesort(a, mid+1, right);
    /* copy the first run into array b (b and c are auxiliary
       arrays declared elsewhere, large enough to hold a run
       plus its sentinel) */
    for (i = left, j = 0; i <= mid; i++, j++)
      b[j] = a[i];
    b[j] = INT_MAX;
    /* copy the second run into array c */
    for (i = mid+1, j = 0; i <= right; i++, j++)
      c[j] = a[i];
    c[j] = INT_MAX;
    /* merge the two runs */
    i = 0;
    j = 0;
    for (k = left; k <= right; k++)
      a[k] = (b[i] < c[j]) ? b[i++] : c[j++];
  }
}

• This code is a more straightforward but less elegant version of mergesort than the
mergesort routine presented on page 166 of Sedgewick.
• My code incorporates the merging code shown in the first bullet.
• Sedgewick's code uses a characteristic of the two halves of the data to cleverly avoid using
sentinels. Right before the merging code begins, he also manages to avoid the assignment i
= left by loading the first half of the array from back to front (i.e., by decrementing i rather
than incrementing i).
• Which is faster? I coded up both algorithms and ran them on 100,000 elements. The result
was a virtual dead heat (mine ran in 2.93 seconds versus 2.95 seconds for Sedgewick).
• What is the moral? Sedgewick's code is undeniably more elegant than mine. However, my
code is more straightforward and could be more easily understood by a programmer trying
to maintain the code. Since the two programs run almost identically fast, I would prefer
my code since it's easier to understand. The moral here is that unless you can get a
significant performance improvement from clever but subtle algorithmic tricks, you
should use the more straightforward approach. If you can get significant performance
improvements, then you should clearly document the tricks you used so that a
programmer unfamiliar with your code can figure out what you were doing.

7.8 Bottom-Up Strategy

The bottom-up strategy is the strategy that we will use for external sorting. The bottom-up
strategy for mergesort works by:

1. Scanning through the data performing 1-by-1 merges to get sorted lists of size 2.
2. Scanning through the size-2 sublists and performing 2-by-2 merges to get sorted lists of size 4.
3. Continuing the process of scanning through size-n sublists and performing n-by-n merges
to get sorted lists of size 2n until the file is sorted (i.e., 2n >= N, where N is the size of the
file).

Example
Try implementing this strategy on the following set of characters (ignore the blanks):
a sorting example
You get the following runs:
as, or, it, gn, ex, am, lp, e
aors, gint, aemx, elp
aginorst, aeelmpx
aaeegilmnoprstx
Code for Bottom-Up Strategy
The code for doing an in-memory sort using the bottom-up strategy is much more complicated
than for the top-down strategy. In general, one is better off using the top-down strategy if one
wants to use mergesort to perform an internal memory sort.
Mergesort Performance
1. Mergesort requires about N lg N comparisons to sort any file of N elements.

o Bottom Up: Each M-by-N merge requires M+N comparisons. Each merge doubles
the size of the runs, so lg N passes are required and each pass requires about N
comparisons.
o Top Down: The problem is broken into two halves, the two halves are sorted, and
then N comparisons are used to merge them. This gives the recurrence:
o M[N] = 2M[N/2] + N with M[1] = 0
2. Mergesort requires extra space proportional to N.
3. Mergesort is stable.
4. Mergesort is insensitive to the initial order of its input. In other words, it will require
roughly N lg N comparisons on any input.
Programs
The programs take one argument, the name of a file to sort, and output the sorted file (they do not
write to the file so the file remains unsorted). For example, one could type:
array_mergesort my_file
Test data can be generated using the program generate_mergesort_data, which takes the
number of integers to be generated as an argument and outputs the desired number of integers.
The integers are generated randomly. For example, one could type:
generate_mergesort_data 10 > my_file
array_mergesort my_file

7.9 Storing Information in Files

In this section we consider data structures and algorithms for storing and retrieving
information in externally stored files. We shall view a file as a sequence of records, where each
record consists of the same sequence of fields. Fields can be either fixed-length, having a
predetermined number of bytes, or variable-length, having an arbitrary size. Files with fixed-length
records are commonly used in database management systems to store highly structured
data. Files with variable-length records are typically used to store textual information; they are
not available in Pascal. In this section we shall assume fixed-length fields; the fixed-length
techniques can be modified simply to work for variable-length records.
The operations on files we shall consider are the following:
a. INSERT a particular record into a particular file.
b. DELETE from a particular file all records having a designated value in each of a designated
set of fields.
c. MODIFY all records in a particular file by setting certain fields to designated values in
those records that have a designated value in each of another set of fields.
d. RETRIEVE all records having designated values in each of a designated set of fields.
A Simple Organization
The simplest, and also least efficient, way to implement the above file operations is to use
the file reading and writing primitives such as those found in Pascal. In this "organization" (which is
really a "lack of organization"), records can be stored in any order. Retrieving a record with
specified values in certain fields is achieved by scanning the file and looking at each record to see
if it has the specified values. An insertion into a file can be performed by appending the record to
the end of the file.
For modification of records, scan the file and check each record to see if it matches the
designated fields, and if so, make the required changes to the record. A deletion operation works
almost the same way, but when we find a record whose fields match the values required for the
deletion to take place, we must find a way to delete the record. One possibility is to shift all
subsequent records one position forward in their blocks, and move the first record of each
subsequent block into the last position of the previous block of the file. However, this approach
will not work if records are pinned, because a pointer to the ith record in the file would then point
to the (i+1)st record.
If records are pinned, we must use a somewhat different approach. We mark deleted
records in some way, but we do not move records to fill their space, nor do we ever insert a new
record into their space. Thus, the record becomes deleted logically from the file, but its space is
still used for the file.
This is necessary so that if we ever follow a pointer to a deleted record, we shall discover
that the record pointed to was deleted and take some appropriate action, such as making the
pointer NIL so it will not be followed again. Two ways to mark records as deleted are:

1. Replace the record by some value that could never be the value of a "real" record, and
when following a pointer, assume the record is deleted if it has that value.
2. Let each record have a deletion bit, a single bit that is 1 in records that have been deleted
and 0 otherwise.

7.10 Hashed Files

Hashing is a common technique used to provide fast access to information stored in
files on secondary storage. We divide the records of a file among buckets, each consisting of a linked list of
one or more blocks of external storage. The organization is similar to that portrayed in Fig. 4.10.
There is a bucket table containing B pointers, one for each bucket. Each pointer in the bucket
table is the physical address of the first block of the linked list of blocks for that bucket.
The buckets are numbered 0, 1, ..., B-1. A hash function h maps each key value into one
of the integers 0 through B-1. If x is a key, h(x) is the number of the bucket that contains the
record with key x, if such a record is present at all. The blocks making up each bucket are
chained together in a linked list. Thus, the header of the ith block of a bucket contains a pointer to
the physical address of the (i+1)st block. The last block of a bucket contains a NIL pointer in its
header. The major difference from in-memory list structures is that elements stored in one block of a bucket do not need to be
chained by pointers; only the blocks need to be chained.

[Figure: hashing with buckets consisting of chained blocks. A bucket directory of B pointers heads the chains; each block in a chain holds several records R1, R2, R3, ...]

If the size of the bucket table is small, it can be kept in main memory. Otherwise, it can
be stored sequentially on as many blocks as necessary. To look for the record with key x, we
compute h(x), and find the block of the bucket table containing the pointer to the first block of
bucket h(x). We then read the blocks of bucket h(x) successively, until we find a block that
contains the record with key x. If we exhaust all blocks in the linked list for bucket h(x), we
conclude that x is not the key of any record.

This structure is quite efficient if the operation is one that specifies values for the fields
in the key, such as retrieving the record with a specified key value or inserting a record (which,
naturally, specifies the key value for that record). The average number of block accesses required
for an operation that specifies the key of a record is roughly the average number of blocks in a
bucket, which is n/(bk) if n is the number of records, a block holds b records, and k is the number
of buckets. Thus, on the average, operations based on keys are k times faster with this
organization than with the unorganized file. Unfortunately, operations not based on keys are not
sped up, as we must examine essentially all the buckets during these other operations. The
only general way to speed up operations not based on keys seems to be the use of secondary
indices, which are discussed at the end of this section.
To insert a record with key value x, we first check to see if there is already a record with
key x. If there is, we report error, since we assume that the key uniquely identifies each record. If
there is no record with key x, we insert the new record in the first block in the chain for bucket h
(x) into which the record can fit. If the record cannot fit into any existing block in the chain for
bucket h (x), we call upon the file system to find a new block into which the record is placed. This
new block is then added to the end of the chain for bucket h (x).

To delete a record with key x, we first locate the record, and then set its deletion bit.
Another possible deletion strategy (which cannot be used if the records are pinned) is to replace
the deleted record with the last record in the chain for h (x). If the removal of the last record
makes the last block in the chain for h (x) empty, we can then return the empty block to the file
system for later re-use.

A well-designed hashed-access file organization requires only a few block accesses for
each file operation. If our hash function is good, and the number of buckets is roughly equal to
the number of records in the file divided by the number of records that can fit on one block, then
the average bucket consists of one block. Excluding the number of block accesses to search the
bucket table, a typical retrieval based on keys will then take one block access, and a typical
insertion, deletion, or modification will take two block accesses. If the average number of records
per bucket greatly exceeds the number that will fit on one block, we can periodically reorganize
the hash table by doubling the number of buckets and splitting each bucket into two.

7.11 Indexed Files

Another common way to organize a file of records is to maintain the file sorted by key
values. We can then search the file as we would a dictionary or telephone directory, scanning only
the first name or word on each page. To facilitate the search we can create a second file, called a
sparse index, which consists of pairs (x, b), where x is a key value and b is the physical address of
the block in which the first record has key value x. This sparse index is maintained sorted by key
values.

UNIT 8

Memory Management

8.1 The Issues in Memory


8.2 Garbage Collection Algorithms For Equal-Sized Block (Collection in Place)
8.3 Buddy system (Distribution of blocks, Allocation blocks, Returning blocks to available
storage)
8.4 Storage compaction and Compaction problem

8.1 The Issues in Memory

In actual practice, most systems use various combinations of different storage
allocation methods, because each method performs better under a different class of
demands. Most Pascal, C, and C++ implementations, for example, rely on a stack for local
variable allocation and a heap, typically implemented with a boundary tag method, for
dynamic or pointer variable allocation. On machines that use virtual memory, both the
stack and heap areas share a single virtual address space, which is constructed by a
combination of hardware and software from a number of pages. Thus, fixed-size blocks
(pages) are allocated to create the memory region from which the variable-size blocks on
the heap are later allocated!

Internal data structures in many operating systems are frequently allocated in terms of
a standard block size from a pool of such blocks, which is quite independent of the storage
allocation mechanism, used for allocation of page frames for user virtual address spaces.
For example, virtual address spaces might be composed of 4096 byte blocks, a size which
is also appropriate for disk buffers, but inappropriate for the records in service request
queues; the latter might be allocated from a separate pool of 16 or 32 byte blocks.

When dynamic data structures are maintained in virtual memory, poor locality of
reference can result. It is highly likely that a linked list, for example, will end up with each
list element stored in a separate page. Some heap managers have been constructed which
attempt to improve the locality of heap data structures, for example, by allocating
successive elements of a list in the same page when possible. This requires that the free
space data structures be organized, when possible, by page. It also requires that the
allocate routine be informed of the desired location of the allocated storage. The Pascal
new procedure has access to the address of the pointer to the newly allocated storage
because it returns its result in a parameter passed by reference, while the C malloc
function is not generally aware of this. As a result, an implementation of new could
attempt to allocate the new data structure in the same page as the pointer to the new data
structure, while this would be more difficult with malloc. Of course, even if new is coded to
allow this, it will only be useful if users are careful to call new(element^.next) to extend a
list, instead of using a temporary variable and then assigning that temporary to the
appropriate field of the data structure.

8.2 Garbage Collection Algorithms for Equal-Sized Block (Collection in Place)

The simplest general-purpose dynamic allocation mechanism of all requires that all of the available storage be divided into equal sized blocks. When this is done, the size field on the allocate and deallocate requests is not very meaningful, since only one size is available. If the user requests more than this amount, the request will fail; if the user requests less and there are any free blocks, the request will be successful and an entire block will be allocated no matter how little memory the user actually requests.

The heap manager must maintain enough information to determine which blocks in
the heap are free at any time. Typically, this information will be stored in the form of a
linked list, where the first word of each free block holds the address of the next free block.
As with chained resolution of forward references, there is a problem with representing the
end of the free list. Throughout this section, this problem will be ignored, and the symbol
NULL, standing for an illegal or reserved pointer value, will be used; it should be
remembered that there are occasional contexts where there are no reserved addresses,
requiring a more complex solution to the problem of marking the end of a list.

When writing a heap manager in assembly language or other low-level languages, where there is no difference between pointers and integers, it is natural to think of memory as one big array in which it is perfectly appropriate to do arithmetic on array indices. In higher level languages, where the type system discourages or even prevents this, we could take two approaches to explaining such algorithms. One approach is to explain the algorithm in terms of a large array of integers, M, with integer memory addresses. Another is to use pointers with carefully documented and very unsafe conversions between integer and pointer types.

In C, for example, *(int*)a is a pointer to the integer at the memory address a, while
*(char*)a refers to the character stored at the same memory address. The variable a used
in these examples could have been an integer or a pointer; in either case, it holds the
same memory address. In most C implementations, the integer occupies 4 successive
bytes of memory starting at this address, while the character occupies only one byte at
this address.

Using these conventions, the following C code allocates from a free list on the heap.

typedef void * pointer; /* used for untyped pointers */

int blocksize;    /* the fixed size of every block, in bytes */

pointer freelist; /* pointer to the first free block in memory */

pointer allocate( int size )
{
    if (size > blocksize) {
        error( "size too big" );
        return NULL;
    } else if (freelist == NULL) {
        error( "no space available" );
        return NULL;
    } else {
        pointer block;
        block = freelist;
        freelist = *(pointer *)freelist;
        return block;
    }
}

void deallocate( pointer block, int size )
{
    if (block != NULL) {
        *(pointer *)block = freelist;
        freelist = block;
    }
}
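These routines assume that the free list has already been threaded through the available blocks. A minimal sketch of that initialization, with a heap size and block size chosen purely for illustration:

```c
#include <stddef.h>

#define BLOCK 16     /* fixed block size; must be able to hold a pointer */
#define NBLOCKS 10   /* heap capacity, chosen for illustration */

typedef void * pointer;

static char heap[NBLOCKS * BLOCK];  /* the storage being managed */
static pointer freelist = NULL;

/* Thread the free list through the heap: the first word of each
   free block holds the address of the next free block. */
void init_freelist( void )
{
    int i;
    freelist = NULL;
    for (i = 0; i < NBLOCKS; i++) {
        pointer block = &heap[i * BLOCK];
        *(pointer *)block = freelist;
        freelist = block;
    }
}

/* Count the blocks currently on the free list. */
int count_free( void )
{
    int n = 0;
    pointer p = freelist;
    while (p != NULL) {
        n++;
        p = *(pointer *)p;
    }
    return n;
}
```

After init_freelist(), an allocate peels the first block off the list and a deallocate pushes it back, exactly as in the routines above.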

Although fixed size block allocation is presented here primarily to set the stage for
more interesting allocation algorithms, there are many places where it is the most
appropriate approach. When memory is divided into fixed length pages (or sectors on disk),
and a non-contiguous allocation scheme is being used, fixed size blocks are clearly
appropriate. Fixed size blocks are also appropriate for those applications where all
requests will be for the same size block. This latter situation shows up on many systems
where the only dynamically allocated storage is for input-output buffering, and all sectors
are of the same size. More commonly, there will be two or three common request sizes;
perhaps a small request for device control blocks, and a large one for input-output buffers.
When this is the case, fixed block size allocation can still be used, but only by maintaining
a separate heap for each size.

One disadvantage of a fixed block size is that, if a program needs to create a data
structure larger than the block size, the program must use multiple blocks for this
structure. Programmers who remember the early days of Microsoft's DOS and Apple's
MacOS are likely to remember a time when the largest memory increment the operating
system would allocate to an application was 64K bytes. In those settings, many
programmers were forced to write awkwardly structured programs to manipulate such
things as databases, images or other large objects.

Another disadvantage of a fixed block size is that, if a program needs blocks smaller than the fixed size, the standard size will be allocated anyway. As a result, each
user object will be stored in a container larger than necessary, wasting space. In each
block of memory allocated to a user, we refer to the unused space as a fragment of what
ought to be free space. Because this fragment is inside an allocated block, we refer to this
problem as internal fragmentation of the available free space. In general, any storage
allocation mechanism which will, on occasion, allocate blocks larger than the requested
size is said to cause some degree of internal fragmentation.

An effective measure of the extent to which a storage allocation algorithm leads to internal fragmentation is the percent of allocated space which goes unused because it is
contained in internal fragments. If the block sizes requested by users are uniformly
distributed between one and the allocated block size, the fixed size block mechanism
described in this section would result in about 50% waste due to internal fragmentation.
In fact, a uniform distribution of block sizes is quite uncommon! Most real problems only
require a few well-defined block sizes.

8.3 Buddy system (Distribution of blocks, Allocation blocks, Returning blocks to available storage)

One way of dealing with internal fragmentation is to allow a variety of block sizes. Blocks of each size can be allocated and deallocated by the use of a fixed size block allocate and deallocate mechanism, and if a block of one size is not available, a larger block can be allocated and a block of the desired size split off of it. When this is done, all blocks resulting from splitting a particular block are called buddies, and the block from which they were split is called their parent. The resulting storage allocation mechanism is said to use a buddy system. All buddy systems maintain an array of lists of free blocks, where all blocks in each list are the same size, and the array is indexed by a value computed from the size.

The oldest buddy system, the binary buddy system has block sizes that are powers
of two. Therefore, when a block is split, it is split exactly in half, and when blocks are
combined, two equal size blocks are combined to make a block twice as big. With the
binary buddy system, we arrange things so that blocks of size 2^n always begin at memory addresses where the n least significant bits are zero. Thus, blocks of size 1 (2^0) may begin at any address, but blocks of size 2 (2^1) may only begin at even addresses, and blocks of size 4 (2^2) only begin at addresses with the least significant 2 bits equal to zero.

The constraints on the block addresses in the binary buddy system have an interesting consequence. When a block of size 2^(n+1) is split into two blocks of size 2^n, the addresses of these two blocks will differ in exactly one bit, bit n, using the counting scheme that numbers bits starting with 0 at the least significant end. Thus, given a block of size 2^n at address a, we can compute the address of its buddy, the other half of the block from which it was split, by exclusive-oring a with 2^n. This leads to the allocate routine.

typedef void * pointer; /* used for untyped pointers */

#define sizes 32  /* the number of free lists, one per block size */

/* pointers to the free space lists */
pointer freelists[sizes];

/* blocks in freelists[i] are of size 2**i. */
#define BLOCKSIZE(i) (1 << (i))

/* the address of the buddy of a block from freelists[i]. */
#define BUDDYOF(b,i) ((pointer)( ((int)b) ^ (1 << (i)) ))

pointer allocate( int size )
{
    int i;

    /* compute i as the least integer such that i >= log2(size) */
    for (i = 0; BLOCKSIZE( i ) < size; i++);

    if (i >= sizes) {
        error( "no space available" );
        return NULL;
    } else if (freelists[i] != NULL) {

        /* we already have the right size block on hand */
        pointer block;
        block = freelists[i];
        freelists[i] = *(pointer *)freelists[i];
        return block;

    } else {

        /* we need to split a bigger block */
        pointer block, buddy;
        block = allocate( BLOCKSIZE( i + 1 ) );

        if (block != NULL) {
            /* split and put the extra half on a free list */
            buddy = BUDDYOF( block, i );
            *(pointer *)buddy = freelists[i];
            freelists[i] = buddy;
        }

        return block;
    }
}

There are two terminating conditions. As shown, recursion terminates when a sufficiently large block of free space is available; in this case, either that block is returned directly to the user, or it is split, perhaps repeatedly, and some piece of it is returned. If there is no sufficiently large block available, the algorithm terminates with an error. The same error message results when there is not enough free space as when the user requests a block larger than the largest allowed, because the recursion keeps asking for larger and larger blocks!

The allocate routine uses purely integer arithmetic to take the log of the requested block size. This is almost always faster than converting the block size to floating point, taking the log, and then rounding up to the nearest integer. Note that, in C, 1<<i is a very fast way to compute 2^i. Also, note that the exclusive-or of variables a and b in C is a^b.
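These bit manipulations can be checked in isolation. Here the buddy computation is done on plain integer addresses to sidestep the pointer casts; the macro names mirror those in the allocator above:

```c
/* 2**i, computed with a shift. */
#define BLOCKSIZE(i) (1 << (i))

/* The buddy of a block of size 2**i at integer address a. */
#define BUDDY(a,i) ((a) ^ (1 << (i)))

/* The least i such that 2**i >= size, by pure integer arithmetic. */
int size_index( int size )
{
    int i;
    for (i = 0; BLOCKSIZE( i ) < size; i++);
    return i;
}
```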

The deallocate routine for the binary buddy system is also recursive.

void deallocate( pointer block, int size )
{
    int i;
    pointer * p;
    pointer buddy;

    /* compute i as the least integer such that i >= log2(size) */
    for (i = 0; BLOCKSIZE( i ) < size; i++);

    /* see if this block's buddy is free */
    buddy = BUDDYOF( block, i );
    p = &freelists[i];
    while ((*p != NULL) && (*p != buddy)) p = (pointer *)*p;

    if (*p != buddy) {

        /* buddy not free, put block on its free list */
        *(pointer *)block = freelists[i];
        freelists[i] = block;

    } else {

        /* buddy found, remove it from its free list */
        *p = *(pointer *)buddy;

        /* deallocate the block and its buddy as one block */
        if (block > buddy) {
            deallocate( buddy, BLOCKSIZE( i + 1 ));
        } else {
            deallocate( block, BLOCKSIZE( i + 1 ));
        }
    }
}

The block's buddy may lie either directly above or directly below it in the heap, so there are two possible recursive calls to deallocate, one dealing with the former case and one with the latter. The recursion will continue so long as we keep finding buddies to combine with the successively larger blocks. Eventually, we must reach the point where the block's buddy is not found in a free list, and at that point, the block goes in that free list with no further recursion.

In the binary buddy system, each block has exactly one buddy, the identity of
which can be determined directly from the address and size of that block. At the start, all
of memory, or at least, all of the memory used for the heap, may be a single block, which
is then subdivided as required. Only when the user deallocates the very last block would
the buddy system be able to reconstruct this block. Illustrates this for a heap of 128 bytes
starting at address 0.

 _______________________________________________________________
|_______________________________________________________________|
0                                                               128

p1 = allocate(24);
 _______________________________________________________________
|\\\\\\\\\\\|///|_______________|_______________________________|
0               32              64                              128
      p1

p2 = allocate(12);
 _______________________________________________________________
|\\\\\\\\\\\|///|\\\\\|/|_______|_______________________________|
0               32      48      64                              128
      p1          p2

p3 = allocate(24); deallocate(p1,24)
 _______________________________________________________________
|_______________|\\\\\|/|_______|\\\\\\\\\\\|///|_______________|
0               32      48      64              96              128
                  p2                  p3

        ______________________________________
Key    |______|\\\\\\\\\\\\\|/////////////////|
         free   allocated    internal fragment

An example of the binary buddy system in use.

This example illustrates three things: First, the successive subdivision of blocks as
requests are made, second, the internal fragmentation resulting from allocating more than
was requested, and third, the external fragmentation resulting from the breaking up of the
free space into multiple fragments. By the end of the example, there are two free blocks of
size 32, but there is no way to combine these blocks to get a block of size 64 should one be
needed.

The cost of external fragmentation is not as easy to measure as that of internal fragmentation, but it can be partially characterized in terms of such statistics as the average number of free blocks and the distribution of free block sizes.

Consider the two blocks of size 16 starting at addresses 32 and 48. Should the block at address 32 be deallocated, these two must be combined into a new block of size 32 starting at address 32. This new block is the buddy of the block at address 0, and since that block is also free, they must be combined to make a block of size 64. The two blocks of size 32 starting at addresses 64 and 96 are also buddies.

What if the initial heap does not begin at zero? What if the size of the initial heap is
not a power of 2? The answer used with the binary buddy system is fairly straightforward.
The rules already given define what addresses are legal for each block, so if we start at a
random address, we must follow those rules, breaking the initial heap into whatever size
blocks work. Consider a heap starting at address 100 and running to address 274.

 _________________________________     ___________________
|___|_______|_______________|___ ... __|_______________|_|
100 104     112             128        256             272
  4    8          16           128           16         2

The initial division of an odd heap into free blocks.

The heap shown in the figure above begins with one free block of size 2 (at address 272), one of size 4 (at address 100), two of size 16, and one of size 128. These blocks have no buddies in the heap, so they can never be combined with anything else, but they may be allocated to users.
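This decomposition can be computed mechanically: at each step, take the largest power of two that both divides the current start address and fits in the remaining space. A sketch:

```c
/* Split the heap [start, end) into blocks legal for the binary buddy
   system: a block of size 2**i must start at an address whose i low
   bits are zero. Records (address, size) pairs; returns their count. */
int decompose( int start, int end, int addrs[], int blocksizes[] )
{
    int n = 0;
    while (start < end) {
        /* largest power of two dividing start (any large power if 0) */
        int size = (start == 0) ? (1 << 30) : (start & -start);
        /* shrink until the block fits in the remaining space */
        while (size > end - start) size /= 2;
        addrs[n] = start;
        blocksizes[n] = size;
        n++;
        start += size;
    }
    return n;
}
```

For the heap running from address 100 to 274, this reproduces the figure: blocks of size 4, 8, 16, 128, 16, and 2.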

Some aspects of the performance of the binary buddy system are easy to
determine. For example, if over a long period of time, requests for all possible block sizes
are evenly distributed, the average allocated block will be about one third larger than
required (one quarter of each block will typically be wasted, three quarters may be in use).
This is because blocks of size s will only be allocated to satisfy requests for between s/2
and s words of memory. This waste, which represents internal fragmentation, could clearly
be reduced if there were a larger assortment of available block sizes. The nature of the
external fragmentation is harder to determine without an experimental measurement, and
in any case, the assumption of uniformly distributed requests for different block sizes is
very unrealistic. In fact, programmers frequently prefer powers of two and powers of 10
when deciding on the sizes of different data structures!

The Fibonacci series provides the basis for a buddy system variant that
reduces the degree of internal fragmentation by providing a wider variety of free block
sizes. Each member of the Fibonacci series is the sum of the two previous elements, as
shown in Figure 14.7.

i : 0 1 2 3 4 5 6 7 8 9 10 11 ...
fib(i): 0 1 1 2 3 5 8 13 21 34 55 89 ...

i < 2:  fib(i) = i

i >= 2: fib(i) = fib(i-1) + fib(i-2)

Figure 14.7. The Fibonacci Series

Of course, we are not interested in blocks too small to hold a pointer, so if memory is byte addressed, and if pointers are 32 bits each, the smallest block size that we will use with this series is 5 bytes. In the limit, the ratio of successive numbers in the Fibonacci series approaches the golden ratio (approximately 1.61803).
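A quick sketch of the series itself, which a Fibonacci buddy allocator would use as its table of block sizes:

```c
/* fib(i) by iteration: fib(0) = 0, fib(1) = 1,
   fib(i) = fib(i-1) + fib(i-2) for i >= 2. */
int fib( int i )
{
    int a = 0, b = 1;
    while (i-- > 0) {
        int t = a + b;
        a = b;
        b = t;
    }
    return a;
}
```

Splitting a block of size fib(i) yields buddies of sizes fib(i-1) and fib(i-2); for example, 55 splits into 34 and 21.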

When a block is split using the Fibonacci buddy system, the fragments which result are not the same size; thus, for example, when a block of size 55 is split, the results are of size 21 and 34. The figure below illustrates a sequence of operations using the Fibonacci buddy system.

______________________________________________________
|______________________________________________________|
0 55
p1 := allocate(15);
______________________________________________________
|\\\\\\\\\\\\\\|/////|_________________________________|
0 21 55
p1 p2 := allocate(10);
______________________________________________________
|\\\\\\\\\\\\\\|/////|\\\\\\\\\|//|____________________|
0 21 34 55
p1 p2
deallocate(p1,15); p3 := allocate(6);
______________________________________________________
|\\\\\|/|____________|\\\\\\\\\|//|____________________|
0 8 21 34 55
p3 p2
______________________________________
Key |______|\\\\\\\\\\\\\|/////////////////|
free allocated internal fragment

An example of the Fibonacci buddy system in use.

With the Fibonacci buddy system, there is no simple formula for deriving the buddy of
a block from information about its size and location. In Figure 14.8, there are two blocks of
size 13; one of these has a buddy of size 8 below it, while the other has a buddy of size 21
above it. Thus, the code for the Fibonacci buddy system will necessarily be more complex than
that for the binary buddy system. The additional cost of the Fibonacci buddy system is offset
by the reduction in internal fragmentation resulting from having a more diverse assortment of
block sizes than the binary buddy system.

8.4 Storage Compaction and Compaction problem

Compaction works by actually moving blocks of data etc. from one location in memory to another so as to collect all the free blocks into one large block. The allocation problem then becomes completely trivial. Allocation now consists of merely moving a pointer which points to the top of this successively shortening block of storage. Once this single block gets too small again, the compaction mechanism is again invoked to reclaim what unused storage may now exist among allocated blocks. There is generally no storage release mechanism. Instead, a marking algorithm is used to mark blocks that are still in use. Then, instead of freeing each unmarked block by calling a release mechanism to put it on the free list, the compactor simply collects all unmarked blocks into one large block at one end of the memory segment. The only real problem in this method is the redefining of pointers. This is solved by making extra passes through memory. After blocks are marked, the entire memory is stepped through and the new address for each marked block is determined. This new address is stored in the block itself. Then another pass over memory is made. On this pass, pointers that point to marked blocks are reset to point to where the marked blocks will be after compaction. This is why the new address is stored right in the block – it is easily obtainable. After all pointers have been reset, the marked blocks are moved to their new locations. A general algorithm for the compaction routine is as follows.
1. Invoke the garbage collection marking routine.
2. Repeat step 3 until the end of memory is reached.
3. If the current block of storage being examined has been marked, then set the new address of the block to the starting address of unused memory, and update the starting address of unused memory.
4. Redefine variable references to use the new addresses.
5. Define new values for pointers in marked blocks.
6. Repeat step 7 until the end of memory is reached.
7. Move marked blocks into their new locations and reset marks.
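The passes above can be modeled on a toy heap of equal-sized blocks, each carrying a mark bit and a single pointer field stored as a block index. The structure layout and names here are an illustrative model, not any real heap's format:

```c
#define NBLOCKS 8
#define FREE -1   /* stands for a null pointer / an unused slot */

/* A toy heap item: mark bit, one "pointer" (another block's index,
   or FREE), and a payload word. */
typedef struct {
    int marked;
    int next;
    int payload;
} Block;

/* Compact marked blocks to the low end of the heap.
   Pass 1 computes each marked block's new index (forward[] models
   storing the new address in the block itself).
   Pass 2 rewrites pointers to marked blocks.
   Pass 3 moves the blocks and resets their marks.
   Returns the start of the single free region. */
int compact( Block heap[], int n, int forward[] )
{
    int i, used = 0;

    for (i = 0; i < n; i++)                     /* pass 1 */
        forward[i] = heap[i].marked ? used++ : FREE;

    for (i = 0; i < n; i++)                     /* pass 2 */
        if (heap[i].marked && heap[i].next != FREE)
            heap[i].next = forward[heap[i].next];

    for (i = 0; i < n; i++)                     /* pass 3 */
        if (heap[i].marked) {
            heap[forward[i]] = heap[i];
            heap[forward[i]].marked = 0;
            /* slots from 'used' upward are now one free region */
        }

    return used;
}
```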

One approach to reducing external fragmentation is to move blocks in memory so that fragments
can be combined; this is called compacting the heap. This is much easier to say than it is to do! To
do it, we need a way to find and update all pointers to a block when we move the block. In
segmented virtual memories, there is usually a master segment table containing descriptions of all of the segments in the system. If all pointers to a segment are indirect through an entry in this table, segments may easily be moved to compact storage by updating this entry. In general, though, finding all of the pointers to a block is computationally difficult.
Consider, for example, the problem of compacting the storage used by a Pascal, C or C++
program, which has built extensive data structures in the heap. Each item allocated by that
program in the heap may contain any number of pointers to other items in the heap, and no item
may be moved without updating all of the pointers that refer to it. This is possible only when there
is a description of each block in the heap, which tells the system which words in that block
contain pointers. Some machines have what are called tagged architectures, where each word has
a small field describing the type of the contents of that word, but the majority of machines provide
no help in solving this problem.
A Pascal or Java compiler supporting heap compaction must explicitly include this
information in each item stored on the heap, typically by including a special pointer to a type or
class description record as the first word of each object. Doing this in Pascal and Java is feasible
because these are strongly typed languages. Doing it in C or C++ is harder because the compiler
has less licence to include auxiliary information in a C or C++ structure. In those languages,
programmers usually assume that the pointer to a structure is also a pointer to the first explicitly
declared element of the structure, so insertions by the compiler pose problems.
The ability of the system to find all pointers to a block allows for more than just heap compaction; it allows automatic reclamation of those memory blocks in the heap which are no longer referenced by any of the user's pointers. This is valuable enough that it has a name, garbage collection.
In a language implementation that supports garbage collection, so that storage which is no longer referenced by the user is automatically reclaimed, the user need not make any calls to a service such as deallocate; in fact, such a service need not even be provided to the user.
One approach to automatic storage reclamation uses a reference count associated with each block. Many programmers consider reference counting to be a simple form of garbage collection, but it is more properly considered an inferior alternative to garbage collection. The reference count attached to a block is initially zero when the block is allocated, and it is incremented each time the pointer to that block is assigned to a pointer variable. It is decremented each time the pointer to that block, stored in some pointer variable, is overwritten with a pointer to something else, or when a pointer variable holding a pointer to that block is itself deallocated. The block is deallocated whenever its reference count falls to zero.
The problem with reference counts is that they cannot detect the fact that a circularly linked structure has been cut loose from the remainder of the structure, and thus, they tend to slowly lose storage in environments where such structures occur. Reference counting is widely used in modern file systems, but it is not generally used in heap managers.
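A minimal model of the mechanism (incref and decref are hypothetical names; a real implementation would bury these operations inside pointer assignment):

```c
/* A block with an attached reference count. 'freed' stands in for
   actually returning the block to the heap manager. */
typedef struct {
    int refcount;
    int freed;
} RcBlock;

/* Called whenever a pointer to the block is stored in a variable. */
void incref( RcBlock *b )
{
    b->refcount++;
}

/* Called whenever such a pointer is overwritten or its variable is
   deallocated; a count of zero means no one refers to the block. */
void decref( RcBlock *b )
{
    if (--b->refcount == 0)
        b->freed = 1;   /* the block would be deallocated here */
}
```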
Real garbage collection, in contrast, does not suffer from this problem. With garbage collection, a special routine, the garbage collector, periodically traverses all data structures accessible to the user, marking each item on the heap that is encountered; this is called the marking phase of the garbage collector. After doing so, the garbage collector claims all unmarked blocks as garbage and sweeps them into the free space list; this is called the sweep phase of the garbage collector. The names for these two phases lead to the name for the entire algorithm, the mark-sweep garbage collection algorithm.
Generally, the mark-sweep algorithm requires a heap data structure that includes a mark bit in each item on the heap. We might store this in the status field of the boundary tag on each
item. Given this, the sweep phase becomes a simple scan of the boundary tag list of the entire
heap, gathering the unmarked blocks, marking them as free, merging them if they are adjacent,
and stringing the results onto the free list. In the process, the mark bits on all blocks are cleared
to prepare for the next marking phase.
The marking phase is more interesting. In its classical form, this assumes that the entire heap
is reachable from some root item. The marking phase operates by recursively traversing the items
reachable from the root. If, during this recursion, an unmarked item is found, the item is
immediately marked and then the recursion continues. If an already marked item is found, it is
ignored since the presence of a mark indicates that it either has already been dealt with or that it
is on the stack of items being dealt with. The recursion itself, of course, only follows the pointers
stored in each item, completely ignoring integers, characters or other non-pointer content.
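The two phases can be sketched over a toy heap where each item carries a mark bit and up to two pointers, represented as item indices. This is an illustrative model; a real collector walks actual machine pointers:

```c
#define NONE -1   /* stands for a null pointer */

/* One heap item: a mark bit and two "pointer" fields. */
typedef struct {
    int marked;
    int left, right;
} Item;

/* Marking phase: recursively traverse everything reachable from item
   i, marking each unmarked item encountered. Already-marked items
   terminate the recursion, so cycles are handled correctly. */
void mark( Item heap[], int i )
{
    if (i == NONE || heap[i].marked)
        return;
    heap[i].marked = 1;
    mark( heap, heap[i].left );
    mark( heap, heap[i].right );
}

/* Sweep phase: every unmarked item is garbage; count the garbage
   items and clear all marks to prepare for the next collection. */
int sweep( Item heap[], int n )
{
    int i, reclaimed = 0;
    for (i = 0; i < n; i++) {
        if (!heap[i].marked)
            reclaimed++;        /* would go onto the free list here */
        heap[i].marked = 0;
    }
    return reclaimed;
}
```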

UNIT 9

NP Complete Problem

9.1 Introduction
9.2 Polynomial-time
9.3 Abstract Problems
9.4 Encoding
9.5 NP-Completeness and Reducibility
9.6 NP-Completeness
9.7 Circuit Satisfiability
9.8 NP-Complete Problems
9.9 The Vertex-cover Problem
9.10 The Hamiltonian-cycle Problem
9.11 The Traveling-salesman Problem

9.1 Introduction
All of the algorithms we have studied thus far have been polynomial-time algorithms: on inputs of size n, their worst-case running time is O(n^k), where k is a constant. Can all problems be solved in polynomial time? The answer is no. Some problems, such as Turing's famous "Halting Problem," cannot be solved by any computer, no matter how much time is provided. Other problems can be solved, but not in time O(n^k) for any constant k. The problems that are solvable by polynomial-time algorithms are called tractable, and problems that require superpolynomial time are said to be intractable.
In this chapter we discuss an interesting class of problems, the "NP-complete" problems, whose status is unknown. No polynomial-time algorithm has yet been discovered for an NP-complete problem, nor has anyone been able to prove a superpolynomial-time lower bound for any of them. The so-called P ≠ NP question has been one of the deepest, most perplexing open research problems in theoretical computer science since it was first posed in 1971.
Many scientists believe that the NP-complete problems cannot be solved in polynomial time, i.e. that they are intractable, because if any single NP-complete problem could be solved in polynomial time, then every NP-complete problem could be solved in polynomial time.
If you want to design good algorithms, you should understand the rudiments of the theory of NP-completeness. If a problem's intractability can be proved, then as an engineer you would do better spending your time developing an approximation algorithm rather than searching for a fast algorithm that solves the problem exactly. Moreover, some problems that seem no harder than sorting, graph searching, or network flow are in fact NP-complete. Hence we should become familiar with this important class of problems.

9.2 Polynomial Time

We begin our study of NP-completeness by defining polynomial-time solvable problems. Generally these problems are regarded as tractable. This is a philosophical rather than a purely mathematical position; we give three supporting arguments here.

The first argument says that although it is reasonable to regard a problem that requires time Θ(n^100) as intractable, there are very few practical problems that require time on the order of such a high-degree polynomial. The practical polynomial-time problems require much less time.
Second, a problem that can be solved in polynomial time in one reasonable model of computation can always be solved in polynomial time in another such model.
And the third is that the class of polynomial-time solvable problems has nice closure properties. For example, if we feed the output of one polynomial-time algorithm into the input of another, the resulting algorithm is polynomial. If a polynomial-time algorithm makes a constant number of calls to polynomial-time subroutines, the running time of the composite algorithm is polynomial.

9.3 Abstract Problems


To make clear the class of polynomial-time solvable problems, we first define what a problem is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, recall the problem SHORTEST PATH: finding a shortest path between two given vertices in an unweighted, undirected graph G = (V,E). We define an instance for SHORTEST PATH as a triple consisting of a graph and two vertices, and a solution as a sequence of vertices in the graph (with the empty sequence denoting that no path exists). The problem SHORTEST PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that joins the two vertices. An instance can have more than one solution, because shortest paths are not necessarily unique.
This formulation of an abstract problem is sufficient for our purposes. To keep things simple, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case we can view an abstract decision problem as a function that maps the instance set I to the solution set {0,1}. For example, a decision problem PATH related to the shortest-path problem is: "Given a graph G = (V,E), two vertices p, q ∈ V, and a positive integer k, does a path exist in G between p and q whose length is at most k?" If i = (G,p,q,k) is an instance of this problem, then PATH(i) = 1 (yes) if a shortest path from p to q has length at most k, and PATH(i) = 0 (no) otherwise.
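The decision problem PATH can be answered in polynomial time with a breadth-first search. The sketch below is illustrative; the adjacency-dictionary representation of G and the function name are our own choices, not part of the formal definition:

```python
from collections import deque

def path(adj, p, q, k):
    """Decide PATH(i) for instance i = (G, p, q, k): is there a path
    from p to q in G of length at most k?  BFS finds the shortest
    p-q path first, so we compare its length with k."""
    dist = {p: 0}                      # distance from p, in edges
    queue = deque([p])
    while queue:
        u = queue.popleft()
        if u == q:
            return dist[u] <= k        # shortest p-q path length vs. k
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return False                       # no path from p to q exists at all

# A small example graph: p--a--q plus a longer detour p--b--c--q.
G = {"p": ["a", "b"], "a": ["p", "q"], "b": ["p", "c"],
     "c": ["b", "q"], "q": ["a", "c"]}
```

Here path(G, "p", "q", 2) answers yes (the path p, a, q has length 2), while path(G, "p", "q", 1) answers no.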
Certain other abstract problems, called optimization problems, require some value to be minimized or maximized; these are not decision problems. If we want to apply the theory of NP-completeness to optimization problems, however, we must recast them as decision problems. Typically, an optimization problem can be recast by imposing a bound on the value to be optimized. For example, in recasting the shortest-path problem as the decision problem PATH, we added a bound k to the problem instance.
The requirement to recast optimization problems as decision problems does not diminish the impact of the theory. Generally, if we are able to solve an optimization problem quickly, we will also be able to solve its related decision problem quickly: we simply compare the value obtained from the solution of the optimization problem with the bound provided as input to the decision problem. If an optimization problem is easy, therefore, its related decision problem is easy as well. Stated in a way that has more relevance to NP-completeness: if we can provide evidence that a decision problem is hard, we also provide evidence that its related optimization problem is hard. Thus, even though it restricts attention to decision problems, the theory of NP-completeness applies much more widely.

9.4 Encodings
If we want a computer program to solve an abstract problem, we have to represent instances in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, the familiar encoding of the natural numbers N = {0,1,2,3,4,...} is as the strings {0,1,10,11,100,...}; by this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with the ASCII or EBCDIC codes; in ASCII, e(A) = 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, and programs can all be encoded as binary strings.
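These encodings are easy to illustrate; the helper names below are our own, and the pair encoding is a simplified sketch of the idea of combining constituent parts:

```python
def e(n):
    """Encode a natural number as a binary string, so e(17) = '10001'."""
    return bin(n)[2:]

def e_pair(a, b):
    """A compound object (an ordered pair) is encoded by combining the
    encodings of its parts; a real encoding would expand the separator
    into bits as well."""
    return e(a) + "," + e(b)

assert e(17) == "10001"
assert e_pair(2, 3) == "10,11"
# The ASCII code of 'A' is 65, i.e. 1000001 in binary.
assert format(ord("A"), "b") == "1000001"
```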
Hence a computer algorithm that solves some abstract decision problem takes an encoding of a problem instance as input. A concrete problem is a problem whose instance set is the set of binary strings. An algorithm solves a concrete problem in time O(T(n)) if, when provided a problem instance i of length n = |i|, the algorithm can produce the solution in at most O(T(n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n^k) for some constant k.
We now define the complexity class P as the set of concrete decision problems that can be solved in polynomial time.
Encodings can be used to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0,1}, an encoding e : I → {0,1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i ∈ I is Q(i) ∈ {0,1}, then the solution to the concrete-problem instance e(i) ∈ {0,1}* is also Q(i). There may be some binary strings that represent no meaningful abstract-problem instance; for convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances.
We would now like to generalize the definition of polynomial-time solvability from concrete problems to abstract problems, using encodings as the bridge, while keeping the definition independent of any particular encoding. That is, we would like the efficiency of solving a problem not to depend on how the problem is encoded. Unfortunately, it can depend quite heavily. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary (a string of k 1's), then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ(k) = Θ(2^n), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time.
The encoding of an abstract problem is therefore important to understanding polynomial time; we cannot really talk about solving an abstract problem without first specifying an encoding. Practically, if we rule out expensive encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 6 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 6 can be converted to an integer represented in base 2 in polynomial time.
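The effect of the encoding on the input length n is easy to see directly; a small sketch (function names are our own):

```python
def unary(k):
    """Unary encoding: a string of k 1's, so the input length n equals k."""
    return "1" * k

def binary(k):
    """Binary encoding: the input length n is only floor(lg k) + 1."""
    return bin(k)[2:]

k = 1024
assert len(unary(k)) == 1024   # n = k: a Theta(k) algorithm runs in O(n)
assert len(binary(k)) == 11    # n = lg k + 1: Theta(k) = Theta(2^n)
```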
A function f : {0,1}* → {0,1}* is polynomial-time computable if there is a polynomial-time algorithm A that, for any input x ∈ {0,1}*, produces as output f(x). For some set I of problem instances, two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i ∈ I, we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa.

9.5 NP-completeness and Reducibility

The reason that theoretical computer scientists believe that P ≠ NP is the existence of the class of "NP-complete" problems. This class has the interesting property that if any one NP-complete problem can be solved in polynomial time, then every problem in NP has a polynomial-time solution, that is, P = NP. Despite decades of study, though, no polynomial-time algorithm has ever been discovered for any NP-complete problem.
The language HAM-CYCLE is one NP-complete problem. If we could decide HAM-CYCLE in polynomial time, then we could solve every problem in NP in polynomial time. In fact, if NP - P should turn out to be nonempty, we could say with certainty that HAM-CYCLE ∈ NP - P.

Figure: An illustration of a polynomial-time reduction from a language L1 to a language L2 via a reduction function f. For any input x ∈ {0,1}*, the question of whether x ∈ L1 has the same answer as the question of whether f(x) ∈ L2.

The NP-complete languages are the "hardest" languages in NP. We shall show how to compare the
relative "hardness" of languages using "polynomial-time reducibility." First, we formally define the
NP-complete languages, and then we sketch a proof that one such language, called CIRCUIT-SAT,
is NP-complete. We use the notion of reducibility to show that many other problems are NP-
complete.

Reducibility
A problem P can be reduced to another problem P′ if any instance of P can be "easily rephrased" as an instance of P′, the solution to which provides a solution to the instance of P. For example, the problem of solving linear equations in an indeterminate x reduces to the problem of solving quadratic equations: given an instance ax + b = 0, we can transform it to 0x² + ax + b = 0, whose solution provides a solution to ax + b = 0. Thus, if a problem P reduces to another problem P′, then P is, in a sense, "no harder to solve" than P′.
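This tiny reduction can be written out directly; the function names and the use of the quadratic formula below are our own illustration of the rephrasing:

```python
import math

def reduce_linear_to_quadratic(a, b):
    """Rephrase the instance ax + b = 0 as the quadratic 0x^2 + ax + b = 0."""
    return (0.0, a, b)

def solve_quadratic(A, B, C):
    """A solver for Ax^2 + Bx + C = 0 (real roots assumed); it also
    handles the degenerate A == 0 case that the reduction produces."""
    if A == 0:
        return [-C / B]
    d = math.sqrt(B * B - 4 * A * C)
    return [(-B + d) / (2 * A), (-B - d) / (2 * A)]

# Solving the quadratic instance yields the solution of 2x + 6 = 0.
assert solve_quadratic(*reduce_linear_to_quadratic(2, 6)) == [-3.0]
```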
Returning to our formal-language framework for decision problems, a language L1 is said to be polynomial-time reducible to a language L2, written L1 ≤p L2, if there exists a polynomial-time computable function f : {0,1}* → {0,1}* such that for all x ∈ {0,1}*,

x ∈ L1 if and only if f(x) ∈ L2 .........................................(1)

The function f is called the reduction function, and a polynomial-time algorithm F that computes f is known as a reduction algorithm.
The idea of a polynomial-time reduction from a language L1 to another language L2 is shown in the figure. Each language is a subset of {0,1}*. The reduction function f provides a polynomial-time mapping such that if x ∈ L1, then f(x) ∈ L2; likewise, if x ∉ L1, then f(x) ∉ L2. Thus, the reduction function maps any instance x of the decision problem represented by the language L1 to an instance f(x) of the problem represented by the language L2. Providing an answer to whether f(x) ∈ L2 directly provides the answer to whether x ∈ L1.

Figure: The proof of Lemma 2. The algorithm F is a reduction algorithm that computes the reduction function f from L1 to L2 in polynomial time, and A2 is a polynomial-time algorithm that decides L2. Illustrated is an algorithm A1 that decides whether x ∈ L1 by using F to transform any input x into f(x) and then using A2 to decide whether f(x) ∈ L2.

Polynomial-time reductions give us a powerful tool for proving that various languages belong to P.

9.6 NP-completeness

Polynomial-time reductions give a formal means for showing that one problem is at least as hard as another, to within a polynomial-time factor. That is, if L1 ≤p L2, then L1 is not more than a polynomial-time factor harder than L2, which is why the "less than or equal to" notation for reduction is mnemonic. We can now define the set of NP-complete languages, which are the hardest problems in NP.

A language L ⊆ {0,1}* is NP-complete if

1. L ∈ NP, and

2. L1 ≤p L for every L1 ∈ NP.


Figure: How most theoretical computer scientists view the relationship among P, NP, and NPC. Both P and NPC are wholly contained within NP, and P ∩ NPC = ∅.

If a language L satisfies property 2, but not necessarily property 1, we say that L is NP-hard. We also define NPC to be the class of NP-complete languages.

9.7 Circuit Satisfiability


Up to this point we have defined NP-complete problems, but we have not actually proved that any problem is NP-complete. Once we prove that at least one problem is NP-complete, polynomial-time reducibility can be used as a tool to prove the NP-completeness of other problems. So we now focus on showing the existence of an NP-complete problem: the circuit-satisfiability problem.

Figure: Two instances of the circuit-satisfiability problem. (a) The assignment x1 = 1, x2 = 1, x3 = 0 to the inputs of this circuit causes the output of the circuit to be 1; the circuit is therefore satisfiable. (b) No assignment to the inputs of this circuit can cause the output of the circuit to be 1; the circuit is therefore unsatisfiable.
We shall informally describe a proof that relies on a basic understanding of boolean combinational circuits.
Two boolean combinational circuits are shown in the figure. Each circuit has three inputs and a single output. A truth assignment is a set of boolean input values for the circuit. We say that a one-output boolean combinational circuit is satisfiable if it has a satisfying assignment: a truth assignment that causes the output of the circuit to be 1. The circuit in part (a) of the figure has the satisfying assignment x1 = 1, x2 = 1, x3 = 0, and so it is satisfiable. No assignment of values to x1, x2, and x3 causes the circuit in part (b) to produce a 1 output; it always produces 0, and so it is unsatisfiable.
We can now state the circuit-satisfiability problem: "Given a boolean combinational circuit composed of AND, OR, and NOT gates, is it satisfiable?" In order to pose this question formally, however, we must agree on a standard encoding. We can devise a graph-like encoding that maps any given circuit C into a binary string ⟨C⟩ whose length is not much larger than the size of the circuit itself. As a formal language, we can therefore define

CIRCUIT-SAT = {⟨C⟩ : C is a satisfiable boolean combinational circuit}.
The circuit-satisfiability problem is of great importance in the area of computer-aided hardware optimization. If a circuit always produces 0, it can be replaced by a simpler circuit that omits all logic gates and provides the constant 0 value as its output. A polynomial-time algorithm for the problem would therefore have considerable practical application.
Suppose we are given a circuit C. We might attempt to determine whether it is satisfiable by simply checking all possible assignments to the inputs, but if there are k inputs, there are 2^k possible assignments. When the size of C is polynomial in k, checking each one leads to a superpolynomial-time algorithm. In fact, there is strong evidence that no polynomial-time algorithm exists that solves the circuit-satisfiability problem, because circuit satisfiability is NP-complete. We break the proof of this fact into two parts, based on the two parts of the definition of NP-completeness.
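The brute-force approach described above can be sketched as follows. The two example circuits are hypothetical stand-ins for the ones in the figure, not transcriptions of them:

```python
from itertools import product

def is_satisfiable(circuit, k):
    """Try all 2^k truth assignments to the k inputs: the running time
    is exponential in k, hence superpolynomial when the circuit size
    is polynomial in k."""
    return any(circuit(*bits) for bits in product([False, True], repeat=k))

# A satisfiable three-input circuit (hypothetical gates).
circuit_a = lambda x1, x2, x3: (x1 and x2) and not x3
# A circuit that always outputs 0, hence unsatisfiable.
circuit_b = lambda x1, x2, x3: x1 and not x1

assert is_satisfiable(circuit_a, 3)
assert not is_satisfiable(circuit_b, 3)
```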

9.8 NP-complete Problems
NP-complete problems arise in many domains: boolean logic, arithmetic, automata and language theory, network design, sets and partitions, storage and retrieval, sequencing and scheduling, graphs, mathematical programming, algebra and number theory, games and puzzles, program optimization, and so on. Here we use the reduction methodology to provide NP-completeness proofs for problems related to graph theory and set partitioning.

CIRCUIT-SAT
 → SAT
  → 3-CNF-SAT
   → CLIQUE → VERTEX-COVER
   → HAM-CYCLE → TSP
   → SUBSET-SUM

Figure: The structure of NP-completeness proofs. An arrow from L1 to L2 indicates a polynomial-time reduction L1 ≤p L2.

9.9 The Vertex-cover Problem


We define a vertex cover of an undirected graph G = (V,E) as a subset V′ ⊆ V such that if (u,v) ∈ E, then u ∈ V′ or v ∈ V′ (or both). That is, each vertex "covers" its incident edges, and a vertex cover for G is a set of vertices that covers all the edges in E. The size of a vertex cover is the number of vertices in it. For example, the graph in the figure has a vertex cover {w, z} of size 2.
The vertex-cover problem is to find a vertex cover of minimum size in a given graph. Restating it as a decision problem, we wish to determine whether a graph has a vertex cover of a given size k. As a language, we define

VERTEX-COVER = {⟨G, k⟩ : graph G has a vertex cover of size k}.
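Checking whether a given set is a vertex cover is easy; finding a minimum one by exhaustive search takes exponential time. A sketch, using a hypothetical graph chosen so that {w, z} is a minimum cover, as in the figure:

```python
from itertools import combinations

def is_vertex_cover(E, C):
    """C is a vertex cover of (V, E) if every edge has an endpoint in C."""
    return all(u in C or v in C for (u, v) in E)

def min_vertex_cover(V, E):
    """Exhaustive search over all subsets, smallest first: correct but
    exponential time in the worst case."""
    for size in range(len(V) + 1):
        for C in combinations(V, size):
            if is_vertex_cover(E, set(C)):
                return set(C)

V = ["u", "v", "w", "x", "y", "z"]
E = [("u", "w"), ("v", "w"), ("w", "z"), ("x", "z"), ("y", "z")]
assert min_vertex_cover(V, E) == {"w", "z"}
```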

9.10 The Hamiltonian-cycle Problem


Figure: (a) Widget A, used in the reduction from 3-CNF-SAT to HAM-CYCLE. (b)-(c) If A is a subgraph of some graph G that contains a hamiltonian cycle, and the only connections from A to the rest of G are through the vertices a, a′, b, and b′, then the shaded edges represent the only two possible ways in which the hamiltonian cycle may traverse the edges of subgraph A. (d) A compact representation of the A widget.

Theorem
The Hamiltonian cycle problem is NP-complete.
Proof: First we show that HAM-CYCLE belongs to NP. Given a graph G = (V,E), our certificate is the sequence of |V| vertices that make up the hamiltonian cycle. The verification algorithm checks that this sequence contains each vertex in V exactly once and that, with the first vertex repeated at the end, it forms a cycle in G. This verification can be performed in polynomial time.
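This verification step can be sketched directly. The certificate format (a list of vertices) follows the proof; the example graph is our own:

```python
def verify_ham_cycle(V, E, cert):
    """Check a HAM-CYCLE certificate: the sequence must contain each
    vertex exactly once and, with the first vertex repeated at the
    end, must form a cycle in G."""
    edges = set(E) | {(v, u) for (u, v) in E}   # treat G as undirected
    if len(cert) != len(V) or set(cert) != set(V):
        return False
    closed = list(cert) + [cert[0]]             # repeat the first vertex
    return all((closed[i], closed[i + 1]) in edges
               for i in range(len(cert)))

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
assert verify_ham_cycle(V, E, ["a", "b", "c", "d"])
assert not verify_ham_cycle(V, E, ["a", "c", "b", "d"])  # (b, d) not an edge
```

Both checks clearly take time polynomial in |V| + |E|.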
We now prove that HAM-CYCLE is NP-complete by showing that 3-CNF-SAT ≤p HAM-CYCLE. Given a 3-CNF boolean formula φ over variables x1, x2, ..., xn with clauses c1, c2, ..., ck, each containing exactly three distinct literals, we construct a graph G = (V,E) in polynomial time such that G has a hamiltonian cycle if and only if φ is satisfiable. Our construction is based on widgets, which are pieces of graphs that enforce certain properties.
Our first widget is the subgraph A shown in the figure. Suppose that A is a subgraph of some graph G and that the only connections between A and the remainder of G are through the vertices a, a′, b, and b′, in one of the ways shown in parts (b) and (c) of the figure. We may treat subgraph A as if it were simply a pair of edges (a, a′) and (b, b′), with the restriction that any hamiltonian cycle of G must include exactly one of these edges. We shall represent widget A as shown in part (d) of the figure.
The subgraph B in the figure is our second widget. Suppose that B is a subgraph of some graph G and that the only connections from B to the remainder of G are through the vertices b1, b2, b3, and b4. A hamiltonian cycle of graph G cannot traverse all of the edges (b1,b2), (b2,b3), and (b3,b4), since then all vertices in the widget other than b1, b2, b3, and b4 would be missed. A hamiltonian cycle of G may, however, traverse any proper subset of these edges. Parts (a)-(e) of the figure show five such subsets; the remaining two subsets can be obtained by performing a top-to-bottom flip of parts (b) and (e). We represent this widget as in part (f) of the figure, the idea being that at least one of the paths pointed to by the arrows must be taken by a hamiltonian cycle of G.
The graph G that we shall construct consists mostly of copies of these two widgets. The construction is illustrated in the figure. For each of the k clauses in φ, we include a copy of widget B, and we join these widgets together in series as follows. Letting bi,j be the copy of vertex bj in the ith copy of widget B, we connect bi,4 to bi+1,1 for i = 1, 2, ..., k - 1.

Then, for each variable xm in φ, we include two vertices x′m and x″m. We connect these two vertices by means of two copies of the edge (x′m, x″m), which we denote by em and ēm to distinguish them. The idea is that if the hamiltonian cycle takes edge em, it corresponds to assigning variable xm the value 1; if the hamiltonian cycle takes edge ēm, the variable is assigned the value 0. Each pair of these edges forms a two-edge loop; we connect these small loops in series by adding edges (x″m, x′m+1) for m = 1, 2, ..., n - 1. We connect the left (clause) side of the graph to the right (variable) side by means of two edges (b1,1, x′1) and (bk,4, x″n), which are the topmost and bottommost edges in the figure.

We are not yet finished with the construction of graph G, since we have yet to relate the variables to the clauses. If the jth literal of clause ci is xm, then we use an A widget to connect edge (bi,j, bi,j+1) with edge em. If the jth literal of clause ci is ¬xm, then we instead put an A widget between edge (bi,j, bi,j+1) and ēm. In the figure, for example, because clause c2 is (x1 ∨ x2 ∨ x3), we place three A widgets as follows:
 between (b2,1, b2,2) and e1,

 between (b2,2, b2,3) and e2, and

 between (b2,3, b2,4) and e3.

Note that connecting two edges by means of A widgets actually entails replacing each edge by the five edges in the top or bottom of part (a) of the figure and, of course, adding the connections that pass through the z vertices as well. A given literal lm may appear in several clauses, and thus an edge em or ēm may be influenced by several A widgets (edge e3, for example). In this case, we connect the A widgets in series, as shown in the figure, effectively replacing edge em or ēm by a series of edges.

Figure: Widget B, used in the reduction from 3-CNF-SAT to HAM-CYCLE. No path from vertex b1 to vertex b4 containing all the vertices in the widget may use all three edges (b1,b2), (b2,b3), and (b3,b4). Any proper subset of these edges may be used, however. (a)-(e) Five such subsets. (f) A representation of this widget in which at least one of the paths pointed to by the arrows must be taken by a hamiltonian cycle.

Figure: The graph G constructed from a 3-CNF formula φ over variables x1, x2, x3 with three clauses. A satisfying assignment s to the variables of φ is s(x1) = 0, s(x2) = 1, s(x3) = 1, which corresponds to the hamiltonian cycle shown. Note that if s(xm) = 1, then edge em is in the hamiltonian cycle, and if s(xm) = 0, then edge ēm is in the hamiltonian cycle.

Figure: A widgets connected in series. When a literal appears in several clauses, the corresponding edge (e3 in this example) is effectively replaced by a series of A widgets.
We claim that formula φ is satisfiable if and only if graph G contains a hamiltonian cycle. We first suppose that G has a hamiltonian cycle h and show that φ is satisfiable. Cycle h must take a particular form:

 First, it traverses edge (b1,1, x′1) to go from the top left to the top right.

 It then follows all of the x′m and x″m vertices from top to bottom, choosing either edge em or edge ēm, but not both.

 It next traverses edge (bk,4, x″n) to get back to the left side.

 Finally, it traverses the B widgets from bottom to top on the left.

(It actually traverses edges within the A widgets as well, but we use these subgraphs to enforce the either/or nature of the edges they connect.)
Given the hamiltonian cycle h, we define a truth assignment for φ as follows. If edge em belongs to h, then we set xm = 1. Otherwise, edge ēm belongs to h, and we set xm = 0.

We claim that this assignment satisfies φ. Consider a clause ci and the corresponding B widget in G. Each edge (bi,j, bi,j+1) is connected by an A widget to either edge em or edge ēm, depending on whether xm or ¬xm is the jth literal in the clause. The edge (bi,j, bi,j+1) is traversed by h if and only if the corresponding literal is 0. Since each of the three edges (bi,1, bi,2), (bi,2, bi,3), (bi,3, bi,4) of clause ci is in the B widget, all three cannot be traversed by the hamiltonian cycle h. One of the three edges, therefore, must have a corresponding literal whose assigned value is 1, and clause ci is satisfied. This property holds for each clause ci, i = 1, 2, ..., k, and thus formula φ is satisfied.

Figure: An instance of the traveling-salesman problem on vertices u, v, w, x, with edge costs c(u,v) = 4, c(u,w) = 3, c(u,x) = 1, c(v,w) = 1, c(v,x) = 2, c(w,x) = 5. Shaded edges represent a minimum-cost tour, with cost 7.

Conversely, let us suppose that formula φ is satisfied by some truth assignment s. By following the rules above, we can construct a hamiltonian cycle for graph G: traverse edge em if s(xm) = 1, traverse edge ēm if s(xm) = 0, and traverse edge (bi,j, bi,j+1) if and only if the jth literal of clause ci is 0 under the assignment. These rules can indeed be followed, since s is a satisfying assignment for formula φ.
Finally, we note that graph G can be constructed in polynomial time. It contains one B widget for each of the k clauses in φ, and so there are 3k A widgets. Since the A and B widgets are of fixed size, the graph G has O(k) vertices and is easily constructed in polynomial time. Thus we have provided a polynomial-time reduction from 3-CNF-SAT to HAM-CYCLE.

9.11 The Traveling-salesman Problem

In the traveling-salesman problem, which is closely related to the hamiltonian-cycle problem, a salesman must visit n cities. Modeling the problem as a complete graph with n vertices, we can say that the salesman wishes to make a tour, or hamiltonian cycle, visiting each city exactly once and finishing at the city he starts from. There is an integer cost c(i, j) to travel from city i to city j, and the salesman wishes to make the tour whose total cost is minimum, where the total cost is the sum of the individual costs along the edges of the tour. For example, in the figure, a minimum-cost tour is u, w, v, x, u, with cost 7. The formal language for the traveling-salesman problem is

TSP = {⟨G, c, k⟩ : G = (V, E) is a complete graph,
       c is a function from V × V → Z,
       k ∈ Z, and
       G has a traveling-salesman tour with cost at most k}.
The following theorem shows that a fast algorithm for the traveling-salesman problem is unlikely to exist.

Theorem
The traveling-salesman problem is NP-complete.
Proof: We first show that TSP belongs to NP. Given an instance of the problem, we use as a certificate the sequence of n vertices in the tour. The verification algorithm checks that this sequence contains each vertex exactly once, sums up the edge costs, and checks whether the sum is at most k. This process can certainly be done in polynomial time.
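The verification algorithm can be sketched as follows. The instance shown reuses the four-city example with minimum tour cost 7; the dictionary layout of the cost function is our own choice:

```python
def verify_tsp(V, c, cert, k):
    """Check a TSP certificate: each vertex appears exactly once and
    the total cost of the closed tour is at most k."""
    if sorted(cert) != sorted(V):
        return False
    tour = list(cert) + [cert[0]]               # return to the start city
    cost = sum(c[(tour[i], tour[i + 1])] for i in range(len(cert)))
    return cost <= k

# Symmetric cost table for the four-vertex instance of the figure.
c = {}
for (i, j), w in [(("u", "v"), 4), (("u", "w"), 3), (("u", "x"), 1),
                  (("v", "w"), 1), (("v", "x"), 2), (("w", "x"), 5)]:
    c[(i, j)] = c[(j, i)] = w

assert verify_tsp(["u", "v", "w", "x"], c, ["u", "w", "v", "x"], 7)
```

The tour u, w, v, x, u has cost 3 + 1 + 2 + 1 = 7, so the certificate is accepted; a costlier tour such as u, v, w, x, u (cost 11) is rejected for the same bound.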
To prove that TSP is NP-hard, we show that HAM-CYCLE ≤p TSP. Let G = (V,E) be an instance of HAM-CYCLE. We form the complete graph G′ = (V,E′), where E′ = {(i,j) : i, j ∈ V and i ≠ j}, and we define the cost function c by

c(i,j) = 0 if (i,j) ∈ E,
         1 if (i,j) ∉ E.

The instance of TSP is then (G′, c, 0), which is easily formed in polynomial time.
We now show that graph G has a hamiltonian cycle if and only if graph G′ has a tour of cost at most 0. Suppose that graph G has a hamiltonian cycle h. Each edge in h belongs to E and thus has cost 0 in G′, so h is a tour in G′ with cost 0. Conversely, suppose that graph G′ has a tour h′ of cost at most 0. Since the costs of the edges in E′ are 0 and 1, the cost of tour h′ is exactly 0, and therefore h′ contains only edges in E. We conclude that h′ is a hamiltonian cycle in graph G.
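The reduction itself is mechanical; a sketch (the graph representation and function name are our own choices):

```python
from itertools import combinations

def ham_cycle_to_tsp(V, E):
    """Build the TSP instance (G', c, 0) from a HAM-CYCLE instance
    G = (V, E): G' is complete, edges of G cost 0, non-edges cost 1."""
    undirected = set(E) | {(v, u) for (u, v) in E}
    c = {}
    for u, v in combinations(V, 2):
        w = 0 if (u, v) in undirected else 1
        c[(u, v)] = c[(v, u)] = w
    return V, c, 0   # G has a hamiltonian cycle iff G' has a tour of cost <= 0

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
_, c, k = ham_cycle_to_tsp(V, E)
assert c[("a", "b")] == 0 and c[("a", "c")] == 1 and k == 0
```

The construction examines each vertex pair once, so it clearly runs in polynomial time.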

