
// prime.cpp
#include <iostream>
using namespace std;

int main() {
    int number;
    int prime = 0;
    cout << "Enter a number: ";
    cin >> number;
    // check whether the number is divisible by any number from 2 to number/2
    for (int i = 2; i <= number / 2; i++) {
        if ((number % i) == 0) {
            prime = 1;
            break;
        }
    }
    if (prime == 0)
        cout << "The number is a prime number";
    else
        cout << "The number is not a prime number";
    return 0;
}

// primegen.cpp
#include <iostream>
using namespace std;

int main() {
    int number;
    int prime;
    cout << "Enter a number: ";
    cin >> number;
    // print every prime from 2 up to number
    for (int i = 2; i <= number; i++) {
        prime = 0;
        for (int j = 2; j <= i / 2; j++) {
            if ((i % j) == 0) {
                prime = 1;
                break;
            }
        }
        if (prime == 0)
            cout << i << endl;
    }
    return 0;
}

// stack.cpp
#include <iostream>
using namespace std;

#define SIZE 100

class stack {
    int stck[SIZE];
    int top;
public:
    stack() { top = 0; cout << "stack initialised" << endl; }
    ~stack() { cout << "stack destroyed" << endl; }
    void push(int i);
    int pop();
};

void stack::push(int i) {
    if (top == SIZE) {
        cout << "stack is full";
        return;
    }
    stck[top] = i;
    top++;
}

int stack::pop() {
    if (top == 0) {
        cout << "stack underflow";
        return 0;
    }
    top--;
    return stck[top];
}

int main() {
    stack a, b;
    a.push(1);
    b.push(2);
    a.push(3);
    b.push(4);
    cout << a.pop() << " ";
    cout << a.pop() << " ";
    cout << b.pop() << " ";
    cout << b.pop() << endl;
    return 0;
}

Allocators
Allocators do exactly what it says on the can. They allocate raw memory, and return it. They do not create or destroy objects. Allocators are very "low level" features in the STL, and are designed to encapsulate memory allocation and deallocation. This allows for efficient storage by the use of different schemes for particular container classes. The default allocator, alloc, is thread-safe and has good performance characteristics. On the whole, it is best to regard allocators as a "black box", partly because their implementation is still in a state of change, and also because the defaults work well for most applications.

Sequence Adapters
Sequence container adapters are used to change the "user interface" to other STL sequence containers, or to user-written containers if they satisfy the access function requirements. Why might you want to do this? Well, if you wanted to implement a stack of items, you might at first decide to base your stack class on the list container - let's call it ListStack - and define public member functions for push(), pop(), empty() and top(). However, you might later decide that another container like a vector might be better suited to the task. You would then have to define a new stack class with the same public interface, but based on the vector, e.g. VectorStack, so that other programmers could choose a list-based or a vector-based stack. It is obvious that the number of names for what is essentially the same thing starts to mushroom. In addition, this approach rules out the programmer using his or her own underlying class as the container.

Container adapters neatly solve this by presenting the same public interface irrespective of the underlying container. Being templatized, they avoid name proliferation. Provided the container type used supports the operations required by the adapter class (see the individual sections below), you can use any type for the underlying implementation. It is important to note that the adapters provide a restricted interface to the underlying container, and you cannot use iterators with adapters.

Stack
#include <stack>
The stack implements a Last In First Out, or LIFO, structure, which provides the public functions push(), pop(), empty() and top(). Again, these are self-explanatory - empty() returns a bool value which is true if the stack is empty. To support this functionality, stack expects the underlying container to support push_back(), pop_back(), empty() or size(), and back(). Table 8.5 Underlying Container functions and Stack adapter functions

You would be correct in guessing that you can use vector, deque or list as the underlying container type. If you wanted a user-written type as the container, then, provided it has the necessary public interface, you could just "plug" it into a container adapter.

Queue

#include <queue>
A queue implements a First In First Out, or FIFO, structure, which provides the public functions push(), pop(), empty(), back() and front() (empty() returns a bool value which is true if the queue is empty). To support these, queue expects the underlying container to have push_back(), pop_front(), empty() or size(), front() and back(). Table 8.6 Underlying Container functions and Queue adapter functions

You can use deque or list as the underlying container type, or a user-written type. You can't use a vector because vector doesn't support pop_front(). You could write a pop_front() function for vector, but this would be inefficient, because removing the first element would require a potentially large memory shuffle of all the other elements, taking time O(N).

Priority Queue
#include <queue>
A priority_queue, defined in the <queue> header, is similar to a queue, with the additional capability of ordering the objects according to a user-defined priority. The order of objects with equal priority is not really predictable, except, of course, that they will be grouped together. This might be required by an operating system process scheduler, or a batch queue manager. The underlying container has to support push_back(), pop_back(), empty(), front(), plus a random access iterator and a comparison function to decide priority order. Table 8.7 Underlying container functions and Priority Queue adapter functions

Hence a vector or a deque can be used as the underlying container, or a suitable user-provided class.

Extensibility Mechanisms The extensibility mechanisms allow you to customize and extend the UML by adding new building blocks, creating new properties, and specifying new semantics in order to make the language suitable for your specific problem domain. There are three common extensibility mechanisms that are defined by the UML: stereotypes, tagged values, and constraints.
Stereotypes Stereotypes allow you to extend the vocabulary of the UML so that you can create new model elements, derived from existing ones, but that have specific properties that are suitable for your problem domain. They are used for classifying or marking the UML building blocks in order to introduce new building blocks that speak the language of your domain and that look like primitive, or basic, model elements. For example, when modeling a network you might need to have symbols for representing routers and hubs. By using stereotyped nodes you can make these things appear as primitive building blocks. As another example, let us consider exception classes in Java or C++, which you might sometimes have to model. Ideally you would only want to allow them to be thrown and caught, nothing else. Now, by marking them with a suitable stereotype you can make these classes into first class citizens in your model; in other words, you make them appear as basic building blocks. Stereotypes also allow you to introduce new graphical symbols for providing visual cues to the models that speak the vocabulary of your specific domain, as shown below.

Graphically, a stereotype is rendered as a name enclosed by guillemets and placed above the name of another element.

Alternatively, you can render the stereotyped element by using a new icon associated with that stereotype.

Tagged Values Tagged values are properties for specifying keyword-value pairs of model elements, where the keywords are attributes. They allow you to extend the properties of a UML building block so that you can create new information in the specification of that element. Tagged values can be defined for existing model elements, or for individual stereotypes, so that everything with that stereotype has that tagged value. It is important to mention that a tagged value is not equal to a class attribute. Instead, you can regard a tagged value as metadata, since its value applies to the element itself and not to its instances. One of the most common uses of a tagged value is to specify properties that are relevant to code generation or configuration management. So, for example, you can make use of a tagged value in order to specify the programming language to which you map a particular class, or you can use it to denote the author and the version of a component. As another example of where tagged values can be useful, consider the release team of a project, which is responsible for assembling, testing, and deploying releases. In such a case it might be feasible to keep track of the version number and test results for each main subsystem, and so one way of adding this information to the models is to use tagged values. Graphically, a tagged value is rendered as a string enclosed by brackets, which is placed below the name of another model element. The string consists of a name (the tag), a separator (the symbol =), and a value (of the tag), as shown below.

Constraints Constraints are properties for specifying semantics and/or conditions that must be held true at all times for the elements of a model. They allow you to extend the semantics of a UML building block by adding new rules, or modifying existing ones. For example, when modeling hard real-time systems it could be useful to adorn the models with some additional information, such as time budgets and deadlines. By making use of constraints these timing requirements can easily be captured. Graphically, a constraint is rendered as a string enclosed by brackets, which is placed near the associated element(s), or connected to the element(s) by dependency relationships. This notation can also be used to adorn a model element's basic notation, in order to visualize parts of an element's specification that have no graphical cue. For example, you can use constraint notation to provide some properties of associations, such as order and changeability. Refer to the diagram below to understand the use of constraints.

1. Write note on: a) Elementary Data Structures b) Ordered list c) Linked list d) Queue e) Stack f) Binary tree g) Hash tables
Ans:1 a) Elementary Data Structures: Organizing the data for processing is an essential step in the development of a computer program. For many applications, the choice of the proper data structure is the only major decision involved in the implementation: once the choice has been made, the necessary algorithms are simple. For the same data, some data structures require more or less space than others; for the same operations on the data, some data structures lead to more or less efficient algorithms than others. The choices of algorithm and of data structure are closely intertwined, and we continually seek ways to save time or space by making the choice properly. Elementary data structures such as stacks, queues, lists, and heaps will be the off-the-shelf components we build our algorithms from. There are two aspects to any data structure:
- The abstract operations which it supports.
- The implementation of these operations.
b) Ordered list: A list in which the order matters to the application. Therefore, for example, the implementer cannot scramble the order to improve efficiency. An ordered list is a list in which the order of the items is significant. However, the items in an ordered list are not necessarily sorted. Consequently, it is possible to change the order of items and still have a valid ordered list. Consider a list of the titles of the chapters in this book. The order of the items in the list corresponds to the order in which they appear in the book. However, since the chapter titles are not sorted alphabetically, we cannot consider the list to be sorted. Since it is possible to change the order of the chapters in a book, we must be able to do the same with the items of the list. As a result, we may insert an item into an ordered list at any position. A searchable container is a container that supports the following additional operations:

insert - used to put objects into the container;
withdraw - used to remove objects from the container;
find - used to locate objects in the container;
isMember - used to test whether a given object instance is in the container.
c) Linked list:

A linked list is an ordered collection of items into which items may be inserted and from which items may be deleted at any place. It is a data structure in which each element contains a pointer to the next element, thus forming a linear list.

Figure 1. Singly linked list


Apart from singly linked lists, we also have the following types of linked lists: doubly linked list, ordered linked list, circular list. Linked lists are commonly used when the length of the list is not known in advance and/or when it is frequently necessary to insert and/or delete in the middle of the list.
Doubly-linked vs. singly-linked lists: In a doubly-linked list, each node points to the next node and also to the previous node. In a singly-linked list, each node points to the next node but not back to the previous node.
Circular list: A linked list in which the last node points to the first node. If the list is doubly-linked, the first node must also point back to the last node.
d) Queue:

An ordered list in which insertion always occurs at one end and deletion always occurs at the other end.

Figure 2. FIFO
The following operations can be applied to a queue:
InitQueue(Queue): creates an empty queue;
Join(Item): inserts an item at the rear of the queue;
Leave(Queue): removes an item from the front of the queue;
isEmpty(Queue): returns true if the queue is empty.

e) Stack:

A stack is a list in which all insertions and deletions are made at one end, called the top. The last element to be inserted into the stack will be the first to be removed. Thus stacks are sometimes referred to as Last In First Out (LIFO) lists.

Figure 3. LIFO

The following operations can be applied to a stack:
InitStack(Stack): creates an empty stack;
Push(Item): pushes an item onto the stack;
Pop(Stack): removes the top item from the stack;
Top(Stack): returns the top item from the stack without removing it;
isEmpty(Stack): returns true if the stack is empty.

f) Binary tree: Each element in a binary tree is stored in a "node" class (or struct). Each node contains pointers to a left child node and a right child node. In some implementations, it may also contain a pointer to the parent node. A tree may also have an object of a second "tree" class (or struct) which acts as a header for the tree. The "tree" object contains a pointer to the root of the tree (the node with no parent) and whatever other information the programmer wants to squirrel away in it (e.g. the number of nodes currently in the tree). In a binary search tree, elements are kept sorted in left-to-right order across the tree. That is, if N is a node, then the value stored in N must be larger than the value stored in left-child(N) and less than the value stored in right-child(N). Variant trees may have the opposite order (smaller values to the right rather than to the left) or may allow two different nodes to contain equal values.

Figure 4. Binary Tree

g) Hash tables: A hash table or hash map is a data structure that uses a hash function to map identifying values, known as keys (e.g., a person's name), to their associated values (e.g., their telephone number). Thus, a hash table implements an associative array. The hash function is used to transform the key into the index (the hash) of an array element (the slot or bucket) where the corresponding value is to be sought. Ideally, the hash function should map each possible key to a unique slot index, but this ideal is rarely achievable in practice (unless the hash keys are fixed; i.e. new entries are never added to the table after it is created). Instead, most hash table designs assume that hash collisions - different keys that map to the same hash value - will occur and must be accommodated in some way. In a well-dimensioned hash table, the average cost (number of instructions) for each lookup is independent of the number of elements stored in the table. Many hash table designs also allow arbitrary insertions and deletions of key-value pairs, at constant average (amortized) cost per operation. In many situations, hash tables turn out to be more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches, and sets.

2. Illustrate the C program that represents the stack implementation of the PUSH and POP operations. Ans:2
/* Stack implementation demonstrating PUSH and POP */
#include <stdio.h>
#define MAX 5

int stack[MAX], top = -1;

void display(void) {
    if (top == -1) {
        printf("\nstack is empty\n");
    } else {
        printf("\nstack elements are:\n");
        for (int i = top; i >= 0; i--)
            printf("%d\n", stack[i]);
    }
}

void push(void) {
    int ele;
    if (top >= MAX - 1) {
        printf("\nstack is full\n");
        return;
    }
    printf("\nenter the element to be inserted\n");
    scanf("%d", &ele);
    stack[++top] = ele;
    display();
}

void pop(void) {
    if (top == -1) {
        printf("\nstack underflow\n");
    } else {
        printf("%d is deleted from stack\n", stack[top--]);
        display();
    }
}

int main(void) {
    int choice;
    char c;
    do {
        printf("\nenter the choice\n");
        printf("1->push\n");
        printf("2->pop\n");
        scanf("%d", &choice);
        if (choice == 1)
            push();
        else if (choice == 2)
            pop();
        else
            printf("\ninvalid choice\n");
        printf("\ndo you want to continue (y/n)?\n");
        scanf(" %c", &c);
    } while (c == 'y' || c == 'Y');
    return 0;
}

3. Show the result of inserting 3, 1, 4, 5, 2, 9, 6, 8 into:

a) a bottom-up splay tree

b) a top-down splay tree

Ans:3 Splay Trees We shall describe the algorithm by giving three rewrite rules in the form of pictures. In these pictures, x is the node that was accessed (and that will eventually be at the root of the tree). By looking at the local structure of the tree defined by x, x's parent, and x's grandparent, we decide which of the following three rules to follow. We continue to apply the rules until x is at the root of the tree:

Notes 1) Each rule has a mirror image variant, which covers all the cases.

2) The zig-zig rule is the one that distinguishes splaying from just rotating x to the root of the tree. 3) Top-down splaying is much more efficient in practice.

The Basic Bottom-Up Splay Tree


A technique called splaying can be used so that a logarithmic amortized bound can be achieved. We use rotations such as we've seen before. The zig case.

o Let X be a non-root node on the access path on which we are rotating. o If the parent of X is the root of the tree, we merely rotate X and the root as shown in Figure 2.

Figure 2 Zig Case

o This is the same as a normal single rotation. The zig-zag case. o In this case, X has both a parent P and a grandparent G. X is a right child and P is a left child (or vice versa).

o This is the same as a double rotation. o This is shown in Figure 3. The zig-zig case. o This is a different rotation from those we have previously seen. o Here X and P are either both left children or both right children. o The transformation is shown in Figure 4. o This is different from the rotate-to-root. Rotate-to-root rotates between X and P and then between X and G. The zig-zig splay rotates first between P and G, and then between X and P.
Figure 4 Zig-zig Case

Given the rotations, consider the example in Figure 5, where we are splaying c.

Top-Down Splay Trees


Bottom-up splay trees are hard to implement. We look at top-down splay trees that maintain the logarithmic amortized bound.
o This is the method recommended by the inventors of splay trees.
Basic idea: as we descend the tree in our search for some node X, we must take the nodes that are on the access path and move them and their subtrees out of the way. We must also perform some tree rotations to guarantee the amortized time bound.
At any point in the middle of a splay, we have:
o The current node X that is the root of its subtree.
o A tree L that stores nodes less than X.
o A tree R that stores nodes larger than X.
Initially, X is the root of T, and L and R are empty. As we descend the tree two levels at a time, we encounter a pair of nodes.
o Depending on whether these nodes are smaller than X or larger than X, they are placed in L or R, along with the subtrees that are not on the access path to X.
When we finally reach X, we can then attach L and R to the bottom of the middle tree, and as a result X will have been moved to the root. We now just have to show how the nodes are placed in the different trees. This is shown below:
o Zig rotation Figure 7

Zig-Zig Figure 8

Zig-Zag Figure

4. Discuss the techniques for allowing a hash file to expand and shrink dynamically. What are the advantages and disadvantages of each? Ans:4 Dynamic Hashing: As the database grows over time, we have three options:
1. Choose a hash function based on the current file size. Performance degrades as the file grows.
2. Choose a hash function based on the anticipated file size. Space is wasted initially.
3. Periodically re-organize the hash structure as the file grows. This requires selecting a new hash function, recomputing all addresses, and generating new bucket assignments.
Some hashing techniques allow the hash function to be modified dynamically to accommodate the growth or shrinking of the database. These are called dynamic hash functions. Extendable hashing is one form of dynamic hashing. Extendable hashing splits and coalesces buckets as the database size changes.

Figure 11.9: General extendable hash structure. We choose a hash function that is uniform and random and that generates values over a relatively large range. The range is b-bit binary integers (typically b = 32). 2^32 is over 4 billion, so we don't generate that many buckets! Instead we create buckets on demand, and do not use all b bits of the hash initially.

At any point we use i bits, where 0 <= i <= b. The i bits are used as an offset into a table of bucket addresses. The value of i grows and shrinks with the database. Figure 11.19 shows an extendable hash structure. Note that the i appearing over the bucket address table tells how many bits are required to determine the correct bucket. It may be the case that several entries point to the same bucket. All such entries will have a common hash prefix, but the length of this prefix may be less than i. So we give each bucket an integer giving the length of the common hash prefix.

This is shown in Figure 11.9 (textbook 11.19) as i_j. The number of bucket address table entries pointing to bucket j is then 2^(i - i_j).

To find the bucket containing search key value K:

Compute h(K).

Take the first i high-order bits of h(K). Look at the corresponding table entry for this i-bit string. Follow the bucket pointer in the table entry.

We now look at insertions in an extendable hashing scheme.

Follow the same procedure for lookup, ending up in some bucket j. If there is room in the bucket, insert information and insert record in the file. If the bucket is full, we must split the bucket, and redistribute the records. If bucket is split we may need to increase the number of bits we use in the hash.

Two cases exist:

1. If i = i_j, then only one entry in the bucket address table points to bucket j. Then we need to increase the size of the bucket address table so that we can include pointers to the two buckets that result from splitting bucket j. We increment i by one, thus considering more of the hash, and doubling the size of the bucket address table. Each entry is replaced by two entries, each containing the original value. Now two entries in the bucket address table point to bucket j. We allocate a new bucket z, and set the second pointer to point to z.

Set i_j and i_z to i. Rehash all records in bucket j, which are put in either j or z. Now insert the new record. It is remotely possible, but unlikely, that the new hash will still put all of the records in one bucket. If so, split again and increment i again. 2. If i > i_j, then more than one entry in the bucket address table points to bucket j.

Then we can split bucket j without increasing the size of the bucket address table (why?).

Note that all entries that point to bucket j correspond to hash prefixes that have the same value on the leftmost i_j bits. We allocate a new bucket z, and set i_j and i_z to the original i_j plus 1. Now adjust the entries in the bucket address table that previously pointed to bucket j. Leave the first half pointing to bucket j, and make the rest point to bucket z. Rehash each record in bucket j as before. Reattempt the new insert.

Note that in both cases we only need to rehash records in bucket j. Deletion of records is similar. Buckets may have to be coalesced, and bucket address table may have to be halved. Insertion is illustrated for the example deposit file of Figure 11.20.

32-bit hash values on bname are shown in Figure 11.21. An initial empty hash structure is shown in Figure 11.22. We insert records one by one.

We (unrealistically) assume that a bucket can only hold 2 records, in order to illustrate both situations described. As we insert the Perryridge and Round Hill records, this first bucket becomes full. When we insert the next record (Downtown), we must split the bucket.

Since i = i_j = 0, we need to increase the number of bits we use from the hash. We now use 1 bit, allowing us 2^1 = 2 buckets. This makes us double the size of the bucket address table to two entries. We split the bucket, placing the records whose search key hash begins with 1 in the new bucket, and those with a 0 in the old bucket (Figure 11.23). Next we attempt to insert the Redwood record, and find it hashes to 1.

That bucket is full, and i = i_j = 1. So we must split that bucket, increasing the number of bits we must use to 2. This necessitates doubling the bucket address table again, to four entries (Figure 11.24). We rehash the entries in the old bucket. We continue on for the deposit records of Figure 11.20, obtaining the extendable hash structure of Figure 11.25.

Advantages: Extendable hashing provides performance that does not degrade as the file grows. Minimal space overhead - no buckets need be reserved for future use. Bucket address table only contains one pointer for each hash value of current prefix length.

Disadvantages:

Extra level of indirection in the bucket address table Added complexity

Operating systems

1. Deadlock avoidance
The above solution allowed deadlock to happen, then detected that deadlock had occurred, and tried to fix the problem after the fact. Another solution is to avoid deadlock by only granting resources if granting them cannot result in a deadlock situation later. However, this works only if the system knows what requests for resources a process will be making in the future, and this is an unrealistic assumption. The text describes the banker's algorithm, but then points out that it is essentially impossible to implement because of this assumption.

Deadlock Prevention The difference between deadlock avoidance and deadlock prevention is a little subtle. Deadlock avoidance refers to a strategy where whenever a resource is requested, it is only granted if it cannot result in deadlock. Deadlock prevention strategies involve changing the rules so that processes will not make requests that could result in deadlock.
Here is a simple example of such a strategy. Suppose every possible resource is numbered (easy enough in theory, but often hard in practice), and processes must make their requests in order; that is, they cannot request a resource with a number lower than any of the resources that they have been granted so far. Deadlock cannot occur in this situation.

File Structure
Unix hides the chunkiness of tracks, sectors, etc. and presents each file as a smooth array of bytes with no internal structure. Application programs can, if they wish, use the bytes in the file to represent structures. For example, a wide-spread convention in Unix is to use the newline character (the character with bit pattern 00001010) to break text files into lines. Some other systems provide a variety of other types of files. The most common are files that consist of an array of fixed or variable size records and files that form an index mapping keys to values. Indexed files are usually implemented as B-trees. File Types Most systems divide files into various types. Unix initially supported only four types of files: directories, two kinds of special files, and regular files. Just about any type of file is considered a regular file by Unix. Within this category, however, it is useful to distinguish text files from binary files; within binary files there are executable files (which contain machine-language code) and data files; text files might be source files in a particular programming language (e.g. C or Java) or they may be human-readable text in some mark-up language such as html (hypertext markup language). Data files may be classified according to the program that created them or is able to interpret them.

In general (not just in Unix) there are three ways of indicating the type of a file:

1. The operating system may record the type of a file in meta-data stored separately from the file, but associated with it. Unix only provides enough meta-data to distinguish a regular file from a directory (or special file), but other systems support more types. 2. The type of a file may be indicated by part of its contents, such as a header made up of the first few bytes of the file. In Unix, files that store executable programs start with a two byte magic number that identifies them as executable and selects one of a variety of executable formats. In the original Unix executable format, called the a.out format, the magic number is the octal number 0407, which happens to be the machine code for a branch instruction on the PDP11 computer, one of the first computers to implement Unix. 3. The type of a file may be indicated by its name. Sometimes this is just a convention, and sometimes it's enforced by the OS or by certain programs. For example, the Unix Java compiler refuses to believe that a file contains Java source unless its name ends with .java. Access Modes Systems support various access modes for operations on a file. Sequential. Read or write the next record or next n bytes of the file. Usually, sequential access also allows a rewind operation.
Random. Read or write the nth record or bytes i through j. Unix provides an equivalent facility

by adding a seek operation to the sequential operations listed above. This packaging of operations allows random access but encourages sequential access.
Indexed. Read or write the record with a given key. In some cases, the key need not be unique; there can be more than one record with the same key. In this case, programs use a combination of indexed and sequential operations: get the first record with a given key, then get other records with the same key by doing sequential reads. Note that access modes are distinct from file structure - e.g., a record-structured file can be accessed either sequentially or randomly - but the two concepts are not entirely unrelated. For example, indexed access mode only makes sense for indexed files.

3. In order for deadlock to occur, four conditions must be true.

Mutual exclusion - Each resource is either currently allocated to exactly one process or it is available. (Two processes cannot simultaneously control the same resource or be in their critical section.)
Hold and wait - Processes currently holding resources can request new resources.

No preemption - Once a process holds a resource, it cannot be taken away by another process or the kernel.

Circular wait - Each process is waiting to obtain a resource which is held by another process.

Deadlock can be modeled with a directed graph. In a deadlock graph, vertices represent either processes (circles) or resources (squares). A process which has acquired a resource is shown with an arrow (edge) from the resource to the process. A process which has requested a resource which has not yet been assigned to it is modeled with an arrow from the process to the resource. If these create a cycle, there is deadlock. The deadlock situation in the above code can be modeled like this.

This graph shows an extremely simple deadlock situation, but it is also possible for a more complex situation to create deadlock. Here is an example of deadlock with four processes and four resources.

There are a number of ways that deadlock can occur in an operating system. We have seen some examples; here are two more.

Two processes need to lock two files. The first process locks one file, the second process locks the other, and each waits for the other to free up the locked file.

Two processes want to write a file to a print spool area at the same time and both start writing. However, the print spool area is of fixed size, and it fills up before either process finishes writing its file, so both wait for more space to become available.

5. The term process is used somewhat interchangeably with 'task' or 'job'. There are quite a few definitions presented in the literature, for instance:

A program in execution.
An asynchronous activity.

The entity to which processors are assigned.

The 'dispatchable' unit.

A process can be simply defined as a program in execution, but a process is more than the program code. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. A program is an algorithm expressed in some programming language. Being passive, a program is only a part of a process. A process, on the other hand, includes:

The current value of the program counter (PC).

The contents of the processor's registers.

The values of the variables.

The process stack, which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables.

A data section that contains global variables.

A process is the unit of work in a system.
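The components listed above map naturally onto what operating systems keep in a per-process record (often called a process control block). The struct below is a purely illustrative sketch; the field names and sizes are my own choices, not any particular kernel's layout.

```c
#include <assert.h>

/* Illustrative per-process record; field names/sizes are hypothetical */
struct pcb {
    int           pid;        /* process identifier                 */
    int           state;      /* New, Ready, Running, ...           */
    unsigned long pc;         /* saved program counter              */
    unsigned long regs[16];   /* saved processor registers          */
    void         *stack;      /* process stack: parameters,
                                 return addresses, temporaries      */
    void         *data;       /* data section: global variables     */
};
```

When the CPU switches from one process to another, the scheduler saves the running process's PC and registers into its record and restores them from the record of the process being dispatched.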

A process is created and terminated, and it follows some or all of the states of process transition, such as New, Ready, Running, Waiting, and Exit. In the process model, all software on the computer is organized into a number of sequential processes. In reality, the CPU switches back and forth among processes.

Process States

A process goes through a series of discrete process states during its lifetime. Depending on the implementation, operating systems may differ in the number of states a process goes through. Though there are various state models, ranging from two states to nine, I will consider the five-state model and then the seven-state model, as the lower-state models are now obsolete.

Five State Process Model

The following are the states of a five-state process model. The figure below shows these state transitions.

1. New State: The process is being created.

2. Terminated State: The process has finished execution.

3. Blocked (Waiting) State: When a process blocks, it does so because logically it cannot continue, typically because it is waiting for input that is not yet available. Formally, a process is said to be blocked if it is waiting for some event to happen (such as an I/O completion) before it can proceed. In this state a process is unable to run until some external event happens.

4. Running State: A process is said to be running if it currently has the CPU, that is, it is actually using the CPU at that particular instant.

5. Ready State: A process is said to be ready if it could use the CPU were one available. It is runnable but temporarily stopped to let another process run.

Process state transitions

Logically, the 'Running' and 'Ready' states are similar. In both cases the process is willing to run, only in the case of the 'Ready' state there is temporarily no CPU available for it. The 'Blocked' state is different from the 'Running' and 'Ready' states in that the process cannot run even if the CPU is available. The following are the six possible transitions among the above five states.

Transition 1 occurs when a process discovers that it cannot continue. If the running process initiates an I/O operation before its allotted time expires, it voluntarily relinquishes the CPU. This state transition is: Block (process): Running → Blocked.

Transition 2 occurs when the scheduler decides that the running process has run long enough and it is time to let another process have CPU time. This state transition is: Time-Run-Out (process): Running → Ready.

Transition 3 occurs when all other processes have had their share and it is time for the first process to run again. This state transition is: Dispatch (process): Ready → Running.

Transition 4 occurs when the external event for which a process was waiting (such as arrival of input) happens. This state transition is: Wakeup (process): Blocked → Ready.

Transition 5 occurs when the process is created. This state transition is: Admitted (process): New → Ready.

Transition 6 occurs when the process has finished execution. This state transition is: Exit (process): Running → Terminated.
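The six transitions can be summarized in a small table-driven C sketch (my own illustration; the enum and function names are not from any real OS API): a move between states is legal only if it appears in the list above.

```c
#include <assert.h>

enum pstate { NEW, READY, RUNNING, BLOCKED, TERMINATED };

/* the six legal transitions of the five-state model */
static const enum pstate legal[][2] = {
    { RUNNING, BLOCKED    },   /* 1: block        */
    { RUNNING, READY      },   /* 2: time-run-out */
    { READY,   RUNNING    },   /* 3: dispatch     */
    { BLOCKED, READY      },   /* 4: wakeup       */
    { NEW,     READY      },   /* 5: admitted     */
    { RUNNING, TERMINATED },   /* 6: exit         */
};

/* returns 1 if `from` -> `to` is one of the six transitions */
int can_transition(enum pstate from, enum pstate to)
{
    for (unsigned i = 0; i < sizeof legal / sizeof legal[0]; i++)
        if (legal[i][0] == from && legal[i][1] == to)
            return 1;
    return 0;
}
```

Note, for example, that a blocked process can never move directly back to Running; it must first be made Ready and then dispatched.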

Seven State Process Model

The following figure shows the seven-state process model, which uses the swapping technique.

Figure Seven State Process Model

Apart from the transitions we have seen in the five-state model, the following new transitions occur in the above seven-state model.

Blocked to Blocked/Suspend: If there are no ready processes in main memory, at least one blocked process is swapped out to make room for another process that is not blocked.

Blocked/Suspend to Blocked: If a process terminates, making space in main memory, and there is a high-priority process which is blocked but suspended and is anticipated to become unblocked very soon, that process is brought into main memory.

Blocked/Suspend to Ready/Suspend: A process is moved from Blocked/Suspend to Ready/Suspend if the event on which the process was waiting occurs while there is no space in main memory.

Ready/Suspend to Ready: If there are no ready processes in main memory, the operating system has to bring one into main memory to continue execution. Sometimes this transition takes place even when there are ready processes in main memory, if one of the processes in the Ready/Suspend state has higher priority; the high-priority process is then brought into main memory.

Ready to Ready/Suspend: Normally blocked processes are suspended by the operating system, but sometimes, to free a large block of memory, a ready process may be suspended. In this case the low-priority processes are normally suspended.

New to Ready/Suspend: When a new process is created, it should be added to the Ready state, but sometimes sufficient memory may not be available to allocate to the newly created process. In this case, the new process is shifted to Ready/Suspend.

Double ended queue (Deque)

Another type of queue, called a double-ended queue or deque, is discussed in this section. A deque is a special type of data structure in which insertions and deletions can be done either at the front end or at the rear end of the queue. The operations that can be performed on deques are:

Insert an item at the front end
Insert an item at the rear end
Delete an item from the front end
Delete an item from the rear end
Display the contents of the queue

C program to implement a double-ended queue:

#include <stdio.h>
#include <stdlib.h>     /* for exit() */

#define QUEUE_SIZE 5

/* Include function to check for overflow 4.2.1 Eg.-1 */
/* Include function to check for underflow 4.2.1 Eg.-3 */
/* Include function to insert an item at the front end 4.2.3 Eg.-1 */
/* Include function to insert an item at the rear end 4.2.1 Eg.-2 */
/* Include function to delete an item at the front end 4.2.1 Eg.-4 */
/* Include function to delete an item at the rear end 4.2.3 Eg.-2 */
/* Include function to display the contents of queue 4.2.1 Eg.-5 */

void main()
{
    int choice, item, f, r, q[10];
    f = 0;
    r = -1;
    for (;;) {
        printf("1: Insert_front 2: Insert_rear\n");
        printf("3: Delete_front 4: Delete_rear\n");
        printf("5: Display 6: Exit\n");
        printf("Enter the choice\n");
        scanf("%d", &choice);
        switch (choice) {
        case 1:
            printf("Enter the item to be inserted\n");
            scanf("%d", &item);
            insert_front(item, q, &f, &r);
            break;
        case 2:
            printf("Enter the item to be inserted\n");
            scanf("%d", &item);
            insert_rear(item, q, &r);
            break;
        case 3:
            delete_front(q, &f, &r);
            break;
        case 4:
            delete_rear(q, &f, &r);
            break;
        case 5:
            display(q, f, r);
            break;
        default:
            exit(0);
        }
    }
}
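The functions referenced by the comments above come from earlier examples that are not reproduced here. As a hedged sketch, two of them could look like the following; I use the same f/r convention as the main program (f = 0, r = -1 for an empty queue), but my versions return a status or value so their behaviour is easy to check, which the originals may not do.

```c
#include <stdio.h>

#define QUEUE_SIZE 5

/* insert at the rear end; returns 1 on success, 0 on overflow */
int insert_rear(int item, int q[], int *r)
{
    if (*r == QUEUE_SIZE - 1) {
        printf("Queue overflow\n");
        return 0;
    }
    q[++(*r)] = item;
    return 1;
}

/* delete from the front end; returns the item, or -1 on underflow */
int delete_front(int q[], int *f, int *r)
{
    if (*f > *r) {
        printf("Queue underflow\n");
        return -1;
    }
    return q[(*f)++];
}
```

With this convention, f > r means the queue is empty, and the items currently stored are q[f] through q[r].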

2. In this section, we discuss linear lists again, with a slight modification: the address of the first node is stored in the link field of the last node. The resulting list is called a circular singly linked list, or simply a circular list. The pictorial representation of a circular list is shown in the following figure.

Circular lists have some advantages over singly linked lists that are not circular. In a singly linked list, given the address of any node x, only those nodes which follow x are reachable; the nodes that precede x are not. To reach the nodes that precede x, it is required to preserve a pointer variable, say first, which contains the address of the first node, before traversing. But in a circular list, if the address of any node x is known, one can traverse the entire list from that node, and so all nodes are reachable. So, in a circular linked list, the search for the predecessor of the node x can be initiated from x itself, and there is no need for a pointer variable that points to the first node of the list. The disadvantage of circular lists is that, unless proper care is taken to detect the end of the list, it is possible to end up in an infinite loop.

A circular list can be used as a stack, queue or a deque. The basic operations required to implement these data structures are insert_front, insert_rear, delete_front, delete_rear and display. Let us implement these functions one by one.

Insert a node at the front end

Consider the list shown in the following fig. (a). The list contains 4 nodes, and a pointer variable last contains the address of the last node.

Step 1: To insert an item 50 at the front of the list, obtain a free node temp from the availability list and store the item in its info field, as shown in dotted lines in fig. (b) above. This can be accomplished using the following statements:

temp = getnode();
temp->info = item;

Step 2: Copy the address of the first node (i.e., last->link) into the link field of the newly obtained node temp. The statement to accomplish this task is:

temp->link = last->link;

Step 3: Establish a link between the newly created node temp and the last node. This is achieved by copying the address of the node temp into the link field of the last node. The corresponding code is:

last->link = temp;

Now the item is successfully inserted at the front of the list. All these steps have been designed by assuming that the list already exists. If the list is empty, make temp itself the last node and establish a link between the first node and the last node. Repeatedly insert items using the above procedure to create a list. The C function to insert an item at the front of a circular linked list is shown below.

Function to insert an item at the front end of the list:

NODE insert_front(int item, NODE last)
{
    NODE temp;
    temp = getnode();             /* Create a new node to be inserted */
    temp->info = item;
    if (last == NULL)             /* Make temp the first node */
        last = temp;
    else
        temp->link = last->link;  /* New node points to old first node */
    last->link = temp;            /* Link last node to first node */
    return last;                  /* Return the last node */
}

Insert a node at the rear end

Consider the list shown in fig. (a) below. This list contains 4 nodes, and last is a pointer variable that contains the address of the last node.

Let us insert the item 80 at the end of this list. After successfully inserting 80, the list shown in fig. (c) is obtained. Follow the steps shown below to insert an item at the rear end of the list.

Step 1: Obtain a free node temp from the availability list and store the item in its info field, as shown in dotted lines in fig. (b). This can be accomplished using the following statements:

temp = getnode();
temp->info = item;

Step 2: Copy the address of the first node (i.e., last->link) into the link field of the newly obtained node temp. The statement to accomplish this task is:

temp->link = last->link;

Step 3: Establish a link between the newly created node temp and the last node. This is achieved by copying the address of the node temp into the link field of the last node. The corresponding code is:

last->link = temp;

Step 4: The new node is made the last node using the statement:

return temp;    /* make the new node the last node */

These steps have been designed by assuming the list already exists. If the list is empty, make temp itself the first node as well as the last node. The C function for this is shown below.

Example: Function to insert an item at the rear end of the list

NODE insert_rear(int item, NODE last)
{
    NODE temp;
    temp = getnode();             /* Create a new node to be inserted */
    temp->info = item;
    if (last == NULL)             /* Make temp the first node */
        last = temp;
    else                          /* Insert at the rear end */
        temp->link = last->link;
    last->link = temp;            /* Link last node to first node */
    return temp;                  /* Make the new node the last node */
}

Note: Compare the functions insert_front() and insert_rear(). All statements in both functions are the same, except that insert_front() returns the address of the last node while insert_rear() returns the address of the new node.

Delete a node from the front end

Consider the list shown in the figure below. This list contains 5 nodes, and last is a pointer variable that contains the address of the last node.

To delete the front node (see the sequence of numbers 1, 2, 3 shown in the figure above), the steps to be followed are:

Step 1: first = last->link;        /* obtain address of the first node */
Step 2: last->link = first->link;  /* link the last node and the new first node */
Step 3: printf("The item deleted is %d\n", first->info);
        freenode(first);           /* delete the old first node */

Now the node identified as the first node is deleted. These steps have been designed by assuming the list already exists. If there is only one node, delete that node and assign NULL to last, indicating the list is empty. The corresponding code is:

/* If there is only one node, delete it */
if (last->link == last) {
    printf("The item deleted is %d\n", last->info);
    freenode(last);
    last = NULL;
    return last;
}

All the steps designed so far have to be executed provided the list is not empty. If the list is empty, display an appropriate message. The complete code to delete an item from the front end is shown below.

Example: Function to delete an item from the front end

NODE delete_front(NODE last)
{
    NODE first;
    if (last == NULL) {
        printf("List is empty\n");
        return NULL;
    }
    if (last->link == last) {     /* Only one node is present */
        printf("The item deleted is %d\n", last->info);
        freenode(last);
        return NULL;
    }
    /* List contains more than one node */
    first = last->link;           /* obtain node to be deleted */
    last->link = first->link;     /* store new first node in link of last */
    printf("The item deleted is %d\n", first->info);
    freenode(first);              /* delete the old first node */
    return last;
}

Delete a node from the rear end

To delete a node from the rear end, it is required to obtain the address of the predecessor of the node to be deleted. Consider the list shown in the figure, where the pointer variable last contains the address of the last node.

To delete the node pointed to by last, the steps to be followed are:

Step 1: Obtain the address of the predecessor of the node to be deleted. This can be accomplished by traversing from the first node until the link field of a node contains the address of the last node. The corresponding code is:

prev = last->link;
while (prev->link != last) {
    prev = prev->link;
}

Step 2: Link the first node and the last-but-one node (i.e., prev). This can be accomplished using the statement:

prev->link = last->link;

Step 3: The last node can be deleted using the statement:

freenode(last);

After executing these statements, return the node pointed to by prev as the new last node using the statement:

return prev;

All these steps have been designed by assuming that the list already exists. If there is only one node, delete that node and assign NULL to the pointer variable last, indicating that the list is empty. If the list is empty to begin with, display an appropriate message. The C function to delete an item from the rear end of a circular list is shown below.

Example: Function to delete an item from the rear end

NODE delete_rear(NODE last)
{
    NODE prev;
    if (last == NULL) {
        printf("List is empty\n");
        return NULL;
    }
    if (last->link == last) {     /* Only one node is present */
        printf("The item deleted is %d\n", last->info);
        freenode(last);
        return NULL;
    }
    /* Obtain address of previous node */
    prev = last->link;
    while (prev->link != last) {
        prev = prev->link;
    }
    prev->link = last->link;      /* prev node is made the last node */
    printf("The item deleted is %d\n", last->info);
    freenode(last);               /* delete the old last node */
    return prev;                  /* return the new last node */
}
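The display operation listed among the basic operations is never shown in the text. Here is a self-contained sketch of it, with a minimal node definition and getnode/freenode helpers of my own that mirror the text's NODE convention, together with insert_rear to build a list:

```c
#include <stdio.h>
#include <stdlib.h>

/* minimal node layout matching the text's NODE convention */
typedef struct node {
    int info;
    struct node *link;
} *NODE;

NODE getnode(void) { return malloc(sizeof(struct node)); }
void freenode(NODE n) { free(n); }

/* insert_rear as in the text: returns the new last node */
NODE insert_rear(int item, NODE last)
{
    NODE temp = getnode();
    temp->info = item;
    if (last == NULL)
        last = temp;
    else
        temp->link = last->link;
    last->link = temp;
    return temp;
}

/* display: start at the first node (last->link) and walk round once */
void display(NODE last)
{
    NODE cur;
    if (last == NULL) {
        printf("List is empty\n");
        return;
    }
    for (cur = last->link; cur != last; cur = cur->link)
        printf("%d ", cur->info);
    printf("%d\n", last->info);   /* finally, the last node itself */
}
```

Because the loop stops when it comes back to last, the traversal visits every node exactly once; this is the "proper care" needed to avoid the infinite loop mentioned earlier.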

3. Depth First Search (DFS) Implementation

The most obvious solution to code is to add queens to the board recursively, one by one, trying all possible queen placements. It is easy to exploit the fact that there must be exactly one queen in each column: at each step in the recursion, just choose where in the current column to put the queen. This scheme requires little space: since it only keeps track of as many decisions as there are to make, it requires only O(d) space.

Breadth First Search (BFS)

Form a one-element queue consisting of the root node. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node. If the first element is the goal node, do nothing (or you may stop now, depending on the situation). If the first element is not the goal node, remove it from the queue and add its children, if any, to the back of the queue. If the goal node has been found, announce success; otherwise announce failure.

The side effects of BFS:

1. Memory requirements are a bigger problem for BFS than the execution time.

2. Time is still a major factor, especially when the goal node is at the deepest level.
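The BFS procedure above can be sketched in C. This is my own illustration: the search space is a hypothetical directed graph stored as an adjacency matrix, and since each node is enqueued at most once, an array of N slots suffices for the queue.

```c
#include <string.h>

#define N 6   /* number of nodes in the example graph */

/* returns 1 if `goal` is reachable from `root`, 0 otherwise */
int bfs(int g[N][N], int root, int goal)
{
    int queue[N], head = 0, tail = 0;
    int seen[N];
    memset(seen, 0, sizeof seen);

    queue[tail++] = root;              /* one-element queue: the root */
    seen[root] = 1;
    while (head < tail) {              /* until the queue is empty... */
        int v = queue[head++];         /* examine the first element   */
        if (v == goal)
            return 1;                  /* goal found: success         */
        for (int w = 0; w < N; w++)    /* add children to the back    */
            if (g[v][w] && !seen[w]) {
                seen[w] = 1;
                queue[tail++] = w;
            }
    }
    return 0;                          /* queue exhausted: failure    */
}
```

The seen[] array illustrates the memory cost noted above: BFS must remember every node it has put on the queue, which is exactly where its space requirements come from.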

Depth First with Iterative Deepening (DF-ID)

An alternative to breadth first search is iterative deepening. Instead of a single breadth first search, run D depth first searches in succession, each search allowed to go one row deeper than the previous one. That is, the first search is allowed only to explore to row 1, the second to row 2, and so on. This "simulates" a breadth first search at a cost in time but a savings in space.

Comparison of DFS, BFS & DFS+ID

Once you've identified a problem as a search problem, it's important to choose the right type of search. Here is some information which may be useful when making that decision.

In a Nutshell

d is the depth of the answer
k is the depth searched
d <= k

If the program needs to produce a list sorted shortest solution first (in terms of distance from the root node), use breadth first search or iterative deepening. For other orders, depth first search is the right strategy.

If there isn't enough time to search the entire tree, use the algorithm that is more likely to find the answer. If the answer is expected to be in one of the rows of nodes closest to the root, use breadth first search or iterative deepening. Conversely, if the answer is expected to be in the leaves, use the simpler depth first search.

Be sure to keep space constraints in mind. If memory is insufficient to maintain the queue for breadth first search but time is available, use iterative deepening.

b.
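The iterative-deepening idea can be sketched in C on the same kind of adjacency-matrix graph (an illustration of mine, not from the text): a depth-limited DFS is run repeatedly with limits 0, 1, ..., so the shallowest solution depth is found first while using only O(d) space on the recursion stack.

```c
#define N 6   /* number of nodes in the example graph */

/* depth-limited DFS: explore at most `limit` edges below v */
static int dls(int g[N][N], int v, int goal, int limit)
{
    if (v == goal)
        return 1;
    if (limit == 0)
        return 0;                 /* this search may go no deeper */
    for (int w = 0; w < N; w++)
        if (g[v][w] && dls(g, w, goal, limit - 1))
            return 1;
    return 0;
}

/* iterative deepening: one DFS per depth limit; returns the depth of
   the shallowest solution, or -1 if none within maxdepth */
int iddfs(int g[N][N], int root, int goal, int maxdepth)
{
    for (int limit = 0; limit <= maxdepth; limit++)
        if (dls(g, root, goal, limit))
            return limit;
    return -1;
}
```

The cost in time mentioned above comes from re-exploring the shallow levels on every iteration; the savings in space comes from never needing a queue of frontier nodes.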

Splay Trees

We shall describe the algorithm by giving three rewrite rules in the form of pictures. In these pictures, x is the node that was accessed (and that will eventually be at the root of the tree). By looking at the local structure of the tree defined by x, x's parent, and x's grandparent, we decide which of the three rules to follow. We continue to apply the rules until x is at the root of the tree:

Notes

1) Each rule has a mirror-image variant, which covers all the cases.

2) The zig-zig rule is the one that distinguishes splaying from just rotating x to the root of the tree.

3) Top-down splaying is much more efficient in practice.

Here are some examples:
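The three rules can also be expressed in code. The recursive function below is a sketch of one well-known way to implement them (a simplified recursive splay, not taken from this text): depending on where the key lies relative to the root and the root's child, it applies a zig-zig, a zig-zag, or a final zig step, and brings the accessed key (or the last node on its search path) to the root.

```c
#include <stdlib.h>

struct node { int key; struct node *left, *right; };

struct node *mknode(int key)
{
    struct node *n = malloc(sizeof *n);
    n->key = key;
    n->left = n->right = NULL;
    return n;
}

/* right rotation about y: its left child rises */
static struct node *rotate_right(struct node *y)
{
    struct node *x = y->left;
    y->left = x->right;
    x->right = y;
    return x;
}

/* left rotation about x: its right child rises */
static struct node *rotate_left(struct node *x)
{
    struct node *y = x->right;
    x->right = y->left;
    y->left = x;
    return y;
}

/* splay `key` (or the last node on its search path) to the root */
struct node *splay(struct node *root, int key)
{
    if (root == NULL || root->key == key)
        return root;
    if (key < root->key) {
        if (root->left == NULL)
            return root;
        if (key < root->left->key) {                   /* zig-zig */
            root->left->left = splay(root->left->left, key);
            root = rotate_right(root);
        } else if (key > root->left->key) {            /* zig-zag */
            root->left->right = splay(root->left->right, key);
            if (root->left->right != NULL)
                root->left = rotate_left(root->left);
        }
        return root->left == NULL ? root : rotate_right(root);  /* zig */
    } else {
        if (root->right == NULL)
            return root;
        if (key > root->right->key) {                  /* zig-zig */
            root->right->right = splay(root->right->right, key);
            root = rotate_left(root);
        } else if (key < root->right->key) {           /* zig-zag */
            root->right->left = splay(root->right->left, key);
            if (root->right->left != NULL)
                root->right = rotate_right(root->right);
        }
        return root->right == NULL ? root : rotate_left(root);  /* zig */
    }
}
```

On the left-leaning chain 100-50-20, splaying 20 applies the zig-zig rule (two right rotations at the top), which is precisely the case that distinguishes splaying from naively rotating x to the root.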