
Deadlock:-

A deadlock happens in an operating system when two or more processes cannot complete their execution because each needs a resource that is held by another of the processes.

For example, suppose Process 1 holds Resource 1 and needs to acquire Resource 2, while Process 2 holds Resource 2 and needs to acquire Resource 1. Process 1 and Process 2 are in deadlock: each of them needs the other's resource to complete its execution, but neither is willing to relinquish the resource it holds.
Deadlock Characteristics
A deadlock can arise only if the following four conditions hold simultaneously.

1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait

1. Mutual Exclusion
There must be a resource that can be held by only one process at a time. For example, if there is a single instance of Resource 1, it can be held by Process 1 only.
2. Hold and Wait
A process can hold multiple resources and still request more resources held by other processes. For example, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.

3. No Preemption
A resource cannot be preempted from a process by force; a process can only release a resource voluntarily. For example, Process 2 cannot preempt Resource 1 from Process 1; it is released only when Process 1 relinquishes it voluntarily after its execution is complete.

4. Circular Wait
A process waits for a resource held by a second process, which waits for a resource held by a third process, and so on, until the last process waits for a resource held by the first. This forms a circular chain. For example: Process 1 is allocated Resource 2 and is requesting Resource 1, while Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.
Deadlock Prevention
We can prevent Deadlock by eliminating any of the above four conditions.

Eliminate Mutual Exclusion


It is generally not possible to eliminate mutual exclusion, because some resources, such as a tape drive or printer, are inherently non-shareable.
Eliminate Hold and wait
1. Allocate all required resources to the process before the start of its execution; this eliminates the hold and wait condition, but it leads to low device utilization. For example, if a process requires a printer only at a later time and we allocate the printer before its execution starts, the printer remains blocked until the process has completed its execution.

2. Require the process to release its current set of resources before making a new request for resources. This solution may lead to starvation.
Eliminate No Preemption
Preempt resources from a process when those resources are required by other, higher-priority processes.
Eliminate Circular Wait
Each resource is assigned a numerical number, and a process may request resources only in increasing order of numbering.
For example, if process P1 has been allocated resource R5, a later request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.
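This ordering rule can be expressed as a short sketch (a minimal Python sketch; the process names and resource numbers are illustrative, not part of any real API):

```python
# Sketch: prevent circular wait by granting resource requests only in
# increasing numerical order. Process/resource names are hypothetical.

class OrderedAllocator:
    def __init__(self):
        self.held = {}  # process name -> set of resource numbers it holds

    def request(self, process, resource_num):
        """Grant only if resource_num exceeds every resource the process holds."""
        held = self.held.setdefault(process, set())
        if held and resource_num <= max(held):
            return False  # would violate the ordering, so deny the request
        held.add(resource_num)
        return True

alloc = OrderedAllocator()
assert alloc.request("P1", 5) is True   # P1 is allocated R5
assert alloc.request("P1", 4) is False  # R4 < R5: not granted
assert alloc.request("P1", 7) is True   # R7 > R5: granted
```

Because every process acquires resources in the same global order, no circular chain of waits can form.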

Deadlock Avoidance
Deadlock avoidance can be done with Banker’s Algorithm.
Banker’s Algorithm
The Banker's Algorithm is a resource allocation and deadlock avoidance algorithm that tests every resource request made by a process: it checks whether granting the request leaves the system in a safe state. If it does, the request is allowed; if no safe state would remain, the request is denied.
Inputs to Banker’s Algorithm:

1. The maximum resource need of each process.
2. The resources currently allocated to each process.
3. The maximum free resources available in the system.
A request will be granted only under the following conditions:
1. The request made by the process is less than or equal to the maximum need of that process.
2. The request made by the process is less than or equal to the resources freely available in the system.
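The safety check at the heart of the Banker's Algorithm can be sketched for a single resource type (a Python sketch; the process counts and unit numbers below are illustrative, not from the text):

```python
# Sketch of the Banker's safety check for one resource type.
# A state is safe if the processes can all finish in some order.

def is_safe(available, max_need, allocated):
    """Return True if some ordering lets every process finish."""
    need = [m - a for m, a in zip(max_need, allocated)]
    finished = [False] * len(max_need)
    work = available
    progress = True
    while progress:
        progress = False
        for i in range(len(max_need)):
            if not finished[i] and need[i] <= work:
                work += allocated[i]  # process i finishes, releasing its resources
                finished[i] = True
                progress = True
    return all(finished)

# 3 processes, 3 units free: safe (P0 can finish, then P1, then P2)
assert is_safe(3, max_need=[7, 5, 3], allocated=[4, 2, 1]) is True
# Only 1 unit free: no process can reach its max need, so unsafe
assert is_safe(1, max_need=[7, 5, 3], allocated=[4, 2, 1]) is False
```

A request is granted only if the state that would result still passes this check.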

Concurrency in Operating System

Concurrency is the execution of multiple instruction sequences at the same time. It occurs when several process threads run in parallel. These threads communicate with other threads/processes through shared memory or through message passing. Because concurrency results in the sharing of system resources - instructions, memory, files - problems such as deadlocks and resource starvation can occur. (We will talk about starvation and deadlocks in the next module.)
Principles of Concurrency:
With current technology such as multi-core processors and parallel processing, which allow multiple processes/threads to be executed concurrently - that is, at the same time - it is possible to have more than one process/thread accessing the same space in memory, the same declared variable in the code, or even attempting to read/write to the same file.

The amount of time it takes a process to execute cannot be easily calculated, so we are unable to predict which process will complete first; this is why we need algorithms to deal with the issues that concurrency creates. The amount of time a process takes to complete depends on the following:

 The activities of other processes
 The way the operating system handles interrupts
 The scheduling policies of the operating system

Problems in Concurrency:

 Sharing global resources
Sharing global resources safely is difficult. If two processes both make use of a global variable and both change its value, then the order in which the changes are executed is critical.
 Optimal allocation of resources
It is difficult for the operating system to manage the allocation of resources optimally.
 Locating programming errors
It is very difficult to locate a programming error because reports are usually not
reproducible due to the different states of the shared components each time the code runs.
 Locking the channel
It may be inefficient for the operating system to simply lock the resource and prevent its
use by other processes.

Advantages of Concurrency:

 Running of multiple applications
Concurrency allows the operating system to run multiple applications at the same time.
 Better resource utilization
Concurrency allows resources that are not being used by one application to be used by other applications.
 Better average response time
Without concurrency, each application has to run to completion before the next one can start.
 Better performance
Concurrency can improve overall performance. When one application uses only the processor and another application uses only the disk drive, the time to run both applications concurrently to completion is shorter than the time to run each application consecutively.

Drawbacks of Concurrency:

 When concurrency is used, it is generally necessary to protect processes/threads from one another.
 Concurrency requires the coordination of multiple processes/threads through additional sequences of operations within the operating system.
 Additional mechanisms are necessary within the operating system to provide for switching among applications.
 Sometimes running too many applications concurrently leads to severely degraded performance.

Precedence Graph:-

A Precedence Graph is a directed acyclic graph used to show the order of execution of several processes in an operating system. It consists of nodes and edges: the nodes represent the processes (or statements) and the edges represent the flow of execution.

Following are the properties of a Precedence Graph:
 It is a directed graph.
 It is an acyclic graph.
 Nodes of graph correspond to individual statements of program code.
 Edge between two nodes represents the execution order.
 A directed edge from node A to node B shows that statement A executes first and then statement B executes.
Consider the following code:
S1: a = x + y;
S2: b = z + 1;
S3: c = a - b;
S4: w = c + 1;
If above code is executed concurrently, the following precedence relations exist:
 c = a – b cannot be executed before both a and b have been assigned values.
 w = c + 1 cannot be executed before the new values of c has been computed.
 The statements a = x + y and b = z + 1 could be executed concurrently.
Example: Consider the following precedence relations of a program:
1. S2 and S3 can be executed after S1 completes.
2. S4 can be executed after S2 completes.
3. S5 and S6 can be executed after S4 completes.
4. S7 can be executed after S5, S6 and S3 complete.

Solution: The precedence graph has the edges S1→S2, S1→S3, S2→S4, S4→S5, S4→S6, S5→S7, S6→S7 and S3→S7, with S1 as the starting node and S7 as the final node.
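The precedence relations of this example can be checked with a short sketch using Python's standard graphlib module (available from Python 3.9; each node maps to the statements that must finish before it):

```python
# Sketch: verify the precedence relations S1..S7 with a topological sort.
from graphlib import TopologicalSorter

# node -> set of its predecessors (statements that must complete first)
predecessors = {
    "S2": {"S1"}, "S3": {"S1"},        # S2 and S3 run after S1
    "S4": {"S2"},                      # S4 runs after S2
    "S5": {"S4"}, "S6": {"S4"},        # S5 and S6 run after S4
    "S7": {"S5", "S6", "S3"},          # S7 runs after S5, S6 and S3
}

order = list(TopologicalSorter(predecessors).static_order())
assert order[0] == "S1"   # S1 has no predecessors, so it runs first
assert order[-1] == "S7"  # S7 depends (transitively) on everything else
```

Any order produced this way respects every edge of the precedence graph, so the statements could be dispatched to concurrent workers batch by batch.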

Critical Section:-
The critical section is a code segment where the shared variables can be accessed. An atomic
action is required in a critical section i.e. only one process can execute in its critical section at a
time. All the other processes have to wait to execute in their critical sections.
A diagram that demonstrates the critical section is as follows −

The entry section handles entry into the critical section: it acquires the resources needed by the process for execution. The exit section handles exit from the critical section: it releases the resources and also informs the other processes that the critical section is free.

Solution to the Critical Section Problem

The critical section problem needs a solution to synchronize the different processes. The
solution to the critical section problem must satisfy the following conditions −

 Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any
time. If any other processes require the critical section, they must wait until it is free.
 Progress
Progress means that if a process is not using the critical section, then it should not stop
any other process from accessing it. In other words, any process can enter a critical
section if it is free.
 Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It should not wait endlessly to access the critical section.
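The entry-section / critical-section / exit-section pattern, together with the mutual exclusion requirement, can be sketched with a lock (a minimal Python sketch using the standard threading module; the shared counter is illustrative):

```python
# Sketch: a lock provides mutual exclusion around a critical section.
import threading

counter = 0                # shared variable accessed in the critical section
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        lock.acquire()     # entry section: wait until the critical section is free
        counter += 1       # critical section: update the shared variable
        lock.release()     # exit section: let another thread enter

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 400_000  # with the lock, no updates are lost
```

Without the lock, the read-modify-write of `counter` could interleave between threads and updates would be lost; the lock guarantees only one thread is inside the critical section at a time.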
Inter Process Communication – Semaphores:-

The first question that comes to mind is: why do we need semaphores? A simple answer: to protect the critical/common region shared among multiple processes.
Assume multiple processes use the same region of code; if all of them want to access it in parallel, the outcomes overlap. Say, for example, that multiple users share one printer (the common/critical section): if 3 users submit 3 jobs at the same time and all the jobs run in parallel, one user's output will be overlapped with another's. So we need to protect the printer using a semaphore, i.e., lock the critical section while one process is running and unlock it when that process is done. This is repeated for each user/process so that one job does not overlap with another.
Basically, semaphores are classified into two types −
Binary Semaphores − have only two states, 0 and 1, i.e., locked/unlocked or available/unavailable; this is how a mutex is implemented.
Counting Semaphores − semaphores that allow an arbitrary resource count are called counting semaphores.
Assume that we have 5 printers (and, to keep it simple, that each printer accepts only 1 job at a time) and we get 3 jobs to print. The 3 jobs are given to 3 printers (1 each). While this is in progress, 4 more jobs arrive. Of the 2 printers available, 2 jobs are scheduled, and we are left with 2 more jobs, which can be completed only after one of the printers becomes available. This kind of scheduling according to resource availability can be viewed as a counting semaphore.
To perform synchronization using semaphores, following are the steps −
Step 1 − Create a semaphore or connect to an already existing semaphore.
Step 2 − Perform operations on the semaphore i.e., allocate or release or wait for the resources.
Step 3 − Perform control operations on the semaphore.
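The printer example above can be sketched with a counting semaphore (a Python sketch using the standard threading module; the printer and job counts are illustrative):

```python
# Sketch: a counting semaphore initialized to the number of printers
# limits how many jobs can print at once.
import threading

NUM_PRINTERS = 3
printers = threading.Semaphore(NUM_PRINTERS)  # counting semaphore

current = 0         # jobs printing right now (tracked only to check the claim)
max_concurrent = 0  # highest number of simultaneous jobs observed
state_lock = threading.Lock()

def print_job():
    global current, max_concurrent
    with printers:                  # wait (P) until a printer is free
        with state_lock:
            current += 1
            max_concurrent = max(max_concurrent, current)
        # ... the job would be sent to the printer here ...
        with state_lock:
            current -= 1
    # leaving the 'with printers' block signals (V), releasing the printer

jobs = [threading.Thread(target=print_job) for _ in range(7)]
for j in jobs:
    j.start()
for j in jobs:
    j.join()

assert max_concurrent <= NUM_PRINTERS  # never more jobs printing than printers
```

A binary semaphore is simply the special case where the count is initialized to 1.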

File

A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.

File Structure

A File Structure should be according to a required format that the operating system can
understand.
 A file has a certain defined structure according to its type.
 A text file is a sequence of characters organized into lines.
 A source file is a sequence of procedures and functions.
 An object file is a sequence of bytes organized into blocks that are understandable by the
machine.
 When an operating system defines different file structures, it also contains the code to support these file structures. UNIX and MS-DOS support a minimal number of file structures.

File Type

File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files −

Ordinary Files
 These are the files that contain user information.
 These may have text, databases or executable program.
 The user can apply various operations on such files like add, modify, delete or even
remove the entire file.

Directory Files
 These files contain list of file names and other information related to these files.

Special Files
 These files are also known as device files.
 These files represent physical device like disks, terminals, printers, networks, tape drive
etc.
These files are of two types −
 Character special files − data is handled character by character as in case of terminals or
printers.
 Block special files − data is handled in blocks as in the case of disks and tapes.

File Access Mechanisms

File access mechanism refers to the manner in which the records of a file may be accessed. There
are several ways to access files −

 Sequential access
 Direct/Random access
 Indexed sequential access

Sequential access
A sequential access is that in which the records are accessed in some sequence, i.e., the
information in the file is processed in order, one record after the other. This access method is the
most primitive one. Example: Compilers usually access files in this fashion.

Direct/Random access
Random access file organization provides direct access to the records.
 Each record has its own address in the file, with the help of which it can be directly accessed for reading or writing.
 The records need not be in any sequence within the file and they need not be in adjacent
locations on the storage medium.

Indexed sequential access

 This mechanism is built on the basis of sequential access.
 An index is created for each file, containing pointers to the various blocks.
 The index is searched sequentially, and its pointer is used to access the file directly.
Directory Structure

What is a directory?

A directory can be defined as a listing of the related files on the disk. The directory may store some or all of the file attributes.

To get the benefit of different file systems on different operating systems, a hard disk can be divided into a number of partitions of different sizes. The partitions are also called volumes or mini disks.

Each partition must have at least one directory, in which all the files of the partition can be listed. A directory entry is maintained for each file in the directory, storing all the information related to that file.

A directory can be viewed as a file which contains the metadata of a bunch of files.

Every Directory supports a number of common operations on the file:

1. File Creation

2. Search for the file

3. File deletion

4. Renaming the file

5. Traversing Files

6. Listing of files

What is Free Space Management in OS?

There is system software in an operating system that keeps track of free space in order to allocate and de-allocate memory blocks to files; this is called the "file management system". The operating system maintains a "free space list" that records the free blocks.
When a file is created, the operating system searches the free space list for the space required to save the file. When a file is deleted, the file system frees the file's space and adds it back to the "free space list".

The process of looking after and managing the free blocks of the disk is called free space
management. There are some methods or techniques to implement a free space list. These are as
follows:

 Bitmap
 Linked list
 Grouping
 Counting

1. Bitmap

This technique is used to implement free space management. When free space is implemented as a bitmap or bit vector, each block of the disk is represented by one bit: when the block is free its bit is set to 1, and when the block is allocated its bit is set to 0. The main advantage of the bitmap is that it is relatively simple and efficient for finding the first free block, or a run of consecutive free blocks, on the disk. Many computers provide bit-manipulation instructions that can be used for this search.

The block number of the first free block is calculated by the formula:

(number of bits per word) × (number of words containing only 0s) + (offset of the first 1 bit)

For Example: Apple Macintosh operating system uses the bitmap method to allocate the disk
space.


Advantages:

 This technique is relatively simple.
 This technique is very efficient for finding free space on the disk.

Disadvantages:

 This technique requires special hardware support to find the first 1 in a word that is not 0.
 This technique is not useful for larger disks, where the bitmap itself becomes large.

For example: Consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 are free and the rest of the blocks are allocated. The free-space bitmap would be:
001111001111110001100000011100000
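This example, together with the first-free-block formula given earlier, can be verified with a short sketch (Python; the 8-bit word size is an assumption made for illustration):

```python
# Sketch: the free-space bitmap from the example, with 1 = free, 0 = allocated.
bitmap = "001111001111110001100000011100000"

free_blocks = [i for i, bit in enumerate(bitmap) if bit == "1"]
assert free_blocks == [2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]

# First-free-block formula from the text, scanning word by word
# (a hypothetical word size of 8 bits is assumed here):
BITS_PER_WORD = 8
words = [bitmap[i:i + BITS_PER_WORD] for i in range(0, len(bitmap), BITS_PER_WORD)]
zero_words = 0
for w in words:
    if "1" in w:
        first_free = BITS_PER_WORD * zero_words + w.index("1")
        break
    zero_words += 1

assert first_free == 2  # blocks 0 and 1 are allocated; block 2 is the first free
```

On real hardware the per-word scan is what the bit-manipulation instructions mentioned above accelerate.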

2. Linked list

This is another technique for free space management. A linked list of all the free blocks is maintained: a head pointer, kept in a special location on the disk, points to the first free block; that block contains a pointer to the next free block, which in turn points to the next, and so on. Searching the free list this way is not easy, and traversing it is not efficient, because every block on the list has to be read from disk, which costs I/O time. Traversing the free list is therefore not a frequent action.

Advantages:

 Whenever a file is to be allocated a free block, the operating system can simply allocate the first block in the free space list and move the head pointer to the next free block in the list.

Disadvantages:

 Searching the free space list is very time consuming; each block has to be read from the disk, which is very slow compared to reading main memory.
 It is not efficient for fast access.
In our earlier example, block 2 is the first free block; it contains a pointer to block 3, block 3 points to block 4, block 4 to block 5, block 5 to block 8, and so on until the last free block (block 27).

3. Grouping

This is also a technique of free space management, a modification of the free-list approach: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, while the last one contains the addresses of another n free blocks, and so on. Unlike the standard linked-list approach, this lets the addresses of a large number of free blocks be found quickly, because each block read yields n addresses instead of one.

4. Counting

Counting is another approach to free space management. Often, several contiguous blocks are allocated or freed together, especially when space is allocated with a contiguous allocation algorithm or with clustering. So instead of keeping a list of n free block addresses, we keep the address of the first free block together with a count of the n free contiguous blocks that follow it. Each entry in the free space list then consists of a disk address and a count. This method of free space management is similar to the method of allocating blocks. The entries can be stored in a B-tree instead of a linked list, so that operations such as lookup, deletion and insertion are efficient.
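The (first free block, count) representation described above can be sketched as follows (Python; the free-block numbers reuse the bitmap example from earlier):

```python
# Sketch: compress a sorted list of free block numbers into
# (first free block, run length) entries, as in the counting method.

def runs(free_blocks):
    """Turn sorted free block numbers into (start, count) entries."""
    entries = []
    for b in free_blocks:
        if entries and b == entries[-1][0] + entries[-1][1]:
            # b extends the current contiguous run
            entries[-1] = (entries[-1][0], entries[-1][1] + 1)
        else:
            # b starts a new run
            entries.append((b, 1))
    return entries

free = [2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]
assert runs(free) == [(2, 4), (8, 6), (17, 2), (25, 3)]
```

Fifteen free blocks collapse into four entries, which is why this representation pairs well with contiguous allocation.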
