
UNIT-3

DEADLOCK PROBLEM / DEADLOCK CHARACTERIZATION:

If two processes are each waiting for some event to happen but that event never happens, this is a deadlock situation and those two processes are said to be in the deadlock state.

NECESSARY CONDITIONS FOR DEADLOCK

1) MUTUAL EXCLUSION: Only one process can use a resource at a time. While a process is using the resource, no other process is allowed to use that particular resource.

2) NO PREEMPTION: There should be no pre-emption. That is, a process which is holding a resource keeps holding it until it finishes execution. Processes are not pre-empted based on priority.

3) HOLD AND WAIT: A process holds at least one resource while waiting for additional resources held by other processes. For example, P1 is holding R1 and waiting for R2, while P2 is holding R2 and waiting for R1.

4) CIRCULAR WAIT: A circular chain (loop) of processes exists in which each process waits for a resource held by the next. When such a loop is present, the processes may be in a deadlock state.

VARIOUS METHODS TO HANDLE DEADLOCK:

1) DEADLOCK IGNORANCE (OSTRICH METHOD): Just ignore the deadlock. Example: if our system hangs and we restart it, we are simply ignoring the deadlock.
2) DEADLOCK PREVENTION: The main ideas of prevention are:

• Either try to remove all the necessary conditions for deadlock, or at least make one of the conditions false.
• The 4 necessary conditions are: Mutual Exclusion, No Pre-emption, Hold and Wait, and Circular Wait. If all four conditions hold at the same time, a deadlock can occur; if even one condition is false, the deadlock is removed or prevented.

HOW TO MAKE THESE CONDITIONS FALSE:

• MUTUAL EXCLUSION: Make all the resources shareable, meaning that multiple processes can use a resource at the same time. Wherever this is possible, deadlock is prevented.
• NO PRE-EMPTION: Pre-empt one of the processes. The pre-empted process goes back to the ready queue, and the process which was requesting the resource gets it, since that particular resource is now free. This can happen with the help of a time quantum.
• HOLD AND WAIT: Before a process starts executing, allocate all the resources that the process will demand. This way we can prevent Hold and Wait (practically impossible).
• CIRCULAR WAIT: Remove the condition of circular wait by giving a number to every resource; whenever a process requests resources, it must request them in increasing order of that numbering (see the sketch after this list).
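
A minimal sketch of the circular-wait rule above, assuming a hypothetical global resource numbering and a simple Process class (both illustrative, not from any real API): a request is allowed only if the requested resource's number is higher than every resource the process already holds.

```python
# Sketch: preventing circular wait by requesting resources in increasing order.
# RESOURCE_ORDER and the Process class below are illustrative only.

RESOURCE_ORDER = {"R1": 1, "R2": 2, "R3": 3}      # assumed global numbering

class Process:
    def __init__(self, name):
        self.name = name
        self.held = []                             # resources currently held

    def request(self, resource):
        """Grant the request only if it respects the increasing order."""
        rank = RESOURCE_ORDER[resource]
        if self.held and rank <= max(RESOURCE_ORDER[r] for r in self.held):
            raise RuntimeError(f"{self.name} must request resources in increasing order")
        self.held.append(resource)                 # request allowed

p1 = Process("P1")
p1.request("R1")       # allowed
p1.request("R2")       # allowed, since R2 is numbered higher than R1
# p1.request("R1")     # would be rejected: it violates the ordering
```

Because every process acquires resources in the same global order, no cycle of waiting processes can form.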

3) DEADLOCK AVOIDANCE (BANKER'S ALGORITHM): Whenever a resource is allocated to a particular process, the Banker's algorithm checks whether the resulting state is still a safe state.

4) DEADLOCK DETECTION AND RECOVERY: Detect the deadlock with the help of the Resource Allocation Graph and then recover from that situation. One way is to kill the process or processes that are in the deadlock; another way is resource pre-emption.

DEADLOCK AVOIDANCE (BANKER'S ALGORITHM):

We have to provide information to the Operating System beforehand: which processes are coming, which resources each process will request, how many instances it will request, and for how long it will need them. This algorithm can also be used to detect deadlock.
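
The safety test at the heart of the Banker's algorithm can be sketched as follows; only the safe-state check is shown (not full request handling), and the matrices at the bottom are made-up sample data.

```python
# Sketch of the Banker's safety algorithm: repeatedly find a process whose
# remaining need can be satisfied by the currently available resources,
# pretend it finishes, and reclaim everything it holds.

def is_safe(available, allocation, need):
    work = available[:]                      # resources free right now
    finished = [False] * len(allocation)
    safe_sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                # Process i can run to completion; release its allocation.
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finished[i] = True
                safe_sequence.append(i)
                progress = True
    return all(finished), safe_sequence

# Example with 3 processes and 2 resource types (illustrative numbers).
available  = [3, 2]
allocation = [[0, 1], [2, 0], [5, 2]]
maximum    = [[7, 5], [3, 2], [9, 2]]
need = [[maximum[i][j] - allocation[i][j] for j in range(2)] for i in range(3)]
print(is_safe(available, allocation, need))   # (True, [1, 2, 0]) -> safe state
```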
RESOURCE ALLOCATION GRAPH:

It is the most convenient and efficient way to represent the state of the system, that is, how the resources in the system are allocated and which resources have been assigned to which processes. The Resource Allocation Graph (RAG) is the most suitable method to show whether or not there is a deadlock in the system.

STARVATION: If the waiting time is long but finite, the process is in starvation.

DEADLOCK: If the waiting time is infinite, then the process is in deadlock.


If the RAG has a cycle (circular wait) and every resource has only a single instance, there will always be a deadlock.
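
Since a cycle in a single-instance RAG implies deadlock, detection reduces to finding a cycle in a directed graph. A minimal sketch, using an assumed adjacency-list representation in which both processes and resources are nodes:

```python
# Sketch: cycle detection in a Resource Allocation Graph by depth-first search.
# An edge P -> R means "P is waiting for R"; R -> P means "R is allocated to P".

def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on the DFS stack / done
    colour = {}

    def dfs(node):
        colour[node] = GREY
        for nxt in graph.get(node, []):
            state = colour.get(nxt, WHITE)
            if state == GREY:             # back edge -> cycle -> deadlock
                return True
            if state == WHITE and dfs(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour.get(n, WHITE) == WHITE and dfs(n) for n in graph)

# P1 holds R1 and waits for R2; P2 holds R2 and waits for R1 (made-up example).
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))    # True -> deadlock, since every resource has one instance
```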

MEMORY MANAGEMENT:

It is a method or functionality for managing the various kinds of memory.

Memory Management -> method of managing primary memory. (RAM)

GOAL -> Efficient utilization of memory.

CPU: It executes the instructions. The CPU is generally directly connected to the registers and cache memory, and also connected to RAM. RAM, in turn, is generally directly connected to secondary memory.

We cannot directly connect the CPU to secondary memory because of the speed gap: secondary memory is very slow while the CPU is very fast. If the size of RAM is increased, it directly affects the cost of the system.

Processes are brought from secondary memory into RAM, and then the CPU interacts with the processes directly.

MULTIPROGRAMMING:

When programs are kept in secondary memory and brought into RAM, we do not bring just one program; we try to bring more than one process into RAM. That is called multiprogramming.

Higher the degree of multiprogramming, higher the utilization of CPU.

Degree of multiprogramming -> Keep a greater number of processes in the RAM.

MEMORY MANAGEMENT TECHNIQUES:

Operating system uses various memory management methods to manage the primary memory, that is, RAM.

MEMORY MANAGEMENT TECHNIQUES

1) CONTIGUOUS:
• Fixed Partitioning
• Variable Partitioning

2) NON-CONTIGUOUS:
• Paging
• Segmentation
• Segmented Paging
• Inverted Paging
• Multilevel Paging

CONTIGUOUS MEMORY ALLOCATION:

Contiguous means continuous allocation: whatever processes come, each of them is given one continuous block of memory.

NON-CONTIGUOUS MEMORY ALLOCATION:

Non-contiguous means we are not doing continuous allocation. In this case, a process can be divided and stored at different locations in memory.

FIXED PARTITIONING (STATIC PARTITIONING):

➢ No. of partitions are fixed.


➢ Size of each partition may or may not be the same.
➢ Contiguous allocation so spanning is not allowed. (Either all or none)

DISADVANTAGES:

1. INTERNAL FRAGMENTATION:

Internal fragmentation occurs when memory is divided into fixed-sized blocks. If the memory allocated to a process is slightly larger than the memory it demanded, the difference between the allocated and demanded memory is known as internal fragmentation. For example, if a partition is 4 MB and the process needs only 3 MB, the remaining 1 MB inside that partition is wasted.

2. LIMIT IN PROCESS SIZE


3. DEGREE OF MULTIPROGRAMMING IS LIMITED.
4. EXTERNAL FRAGMENTATION:

Even though free memory is available in different slots, and the sum of all these free spaces is larger than the size of the new process, we are still not able to accommodate the process because the allocation has to be contiguous. This scattered unusable free space is called external fragmentation.
DYNAMIC PARTITIONING / VARIABLE PARTITIONING:

Space is allocated only when a process actually arrives in RAM. When a process comes into RAM, we allocate to it, at run time, exactly the amount of space it needs.

ADVANTAGES:

1. No internal fragmentation.
2. No limitation on number of processes.
3. No limitation on process size.

DISADVANTAGES:

1. External Fragmentation.
2. Allocation and de-allocation is complex.

MEMORY ALLOCATION METHODS:

These describe how we allocate free memory (holes) to the processes. We basically use 4 algorithms (a small sketch follows the list):

I. FIRST FIT: Allocate the first hole that is big enough.


II. NEXT FIT: Same as first fit but start search always from last allocated hole.
III. BEST FIT: Allocate the smallest hole that is big enough.
IV. WORST FIT: Allocate the largest hole.
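
A small sketch comparing the four strategies on a made-up list of free holes; the hole sizes and the request size are illustrative only.

```python
# Sketch: choosing a free hole for a request under the four allocation methods.
# 'holes' is a list of free-block sizes; each function returns the chosen index.

def first_fit(holes, request):
    return next((i for i, h in enumerate(holes) if h >= request), None)

def next_fit(holes, request, start):
    n = len(holes)
    for k in range(n):                        # search forward, wrapping around
        i = (start + k) % n
        if holes[i] >= request:
            return i
    return None

def best_fit(holes, request):
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return min(fits)[1] if fits else None     # smallest hole that is big enough

def worst_fit(holes, request):
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return max(fits)[1] if fits else None     # largest hole

holes = [100, 500, 200, 300, 600]             # assumed free holes (in KB)
print(first_fit(holes, 212))                  # 1 -> the 500 KB hole
print(next_fit(holes, 212, start=2))          # 3 -> the 300 KB hole (searching from index 2)
print(best_fit(holes, 212))                   # 3 -> the 300 KB hole
print(worst_fit(holes, 212))                  # 4 -> the 600 KB hole
```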

NON-CONTIGUOUS MEMORY ALLOCATION:

We allocate memory to the different processes in a non-consecutive manner.

Process can be divided and placed at different locations. External Fragmentation can be removed
by non-contiguous memory allocation.

NEED OF PAGING:

• Paging is important because it lets us divide a process into pages so that we can store them in memory in whatever holes (frames) are free.

PAGING:

In Paging we split a process into equally sized pages and insert it into frames of main memory.
Paging is a storage mechanism used in OS to retrieve processes from secondary storage to the
main memory as pages. The primary concept behind paging is to break each process into
individual pages. Thus, the primary memory would also be separated into frames.

No. of entries in a page table = No. of pages of the process.

Size of page table = No. of entries in the page table * size of one entry (the frame number bits).
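
For instance (illustrative numbers only): with a 32-bit logical address and a 4 KB page size, the page offset takes 12 bits and the page number takes the remaining 20 bits, so the page table has 2^20 (about one million) entries. If each entry holds a 20-bit frame number rounded up to 3 bytes, the page table occupies roughly 2^20 * 3 bytes = 3 MB.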

LOGICAL ADDRESS SPACE:

Logical Address = Page number followed by page offset (the two fields are concatenated, not added). The logical address space always represents the size of the process.

Page number = MSB bits

Page offset = LSB bits.

PHYSICAL ADDRESS SPACE:

Physical Address = Frame number followed by frame offset. A data structure called the page map table is used to keep track of which frame of physical memory holds each page of a process. The physical address space is the size of the main memory.

Frame offset / size = page offset / size.

PAGE OFFSET:

The least significant bits specify the word within the page and are called the page offset.
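
Putting the pieces together, logical-to-physical translation under paging can be sketched as follows; the page size, page-table contents, and the sample address are made-up values.

```python
# Sketch: translating a logical address under paging.
# Assumes 1 KB (1024-byte) pages and a small illustrative page table.

PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7, 3: 0}            # page number -> frame number

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE   # most significant bits
    page_offset = logical_address % PAGE_SIZE    # least significant bits
    frame_number = page_table[page_number]       # page-table lookup
    return frame_number * PAGE_SIZE + page_offset

print(translate(2100))   # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220
```
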
DISADVANTAGES OF PAGING:

1. Each process has its own page table, which adds memory overhead.

2. The page table itself is kept in main memory, so every memory reference requires an extra access to the page table.

INVERTED PAGING:

Inverted Page Table is the global page table which is maintained by the Operating System for all
the processes. In inverted page table, the number of entries is equal to the number of frames in
the main memory. It can be used to overcome the drawbacks of page table.

In an ordinary per-process page table, an entry is always reserved for every page, regardless of whether that page is present in main memory, which is simply a waste of memory when the page is not present. In inverted paging there is one global page table for all the processes, rather than a separate page table for each and every process.

No. of page table entries = No. of frames in main memory.

DISADVANTAGE:

Searching time is more in inverted paging. Linear Search is performed.
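
The linear search mentioned above can be sketched like this: each inverted-table entry records which (process, page) currently occupies that frame, and translation scans the table for a match. The table contents below are made up.

```python
# Sketch: lookup in an inverted page table. There is one entry per frame,
# holding the (process id, page number) currently resident in that frame.

inverted_table = [("P2", 0), ("P1", 3), ("P1", 0), ("P3", 1)]  # index = frame no.

def translate(pid, page_number, page_offset, page_size=1024):
    for frame, entry in enumerate(inverted_table):   # linear search over frames
        if entry == (pid, page_number):
            return frame * page_size + page_offset
    raise LookupError("page fault: page not in main memory")

print(translate("P1", 0, 100))   # found at frame 2 -> 2*1024 + 100 = 2148
```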

DEGREE OF MULTIPROGRAMMING:

No. of processes present inside RAM.

MULTILEVEL PAGING (HIERARCHICAL PAGING):


Multilevel Paging is a paging scheme that consists of two or more levels of page tables in a
hierarchical manner. It is also known as hierarchical paging. The entries of the level 1 page table
are pointers to a level 2 page table and entries of the level 2 page tables are pointers to a level 3
page table and so on. The entries of the last level page table store actual frame information. Level
1 contains a single-page table and the address of that table is stored in PTBR (Page Table Base
Register).

Why needed?

The page table itself can be bigger than a single frame of main memory, so the page table is itself paged.
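
As a rough sketch of how a two-level scheme splits the page number, assuming a 32-bit logical address, 4 KB pages, and 10-bit indexes into each level (all made-up parameters):

```python
# Sketch: splitting a 32-bit logical address for two-level paging.
# 12-bit offset (4 KB pages), 10-bit level-2 index, 10-bit level-1 index.

def split(addr):
    offset = addr & 0xFFF           # lowest 12 bits: offset within the page
    level2 = (addr >> 12) & 0x3FF   # next 10 bits: index into a level-2 page table
    level1 = (addr >> 22) & 0x3FF   # top 10 bits: index into the level-1 page table
    return level1, level2, offset

print(split(0x004030AB))   # -> (1, 3, 171), i.e. offset 0xAB within that page
```

The level-1 index selects a level-2 page table, the level-2 index selects the frame, and the offset selects the word inside that frame.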

THRASHING:

Thrashing is directly linked to Degree of Multiprogramming.

A condition in which excessive paging operations are taking place is called Thrashing. A system
that is thrashing can be perceived as either a very slow system or one that has come to a halt.

PAGE FAULT:

When the CPU demands a particular page and that page is not present in main memory, a page fault occurs. Servicing a page fault takes a lot of time, and if the OS stays busy servicing page faults, the performance of the system degrades.

How to Remove Thrashing?

Increase main memory size.

Long-term scheduler: use it to control the number of processes brought into RAM, i.e., keep the degree of multiprogramming within what the memory can support.
SEGMENTATION:

In Segmentation a process is divided into parts/segments and then put into the main memory.

Similar to Paging.

But in paging we divide the process into pages without knowing what is written in them, so a logically related unit may get split across pages, it may not execute smoothly, and page faults may occur.

Segmentation, on the other hand, does not divide the whole process blindly; it divides the program into logically meaningful segments.

In paging each page has the same size, while in segmentation segments may be of various sizes.

OVERLAY:

Overlay is a method by which a large process can be put into main memory; that is, if the size of the process is more than the size of the available memory, we can still accommodate it in main memory by using the concept of overlays.

Overlays are used to enable a process to be larger than the amount of memory allocated to it. The
basic idea of this is that only instructions and data that are needed at any given time are kept in
memory.

PAGING COMBINED WITH SEGMENTATION:

In Segmented Paging, the main memory is divided into variable size segments which are further
divided into fixed size pages.

1. Pages are smaller than segments.


2. Each Segment has a page table which means every program has multiple page tables.
3. The logical address is represented as a Segment Number, a Page Number within that segment, and a Page Offset.

Segment Number → It selects the appropriate entry in the Segment Table.

Page Number → It points to the exact page within the segment.

Each page table contains information about every page of its segment, and the segment table contains information about every segment. Each segment table entry points to a page table, and every page table entry is mapped to one of the pages within that segment.

Translation of logical address to physical address

The CPU generates a logical address which is divided into two parts: Segment Number and Segment Offset. The Segment Offset must be less than the segment limit. The offset is further divided into a Page Number and a Page Offset. To locate the exact entry in the page table, the page number is added to the page table base address.

The actual frame number with the page offset is mapped to the main memory to get the desired
word in the page of the certain segment of the process.
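
A compact sketch of that translation, with a made-up segment table (each segment entry holds its limit and its own page table) and 1 KB pages:

```python
# Sketch: logical -> physical translation in segmented paging.
# Each segment has its own page table; all values below are illustrative.

PAGE_SIZE = 1024
segment_table = {                    # segment no. -> (limit in bytes, page table)
    0: (4 * PAGE_SIZE, {0: 9, 1: 4, 2: 6, 3: 1}),
    1: (2 * PAGE_SIZE, {0: 3, 1: 8}),
}

def translate(segment_number, segment_offset):
    limit, page_table = segment_table[segment_number]
    if segment_offset >= limit:      # the offset must stay within the segment
        raise MemoryError("segmentation fault: offset exceeds segment limit")
    page_number = segment_offset // PAGE_SIZE
    page_offset = segment_offset % PAGE_SIZE
    return page_table[page_number] * PAGE_SIZE + page_offset

print(translate(1, 1100))   # segment 1, page 1, offset 76 -> 8*1024 + 76 = 8268
```
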
Advantages of Segmented Paging

1. It reduces memory usage.


2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.

Disadvantages of Segmented Paging

1. Internal Fragmentation will be there.


2. The complexity level is much higher as compared to paging.
3. Page tables need to be stored contiguously in memory.
