
Explain Dekker’s or Peterson’s algorithm.

Peterson’s Solution

Peterson's solution provides a good algorithmic description of solving the critical-section problem and illustrates some of the complexities involved in designing software that addresses the requirements of mutual exclusion, progress, and bounded waiting.

This solution allows exactly two processes to enter the critical section; it does not generalize to more than two.

#define N     2
#define TRUE  1
#define FALSE 0

int interested[N] = { FALSE, FALSE };  /* interested[i]: process i wants in */
int turn;                              /* whose turn it is to yield         */

void Entry_section(int process)
{
    int other = 1 - process;           /* index of the other process  */
    interested[process] = TRUE;        /* announce interest           */
    turn = process;                    /* politely yield priority     */
    while (interested[other] == TRUE && turn == process)
        ;                              /* busy-wait until it is safe  */
}

void Exit_section(int process)
{
    interested[process] = FALSE;       /* leave the critical section  */
}

A process i brackets its critical section with Entry_section(i) and Exit_section(i).
Disadvantages of Peterson's Solution:

It works for only 2 processes, although within that limit it is the best user-mode scheme for the critical section.

It is a busy-waiting solution, so CPU time is wasted, and because of that the "spin lock" problem can arise. This problem can occur in any busy-waiting solution.

Mutual Exclusion

A mutex (mutual-exclusion lock) is a special type of binary semaphore that is used for controlling access to a shared resource.

Semaphore

Semaphores are integer variables used to solve the critical-section problem by means of two atomic operations, wait and signal, that are used for process synchronization.
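As a rough sketch of these semantics, here is a busy-waiting toy implementation (the names wait_op and signal_op are illustrative only; a real OS makes both operations atomic and blocks the caller in the kernel instead of spinning):

/* Toy counting semaphore: illustrates wait/signal semantics only. */
typedef struct { volatile int value; } semaphore;

void wait_op(semaphore *s)    /* also called P() or down() */
{
    while (s->value <= 0)
        ;                     /* spin until a unit is available             */
    s->value--;               /* take one unit; must be atomic in a real OS */
}

void signal_op(semaphore *s)  /* also called V() or up() */
{
    s->value++;               /* release one unit; must be atomic */
}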

Define Deadlock. Explain Ostrich algorithm for deadlock handling. Describe deadlock detection and recovery.

Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process.
Figure: example of a deadlock between processes P1 and P2 over resources R1 and R2.

Ostrich Algorithm

The ostrich algorithm means that the deadlock is simply ignored and it is assumed that it will never occur: pretend that there is no problem.

This is the easiest way to deal with the problem. This algorithm says to stick your head in the sand and pretend that there is no problem at all.

This strategy suggests ignoring deadlock because deadlocks occur rarely, whereas system crashes due to hardware failures, compiler errors, and operating-system bugs occur far more frequently; it is therefore not worth paying a large penalty in performance or convenience to eliminate deadlocks.

Deadlock Detection:

Deadlock detection works by employing an algorithm that tracks circular waiting and kills one or more processes so that the deadlock is removed. The system state is examined periodically to determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting a process, relinquishing all the resources that the process held.

 This technique does not limit resource access or restrict process actions.

 Requested resources are granted to processes whenever possible.

 It never delays process initiation and facilitates online handling.

 The disadvantage is the inherent preemption losses.
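As one possible illustration (not a prescribed OS algorithm), detection can be reduced to finding a cycle in a wait-for graph. The sketch below assumes a hypothetical 4-process graph and uses depth-first search:

/* Sketch: detect a cycle in a wait-for graph with DFS.
   wfg[i][j] != 0 means process i waits for a resource held by j. */
#include <stdio.h>

#define NPROC 4

int wfg[NPROC][NPROC] = {
    {0, 1, 0, 0},   /* P0 waits for P1          */
    {0, 0, 1, 0},   /* P1 waits for P2          */
    {1, 0, 0, 0},   /* P2 waits for P0: a cycle */
    {0, 0, 0, 0},   /* P3 waits for nothing     */
};

int color[NPROC];   /* 0 = unvisited, 1 = on the DFS stack, 2 = finished */

int dfs(int u)
{
    color[u] = 1;
    for (int v = 0; v < NPROC; v++)
        if (wfg[u][v]) {
            if (color[v] == 1) return 1;          /* back edge: deadlock */
            if (color[v] == 0 && dfs(v)) return 1;
        }
    color[u] = 2;
    return 0;
}

int main(void)
{
    for (int i = 0; i < NPROC; i++)
        if (color[i] == 0 && dfs(i)) {
            printf("Deadlock detected (cycle reachable from P%d)\n", i);
            return 0;
        }
    printf("No deadlock\n");
    return 0;
}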

Recovery From Deadlock:

There are three basic approaches to recovering from a deadlock:

 Inform the system operator and let him/her intervene manually.

 Stop one or more of the processes involved in the deadlock.

 Preempt some of the resources.


Conditions for Deadlock to arise:

1) Mutual Exclusion –

At least one resource must be held in a non-shareable state; if another process requests it, it must wait for the resource to be released.

2) Hold and Wait –

A process must hold at least one resource while also waiting for at least one
resource that another process is currently holding.

3) No Preemption –

Once a process holds a resource (i.e. after its request is granted), that resource
cannot be taken away from that process until the process voluntarily releases
it.

4) Circular Wait –

There must be a set of processes P0, P1, P2, …, PN such that every P[i] is waiting for P[(i + 1) % (N + 1)]. (Note that this condition implies the hold-and-wait condition, but handling the four conditions is easier if they are considered separately.)

Methods for handling deadlock

 Deadlock prevention or avoidance: do not allow the system to enter a deadlocked state in the first place.

 Deadlock detection and recovery: when a deadlock is detected, abort a process or preempt some resources.

 Ignore the problem entirely.

To avoid deadlocks, the system requires more information about all processes.
The system, in particular, must understand what resources a process will or
may request in the future.
Deadlock detection is relatively simple, but deadlock recovery necessitates
either aborting processes or preempting resources, neither of which is an
appealing option.

If deadlocks are neither avoided nor detected, the system will gradually slow down as more processes become stuck waiting for resources held by the deadlocked processes and by other waiting processes.

Unfortunately, when the computing requirements of a real-time process are high, this slowdown can be confused with a general system slowdown.

How does starvation differ from deadlock? Explain the handling policies.

Starvation and deadlock are both issues that can occur in concurrent
computing environments, but they differ in their nature, causes, and
consequences:

Starvation:

Starvation refers to a situation where a process or thread is unable to make progress or access a resource it needs, even though it is eligible to do so.

It occurs due to scheduling policies or resource-allocation methods that unfairly prioritize some processes or threads over others.

In a system with starvation, certain processes may be repeatedly favored, while others are kept waiting for extended periods.

Starvation does not involve a circular dependency between processes or threads; instead, it is more about fairness and resource allocation.

Deadlock:

Deadlock is a specific type of concurrency problem where two or more processes or threads are blocked and unable to proceed because they are each waiting for a resource held by another process or thread in the same set.

Deadlock typically occurs due to the simultaneous satisfaction of four necessary conditions: mutual exclusion, hold and wait, no preemption, and circular wait.

In a deadlock situation, there is a cyclic dependency among processes or threads, meaning each is waiting for a resource that another process in the cycle possesses.

Deadlock can lead to a complete standstill of the affected processes, causing the entire system to become unresponsive.

Deadlock handling strategies/policies:

1. Deadlock Prevention

2. Deadlock avoidance

3. Deadlock detection

Deadlock Prevention

The strategy of deadlock prevention is to design the system in such a way that the possibility of deadlock is excluded. Indirect methods prevent the occurrence of one of the three necessary conditions mutual exclusion, no preemption, and hold and wait; the direct method prevents the occurrence of circular wait.

Deadlock avoidance

This approach allows the first three necessary conditions of deadlock but makes judicious choices to ensure that the deadlock point is never reached. It allows more concurrency than prevention. A decision is made dynamically about whether the current resource-allocation request will, if granted, potentially lead to deadlock. It requires knowledge of future process requests.

Two techniques to avoid deadlock:

1. Process initiation denial

2. Resource allocation denial

Deadlock Detection:

As described above: the system state is examined periodically for a circular wait, and a detected deadlock is resolved by aborting one or more of the processes involved and reclaiming their resources.

Define process scheduling. Explain Round Robin Scheduling.

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Round Robin Scheduling

 Round Robin is a preemptive process-scheduling algorithm.

 Each process is given a fixed time to execute, called a quantum.

 Once a process has executed for the given time period, it is preempted and another process executes for its time period.

 Context switching is used to save the states of preempted processes.
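To make the mechanics concrete, here is a minimal simulation sketch; the burst times and the quantum of 2 are assumed example values, and all processes are taken to arrive at time 0:

/* Round-robin simulation: prints per-process waiting and turnaround times. */
#include <stdio.h>

int main(void)
{
    int burst[] = {5, 3, 8};            /* assumed CPU bursts */
    int n = 3, quantum = 2;
    int remain[3], waiting[3], time = 0, done = 0;

    for (int i = 0; i < n; i++) remain[i] = burst[i];

    while (done < n) {                   /* cycle through the ready queue */
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;
            int slice = remain[i] < quantum ? remain[i] : quantum;
            time += slice;               /* process i runs one slice */
            remain[i] -= slice;
            if (remain[i] == 0) {        /* finished: waiting = finish - burst */
                waiting[i] = time - burst[i];
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d: waiting=%d turnaround=%d\n",
               i, waiting[i], waiting[i] + burst[i]);
    return 0;
}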


How is a safe state achieved in the Banker's algorithm?

Banker's Algorithm

It is a deadlock-avoidance algorithm used in operating systems. The Banker's algorithm works on the principle of ensuring that the system has enough resources to allocate to each process so that the system never enters a deadlock state. The algorithm is used to prevent deadlocks that can occur when multiple processes are competing for a finite set of resources.

Working Principle

1. Initialize the system

Define the number of processes and resource types.

Define the total number of available resources for each resource type.

Create a matrix called the "allocation matrix" to represent the current resource
allocation for each process.

Create a matrix called the "need matrix" to represent the remaining resource
needs for each process.

2. Define a request

A process requests a certain number of resources of a particular type.


3. Check if the request can be granted

Check if the requested resources are available.

If the requested resources are not available, the process must wait.

If the requested resources are available, go to the next step.

4. Check if the system is in a safe state

Simulate the allocation of the requested resources to the process.

Check if this allocation results in a safe state, meaning there is a sequence of allocations that can satisfy all processes without leading to a deadlock.

If the state is safe, grant the request by updating the allocation matrix and the
need matrix.

If the state is not safe, do not grant the request and let the process wait.

5. Release the Resources

When a process has finished its execution, release its allocated resources.
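The safe-state test of step 4 can be sketched as follows; the available, allocation, and need matrices below are assumed example data, not part of the question:

/* Banker's safety check: a state is safe if some order exists in which
   every process can finish with need[i] <= work. */
#include <stdio.h>
#include <string.h>

#define P 3   /* processes      */
#define R 2   /* resource types */

int main(void)
{
    int avail[R]    = {3, 2};                    /* free resources     */
    int alloc[P][R] = {{1, 0}, {2, 1}, {1, 1}};  /* current allocation */
    int need[P][R]  = {{2, 2}, {1, 1}, {3, 0}};  /* max - alloc        */

    int work[R], finish[P] = {0}, seq[P], count = 0;
    memcpy(work, avail, sizeof work);

    while (count < P) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                        /* P_i can run to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];  /* it then releases its resources */
                finish[i] = 1;
                seq[count++] = i;
                progressed = 1;
            }
        }
        if (!progressed) { printf("UNSAFE state\n"); return 0; }
    }
    printf("SAFE; one safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", seq[i]);
    printf("\n");
    return 0;
}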

What is segmentation? Write down the importance and drawbacks of segmentation.

Segmentation

A process is divided into segments: the chunks into which a program is divided, which are not necessarily all of the same size. Segmentation gives the user's view of the process, which paging does not give. Here the user's view is mapped to physical memory.

There are two types of segmentation:

Virtual memory segmentation

Each process is divided into a number of segments, not all of which are
resident at any one point in time.
Simple segmentation

Each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.

Importance of Segmentation

 No internal fragmentation

 The average segment size is larger than the actual page size.

 Less overhead

 It is easier to relocate segments than entire address space.

 The segment table is of lesser size as compared to the page table in paging.

Drawback of Segmentation

 It can have external fragmentation.

 It is difficult to allocate contiguous memory to variable-sized partitions.

 Costly memory management algorithms.

File operation

A file is an abstract data type. The OS provides system calls to create, write, read, reposition, delete, and truncate files.

Creating a file – First, space in the file system must be found for the file. Second, an entry for the new file must be made in the directory.
Writing a file – To write a file, specify both the name of the file and the
information to be written to the file. The system must keep a write pointer to
the location in the file where the next write is to take place.

Reading a file – To read from a file, the directory is searched for the associated entry, and the system keeps a read pointer to the location in the file where the next read is to take place. Because a process is either reading from or writing to a file, the current operation location can be kept as a per-process current-file-position pointer.

Repositioning within a file – The directory is searched for the appropriate entry, and the current-file-position pointer is repositioned to a given value. This operation is also known as a file seek.

Deleting a file – To delete a file, search the directory for the named file. When
found, release all file space and erase the directory entry.

Truncating a file – User may want to erase the contents of a file but keep its
attributes. This function allows all attributes to remain unchanged except for
file length.
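As a hedged sketch, the POSIX calls below correspond to the operations just described; the file name demo.txt and its contents are made up for illustration:

/* Create, write, reposition, read, truncate, and delete a file (POSIX). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[6] = {0};

    /* create + write: space is found and a directory entry is made */
    int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello", 5);     /* advances the write pointer           */

    lseek(fd, 0, SEEK_SET);    /* reposition: the "file seek"          */
    read(fd, buf, 5);          /* advances the read pointer            */
    printf("read back: %s\n", buf);

    ftruncate(fd, 0);          /* truncate: keep attributes, drop data */
    close(fd);
    unlink("demo.txt");        /* delete: erase the directory entry    */
    return 0;
}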

Optimal page replacement.

In this algorithm, the page replaced is the one that would not be used for the longest duration of time in the future. Example: consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, with 4 page frames. Find the number of page faults.

Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots -> 4 page faults. 0 is already there -> 0 page faults.

When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future -> 1 page fault.

0 is already there -> 0 page faults.

4 takes the place of 1 -> 1 page fault.

For the remaining page references -> 0 page faults, because they are already available in memory. Total: 6 page faults.

Optimal page replacement is perfect, but not possible in practice, as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark against which other replacement algorithms can be analyzed.
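A small sketch that replays the example above, always evicting the page whose next use lies farthest in the future; for this reference string and 4 frames it reports 6 faults:

/* Optimal (Belady) page replacement on the example reference string. */
#include <stdio.h>

#define FRAMES 4

int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof refs / sizeof refs[0];
    int frame[FRAMES], used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frame[f] == refs[t]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < FRAMES) { frame[used++] = refs[t]; continue; }

        /* evict the page whose next use is farthest away (or never) */
        int victim = 0, farthest = -1;
        for (int f = 0; f < FRAMES; f++) {
            int next = n;                /* n means "never used again" */
            for (int k = t + 1; k < n; k++)
                if (refs[k] == frame[f]) { next = k; break; }
            if (next > farthest) { farthest = next; victim = f; }
        }
        frame[victim] = refs[t];
    }
    printf("page faults = %d\n", faults);   /* 6 for this input */
    return 0;
}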

Clock Page Replacement Algorithm

Clock is a more efficient version of FIFO than second chance, because pages don't have to be constantly pushed to the back of a list, yet it performs the same general function as second chance. The clock algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the last examined page frame in the list. When a page fault occurs and no empty frames exist, the R (referenced) bit is inspected at the hand's location. If R is 0, the new page is put in place of the page the hand points to, and the hand is advanced one position. Otherwise, the R bit is cleared, the hand is advanced, and the process is repeated until a page is replaced. This algorithm was first described in 1969 by F. J. Corbató.
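A minimal sketch of the victim selection just described (the surrounding pager that fills the frames and sets R bits on access is assumed, not shown):

/* Clock replacement: advance the hand, clearing R bits, until a frame
   with R == 0 is found; install the new page there. */
#define NFRAMES 4

struct frame { int page; int r; };   /* r = referenced (R) bit */

struct frame frames[NFRAMES];
int hand = 0;                        /* last examined page frame */

int clock_replace(int new_page)      /* returns the frame index used */
{
    for (;;) {
        if (frames[hand].r == 0) {   /* second chance already used up */
            frames[hand].page = new_page;
            frames[hand].r = 1;
            int victim = hand;
            hand = (hand + 1) % NFRAMES;  /* advance one position */
            return victim;
        }
        frames[hand].r = 0;          /* clear R: give a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}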

Process scheduling.

As defined earlier, process scheduling removes the running process from the CPU and selects another on the basis of a particular strategy; it is essential to multiprogramming operating systems, where loaded processes share the CPU by time multiplexing.

Process Scheduling Goals:

 Fairness: each process gets a fair share of the CPU.

 Efficiency: keep the CPU 100% busy.

 Response time: minimize the response time for interactive users.

 Throughput: maximize the number of jobs completed per time period.

 Waiting time: minimize the total time spent waiting in the ready queue.

 Turnaround time: minimize the time between submission and termination.

Following are the characteristics of the Memory Hierarchy design:

Capacity: It is the global volume of information the memory can store. As we move from top to bottom in the hierarchy, the capacity increases.

Access Time: It is the time interval between the read/write request and the
availability of the data. As we move from top to bottom in the Hierarchy, the
access time increases.

Performance: Earlier, when computer systems were designed without a memory-hierarchy design, the speed gap between CPU registers and main memory grew because of the large difference in access time, which lowered system performance. The enhancement came in the form of the memory-hierarchy design, which increases system performance. One of the most significant ways to increase system performance is minimizing how far down the memory hierarchy one has to go to manipulate data.

Cost per bit: As we move from bottom to top in the hierarchy, the cost per bit increases, i.e., internal memory is costlier than external memory.
Define swapping. Differentiate between fixed- and variable-sized partitioning in multiprogramming.

Swapping refers to the process of temporarily moving data from one location in memory (typically RAM) to another location, often to free up space for other data or to improve system performance. Swapping is used when the available RAM is insufficient to accommodate all the running processes and data that the computer or device needs.

Fixed-size partitioning | Variable-size partitioning
Equal-sized partitions are allocated to processes. | Partitions can vary in size, depending on the size of the process.
Can lead to memory wastage if a process does not fully utilize its allocated space. | More efficient in terms of memory utilization, since partitions adapt to the size of processes.
Suffers from both internal and external fragmentation. | Mainly suffers from external fragmentation.
Not flexible; each partition size is fixed. | Highly flexible; partitions can be adjusted to accommodate varying process sizes.
Easier to manage, since partition sizes are constant. | More complex to manage due to varying partition sizes.
Typically less memory-efficient due to fixed sizes. | More memory-efficient, as it minimizes memory wastage.
Processes must fit within the allocated partition size. | Processes can vary in size, potentially accommodating larger or smaller processes.
Simpler to implement, since partition sizes are fixed and known in advance. | More complex to implement due to the need for dynamic allocation and management of variable-sized partitions.
Suited to older systems with limited memory and relatively uniform process sizes. | Suited to modern systems where process sizes vary significantly, allowing better resource allocation.
Contiguous allocation | Noncontiguous allocation
Allocates a single, continuous block of memory to a process. | Allocates memory to a process in multiple blocks (segments or pages) that need not be adjacent.
Suffers from external fragmentation, where free memory exists between allocated blocks. | Typically reduces external fragmentation but may suffer from internal fragmentation within allocated segments.
May be less memory-efficient due to external fragmentation. | Often more memory-efficient, as it minimizes external fragmentation.
Processes cannot be easily relocated in memory without significant disruption. | Processes can be moved or swapped in and out of memory more flexibly, allowing better resource management.
Generally faster, since memory is contiguous, leading to efficient memory access. | Access times can vary, as data may be scattered across noncontiguous segments.
Simpler to implement and manage due to fixed memory locations. | More complex to implement and manage, especially with memory fragmentation and segment allocation.
Common in older systems with limited memory and static memory-allocation requirements. | Often used in modern systems with dynamic memory allocation and variable-sized processes.
Lower memory-management overhead, as memory allocation is straightforward. | Higher memory-management overhead due to the need to track multiple noncontiguous segments.
Memory compaction is generally not possible, or very difficult, due to contiguous allocation. | Easier to perform memory compaction to reduce fragmentation.
Memory-mapped I/O | I/O-mapped I/O
I/O devices are accessed like any other memory location. | I/O devices cannot be accessed like memory locations.
Devices are assigned 16-bit address values. | Devices are assigned 8-bit address values.
The instructions used are LDA, STA, etc. | The instructions used are IN and OUT.
The cycles involved during an operation are memory read and memory write. | The cycles involved during an operation are I/O read and I/O write.
Any register can communicate with the I/O devices. | Only the accumulator can communicate with I/O devices.
2^16 (65,536) I/O ports can be used for interfacing. | Only 256 I/O ports are available for interfacing.
During a write or read cycle, IO/M = 0. | During a write or read cycle, IO/M = 1.
No separate control signals are required, since memory and I/O share a unified address space. | Special control signals are used.
Process | Thread
A process is a program in execution. | A thread is a segment of a process.
Processes are not lightweight. | Threads are lightweight.
A process takes more time to terminate. | A thread takes less time to terminate.
A process takes more time to create. | A thread takes less time to create.
Communication between processes needs more time than communication between threads. | Communication between threads requires less time than communication between processes.
A process takes more time for context switching. | A thread takes less time for context switching.
Processes consume more resources. | Threads consume fewer resources.
Different processes are treated separately by the OS. | All peer threads of a process are treated as a single task by the OS.
Processes are mostly isolated. | Threads share memory.
Processes do not share data with each other. | Threads share data with each other.

Interprocess communication.

Interprocess communication (IPC) is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred, or the transferring of data from one process to another.

Process P1 <--- interprocess communication ---> Process P2

Here are some common IPC mechanisms:

 Pipes: Simple for communication between related processes; can be anonymous or named.
 Sockets: Used for network-based communication between processes on
different machines.
 Message Queues: Enable asynchronous communication and reliable
message delivery.
 Shared Memory: Fast data sharing but requires synchronization
mechanisms.
 Signals: Software interrupts for notifying processes about events or
conditions.
 RPC: Allows processes to invoke functions in remote processes.
 D-Bus: Common on Linux for desktop application and system service
communication.
 File-based IPC: Communication via shared files with locking mechanisms.
 Named Pipes (FIFOs): Special files for communication between
processes.

Choosing the right IPC method depends on factors like communication nature,
process relationships, and specific requirements.
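As a small illustration of one of these mechanisms, here is a sketch of parent-to-child communication over an anonymous pipe (the message text is arbitrary):

/* Anonymous pipe: the parent writes, the child reads. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                  /* fd[0] = read end, fd[1] = write end */
    char buf[32] = {0};

    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {          /* child: reads the message */
        close(fd[1]);
        read(fd[0], buf, sizeof buf - 1);
        printf("child received: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);               /* parent: writes the message */
    write(fd[1], "hello via pipe", 14);
    close(fd[1]);
    wait(NULL);                 /* reap the child */
    return 0;
}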

OS as resource manager

An operating system is a collection of programs and utilities. It acts as an interface between the user and the computer, and it creates a user-friendly environment. A computer has many resources (hardware and software) that may be required to complete a task. The commonly required resources are input/output devices, memory, file-storage space, CPU (central processing unit) time, and so on.

When a number of computers are connected through a network, with more than one computer competing for a printer or another common resource, the operating system follows the same order and manages the resources in an efficient manner.

Resources are shared in two ways: "in time" and "in space". When a resource is time-shared, first one of the tasks gets the resource for some time, then another, and so on.

The other kind of sharing is "space sharing": the users share the space of a resource.

Because it allocates time and space to the different computers according to their requirements, the OS is called a resource manager.

OS as an extended machine

1. Provides a stable, portable, reliable, safe, well-behaved environment (ideally).

2. Magician: makes the computer appear to be more than it really is.

3. A single processor appears like many separate processors.

4. A single memory is made to look like many separate memories, each potentially larger than the real memory.
