
MIZAN-TEPI UNIVERSITY

TEPI CAMPUS

SCHOOL OF COMPUTING AND INFORMATICS

DEPARTMENT OF SOFTWARE ENGINEERING

ASSIGNMENT OF OPERATING SYSTEM (SE2032)

NAME ID

1 Thitna Endale MTUUR/3080/14

2 Surafel Wondimu MTUUR/2920/14

3 Begna Leta MTUUR/0629/14

4 Firomsa Dine MTUUR/1386/14

5 Zelalem Girma MTUUR/3772/14

Submission date: 15/4/2016 E.C.

Submitted to Mr. Melkamu

Contents

INTRODUCTION
Semaphore
  Counting Semaphore
  Binary Semaphore
  Advantages and Disadvantages of Semaphore
Monitor
  Advantages and Disadvantages of Monitor
Main Differences between the Semaphore and Monitor
Synchronization Problems
  Bounded-Buffer (or Producer-Consumer) Problem
  Dining-Philosophers Problem
  Readers and Writers Problem
Banker's Algorithm
  Safety Algorithm
  Resource Request Algorithm
Page Replacement Algorithms (PRA)
Paging in Operating Systems (OS)
Virtual Memory in Operating Systems (OS)
Disk Scheduling Algorithm
The Working Set Model
Page Fault Frequency
CONCLUSION
REFERENCES

INTRODUCTION

A semaphore is a synchronization primitive used in concurrent programming to control access to
shared resources. It is an integer variable that processes can increment or decrement only through
atomic operations. Semaphores are mainly used to solve the critical-section problem, where
multiple processes or threads compete for access to a shared resource.

A semaphore can be either binary (0 or 1) or a counting semaphore (a non-negative integer).


Binary semaphores can be used for mutual exclusion, allowing only one process to access a
resource at a time. Counting semaphores can represent the availability of a certain number of
resources. Processes use operations like wait (P) and signal (V) on the semaphore to control
their access to the shared resource.

A monitor is a higher-level synchronization construct that provides a way for multiple threads to
safely access shared data. It combines data structures and procedures (or methods) into a single
unit. The idea behind monitors is to encapsulate shared resources within an object and ensure
that only one thread can execute monitor procedures at a time.

The monitor concept includes three sections: 1) Entry section: the code a thread must execute to
enter the monitor; 2) Inside section: the section where the thread can access shared data and
execute operations; and 3) Exit section: the code the thread executes to leave the monitor. Only
one thread can be inside the monitor at any given time, while other threads wait until it becomes
free.

Monitors also provide mechanisms like condition variables. A condition variable allows threads
to wait until a certain condition is satisfied before proceeding. Threads can signal or broadcast
signals to wake up waiting threads when the desired condition is met.

Both semaphores and monitors are synchronization mechanisms used to coordinate concurrent
access to shared resources, but they differ in terms of complexity, abstraction level, and specific
use cases.

Semaphore

A semaphore is an integer variable that lets many processes in a parallel system, such as a
multitasking OS, coordinate access to a common resource. The integer variable (S) is initialized
to the number of resource instances in the system. The wait() and signal() operations are the only
operations that may modify the semaphore's value, and they are atomic: while one process
modifies the semaphore, no other process can modify it simultaneously.

Furthermore, the operating system categorizes semaphores into two types:

1. Counting Semaphore

2. Binary Semaphore

Counting Semaphore

In Counting Semaphore, the value of semaphore S is initialized to the number of resources in


the system. When a process needs to access shared resources, it calls the wait() method on the
semaphore, decreasing its value by one. When the shared resource is released, it calls
the signal() method, increasing the value by 1.

When the semaphore count reaches 0, all resource instances are in use. If a process needs a
resource while the count is 0, it performs the wait() operation and is blocked until another
process releases a shared resource, raising the semaphore's value above 0.

Binary Semaphore

A binary semaphore has a value of either 0 or 1. It is comparable to a mutex lock, except that a
mutex is a locking mechanism while a semaphore is a signalling mechanism. When a process
needs to access the resource guarded by a binary semaphore, it uses the wait() method to
decrement the semaphore's value from 1 to 0.

When the process releases the resource, it uses the signal() method to increase the semaphore
value to 1. When the semaphore value is 0, and a process needs to use the resource, it uses
the wait() method to block until the current process that is using the resource releases it.

Syntax:

The syntax of the semaphore operations may be written as:

// Wait operation
wait(Semaphore S) {
    while (S <= 0)
        ;       // busy-wait until S becomes positive
    S--;        // claim one resource instance
}

// Signal operation
signal(Semaphore S) {
    S++;        // release one resource instance
}
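As a sketch of how wait() and signal() are used in practice, the following Python snippet guards a pool of two resource instances with the standard threading.Semaphore (the pool size, worker function, and counters are invented for this illustration):

```python
import threading

pool = threading.Semaphore(2)   # two resource instances available
lock = threading.Lock()         # protects the counters below
in_use = 0
max_seen = 0

def worker():
    global in_use, max_seen
    pool.acquire()              # wait(): blocks when the count is 0
    with lock:
        in_use += 1
        max_seen = max(max_seen, in_use)
    with lock:
        in_use -= 1
    pool.release()              # signal(): increments the count

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(max_seen)                 # never exceeds 2
```

No matter how the ten workers interleave, at most two are ever past acquire() at once, which is exactly the counting-semaphore guarantee described above.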

Advantages and Disadvantages of Semaphore
Advantages:

They don't allow multiple processes to enter the critical section simultaneously. Mutual
exclusion is achieved in this manner, which makes them more efficient than some other
synchronization techniques.

Blocking semaphore implementations avoid wasting process time and resources on busy
waiting, because a process is suspended until the condition it needs is satisfied.

They enable flexible resource management.

They are machine-independent, because they execute in the microkernel's machine-independent
code.

Disadvantages

There could be a situation of priority inversion, where processes with low priority get access
to the critical section before those with higher priority.

Semaphore programming is complex, and there is a risk that mutual exclusion will not be
achieved.

The wait() and signal() methods must be conducted correctly to avoid deadlocks.

Monitor

A monitor is a synchronization construct that gives threads both mutual exclusion and the ability
to wait() for a given condition to become true. It is an abstract data type that bundles shared
variables with the collection of procedures that operate on them. A process may not access the
shared data variables directly; it must go through the monitor's procedures, which serialize
access to the shared data.

At any particular time, only one process may be active in a monitor. Other processes that require
access to the shared variables must queue and are only granted access after the previous process
releases the shared variables.

Syntax:

The syntax of the monitor may be used as:

monitor monitorName {

    // shared variable declarations
    data variables;

    procedure P1() { ... }
    procedure P2() { ... }
    .
    .
    procedure Pn() { ... }

    initialization code() { ... }
}
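Languages such as Java build monitors in (synchronized methods); the same idea can be sketched in Python with a Lock and a Condition. The class and method names below are our own illustration, not a standard API:

```python
import threading

class BoundedCounter:
    """Monitor-style object: every method runs under one internal lock."""
    def __init__(self, limit):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._lock:                 # entry section: acquire the monitor
            while self._value >= self._limit:
                self._not_full.wait()    # wait on a condition variable
            self._value += 1             # inside section: touch shared data
                                         # leaving the with-block is the exit section

    def decrement(self):
        with self._lock:
            self._value -= 1
            self._not_full.notify()      # wake one thread waiting on the condition

c = BoundedCounter(1)
c.increment()
c.decrement()
print(c._value)   # 0
```

Because both methods share one lock, only one thread can be "inside the monitor" at a time, and the condition variable lets a thread sleep until another thread makes the condition true.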

Advantages and Disadvantages of Monitor


Advantages:

Mutual exclusion is automatic in monitors.

Monitors are less difficult to implement than semaphores.

Monitors may overcome the timing errors that occur when semaphores are used.

Monitors are a collection of procedures and condition variables that are combined in a special
type of module.

Disadvantages:

Monitors must be implemented into the programming language.

The compiler should generate code for them.

It gives the compiler the additional burden of knowing what operating system features are
available for controlling access to critical sections in concurrent processes.

Main Differences between the Semaphore and Monitor

Here, you will learn the main differences between the semaphore and monitor. Some of the main
differences are as follows:

A semaphore is an integer variable that allows many processes in a parallel system to manage
access to a common resource like a multitasking OS. On the other hand, a monitor is a
synchronization technique that enables threads to mutual exclusion and the wait() for a given
condition to become true.

When a process uses a shared resource guarded by a semaphore, it calls the wait() method to
acquire it and executes the signal() method to release it. In contrast, when a process uses shared
resources in a monitor, it has to access them via the monitor's procedures.

Semaphore is an integer variable, whereas monitor is an abstract data type.

In a semaphore, an integer variable records the number of resources available in the system. In
contrast, a monitor is an abstract data type that permits only one process at a time to execute in
the critical section.

Semaphores have no concept of condition variables, while monitors have condition variables.

A semaphore's value can only be changed using the wait() and signal() operations. In contrast,
a monitor holds the shared variables themselves, together with the mechanism that lets processes
access them.

Fig 1 semaphore VS monitor

Semaphore and monitor both allow processes to access shared resources with mutual exclusion,
and both are process synchronization tools. Yet they are quite different from each other: a
semaphore is an integer variable that can be operated on only by the wait() and signal()
operations (apart from initialization), whereas a monitor is an abstract data type whose construct
allows only one process to be active inside it at a time, as summarized in the comparison above.

Synchronization Problems

These problems are used for testing nearly every newly proposed synchronization scheme. The
following problems of synchronization are considered as classical problems:

1. Bounded-Buffer (or Producer-Consumer) Problem

2. Dining-Philosophers Problem

3. Readers and Writers Problem

4. Banker’s Algorithm

Bounded-Buffer (or Producer-Consumer) Problem

The bounded-buffer problem is also called the producer-consumer problem. The solution is to
create two counting semaphores, "full" and "empty", to keep track of the current number of full
and empty buffer slots respectively, plus a mutex for the buffer itself. Producers add items to the
buffer and consumers remove them, each occupying or freeing one slot at a time.
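The classic semaphore solution can be sketched in Python as follows (the buffer size and item count are arbitrary choices for the example):

```python
import threading
from collections import deque

N = 3                                   # buffer capacity (arbitrary)
buffer = deque()
mutex = threading.Semaphore(1)          # protects the buffer itself
empty = threading.Semaphore(N)          # counts empty slots
full = threading.Semaphore(0)           # counts filled slots
consumed = []

def producer():
    for item in range(5):
        empty.acquire()                 # wait(empty): block if buffer is full
        with mutex:
            buffer.append(item)
        full.release()                  # signal(full)

def consumer():
    for _ in range(5):
        full.acquire()                  # wait(full): block if buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                 # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)   # [0, 1, 2, 3, 4]
```

With one producer and one consumer the items come out in order; the two counting semaphores never let the producer overrun a full buffer or the consumer read an empty one.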

Dining-Philosophers Problem

The dining-philosophers problem states that K philosophers are seated around a circular table
with one chopstick between each pair of neighbors. A philosopher may eat only if he can pick up
the two chopsticks adjacent to him. Each chopstick may be held by only one of its two
neighboring philosophers at a time. This problem involves the allocation of limited resources to
a group of processes in a deadlock-free and starvation-free manner.

Fig 2 Dining Philosopher Problem
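One well-known deadlock-free approach, sketched here, breaks the circular wait by having the last philosopher pick up the chopsticks in the opposite order (K and the meal count are arbitrary choices for the example):

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i):
    left, right = i, (i + 1) % K
    # the last philosopher reverses acquisition order, so every thread
    # grabs the lower-numbered chopstick first: no circular wait, no deadlock
    first, second = (left, right) if i < K - 1 else (right, left)
    for _ in range(3):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1           # eating with both chopsticks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # [3, 3, 3, 3, 3]
```

Because all threads acquire locks in a single global order, the circular-wait condition for deadlock can never arise, and every philosopher finishes all three meals.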

Readers and Writers Problem

Suppose that a database is to be shared among several concurrent processes. Some of these
processes may want only to read the database, whereas others may want to update (that is, to
read and write) it. We distinguish between these two types of processes by referring to the
former as readers and to the latter as writers. In operating systems this situation is known as
the readers-writers problem. Problem parameters:

One set of data is shared among a number of processes.

Once a writer is ready, it performs its write. Only one writer may write at a time.

If a process is writing, no other process can read it.

If at least one reader is reading, no other process can write.

Readers only read; they may not write.
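The first readers-writers solution keeps a reader count protected by a mutex: the first reader locks writers out and the last reader lets them back in. A minimal sketch (the variable names are ours):

```python
import threading

rw = threading.Semaphore(1)       # held by the writer, or by the reader group
count_mutex = threading.Lock()    # protects read_count
read_count = 0
shared = {"value": 0}
seen = []

def reader():
    global read_count
    with count_mutex:
        read_count += 1
        if read_count == 1:
            rw.acquire()          # first reader blocks writers
    seen.append(shared["value"])  # many readers may be here at once
    with count_mutex:
        read_count -= 1
        if read_count == 0:
            rw.release()          # last reader readmits writers

def writer():
    with rw:                      # exclusive access to the shared data
        shared["value"] += 1

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])   # 1
```

Readers overlap freely with one another, but the single rw semaphore guarantees a writer never runs while any reader is inside, and vice versa.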

Banker’s algorithm

The Banker's algorithm is used to avoid deadlock by allocating resources safely to each process
in the computer system. Before granting a request, it examines the resulting state (the "S-state")
to decide whether the allocation can be allowed, which helps the operating system share
resources among all the processes safely. The algorithm is so named because it models how a
bank decides whether a loan should be sanctioned while remaining able to satisfy its other
customers. In this section, we will learn the Banker's algorithm in detail and solve problems
based on it. To understand the Banker's algorithm, first consider a real-world analogy.

Suppose the number of account holders in a particular bank is 'n', and the total money in the
bank is 'T'. When an account holder applies for a loan, the bank first subtracts the loan amount
from its total cash and grants the loan only if the remaining cash is still enough to meet the
possible demands of its other customers. This caution ensures that if another person applies for a
loan or withdraws some amount from the bank, the bank can keep operating without any
disruption to the functionality of the banking system.

The operating system works similarly. When a new process is created in a computer system, the
process must declare in advance the maximum number of instances of each resource type it may
ever request. Based on this information and the current allocation, the operating system decides
which process sequence can be executed and which processes must wait, so that no deadlock
occurs in the system. The Banker's algorithm is therefore classed as a deadlock-avoidance
algorithm.

Advantages

Following are the essential characteristics of the Banker's algorithm:

 It contains various resources that meet the requirements of each process.

 Each process should provide information to the operating system for upcoming resource
requests, the number of resources, and how long the resources will be held.

 It helps the operating system manage and control process requests for each type of
resource in the computer system.

 The algorithm has a Max attribute that indicates the maximum number of resources each
process can hold in the system.

Disadvantages

 It requires a fixed number of processes, and no additional processes can be started in the
system while executing the process.

 The algorithm does not allow a process to change its maximum needs while it is
processing its tasks.

 Each process has to know and state their maximum resource requirement in advance for
the system.

 Resource requests must be satisfiable within a finite time; in the classical formulation the
time limit for allocating the resources is fixed in advance (e.g., one year).

When working with the banker's algorithm, the system needs to know three things:

1. How many instances of each resource each process may request. This is denoted by the
[MAX] request.

2. How many instances of each resource each process is currently holding. This is denoted by
the [ALLOCATED] resource.

3. How many instances of each resource are currently available in the system. This is denoted
by the [AVAILABLE] resource.

The banker's algorithm uses the following data structures, where n is the number of processes
and m is the number of resource types in the system:

Available: a vector of length m giving the number of available instances of each resource type.
Available[j] = K means that K instances of resource type R[j] are available in the system.

Max: an n x m matrix giving each process's maximum demand. Max[i][j] = K means that
process P[i] may request at most K instances of resource type R[j].

Allocation: an n x m matrix giving the resources currently allocated to each process.
Allocation[i][j] = K means that process P[i] is currently allocated K instances of resource type
R[j].

Need: an n x m matrix giving the remaining resource need of each process. Need[i][j] = K
means that process P[i] may still require K more instances of resource type R[j] to complete its
assigned work, where
Need[i][j] = Max[i][j] - Allocation[i][j].

Finish: a vector of length n of Boolean values (true/false) indicating whether each process can
be allocated its requested resources, run to completion, and release all resources it holds.

The Banker's algorithm is the combination of the safety algorithm and the resource-request
algorithm, which together control the processes and avoid deadlock in a system:

Safety Algorithm

The safety algorithm checks whether or not the system is in a safe state, i.e., whether a safe
sequence of processes exists:

1. Let Work and Finish be vectors of length m and n respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, 2, ..., n - 1.

2. Find an index i such that both:
Need[i] <= Work
Finish[i] == false
If no such i exists, go to step 4.

3. Work = Work + Allocation[i]   // P[i] finishes and releases its resources
Finish[i] = true
Go to step 2 to check the next process.

4. If Finish[i] == true for all i, the system is in a safe state.
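The steps above translate directly into code. A minimal sketch (the function name and data layout are our own):

```python
def is_safe(available, max_need, allocation):
    """Return (safe?, safe_sequence) for the given Banker's-algorithm state."""
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)                  # step 1: Work = Available
    finish = [False] * n
    sequence = []
    while True:
        # step 2: find i with Finish[i] == false and Need[i] <= Work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # step 3: P[i] runs to completion and releases its resources
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                break
        else:
            break                           # step 4: no candidate remains
    return all(finish), sequence
```

The function returns whether every process could finish and, if so, one safe ordering of the process indices.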

Resource Request Algorithm

The resource-request algorithm checks whether the system remains safe when a process makes a
resource request. Let Request[i] be the request vector for process P[i]: Request[i][j] = K means
that process P[i] wants K instances of resource type R[j].

1. If Request[i] <= Need[i], go to step 2. Otherwise, raise an error condition: process P[i] has
exceeded its maximum claim for the resource.

2. If Request[i] <= Available, go to step 3. Otherwise, P[i] must wait, since the requested
resources are not yet available.

3. Have the system pretend to allocate the requested resources to P[i] by updating the state:

Available = Available - Request[i]

Allocation[i] = Allocation[i] + Request[i]

Need[i] = Need[i] - Request[i]

If the resulting resource-allocation state is safe, the resources are actually allocated to P[i]. If
the new state is unsafe, P[i] must wait for Request[i] and the old resource-allocation state is
restored.

Example: Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource
types A, B and C, with 10 instances of A, 5 instances of B, and 7 instances of C. The current
state is:

Process    Allocation    Max        Available
           A  B  C       A  B  C    A  B  C

P1         0  1  0       7  5  3    3  3  2

P2         2  0  0       3  2  2

P3         3  0  2       9  0  2

P4         2  1  1       2  2  2

P5         0  0  2       4  3  3
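To verify that the state in this table is safe, we can compute Need = Max - Allocation and repeatedly finish any process whose Need fits within Work (a self-contained sketch; variable names are ours). The run below finds the safe sequence P2, P4, P5, P1, P3:

```python
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
work = [3, 3, 2]                       # the Available vector
need = [[m - a for m, a in zip(mrow, arow)]
        for mrow, arow in zip(max_need, allocation)]

finish, order = [False] * 5, []
progress = True
while progress:
    progress = False
    for i in range(5):
        if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
            # P[i] can finish; it releases everything it holds
            work = [w + a for w, a in zip(work, allocation[i])]
            finish[i] = True
            order.append(f"P{i + 1}")  # processes are named P1..P5 here
            progress = True

print(all(finish), order)   # True ['P2', 'P4', 'P5', 'P1', 'P3']
```

Since every process can run to completion in this order, the state shown in the table is safe.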

Page Replacement Algorithms (PRA)

In an operating system that uses paging for memory management, a page replacement algorithm
is needed to decide which page needs to be replaced when a new page comes in.

A page fault happens when a running program accesses a memory page that is mapped into the
virtual address space but not currently loaded in physical memory. Page faults occur because
actual physical memory is much smaller than virtual memory. On a page fault, the operating
system may have to replace one of the existing pages with the newly needed page. Different
page replacement algorithms suggest different ways to decide which page to replace; the goal of
all of them is to reduce the number of page faults.

Paging in Operating Systems (OS)

Paging is a storage mechanism used to retrieve processes from secondary memory into primary
memory.

The logical memory of a process is divided into small fixed-size blocks called pages, and the
main memory is divided into blocks of the same size called frames. Each page of a process
brought into main memory is stored in one frame.

Keeping pages and frames the same size is very important: it makes the mapping between them
simple and allows complete utilization of memory.

Virtual Memory in Operating Systems (OS)

Virtual memory is a storage method that gives the user the impression that the main memory is
quite large. This is accomplished by treating a portion of secondary memory as if it were main
memory.

The approach allows users to load programs larger than the available primary memory, by
giving the impression that enough memory exists to hold them.

Instead of loading a single large process entirely, the operating system loads portions of several
processes into main memory.

By doing this, the degree of multiprogramming is enhanced, which in turn increases CPU
utilization.

Demand Paging

Demand paging is a technique that arises with virtual memory. We know that the pages of a
process are stored in secondary memory, and a page is brought into main memory only when it
is required. Since we cannot know in advance when a page will be needed, pages are loaded on
demand, with page replacement algorithms deciding which resident page to evict when memory
is full.

So, the process of bringing pages from secondary memory into main memory upon demand is
known as demand paging.

Fig 3 demand paging

Virtual memory in operating systems has two important jobs:

1. Frame Allocation

2. Page Replacement.

Frame Allocation in Virtual Memory

Virtual memory, an essential component of operating systems, is implemented using demand
paging, which requires both a page-replacement mechanism and a frame-allocation algorithm.
When numerous processes are running, frame-allocation algorithms determine how many frames
to provide to each process.

The CPU works with physical addresses when accessing frames: each page brought into
memory must be placed in a frame, and the frame's physical address identifies where the page
actually resides.

Frame Allocation Constraints

 The number of frames allocated cannot exceed the total number of frames available.

 Each process should be given a minimum number of frames.

 When fewer frames are allocated, the page-fault rate increases and process execution
becomes less efficient.

 There ought to be sufficient frames to accommodate all the pages that a single
instruction may reference.

Frame Allocation Algorithms

There are three types of Frame Allocation Algorithms in Operating Systems. They are:

1) Equal Frame Allocation Algorithms

In this frame allocation algorithm, the total number of frames is divided equally among the
processes: dividing the number of frames by the number of processes gives the number of
frames each process receives.

For example, with 36 frames and 6 processes, each process is allocated 6 frames.

It is not very logical to assign equal frames in systems whose processes differ in size: if many
frames are given to a small process, many of its allocated frames will sit unused and be wasted.

2) Proportionate Frame Allocation Algorithms

In this frame allocation algorithm, the number of frames depends on the process size: the
operating system allocates more frames to big processes and fewer frames to small processes.

The drawback of proportionate frame allocation is that in some rare cases frames are still
wasted.

Its advantage is that, instead of an equal share, each process receives frames in proportion to its
demands.

3) Priority Frame Allocation Algorithms

Priority frame allocation distributes frames according to process priority as well as demand: if a
process has high priority and needs more frames, additional frames are given to it, while
lower-priority processes are served later.

1. First In First Out (FIFO): This is the simplest page replacement algorithm. The operating
system keeps all pages in memory in a queue, with the oldest page at the front. When a page
needs to be replaced, the page at the front of the queue is selected for removal.

Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.

Fig 4 example for FIFO

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
page faults.

When 3 comes, it is already in memory —> 0 page faults.

Then 5 comes; it is not in memory, so it replaces the oldest page, 1 —> 1 page fault.

Then 6 comes; it is also not in memory, so it replaces the oldest page, 3 —> 1 page fault.

Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 page fault. Total: 6 page
faults.

Belady’s anomaly shows that it is possible to have more page faults when increasing the
number of page frames while using the First In First Out (FIFO) page replacement algorithm.
For example, with the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 we get 9 total page faults
with 3 frames, but if we increase the frames to 4, we get 10 page faults.
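Both the 6-fault result and Belady's anomaly can be reproduced with a short FIFO simulator (the function name is ours):

```python
from collections import deque

def fifo_faults(references, capacity):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # oldest resident page at the left
    faults = 0
    for page in references:
        if page not in frames:       # page fault
            faults += 1
            if len(frames) == capacity:
                frames.popleft()     # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))           # 6
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 10
```

The second line demonstrates the anomaly directly: the same reference string faults more often with four frames than with three.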

2. Optimal Page Replacement: In this algorithm, the page replaced is the one that will not be
used for the longest duration of time in the future.

Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults.

Fig 5 example for Optimal Page replacement

Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.

0 is already there —> 0 page fault.

When 3 comes it takes the place of 7, because 7 is not used again for the longest duration of
time in the future —> 1 page fault.

0 is already there —> 0 page fault.

4 takes the place of 1 —> 1 page fault.

For the rest of the reference string there are no further faults, because the pages are already in
memory. Total: 6 page faults.

Optimal page replacement is perfect, but not possible in practice, as the operating system cannot
know future requests. Its use is to set up a benchmark against which other replacement
algorithms can be analyzed.
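Since the whole reference string is known in advance here, the optimal policy can be simulated directly (a sketch; the function name is ours):

```python
def optimal_faults(references, capacity):
    """Count page faults under the optimal (Belady/MIN) policy."""
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue                 # hit: nothing to do
        faults += 1
        if len(frames) < capacity:
            frames.append(page)
            continue
        # evict the resident page whose next use is farthest in the future
        # (a page never used again counts as infinitely far away)
        future = references[i + 1:]
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future
                     else float("inf"))
        frames[frames.index(victim)] = page
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

This matches the 6-fault count of the walkthrough above and can serve as the benchmark against which the other policies are compared.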

3. Least Recently Used (LRU): In this algorithm, the page that has been least recently used is replaced.

Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults.

Fig 6 example of Least Recently Used

Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.

0 is already there —> 0 page fault.

When 3 comes it takes the place of 7, because 7 is the least recently used —> 1 page fault.

0 is already in memory —> 0 page fault.

4 takes the place of 1 —> 1 page fault.

For the rest of the reference string there are no further faults, because the pages are already in
memory. Total: 6 page faults.
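LRU can be simulated by keeping the resident pages ordered by recency, for example with an OrderedDict (a sketch; the function name is ours):

```python
from collections import OrderedDict

def lru_faults(references, capacity):
    """Count page faults under least-recently-used replacement."""
    frames = OrderedDict()               # least recently used page first
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)     # mark as most recently used
            continue
        faults += 1
        if len(frames) == capacity:
            frames.popitem(last=False)   # evict the least recently used
        frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

For this particular reference string LRU happens to match the optimal count of 6, although in general it performs worse than the optimal policy.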

4. Most Recently Used (MRU): In this algorithm, the page that was used most recently is
replaced. Belady’s anomaly can occur in this algorithm. Consider again the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.

Fig 7 example for Most Recently Used

Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.

0 is already there —> 0 page fault.

When 3 comes it takes the place of 0, because 0 is the most recently used —> 1 page fault.

When 0 comes it takes the place of 3 —> 1 page fault.

When 4 comes it takes the place of 0 —> 1 page fault.

2 is already in memory —> 0 page fault.

When 3 comes it takes the place of 2 —> 1 page fault.

When 0 comes it takes the place of 3 —> 1 page fault.

When 3 comes it takes the place of 0 —> 1 page fault.

When 2 comes it takes the place of 3 —> 1 page fault.

When 3 comes it takes the place of 2 —> 1 page fault. Total: 12 page faults.

Disk Scheduling Algorithm

A Process makes the I/O requests to the operating system to access the disk. Disk Scheduling
Algorithm manages those requests and decides the order of the disk access given to the requests.

Why Disk Scheduling Algorithm is needed?

Disk scheduling algorithms are needed because a process can make multiple I/O requests and
multiple processes run at the same time. The requests made by processes may be located on
different sectors and different tracks, which can make the total seek time large. These algorithms
help minimize seek time by ordering the requests made by the processes.

Important Terms related to Disk Scheduling Algorithms

Seek Time - The time taken by the disk arm to locate the desired track.

Rotational Latency - The time taken by the desired sector of the disk to rotate to the position
where the Read/Write head can access it.

Transfer Time - The time taken to transfer the data requested by the processes.

Disk Access Time - The sum of the Seek Time, Rotational Latency, and Transfer Time.

Disk Scheduling Algorithms

First Come First Serve (FCFS)

In this algorithm, the requests are served in the order in which they arrive: whichever request
comes first is served first. This is the simplest disk scheduling algorithm.

Eg. Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read/Write head is 60.

Fig 8 First Come First Serve (FCFS)

Seek Time = Distance moved by the disk arm = (70-60)+(140-70)+(140-50)+(125-50)+(125-30)+(30-25)+(160-25) = 480
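The FCFS total can be checked with a few lines of code: the seek distance is just the sum of absolute head movements between consecutive requests, starting from the initial head position.

```python
# FCFS disk scheduling: serve requests in arrival order; the seek distance is
# the sum of absolute head movements between consecutive positions.
def fcfs_seek_distance(requests, head):
    total = 0
    for track in requests:
        total += abs(track - head)   # move the arm to the next request
        head = track                 # the arm now rests on that track
    return total

print(fcfs_seek_distance([70, 140, 50, 125, 30, 25, 160], 60))  # -> 480
```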

Shortest Seek Time First (SSTF)

In this algorithm, the request with the shortest seek time from the current head position is served
first. In simple words, the request closest to the disk arm is served first.

Eg. Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read/Write head is 60.

Fig 9 Shortest Seek Time First (SSTF)

Seek Time = Distance moved by the disk arm = (60-50)+(50-30)+(30-25)+(70-25)+(125-70)+(140-125)+(160-140) = 170

SCAN

In this algorithm, the disk arm moves in a particular direction till the end of the disk, serving all
the requests in its path; it then reverses direction, moves till the last request in the other
direction, and serves all the requests found on the way.

Eg. Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read/Write head is 60. It is given that the disk arm should move towards the larger value.

Fig 10 SCAN

Seek Time = Distance moved by the disk arm = (170-60)+(170-25) = 255

LOOK

In this algorithm, the disk arm moves in a particular direction till the last request found in that
direction, serving all the requests in its path; it then reverses direction and serves the requests
found on the way back, again up to the last request in that direction. The only difference between
SCAN and LOOK is that LOOK does not go to the end of the disk; it only moves up to the last request.

Eg. Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read/Write head is 60. It is given that the disk arm should move towards the larger value.

Fig 11 LOOK

Seek Time = Distance moved by the disk arm = (160-60)+(160-25) = 235

C-SCAN

This algorithm is similar to the SCAN algorithm. The only difference between SCAN and
C-SCAN is that C-SCAN moves in a particular direction till the end of the disk, serving the
requests in its path; it then returns to the opposite end without serving any requests on the way
back, reverses direction again, and serves the requests found in its path. It moves
circularly.

Eg. Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read/Write head is 60. It is given that the disk arm should move towards the larger value.

Fig 12 C-SCAN

Seek Time = Distance moved by the disk arm = (170-60)+(170-0)+(50-0) = 330

C-LOOK

This algorithm is similar to the LOOK algorithm. The only difference between LOOK and
C-LOOK is that C-LOOK moves in a particular direction till the last request, serving the requests
in its path; it then returns to the last request in the opposite direction without serving any
requests on the way back, reverses direction again, and serves the requests found in its path. It
also moves circularly.

Eg. Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read/Write head is 60. It is given that the disk arm should move towards the larger value.

Fig 13 C-LOOK

Seek Time = Distance moved by the disk arm = (160-60)+(160-25)+(50-25) = 260
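The SCAN and LOOK totals can also be computed directly. This is a minimal sketch for the case worked above, where the arm initially moves toward larger track numbers and there are pending requests on both sides of the head; the disk end of 170 is an assumption taken from the worked examples:

```python
# Seek distance for SCAN vs LOOK with the arm moving toward larger tracks.
def scan_seek_distance(requests, head, disk_end):
    # Go all the way to the end of the disk, then reverse to the lowest request.
    return (disk_end - head) + (disk_end - min(requests))

def look_seek_distance(requests, head):
    # Go only as far as the highest pending request, then reverse to the lowest.
    highest, lowest = max(requests), min(requests)
    return (highest - head) + (highest - lowest)

requests = [70, 140, 50, 125, 30, 25, 160]
print(scan_seek_distance(requests, 60, 170))  # -> 255
print(look_seek_distance(requests, 60))       # -> 235
```

The 20-track saving of LOOK over SCAN here is exactly the wasted travel from the last request (160) to the physical end of the disk (170) and back.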

The Working Set Model


The working set model determines the number of unique pages in a locality and allocates that
many frames to the process, so that all the pages required for the current locality can be kept in
main memory. This prevents thrashing and reduces page faults. As the locality changes, the
number of allocated frames is changed as well.

A working set window of Δ (delta) page references is maintained. At every instant, this window
covers the past Δ references made by the process and determines the working set: the set of
unique pages among those past Δ references. The working set size of a process i is denoted
WSSi and is used to determine the number of frames to allocate to process i.

The purpose of the working set model is to decide the number of frames allocated to each
process. This is done in a way that all pages of a locality needed for the process at a time can
be ready in the main memory. This prevents excessive page faults and thrashing.

The main idea of the working set model is to maintain a window over the recent-past page
references. From this window, the model infers the current locality of the process; this is the
working set. The number of unique pages in the working set determines the number of frames
allocated to the process.
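The window computation is simple to sketch. The reference string and Δ below are made-up illustrative values, not taken from any figure in this document:

```python
# Working set at time t: the unique pages among the last delta references.
def working_set(reference_string, t, delta):
    # Examine references at times t-delta+1 .. t (clamped at the start).
    window = reference_string[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 2, 1, 4, 4, 4, 5]
ws = working_set(refs, 9, 5)     # window is [1, 4, 4, 4, 5]
print(ws, len(ws))               # WSS = 3, so allocate 3 frames
```

Here the process has shifted locality toward pages 4 and 5, so even though it touched five distinct pages overall, the working set size (and hence the frame allocation) is only 3.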

Page fault frequency

Page fault frequency refers to the rate at which page faults occur in a computer system. A page
fault occurs when a program tries to access a page of memory that is not currently in physical
memory, leading to the need to retrieve the page from secondary storage.

The page fault frequency is an important metric for evaluating the performance of a page
replacement algorithm. A high page fault frequency indicates that the algorithm is not effectively
managing memory resources, leading to frequent delays as pages are swapped in and out of
memory. On the other hand, a low page fault frequency suggests that the algorithm is efficiently
keeping frequently accessed pages in memory, minimizing the need for page swaps.

By analyzing the page fault frequency, system administrators and developers can assess the
effectiveness of different page replacement algorithms and make informed decisions about which
algorithm to use based on the specific requirements of the system. Overall, monitoring and
managing page fault frequency is crucial for optimizing system performance and ensuring
efficient memory management.
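One common way an operating system acts on this metric is page-fault-frequency control: grow a process's frame allocation when its fault rate is too high and shrink it when the rate is very low. The sketch below illustrates the idea; the bounds and step size are illustrative assumptions, not fixed OS constants:

```python
# Page-fault-frequency control (sketch): keep a process's fault rate between
# a lower and an upper bound by adjusting its frame allocation.
def adjust_frames(frames, faults, references, lower=0.02, upper=0.10, step=1):
    rate = faults / references           # observed page fault frequency
    if rate > upper:
        return frames + step             # too many faults: allocate more frames
    if rate < lower:
        return max(1, frames - step)     # very few faults: reclaim a frame
    return frames                        # fault rate acceptable: no change

print(adjust_frames(frames=4, faults=30, references=200))  # rate 0.15 -> 5
print(adjust_frames(frames=4, faults=2,  references=200))  # rate 0.01 -> 3
```

This is the same goal as the working set model, namely preventing thrashing, but driven by the observed fault rate rather than by tracking the pages in a reference window.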

CONCLUSION
Semaphores are an important tool in computer science and operating systems for managing
access to shared resources in concurrent systems. They help prevent race conditions and ensure
that only one process can access a resource at a time, thus improving the overall reliability and
efficiency of the system.

Monitors are a valuable synchronization mechanism in concurrent programming, providing a
higher level of abstraction than low-level constructs like semaphores. Monitors
encapsulate shared resources and the operations that can be performed on them, making it easier
to manage access and prevent race conditions.

The classical problems of synchronization, including the producer-consumer problem, the
dining philosophers problem, and the readers-writers problem, are challenging issues in
concurrent programming that require careful management of shared resources and coordination
between multiple threads. These problems can be effectively addressed using synchronization
mechanisms such as locks, semaphores, and monitors, which provide a higher level of
abstraction and help prevent race conditions and deadlock.

The Banker's Algorithm is a valuable tool for ensuring safe and efficient resource allocation in
a multi-threaded environment. By carefully managing available resources and avoiding potential
deadlock situations, the Banker's Algorithm helps to maintain system stability and reliability. Its
use can greatly contribute to the development of secure and robust concurrent systems.

Page replacement algorithms play a crucial role in managing memory resources efficiently in
computer systems. By selecting and replacing pages in memory, these algorithms help to
optimize system performance and minimize the impact of page faults. Overall, page replacement
algorithms are essential for maintaining smooth and efficient operation of computer systems.

Disk scheduling algorithms aim to reduce seek times and improve disk performance by
efficiently organizing the movement of the read/write heads. Each algorithm has its own
approach and benefits, depending on factors such as the workload, disk access patterns, and
system requirements.

The working-set model allows the operating system to track the pages that a process is currently
using and ensure that these pages remain in physical memory. The working set consists of the
pages that exhibit temporal locality, meaning they are frequently accessed by the process.

A high page fault frequency indicates that a significant amount of time is spent on retrieving
data from secondary storage, which can negatively impact performance. It may result in
increased response times and decreased overall system efficiency.

REFERENCES

https://www.tutorialspoint.com/semaphores-in-operating-system

https://stackoverflow.com/questions/7335950/semaphore-vs-monitors-whats-the-difference

https://www.studytonight.com/operating-system/classical-synchronization-problems

https://www.geeksforgeeks.org/bankers-algorithm-in-operating-system-2/

http://www.faadooengineers.com/online-study/post/ece/operating-systems/1165/page-fault-frequency

https://www.geeksforgeeks.org/page-replacement-algorithms-in-operating-systems/
