Peterson’s Solution
Peterson’s solution is a software solution to the critical-section problem. It works
for exactly 2 processes.
#define N 2
#define TRUE 1
#define FALSE 0

int turn;            /* whose turn is it? */
int interested[N];   /* all values initially FALSE */

void enter_region(int process)      /* process is 0 or 1 */
{
    int other = 1 - process;        /* number of the other process */
    interested[process] = TRUE;     /* show that this process is interested */
    turn = process;                 /* let the other process go first */
    while (turn == process && interested[other] == TRUE)
        ;                           /* busy wait */
}

void leave_region(int process)
{
    interested[process] = FALSE;    /* indicate departure from critical section */
}
Disadvantage of Peterson’s Solution:
Although it is the best user-mode software scheme for the critical section, it works
for only 2 processes.
It is also a busy-waiting solution, so CPU time is wasted while a process spins in
the while loop. This is the “SPIN LOCK” problem, and it can arise in any
busy-waiting solution.
Mutual Exclusion
Semaphore
Semaphores are integer variables that are used to solve the critical section
problem by using two atomic operations, wait and signal that are used for
process synchronization.
Example of a Deadlock: (figure) two processes P1 and P2 each hold one resource
(e.g. P1 holds R1) while requesting the resource held by the other, so neither
can proceed.
Ostrich Algorithm
The ostrich algorithm means that the deadlock is simply ignored and it is
assumed that it will never occur: stick your head in the sand and pretend that
there is no problem at all.
This strategy suggests ignoring deadlocks because they occur rarely, while
system crashes due to hardware failures, compiler errors, and operating system
bugs occur frequently; it is therefore not worth paying a large penalty in
performance or convenience to eliminate deadlocks.
1)Mutual Exclusion –
At least one resource must be held in a non-sharable mode; only one process
can use the resource at any given time.
2)Hold and Wait –
A process must hold at least one resource while also waiting for at least one
resource that another process is currently holding.
3)No preemption –
Once a process holds a resource (i.e. after its request is granted), that resource
cannot be taken away from that process until the process voluntarily releases
it.
4)Circular Wait –
There must be a set of processes P0, P1, P2, …, PN such that every P[i] is
waiting for P[(i + 1) mod (N + 1)]. (It is important to note that this condition
implies the hold-and-wait condition, but dealing with the four conditions is
easier if they are considered separately.)
With detection and recovery, deadlocks are allowed to happen; when a
deadlock is detected, the system aborts a process or preempts some resources.
To avoid deadlocks, the system requires more information about all processes.
In particular, it must know which resources a process will, or may, request in
the future.
Deadlock detection is relatively simple, but deadlock recovery necessitates
either aborting processes or preempting resources, neither of which is an
appealing option.
If deadlocks are neither avoided nor detected, the system gradually slows down
as more and more processes become stuck waiting for resources held by the
deadlocked processes and by other waiting processes.
Starvation and deadlock are both issues that can occur in concurrent
computing environments, but they differ in their nature, causes, and
consequences:
Starvation: a process waits indefinitely because the resources it needs are
repeatedly granted to other (often higher-priority) processes; it is never
permanently blocked, only perpetually postponed, and techniques such as aging
can resolve it.
Deadlock: a set of processes is permanently blocked because each holds a
resource while waiting for a resource held by another process in the set; none
of them can proceed without outside intervention.
Deadlocks can be handled in three ways:
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection
Deadlock Prevention
The strategy of deadlock prevention is to design the system in such a way that
the possibility of deadlock is excluded. Indirect methods prevent the
occurrence of one of three necessary conditions of deadlock, i.e., mutual
exclusion, no preemption, or hold and wait. The direct method prevents the
occurrence of circular wait.
Deadlock Avoidance
This approach allows the first three necessary conditions of deadlock but
makes judicious choices to ensure that the deadlock point is never reached; it
allows more concurrency than prevention. A decision is made dynamically
whether the current resource-allocation request will, if granted, potentially
lead to deadlock. It requires knowledge of future process requests.
Deadlock Detection:
Deadlock detection employs an algorithm that tracks circular waiting and kills
one or more processes so that the deadlock is removed. The system state is
examined periodically to determine whether a set of processes is deadlocked. A
deadlock is resolved by aborting and restarting a process, relinquishing all the
resources that the process held.
This technique does not limit resource access or restrict process actions.
2. Banker’s Algorithm
Working Principle
1. Define the data structures:
Define the total number of available resources for each resource type (the
available vector).
Create a matrix called the "allocation matrix" to represent the current resource
allocation for each process.
Create a matrix called the "need matrix" to represent the remaining resource
needs for each process.
2. Handle a request:
If the requested resources are not available, the process must wait.
Otherwise, pretend to grant the request and check whether the resulting state
is safe.
If the state is safe, grant the request by updating the allocation matrix and the
need matrix.
If the state is not safe, do not grant the request and let the process wait.
3. When a process has finished its execution, release its allocated resources.
Segmentation
A process is divided into segments: the chunks that a program is divided into,
which are not necessarily all of the same size.
Segmentation gives the user’s view of the process, which paging does not.
Here the user’s view is mapped onto physical memory.
Each process is divided into a number of segments, not all of which need be
resident at any one point in time.
Simple segmentation
Each process is divided into a number of segments, all of which are loaded into
memory at run time, though not necessarily contiguously
Importance of Segmentation
No internal fragmentation
Less overhead
The segment table is of lesser size as compared to the page table in paging.
Drawback of Segmentation
External fragmentation: as variable-sized segments are loaded and removed,
free memory is broken into small holes that may be too small to hold a new
segment.
File operations
A file is an abstract data type. The OS provides system calls to create, write,
read, reposition, delete, and truncate files.
Creating a file – First space in the file system must be found for the file.
Second, an entry for the new file must be made in the directory.
Writing a file – To write a file, specify both the name of the file and the
information to be written to the file. The system must keep a write pointer to
the location in the file where the next write is to take place.
Reading a file – To read from a file, directory is searched for the associated
entry and the system needs to keep a read pointer to the location in the file
where the next read is to take place. Because a process is either reading from
or writing to a file, the current operation location can be kept as a per process
current file position pointer.
Deleting a file – To delete a file, search the directory for the named file. When
found, release all file space and erase the directory entry.
Truncating a file – User may want to erase the contents of a file but keep its
attributes. This function allows all attributes to remain unchanged except for
file length.
Optimal Page Replacement
In this algorithm, the page replaced is the one that will not be used for the
longest duration of time in the future. Example: consider the page references
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page
faults.
Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots
—> 4 page faults. 0 is already there —> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is not used again in the future
—> 1 page fault.
When 4 comes, it takes the place of 1 because 1 is not used again —> 1 page
fault.
The remaining references (2, 3, 0, 3, 2) are already in memory —> 0 page
faults.
Total: 6 page faults.
Process Scheduling
Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy. Process scheduling is an essential
part of multiprogramming operating systems. Such operating systems allow
more than one process to be loaded into the executable memory at a time, and
the loaded processes share the CPU using time multiplexing.
Process Scheduling Goals
Fairness: each process gets a fair share of the CPU.
Turnaround Time: minimize the time between submission and termination.
Memory Hierarchy
Access Time: the time interval between the read/write request and the
availability of the data. As we move from top to bottom in the hierarchy, the
access time increases.
Cost per bit: as we move from bottom to top in the hierarchy, the cost per bit
increases, i.e., internal memory is costlier than external memory.
Define swapping. Differentiate between fixed and variable sized partitioning in
multiprogramming.
Swapping refers to temporarily moving a process out of main memory (RAM)
to a backing store (disk) and later bringing it back into memory for continued
execution. Swapping is used when the available RAM is insufficient to
accommodate all the running processes and data the computer needs; it frees
up space for other processes and can improve system performance.
In fixed (static) partitioning, memory is divided into a fixed number of
partitions when the system starts, and each process occupies one partition;
any unused space within a partition is wasted (internal fragmentation). In
variable (dynamic) partitioning, partitions are created at run time to fit each
process exactly, which eliminates internal fragmentation but causes external
fragmentation as processes come and go.
Choosing the right IPC method depends on factors like communication nature,
process relationships, and specific requirements.
OS as resource manager
Resources are shared in two ways: "in time" and "in space". When a resource is
time-shared, first one task gets the resource for some time, then another, and
so on; the CPU is shared this way.
The other kind of sharing is "space sharing", in which users share portions of
the resource at the same time; main memory and disks are shared this way.
Because the OS allocates time and space on the various resources according
to each program's requirements, it is called a resource manager.
OS as an extended machine
The OS hides the messy details of the hardware and presents simpler
abstractions; for example, a single physical memory is made to look like many
separate memories, each potentially larger than the real memory.