Big-oh notation: Big-oh is the formal method of expressing the upper bound of an algorithm's
running time; it measures the longest amount of time the algorithm can take. The function
f(n) = O(g(n)) [read as "f of n is big-oh of g of n"] if and only if there exist positive
constants c and n0 such that
f(n) ⩽ c·g(n) for all n > n0.
Hence, function g(n) is an upper bound for function f(n), as g(n) grows at least as fast as f(n).
Omega (Ω) Notation: The function f(n) = Ω(g(n)) [read as "f of n is omega of g of n"] if and
only if there exist positive constants c and n0 such that
f(n) ⩾ c·g(n) for all n > n0.
Hence, g(n) is a lower bound for f(n).
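As a worked example of both definitions (the function and constants here are invented for illustration), f(n) = 3n² + 2n is O(n²) with c = 5 and Ω(n²) with c = 3:

```python
# Worked check of the Big-oh and Omega definitions for f(n) = 3n^2 + 2n.
def f(n):
    return 3 * n * n + 2 * n

def g(n):
    return n * n

# Big-oh: f(n) <= c*g(n) for all n > n0, with c = 5 and n0 = 1.
assert all(f(n) <= 5 * g(n) for n in range(2, 1000))

# Omega: f(n) >= c*g(n) for all n > n0, with c = 3 and n0 = 0.
assert all(f(n) >= 3 * g(n) for n in range(1, 1000))
print("f(n) = 3n^2 + 2n is O(n^2) and Omega(n^2)")
```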
In algorithmic methods, the design is to take a problem on a huge input, break the input into
smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise
solutions into a global solution.
This mechanism of solving the problem is called the Divide & Conquer Strategy.
A Divide and Conquer algorithm solves a problem using the following three steps.
1. Divide: Break the given problem into subproblems of the same type.
2. Conquer: Recursively solve these subproblems.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole
problem.
Following is a simple Divide and Conquer method to multiply two square matrices.
1) Divide matrices A and B into 4 sub-matrices of size N/2 x N/2 each, with A split into
quadrants a, b, c, d and B into e, f, g, h (top-left, top-right, bottom-left, bottom-right).
2) Calculate the following values recursively: ae + bg, af + bh, ce + dg and cf + dh.
In the above method, we do 8 multiplications of matrices of size N/2 x N/2 and 4 additions.
Addition of two matrices takes O(N²) time. So the time complexity can be written as the recurrence
T(N) = 8T(N/2) + O(N²),
which solves to O(N³).
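The divide-and-conquer scheme above can be sketched as follows, assuming N is a power of two (the helper names and list-of-lists representation are illustrative, not from the text):

```python
# Naive divide-and-conquer multiplication of two N x N matrices (N a power of 2).
def split(M):
    """Split M into quadrants (top-left, top-right, bottom-left, bottom-right)."""
    n = len(M) // 2
    return ([row[:n] for row in M[:n]], [row[n:] for row in M[:n]],
            [row[:n] for row in M[n:]], [row[n:] for row in M[n:]])

def add(X, Y):
    """Entrywise matrix addition, O(N^2)."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def multiply(A, B):
    if len(A) == 1:                          # base case: 1 x 1 matrices
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = split(A)
    e, f, g, h = split(B)
    # 8 recursive multiplications and 4 additions: T(N) = 8T(N/2) + O(N^2)
    top = [add(multiply(a, e), multiply(b, g)), add(multiply(a, f), multiply(b, h))]
    bot = [add(multiply(c, e), multiply(d, g)), add(multiply(c, f), multiply(d, h))]
    # stitch the four result quadrants back into one matrix
    return ([l + r for l, r in zip(top[0], top[1])] +
            [l + r for l, r in zip(bot[0], bot[1])])

print(multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```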
Here, we will sort an array using the divide and conquer approach (i.e., merge sort).
1. Divide the array into smaller subparts.
2. Recursively sort each subpart.
3. Now, combine the individual elements in a sorted manner.
Here, the conquer and combine steps go side by side.
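The merge sort described above can be sketched as follows (function names are illustrative):

```python
# Merge sort: divide the array, conquer (sort) each half, combine by merging.
def merge_sort(arr):
    if len(arr) <= 1:              # a 0- or 1-element array is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # divide + conquer the left half
    right = merge_sort(arr[mid:])  # divide + conquer the right half
    return merge(left, right)      # combine

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]  # append whichever side has leftovers

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```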
Kruskal's algorithm starts to build the Minimum Spanning Tree from the minimum-weight edge
in the graph.
It traverses each node only once.
Kruskal's algorithm's time complexity is O(E log V), where E is the number of edges and V the
number of vertices.
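A minimal sketch of Kruskal's algorithm using union-find; the graph below is an invented example:

```python
# Kruskal's algorithm: sort edges by weight, keep an edge whenever it joins
# two different components (detected with a union-find structure).
def kruskal(num_vertices, edges):
    parent = list(range(num_vertices))

    def find(x):                          # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):         # O(E log E) = O(E log V) for the sort
        ru, rv = find(u), find(v)
        if ru != rv:                      # edge connects two components: keep it
            parent[ru] = rv               # union the components
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (3, 2, 3)]  # (weight, u, v)
mst, total = kruskal(4, edges)
print(total)  # 6
```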
FCFS (First Come First Serve) scheduling is managed with a FIFO queue.
As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of
the queue and, when the CPU becomes free, it is assigned to the process at the head
of the queue.
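The FCFS behaviour above can be sketched as a small simulation (process names, arrival times, and burst times are invented for illustration):

```python
# FCFS: processes run in arrival order; waiting time = start time - arrival time.
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time), sorted by arrival."""
    clock, schedule = 0, []
    for name, arrival, burst in processes:
        clock = max(clock, arrival)               # CPU idles until the process arrives
        schedule.append((name, clock - arrival))  # (process, waiting time)
        clock += burst                            # non-preemptive: run to completion
    return schedule

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]
print(fcfs(procs))  # [('P1', 0), ('P2', 4), ('P3', 6)]
```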
Priority scheduling algorithm executes the processes depending upon their priority. Each
process is allocated a priority and the process with the highest priority is executed first. Priorities
can be defined internally as well as externally.
Shortest Job First (SJF) Scheduling Algorithm is based upon the burst time of the process. The
processes are put into the ready queue based on their burst times. In this algorithm, the process
with the least burst time is processed first. Only the burst times of processes that have already
arrived are compared. SJF is non-preemptive in nature; its preemptive version is called the
Shortest Remaining Time First (SRTF) algorithm.
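A minimal sketch of non-preemptive SJF; the processes below are invented, and only jobs that have already arrived compete for the CPU:

```python
# Non-preemptive SJF: among the processes that have arrived, pick the one
# with the least burst time and run it to completion.
def sjf(processes):
    """processes: list of (name, arrival_time, burst_time). Returns run order."""
    pending = sorted(processes, key=lambda p: p[1])    # sort by arrival time
    clock, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]  # arrived by now
        if not ready:                  # CPU idle: jump ahead to the next arrival
            clock = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])  # least burst time first
        pending.remove(job)
        clock += job[2]                # run to completion (non-preemptive)
        order.append(job[0])
    return order

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
print(sjf(procs))  # ['P1', 'P3', 'P2', 'P4']
```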
Preemptive Scheduling is a CPU scheduling technique that works by dividing CPU time into
slots and giving a slot to a process. The given time slot may or may not be enough to complete
the whole process. When the burst time of the process is greater than the CPU cycle, it is placed
back into the ready queue and will execute at its next chance. This scheduling is used when a
process switches to the ready state.
Algorithms that are based on preemptive scheduling are Round Robin (RR), preemptive
priority, and SRTF (Shortest Remaining Time First).
Non-preemptive Scheduling is a CPU scheduling technique in which the process takes the resource
(CPU time) and holds it till the process terminates or is pushed to the waiting state. No
process is interrupted until it is completed; after that, the processor switches to another process.
Algorithms that are based on non-preemptive scheduling are non-preemptive priority and
Shortest Job First.
Round Robin (RR) scheduling algorithm is mainly designed for time-sharing systems. This
algorithm is similar to FCFS scheduling, but in Round Robin (RR) scheduling, preemption is
added, which enables the system to switch between processes.
Once a process has executed for the given time period, it is preempted and
another process executes for its time period.
This algorithm is simple and easy to implement, and most importantly, it is
starvation-free, as all processes get a fair share of the CPU.
It is important to note here that the length of time quantum is generally from 10 to 100
milliseconds in length.
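The round-robin behaviour above can be sketched as follows (burst times and the quantum are invented; arrival times are ignored for simplicity, with all processes assumed to be in the ready queue at the start):

```python
from collections import deque

# Round Robin with a fixed time quantum: each process runs for at most one
# quantum, then goes to the back of the queue if it still needs CPU time.
def round_robin(burst_times, quantum):
    """burst_times: dict of process name -> burst time. Returns completion order."""
    queue = deque(burst_times.items())
    remaining = dict(burst_times)
    finished = []
    while queue:
        name, _ = queue.popleft()
        run = min(quantum, remaining[name])
        remaining[name] -= run                     # run for one quantum (or less)
        if remaining[name] == 0:
            finished.append(name)
        else:
            queue.append((name, remaining[name]))  # preempt: back of the queue
    return finished

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))  # ['P3', 'P2', 'P1']
```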
A deadlock can arise only if the following four conditions hold simultaneously:
Mutual Exclusion
Hold and Wait
No Preemption
Circular Wait
Deadlock Avoidance :
In deadlock avoidance, we have to anticipate deadlock before it really occurs and ensure that the
system does not go into an unsafe state. It is possible to avoid deadlock if resources are allocated
carefully. For deadlock avoidance, we use the Banker's and Safety algorithms for resource
allocation. In deadlock avoidance, the maximum number of resources of each type that will be
needed is stated at the beginning of the process.
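A minimal sketch of the safety check at the heart of the Banker's algorithm; the resource state below is a textbook-style invented example, not from this text:

```python
# Safety algorithm from the Banker's scheme: the state is safe if some
# ordering lets every process finish with the currently available resources.
def is_safe(available, max_need, allocation):
    n = len(max_need)
    # Need = Max - Allocation, per process and per resource type.
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    safe_sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Process i can run to completion, then releases what it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                safe_sequence.append(i)
                progress = True
    return all(finished), safe_sequence

# Invented example state: 3 resource types, 5 processes.
available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # (True, [1, 3, 4, 0, 2])
```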
Q.6 Explain the Need of Memory management and its requirements, paging, segmentation,
concept of fragmentation.
Answer:-
Reasons for using memory management:
It allows you to check how much memory needs to be allocated to processes and decides
which process should get memory at what time.
It tracks whenever memory gets freed or unallocated and updates the status accordingly.
It allocates space to application routines.
It makes sure that these applications do not interfere with each other.
It helps protect different processes from each other.
It places programs in memory so that memory is utilized to its full extent.
Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage
into main memory in the form of pages. In the paging method, main memory is divided
into small fixed-size blocks of physical memory, which are called frames. The size of a frame
should be kept the same as that of a page to have maximum utilization of main memory and
to avoid external fragmentation. Paging is used for faster access to data, and it is a logical
concept.
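Address translation under paging can be sketched as follows (the page size, page-table contents, and addresses are invented for illustration):

```python
# Logical-to-physical address translation under paging.
PAGE_SIZE = 4096                      # 4 KB pages and frames (invented size)

# Per-process page table: page number -> frame number (invented mapping).
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # which page the address falls in
    offset = logical_address % PAGE_SIZE   # position within that page
    frame = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset      # physical address

print(translate(5000))  # page 1, offset 904 -> frame 2 -> 2*4096 + 904 = 9096
```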
The segmentation method works almost like paging. The only difference between the two is
that segments are of variable length, whereas, in the paging method, pages are always of fixed
size.
A program segment includes the program's main function, data structures, utility functions, etc.
The OS maintains a segment map table for every process. It also includes a list of free
memory blocks along with their sizes, segment numbers, and memory locations in main
memory or virtual memory.
Fragmentation
Processes are stored in and removed from memory, which creates free memory spaces that are
too small to be used by other processes.
After some time, processes cannot be allocated to these memory blocks because of their small
size, and the blocks remain permanently unused; this is called fragmentation. This problem
happens in a dynamic memory allocation system when the free blocks are quite small, so they
are not able to fulfill any request.
1. External fragmentation
2. Internal fragmentation
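Internal fragmentation can be illustrated with a short calculation (the page size and process size are invented): a process rarely fills its last page exactly, and the unused tail of that page is wasted.

```python
# Internal fragmentation under fixed-size pages: the unused tail of the
# last page allocated to a process is wasted.
PAGE_SIZE = 4096  # invented page size (4 KB)

def internal_fragmentation(process_size):
    remainder = process_size % PAGE_SIZE
    return 0 if remainder == 0 else PAGE_SIZE - remainder

# A 10000-byte process needs 3 pages (12288 bytes); 2288 bytes are wasted.
print(internal_fragmentation(10000))  # 2288
```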