
UNIT - I

Definition:
The operating system is a set of special programs that run on a computer system and allow it to work properly.
It acts as an interface between the user of a computer and the computer hardware.

Operating System Goals:-


It controls the allocation and use of the computing system's resources among the various users and tasks.
It provides an interface between the computer hardware and the programmer that simplifies and makes feasible the creation, coding and debugging of application programs.

Operating System Views:-


An operating system can be explored from two viewpoints.

1. user view
2. system view

User view:
The user view of the computer varies by the interface being used. Most computer users sit in front of a PC consisting of a monitor, keyboard, mouse and system unit.

System view:
From the computer's point of view, the operating system is the program that is most intimate with the hardware. A computer system has many resources, hardware and software, that may be used to solve a problem.

Types of systems

1. Mainframe system
2. Desktop system
3. Multiprocessor system
4. Distributed system
5. Clustered system
6. Real-time system
7. Handheld system

Mainframe system
This type of computer system is used for scientific and commercial applications.
An operating system may process its workload serially or concurrently.
There are three types of mainframe systems. They are:
1. Batch system
2. Multiprogramming system
3. Time sharing system
Batch system:

Some computer systems did only one thing at a time. They had a list of instructions to carry out, one after another. This is called a serial system.
Memory management in a batch system is very simple: memory is usually divided into two areas.
1. Operating system
2. User program area

Spooling:
Spooling is an acronym for simultaneous peripheral operations online. Spooling refers to putting jobs in a buffer, a special area in memory or on a disk, where a device can access them when it is ready.

Figure: spooling - the card reader writes jobs to the disk, the CPU processes them, and output is spooled from the disk to the printer.
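As a rough illustration of the idea (a sketch only; the job names and timings below are made up, not from the notes), the following Python fragment buffers print jobs in a queue so the CPU can continue while a separate printer thread drains the buffer:

import queue
import threading
import time

spool = queue.Queue()          # stands in for the spool area on disk

def printer():
    while True:
        job = spool.get()
        if job is None:        # sentinel: nothing left to print
            break
        time.sleep(0.1)        # stands in for the slow printing device
        print("printed", job)

t = threading.Thread(target=printer)
t.start()
for job in ["job1", "job2", "job3"]:
    spool.put(job)             # the CPU hands the job off and continues at once
spool.put(None)
t.join()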

Advantages of spooling:

1. The spooling operation uses a disk as a very large buffer.

2. Spooling is capable of overlapping the I/O operations of one job with the processor operations of another job.

Advantages of batch system:

1. Moves much of the work of the operator to the computer.

2. Increased performance, since it was possible for a job to start as soon as the previous job finished.
Disadvantages of batch system:

1. Turnaround time can be large from the user's standpoint.

2. It is difficult to debug programs.

Multiprogramming operating system:

When two or more programs are in memory at the same time, sharing the processor, this is referred to as a multiprogramming operating system. Multiprogramming assumes a single processor that is being shared.

Figure: memory layout for a multiprogramming system - the operating system occupies low memory (address 000) and user jobs Job1 to Job5 fill the rest of memory up to 512K.

Time sharing system:

A time sharing system supports interactive users. Time sharing is also called multitasking. It is a logical extension of multiprogramming.
A time sharing system uses CPU scheduling and multiprogramming to provide an economical interactive system for two or more users.
Time sharing uses medium term scheduling, so it can run several programs in the background. Every time sharing system is a multiprogramming system, but a multiprogramming operating system is not necessarily a time sharing system.

Desktop system:

Multiprogramming schemes to increase CPU use were limited by the physical capacity of main memory, which was a limited resource and very expensive.
These systems include PCs running Microsoft Windows and the Apple Macintosh. The Apple Macintosh OS supports more advanced hardware features, i.e. virtual memory and multitasking. With virtual memory, the entire program does not need to reside in memory before execution can begin.
Linux, a Unix-like OS available for the PC, has also become popular. The microcomputer was developed for single users in the late 1970s.
Multiprocessor systems:

Figure: a multiprocessor system - several CPUs in close communication executing processes p1, p2 and p3.

Multiprocessor systems have more than one processor in close communication. They share the computer bus, system clock and input/output devices, and sometimes memory.
Multiprocessor systems are of two types:
1. Symmetric multiprocessing
2. Asymmetric multiprocessing
Symmetric multiprocessing:
In symmetric multiprocessing, each processor runs an identical copy of the operating system, and the processors communicate with one another as needed.

Figure: symmetric multiprocessing - processors P1, P2 and P3, each on its own CPU, connected through a communication network.

Asymmetric multiprocessing system:


In asymmetric multiprocessing, each processor is assigned a specific task.
It uses master slave relationship. A master processor controls the system. The master
processor schedules and allocates work to the slave processors.

Distributed system
Distributed operating systems depend on networking for their operation. A distributed OS runs on and controls the resources of multiple machines. It provides resource sharing across the boundaries of a single computer system.
Advantages of distributed os:
1. Resource sharing
2. Higher reliability
3. Better price performance ratio
4. Shorter response times and higher throughput.
5. Incremental growth

Figure: a distributed system - machines connected by a LAN.
Clustered systems
A clustered system is a group of computer systems connected by a high speed communication link. Each computer system has its own memory and peripheral devices.
Clustering is usually performed to provide high availability. Clustered systems are integrated with hardware clustering and software clustering.
A hardware cluster means sharing of high performance disks. A software cluster takes the form of unified control of the computer systems in the cluster.
Clustered systems can be categorized into two groups: asymmetric clustering and symmetric clustering.
Real time systems:
Real time systems were originally used to control autonomous systems such as satellites, robots and hydroelectric dams. A real time operating system is one that must react to inputs and respond to them quickly. Real time systems are divided into two groups: hard real time systems and soft real time systems.
Hard real time systems guarantee that critical tasks complete on time. In a soft real time system, a critical task gets priority over other tasks and retains that priority until it completes.

Handheld systems:
The Personal Digital Assistant (PDA) is one type of handheld system.
The memory of a handheld system is in the range of 512 KB to 8 MB, so the operating system and applications must manage memory efficiently.
Wireless technology is also used in handheld devices. The Bluetooth protocol is used for remote access to email and web browsing.

System structure:
A modern OS is large and complex. An operating system consists of different types of components. These are interconnected and melded into a kernel. For designing the system, different types of system structures are used.
These structures are:
1. Simple structure
2. Layered approach
3. Microkernel.

Simple structure:
Simple structure OSes are small, simple and limited systems, with no well-defined structure. MS-DOS is an example of a simple structure OS.

Figure: MS-DOS layer structure - application program, resident system program, MS-DOS device drivers, and ROM BIOS device drivers.


The traditional Unix kernel:
The traditional Unix kernel is not designed to be extensible and has few facilities for
code reuse.
The kernel is not very versatile, supporting a single type of file system, process scheduling policy and executable file format.

Layered Approach:
Layered OSes were developed in which functions are organized hierarchically and interaction takes place only between adjacent layers. This provides good modularity. Each layer of the OS forms a module with a clearly defined functionality and interface with the rest of the OS. This simplifies debugging and system verification.
The layered approach requires careful definition of the layers, because a layer can use only the layers below it. The problem with layered implementations is that they tend to be less efficient than other types.

Figure: layered structure - the user runs in user mode; the file system, interprocess communication, I/O and device management, virtual memory and process management layers run in kernel mode above the hardware.
Microkernels:
A microkernel is a small OS core that provides the foundation for modular extensions.
The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space. In theory, the microkernel approach was supposed to provide a high degree of flexibility and modularity.
Benefits of microkernels:
1. A microkernel allows the addition of new services.
2. The microkernel design imposes a uniform interface on requests made by a process.
3. It is flexible, because existing features can be subtracted to produce a smaller and more efficient architecture.
4. The modular design helps to enhance reliability.
Figure: microkernel structure - the client process and servers (device drivers, file server, process server, virtual memory) run on top of the microkernel, which runs directly on the hardware.
System components:
Modern operating systems are built from a common set of system components. The system components are:
1. Process management
2. Main Memory management
3. File management
4. Secondary storage management
5. I/O system management
6. Networking
7. Protection system
8. Command interpreter system

Process management:
 A process refers to a program in execution. The process abstraction is a fundamental operating system mechanism for the management of concurrent program execution. The operating system responds by creating a process.
 A process needs certain resources, such as CPU time, memory, files and I/O devices. These resources are either given to the process when it is created or allocated to it while it is running.
 When the process terminates, the OS reclaims any reusable resources.
 The operating system is responsible for the following activities of process management:
 Creating and destroying user and system processes.
 Allocating hardware resources among the processes.
 Providing mechanisms for process communication.
 Providing mechanisms for deadlock handling.

Main memory management:


 The memory management modules of an operating system are concerned with the management of primary memory. Memory management is concerned with the following functions:
1. Keeping track of the status of each location of main memory, i.e. whether each memory location is free or allocated.
2. Determining the allocation policy for memory.
3. Allocation technique, i.e. the specific location must be selected and the allocation information updated.
4. Deallocation technique and policy. After deallocation, status information must be updated.

File management:
 Logically related data items on secondary storage are usually organized into named collections called files. In short, a file is a logical collection of information. The computer uses physical media for storing files; the choice of organization depends on the particular situation.
 The operating system is responsible for the following in connection with file management:
1. Creating and deleting files.
2. Mapping files onto secondary storage.
3. Creating and deleting directories.
4. Backing up files on stable storage media.
5. Supporting primitives for manipulating files and directories.
6. Transmission of file elements between main and secondary storage.

Secondary storage management:


 A storage device is a mechanism by which the computer may store information in such a way that this information may be retrieved at a later time. A secondary storage device is used for storing all the data and programs.
 Main memory loses its data when power is lost; for this reason a secondary storage device is used. Therefore the proper management of disk storage is of central importance to a computer system.
 The operating system is responsible for the following activities in connection with disk management:
1. Free space management
2. Storage allocation
3. Disk scheduling
 The entire speed and performance of a computer may hinge on the speed of the disk subsystem.

I/O system management:


The module that keeps track of the status of devices is called the I/O traffic controller. Each I/O device has a device handler that resides in a separate process associated with that device. The I/O subsystem consists of:
1. A memory management component that includes buffering, caching and spooling.
2. A general device driver interface.
3. Drivers for specific hardware devices.
I/O system management is discussed in more detail later.
Networking:
Networking enables computer users to share resources and speed up computations. The processes communicate with one another through various communication lines.
The processors in the system are connected through a communication network, which can be configured in a number of different ways.
The following parameters are considered while designing the network:
1. Topology of the network
2. Type of network
3. Physical media
4. Communication protocols
5. Routing algorithms

Protection system:
A modern computer system supports many users and allows the concurrent execution of multiple processes. Organizations rely on computers to store information. Protection is any mechanism for controlling the access of programs, processes or users to the resources defined by a computer system.
Protection mechanisms are implemented in the operating system to authenticate subjects and to authorize their access to objects.
Protection can improve reliability by detecting latent errors at the interfaces between component subsystems. Protection domains are extensions of the hardware supervisor mode ability.

Command interpreter system:


The command interpreter is the interface between the user and the operating system. It is a system program of the operating system. The command interpreter is a special program in the Unix and MS-DOS operating systems.
When a user logs in, or when a job is initiated, the command interpreter is invoked; in some operating systems it is included in the kernel. It reads a control statement, analyses it and carries out the required action.

Operating system services:


An operating system provides services to programs and to the users of those programs. The services provided by one operating system differ from those of another. They make the programming task easier.
The common services provided by operating systems are:
1. Program execution
2. I/O operation
3. File system manipulation
4. Communications
5. Error detection

Program execution: the operating system loads a program into memory and executes the program.
I/O operation: a program may require an I/O device while running, so the operating system must provide the required I/O.
File system manipulation: a program needs files to read or write. The operating system gives the program permission to operate on files.
Communications: data transfer between two processes on the same computer or on different computers connected through a network. This may be done by two methods: 1. shared memory and 2. message passing.
Error detection: errors may occur in the CPU, I/O devices and memory hardware. The operating system constantly needs to be aware of possible errors.
Operating systems with multiple users provide the following additional services:
Resource allocation: if more than one user or job is running at the same time, then resources must be allocated to each of them.
Accounting: a log of each user must be kept. It is also necessary to keep a record of which user uses how much and what kinds of computer resources. This log is used for accounting purposes. The accounting data may be used for statistics, for billing, or to improve system efficiency.
Protection: protection involves ensuring that all access to system resources is controlled. Security starts with each user having to authenticate to the system, usually by means of a password.

Virtual machines:

A virtual machine is a design concept in which the programming model is implemented by the operating system rather than by the underlying physical hardware. The VM operating system for IBM systems is the best example of the virtual machine concept.
Each user directs the virtual machine to perform different commands. These commands are then executed on the physical machine in a multiprogramming environment.
Figure: implementation of the virtual machine concept - each virtual machine (VM1 to VM4) runs its own kernel and processes on top of the virtual machine implementation layer, which runs on the physical hardware.
Implementation of the virtual machine concept is difficult. The virtual machine software runs in monitor mode, but the virtual machine itself executes only in user mode.
A virtual machine is more reliable than other systems. It allows system development to be done without disrupting normal system operation.
Benefits of virtual machines:
It provides good security.
It supports research and development of operating systems.
It solves system compatibility problems.
It reduces system development time.
System design and implementation:
For system design and implementation, we consider the design goals, mechanisms and policies. Design goals are divided into two groups:
User goals
System goals
From the user's point of view, the system should be:
Easy to learn.
Easy to use.
Reliable and safe.
Fast in execution.
The goals change according to the users. The system should be flexible and reliable. It must be free from errors. There is no unique solution for designing an operating system.
Mechanisms and policies are totally different from each other. Mechanisms determine how to do something; policies determine what will be done.
For example, a timer is a mechanism for protecting the CPU, and setting the timer value for a particular user is a policy.
Process:
A process is a sequential program in execution. A process defines the fundamental unit of computation for the computer. The components of a process are:
Object program.
Data.
Resources.
Status of the process execution.
Processes and programs:
A process is a dynamic entity: it is a sequence of instruction executions. A program is a static entity: it contains the instructions.
Process state:
As a process executes, it changes state. The process state is defined as the current activity of the process. There are five process states, and each process is in one of them. The states are listed below:
New
Ready
Running
Waiting
Terminated (exit)
New:
A process that has just been created.
Ready:
Ready processes are waiting to have the processor allocated to them by the operating
system so that they can run.
Running:
The process that is currently being executed.
Waiting:
A process that cannot execute until some event occurs, such as the completion of an I/O operation.
Terminated:
A process that has been released from the pool of executable processes.
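The five-state model above can be summarized as a small transition table. The sketch below is only illustrative; the allowed transitions follow the usual textbook diagram (admit, dispatch, wait for an event, event completion, exit), which the notes do not spell out explicitly:

from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal state transitions in the five-state model.
TRANSITIONS = {
    State.NEW: {State.READY},                                     # admitted
    State.READY: {State.RUNNING},                                 # dispatched
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},                                 # event or I/O completed
    State.TERMINATED: set(),
}

def move(current, target):
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target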
Process control block:
Each process is represented by a process control block. The PCB is a data structure in which the operating system groups all the information it needs about a particular process.

Figure: process control block - pointer, process state, process number, program counter, CPU registers, memory allocation, event information, list of open files.
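As a sketch, the PCB fields listed above can be pictured as a plain record. The field types below are illustrative assumptions; a real operating system keeps this information in kernel data structures:

from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    process_state: str          # new / ready / running / waiting / terminated
    process_number: int         # process identifier
    program_counter: int        # address of the next instruction to execute
    cpu_registers: dict         # saved register contents
    memory_allocation: tuple    # e.g. base and limit of the allocated region
    event_information: str      # event the process is waiting for, if any
    open_files: list = field(default_factory=list)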

Process scheduling:
The scheduling mechanism is the part of the process manager that handles the removal
of the running process from the cpu and the selection of another process on the basis of a
particular strategy.

Scheduling queues:
When processes enter the system they are put into a job queue. This queue consists of all processes in the system. The operating system also has other queues.
Queues are of two types:
Ready queue
Set of device queues.
While a process is executing, one of several events could occur:
The process could issue an I/O request and then be placed in an I/O queue.
The process could create a new subprocess and wait for its termination.
The process could be removed forcibly from the CPU as a result of an interrupt.
Schedulers:
Schedulers are of three types.
Long term scheduler.
Short term scheduler
Medium term scheduler.
Long term scheduler:
It is the job scheduler. It runs less frequently than the short term scheduler. It controls the degree of multiprogramming and is absent or minimal in time sharing systems. It selects processes from the job pool and loads them into memory for execution, moving them from the new state to the ready state. It should select a good mix of I/O bound and CPU bound processes.
Short term scheduler:
It is the CPU scheduler. It is very fast and has less control over the degree of multiprogramming. It selects from among the processes that are ready to execute, moving the chosen process from the ready state to the running state. The short term scheduler is invoked much more frequently than the long term scheduler, since it must select a new process for the CPU quite often.
Medium term scheduler:
It is part of the swapping function. It removes processes from memory, reducing the degree of multiprogramming; its speed lies between that of the other two schedulers. Time sharing systems use a medium term scheduler. A swapped-out process can later be reintroduced into memory and its execution continued.
Co-operating process:
A co-operating process is a process that can affect or be affected by other processes while executing. If a process shares data with other processes, it is called a co-operating process.

Benefits of co-operating processes are:

Sharing of information
Increased computation speed
Modularity
Convenience

Co-operating processes share information such as a file or a region of memory. Computation speed increases if the computer has multiple processing elements connected together. The system function is divided into a number of modules.
Example: Process1 executes printf("abc") and Process2 executes printf("CBA"). Possible outputs: "abcCBA", "CBAabc", or an interleaving of the two.

The behaviour of co-operating processes is nondeterministic: it depends on the relative execution sequence and cannot be predicted in advance. It is also irreproducible. For example, suppose one process writes "ABC" and another writes "CBA"; different runs can give different outputs, and we cannot tell which character came from which process, e.g. which process output the first "C" in "ABCCBA". The subtle state sharing occurs here via the terminal.
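A minimal Python sketch of this nondeterminism (the strings follow the example above; the use of the multiprocessing module is illustrative): two unsynchronized processes write to the same terminal, and the order of their output can differ from run to run.

from multiprocessing import Process

def writer(text):
    print(text, end=" ")

if __name__ == "__main__":
    p1 = Process(target=writer, args=("abc",))
    p2 = Process(target=writer, args=("CBA",))
    p1.start(); p2.start()
    p1.join(); p2.join()
    print()   # possible outputs: "abc CBA " or "CBA abc "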

Threads:

A thread is a flow of execution through the process's code, with its own program counter, system registers and stack. A thread is sometimes called a light weight process. No thread can exist outside a process, and each thread represents a separate flow of control.
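A small sketch of the shared-address-space property (the names below are illustrative, not from the notes): all threads of one process append into the same list, while each thread has its own flow of control and its own stack.

import threading

results = []                        # memory shared by every thread of the process

def worker(n):
    results.append(n * n)           # each thread writes into the shared list

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))              # [0, 1, 4, 9]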
Threads are of two types:
User level thread
Kernel level thread

User level threads vs kernel level threads:
User level threads are faster to create and manage; kernel level threads are slower to create and manage.
User level threads are implemented by a thread library at the user level; kernel level threads are supported directly by the operating system.
User level threads can run on any operating system; kernel level threads are specific to the operating system.
Support provided at the user level is called user level threading; support provided by the kernel is called kernel level threading.
A multithreaded application using user level threads cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.
Difference between processes and threads:
Process switching needs interaction with the operating system; thread switching does not need to call the operating system or cause an interrupt to the kernel.
A process is called a heavy weight process; a thread is called a light weight process.
In a multiple-process implementation, each child process executes the same code but has its own memory and file resources; all threads of a process can share the same set of open files.
If one server process is blocked, no other server process can execute until the first process is unblocked; while one server thread is blocked and waiting, a second thread in the same task can run.
Multiple redundant processes use more resources than multiple threads; threads within a process use fewer resources than multiple redundant processes.
In a multiple-process design each process operates independently of the others; one thread can read, write or even completely wipe out another thread's stack.

Interprocess communication:
Interprocess communication means that processes communicate with each other while they are running. IPC allows processes to synchronize their actions without sharing the same address space. Interprocess communication is best provided by a message passing system.

Message passing system:


A message passing system provides synchronization and communication between two processes. It is normally provided in the form of a pair of primitives:
Send(destination-name, message)
Receive(source-name, message)
The send primitive is used for sending a message to a destination.
A process receives information by executing the receive primitive.
Design characteristics of a message system for IPC:
Synchronization between the processes
Addressing
Format of the message
Queueing discipline

Synchronization:
The communication of a message between two processes implies some level of synchronization between them.

Both the sender and the receiver can be blocking or non-blocking.

Blocking send, blocking receive
Non-blocking send, blocking receive
Non-blocking send, non-blocking receive
Blocking send, blocking receive:
Both the sender and the receiver are blocked until the message is delivered. This is called a rendezvous.
Non-blocking send, blocking receive:
The sender may continue on; the receiver is blocked until the requested message arrives.
Non-blocking send, non-blocking receive:
The sending process sends the message and resumes operation; the receiver retrieves the message whenever it is available.
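A small sketch of the non-blocking send, blocking receive case, using a multiprocessing queue as the message channel (the names and message contents are made up): the receiver's get() blocks until the sender's message arrives, while the sender continues immediately after put().

from multiprocessing import Process, Queue

def sender(q):
    q.put(("greeting", "hello"))    # non-blocking send on an unbounded queue

def receiver(q):
    msg_type, body = q.get()        # blocking receive: waits for the message
    print(msg_type, body)

if __name__ == "__main__":
    q = Queue()
    r = Process(target=receiver, args=(q,))
    s = Process(target=sender, args=(q,))
    r.start(); s.start()
    r.join(); s.join()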
Addressing (naming):
Processes that want to communicate must have a way to refer to each other. The send and receive primitives are of two types:
Direct communication
Indirect communication

Direct communication:
In direct communication, the send and receive primitives name the communicating process explicitly.

Indirect communication:
In indirect communication, messages are not sent directly from sender to receiver but to a shared data structure consisting of queues that can temporarily hold messages. Such queues are generally referred to as mailboxes.

For indirect communication, the send and receive primitives are as follows:

Send(B, message)
Send a message to mailbox B.
Receive(B, message)
Receive a message from mailbox B.

Figure: indirect communication - process P sends a message to mailbox B with Send(B, message); process M receives the message from the mailbox.

Properties of the communication link (indirect communication):

A link may be associated with more than two processes.

The link may be unidirectional, but is usually bidirectional.
Buffering:

Buffering is used in direct and indirect communication. It is implemented in three ways:


Zero capacity
Bounded capacity
Unbounded capacity

Send(M, message)
The send primitive is used for sending a message; the sender must specify the name of the destination. Here M is the name of the destination process and message is the actual data: process P is sending a message to process M.
Receive(P, message):
A process receives information by executing the receive primitive.

Figure: direct communication between process P and process M.

Properties of the communication link (direct communication):

A link is associated with exactly two processes, and exactly one link exists between each pair of processes. The scheme explained above uses symmetric addressing. Asymmetric addressing is also possible in direct communication. Its send and receive primitives are:
Send(M, message)
Send a message to process M from process P.
Receive(id, message)
Receive a message from any process; the variable id is set to the name of the sending process.

Zero capacity:
The maximum length of the queue is 0. In this case, the sender must block until the recipient receives the message.

Bounded capacity:
The queue has a finite length: at most n messages can reside in it. If the queue is full, the sender is blocked until space is available in the queue (see the sketch below).

Unbounded capacity:
The queue has infinite length, so any number of messages can wait in it. The sender never blocks.
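The bounded-capacity case can be sketched with a standard library queue (the sizes and names here are illustrative): the mailbox holds at most two messages, so the third put() blocks the sender until the receiver removes one.

import queue
import threading

mailbox = queue.Queue(maxsize=2)     # bounded capacity: at most 2 pending messages

def sender():
    for i in range(3):
        mailbox.put(i)               # the third put() blocks until space is free
        print("sent", i)

def receiver():
    for _ in range(3):
        print("received", mailbox.get())

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()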
Message format:

Figure: message format - header (source, destination, message length, control information, message type) and body (message contents).

A message is divided into two parts: a header and a body. The header contains the source and destination addresses, the message length, control information and the message type. The message contents form the body.
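The layout above can be pictured as a simple record; the field names follow the figure, while the types are illustrative assumptions:

from dataclasses import dataclass

@dataclass
class Message:
    # header fields
    source: str
    destination: str
    message_length: int
    control_information: bytes
    message_type: str
    # body
    contents: bytes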

Scheduling criteria:
A scheduler may use various criteria in attempting to maximize system performance. The scheduling policy determines the importance of each criterion. Some commonly used criteria are:

 Cpu utilization.
 Throughput.
 Waiting time.
 Turnaround time.
 Response time.
 Priority.
 Balanced utilization.
 Fairness.

Cpu utilization:
CPU utilization is the average fraction of time during which the processor is busy. CPU utilization may range from 0% to 100%. On large and expensive systems, i.e. time shared systems, CPU utilization may be the primary consideration.

Throughput:
Throughput refers to the amount of work completed in a unit of time, i.e. the number of processes the system can execute in a period of time.

Waiting time:
The average period of time a process spends waiting. Waiting time may be
expressed as turnaround time less the actual execution time

Turnaround time:
The interval from the time of submission of a process to the time of completion is
the turnaround time.
Response time:
It is the time from the submission of a request until the first response is produced.

Priority:
It gives preferential treatment to processes with higher priorities.

Balanced utilization:
Utilization of memory, I/O devices and other system resources is also considered; CPU utilization is not the only measure of performance.

Fairness:
Avoid starvation of processes. All processes must be given an equal opportunity to execute.

Scheduling algorithms:
A scheduling algorithm may be preemptive or non-preemptive. The types of scheduling algorithms are:

First come first served
Shortest job first
Priority
Round robin
Multilevel feedback queue
Multilevel queue

First come first served:


It is the simplest scheduling algorithm. The CPU is allocated to processes in their order of arrival. It is a non-preemptive scheduling algorithm. When a process enters the ready queue, its process control block is linked onto the tail of the queue.
Let us consider processes that all arrive at time 0, with CPU burst times given in milliseconds.

Process burst time

P1 3
P2 6
P3 4
P4 2

If the processes arrive in the order of p1,p2,p3,p4. the gantt chart for fcfs is
Gantt chart:

P1 P2 P3 P4
0 3 9 13 15
Waiting time:

Process waiting time


P1 0
P2 3
P3 9
P4 13

Average waiting time:


Average waiting time = (sum of the waiting times of all processes) / (number of processes) = (0 + 3 + 9 + 13) / 4 = 25/4 = 6.25 ms
Turn around time:
Process turn around time
P1 3+0=3
P2 6+3=9
P3 4+9=13
P4 2+13=15

Average turnaround time:


= (3 + 9 + 13 + 15) / 4 = 40/4 = 10 ms
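The FCFS figures above can be reproduced with a few lines of Python (a sketch assuming all four processes arrive at time 0, as in the example):

bursts = {"P1": 3, "P2": 6, "P3": 4, "P4": 2}    # arrival order = dictionary order

time = 0
waiting, turnaround = {}, {}
for name, burst in bursts.items():               # FCFS: serve in order of arrival
    waiting[name] = time                         # time spent waiting for the CPU
    time += burst
    turnaround[name] = time                      # completion time (arrival was 0)

print(waiting)                                   # {'P1': 0, 'P2': 3, 'P3': 9, 'P4': 13}
print(sum(waiting.values()) / 4)                 # 6.25
print(sum(turnaround.values()) / 4)              # 10.0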
Shortest job first:
The CPU is assigned to the process in the ready queue that has the smallest next CPU burst.
It is used frequently in long term scheduling.
It may be either a preemptive or a non-preemptive scheduling algorithm.
Let us consider the set of process with burst time in milliseconds.

Process burst time


P1 3
P2 6
P3 4
P4 2

Gantt chart:

P4 P1 P3 P2
0 2 5 9 15
Waiting time:

Process waiting time


P1 2
P2 9
P3 5
P4 0

Average waiting time:

= (2 + 9 + 5 + 0) / 4 = 16/4 = 4 ms

Turn around time:


Process turnaround time
P1 3+2=5
P2 6+9=15
P3 4+5=9
P4 2+0=2

Average turnaround time:

= (5 + 15 + 9 + 2) / 4 = 31/4 = 7.75 ms
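The same calculation for SJF only changes the service order: sort the processes by burst time. This is a sketch of the non-preemptive case with all arrivals at time 0, matching the example above.

bursts = {"P1": 3, "P2": 6, "P3": 4, "P4": 2}

time = 0
waiting, turnaround = {}, {}
for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):   # shortest first
    waiting[name] = time
    time += burst
    turnaround[name] = time

print(waiting)                                   # {'P4': 0, 'P1': 2, 'P3': 5, 'P2': 9}
print(sum(waiting.values()) / 4)                 # 4.0
print(sum(turnaround.values()) / 4)              # 7.75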
Priority scheduling:
The CPU is allocated to the process with the highest priority in the ready queue.
Each process has a priority number. Priority scheduling is either preemptive or non-preemptive.
Priorities can be defined either internally or externally. Internally defined priorities consider quantities such as time limits, the number of open files, and memory and I/O device use. External priorities are set by parameters outside the operating system.
Let us consider the set of processes with burst times in milliseconds (a smaller priority number indicates a higher priority).
Process burst time priority
P1 3 2
P2 6 4
P3 4 1
P4 2 3

Gantt chart:

P3 P1 P4 P2
0 4 7 9 15
Waiting time:

Process waiting time


P1 4
P2 9
P3 0
P4 7
Average waiting time:
= (4 + 9 + 0 + 7) / 4 = 20/4 = 5 ms
Turn around time:

Process turn around time


P1 3+4=7
P2 6+9=15
P3 4+0=4
P4 2+7=9

Average turn around time:


= (7 + 15 + 4 + 9) / 4 = 35/4 = 8.75 ms
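For non-preemptive priority scheduling, the service order is determined by the priority numbers instead of the burst times (a sketch, assuming a smaller number means higher priority, as in the example):

bursts   = {"P1": 3, "P2": 6, "P3": 4, "P4": 2}
priority = {"P1": 2, "P2": 4, "P3": 1, "P4": 3}

time = 0
waiting, turnaround = {}, {}
for name in sorted(bursts, key=lambda n: priority[n]):   # highest priority first
    waiting[name] = time
    time += bursts[name]
    turnaround[name] = time

print(waiting)                                   # {'P3': 0, 'P1': 4, 'P4': 7, 'P2': 9}
print(sum(waiting.values()) / 4)                 # 5.0
print(sum(turnaround.values()) / 4)              # 8.75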
Round robin scheduling:
Time sharing systems use the round robin algorithm.
It is a preemptive algorithm: each process runs for at most one time quantum before the CPU is switched to the next process.
The CPU selects the process at the head of the ready queue.
To implement RR scheduling, the ready queue is maintained as a FIFO queue of processes.
For the processes below, the time quantum is 2 ms, as the Gantt chart shows.

Process burst time

P1 3
P2 6
P3 4
P4 2

Gantt chart:

P1 P2 P3 P4 P1 P2 P3 P2
0 2 4 6 8 9 11 13 15
Waiting time:
Process waiting time
P1 0+6=6
P2 2+5+2=9
P3 4+5=9
P4 6=6
Average waiting time:
= (6 + 9 + 9 + 6) / 4 = 30/4 = 7.5 ms
Turn around time:
Process turn around time
P1 3+6=9
P2 6+9=15
P3 4+9=13
P4 2+6=8
Average turn around time:
= (9 + 15 + 13 + 8) / 4 = 45/4 = 11.25 ms
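Round robin can be simulated with a FIFO ready queue and a time quantum of 2 ms, which reproduces the Gantt chart and averages above (a sketch; all processes are assumed to arrive at time 0):

from collections import deque

bursts = {"P1": 3, "P2": 6, "P3": 4, "P4": 2}
quantum = 2

remaining = dict(bursts)
ready = deque(bursts)                    # FIFO ready queue: P1, P2, P3, P4
time, finish = 0, {}
while ready:
    name = ready.popleft()
    run = min(quantum, remaining[name])
    time += run
    remaining[name] -= run
    if remaining[name] == 0:
        finish[name] = time
    else:
        ready.append(name)               # preempted: back to the tail of the queue

turnaround = {n: finish[n] for n in bursts}              # arrival time is 0
waiting = {n: turnaround[n] - bursts[n] for n in bursts}
print(waiting)                           # {'P1': 6, 'P2': 9, 'P3': 9, 'P4': 6}
print(turnaround)                        # {'P1': 9, 'P2': 15, 'P3': 13, 'P4': 8}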
Multilevel queue scheduling:
Multilevel queues are an extension of priority scheduling whereby all processes of the same priority are placed in a single queue. For example, time sharing systems often support the idea of foreground and background processes.
These two types of processes have different response time requirements, so they require different scheduling algorithms.
The ready queue is divided into a number of separate queues.
Each queue has its own scheduling algorithm.

Figure: multilevel queue scheduling - separate queues for system processes, interactive processes, batch processes and student processes.
Multilevel feedback queue scheduling:

 It overcomes the problems of the multilevel queue scheduling algorithm.

 It allows a process to move between the queues.

 The MFQ idea is to separate processes with different CPU burst characteristics.

 If a process uses too much CPU time, it is moved to a lower priority queue.

Figure: multilevel feedback queue scheduling - ready queue 1 through ready queue n, each feeding the CPU.
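A toy sketch of the feedback idea (the number of queues, the quanta and the job names below are assumptions, not from the notes): a job that uses up its quantum is demoted to the next lower-priority queue, so long CPU bursts sink to the lower queues while short ones finish quickly.

from collections import deque

quanta = [2, 4, 8]                           # time quantum for each queue level
queues = [deque(), deque(), deque()]
queues[0].extend([("P1", 10), ("P2", 3)])    # (name, remaining CPU burst)

time = 0
while any(queues):
    level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
    name, remaining = queues[level].popleft()
    run = min(quanta[level], remaining)
    time += run
    remaining -= run
    if remaining > 0:
        queues[min(level + 1, len(queues) - 1)].append((name, remaining))  # demote
    else:
        print(name, "finished at time", time)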
