
Applied Operating System

MODULE 1 – IT0035

WHAT IS AN OPERATING SYSTEM?

An Operating System (OS) is a program or system software that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs.

It is not possible for the user to use any computer or mobile device without having an operating system.

Goals of Operating System
1. Execute user programs and make solving user problems easier.
2. Make the computer system convenient to use.
3. Use the computer hardware in an efficient manner.

COMPONENTS OF A COMPUTER SYSTEM
❖ Computer hardware – CPU, memory and I/O devices; provides the basic computing resources.
❖ Application programs – are used to solve the computing problems of the users, such as word processors, games and business programs.
❖ Users – who utilize a computer or network service trying to solve different problems.
❖ Operating system – controls and coordinates the use of the hardware among the various application programs for the various users.

COMMON SERVICES OFFERED BY ALMOST ALL OPERATING SYSTEMS:
1. User Interface (UI) – refers to the part of an OS or device that allows a user to enter and receive information.
   • Types of UI:
     o Command Line Interface
     o Batch-based interface
     o Graphical User Interface
2. Program Execution – The OS must have the capability to load a program into memory and execute that program.
3. File System Manipulation – Programs need to read and write data as files and directories. The file-handling portion of the OS also allows users to create and delete files by a specific name along with an extension, search for a given file and/or list file information.
4. Input/Output Operations – A currently executing program may require I/O, which may involve a file or another I/O device. The OS is responsible for reading and/or writing data from I/O devices such as disks, tapes, printers, keyboards, etc.
5. Communication – A process may need to exchange information with another process. Processes executing on the same computer system or on different computer systems can communicate using operating system support.
6. Resource Allocation – The OS manages the different computer resources such as CPU time, memory space, file storage space, I/O devices, etc. and allocates them to different application programs and users.
7. Error Detection – The operating system should be able to detect errors within the computer system (CPU, memory, I/O, or user program) and take the appropriate action.
8. Job Accounting – The OS keeps track of the time and resources used by various tasks and users; this information can be used to track resource usage for a particular user or group of users.
9. Security and Protection
   ➢ Protection is any mechanism for controlling the access of processes or users to resources defined by the OS.
   ➢ Security is a defense of the system against internal and external attacks

(denial-of-service, worms, viruses, identity theft, theft of service).

WHAT IS A KERNEL?
The kernel is the central part of an OS, which manages system resources and is always resident in memory. It also acts like a bridge between the applications and the hardware of the computer. It is also the first program that loads after the bootloader.

A bootloader is a program that loads and starts the boot-time tasks and processes of an OS. It also places the OS of a computer into memory.

TYPES OF OPERATING SYSTEM
Operating systems have been there from the very first computer generation, and they keep evolving with time.

Types of operating systems which are most commonly used:
1. Batch Operating System
   • The user of a BOS never directly interacts with the computer.
   • Every user prepares his or her job on an offline device like a punch card and submits it to the computer operator.
   • To speed up processing, jobs with similar needs are batched together and run as a group.
   • The programmers leave their programs with the operator; the operator then sorts the programs with similar requirements into batches.
2. Time-sharing Operating System
   • Time-sharing or multitasking is a logical extension in which the CPU switches jobs so frequently that users can interact with each job while it is running, creating interactive computing.
   • Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user can receive an immediate response. Response time should be < 1 second.
   • Each user has at least one program executing in memory.
   • If several jobs are ready to run at the same time -> CPU scheduling.
   • If processes don't fit in memory, swapping will take place.
   • Examples: Unix, Linux, Multics and Windows
3. Distributed Operating System
   • Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. Data processing jobs are distributed among the processors accordingly.
   • The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function. These processors are referred to as sites, nodes, computers, and so on.
   • Examples: Telecom Network, WWW, Cloud Computing, etc.
4. Network Operating System
   • A NOS runs on a server and provides the server the capability to manage data, users, groups, security, applications, and other networking functions.
   • The primary purpose of the network operating system is to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN), a private network, or other networks.

   • Examples: Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X Server, Novell NetWare, and BSD/OS (Berkeley Software Design)
5. Real-time Operating System
   • An RTOS is an operating system intended to serve real-time systems/applications that process data as it comes in, mostly without buffer delay.
   • The time interval required to process and respond to inputs is very small. This time interval is called the response time.
   • Real-time systems are used when the time requirements are very strict, as in missile systems, air traffic control systems, robots, etc.
   • Examples: LynxOS, OSE, QNX, RTLinux, VxWorks, Windows CE
6. Handheld Operating System
   • It is also known as a Mobile OS, which is built exclusively for a mobile device, such as a smartphone, personal digital assistant (PDA), tablet, wearable device or other embedded mobile device.
   • Examples: Android, Symbian, iOS, BlackBerry OS and Windows Mobile

COMPUTER SYSTEM ORGANIZATION
× One or more CPUs and device controllers connect through a common bus providing access to shared memory.
× Concurrent execution of CPUs and devices competing for memory cycles.
× I/O devices and the CPU can execute concurrently.
× Each device controller is in charge of a particular device type.
× Each device controller has a local buffer.
× The CPU moves data from/to main memory to/from the local buffers.
× I/O is from the device to the local buffer of the controller.
× The device controller informs the CPU that it has finished its operation by causing an interrupt.

What is an Interrupt?
› An interrupt is a signal emitted by hardware or software when a process or an event needs immediate attention.
› It alerts the processor to a high-priority process, requiring interruption of the current working process; the processor then returns to its previous task.
› Types of Interrupts:
  o Hardware Interrupt
  o Software Interrupt
› An operating system is interrupt driven.

HARDWARE INTERRUPT
→ A signal created and sent to the CPU that is caused by some action taken by a hardware device.
→ Example: when a key is pressed or when the mouse is moved.

SOFTWARE INTERRUPT
→ Arises due to illegal or erroneous use of an instruction or data. It often occurs when an application terminates or when it requests the operating system for some service.
→ Example: stack overflow, division by zero, invalid opcode, etc. These are also called traps.

INTERRUPT HANDLING
The operating system preserves the state of the CPU by storing the registers and the program counter. It then determines which type of interrupt has occurred:

➢ Polling – the operating system sends a signal to each device, asking if it has a request.
➢ Vectored Interrupt System – the requesting device sends an interrupt to the operating system.
Separate segments of code determine what action should be taken for each type of interrupt.

Operating System Operations
› Dual-mode operation allows the OS to protect itself and other system components
  × User mode and kernel mode
  × Mode bit provided by hardware
    • Provides the ability to distinguish when the system is running user code (1) or kernel code (0)
    • Some instructions are designated as privileged, and are only executable in kernel mode
    • A system call changes the mode to kernel; return from the call resets it to user.
    • A system call is a way for programs to interact with the OS. A computer program makes a system call when it makes a request to the OS's kernel.

PROCESSOR SYSTEM
Single-Processor System
› There is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes.

Multiprocessor System
› Also known as a parallel system or multicore.
› First appeared in servers and now in smartphones and tablet computers.

COMPUTER SYSTEM ARCHITECTURE
× Most systems use a single general-purpose processor (PDAs through mainframes)
× Most systems have special-purpose processors as well
× Multiprocessor systems are growing in use and importance
  o Also known as parallel systems or tightly-coupled systems
  o Advantages include:
    ▪ Increased throughput
    ▪ Economy of scale
    ▪ Increased reliability – graceful degradation or fault tolerance
  o Two types:
    ▪ Asymmetric Multiprocessing
    ▪ Symmetric Multiprocessing

MULTIPROCESSOR SYSTEMS
Advantages:
1. Increased throughput. By increasing the number of processors, we expect to get more work done in less time.
2. Economy of Scale. A multiprocessor system can cost less than equivalent multiple single-processor systems because the processors can share peripherals, mass storage and power supplies.
3. Increased Reliability. Functions can be distributed properly among several processors. If one processor fails, another processor can pick up the task.
The multiprocessor systems in use today are of two types:
1. Asymmetric Multiprocessing. Each processor is assigned a specific task. A boss processor controls the system; the other processors either look to the boss for instructions or have predefined tasks. This is a boss-worker relationship.
2. Symmetric Multiprocessing (SMP). The most commonly used. Each processor performs all tasks within the

operating system. All processors are peers; there is no boss-worker relationship.
The difference between symmetric and asymmetric multiprocessing may result from either hardware or software.
× A recent trend in CPU design is to include multiple computing cores on a single chip. Such multiprocessor systems are termed multicore. They can be more efficient than multiple chips with single cores.
× A dual-core design has two cores on the same chip. Each core has its own register set as well as its own local cache.

CLUSTERED SYSTEMS
› Like multiprocessor systems, but multiple systems working together.
› Usually sharing storage via a storage-area network (SAN)
› Provides a high-availability service which survives failures
  o Asymmetric clustering has one machine in hot-standby mode
  o Symmetric clustering has multiple nodes running applications, monitoring each other
› Some clusters are for high-performance computing (HPC)

COMPUTING ENVIRONMENT
› Traditional computer – blurring over time
  • Office environment
    → PCs connected to a network, terminals attached to mainframes or minicomputers providing batch and timesharing
    → Now portals allowing networked and remote systems access to the same resources
  • Home networks
    → Used to be single systems, then modems
    → Now firewalled, networked
  • Mobile computing
    → Refers to computing on handheld smartphones and tablet computers.
  • Distributed system
    → A collection of physically separate, possibly heterogeneous computer systems that are networked to provide users with access to the various resources that the system maintains.
    → A network operating system is an operating system that provides services across the network.
  • Client-Server Computing
    → Dumb terminals succeeded by smart PCs
    → Many systems are now servers, responding to requests generated by clients
      o A compute-server provides an interface for clients to request services (e.g., a database)
      o A file-server provides an interface for clients to store and retrieve files.
› Peer-to-Peer (P2P) is another model of distributed system
› P2P does not distinguish between clients and servers
  • Instead, all nodes are considered peers
  • Each may act as client, server or both
  • A node must join the P2P network
    → Registers its service with a central lookup service on the network, or
    → Broadcasts a request for service and responds to requests for service via a discovery protocol
  • Examples: Napster and BitTorrent

› Virtualization
  • It is a technology that allows operating systems to run as applications within other operating systems.
  • It is one member of the class of software that also includes emulation. Emulation is used when the source CPU type is different from the target CPU type.
  • Example: virtual machines, Oracle VirtualBox
› Cloud Computing
  • It is a type of computing that delivers computing, storage and even applications as a service across a network.
  • It is a logical extension of virtualization
    → Public Cloud – cloud available via the Internet
    → Private Cloud – cloud run by a company for that company's own use
    → Hybrid Cloud – cloud that includes both public and private
› Cloud Computing Service Models
  • Software as a Service (SaaS) – one or more applications available via the Internet
  • Platform as a Service (PaaS) – a software stack ready for application use via the Internet
  • Infrastructure as a Service (IaaS) – servers or storage available over the Internet.

Platform Type | Common Examples
SaaS          | Google Apps, Dropbox, Salesforce, Cisco WebEx, Concur, GoToMeeting
PaaS          | AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine, Apache Stratos, OpenShift
IaaS          | DigitalOcean, Linode, Rackspace, Amazon Web Services (AWS), Cisco Metapod, Microsoft Azure, Google Compute Engine (GCE)

OPEN-SOURCE OPERATING SYSTEM
× Open-source operating systems are released under a license where the copyright holder allows others to study, change, as well as distribute the software to other people.
× Counter to the copy protection and Digital Rights Management (DRM) movement.
× Started by the Free Software Foundation (FSF), which has the "copyleft" GNU Public License (GPL)
× Examples: GNU (GNU's Not Unix) / Linux, BSD UNIX (including the core of Mac OS X), and Sun Solaris
Applied Operating System
MODULE 2 – IT0035

PROCESS MANAGEMENT

PROCESS CONCEPT
› A process is a program in execution. A program by itself is not a process.
› A program is a passive entity, such as the contents of a file stored on disk, while a process is an active entity.
› A computer system consists of a collection of processes:
  o Operating system processes execute system code, and
  o User processes execute user code.
› Although several processes may be associated with the same program, they are nevertheless considered separate execution sequences.
› All processes can potentially execute concurrently, with the CPU (or CPUs) multiplexing among them (time sharing).
› A process is actually a cycle of CPU execution (CPU burst) and I/O wait (I/O burst). Processes alternate back and forth between these two states.
› Process execution begins with a CPU burst that is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the last CPU burst will end with a system request to terminate execution.
› Example:

PROCESS ARCHITECTURE
To put it in simple terms, we write our computer programs in a text file, and when we execute a program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data. The figure shows a simplified layout of a process inside main memory.

Stack | Contains temporary data such as method/function parameters, return addresses and local variables.
Heap  | Memory dynamically allocated to the process during its run time.
Text  | Includes the current activity, represented by the value of the Program Counter and the contents of the processor's registers.
Data  | Contains the global and static variables.

PROCESS STATE
As a process executes, it changes state. The current activity of a process partly defines its state. Each sequential process may be in one of the following states:
1. New. The process is being created.
2. Running. The CPU is executing its instructions.
3. Waiting. The process is waiting for some event to occur (such as an I/O completion).
4. Ready. The process is waiting for the OS to assign a processor to it.
5. Terminated. The process has finished execution.
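The five states and their legal transitions can be captured as a small state machine. This is an illustrative sketch (the transition set follows the usual process state diagram; all names are my own):

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal moves from the process state diagram: admitted, scheduler
# dispatch, interrupt (preemption), event wait, event completion, exit.
TRANSITIONS = {
    (State.NEW, State.READY),           # admitted
    (State.READY, State.RUNNING),       # scheduler dispatch
    (State.RUNNING, State.READY),       # interrupted / preempted
    (State.RUNNING, State.WAITING),     # I/O or event wait
    (State.WAITING, State.READY),       # I/O or event completion
    (State.RUNNING, State.TERMINATED),  # exit
}

def move(current: State, target: State) -> State:
    """Advance a process to `target`, rejecting illegal transitions."""
    if (current, target) not in TRANSITIONS:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

Note that a process never goes straight from New to Running: it must pass through Ready and be dispatched by the scheduler.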

Process State Diagram

PROCESS CONTROL BLOCK
Each process is represented in the operating system by a process control block (PCB) – also called a task control block. A PCB is a data block or record containing many pieces of information associated with a specific process, including:
1. Process State. The state may be new, ready, running, waiting, or halted.
2. Program Counter. The program counter indicates the address of the next instruction to be executed for this process.
3. CPU Registers. These include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
4. CPU Scheduling Information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
5. Memory Management Information. This information includes limit registers or page tables.
6. Accounting Information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
7. I/O Status Information. This information includes outstanding I/O requests, I/O devices (such as disks) allocated to this process, a list of open files, and so on.
× The PCB simply serves as the repository for any information that may vary from process to process.

PROCESS CONCEPT
Example of the CPU being switched from one process to another. This is also known as a Context Switch Diagram.

CONCURRENT PROCESSES
o The processes in the system can execute concurrently; that is, many processes may be multitasked on a CPU.
o A process may create several new processes, via a create-process system call, during the course of execution. Each of these new processes may in turn create other processes.
o The creating process is the parent process, whereas the new processes are the children of that process.
o When a process creates a sub-process, the sub-process may be able to obtain its resources directly from the OS, or it may use a subset of the resources of the parent process.
o Restricting a child process to a subset of the parent's resources prevents any process from overloading the system by creating too many processes.
o When a process creates a new process, two common implementations exist in terms of execution:
  1. The parent continues to execute concurrently with its children.
  2. The parent waits until all its children have terminated.

o A process terminates when it finishes its last statement and asks the operating system to delete it using the exit system call.
o A parent may terminate the execution of one of its children for a variety of reasons, such as:
  1. The child has exceeded its usage of some of the resources it has been allocated.
  2. The task assigned to the child is no longer required.
  3. The parent is exiting, and the OS does not allow a child to continue if its parent terminates. In such systems, if a process terminates, then all its children must also be terminated by the operating system. This phenomenon is referred to as cascading termination.
o The concurrent processes executing in the OS may be either independent processes or cooperating processes.
o A process is independent if it cannot affect or be affected by the other processes. Clearly, any process that does not share any data (temporary or persistent) with any other process is independent. Such a process has the following characteristics:
  1. Its execution is deterministic; that is, the result of the execution depends solely on the input state.
  2. Its execution is reproducible; that is, the result of the execution will always be the same for the same input.
  3. Its execution can be stopped and restarted without causing ill effects.
o A process is cooperating if it can affect or be affected by the other processes. Clearly, any process that shares data with other processes is a cooperating process. Such a process has the following characteristics:
  1. The results of its execution cannot be predicted in advance, since they depend on the relative execution sequence.
  2. The result of its execution is nondeterministic, since it will not always be the same for the same input.
o Concurrent execution of cooperating processes requires mechanisms that allow processes to communicate with one another and to synchronize their actions.

SCHEDULING CONCEPTS
• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• Multiprogramming also increases throughput, which is the amount of work the system accomplishes in a given time interval (for example, 17 processes per minute).
• Example: Given two processes, P0 and P1.
• The idea of multiprogramming is that if one process is in the waiting state, then another process which is in the ready state goes to the running state.
• As processes enter the system, they are put into a job queue. This queue consists of all processes in the system.
• The processes that are residing in main memory and are ready and waiting to execute are kept on another queue, the ready queue.
• A process migrates between the various scheduling queues throughout its lifetime. The OS must select processes from these queues in some fashion.
• The selection process is the responsibility of the appropriate scheduler.
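The queue migration described above can be sketched with two FIFO queues. This is illustrative only; a real OS links PCBs onto these queues rather than bare process names:

```python
from collections import deque

job_queue = deque(["P1", "P2", "P3"])   # all processes in the system
ready_queue = deque()                   # in memory, waiting for the CPU

# Long-term scheduler: admit jobs from the job queue into memory.
while job_queue:
    ready_queue.append(job_queue.popleft())

# Short-term scheduler: dispatch the next ready process (FIFO here).
running = ready_queue.popleft()
```

A process that blocks on I/O would leave the CPU for a device queue and rejoin the ready queue when the I/O completes.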

TYPES OF SCHEDULER
o Long-term scheduler (or job scheduler)
  → Selects processes from secondary storage and loads them into memory for execution.
  → The long-term scheduler executes much less frequently.
  → There may be minutes between the creation of new processes in the system.
  → The long-term scheduler controls the degree of multiprogramming – the number of processes in memory.
  → Because of the longer interval between executions, the long-term scheduler can afford to take more time to select a process for execution.
o Short-term scheduler (or CPU scheduler)
  → Selects a process from among the processes that are ready to execute, and allocates the CPU to it.
  → The short-term scheduler must select a new process for the CPU frequently.
  → A process may execute for only a few milliseconds before waiting for an I/O request.
  → Because of the brief time between executions, the short-term scheduler must be very fast.

SCHEDULING CONCEPTS
› The medium-term scheduler removes (swaps out) certain processes from memory to lessen the degree of multiprogramming (particularly when thrashing occurs).
› At some later time, the process can be reintroduced into memory and its execution can be continued where it left off.
› This scheme is called swapping.
› Switching the CPU to another process requires some time to save the state of the old process and load the saved state for the new process. This task is known as a context switch.
› Context-switch time is pure overhead, because the system does no useful work while switching, and should therefore be minimized.
› Whenever the CPU becomes idle, the OS (particularly the CPU scheduler) must select one of the processes in the ready queue for execution.

CPU SCHEDULER
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, an I/O request, or invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, completion of I/O).
4. When a process terminates.
For circumstances 1 and 4, there is no choice in terms of scheduling. A new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, for circumstances 2 and 3.
When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is non-preemptive; otherwise, the scheduling scheme is preemptive.

CPU SCHEDULING
• In non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching states. No process

is interrupted until it completes, after which the processor switches to another process.
• Preemptive scheduling works by dividing the CPU's time into slots assigned to the given processes. The given time slot may or may not be enough to complete the process. When the burst time of the process is greater than the CPU cycle, it is placed back into the ready queue and will execute at the next chance. This scheduling is used when a process switches to the ready state.

CPU SCHEDULING ALGORITHMS
• Different CPU-scheduling algorithms have different properties and may favor one class of processes over another.
• Many criteria have been suggested for comparing CPU-scheduling algorithms.
• The characteristics used for comparison can make a substantial difference in the determination of the best algorithm. The criteria include: CPU Utilization, Throughput, Turnaround Time, Waiting Time, and Response Time.

CPU SCHEDULING ALGORITHMS
1. CPU Utilization measures how busy the CPU is. CPU utilization may range from 0 to 100 percent. In a real system, it should range from 40% (for a lightly loaded system) to 90% (for a heavily loaded system).
2. Throughput is the amount of work completed in a unit of time; in other words, the number of jobs completed in a unit of time. The scheduling algorithm must look to maximize the number of jobs processed per time unit.
3. Turnaround Time measures how long it takes to execute a process. Turnaround time is the interval from the time of submission to the time of completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting Time is the time a job waits for resource allocation when several jobs are competing in a multiprogramming system. Waiting time is the total amount of time a process spends waiting in the ready queue.
5. Response Time is the time from the submission of a request until the system makes the first response. It is the amount of time it takes to start responding, but not the time it takes to output that response.

A good CPU scheduling algorithm maximizes CPU utilization and throughput and minimizes turnaround time, waiting time and response time.
• In most cases, the average measure is optimized. However, in some cases, it is desirable to optimize the minimum or maximum values, rather than the average.
• For example, to guarantee that all users get good service, it may be better to minimize the maximum response time.
• For interactive systems (time-sharing systems), some analysts suggest that minimizing the variance in response time is more important than minimizing the average response time.
• A system with a reasonable and predictable response may be considered more desirable than a system that is faster on average but highly variable.
• Non-preemptive:
  › First-Come First-Served (FCFS)
  › Shortest Job First (SJF)
  › Priority Scheduling (Non-preemptive)
• Preemptive:
  › Shortest Remaining Time First (SRTF)
  › Priority Scheduling (Preemptive)
  › Round-robin (RR)
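Given each process's arrival, burst, and completion times, the time-based criteria reduce to a few subtractions. A minimal sketch (function and field names are my own; waiting time is derived as turnaround minus burst):

```python
def schedule_metrics(procs):
    """procs: list of (arrival, burst, completion) tuples for finished jobs.
    Turnaround = completion - arrival; waiting = turnaround - burst."""
    turnaround = [c - a for a, b, c in procs]
    waiting = [t - b for t, (a, b, c) in zip(turnaround, procs)]
    n = len(procs)
    return {
        "avg_waiting": sum(waiting) / n,
        "avg_turnaround": sum(turnaround) / n,
        "throughput": n / max(c for _, _, c in procs),  # jobs per time unit
    }
```

Feeding in the figures from a worked example (four jobs arriving at time 0 with bursts 5, 3, 8, 6 run in that order) reproduces the averages computed by hand in the next section.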
CPU SCHEDULING TECHNIQUES—NON-PREEMPTIVE

FIRST-COME FIRST-SERVED (FCFS)
→ FCFS is the simplest CPU-scheduling algorithm.
→ The process that requests the CPU first gets the CPU first.
→ The FCFS algorithm is non-preemptive.
→ Example 1:
   o Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds (ms).
   o Illustrate the Gantt chart and compute the average waiting time and average turnaround time.
Given:
Process | Arrival Time | Burst Time | Waiting Time | Turnaround Time
P1      | 0            | 5          | 0-0 = 0      | 5-0 = 5
P2      | 0            | 3          | 5-0 = 5      | 8-0 = 8
P3      | 0            | 8          | 8-0 = 8      | 16-0 = 16
P4      | 0            | 6          | 16-0 = 16    | 22-0 = 22
Average |              |            | 29/4 = 7.25 ms | 51/4 = 12.75 ms

Gantt Chart:
| P1 | P2 | P3 | P4 |
0    5    8    16   22

Formulas:
• Waiting time = Start time – Arrival time
• Turnaround time = Completion time – Arrival time
• Average Waiting Time = Sum of Waiting times / No. of processes
• Average Turnaround Time = Sum of Turnaround times / No. of processes

→ Example 2:
Process | Arrival Time | Burst Time | Waiting Time | Turnaround Time
P1      | 0            | 5          | 0-0 = 0      | 5-0 = 5
P2      | 1            | 3          | 5-1 = 4      | 8-1 = 7
P3      | 2            | 8          | 8-2 = 6      | 16-2 = 14
P4      | 3            | 6          | 16-3 = 13    | 22-3 = 19
Average |              |            | 23/4 = 5.75 ms | 45/4 = 11.25 ms

Gantt Chart:
| P1 | P2 | P3 | P4 |
0    5    8    16   22

SHORTEST JOB FIRST (SJF)
→ The SJF algorithm associates with each process the length of the latter's next CPU burst.
→ When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
→ If two processes have the same-length next CPU burst, FCFS scheduling is used to break the tie.
→ The SJF algorithm is non-preemptive.
→ Example 1:
Process | Arrival Time | Burst Time | Waiting Time | Turnaround Time
P1      | 0            | 5          | 3-0 = 3      | 8-0 = 8
P2      | 0            | 3          | 0-0 = 0      | 3-0 = 3
P3      | 0            | 8          | 14-0 = 14    | 22-0 = 22
P4      | 0            | 6          | 8-0 = 8      | 14-0 = 14
Average |              |            | 25/4 = 6.25 ms | 47/4 = 11.75 ms

Gantt Chart:
| P2 | P1 | P4 | P3 |
0    3    8    14   22
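Non-preemptive schedules like these FCFS and SJF examples can be replayed programmatically. The sketch below takes processes in their dispatch order and applies the waiting-time and turnaround-time formulas above (all names are illustrative):

```python
def run_nonpreemptive(order):
    """order: list of (name, arrival, burst) in dispatch order.
    Returns {name: (waiting, turnaround)} using
    waiting = start - arrival and turnaround = completion - arrival."""
    clock, results = 0, {}
    for name, arrival, burst in order:
        start = max(clock, arrival)      # CPU may idle until arrival
        completion = start + burst
        results[name] = (start - arrival, completion - arrival)
        clock = completion
    return results
```

FCFS dispatches in arrival order, while SJF dispatches the shortest burst first; either way, the same replay function reproduces the tables.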

→ Example 2:
Process | Arrival Time | Burst Time | Waiting Time | Turnaround Time
P1      | 0            | 5          | 0-0 = 0      | 5-0 = 5
P2      | 1            | 3          | 5-1 = 4      | 8-1 = 7
P3      | 2            | 8          | 14-2 = 12    | 22-2 = 20
P4      | 3            | 6          | 8-3 = 5      | 14-3 = 11
Average |              |            | 21/4 = 5.25 ms | 43/4 = 10.75 ms

Gantt Chart:
| P1 | P2 | P4 | P3 |
0    5    8    14   22

PRIORITY SCHEDULING (NP)
→ The priority scheduling (non-preemptive) algorithm is one of the most common scheduling algorithms in batch systems.
→ Each process is assigned a priority.
→ The process with the highest priority is executed first, and so on.
→ Processes with the same priority are executed on an FCFS basis.
→ Priority can be decided based on memory requirements, time requirements or any other resource requirement.
→ Example 1:
Process | Arrival Time | Burst Time | Waiting Time | Turnaround Time
P1      | 0            | 5          | 0-0 = 0      | 5-0 = 5
P2      | 0            | 3          | 13-0 = 13    | 16-0 = 16
P3      | 0            | 8          | 5-0 = 5      | 13-0 = 13
P4      | 0            | 6          | 16-0 = 16    | 22-0 = 22
Average |              |            | 34/4 = 8.5 ms | 56/4 = 14 ms

Gantt Chart:
| P1 | P3 | P2 | P4 |
0    5    13   16   22

CPU SCHEDULING TECHNIQUES—PREEMPTIVE

SHORTEST JOB FIRST (SJF)
→ The SJF algorithm may be either preemptive or non-preemptive.
→ A newly arriving process may have a shorter next CPU burst than what is left of the currently executing process.
→ A preemptive SJF algorithm will preempt the currently executing process.
→ Preemptive SJF scheduling is sometimes called Shortest Remaining Time First scheduling.

SHORTEST REMAINING TIME FIRST (SRTF)
Example:
Process | Arrival Time | Burst Time | Waiting Time | Turnaround Time
P1      | 0            | 5          | (0-0) + (4-1) = 3 | 8-0 = 8
P2      | 1            | 3          | 1-1 = 0      | 4-1 = 3
P3      | 2            | 8          | 14-2 = 12    | 22-2 = 20
P4      | 3            | 6          | 8-3 = 5      | 14-3 = 11
Average |              |            | 20/4 = 5 ms  | 42/4 = 10.5 ms

Gantt Chart:
| P1 | P2 | P2 | P2 | P1 | P4 | P3 |
0    1    2    3    4    8    14   22

PRIORITY SCHEDULING (P)
→ Priority scheduling can be either preemptive or non-preemptive.
→ When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
→ A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than that of the currently running process.

→ Example 3:
Process | Arrival Time | Burst Time | Priority | Waiting Time | Turnaround Time
P1      | 0            | 5          | 1        | 0-0 = 0      | 5-0 = 5
P2      | 1            | 3          | 2        | 13-1 = 12    | 16-1 = 15
P3      | 2            | 8          | 1        | 5-2 = 3      | 13-2 = 11
P4      | 3            | 6          | 3        | 16-3 = 13    | 22-3 = 19
Average |              |            |          | 28/4 = 7 ms  | 50/4 = 12.5 ms

Gantt Chart:
| P1 | P1 | P1 | P1 | P3 | P2 | P4 |
0    1    2    3    5    13   16   22

ROUND-ROBIN (RR) SCHEDULING
→ This algorithm is designed specifically for time-sharing systems.
→ A small unit of time, called a time quantum or time slice, is defined.
→ The ready queue is treated as a circular queue.
→ The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
→ The RR algorithm is therefore preemptive.
→ Example (time quantum = 3):
Process | Arrival Time | Burst Time | Waiting Time | Turnaround Time
P1      | 0            | 5          | (0-0) + (12-3) = 9 | 14-0 = 14
P2      | 1            | 3          | 3-1 = 2      | 6-1 = 5
P3      | 2            | 8          | (6-2) + (14-9) + (20-17) = 12 | 22-2 = 20
P4      | 3            | 6          | (9-3) + (17-12) = 11 | 20-3 = 17
Average |              |            | 34/4 = 8.5 ms | 56/4 = 14 ms

Gantt Chart:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    3    6    9    12   14   17   20   22

→ The performance of the RR algorithm depends heavily on the size of the time quantum.
→ If the time quantum is too large (infinite), the RR policy degenerates into the FCFS policy.
→ If the time quantum is too small, then the effect of the context-switch time becomes a significant overhead.
→ As a general rule, 80 percent of CPU bursts should be shorter than the time quantum.

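The RR example can be replayed with a circular ready queue. An illustrative sketch (quantum of 3, matching the example; arrivals during a slice join the queue before the preempted process rejoins at the back):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst) sorted by arrival time.
    Returns (gantt, completion): gantt is a list of (name, start, end)
    slices; completion maps each name to its finish time."""
    pending = deque(procs)                  # not yet arrived
    ready, gantt, completion = deque(), [], {}
    remaining = {name: burst for name, _, burst in procs}
    clock = 0
    while pending or ready:
        # Admit every process that has arrived by now.
        while pending and pending[0][1] <= clock:
            ready.append(pending.popleft()[0])
        if not ready:                       # CPU idles until next arrival
            clock = pending[0][1]
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        gantt.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        # New arrivals during this slice enter the queue first.
        while pending and pending[0][1] <= clock:
            ready.append(pending.popleft()[0])
        if remaining[name]:
            ready.append(name)              # back of the circular queue
        else:
            completion[name] = clock
    return gantt, completion
```

Running it on the example's processes reproduces the Gantt chart above slice for slice.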