
MODULE I

OPERATING SYSTEM

An Operating System is system software which may be viewed as an organized collection of software consisting of procedures for operating a computer and providing an environment for the execution of programs. It acts as an interface between the users and the hardware of a computer system.
The main purposes of an Operating System:
• Convenience: transform the raw hardware into a machine that is more amenable to users.
• Efficiency: manage the resources of the overall computer system.
An operating system can also be defined as:
A large collection of software which manages the resources of the computer system, such as memory, processor, file system and input/output devices. It keeps track of the status of each resource and decides who will have control over computer resources, for how long and when.
Examples of operating systems
• UNIX
• GNU/Linux
• Mac OS
• MS-DOS
Why do you need an Operating System?
Some of the important reasons why you need an Operating System are as follows:
• The user interacts with the computer through the operating system in order to accomplish his/her task, since it is the primary interface with a computer.
• It helps the user understand the inner functioning of a computer closely.
• Many concepts and techniques found in operating systems have general applicability in other applications.
An operating system is an essential component of a computer system. The primary objectives of an operating system are to make the computer system convenient to use and to utilize the computer hardware in an efficient manner.



The positioning of the operating system in the overall computer system is shown below.

There are two ways one can interact with the operating system (both are illustrated in the sketch below):
• By means of Operating System Call in a program
• Directly by means of Operating System Commands
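For illustration, here is a minimal Python sketch (assuming a UNIX-like system; the particular call and command are chosen only for demonstration) showing both routes: a system call made from within a program, and an operating-system command issued the way a user could type it:

import os          # the os module wraps operating-system calls
import subprocess  # used here to issue an OS command from within a program

# 1. By means of an operating-system call in a program:
#    os.getpid() wraps the getpid() system call.
print("My process id (via a system call):", os.getpid())

# 2. Directly by means of an operating-system command:
#    'echo' runs exactly as a user could type it at a shell prompt.
result = subprocess.run(["echo", "hello from an OS command"],
                        capture_output=True, text=True)
print(result.stdout.strip())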

OPERATING-SYSTEM STRUCTURE

1. Simple Structure

It consists of two separable parts: the kernel and the system programs.

The kernel is further separated into a series of interfaces and device drivers. Everything
below the system-call interface and above the physical hardware is the kernel. The
kernel provides the file system, CPU scheduling, memory management, and other
operating-system functions through system calls. An enormous amount of functionality
is combined into one level. This monolithic structure was difficult to implement and
maintain.



2. Layered Approach
The operating system is broken into a number of layers (levels). The bottom layer (layer
0) is the hardware; the highest (layer N) is the user interface. The main advantage of
the layered approach is simplicity of construction and debugging. The layers are
selected so that each uses functions (operations) and services of only lower-level
layers. This approach simplifies debugging and system verification. Once the first
layer is debugged, its correct functioning can be assumed while the second layer is
debugged, and so on. If an error is found during the debugging of a particular layer, the
error must be on that layer, because the layers below it are already debugged. Thus,
the design and implementation of the system are simplified. Each layer is implemented only with operations provided by lower-level layers. A layer does not need to know how these operations are implemented; it needs to know only what these operations do.
Hence, each layer hides the existence of certain data structures, operations, and
hardware from higher-level layers. The major difficulty with the layered approach
involves appropriately defining the various layers. Because a layer can use only
lower-level layers, careful planning is necessary.

3. Microkernel
As UNIX expanded, the kernel became large and difficult to manage. An operating system called Mach modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system-level and user-level programs. The result is a smaller kernel. Microkernels provide minimal process and memory management, in addition to a communication facility. The architecture of a typical microkernel is as follows:



The main function of the microkernel is to provide communication between the client
program and the various services that are also running in user space. Communication is
provided through message passing.
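As a toy sketch of this structure (all names here are hypothetical; this is not any real microkernel's API), the client and the file service below are separate user-space units that never call each other directly; every request travels as a message through the kernel's communication facility:

import queue

# Each user-space program and service owns a mailbox; the microkernel
# does nothing but copy messages between mailboxes.
mailboxes = {"client": queue.Queue(), "file_service": queue.Queue()}

def kernel_send(dest, message):
    # The microkernel's communication facility: deliver one message.
    mailboxes[dest].put(message)

# The client asks the file service (also running in user space) for a file.
kernel_send("file_service", {"reply_to": "client", "op": "read", "name": "a.txt"})

# The file service handles the request and replies, again via the kernel.
request = mailboxes["file_service"].get()
kernel_send(request["reply_to"], {"data": "contents of " + request["name"]})

print(mailboxes["client"].get())   # the client receives the reply message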

4. Hybrid Systems

In practice, very few operating systems adopt a single, strictly defined structure. Instead, they combine different structures, resulting in hybrid systems that address performance, security, and usability issues. Three hybrid systems are discussed here: the Apple Mac OS X operating system and the two most prominent mobile operating systems, iOS and Android.

Mac OS X

The Apple Mac OS X operating system uses a hybrid structure. It is a layered system. The top layers include the Aqua user interface (Figure below) and a set of application environments and services. The Cocoa environment specifies an API for the Objective-C programming language, which is used for writing Mac OS X applications. Below these layers is the kernel environment, which consists primarily of the Mach microkernel and the BSD UNIX kernel. Mach provides memory management; support for remote procedure calls (RPCs) and interprocess communication (IPC) facilities, including message passing; and thread scheduling. The BSD component provides a BSD command-line interface and support for networking and file systems. The kernel environment also provides an I/O kit for the development of device drivers and dynamically loadable modules (kernel extensions).



iOS

iOS is a mobile operating system designed by Apple to run its smartphone, the iPhone, as well as its tablet computer, the iPad. iOS is structured on the Mac OS X operating system, with added functionality pertinent to mobile devices, but it does not directly run Mac OS X applications. The structure of iOS appears in the figure below. Cocoa Touch is
an API for Objective-C that provides several frameworks for developing applications that
run on iOS devices. The fundamental difference between Cocoa, mentioned earlier, and
Cocoa Touch is that the latter provides support for hardware features unique to mobile
devices, such as touch screens. The media services layer provides services for
graphics, audio, and video.

Android

The Android operating system was designed by the Open Handset Alliance (led primarily by Google) and was developed for Android smartphones and tablet computers. Whereas iOS is designed to run on Apple mobile devices and is closed source, Android runs on a variety of mobile platforms and is open source, partly explaining its rapid rise in popularity.

EVOLUTION OF AN OPERATING SYSTEM

As the demand for better processing speed and efficiency increased, operating systems were enhanced with extra features.

1. Serial Processing

In serial processing, the resources of the computer system are dedicated to a single program until its completion. Early computer systems were referred to as bare machines. Programs for the bare machine had to be developed manually, converted into binary code by hand, and entered into the system by means of switches. A program was started by loading the program counter with the address of its first instruction, and the results of execution were obtained by examining the corresponding memory locations. If errors were detected, the program instructions had to be changed and fed into the system again for execution.



The next significant development in computer system usage was the development of input/output devices and the evolution of many system programs. Language translators could be used to translate a programming language into executable form. Compilers or interpreters could also be used to convert program instructions into executable form. Another program, the loader, automates the process of loading the program instructions into main memory.

Drawbacks of serial processing

• Low productivity of the system
• Wastage of system resources
• Slow execution of programs

2. Batch Processing

The next significant evolution of the operating system was the development of another type of processing known as batch processing. In this case, programs of a similar type, which require the same set of resources and perform the same kind of task, are grouped into a batch and loaded into the system using an input storage device. Once the programs are loaded, they are automatically executed by the operating system in a serial manner. Along with the programs, instructions are embedded into the batch in the form of operating-system commands written in a language known as job control language (JCL). These instructions tell the operating system how to execute each job in the batch.

For example: JOB_END, JOB_START, SYS PRINT

A memory-resident portion of the batch operating system is known as the batch monitor. The batch monitor reads, interprets and executes JCL commands. Batch processing enables better resource utilization than serial processing, since a batch of programs gains control over the system even though the programs are executed in a serial manner.

Problems in Batch Processing

• The CPU sits idle when there is a job transition.
• There is a speed discrepancy between the fast CPU and the comparatively slow input/output devices, such as card readers and printers.
The first problem, the idle time of the CPU, can be overcome by a small program called a resident monitor, which always resides in memory. It acts according to the directives given by a programmer, such as marking a job's beginning and ending and commands for loading and executing programs.
The second problem was overcome through technological improvements that resulted in faster I/O devices. But CPU speed increased even faster. Therefore, the



need was to increase throughput and resource utilization by overlapping I/O and processing operations. Dedicated I/O processors and peripheral controllers brought about a major development. The development of the Direct Memory Access (DMA) chip was a major achievement; it transfers an entire block of data from its own memory buffer to main memory without intervention by the CPU. DMA can transfer data between high-speed I/O devices and main memory while the CPU is executing.

3. Multiprogramming

A batch operating system dedicates the resources of the computer system to a single program at a time. During the course of its execution, a program oscillates between two phases:
I. Computational-intensive phase
II. I/O-intensive phase

The computational-intensive phase is the period during which the program's instructions are executed on the CPU. The I/O-intensive phase is the period of execution during which the program leaves the CPU in order to perform an I/O operation.
In a multiprogramming operating system, when the executing program leaves the CPU to perform an I/O operation, the OS schedules the next ready program onto the CPU. When the earlier program completes its I/O, it again becomes ready, and the CPU is multiplexed among the programs. A significant performance gain is achieved by this interleaved execution of programs.

Advantages

• High (close to 100%) CPU utilization
• Better utilization of all system resources
• Faster execution
• Better system performance

TYPES OF OPERATING SYSTEM

The characteristics of different operating systems vary according to the following factors:
i. Processor scheduling
ii. Memory management
iii. I/O management
iv. File management

1. BATCH OPERATING SYSTEM


A batch processing environment requires grouping of similar jobs, which consist of programs, data and system commands. This type of processing suits programs with large computation times and no need for user interaction or involvement. Users are not required



to wait while the job is being processed. They can submit their programs to operators
and return later to collect them.
Some examples of such programs include payroll, forecasting, statistical analysis and
large scientific number crunching programs.
But it has two major disadvantages:
• Non-interactive environment
• Off-line debugging
Non-interactive environment: There are some difficulties with a batch system from the point of view of the programmer or user. Batch operating systems allow little or no interaction between users and executing programs. The turnaround time between job submission and job completion in a batch operating system is very high. Users have no control over the intermediate results of a program.
Off-line debugging: The second disadvantage of this approach is that programs must be debugged off-line, which means a programmer cannot correct bugs the moment they occur.
Process scheduling (i.e., the strategy for allocating the processor to a process) is on a first-come, first-served basis. Jobs are typically processed in the order of submission.
Memory Management: Memory is divided into two areas. One of them is permanently
occupied by the operating system, and the other is used to load programs for execution.
I/O management in batch processing is quite simple: allocation and deallocation of devices is trivial.
File management: Access to files is serial; little protection and no concurrency control of file access is required.
2. MULTIPROGRAMMING OPERATING SYSTEMS
Multiprogramming operating systems compared to batch operating systems are fairly
sophisticated. Multiprogramming has a significant potential for improving system
throughput and resource utilization relative to batch and serial processing.
Different forms of multiprogramming operating systems are multitasking, multiprocessor and multi-user operating systems.
Multitasking Operating Systems: A running state of a program, or an instance of a program in execution, is called a process or a task. A multitasking operating system (also called a multiprocessing operating system) supports two or more active processes simultaneously. In addition to supporting multiple concurrent processes (several processes in the execution state simultaneously), it allows the instructions and data from two or more separate processes to reside in primary memory simultaneously and multiplexes the processor and I/O devices among them.
Note: Multiprogramming implies multiprocessing or multitasking operation, but multiprocessing (or multitasking) operation does not imply multiprogramming. Multitasking operation is one of the mechanisms that a multiprogramming operating system employs in managing the totality of computer-related resources such as the CPU, memory and I/O devices.
The simplest form of multitasking is called serial multitasking or context switching. This is nothing more than stopping one task temporarily to work on another. While a program is running, you decide that you want to use the calculator, so you pop it up and use it. When you stop using the calculator, the program continues running.
A multiuser/multi-access operating system allows simultaneous access to a computer system through two or more terminals. Although frequently associated with multiprogramming, a multiuser operating system does not imply multiprogramming or multitasking. A dedicated transaction-processing system, such as a railway reservation system that supports hundreds of terminals under the control of a single program, is an example of a multiuser operating system.
3. TIME SHARING SYSTEM
It is a form of multiprogrammed operating system which operates in an interactive mode with a quick response time. The user types a request to the computer through a keyboard. The computer processes it, and a response (if any) is displayed on the user's terminal.
A time sharing system allows many users to simultaneously share the computer resources. Since each action or command in a time-shared system takes a very small fraction of time, only a little CPU time is needed for each user. As the CPU switches rapidly from one user to another, each user is given the impression that he has his own computer, while it is actually one computer shared among many users. Most time sharing systems use time-slice (round robin) scheduling of the CPU. In this approach, programs are executed with a rotating priority that increases during waiting and drops after the service is granted. In order to prevent a program from monopolizing the processor, a program executing longer than the system-defined time slice is interrupted by the operating system and placed at the end of the queue of waiting programs.
Memory management in a time-sharing system provides for the protection and separation of user programs.
Input/output management: The I/O features of a time-sharing system must be able to handle multiple users (terminals). As required by most multiuser environments, allocation and deallocation of devices must be performed in a manner that preserves system integrity and provides good performance.
File management: Concurrent and conflicting requests to access files are handled using access control, which provides protection to files.



4. REAL-TIME SYSTEMS
A real-time system is another form of operating system, used in environments where a large number of events, mostly external to the computer system, must be accepted and processed in a short time or within certain deadlines.
Examples of such applications are flight control and real-time simulations. Real-time systems are also frequently used in military applications.
A primary objective of a real-time system is to provide quick response times. User convenience and resource utilization are of secondary concern. In a real-time system, each process is assigned a certain level of priority according to the relative importance of the event it processes. The processor is normally allocated to the highest-priority process among those that are ready to execute. Higher-priority processes usually preempt the execution of lower-priority processes. This form of scheduling, called priority-based preemptive scheduling, is used by a majority of real-time systems.
Memory Management: In a real-time operating system there is little swapping of programs between primary and secondary memory. Most processes remain in primary memory in order to provide quick response; therefore, memory management in real-time systems is less demanding compared to other types of multiprogramming systems.
I/O Management: Time-critical device management is one of the main characteristics of real-time systems. They also provide sophisticated forms of interrupt management and I/O buffering.
File Management: The primary objective of file management in real-time systems is
usually the speed of access rather than efficient utilization of secondary storage.
Real-time operating systems are categorized into two:
• Hard real-time system:
  - Strict about each task and its deadline.
  - The preemption period must be very short (in the range of microseconds).
  Eg: rocket launching.
• Soft real-time system:
  - The task and its deadline are manageable, but the deadline should be met most of the time.
  - The preemption period can be longer (in the range of milliseconds).
  Eg: washing machine, camera.



5. PARALLEL SYSTEMS
Multiprocessor systems are also known as parallel systems or tightly coupled systems. Such systems have more than one processor sharing the computer bus, memory and peripheral devices.
Multiprocessor systems have three main advantages.
Increased throughput
Throughput means the amount of work completed in a unit of time. By increasing the number of processors, higher throughput can be achieved.
Economy of scale
Multiprocessor systems can cost less than equivalent multiple single-processor systems, because they can share peripherals, mass storage and power supplies.
Increased reliability
If functions can be distributed among several processors, then the failure of one processor will not halt the system; it will only slow it down.
If we have ten processors and one fails, then each of the remaining nine processors must pick up a share of the work of the failed processor. Thus, the entire system runs only 10% slower, rather than failing altogether. This ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Some systems go beyond graceful degradation and are called fault tolerant, because they can suffer a failure of any single component and still continue operation.
The Tandem system uses both hardware and software duplication to ensure continued operation despite faults. The system consists of two identical processors, each with its own local memory. The processors are connected by a bus. One processor is the primary and the other is the backup. At fixed checkpoints in the execution of a process, the state information and a copy of the memory status are copied from the primary machine to the backup. If a failure is detected, the backup copy is activated and restarted from the most recent checkpoint. This method is expensive, since it involves hardware duplication.
The most common multiprocessor system now in use is symmetric multiprocessing (SMP). It involves a symmetric multiprocessor system in which two or more identical processors connected to a shared memory have full access to all I/O devices and are controlled by a single operating system that treats all processors equally, reserving none for special purposes.
Some systems use asymmetric multiprocessing (AMP), in which each processor is assigned a specific task. A master processor controls the system; the other processors either look to the master for instructions or have predefined tasks. In AMP, all CPUs are not treated equally; for example, only one CPU may be allowed to perform I/O operations, so that there is asymmetry with regard to peripheral attachment.



Symmetric multiprocessing (SMP) versus asymmetric multiprocessing (AMP):

Basic:
SMP: each processor runs the tasks of the operating system.
AMP: only the master processor runs the tasks of the operating system.

Process:
SMP: processors take processes from a common ready queue, or there may be a private ready queue for each processor.
AMP: the master processor assigns processes to the slave processors, or they have some predefined processes.

Architecture:
SMP: all processors have the same architecture.
AMP: processors may have the same or different architectures.

Communication:
SMP: all processors communicate with one another via shared memory.
AMP: processors need not communicate, as they are controlled by the master processor.

Failure:
SMP: if a processor fails, the computing capacity of the system is reduced.
AMP: if the master processor fails, a slave is turned into the master to continue execution; if a slave processor fails, its task is switched to other processors.

Ease:
SMP: symmetric multiprocessing is complex, as all the processors need to be synchronized to maintain load balance.
AMP: asymmetric multiprocessing is simple, as only the master processor accesses the data structures.
6. DISTRIBUTED OPERATING SYSTEM

A distributed operating system is one that looks to its users like an ordinary centralized
operating system but runs on multiple independent CPUs. The key concept here is
transparency. In other words, the use of multiple processors should be invisible to the
user. In a true distributed system, users are not aware of where their programs are
being run or where their files are residing; they should all be handled automatically and
efficiently by the operating system. Distributed operating systems have many aspects in
common with centralized ones but they also differ in certain ways.

Distributed operating systems, for example, often allow programs to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms (scheduling refers to a set of policies and mechanisms built into the operating system that control the order in which the work to be done is completed) in order to achieve maximum utilization of CPU time.



Fault tolerance is another area in which distributed operating systems differ. Distributed systems are considered to be more reliable than uniprocessor-based systems. They continue to perform even if certain parts of the hardware malfunction.

Advantages of Distributed Operating Systems


There are three important advantages in the design of distributed operating systems:

1. Major breakthrough in microprocessor technology: Microprocessors have become very powerful and cheap compared with mainframes and minicomputers, so it has become attractive to think about designing large systems consisting of small processors. These distributed systems clearly have a price/performance advantage over more traditional systems.
2. Incremental growth: The second advantage is that if there is a need for 10 per cent more computing power, one should just add 10 per cent more processors. System architecture is crucial to this type of system growth, however, since it is hard to give each user of a personal computer another 10 per cent.
3. Reliability: Reliability and availability can also be a big advantage; a few parts of the
system can be down without disturbing people using the other parts.
7. NETWORK OPERATING SYSTEM
A network operating system is a collection of software and associated protocols that
allow a set of autonomous computers which are interconnected by a computer network
to be used together in a convenient and cost-effective manner. In a network operating
system, the users are aware of existence of multiple computers and can log in to remote
machines and copy files from one machine to another machine.
Some typical characteristics of network operating systems which make them different from distributed operating systems are the following:
• Each computer has its own private operating system instead of running part of a
global system wide operating system.

• Each user normally works on his/her own system; using a different system
requires some kind of remote login, instead of having the operating system
dynamically allocate processes to CPUs.

• Users are typically aware of where each of their files is kept and must move files from one system to another with explicit file transfer commands, instead of having file placement managed by the operating system. The system has little or no fault tolerance; if 5% of the personal computers crash, only 5% of the users are out of business.
A network operating system offers many capabilities, including:
• Allowing users to access the various resources of the network hosts
• Controlling access so that only users with the proper authorization are allowed to access particular resources.
• Making the use of remote resources appear to be identical to the use of local
resources
• Providing up-to-the-minute network documentation on-line.
OPERATING-SYSTEM SERVICES

An operating system provides an environment for the execution of programs. It provides certain services to programs and to the users of those programs.

i. User interface
Almost all operating systems have a user interface (UI). This interface can take several forms. One is a command-line interface (CLI), which uses text commands and a method for entering them (say, a keyboard for typing in commands in a specific format with specific options). Another is a batch interface, in which commands and directives to control those commands are entered into files, and those files are executed. Most commonly, a graphical user interface (GUI) is used. Here, the interface is a window system with a pointing device to direct I/O, choose from menus, and make selections, and a keyboard to enter text. Some systems provide two or all three of these variations.

ii. Program execution


The operating system must be able to load a program into memory and to run that program. The program must be able to end its execution, either normally or abnormally (indicating an error).

iii. I/O operations


A running program may require I/O operations. This I/O operation may be to access a
file or an I/O device. For efficiency and protection, users usually cannot control I/O
devices directly. Therefore, the operating system must provide a means to do I/O
operations.

iv. File-system manipulation


Programs must be able to read and write files and directories. They also need to create and delete them by name, search for a given file, and list file information. Some operating systems include permissions management to allow or deny access to files or directories based on file ownership.

v. Communications
There are many circumstances in which one process needs to exchange information
with another process. Such communication may occur between processes that are
executing on the same computer or between processes that are executing on different
computer systems tied together by a computer network. Communications may be
implemented via shared memory, in which two or more processes read and write to a



shared section of memory, or message passing, in which packets of information in
predefined formats are moved between processes by the operating system.
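As a small runnable sketch of the message-passing style (using Python's multiprocessing module for brevity; the parent/worker pair is made up for illustration, not a specific OS API), two processes exchange packets through a pipe supplied by the operating system:

from multiprocessing import Process, Pipe

def worker(conn):
    msg = conn.recv()                 # receive a packet from the other process
    conn.send("reply to: " + msg)     # send a packet back
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()    # the OS provides the communication channel
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send("hello")          # message passing, not shared memory
    print(parent_end.recv())          # prints: reply to: hello
    p.join()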

vi. Error detection


The operating system needs to detect and correct errors constantly. Errors
may occur in the CPU and memory hardware (such as a memory error or a power
failure), in I/O devices (such as a parity error on disk, a connection failure on a network,
or lack of paper in the printer), and in the user program (such as an arithmetic overflow,
an attempt to access an illegal memory location, or a too-great use of CPU time). For
each type of error, the operating system should take the appropriate action to ensure
correct and consistent computing.

vii. Resource allocation


When there are multiple users or multiple jobs running at the same time, resources
must be allocated to each of them. The operating system manages many different types
of resources such as CPU cycles, main memory, and file storage. There may also be
routines to allocate printers, USB storage drives, and other peripheral devices.

viii. Accounting
The OS keeps track of which users use how much and what kinds of computer resources.
This record keeping may be used for accounting (so that users can be billed) or simply
for accumulating usage statistics. Usage statistics may be a valuable tool for
researchers who wish to reconfigure the system to improve computing services.

ix. Protection and security


Protection involves ensuring that all access to system resources is controlled. Security
of the system from outsiders is also important. Such security starts with requiring each
user to authenticate himself or herself to the system, usually by means of a password,
to gain access to system resources.
THE PROCESS

A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file). In contrast, a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources. A program becomes a process when an executable file is loaded into memory.
A process is an instance of a program in execution. It is the smallest unit of work individually schedulable by an operating system. The process management functions performed by an operating system are:
• Creating and destroying processes.
• Controlling the progress of processes and ensuring each process reaches its completion at a positive rate.
• Acting on exceptional conditions, such as handling interrupts and errors, during the execution of a process.
• Allocating hardware resources among the processes.



• Providing a means of communication among processes.
The execution of a program is announced by a command like RUN. In response to the RUN command, the OS creates a process. Once created, a process becomes active and eligible to compete for system resources such as the CPU, memory and I/O devices. A program is a static concept, whereas a process is a dynamic concept.

PROCESS STATE

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states (a small sketch of these states and their transitions follows the list):
• Dormant: The process is not known to the OS.
• New: The process is newly created by the OS.
• Running: Instructions of the process are being executed on the CPU.
• Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready: The process has acquired all resources needed for its execution except the processor.
• Terminated: The process has finished execution.
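The following small Python sketch models the states above and a few legal transitions between them (simplified; real systems define more transitions than shown here):

# Allowed transitions between the process states listed above (simplified).
TRANSITIONS = {
    "dormant":    {"new"},
    "new":        {"ready"},
    "ready":      {"running"},
    "running":    {"waiting", "ready", "terminated"},
    "waiting":    {"ready"},
    "terminated": set(),
}

def move(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError("illegal transition: %s -> %s" % (state, new_state))
    return new_state

# A typical life cycle: created, scheduled, blocks on I/O, resumes, finishes.
state = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    state = move(state, nxt)
    print(state)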

PROCESS CONTROL BLOCK

Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains many pieces of information associated with a specific process, including the following (a toy PCB structure is sketched in code after the list):
• Process state: The state may be new, ready, running, waiting, halted, and so on.
• Program counter: The counter indicates the address of the next instruction to be executed for this process.
• CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
• CPU scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information: This information may include such items as the values of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
• Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
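A toy PCB can be sketched as a record holding the fields described above (field names here are illustrative; a real kernel's PCB layout is architecture-specific):

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                         # process identifier
    state: str = "new"                               # process state
    program_counter: int = 0                         # next instruction address
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU-scheduling information
    base: int = 0                                    # memory-management information
    limit: int = 0
    cpu_time_used: int = 0                           # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

print(PCB(pid=42, priority=3))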

PROCESS SWITCH/ CONTEXT SWITCH


A transition between two memory-resident processes in a multiprogramming system is called a process switch, task switch or context switch.
Process switches usually occur in response to events that change the state of a process. When a process executes, control is in the user address space. When an event occurs within the process, the process gives up the CPU and control goes back to the OS space in order to handle the event. This crossing of the protection boundary from the user space to the OS space is called a mode switch.



The OS then records the status of the running process in its PCB and suspends the process in order to handle the event. It then schedules the next process for execution by updating the PCB of the new process; control then crosses back from the OS space to the user space, and the new process starts running. Process switching is a complex task that involves a series of steps (sketched below); therefore, frequent process switching will affect the performance of a multiprogramming system.
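The steps can be sketched as follows (a deliberately simplified model in which the "CPU" is just a dictionary of register values and each PCB is a dictionary):

# Simplified sketch of a process switch using toy PCBs.
cpu = {"pc": 104, "acc": 7}                # registers of the running process

def process_switch(old_pcb, new_pcb, cpu):
    old_pcb["saved_cpu"] = dict(cpu)       # 1. record status of the running process in its PCB
    old_pcb["state"] = "ready"             # 2. suspend it
    cpu.clear()
    cpu.update(new_pcb["saved_cpu"])       # 3. restore the new process's saved context
    new_pcb["state"] = "running"           # 4. the new process starts running

a = {"pid": 1, "state": "running", "saved_cpu": {}}
b = {"pid": 2, "state": "ready", "saved_cpu": {"pc": 500, "acc": 0}}
process_switch(a, b, cpu)
print(cpu, a["state"], b["state"])         # the CPU now holds process 2's context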

THREADS
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization. It comprises a thread ID, a program counter, a register set, and a stack. Threads improve performance by weakening the process abstraction. A (heavyweight) process is one thread of control executing one program in one address space; a process may have multiple threads of control running different parts of a program in one address space. Because threads expose multitasking to the user (cheaply), they are more powerful, but more complicated.

An application typically is implemented as a separate process with several threads of control. A web browser might have one thread display images or text while another thread retrieves data from the network, for example. A word processor may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.
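The word-processor example can be imitated in a few lines of Python (a sketch only: the "spell checker" merely sleeps to stand in for a lengthy background operation):

import threading
import time

def spell_checker():
    for _ in range(3):
        time.sleep(0.1)                 # stands in for a lengthy check
        print("background: spelling checked")

t = threading.Thread(target=spell_checker)
t.start()

for key in "hi!":                       # the main thread keeps responding
    print("keystroke:", key)
    time.sleep(0.1)
t.join()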

THE BENEFITS OF MULTITHREADED PROGRAMMING

1. Responsiveness.
Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. This quality is especially useful in designing user interfaces. For instance, consider what happens when a user clicks a button that results



in the performance of a time-consuming operation. A single-threaded application would be unresponsive to the user until the operation had completed. In contrast, if the time-consuming operation is performed in a separate thread, the application remains responsive to the user.
2. Resource sharing.
Processes can share resources only through techniques such as shared memory and message passing. Such techniques must be explicitly arranged by the programmer. However, threads share the memory and the resources of the process to which they belong by default. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.
3. Economy.
Allocating memory and resources for process creation is costly. Because threads share
the resources of the process to which they belong, it is more economical to create and
context-switch threads. Empirically gauging the difference in overhead can be difficult,
but in general it is significantly more time consuming to create and manage processes
than threads. In Solaris, for example, creating a process is about thirty times slower
than is creating a thread, and context switching is about five times slower.

4. Scalability
The benefits of multithreading can be even greater in a multiprocessor architecture, where threads may be running in parallel on different processors.

CPU SCHEDULING

Scheduling refers to a set of policies and mechanisms built into the operating system
that govern the order in which the work to be done by a computer system is completed.
A scheduler is an OS module that selects the next job to be admitted into the system and the next process to run. The primary objective of scheduling is to optimize system performance in accordance with the criteria deemed most important by the system designer.

There are three types of schedulers in a complex operating system.

i. Long-term scheduler
ii. Short-term scheduler
iii. Medium-term scheduler

Scheduling Criteria

Many criteria have been suggested for comparing CPU-scheduling algorithms. Which
characteristics are used for comparison can make a substantial difference in which
algorithm is judged to be best. The criteria include the following:



• CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU
utilization can range from 0 to 100 percent. In a real system, it should range from 40
percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).

• Throughput: If the CPU is busy executing processes, then work is being done. One
measure of work is the number of processes that are completed per time unit, called
throughput. For long processes, this rate may be one process per hour; for short
transactions, it may be ten processes per second.

• Turnaround time: From the point of view of a particular process, the important
criterion is how long it takes to execute that process. The interval from the time of
submission of a process to the time of completion is the turnaround time. Turnaround
time is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.

• Waiting time: The CPU-scheduling algorithm does not affect the amount of time
during which a process executes or does I/O. It affects only the amount of time that a
process spends waiting in the ready queue. Waiting time is the sum of the periods spent
waiting in the ready queue.

• Response time: In an interactive system, turnaround time may not be the best
criterion. Often, a process can produce some output fairly early and can continue
computing new results while previous results are being output to the user. Thus,
another measure is the time from the submission of a request until the first response is
produced. This measure, called response time, is the time it takes to start responding,
not the time it takes to output the response. The turnaround time is generally limited by
the speed of the output device.

SCHEDULING ALGORITHMS

CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU. There are many different CPU-scheduling algorithms.

1. First-Come, First-Served Scheduling

By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. The code for FCFS scheduling is simple to write and understand. On the negative side, the average waiting time under



the FCFS policy is often quite long. Consider the following set of processes that arrive
at time 0, with the length of the CPU burst given in milliseconds:

Process Burst Time

P1 24

P2 3

P3 3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart, which is a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes:

|          P1          | P2 | P3 |
0                      24   27   30

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:

| P2 | P3 |          P1          |
0    3    6                      30

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial. Thus, the average waiting time under an FCFS policy is generally not minimal and may vary substantially if the processes' CPU burst times vary greatly.
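The arithmetic above is easy to reproduce. In this Python sketch, each process's waiting time under FCFS is simply the sum of the burst times of the processes served before it:

def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)     # the process waits until the CPU becomes free
        clock += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))   # order P1,P2,P3 -> [0, 24, 27], average 17
print(fcfs_waiting_times([3, 3, 24]))   # order P2,P3,P1 -> [0, 3, 6], average 3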

2. Shortest-Job-First Scheduling

A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. Note that a more appropriate term for this scheduling method would be the shortest-next-CPU-burst algorithm, because scheduling depends on the length of the next CPU burst of a process, rather than its total length.

As an example of SJF scheduling, consider the following set of processes, with the
length of the CPU burst given in milliseconds:
Process Burst Time

P1 6

P2 8

P3 7

P4 3

Using SJF scheduling, we would schedule these processes according to the following Gantt chart:

| P4 |    P1    |     P3     |      P2      |
0    3          9            16             24

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if we were using the FCFS scheduling scheme, the average waiting time would be 10.25 milliseconds.
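A short Python sketch reproduces this result for the nonpreemptive case, where all processes arrive at time 0 and the shortest job is simply served first:

def sjf_waiting_times(bursts):
    order = sorted(bursts.items(), key=lambda kv: kv[1])   # shortest burst first
    waits, clock = {}, 0
    for name, burst in order:
        waits[name] = clock
        clock += burst
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits)                                # P4: 0, P1: 3, P3: 9, P2: 16
print(sum(waits.values()) / len(waits))     # 7.0 milliseconds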

The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing. The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process. A preemptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.

3. Priority Scheduling

The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower the priority, and vice versa.

As an example, consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ··· , P5, with the length of the CPU burst given in milliseconds:

Process Burst Time Priority

P1 10 3



P2 1 1

P3 2 4

P4 1 5

P5 5 2

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

| P2 |   P5   |       P1       | P3 | P4 |
0    1        6                16   18   19

The average waiting time is 8.2 milliseconds.
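Again the schedule can be reproduced with a short sketch (nonpreemptive, all processes arriving at time 0; a lower number means a higher priority):

def priority_waiting_times(procs):
    order = sorted(procs, key=lambda p: p["priority"])   # highest priority first
    waits, clock = {}, 0
    for p in order:
        waits[p["name"]] = clock
        clock += p["burst"]
    return waits

procs = [{"name": "P1", "burst": 10, "priority": 3},
         {"name": "P2", "burst": 1,  "priority": 1},
         {"name": "P3", "burst": 2,  "priority": 4},
         {"name": "P4", "burst": 1,  "priority": 5},
         {"name": "P5", "burst": 5,  "priority": 2}]
waits = priority_waiting_times(procs)
print(waits)                                # P2: 0, P5: 1, P1: 6, P3: 16, P4: 18
print(sum(waits.values()) / len(waits))     # 8.2 milliseconds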

A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU. A solution to the problem of indefinite blockage of low-priority processes is aging. Aging involves gradually increasing the priority of processes that wait in the system for a long time.

4. Round-Robin Scheduling

The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length. The ready queue is treated as a circular queue.

The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum. To implement RR scheduling, we again treat the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.

The average waiting time under the RR policy is often long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process Burst Time

P1 24



P2 3

P3 3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2. Process P2 does not need 4 milliseconds, so it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum. The resulting RR schedule is as follows:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Let’s calculate the average waiting time for this schedule. P1 waits for 6 milliseconds (10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average waiting time is 17/3 = 5.66 milliseconds.
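The same numbers fall out of a direct simulation of the ready queue (a sketch assuming all processes arrive at time 0):

from collections import deque

def rr_waiting_times(bursts, quantum):
    remaining = dict(bursts)
    finish, clock = {}, 0
    ready = deque(bursts)                  # FIFO ready queue
    while ready:
        name = ready.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            ready.append(name)             # preempted: back to the tail
        else:
            finish[name] = clock
    # waiting time = turnaround time - burst time
    return {n: finish[n] - bursts[n] for n in bursts}

waits = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(waits)                               # P1: 6, P2: 4, P3: 7
print(sum(waits.values()) / len(waits))    # 5.66... milliseconds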

5. Multilevel Queue Scheduling

Another class of scheduling algorithms has been created for situations in which
processes are easily classified into different groups. For example, a common division is
made between foreground (interactive) processes and background (batch) processes.
These two types of processes have different response-time requirements and so may
have different scheduling needs. In addition, foreground processes may have priority
(externally defined) over background processes. A multilevel queue scheduling
algorithm partitions the ready queue into several separate queues. The processes are
permanently assigned to one queue, generally based on some property of the process,
such as memory size, process priority, or process type. Each queue has its own
scheduling algorithm. For example, separate queues might be used for foreground and
background processes. The foreground queue might be scheduled by an RR algorithm,
while the background queue is scheduled by an FCFS algorithm. In addition, there must
be scheduling among the queues, which is commonly implemented as fixed-priority
preemptive scheduling.

For example, the foreground queue may have absolute priority over the background
queue. Let’s look at an example of a multilevel queue scheduling algorithm with five
queues, listed below in order of priority:

1. System processes

2. Interactive processes



3. Interactive editing processes

4. Batch processes

5. Student processes

Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive
processes, and interactive editing processes were all empty. If an interactive editing
process entered the ready queue while a batch process was running, the batch process
would be preempted. Another possibility is to time-slice among the queues. Here, each
queue gets a certain portion of the CPU time, which it can then schedule among its
various processes. For instance, in the foreground–background queue example, the
foreground queue can be given 80 percent of the CPU time for RR scheduling among
its processes, while the background queue receives 20 percent of the CPU to give to its
processes on an FCFS basis.

6. Multilevel Feedback Queue Scheduling

Normally, when the multilevel queue scheduling algorithm is used, processes are permanently assigned to a queue when they enter the system. If there are separate queues for foreground and background processes, for example, processes do not move from one queue to the other, since processes do not change their foreground or background nature. This setup has the advantage of low scheduling overhead, but it is inflexible. The multilevel feedback queue scheduling algorithm, in contrast, allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be



moved to a lower-priority queue. This scheme leaves I/O-bound and interactive
processes in the higher-priority queues. In addition, a process that waits too long in a
lower-priority queue may be moved to a higher-priority queue. This form of aging
prevents starvation.

For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2. A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and put into queue 2. Processes in queue 2 are run on an FCFS basis but are run only when queues 0 and 1 are empty. This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less.
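This three-queue scheme can be sketched as follows (the burst times are made up for illustration; because every process here arrives at time 0, draining the queues level by level is equivalent to always preferring the higher-priority queue):

from collections import deque

def mlfq(bursts):
    queues = [deque(bursts.items()), deque(), deque()]
    quanta = [8, 16, None]                 # None = run to completion (FCFS)
    clock = 0
    for level in range(3):
        while queues[level]:
            name, left = queues[level].popleft()
            run = left if quanta[level] is None else min(quanta[level], left)
            clock += run
            left -= run
            if left > 0:
                queues[level + 1].append((name, left))   # demote to a lower queue
            else:
                print("%s finishes at t=%d in queue %d" % (name, clock, level))

mlfq({"A": 5, "B": 30, "C": 12})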

