
OPERATING SYSTEM - INTRODUCTION

An Operating System (OS) is system software that acts as an interface between the computer user and the computer hardware.

Objectives

• To make the computer system convenient to use.

• To manage the computer's resources, such as the central processing unit, memory, disk drives, and printers, efficiently.

• To execute and provide services for application software.

Types of operating systems


Operating systems can be classified into various categories with respect to the type of processing they support. The main types of OS are as follows:

1. Batch OS

This is the most primitive type of OS. Batch processing required that a program, its related data, and the relevant control commands be submitted together in the form of a job, normally on punched cards. The batch monitor automatically batches jobs with similar needs and executes the batches one by one without user intervention. Thus long-running jobs (payroll, forecasting, statistical analysis) with little operator interaction are well served by batch processing. However, because of long turnaround delays and the infeasibility of online debugging, it is not suitable for software development.

Features of batch OS

Scheduling: follows FCFS scheduling, but suffers from long average turnaround time and average waiting time.

Memory management: memory was divided into two permanent partitions. Part 1 was permanently occupied by the OS, and Part 2 dynamically loaded programs for execution.

I/O management: since only one batch is under execution at a time, there was no contention for the allocation of I/O devices, so simple program-controlled I/O was used to access them.

File management: since only one program accesses a file at a time, there was no need to provide concurrency control.

2. Multiprogramming OS

Multiprogramming systems permit multiple programs to be loaded into memory and executed concurrently, thus improving the utilization of system resources. A program in execution is called a process or a task. A multiprogramming OS has multitasking capability with good memory management.

Multi-user OS: a multiprogramming OS that supports simultaneous interaction with multiple users.

Multi-access OS: refers to an OS which permits simultaneous access to a computer system through multiple terminals, but without multiprogramming (e.g., an airline reservation system).

Multiprocessor OS: refers to an environment of multiple CPUs, tightly coupled through a common bus, with multitasking features.

3. Time-Sharing Operating Systems

Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing, or multitasking, is a logical extension of multiprogramming. Processor time that is shared among multiple users simultaneously is termed time-sharing.
The main difference between multiprogrammed batch systems and time-sharing systems is that in multiprogrammed batch systems the objective is to maximize processor use, whereas in time-sharing systems the objective is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user can receive an immediate response. For example, in transaction processing, the processor executes each user program in a short burst, or quantum, of computation: if n users are present, each user gets a time quantum in turn. When the user submits a command, the response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the processor's time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.
Advantages of time-sharing operating systems are as follows −

 Provides the advantage of quick response.


 Avoids duplication of software.
 Reduces CPU idle time.
Disadvantages of Time-sharing operating systems are as follows −

 Problem of reliability.
 Question of security and integrity of user programs and data.
 Problem of data communication.

4. Real-Time Operating System

A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is very small compared with online processing.
Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data, and a real-time system can be used as a control device in a dedicated application. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
There are two types of real-time operating systems.

Hard real-time systems


Hard real-time systems guarantee that critical tasks complete on time. In hard real-
time systems, secondary storage is limited or missing and the data is stored in ROM. In these
systems, virtual memory is almost never found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.

5. Distributed Operating System

Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. Data processing jobs are distributed among the processors accordingly.
The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function, and are referred to as sites, nodes, computers, and so on.
The advantages of distributed systems are as follows −

 With resource sharing facility, a user at one site may be able to use the resources
available at another.
 Users can speed up the exchange of data with one another via electronic mail.
 If one site fails in a distributed system, the remaining sites can potentially continue
operating.
 Better service to the customers.
 Reduction of the load on the host computer.
 Reduction of delays in data processing.

6. Network Operating System

A network operating system runs on a server and provides the server the capability to manage data, users, groups, security, applications, and other networking functions. The primary purpose of a network operating system is to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN), a private network, or other networks.
Examples of network operating systems include Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
The advantages of network operating systems are as follows −

 Centralized servers are highly stable.


 Security is server managed.
 Upgrades to new technologies and hardware can be easily integrated into the system.
 Remote access to servers is possible from different locations and types of systems.
The disadvantages of network operating systems are as follows −

 High cost of buying and running a server.


 Dependency on a central location for most operations.
 Regular maintenance and updates are required.

7. Handheld Operating Systems

They are designed to run on handheld machines such as smartphones and tablets, which have slower processors and less memory; accordingly, they are designed to use less memory and require fewer resources. Handheld operating systems are also designed to work with different types of hardware than standard desktop operating systems.

E.g.: Windows, Android, iOS, etc.

8. Embedded Operating System

An embedded operating system (OS) is a specialized operating system designed to perform a specific task for a device that is not a computer. An embedded operating system's main job is to run the code that allows the device to do its job. The embedded OS also makes the device's hardware accessible to the software running on top of the OS.

An embedded system is a computer that supports a machine. Examples include computers in cars, traffic lights, digital televisions, ATMs, airplane controls, point-of-sale (POS) terminals, digital cameras, GPS navigation systems, elevators, and digital media receivers, among many other possibilities.

E.g.: Windows CE, Palm OS, Android, etc.

9. Sensor Node Operating System

A sensor node is a node in a sensor network (a group of spatially dispersed, dedicated sensors for monitoring and recording the physical conditions of the environment and organizing the collected data at a central location; wireless sensor networks, WSNs, measure environmental conditions like temperature, sound, pollution levels, humidity, wind, and so on) that is capable of performing some processing, gathering sensory information, and communicating with other connected nodes in the network. A sensor node operating system is designed to manage sensor nodes.

Sensos is a sensor node operating system with a device management scheme for sensor nodes.
10. Smart Card Operating System

The smart card operating system (Card OS) is the hardware-specific firmware (firmware is a specific class of computer software that provides low-level control for a device's hardware) that provides basic functionality such as secure access to on-card storage, authentication, and encryption.

E.g.: MULTOS

FUNCTIONS OF OPERATING SYSTEM


Following are some of the important functions of an operating system.

• Memory Management

• Process Management

• Device Management

• File Management

• Security

• Control over system performance

• Job accounting

• Error detecting aids

• Coordination between other software and users

Memory Management

Memory management refers to the management of main memory. Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An Operating System does the following activities for memory management −

• Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.

• In multiprogramming, the OS decides which process will get memory when space becomes available.

• Allocation and de-allocation of dynamic memory.

• Swapping-in and swapping-out of processes.

Process Management

A process is an instance of a program in execution. An Operating System does the following activities for process management −

 Process creation, which involves loading the program from secondary storage into memory and commencing its execution.

 Process scheduling: allocating the CPU to a process when the CPU becomes free.
 Suspending a process: transferring the process from the running state to the waiting state.
 Resuming a process: transferring the process from the waiting state to the running state.
 Providing mechanisms for process synchronization.
 Providing mechanisms for interprocess communication.
 Providing mechanisms for deadlock handling.
 Process deletion and process termination.

Device Management

An Operating System manages device communication via their respective drivers. It does the
following activities for device management −

• Keeps track of all devices.

• Decides which process gets the device when and for how much time.

• Allocates devices in an efficient way.

• De-allocates devices.

File Management

Computers can store information on different media like hard disks, floppy disks, CDs, magnetic tapes, etc. All these media have different characteristics in terms of physical organization, capacity, access methods, transfer rate, and so on, but for convenient access the OS provides a uniform logical view of information storage, called a file. Files are normally organized into directories for easy navigation and usage. An Operating System does the following activities for file management −

 Creating and deleting files.
 Creating and deleting directories.
 Supporting the manipulation of files and directories.
 Mapping files onto secondary storage.
 Backing up files periodically.

Security

By means of passwords and other similar techniques, it prevents unauthorized access to programs and data.

Control over system performance

Recording delays between requests for a service and responses from the system.

Job accounting

− Keeping track of time and resources used by various jobs and users.

Error detecting aids

− Production of traces, error messages, and other debugging and error-detecting aids.

Coordination between other software and users

− Coordination and assignment of compilers, interpreters, assemblers, and other software to the various users of the computer system.
OPERATING SYSTEM SERVICES

An operating system provides an environment for the execution of user programs. The OS provides certain services to programs and to the users of those programs for accessing system resources. Some of the common services are:

1. Program Execution

The main purpose of an OS is to provide an efficient and convenient environment for the execution of programs. So, an OS must provide various functions for loading a program into RAM and executing it. Each executing program must terminate, either normally or abnormally.

2. I/O Operations

A running program may need I/O operations for reading in input data and for outputting results. This I/O may be from/to a file or from/to an I/O device. For each device some special functions may be necessary (such as rewinding a tape or clearing the screen). All these operations are managed by the OS.

3. File Manipulation

Each executing program may need to create, delete, and manipulate files, which is managed by the OS.

4. Communications

The OS manages inter-process communication between processes executing on the same computer or running on different computers in a distributed/multiprocessor environment. An OS provides mechanisms for this inter-process communication, like mailboxes, shared memory, etc.

5. Error Detection and Recovery

Errors may occur during the execution of a program, like divide by zero, memory access violation, etc. The OS should provide for the detection of such errors (or exceptions) and handle recovery (called exception handling).

6. Resource Allocation

When multiple users are logged onto the system or multiple jobs are running concurrently, resources need to be shared among them. The OS decides on the allocation of resources: for example, the CPU scheduling algorithm determines how control of the CPU is shared among the concurrent processes. Similarly, there are routines for the allocation and deallocation of other resources like memory, I/O devices, files, etc.

OS - System View and User View

An operating system is a construct that allows user application programs to interact with the system hardware. The operating system by itself does not provide any function, but it provides an environment in which different applications and programs can do useful work.

The operating system can be observed from the point of view of the user or the system. This is
known as the user view and the system view respectively. More details about these are given as follows

User View

The user view depends on the system interface that is used by the users. The different types of
user view experiences can be explained as follows −

If the user is using a personal computer, the operating system is largely designed to make the
interaction easy. Some attention is also paid to the performance of the system, but there is no need for
the operating system to worry about resource utilization. This is because the personal computer uses all
the resources available and there is no sharing.

If the user is using a system connected to a mainframe or a minicomputer, the operating system is largely concerned with resource utilization. This is because there may be multiple terminals connected to the mainframe, and the operating system makes sure that all the resources, such as CPU, memory, I/O devices, etc., are divided uniformly between them.

If the user is sitting at a workstation connected to other workstations through networks, then the operating system needs to focus on both the individual usage of resources and sharing through the network. This happens because the workstation exclusively uses its own resources, but it also needs to share files etc. with other workstations across the network.

If the user is using a handheld computer such as a mobile phone, then the operating system handles the usability of the device, including a few remote operations. The battery level of the device is also taken into account.

There are some devices that have little or no user view because there is no interaction with the users. Examples are embedded computers in home devices, automobiles, etc.

System View

From the computer system's point of view, the operating system is the bridge between applications and hardware. It is the program most intimate with the hardware and is used to control it as required.

The different types of system view for operating system can be explained as follows:

The system views the operating system as a resource allocator. There are many resources such as
CPU time, memory space, file storage space, I/O devices etc. that are required by processes for
execution. It is the duty of the operating system to allocate these resources judiciously to the processes
so that the computer system can run as smoothly as possible.

The operating system can also work as a control program. It manages all the processes and I/O
devices so that the computer system works smoothly and there are no errors. It makes sure that the I/O
devices work in a proper manner without creating problems.

Operating systems can also be viewed as a way to make using hardware easier. Computers were built to solve user problems easily. However, it is not easy to work directly with the computer hardware, so operating systems were developed to make communicating with the hardware easier.

An operating system can also be considered as a program running at all times in the background
of a computer system (known as the kernel) and handling all the application programs. This is the
definition of the operating system that is generally followed.
The Operating System as a Resource Manager

Internally, an operating system acts as a manager of the resources of the computer system, such as the processor, memory, files, and I/O devices. In this role, the operating system keeps track of the status of each resource and decides who gets a resource, for how long, and when. In systems that support the concurrent execution of programs, the operating system resolves conflicting requests for resources in a manner that preserves system integrity, and in doing so attempts to optimize the resulting performance.

Basic concepts of process

A process is a program in execution. A process is not the same as the program code, but a lot more than it. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. Attributes held by a process include its hardware state, memory, CPU, etc.
Process memory is divided into four sections for efficient working:

 The Text section is made up of the compiled program code, read in from non-volatile
storage when the program is launched.
 The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
 The Heap is used for the dynamic memory allocation, and is managed via calls to new,
delete, malloc, free, etc.
 The Stack is used for local variables. Space on the stack is reserved for local variables
when they are declared.
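A minimal C sketch of where these kinds of data live (illustrative only; the variable names are made up):

#include <stdio.h>
#include <stdlib.h>

int initialized = 42;            /* Data section: global, initialized        */
static int counter;              /* Data section (BSS): static, zero-filled  */

int main(void)                   /* main's instructions live in the Text section */
{
    int local = 7;                           /* Stack: local variable        */
    int *dynamic = malloc(sizeof(int));      /* Heap: dynamic allocation     */
    if (dynamic == NULL)
        return 1;
    *dynamic = 99;

    printf("data=%d bss=%d stack=%d heap=%d\n",
           initialized, counter, local, *dynamic);

    free(dynamic);                           /* heap memory must be released */
    return 0;
}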

Different Process States

Processes in the operating system can be in any of the following states:

 NEW- The process is being created.

 READY- The process is waiting to be assigned to a processor.

 RUNNING- Instructions are being executed.

 WAITING- The process is waiting for some event to occur(such as an I/O completion or

reception of a signal).
 TERMINATED- The process has finished execution.
Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every process. The PCB is identified by an integer process ID (PID), and it keeps all the information needed to keep track of the process. There is a Process Control Block for each process, enclosing all the information about the process, including the following:

 Process State: It can be running, waiting etc.


 Process ID and the parent process ID.
 CPU registers: these include accumulators, index registers, stack pointers, general-purpose registers, and any condition-code information. This information is used to let the process continue correctly after an interrupt.
 Program Counter: holds the address of the next instruction to be executed for this process.
 CPU Scheduling information: Such as priority information and pointers to scheduling
queues.
 Memory Management information: this may include items such as the values of the base and limit registers and the page tables or segment tables, depending on the memory system used by the OS.
 Accounting information: this includes the amount of CPU and real time used, time limits, account numbers, and job or process numbers.
 I/O Status information: this information includes devices allocated, open file tables, etc.
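The fields above can be pictured as a C structure. The following is a simplified, hypothetical PCB layout (real kernels, e.g. Linux's task_struct, hold far more fields, and the sizes chosen here are arbitrary):

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process ID                          */
    int             ppid;            /* parent process ID                   */
    enum proc_state state;           /* running, waiting, etc.              */
    unsigned long   pc;              /* saved program counter               */
    unsigned long   regs[16];        /* saved CPU registers (16 assumed)    */
    int             priority;        /* CPU-scheduling information          */
    unsigned long   base, limit;     /* memory-management registers         */
    unsigned long   cpu_time_used;   /* accounting information              */
    int             open_files[20];  /* I/O status: open-file table         */
    struct pcb     *next;            /* link to the next PCB in a queue     */
};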
PROCESS SCHEDULING

Definition

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time-sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and thus control the degree of multiprogramming.

Process Scheduling Queues

The Operating System maintains the following important process scheduling queues −

Job queue − This queue keeps all the processes in the system. As processes enter the
system, they are put into a job queue, which consists of all processes in the system.

Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue. This queue is generally stored
as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list.
Each PCB includes a pointer field that points to the next PCB in the ready queue.
Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.

Schedulers

A process migrates among the various scheduling queues throughout its lifetime. The
operating system must select, for scheduling purposes, from these queues in some fashion.
processes Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −

 Long-Term Scheduler

 Short-Term Scheduler

 Medium-Term Scheduler

Long-Term Scheduler

The processes that enter the system are kept on a mass-storage device, typically a disk (called the job pool). The long-term scheduler, or job scheduler, selects processes from this pool and loads them into memory for execution; in other words, it moves jobs from the job queue to the ready queue.

Short-Term Scheduler

The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them; in other words, it selects a job from the ready queue and allocates the CPU to it.

Medium Term Scheduler

A running process may become suspended if it makes an I/O request. Suspended processes cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage (swapped out). Later, the process can be reintroduced into memory (swapped in), and its execution can be continued where it left off. This scheme is called swapping.

Medium-term scheduling is a part of swapping. The process is swapped out, and is later swapped in, by the medium-term scheduler.
Comparison between schedulers

• Role: the long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.

• Speed: the long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies in between the other two.

• Degree of multiprogramming: the long-term scheduler controls it; the short-term scheduler provides lesser control over it; the medium-term scheduler reduces it.

• Presence in time-sharing systems: the long-term scheduler is almost absent or minimal; the short-term scheduler is also minimal; the medium-term scheduler is an integral part of such systems.

• Selection: the long-term scheduler selects processes from the job pool and loads them into memory for execution; the short-term scheduler selects among those processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

I/O-BOUND AND CPU-BOUND PROCESSES

An I/O-bound process spends more of its time doing I/O than it spends doing computation.

A CPU-bound process generates I/O requests infrequently, using more of its time doing computation.

It is important that the long-term scheduler selects a good process mix of CPU-bound and I/O-bound processes. If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. If all processes are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes.
Context Switch

When an interrupt occurs, the system needs to save the current context of the process running on the CPU so that its execution can be resumed from the same point at a later time. The context is represented in the PCB of the process. Switching the CPU from executing one process to executing another requires a state save of the current process and a state restore of a different process; this task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.

OPERATIONS ON PROCESS

 PROCESS CREATION
 PROCESS TERMINATION

PROCESS CREATION

Process creation is the task of creating new processes. There are different ways to create a new process. A new process can be created at the time of initialization of the operating system, or when system calls such as fork() are issued by other processes. The process which creates a new process using system calls is called the parent process, while the new process that is created is called the child process. Child processes can themselves create new processes using system calls. A new process can also be created by the operating system based on a request received from the user. Each process is given an integer identifier, termed its process identifier, or PID.

Depending on the system implementation, a child process may receive some amount of shared resources from its parent. Child processes may or may not be limited to a subset of the resources originally allocated to the parent, preventing runaway children from consuming all of a certain system resource.

 When a process creates a new process, two possibilities for execution exist:

◦ The parent continues to execute concurrently with its children.

◦ The parent waits until some or all of its children have terminated.

 Two possibilities for the address space of the child relative to the parent:

◦ The child may be an exact duplicate of the parent, sharing the same program and data
segments in memory. Each will have their own PCB, including program counter,
registers, and PID. This is the behavior of the fork system call in UNIX.

◦ The child process may have a new program loaded into its address space, with all new
code and data segments.
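A short C sketch of both possibilities on UNIX: fork() duplicates the parent, and the child may then optionally load a new program with exec (here execlp runs ls, an arbitrary choice for illustration):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* duplicate the calling process */

    if (pid < 0) {
        perror("fork");                 /* process creation failed */
    } else if (pid == 0) {
        /* Child: starts as an exact duplicate of the parent, then
           replaces its address space with a new program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec failed */
    } else {
        /* Parent: continues to execute concurrently with the child. */
        printf("parent: created child with PID %d\n", (int)pid);
    }
    return 0;
}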

Process Termination

 Processes may request their own termination by making the exit() system call, typically returning an int. This int is passed along to the parent if it is doing a wait(), and is typically zero on successful completion and some non-zero code in the event of problems.
 Processes may also be terminated by the system for a variety of reasons, including:

◦ The inability of the system to deliver necessary system resources.

◦ In response to a KILL command or other unhandled process interrupt.

◦ A parent may kill its children if the task assigned to them is no longer needed.

 If the parent exits, the system may or may not allow the child to continue without a parent.

 When a process terminates, all of its system resources are freed up, open files are flushed and closed, etc. The process's termination status and execution times are returned to the parent if the parent is waiting for the child to terminate, or are eventually returned to init if the process becomes an orphan.
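A matching C sketch of exit() and wait(): the child requests its own termination with a status code, and the parent collects it (the exit code 7 is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        exit(7);                   /* child requests its own termination */
    } else if (pid > 0) {
        int status;
        wait(&status);             /* parent blocks until the child ends */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}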

Interprocess Communication

Processes executing in an OS may be:

 Independent processes, operating concurrently on a system, are those that can neither affect other processes nor be affected by other processes.

 Cooperating processes are those that can affect or be affected by other processes.

There are several reasons why cooperating processes are allowed:

◦ Information Sharing - There may be several processes which need access to the same
file for example. ( e.g. pipelines. )

◦ Computation speedup - Often a problem can be solved faster if it can be broken down into sub-tasks to be solved simultaneously (particularly when multiple processors are involved).

◦ Modularity - The most efficient architecture may be to break a system down into
cooperating modules. ( E.g. databases with a client-server architecture. )

◦ Convenience - Even a single user may be multi-tasking, such as editing, compiling, printing, and running the same code in different windows.

Cooperating processes require some type of inter-process communication that will allow them to exchange data and information. There are two fundamental models of interprocess communication:
◦ Shared Memory systems

◦ Message Passing systems.


 Shared-Memory Systems

In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. Normally, the operating system tries to prevent one process from accessing another process's memory; shared memory requires that two or more processes agree to remove this restriction. They can then exchange information by reading and writing data in the shared areas. The form of the data and the location are determined by these processes and are not under the operating system's control. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
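A sketch of the writer side using POSIX shared memory (the region name "/demo_shm" and the message are made up; compile with -lrt on Linux; error checking is omitted for brevity):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, 4096);                        /* size the shared region */

    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);     /* map it into our space  */

    strcpy(region, "hello from the writer");    /* write into the region  */
    /* A reader process would shm_open the same name, mmap it, and read
       the bytes; both must coordinate so they do not write at once. */
    return 0;
}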

 Message-Passing Systems

Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space, and is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. For example, a chat program used on the World Wide Web could be designed so that chat participants communicate with one another by exchanging messages. A message-passing facility provides at least two operations: send(message) and receive(message). Messages sent by a process can be of either fixed or variable size. A communication link must be established between the cooperating processes before messages can be sent.
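The send(message)/receive(message) pair can be realized, for example, with POSIX message queues. A sketch (the queue name "/demo_mq" is made up; compile with -lrt on Linux; error checking omitted):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0666, &attr);

    mq_send(mq, "ping", 5, 0);                  /* send(message)    */

    char buf[64];
    mq_receive(mq, buf, sizeof(buf), NULL);     /* receive(message) */
    printf("got: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");                      /* remove the queue */
    return 0;
}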

 There are several methods for logically implementing a link and the send and receive
operations:

◦ Direct or indirect communication ( naming )

◦ Synchronous or asynchronous communication

◦ Automatic or explicit buffering.

 Direct communication

With direct communication, the sender must know the name of the receiver to which it wishes to send a message, using two primitives:

send(P, message) - send a message to process P

receive(Q, message) - receive a message from process Q

◦ There is a one-to-one link between every sender-receiver pair, with the following properties:
◦ A link is established automatically between every pair of processes that want to communicate.

◦ A link is associated with exactly two processes.

◦ Between each pair of processes, there exists exactly one link.

Direct communication can be symmetric or asymmetric. For symmetric communication, the receiver must also know the name of the sender from which it wishes to receive messages. For asymmetric communication, this is not necessary.

 Indirect communication

Indirect communication uses shared mailboxes, or ports.

◦ Multiple processes can share the same mailbox or boxes.

◦ Only one process can read any given message in a mailbox. Initially the process that
creates the mailbox is the owner, and is the only one allowed to read mail in the
mailbox, although this privilege may be transferred.

 ( Of course the process that reads the message can immediately turn around
and place an identical message back in the box for someone else to read, but
that may put it at the back end of a queue of messages. )

◦ The OS must provide system calls to create and delete mailboxes, and to send and
receive messages to/from mailboxes.

Synchronization

Either the sending or the receiving of messages (or neither, or both) may be blocking or non-blocking (synchronous or asynchronous):

 Blocking send - the sending process is blocked until the message is received by the receiving process or the mailbox.

 Non-blocking send - the sending process sends the message and resumes operation.

 Blocking receive - the receiver blocks until a message is available.

 Non-blocking receive - the receiver retrieves either a valid message or none.

Buffering

Messages are passed via queues, which may have one of three capacity configurations:

◦ Zero capacity - Messages cannot be stored in the queue, so senders must block until
receivers accept the messages.
◦ Bounded capacity- There is a certain pre-determined finite capacity in the queue.
Senders must block if the queue is full, until space becomes available in the queue, but
may be either blocking or non-blocking otherwise.

◦ Unbounded capacity - The queue has a theoretical infinite capacity, so senders are
never forced to block.

CPU Scheduling

 A scheduling system allows one process to use the CPU while another is waiting for I/O, thereby
making full use of otherwise lost CPU cycles.

CPU-I/O Burst Cycle

 Almost all processes alternate between two states in a continuing cycle,

◦ A CPU burst of performing calculations, and

◦ An I/O burst, waiting for data transfer in or out of the system.

 CPU Scheduler

Whenever the CPU becomes idle, it is the job of the CPU Scheduler ( the short-term scheduler )
to select another process from the ready queue to run next.

The storage structure for the ready queue need not be a FIFO queue, and the algorithm used to select the next process can vary. There are several alternatives to choose from, as well as numerous adjustable parameters for each algorithm.

 Preemptive & non-preemptive scheduling

CPU scheduling decisions take place under one of four conditions:

◦ When a process switches from the running state to the waiting state, such as for an I/O
request or invocation of the wait( ) system call.

◦ When a process switches from the running state to the ready state, for example in
response to an interrupt.

◦ When a process switches from the waiting state to the ready state, say at completion of
I/O or a return from wait( ).

◦ When a process terminates.

 For conditions 1 and 4 there is no choice - A new process must be selected.

 For conditions 2 and 3 there is a choice - To either continue running the current process, or
select a different one.
 If scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive,
or cooperative. Under these conditions, once a process starts running it keeps running, until it
either voluntarily blocks or until it finishes. Otherwise the system is said to be preemptive.

 Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the scheduler.
This function involves:

◦ Switching context.

◦ Switching to user mode.

◦ Jumping to the proper location in the newly loaded program.

 The dispatcher needs to be as fast as possible, as it is run on every context switch. The time
consumed by the dispatcher is known as dispatch latency.

CPU SCHEDULING ALGORITHMS

Scheduling Criteria

There are several different criteria to consider when trying to select the "best" scheduling algorithm for
a particular situation and environment, including:

◦ CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU
cycles. On a real system CPU usage should range from 40% ( lightly loaded ) to 90%
( heavily loaded. )

◦ Throughput - Number of processes completed per unit time. May range from 10 /
second to 1 / hour depending on the specific processes.

◦ Turnaround time - Time required for a particular process to complete, from submission
time to completion. ( Wall clock time. )

◦ Waiting time - How much time processes spend in the ready queue waiting their turn to
get on the CPU.

◦ Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.

A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. There are six popular process scheduling algorithms:

1. First-Come, First-Served Scheduling
2. Shortest-Job-First Scheduling
3. Priority Scheduling
4. Round-Robin Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback-Queue Scheduling

1. First-Come, First-Served Scheduling

The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm.
With this scheme, the process that requests the CPU first is allocated the CPU first.

The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters
the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to
the process at the head of the queue. The average waiting time under the FCFS policy is often quite
long.

Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process | Burst Time
P1 | 24
P2 | 3
P3 | 3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
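Since every job simply waits for all the jobs ahead of it, the FCFS average is easy to compute. A small C sketch reproducing the figure above (burst lengths taken from the table; a sketch, not a full scheduler):

#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };        /* P1, P2, P3, all arriving at t=0 */
    int n = 3, clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += clock;           /* each process waits for all earlier bursts */
        clock += burst[i];
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n);
    /* prints 17.0 for the order P1, P2, P3 */
    return 0;
}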

Problems with FCFS Scheduling

1. If the processes arrive in the order P2, P3, P1:

The waiting time is 0 milliseconds for process P2, 3 milliseconds for process P3, and 6 milliseconds for process P1. Thus, the average waiting time is (0 + 3 + 6)/3 = 3 milliseconds.
So the average waiting time under FCFS depends on the arrival order and is generally not optimal.

2. It is a non-preemptive algorithm, which means process priority doesn't matter: once the CPU has been allocated to a process, that process keeps the CPU until it releases it by itself.

3. Resource utilization in parallel is not possible, which leads to the Convoy Effect and hence poor resource (CPU, I/O, etc.) utilization.

 The Convoy Effect is a situation where many processes that need to use a resource for a short time are blocked by one long process holding that resource for a long time.

2. Shortest-Job-First Scheduling

The shortest-job-first (SJF) scheduling algorithm associates with each process the length of the
process's next CPU burst. When the CPU is available, it is assigned to the process that has the
smallest next CPU burst.

If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. It is
also called shortest-next-CPU-burst algorithm, because scheduling depends on the length of the
next CPU burst of a process, rather than its total length.

As an example of SJF scheduling, consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process | Burst Time
P1 | 6
P2 | 8
P3 | 7
P4 | 3

Using SJF scheduling, we would schedule these processes according to the following Gantt chart:

| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
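Non-preemptive SJF is FCFS applied after sorting the jobs by burst length, so the same accumulation loop applies once the array is sorted. A C sketch using the bursts above (all arrivals at t=0):

#include <stdio.h>
#include <stdlib.h>

static int by_burst(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;   /* ascending burst length */
}

int main(void)
{
    int burst[] = { 6, 8, 7, 3 };      /* P1..P4, all arriving at t=0 */
    int n = 4, clock = 0, total_wait = 0;

    qsort(burst, n, sizeof burst[0], by_burst); /* run shortest job first */
    for (int i = 0; i < n; i++) {
        total_wait += clock;
        clock += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    /* prints 7.00, versus 10.25 for the same jobs served in FCFS order */
    return 0;
}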

The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting
time for a given set of processes. SJF scheduling is used frequently in long- term scheduling.

The SJF algorithm can be either preemptive or non-preemptive.

Preemptive SJF scheduling

The choice arises when a new process arrives at the ready queue while a previous process is still executing. The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process. A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.

As an example, consider the following four processes, with the arrival time and the length of the CPU burst given in milliseconds:

Process | Arrival Time | Burst Time
P1 | 0 | 8
P2 | 1 | 4
P3 | 2 | 9
P4 | 3 | 5

If the processes arrive at the ready queue at the times shown and need the indicated burst times, then the resulting preemptive SJF schedule is as shown in the following Gantt chart:

| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time for this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 milliseconds. Non-preemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.

3. Priority Scheduling

Each process is assigned a priority. The process with the highest priority is executed first, and so on. Processes with the same priority are executed on a first-come, first-served basis. Priority can be decided either internally or externally. Internal priorities are based on factors such as memory requirements, time requirements, or other resource requirements, whereas external priorities are based on criteria outside the system, such as the importance of the process, its type, the amount of funding, and so on.

As an example, consider a set of four processes, each with the length of its CPU burst given in milliseconds and a priority assigned to it.
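For instance, with illustrative (assumed) values, taking a smaller number to mean a higher priority: P1 with burst 10 and priority 3, P2 with burst 1 and priority 1, P3 with burst 2 and priority 4, and P4 with burst 5 and priority 2, all arriving at time 0. Non-preemptive priority scheduling runs them in the order P2 (0-1), P4 (1-6), P1 (6-16), P3 (16-18), giving waiting times of 0, 1, 6, and 16 milliseconds respectively, for an average of (0 + 1 + 6 + 16)/4 = 5.75 milliseconds.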
Priority scheduling can be of two types

 Preemptive Priority Scheduling: if a new process arriving at the ready queue has a higher priority than the currently running process, the CPU is preempted, which means the processing of the current process is stopped and the incoming process with the higher priority gets the CPU for its execution.

 Non-Preemptive Priority Scheduling: in the non-preemptive priority scheduling algorithm, if a new process arrives with a higher priority than the currently running process, the incoming process is put at the head of the ready queue, which means it will be processed right after the execution of the current process.

In the priority scheduling algorithm there is a chance of indefinite blocking, or starvation. A process is considered blocked when it is ready to run but has to wait for the CPU because some other process is running. In priority scheduling, if new higher-priority processes keep arriving in the ready queue, then the lower-priority processes waiting in the ready queue may have to wait for long durations before getting the CPU for execution.

The solution to starvation is the aging technique.

To prevent starvation of any process, we can use the concept of aging, where we keep improving the priority of a low-priority process based on its waiting time. For example, if we decide the aging factor to be 0.5 per minute of waiting (with a smaller number meaning a higher priority), then a process that enters the ready queue with priority 7 has its priority improved to 7 - 10 × 0.5 = 2 after 10 minutes of waiting. Doing so, we can ensure that no process has to wait indefinitely to get CPU time.
4. Round-Robin Scheduling

The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is a preemptive process scheduling algorithm. A fixed time, called the time quantum, is allotted to each process for execution. Once a process has executed for the given time period, it is preempted and another process executes for its time period. Context switching is used to save the states of preempted processes. To implement RR scheduling, we keep the ready queue as a circular FIFO queue.


 In the RR scheduling algorithm, no process is allocated the CPU for more than one time quantum in a row (unless it is the only runnable process). If a process's CPU burst exceeds one time quantum, that process is preempted and put back in the ready queue. The RR scheduling algorithm is thus preemptive. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units, and each process must wait no longer than (n - 1) × q time units until its next time quantum.

The performance of the RR algorithm depends heavily on the size of the time quantum. If the time quantum is extremely large, the RR policy is the same as the FCFS policy. If the time quantum is extremely small (say, 1 millisecond), the RR approach is called processor sharing.
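A small C simulation of RR waiting times (a sketch with assumed bursts and quantum; all processes arrive at t=0, so cycling over the array in order matches a circular FIFO ready queue):

#include <stdio.h>

int main(void)
{
    int remaining[] = { 24, 3, 3 };    /* assumed bursts for P1, P2, P3 */
    int wait[]      = { 0, 0, 0 };
    int n = 3, quantum = 4, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            for (int j = 0; j < n; j++)         /* every other unfinished */
                if (j != i && remaining[j] > 0) /* process keeps waiting  */
                    wait[j] += slice;           /* for this whole slice   */
            remaining[i] -= slice;
            if (remaining[i] == 0)
                done++;
        }
    }
    int total = 0;
    for (int i = 0; i < n; i++)
        total += wait[i];
    printf("average waiting time = %.2f ms\n", (double)total / n);
    /* with quantum 4 this prints 5.67 (waits of 6, 4 and 7 ms) */
    return 0;
}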

5. Multilevel Queue Scheduling

 A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type.
For example: A common division is made between foreground(or interactive) processes and
background (or batch) processes. These two types of processes have different response-time
requirements, and so might have different scheduling needs. In addition, foreground processes
may have priority over background processes.
Each queue has its own scheduling algorithm. For example, separate queues might be used for
foreground(interactive) and background(batch) processes. The foreground queue might be
scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm

 Let us consider an example of a multilevel queue-scheduling algorithm with five queues:

 System Processes

 Interactive Processes

 Interactive Editing Processes

 Batch Processes

 Student Processes

Each queue has absolute priority over lower-priority queues. No process in the batch queue, for
example, could run unless the queues for system processes, interactive processes, and interactive
editing processes were all empty. If an interactive editing process entered the ready queue while a
batch process was running, the batch process would be preempted.

6. Multilevel Feedback-Queue Scheduling

 In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to the system; processes do not move between queues. This setup has the advantage of low scheduling overhead, but the disadvantage of being inflexible.

 Multilevel feedback queue scheduling, however, allows a process to move between queues. The
idea is to separate processes with different CPU-burst characteristics. If a process uses too much
CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a
lower-priority queue may be moved to a higher-priority queue. This form of aging prevents
starvation.
