
UNIT-I: INTRODUCTION

Operating system concepts - System Calls - OS Structure - Process and Threads: Process - Threads - Inter Process Communication - Scheduling - Classical IPC Problems.

Operating System – Definition:

 An operating system is a program that controls the execution of application programs and acts as an interface between the user of a computer and the computer hardware.
 A more common definition is that the operating system is the one program running at all times on the computer (usually called the kernel), with all else being application programs.
 An operating system is concerned with the allocation of resources and services, such as memory, processors, devices, and information. The operating system correspondingly includes programs to manage these resources, such as a traffic controller, a scheduler, a memory management module, I/O programs, and a file system.
Functions of Operating system – An operating system performs four functions:

1. Convenience: An OS makes a computer more convenient to use.
2. Efficiency: An OS allows the computer system resources to be used efficiently.
3. Ability to Evolve: An OS should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with service.
4. Throughput: An OS should be constructed so that it can give maximum throughput (number of tasks completed per unit time).
Major Functionalities of Operating System:
 Resource Management: When multiple users access the system concurrently, the OS works as a resource manager. Its responsibility is to allocate hardware among the users, which decreases the load on the system.
 Process Management: It includes various tasks such as scheduling and termination of processes. The OS manages many tasks at a time; CPU scheduling decides, through scheduling algorithms, which task runs next.
 Storage Management: The file system mechanism is used for the management of storage; NTFS, CIFS, NFS, etc. are some file systems. Data is stored on the various tracks of hard disks, all managed by the storage manager.
 Memory Management: Refers to the management of primary memory. The operating system has to keep track of how much memory has been used and by whom. It has to decide which process needs memory space and how much. The OS also has to allocate and deallocate memory space.
 Security/Privacy Management: Privacy is also provided by the operating system by means of passwords, so that unauthorized applications cannot access programs or data. For example, Windows uses Kerberos authentication to prevent unauthorized access to data.
The operating system as a user interface involves four layers:
1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of the hardware, operating system, system programs, and application programs. The hardware consists of memory, the CPU, the ALU, I/O devices, peripheral devices, and storage devices. System programs consist of compilers, loaders, editors, the OS itself, etc. Application programs include business programs, database programs, and so on.

Fig1: Conceptual view of a computer system


Every computer must have an operating system to run other programs. The
operating system coordinates the use of the hardware among the various
system programs and application programs for various users. It simply
provides an environment within which other programs can do useful work.
The operating system is a set of special programs that run on a computer
system that allows it to work properly. It performs basic tasks such as
recognizing input from the keyboard, keeping track of files and directories on
the disk, sending output to the display screen, and controlling peripheral
devices.
OS is designed to serve two basic purposes:

1. It controls the allocation and use of the computing system's resources among the various users and tasks.
2. It provides an interface between the computer hardware and the programmer that simplifies the coding, creation, and debugging of application programs.
The operating system must support the following tasks:

1. Provide facilities to create and modify programs and data files using an editor.
2. Provide access to a compiler for translating the user program from a high-level language to machine language.
3. Provide a loader program to move the compiled program code into the computer's memory for execution.
4. Provide routines that handle the details of I/O programming.
I/O System Management –
The module that keeps track of the status of devices is called the I/O traffic
controller. Each I/O device has a device handler that resides in a separate
process associated with that device.
The I/O subsystem consists of:

 A memory-management component that includes buffering, caching, and spooling.
 A general device-driver interface.
 Drivers for specific hardware devices.
Assembler –
The input to an assembler is an assembly language program. The output is an object program plus information that enables the loader to prepare the object program for execution. At one time, the computer programmer had at his disposal a basic machine that interpreted, through hardware, certain fundamental instructions. He would program this computer by writing a series of ones and zeros (machine language) and place them into the memory of the machine.
Compiler –
High-level languages such as FORTRAN, COBOL, ALGOL, and PL/I are processed by compilers and interpreters. A compiler is a program that accepts a source program in a high-level language and produces a corresponding object program. An interpreter is a program that appears to execute a source program as if it were machine language. The same name (FORTRAN, COBOL, etc.) is often used to designate both a compiler and its associated language.
Loader –
A loader is a routine that loads an object program and prepares it for execution. There are various loading schemes: absolute, relocating, and direct-linking. In general, the loader must load, relocate, and link the object program. In a simple loading scheme, the assembler outputs the machine language translation of a program on a secondary device, and a loader places it in core memory: the loader places into memory the machine language version of the user's program and transfers control to it. Since the loader program is much smaller than the assembler, this leaves more core available to the user's program.
History of Operating system –
The operating system has been evolving through the years. The following table
shows the history of OS.

Generation   Year         Electronic device used       Type of OS/device
First        1945-55      Vacuum tubes                 Plug boards
Second       1955-65      Transistors                  Batch systems
Third        1965-80      Integrated circuits (ICs)    Multiprogramming
Fourth       Since 1980   Large-scale integration      PC
Types of Operating System –
 Batch Operating System – executes a sequence of jobs in a program without manual intervention.
 Time-sharing Operating System – allows many users to share the computer resources (maximum utilization of the resources).
 Distributed Operating System – manages a group of different computers and makes them appear to be a single computer.
 Network Operating System – lets computers running different operating systems participate in a common network (also used for security purposes).
 Real-time Operating System – meant for applications that must meet fixed deadlines.
Examples of Operating System are –
 Windows (GUI based, PC)
 GNU/Linux (Personal, Workstations, ISP, File and print server,
Three-tier client/Server)
 macOS (Macintosh), used for Apple’s personal computers and
workstations (MacBook, iMac).
 Android (Google’s Operating System for
smartphones/tablets/smartwatches)
 iOS (Apple’s OS for iPhone, iPad, and iPod Touch)

What is a distributed Operating System?

A distributed operating system is a model in which applications run on multiple computers linked by a communication network. It is an extension of the network operating system that supports higher levels of communication and integration among the machines on the network.
A distributed OS runs on multiple CPUs, but to an end user it looks like an ordinary centralized operating system. It can share resources such as CPUs, disks, network interfaces, nodes, and computers from one site to another, which increases the availability of data across the entire system.
All processors are connected by communication media such as high-speed buses or telephone lines, and every processor has its own local memory. Because of this, a distributed operating system is known as a loosely coupled system. It involves multiple computers, nodes, and sites, which are linked to each other by LAN/WAN lines. A distributed OS can share computational capacity and I/O files while providing a virtual machine abstraction to users.
The distributed operating system is depicted below −

Applications of Distributed Operating System


The applications of distributed OS are as follows −
 Internet Technology

 Distributed databases System

 Air Traffic Control System

 Airline reservation Control systems

 Peer-to-peer network systems

 Telecommunication networks

 Scientific Computing System

 Cluster Computing

 Grid Computing

 Data rendering
Types
There are three types of Distributed OS.

 Client-Server Systems − A tightly coupled system, used for multiprocessors and homogeneous multicomputers. A client-server system works as a centralized server: it services all the requests generated by the client systems.

 Peer-to-Peer Systems − A loosely coupled system, implemented in computer network applications. It contains a collection of processors that do not share memory or a clock. Every processor has its own local memory, and the processors communicate with each other through communication media such as high-speed buses or telephone lines.

 Middleware − Middleware enables interoperability between applications running on different operating systems. Using middleware services, those applications can exchange data with each other. It provides distribution transparency.
Protection and security in Distributed Operating system
Distributed operating systems are used to a large extent in organizations. With heavier use, protection and security come to the fore: their role is to preserve the system from damage or loss caused by external sources and keep it protected.
There are different ways to safeguard a distributed OS by applying security measures. Among them is authentication, which includes username/password and user keys. The One-Time Password (OTP) is another major mechanism applied for security in a distributed OS.

OPERATING SYSTEM CONCEPTS


Processes
Address Spaces
Files
Input/Output
Protection
The Shell
Ontogeny Recapitulates Phylogeny

System Calls

System Calls in Operating System

A system call is a way for a user program to interface with the operating system. The program requests several services, and the OS responds by invoking a series of system calls to satisfy the request. A system call can be written in assembly language or in a high-level language like C or Pascal. If a high-level language is used, system calls are available as predefined functions that can be invoked directly.


What is a System Call?

A system call is a method by which a computer program requests a service from the kernel of the operating system on which it is running.
The Application Program Interface (API) connects the operating system's functions to user programs. It acts as a link between the operating system and a process, allowing user-level programs to request operating system services. The kernel can only be accessed through system calls, and system calls are required by any program that uses resources.

How are system calls made?

When a program needs to access the operating system's kernel, it makes a system call. The system call uses an API to expose the operating system's services to user programs. It is the only method of accessing the kernel: all programs or processes that require resources for execution must use system calls, as they serve as the interface between the operating system and user programs.

Below are some examples of how a system call varies from a user function.

1. A system call function may create and use kernel processes to perform asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system
call with kernel-mode privilege executes in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that
are not present in the kernel protection domain.
4. The code and data for system calls are stored in global kernel memory.

Why do you need system calls in Operating System?

There are various situations in which system calls are required in the operating system. Some of them are as follows:

1. A system call is required when a file system wants to create or delete a file.
2. Network connections require system calls to send and receive data packets.
3. If you want to read or write a file, you need system calls.
4. If you want to access a hardware device, such as a printer or scanner, you need a system call.
5. System calls are used to create and manage new processes.
How System Calls Work

Applications run in an area of memory known as user space. A system call connects to the operating system's kernel, which executes in kernel space. When an application makes a system call, it must first obtain permission from the kernel. It achieves this using an interrupt request, which pauses the current process and transfers control to the kernel.

If the request is permitted, the kernel performs the requested action, such as creating or deleting a file. When the operation is finished, the kernel moves the results from kernel space back to user space in memory and returns them to the application, which then resumes execution.

A simple system call, such as retrieving the system date and time, may take a few nanoseconds to provide its result. A more complicated system call, such as connecting to a network device, may take a few seconds. Most operating systems launch a distinct kernel thread for each system call to avoid bottlenecks. Modern operating systems are multi-threaded, which means they can handle several system calls at the same time.

Types of System Calls

There are commonly five types of system calls. These are as follows:

1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication

Now, you will learn about all the different types of system calls one-by-one.

Process Control

Process control comprises the system calls used to direct processes. Examples include creating and terminating a process, loading and executing a program, and ending or aborting a process.

File Management

File management comprises the system calls used to handle files. Examples include creating, deleting, opening, closing, reading, and writing files.

Device Management

Device management comprises the system calls used to deal with devices. Examples include requesting and releasing a device, reading from and writing to a device, and getting and setting device attributes.

Information Maintenance

Information maintenance comprises the system calls used to maintain information. Examples include getting or setting the time and date and getting or setting system data.

Communication

Communication comprises the system calls used for communication. Examples include creating and deleting communication connections and sending and receiving messages.

Examples of Windows and Unix system calls

There are various examples of Windows and Unix system calls, as listed in the table below:

Category                  Windows                          Unix
Process control           CreateProcess()                  fork()
                          ExitProcess()                    exit()
                          WaitForSingleObject()            wait()
File manipulation         CreateFile()                     open()
                          ReadFile()                       read()
                          WriteFile()                      write()
                          CloseHandle()                    close()
Device management         SetConsoleMode()                 ioctl()
                          ReadConsole()                    read()
                          WriteConsole()                   write()
Information maintenance   GetCurrentProcessID()            getpid()
                          SetTimer()                       alarm()
                          Sleep()                          sleep()
Communication             CreatePipe()                     pipe()
                          CreateFileMapping()              shmget()
                          MapViewOfFile()                  mmap()
Protection                SetFileSecurity()                chmod()
                          InitializeSecurityDescriptor()   umask()
                          SetSecurityDescriptorGroup()     chown()

Here, you will learn about some methods briefly:

open()

The open() system call allows a program to access a file on a file system. It allocates resources to the file and provides a handle (file descriptor) that the process may refer to. Depending on the file system and the file's structure, a file may be opened by many processes at once or restricted to a single process.

read()
It is used to obtain data from a file on the file system. It accepts three arguments
in general:

o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.

The file must first be opened with open(); the file descriptor it returns identifies the file to be read.
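The open()/write()/read()/close() sequence can be sketched with Python's os module, whose functions are thin wrappers over the corresponding Unix system calls (POSIX assumed; the file name "demo.txt" is purely illustrative):

```python
import os

# write path: open(2) with O_CREAT, write(2) a buffer, close(2).
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello, kernel\n")     # returns the number of bytes written
os.close(fd)

# read path: open(2) read-only, read(2) until EOF, close(2).
fd = os.open("demo.txt", os.O_RDONLY)
data = b""
while True:
    chunk = os.read(fd, 4096)        # at most 4096 bytes per call
    if not chunk:                    # an empty result signals end-of-file
        break
    data += chunk
os.close(fd)
print(data)
```

Note that the read loop must run until read() returns an empty result; a single read() is allowed to return fewer bytes than requested.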

wait()

In some systems, a process may have to wait for another process to complete its execution before proceeding. When a parent process creates a child process and then calls wait(), the parent's execution is suspended until the child process finishes. Once the child process has completed its execution, control is returned to the parent process.

write()

It is used to write data from a user buffer to a device like a file. This system call
is one way for a program to generate data. It takes three arguments in general:

o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.

fork()

Processes generate clones of themselves using the fork() system call. It is one of the most common ways to create processes in operating systems. After the fork, parent and child run as separate processes; the parent typically calls wait() to suspend itself until the child completes, at which point control returns to the parent.
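The fork()/wait() pair can be sketched with Python's os module (POSIX assumed; the exit status 7 is an arbitrary illustrative value):

```python
import os

pid = os.fork()
if pid == 0:
    # Child process: fork() returned 0 here.
    os._exit(7)                      # exit immediately with status 7
else:
    # Parent process: fork() returned the child's pid.
    finished, status = os.wait()     # suspends until the child terminates
    code = os.WEXITSTATUS(status)    # extract the child's exit status
    print("child", finished, "exited with status", code)
```

The single fork() call returns twice: once in the parent (with the child's pid) and once in the child (with 0), which is how the two branches are distinguished.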

close()

It is used to end file system access. When this system call is invoked, it signifies
that the program no longer requires the file, and the buffers are flushed, the file
information is altered, and the file resources are de-allocated as a result.

exec()
This system call is invoked when an executable file replaces the one already running in a process. No new process is built, so the old process identity remains, but the new program replaces the old one's code, data, heap, and stack.
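The common fork()+exec() pattern can be sketched as follows (POSIX assumed; the echo program and its message are illustrative). The child keeps its pid but replaces its image with echo, while the parent waits for it:

```python
import os

pid = os.fork()
if pid == 0:
    # Child: replace this process image with the echo program.
    os.execvp("echo", ["echo", "exec replaced this child"])
    os._exit(127)                    # reached only if exec itself fails
child, status = os.waitpid(pid, 0)   # wait specifically for this child
exit_code = os.WEXITSTATUS(status)
print("echo exited with status", exit_code)
```

If execvp() succeeds, the lines after it in the child never run; control in the child continues at the entry point of the new program.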

exit()

The exit() system call is used to end program execution. It indicates that the thread's execution is complete, which is especially useful in multi-threaded environments. The operating system reclaims the resources used by the process after the exit() call.

OS Structure
An operating system can be implemented with the help of various structures. The structure of the OS depends mainly on how the various common components of the operating system are interconnected and melded into the kernel. Depending on this, we have the following structures of the operating system:
Simple structure:
Such operating systems do not have a well-defined structure and are small, simple, and limited systems. The interfaces and levels of functionality are not well separated. MS-DOS is an example of such an operating system: in MS-DOS, application programs are able to access the basic I/O routines. In these systems, the failure of one user program can cause the entire system to crash.
Diagram of the structure of MS-DOS is shown below.
Advantages of Simple structure:
 It delivers better application performance because of the few
interfaces between the application program and the hardware.
 Easy for kernel developers to develop such an operating system.
Disadvantages of Simple structure:
 The structure is hard to maintain, as no clear boundaries exist between modules.
 It does not enforce data hiding in the operating system.
Layered structure:
An OS can be broken into pieces to retain much more control over the system. In this structure, the OS is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware and the topmost layer (layer N) is the user interface. The layers are designed so that each layer uses the functions of the lower-level layers only. This simplifies debugging: if the lower-level layers have already been debugged and an error occurs, the error must be in the layer currently being tested.
The main disadvantage of this structure is that at each layer the data needs to be modified and passed on, which adds overhead to the system. Moreover, careful planning of the layers is necessary, as a layer can use only lower-level layers. UNIX is an example of this structure.
Advantages of Layered structure:

 Layering makes it easier to enhance the operating system, as the implementation of a layer can be changed easily without affecting the other layers.
 It is very easy to perform debugging and system verification.
Disadvantages of Layered structure:
 In this structure the application performance is degraded as
compared to simple structure.
 It requires careful planning for designing the layers as higher layers
use the functionalities of only the lower layers.
Micro-kernel:
This structure designs the operating system by removing all non-essential components from the kernel and implementing them as system and user programs. This results in a smaller kernel called the micro-kernel. The advantage of this structure is that new services are added in user space and do not require modifying the kernel. It is thus more secure and reliable: if a service fails, the rest of the operating system remains untouched. Mac OS is an example of this type of OS.
Advantages of Micro-kernel structure:
 It makes the operating system portable to various platforms.
 As microkernels are small so these can be tested effectively.
Disadvantages of Micro-kernel structure:
 Increased level of inter module communication degrades system
performance.
Modular structure or approach:
It is considered the best approach for an OS. It involves designing a modular kernel: the kernel has only a set of core components, and other services are added as dynamically loadable modules, either at boot time or at run time. It resembles the layered structure in that each module has defined and protected interfaces, but it is more flexible than the layered structure because a module can call any other module. For example, the Solaris OS is organized as shown in the figure.

Process and Threads


Process:
A process is a program in execution. The Process Control Block (PCB) controls the operation of a process; it contains information about the process, for example the process priority, process id, process state, CPU registers, etc. A process can create other processes, which are known as child processes. A process takes more time to terminate, and it is isolated: it does not share memory with any other process.
A process can be in states such as new, ready, running, waiting, terminated, and suspended.
Thread:
A thread is a segment of a process: a process can have multiple threads, and these threads are contained within the process. A thread has three states: running, ready, and blocked.
A thread takes less time to terminate than a process, and unlike processes, threads are not isolated from one another.

Properties of Process
Here are the important properties of the process:

 Creating each process requires a separate system call.
 It is an isolated execution entity and does not share data and information.
 Processes use the IPC (Inter-Process Communication) mechanism for communication, which significantly increases the number of system calls.
 Process management takes more system calls.
 A process has its own stack, heap memory, and memory and data map.

Properties of Thread
Here are important properties of Thread:

 A single system call can create more than one thread.
 Threads share data and information.
 Threads share the instruction, global, and heap regions; however, each thread has its own registers and stack.
 Thread management consumes very few or no system calls, because communication between threads can be achieved using shared memory.
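The shared-memory property above can be sketched with Python's threading module: two threads update the same counter object directly, which two separate processes could only do through an IPC mechanism (the counts chosen are illustrative).

```python
import threading

counter = {"value": 0}
lock = threading.Lock()

def worker():
    for _ in range(100_000):
        with lock:                   # serialize the read-modify-write step
            counter["value"] += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                         # wait for both workers to finish
print(counter["value"])              # 200000
```

Without the lock, the two read-modify-write sequences could interleave and lose updates, which is exactly the race condition that the synchronization mechanisms discussed later prevent.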

Difference between Process and Thread:

S.NO   Process                                            Thread
1.     Any program in execution.                          A segment of a process.
2.     Takes more time to terminate.                      Takes less time to terminate.
3.     Takes more time for creation.                      Takes less time for creation.
4.     Takes more time for context switching.             Takes less time for context switching.
5.     Less efficient in terms of communication.          More efficient in terms of communication.
6.     Consumes more resources.                           Consumes fewer resources.
7.     Isolated from other processes.                     Threads share memory.
8.     Called a heavyweight process.                      Lightweight, as threads in a process share
                                                          code, data, and resources.
9.     Process switching uses an interface in the         Thread switching does not require an operating
       operating system.                                  system call or an interrupt to the kernel.
10.    If one process is blocked, the execution of        With user-level threads, if one thread is
       other processes is not affected.                   blocked, other threads in the same task
                                                          cannot run.
11.    Has its own Process Control Block, stack,          Has the parent's PCB, its own Thread Control
       and address space.                                 Block and stack, and a shared address space.
12.    A blocked process makes no progress until          With kernel-level threads, while one thread is
       it is unblocked.                                   blocked and waiting, a second thread in the
                                                          same task can run.
13.    Changes to the parent process do not affect        Since all threads of a process share the
       child processes.                                   address space and other resources, changes to
                                                          the main thread may affect the behavior of the
                                                          other threads of the process.

Inter Process Communication


Interprocess communication is the mechanism provided by the operating system
that allows processes to communicate with each other. This communication
could involve a process letting another process know that some event has
occurred or the transferring of data from one process to another.
A diagram that illustrates interprocess communication is as follows −

Synchronization in Interprocess Communication


Synchronization is a necessary part of interprocess communication. It is either
provided by the interprocess control mechanism or handled by the
communicating processes. Some of the methods to provide synchronization are
as follows −

 Semaphore
A semaphore is a variable that controls the access to a common resource
by multiple processes. The two types of semaphores are binary
semaphores and counting semaphores.

 Mutual Exclusion

Mutual exclusion requires that only one process thread can enter the
critical section at a time. This is useful for synchronization and also
prevents race conditions.

 Barrier

A barrier does not allow individual processes to proceed until all the
processes reach it. Many parallel languages and collective routines impose
barriers.

 Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a
loop while checking if the lock is available or not. This is known as busy
waiting because the process is not doing any useful operation even though
it is active.
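A counting semaphore can be sketched with Python's threading module: at most two of five workers may hold the semaphore at once (a binary semaphore would use an initial value of 1). The variable "peak", which records the highest number of simultaneous holders observed, is an illustrative addition used only to check the behavior.

```python
import threading

sem = threading.Semaphore(2)         # counting semaphore, initial value 2
guard = threading.Lock()
active = 0
peak = 0

def worker():
    global active, peak
    with sem:                        # wait()/P: blocks if two already hold it
        with guard:
            active += 1
            peak = max(peak, active)
        with guard:
            active -= 1
    # leaving the with-block performs signal()/V, releasing one waiter

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak simultaneous holders:", peak)
```

However the five workers interleave, the semaphore guarantees that "peak" never exceeds 2.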
Approaches to Interprocess Communication
The different approaches to implement interprocess communication are given as
follows −

 Pipe

A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way data channel between two processes. This uses standard input and output methods. Pipes are used in all POSIX systems as well as Windows operating systems.

 Socket

The socket is the endpoint for sending or receiving data in a network. This
is true for data sent between processes on the same computer or data sent
between different computers on the same network. Most of the operating
systems use sockets for interprocess communication.

 File
A file is a data record that may be stored on a disk or acquired on demand
by a file server. Multiple processes can access a file as required. All
operating systems use files for data storage.

 Signal

Signals are useful in interprocess communication in a limited way. They are system messages that are sent from one process to another. Normally, signals are not used to transfer data but are used for remote commands between processes.

 Shared Memory

Shared memory is memory that can be simultaneously accessed by multiple processes. This is done so that the processes can communicate with each other. All POSIX systems, as well as Windows operating systems, use shared memory.

 Message Queue

Multiple processes can read and write data to the message queue without
being connected to each other. Messages are stored in the queue until their
recipient retrieves them. Message queues are quite useful for interprocess
communication and are used by most operating systems.
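The pipe approach above can be sketched with pipe(2) and fork(2) via Python's os module (POSIX assumed; the message "ping" is illustrative). The parent writes into one end and a forked child reads from the other; a second pipe would be needed for a two-way channel.

```python
import os

read_end, write_end = os.pipe()      # pipe(2): one read fd, one write fd
pid = os.fork()
if pid == 0:                         # child: reads only
    os.close(write_end)
    msg = os.read(read_end, 1024)
    os.close(read_end)
    os._exit(0 if msg == b"ping" else 1)
os.close(read_end)                   # parent: writes only
os.write(write_end, b"ping")
os.close(write_end)
_, status = os.waitpid(pid, 0)
received = os.WEXITSTATUS(status) == 0
print("child received the message:", received)
```

Each side closes the pipe end it does not use; among other things, this lets a reader see end-of-file once the last writer closes its end.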
A diagram that demonstrates message queue and shared memory methods of
interprocess communication is as follows −
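The message queue method can be sketched with Python's multiprocessing.Queue (this sketch assumes a POSIX system using the "fork" start method): a producer process enqueues messages and exits, and the consumer retrieves them afterwards without the two ever synchronizing directly.

```python
from multiprocessing import Process, Queue

def producer(q):
    for i in range(3):
        q.put(f"msg-{i}")            # messages wait in the queue until read

q = Queue()
p = Process(target=producer, args=(q,))
p.start()
p.join()                             # producer has exited; messages remain
messages = [q.get() for _ in range(3)]
print(messages)
```

This illustrates the decoupling the section describes: the sender does not need to still exist, or even know who the receiver is, for the messages to be delivered.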

Scheduling
Definition
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process
is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling
queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in
this queue.
 Device queues − The processes which are blocked due to unavailability
of an I/O device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the
ready and run queues which can only have one entry per processor core on the
system; in the above diagram, it has been merged with the CPU.
Two-State Process Model
Two-state process model refers to running and non-running states which are
described below −
S.N. State & Description

1 Running
When a new process is created, it enters the system in the running state.

2 Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process. The queue is implemented using a linked list. The dispatcher is used as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.

Schedulers
Schedulers are special system software which handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run. Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which
programs are admitted to the system for processing. It selects processes from
the queue and loads them into memory for execution. Process loads into the
memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound. It also controls the degree of
multiprogramming. If the degree of multiprogramming is stable, then the
average rate of process creation must be equal to the average departure rate of
processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal;
time-sharing operating systems typically have no long-term scheduler. The
long-term scheduler comes into play when a process changes state from new to
ready.
Short Term Scheduler
It is also called as CPU scheduler. Its main objective is to increase system
performance in accordance with the chosen set of criteria. It is the change of
ready state to running state of the process. CPU scheduler selects a process
among the processes that are ready to execute and allocates CPU to one of them.
Short-term schedulers, also known as dispatchers, make the decision of which
process to execute next. Short-term schedulers are faster than long-term
schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from
memory and thereby reduces the degree of multiprogramming. The medium-term
scheduler is in charge of handling swapped-out processes.
A running process may become suspended if it makes an I/O request. A
suspended process cannot make any progress towards completion. In this
condition, to remove the process from memory and make space for other
processes, the suspended process is moved to secondary storage. This is
called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.
Comparison among Schedulers

S.N.  Long-Term Scheduler           Short-Term Scheduler          Medium-Term Scheduler

1     It is a job scheduler.        It is a CPU scheduler.        It is a process-swapping
                                                                  scheduler.
2     Speed is less than the        Speed is the fastest of       Speed is in between the
      short-term scheduler.         the three.                    short- and long-term
                                                                  schedulers.
3     It controls the degree of     It provides less control      It reduces the degree of
      multiprogramming.             over the degree of            multiprogramming.
                                    multiprogramming.
4     It is almost absent or        It is also minimal in a       It is a part of
      minimal in time-sharing       time-sharing system.          time-sharing systems.
      systems.
5     It selects processes from     It selects those processes    It can re-introduce a
      the pool and loads them       which are ready to execute.   process into memory so
      into memory for execution.                                  that execution can be
                                                                  continued.

Context Switch
A context switch is the mechanism to store and restore the state or context of a
CPU in Process Control block so that a process execution can be resumed from
the same point at a later time. Using this technique, a context switcher enables
multiple processes to share a single CPU. Context switching is an essential
feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to execute
another, the state from the current running process is stored into the process
control block. After this, the state for the process to run next is loaded from its
own PCB and used to set the PC, registers, etc. At that point, the second process
can start executing.
Context switches are computationally intensive, since register and memory state
must be saved and restored. To reduce context-switching time, some hardware
systems employ two or more sets of processor registers.
the process is switched, the following information is stored for later use.

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
Scheduling algorithms
A Process Scheduler schedules different processes to be assigned to the CPU
based on particular scheduling algorithms. There are six popular process
scheduling algorithms which we are going to discuss in this chapter −

 First-Come, First-Served (FCFS) Scheduling
 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive
algorithms are designed so that once a process enters the running state, it
cannot be preempted until it completes its CPU burst, whereas preemptive
scheduling is based on priority: the scheduler may preempt a low-priority
running process at any time when a high-priority process enters the ready state.
First Come First Serve (FCFS)
 Jobs are executed on a first come, first served basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average wait time is high.
Using the same process set as the SJN example below (arrival times 0–3, burst
times 5, 3, 8, and 6), the wait time of each process is as follows −

Process    Wait Time : Service Time - Arrival Time
P0         0 - 0 = 0
P1         5 - 1 = 4
P2         8 - 2 = 6
P3         16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 23 / 4 = 5.75
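The FCFS computation above can be sketched in a few lines of Python. This is an illustrative example (not part of the original notes); the process names, arrival times, and burst times follow the worked example.

```python
from collections import namedtuple

Process = namedtuple("Process", "name arrival burst")

def fcfs(processes):
    """Return {name: wait_time} under First-Come, First-Served scheduling."""
    clock, waits = 0, {}
    for p in sorted(processes, key=lambda p: p.arrival):
        start = max(clock, p.arrival)      # CPU may sit idle until the job arrives
        waits[p.name] = start - p.arrival  # wait = service time - arrival time
        clock = start + p.burst            # run to completion (non-preemptive)
    return waits

# Arrival and burst times from the worked example
procs = [Process("P0", 0, 5), Process("P1", 1, 3),
         Process("P2", 2, 8), Process("P3", 3, 6)]
waits = fcfs(procs)
print(waits)                               # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
print(sum(waits.values()) / len(waits))    # 5.75
```

Because jobs run strictly in arrival order, the long job P2 makes later jobs wait, which is why FCFS has a high average wait time.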


Shortest Job Next (SJN)
 This is also known as shortest job first, or SJF.
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known
in advance.
 Impossible to implement in interactive systems where the required CPU
time is not known.
 The processor should know in advance how much time the process will take.
Given: Table of processes with their Arrival time and Execution time

Process    Arrival Time    Execution Time    Service Time
P0         0               5                 0
P1         1               3                 5
P2         2               8                 14
P3         3               6                 8

Waiting time of each process is as follows −

Process    Waiting Time
P0         0 - 0 = 0
P1         5 - 1 = 4
P2         14 - 2 = 12
P3         8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
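The SJN selection rule above can be sketched as follows. This is an illustrative example, not from the original notes; it picks the shortest burst among the jobs that have already arrived, using the process table above.

```python
def sjn(processes):
    """Non-preemptive Shortest-Job-Next. processes: list of (name, arrival, burst)."""
    pending = list(processes)
    clock, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                        # nothing has arrived yet: advance clock
            clock = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2]) # shortest burst among arrived jobs
        waits[job[0]] = clock - job[1]       # wait = service time - arrival time
        clock += job[2]
        pending.remove(job)
    return waits

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = sjn(procs)
print(waits)                                 # {'P0': 0, 'P1': 4, 'P3': 5, 'P2': 12}
print(sum(waits.values()) / len(waits))      # 5.25
```

Note that at time 8 the scheduler picks P3 (burst 6) ahead of P2 (burst 8), which is how SJN lowers the average wait relative to FCFS.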


Priority Based Scheduling
 Priority scheduling is a non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be
executed first and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
Given: Table of processes with their Arrival time, Execution time, and
Priority. Here we consider 1 to be the lowest priority.
Process    Arrival Time    Execution Time    Priority    Service Time
P0         0               5                 1           0
P1         1               3                 2           11
P2         2               8                 1           14
P3         3               6                 3           5

Waiting time of each process is as follows −

Process    Waiting Time
P0         0 - 0 = 0
P1         11 - 1 = 10
P2         14 - 2 = 12
P3         5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
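A minimal sketch of the non-preemptive priority rule above, using the same process table (higher number = higher priority, as in the example). The tie-break by earlier arrival is an assumption consistent with the "same priority → FCFS" rule stated earlier.

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling.
    processes: list of (name, arrival, burst, priority); larger number = higher priority."""
    pending = list(processes)
    clock, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                         # CPU idle: jump to the next arrival
            clock = min(p[1] for p in pending)
            continue
        # highest priority first; earlier arrival breaks ties (FCFS)
        job = max(ready, key=lambda p: (p[3], -p[1]))
        waits[job[0]] = clock - job[1]
        clock += job[2]
        pending.remove(job)
    return waits

procs = [("P0", 0, 5, 1), ("P1", 1, 3, 2), ("P2", 2, 8, 1), ("P3", 3, 6, 3)]
waits = priority_schedule(procs)
print(waits)                                  # {'P0': 0, 'P3': 2, 'P1': 10, 'P2': 12}
print(sum(waits.values()) / len(waits))       # 6.0
```

At time 5 the ready set is {P1, P2, P3}; P3 (priority 3) runs first, matching the service times in the table.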


Shortest Remaining Time
 Shortest remaining time (SRT) is the preemptive version of the SJN
algorithm.
 The processor is allocated to the job closest to completion but it can be
preempted by a newer ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU time
is not known.
 It is often used in batch environments where short jobs need to be given
preference.
Round Robin Scheduling
 Round Robin is a preemptive process scheduling algorithm.
 Each process is given a fixed time to execute, called a quantum.
 Once a process is executed for a given time period, it is preempted and
other process executes for a given time period.
 Context switching is used to save states of preempted processes.

Wait time of each process (using a time quantum of 3, which is consistent with
the figures below) is as follows −

Process    Wait Time : Service Time - Arrival Time
P0         (0 - 0) + (12 - 3) = 9
P1         (3 - 1) = 2
P2         (6 - 2) + (14 - 9) + (20 - 17) = 12
P3         (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 34 / 4 = 8.5
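The Round Robin schedule above can be reproduced with a simple ready-queue simulation. This is an illustrative sketch (not from the original notes); it assumes, as is conventional, that jobs arriving during a time slice join the queue ahead of the job just preempted.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst). Returns {name: wait_time}."""
    procs = sorted(processes, key=lambda p: p[1])
    remaining = {n: b for n, a, b in procs}
    queue, clock, i, finish = deque(), 0, 0, {}
    while i < len(procs) or queue:
        if not queue:                          # CPU idle: jump to the next arrival
            clock = max(clock, procs[i][1])
        while i < len(procs) and procs[i][1] <= clock:
            queue.append(procs[i][0]); i += 1  # admit everything that has arrived
        name = queue.popleft()
        run = min(quantum, remaining[name])    # run for one quantum at most
        clock += run
        remaining[name] -= run
        # jobs arriving during this slice queue up before the preempted job
        while i < len(procs) and procs[i][1] <= clock:
            queue.append(procs[i][0]); i += 1
        if remaining[name]:
            queue.append(name)                 # preempted: back of the queue
        else:
            finish[name] = clock
    # wait = finish - arrival - burst
    return {n: finish[n] - a - b for n, a, b in procs}

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = round_robin(procs, quantum=3)
print(waits)                                   # {'P0': 9, 'P1': 2, 'P2': 12, 'P3': 11}
print(sum(waits.values()) / len(waits))        # 8.5
```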


Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make
use of other existing algorithms to group and schedule jobs with common
characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithm.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound
jobs in another queue. The Process Scheduler then alternately selects jobs from
each queue and assigns them to the CPU based on the algorithm assigned to
the queue.
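One simple multilevel-queue policy can be sketched as follows. This is an illustrative example, not from the original notes: it uses strict priority between a "system" queue and a "user" queue (one of several possible inter-queue policies), with FCFS inside each queue; the job names are hypothetical.

```python
from collections import deque

# Two queues with a fixed priority between them; each queue is FCFS internally.
queues = {"system": deque(), "user": deque()}
order = ["system", "user"]                # "system" always served before "user"

def submit(job, cls):
    queues[cls].append(job)

def run_all():
    executed = []
    while any(queues.values()):
        for cls in order:                 # pick the highest-priority non-empty queue
            if queues[cls]:
                executed.append(queues[cls].popleft())
                break
    return executed

submit("init", "system"); submit("editor", "user")
submit("daemon", "system"); submit("compile", "user")
schedule = run_all()
print(schedule)                           # ['init', 'daemon', 'editor', 'compile']
```

Both system jobs run before any user job, showing how queue-level priority overrides arrival order across queues.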

Classical IPC Problems

We will see a number of classical problems of synchronization as examples of a
large class of concurrency-control problems. In our solutions to the problems,
we use semaphores for synchronization, since that is the traditional way to
present such solutions. However, actual implementations of these solutions
could use mutex locks in place of binary semaphores.
These problems are used for testing nearly every newly proposed
synchronization scheme. The following problems of synchronization are
considered as classical problems:
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem
These problems are summarized below; for detailed algorithms, refer to the
note at the end of this section.
 Bounded-buffer (or Producer-Consumer) Problem:
The bounded-buffer problem is also called the producer-consumer
problem. The solution to this problem is to create two counting
semaphores, “full” and “empty”, to keep track of the current number
of full and empty buffer slots respectively. Producers produce items
and consumers consume them, but both operate on the shared buffer
one slot at a time.
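The two counting semaphores can be demonstrated with Python threads. This is a sketch of the classical semaphore solution, not code from the original notes; the buffer size and item values are arbitrary.

```python
import threading

N = 4                                    # bounded buffer capacity
buffer = []                              # the shared bounded buffer
mutex = threading.Semaphore(1)           # binary semaphore guarding the buffer
empty = threading.Semaphore(N)           # counts empty slots
full = threading.Semaphore(0)            # counts filled slots
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                  # wait for an empty slot
        with mutex:
            buffer.append(item)
        full.release()                   # signal: one more filled slot

def consumer(count):
    for _ in range(count):
        full.acquire()                   # wait for a filled slot
        with mutex:
            item = buffer.pop(0)
        empty.release()                  # signal: one more empty slot
        consumed.append(item)

items = list(range(10))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start(); p.join(); c.join()
print(consumed)                          # [0, 1, 2, ..., 9] — FIFO order preserved
```

With one producer and one consumer sharing a FIFO buffer, the consumed sequence always equals the produced sequence, regardless of thread interleaving.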

 Dining-Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are seated
around a circular table with one chopstick between each pair of
philosophers. A philosopher may eat if he can pick up the two
chopsticks adjacent to him. Each chopstick may be picked up by either
of its adjacent philosophers, but not by both. This problem involves
the allocation of limited resources to a group of processes in a
deadlock-free and starvation-free manner.
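One well-known deadlock-free solution uses resource ordering: every philosopher picks up the lower-numbered chopstick first, so a circular wait cannot form. The sketch below is illustrative (locks stand in for binary semaphores; the round count is arbitrary), not from the original notes.

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]  # one chopstick between each pair
meals = [0] * N                                    # meals eaten by each philosopher

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    # Deadlock avoidance: always acquire the lower-numbered chopstick first,
    # which breaks the circular-wait condition.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1                      # eat with both chopsticks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)                                       # [3, 3, 3, 3, 3] — all finish, no deadlock
```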

 Readers and Writers Problem:
Suppose that a database is to be shared among several concurrent
processes. Some of these processes may want only to read the
database, whereas others may want to update (that is, to read and
write) the database. We distinguish between these two types of
processes by referring to the former as readers and to the latter as
writers. In OS terminology, this situation is called the
readers-writers problem. Problem parameters:
 One set of data is shared among a number of processes.
 Once a writer is ready, it performs its write. Only one writer
may write at a time.
 If a process is writing, no other process can read the data.
 If at least one reader is reading, no other process can write.
 Readers may only read; they may not write.
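A sketch of the classical first readers-writers (readers-preference) solution: the first reader locks writers out, the last reader lets them back in, and writers hold the resource exclusively. Illustrative only; the shared value and thread counts are arbitrary, and the output interleaving is nondeterministic.

```python
import threading

read_count = 0
read_count_lock = threading.Lock()       # protects read_count
resource = threading.Semaphore(1)        # held by writers, or by the reader group
shared = {"value": 0}                    # the "database"
log = []

def reader(rid):
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()           # first reader locks out writers
    log.append(("read", rid, shared["value"]))   # many readers may be here at once
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()           # last reader lets writers in

def writer(value):
    resource.acquire()                   # exclusive access: no readers, no writers
    shared["value"] = value
    log.append(("write", value))
    resource.release()

threads = [threading.Thread(target=writer, args=(i,)) for i in range(3)]
threads += [threading.Thread(target=reader, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(len(log))                          # 6 — three reads and three writes completed
```

This variant can starve writers if readers keep arriving; the second readers-writers problem instead gives writers preference.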

 Sleeping Barber Problem:
Consider a barber shop with one barber, one barber chair, and N
chairs for waiting. When there are no customers, the barber goes to
sleep in the barber chair and must be woken when a customer comes in.
While the barber is cutting hair, new customers take the empty seats
to wait, or leave if there is no vacancy.

Note:-
Refer Classwork note for Algorithms Example of Above Classical IPC Problems
