
Yash Arora

Operating System: Concept, Components of Operating System, Operating System


Operations, Protection and Security. Computing Environment. Abstract View of OS: User
view, System View, Operating System Services, System Calls: Concept, types of System
Calls.
Computer System Architecture: Single-Processor Systems, Multiprocessor Systems. Types
of Operating systems: Batch Operating System, Multi-Programmed Operating System,
Time-Shared Operating System, Real Time Operating System, Distributed Operating
Systems.
Process Management: Process Concept, Operation on Processes, Cooperating Processes,
Inter-Process Communication, Threads.
Linux Operating System: Introduction to Linux OS, Basic Commands of Linux OS.

Architecture of Linux

Linux architecture has the following components:

1. Kernel: The kernel is the core of a Linux-based operating system. It virtualizes the
common hardware resources of the computer to provide each process with its
own virtual resources. This makes each process appear as if it is the sole process
running on the machine. The kernel is also responsible for preventing and
mitigating conflicts between processes. The main types of kernel
are:
 Monolithic Kernel
 Hybrid kernels
 Exo kernels
 Micro kernels
2. System Library: System libraries are special functions that are used to implement the
functionality of the operating system.
3. Shell: It is an interface to the kernel which hides the complexity of the kernel’s
functions from the users. It takes commands from the user and executes the
kernel’s functions.
4. Hardware Layer: This layer consists of all hardware devices such as the CPU,
RAM, HDD, and peripheral devices.
5. System Utility: It provides the functionalities of an operating system to the
user.
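The shell's job described above, taking a command from the user and having the kernel carry it out, can be sketched in Python. This is a minimal illustration, not how a real shell is implemented: `subprocess.run` performs the fork/exec sequence underneath and collects the program's output for the user.

```python
import subprocess

# A shell-like round trip: hand a command to the kernel (fork/exec
# happen underneath) and collect the program's output for the user.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
print(result.stdout.strip())  # → hello
```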
Operating System
An Operating System (OS) is an interface between a computer user and computer hardware. An
operating system is a software which performs all the basic tasks like file management, memory
management, process management, handling input and output, and controlling peripheral devices
such as disk drives and printers.
An operating system is software that enables applications to interact with a computer's hardware.
The software that contains the core components of the operating system is called the kernel.
The primary purposes of an Operating System are to enable applications (software) to interact
with a computer's hardware and to manage a system's hardware and software resources.

Definition:

An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.

Components of Operating System


An Operating System has the following eight components:

1. Process Management
2. I/O Device Management
3. File Management
4. Network Management
5. Main Memory Management
6. Secondary Storage Management
7. Security Management
8. Command Interpreter System

Process Management

A process is a program, or a fraction of a program, that is loaded in main memory. A process needs
certain resources including CPU time, Memory, Files, and I/O devices to accomplish its task. The
process management component manages the multiple processes running simultaneously on the
Operating System.
A program in running state is called a process.

The operating system is responsible for the following activities in connection with process
management:

 Create, load, execute, suspend, resume, and terminate processes.
 Switch the system among multiple processes in main memory.
 Provide communication mechanisms so that processes can communicate with each other.
 Provide synchronization mechanisms to control concurrent access to shared data, keeping
shared data consistent.
 Allocate and de-allocate resources properly to prevent or avoid deadlock situations.
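The create/terminate and suspend-until-done activities listed above can be sketched with the POSIX calls wrapped by Python's `os` module (Unix only). This is a minimal sketch, not a full process manager: the parent creates a child with `fork()`, the child terminates, and the parent collects its exit status.

```python
import os

# Sketch of process creation and termination (Unix only).
def spawn_and_wait(exit_code):
    """Create a child, let it terminate, and collect its exit status."""
    pid = os.fork()                  # duplicate the current process
    if pid == 0:                     # child branch
        os._exit(exit_code)          # terminate the child immediately
    _, status = os.waitpid(pid, 0)   # parent suspends until the child exits
    return os.WEXITSTATUS(status)    # decode the child's exit code

print(spawn_and_wait(7))  # → 7
```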
I/O Device Management

One of the purposes of an operating system is to hide the peculiarities of specific hardware devices
from the user. I/O Device Management provides an abstraction layer over hardware devices and keeps
their details away from applications, to ensure proper use of devices, to prevent errors, and to provide
users with a convenient and efficient programming environment.
Following are the tasks of I/O Device Management component:

 Hide the details of H/W devices
 Manage main memory for the devices using caching, buffering, and spooling
 Maintain and provide custom drivers for each device.

File Management

File management is one of the most visible services of an operating system. Computers can store
information in several different physical forms; magnetic tape, disk, and drum are the most common
forms.
A file is a collection of related information defined by its creator. Files commonly represent
programs (in both source and object form) and data. Data files can be of any type, such as
alphabetic, numeric, or alphanumeric.
A file is a sequence of bits, bytes, lines, or records whose meaning is defined by its creator and user.

The operating system implements the abstract concept of the file by managing mass-storage
devices, such as tapes and disks. Files are normally organized into directories to ease their use;
these directories may contain files and other directories, and so on.
The operating system is responsible for the following activities in connection with file
management:

 File creation and deletion


 Directory creation and deletion
 The support of primitives for manipulating files and directories
 Mapping files onto secondary storage
 File backup on stable (non-volatile) storage media

Network Management

The definition of network management is often broad, as network management involves several
different components. Network management is the process of managing and administering a
computer network. A computer network is a collection of various types of computers connected
with each other.
Network management comprises fault analysis, maintaining the quality of service, provisioning of
networks, and performance management.
Network management is the process of keeping your network healthy for efficient communication
between different computers.

Following are the features of network management:


 Network administration
 Network maintenance
 Network operation
 Network provisioning
 Network security

Main Memory Management

Memory is a large array of words or bytes, each with its own address. It is a repository of quickly
accessible data shared by the CPU and I/O devices.
Main memory is a volatile storage device which means it loses its contents in the case of system
failure or as soon as system power goes down.
The main motivation behind Memory Management is to maximize memory utilization on the computer
system.

The operating system is responsible for the following activities in connections with memory
management:

 Keep track of which parts of memory are currently being used and by whom.
 Decide which processes to load when memory space becomes available.
 Allocate and deallocate memory space as needed.

Secondary Storage Management

The main purpose of a computer system is to execute programs. These programs, together with
the data they access, must be in main memory during execution. Since the main memory is too
small to permanently accommodate all data and program, the computer system must provide
secondary storage to backup main memory.
Most modern computer systems use disks as the principal online storage medium, for both
programs and data. Most programs, such as compilers, assemblers, sort routines, editors, and formatters,
are stored on the disk until loaded into memory, and then use the disk as both the source
and destination of their processing.
The operating system is responsible for the following activities in connection with disk
management:

 Free space management
 Storage allocation
 Disk scheduling

Security Management

The operating system is primarily responsible for all the tasks and activities that happen in the computer
system. The various processes in an operating system must be protected from each other's
activities. For that purpose, the operating system provides mechanisms to ensure that files, memory
segments, the CPU, and other resources can be operated on only by those processes that have gained
proper authorization from the operating system.
Security Management refers to a mechanism for controlling the access of programs, processes, or
users to the resources defined by a computer system, together with some means of
enforcement.

For example, memory-addressing hardware ensures that a process can only execute within its own
address space. The timer ensures that no process can gain control of the CPU without relinquishing
it. Finally, no process is allowed to do its own I/O, which protects the integrity of the various peripheral
devices.

Command Interpreter System

One of the most important components of an operating system is its command interpreter. The
command interpreter is the primary interface between the user and the rest of the system.
Command Interpreter System executes a user command by calling one or more number of
underlying system programs or system calls.
Command Interpreter System allows human users to interact with the Operating System and
provides convenient programming environment to the users.

Many commands are given to the operating system by control statements. A program that reads
and interprets control statements is executed automatically. This program is called the shell;
examples include the Windows DOS command window, Bash on Unix/Linux, and the C shell on Unix/Linux.

Protection and Security in Operating System:


Protection and security require that computer resources such as the CPU, software, and memory are
protected. This extends to the operating system as well as the data in the system. This can be
done by ensuring integrity, confidentiality, and availability in the operating system. The system
must be protected against unauthorized access, viruses, worms, etc.

Threats to Protection and Security

A threat is a program that is malicious in nature and leads to harmful effects for the system. Some
of the common threats that occur in a system are −

Virus
Viruses are generally small snippets of code embedded in a system. They are very dangerous and
can corrupt files, destroy data, crash systems etc. They can also spread further by replicating
themselves as required.

Trojan Horse
A trojan horse can secretly access the login details of a system. Then a malicious user can use
these to enter the system as a harmless being and wreak havoc.
Trap Door
A trap door is a security breach that may be present in a system without the knowledge of the
users. It can be exploited to harm the data or files in a system by malicious people.

Worm
A worm can destroy a system by using its resources to extreme levels. It can generate multiple
copies which claim all the resources and don't allow any other processes to access them. A worm
can shut down a whole network in this way.

Denial of Service
These types of attacks do not allow legitimate users to access a system. The attack floods the
system with requests so that it cannot work properly for other users.

Protection and Security Methods


The different methods that may provide protection and security for different computer systems are −

Authentication
This deals with identifying each user in the system and making sure they are who they claim to
be. The operating system makes sure that all the users are authenticated before they access the
system. The different ways to make sure that the users are authentic are:

 Username/ Password
Each user has a distinct username and password combination and they need to enter
it correctly before they can access the system.
 User Key/ User Card
The users need to punch a card into the card slot or use their individual key on a
keypad to access the system.
 User Attribute Identification
Different user attribute identifications that can be used are fingerprint, eye retina etc.
These are unique for each user and are compared with the existing samples in the
database. The user can only access the system if there is a match.
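The username/password check described above can be sketched in a few lines. This is a toy illustration under one standard assumption not stated in the text: real systems store a salted hash of the password, never the password itself, and compare hashes in constant time.

```python
import hashlib, hmac, os

# Toy password check: store a salted hash, never the plain password.
def make_record(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = make_record("s3cret")
print(verify("s3cret", salt, digest))  # → True
print(verify("wrong", salt, digest))   # → False
```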
One Time Password
These passwords provide a lot of security for authentication purposes. A one time password can
be generated exclusively for a login every time a user wants to enter the system. It cannot be used
more than once. The various ways a one time password can be implemented are −

 Random Numbers
The system can ask for numbers that correspond to a pre-arranged set of alphabets.
This combination can be changed each time a login is required.
 Secret Key
A hardware device can create a secret key related to the user id for login. This key can
change each time.
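The defining property above, that a one-time password cannot be used more than once, can be sketched as follows. The class name and interface are illustrative, not from any real authentication library.

```python
import secrets

# Minimal sketch of the single-use property of one-time passwords.
class OneTimePasswords:
    def __init__(self):
        self._active = set()

    def issue(self):
        code = secrets.token_hex(4)     # fresh random code per login attempt
        self._active.add(code)
        return code

    def redeem(self, code):
        if code in self._active:
            self._active.discard(code)  # invalidate after first use
            return True
        return False

otp = OneTimePasswords()
code = otp.issue()
print(otp.redeem(code))  # → True
print(otp.redeem(code))  # → False (cannot be used more than once)
```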
Operating System Operations
An operating system is a construct that allows user application programs to interact with the
system hardware. The operating system by itself does not perform any useful function, but it provides an
environment in which different applications and programs can do useful work.
The major operations of the operating system are process management, memory management,
device management and file management. These are given in detail as follows:

Process Management

The operating system is responsible for managing processes, i.e., assigning the processor to one
process at a time. This is known as process scheduling. The different algorithms used for process
scheduling are FCFS (first come first served), SJF (shortest job first), priority scheduling, round
robin scheduling, etc.
There are many scheduling queues that are used to handle processes in process management.
When the processes enter the system, they are put into the job queue. The processes that are
ready to execute in the main memory are kept in the ready queue. The processes that are
waiting for the I/O device are kept in the device queue.
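Of the scheduling algorithms named above, FCFS is the easiest to work through by hand. The sketch below assumes, for illustration, that all processes arrive at time 0 in queue order; each process then waits for the bursts of everything ahead of it.

```python
# Toy FCFS sketch: CPU burst times listed in arrival order, all at t = 0.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # waiting time = sum of earlier bursts
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # → [0, 24, 27]
print(sum(waits) / len(waits))  # → 17.0 (average waiting time)
```

Note how sensitive FCFS is to order: scheduling the short bursts first, [3, 3, 24], would drop the average waiting time from 17 to 3.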

Memory Management

Memory management plays an important part in operating system. It deals with memory and the
moving of processes from disk to primary memory for execution and back again.
The activities performed by the operating system for memory management are −

 The operating system assigns memory to the processes as required. This can be
done using best fit, first fit, and worst fit algorithms.
 All the memory is tracked by the operating system, i.e., it notes which memory parts
are in use by the processes and which are free.
 The operating system deallocates memory from processes as required. This may
happen when a process has been terminated or when it no longer needs the memory.
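The first-fit and best-fit placement strategies mentioned above can be sketched over a list of free hole sizes. The hole sizes are illustrative; a real allocator would also split holes and track addresses.

```python
# Toy placement decisions over a list of free hole sizes.
def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:       # take the first hole that is big enough
            return i
    return None

def best_fit(holes, request):
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None  # smallest hole that still fits

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # → 1 (500 is the first hole large enough)
print(best_fit(holes, 212))   # → 3 (300 leaves the smallest leftover)
```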

Device Management

There are many I/O devices handled by the operating system such as mouse, keyboard, disk drive
etc. There are different device drivers that can be connected to the operating system to handle a
specific device. The device controller is an interface between the device and the device driver. The
user applications can access all the I/O devices using the device drivers, which are device specific
codes.

File Management

Files are used to provide a uniform view of data storage by the operating system. All the files are
mapped onto physical devices that are usually non-volatile so data is safe in the case of system
failure.
The files can be accessed by the system in two ways i.e. sequential access and direct access −

 Sequential Access
The information in a file is processed in order using sequential access. The file's
records are accessed one after another. Most applications, such as editors and
compilers, use sequential access.
 Direct Access
In direct access, or relative access, the file can be accessed at random for read and
write operations. The direct access method is based on the disk model of a file, since it
allows random access.
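The two access methods can be contrasted on the same file using the POSIX-style calls in Python's `os` module: successive reads advance through the file in order, while `lseek` jumps to an arbitrary offset. The file contents here are illustrative.

```python
import os, tempfile

# Sequential vs. direct access on the same ten-byte file.
def access_demo():
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"ABCDEFGHIJ")
        os.lseek(fd, 0, os.SEEK_SET)
        first = os.read(fd, 3)        # sequential: records come in order...
        second = os.read(fd, 3)       # ...continuing where the last read stopped
        os.lseek(fd, 8, os.SEEK_SET)  # direct: jump to an arbitrary offset
        direct = os.read(fd, 2)
        return first, second, direct
    finally:
        os.close(fd)
        os.remove(path)

print(access_demo())  # → (b'ABC', b'DEF', b'IJ')
```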

Abstract View of OS:


The operating system acts as a manager for all of the above resources and allocates them to specific
programs and users whenever necessary to perform a particular task. The operating system is therefore
the resource manager, meaning it manages the resources of a computer system internally. The
resources are the processor, memory, files, and I/O devices.

The abstract view of the components of a computer system is as follows −

Viewpoints of Operating System


The operating system may be observed from the viewpoint of the user or the system. It is known as
the user view and the system view. There are mainly two types of views of the operating system.
These are as follows:

1. User View
2. System View
User View
The user view depends on the system interface that is used by the users. Some systems are designed
for a single user to monopolize the resources, to maximize that user's work. In these cases, the OS is
designed primarily for ease of use, with some attention paid to performance and little to resource utilization.

The user viewpoint focuses on how the user interacts with the operating system through the usage
of various application programs. In contrast, the system viewpoint focuses on how the hardware
interacts with the operating system to complete various tasks.

1. Single User View Point

Most computer users use a monitor, keyboard, mouse, printer, and other accessories to operate their
computer system. In some cases, the system is designed to maximize the output of a single user. As
a result, more attention is paid to ease of use, and resource utilization is less important. These
systems are designed for a single-user experience and meet the needs of a single user,
where performance is not emphasized as much as in multi-user systems.

2. Multiple User View Point

Another example of the user view arises when one mainframe computer serves many users, each
working at their own terminal and interacting with the mainframe. In such circumstances, CPU time
and memory must be allocated effectively to give every user a good experience. The client-server
architecture is a similar example, where many clients interact through a remote server, and the same
constraint of using server resources effectively arises.

3. Handheld User View Point

Moreover, the touchscreen era has made handheld devices ubiquitous. Smartphones
interact via wireless networks to perform numerous operations, but their interface is more limited
than a full computer interface, which constrains their usefulness. Even so, their operating system is a
great example of a device designed around the user's point of view.

4. Embedded System User View Point

Some systems, like embedded systems, lack a user point of view. For example, the remote control used to
turn a TV on or off is part of an embedded system in which an electronic device communicates
with another program; the user viewpoint is limited to the narrow ways the user can engage with the
application.

System View
The OS may also be viewed as just a resource allocator. A computer system comprises various
sources, such as hardware and software, which must be managed effectively. The operating system
manages the resources, decides between competing demands, controls the program execution, etc.
According to this point of view, the operating system's purpose is to maximize performance.
The operating system is responsible for managing hardware resources and allocating them to
programs and users to ensure maximum performance.
From the user point of view, we've discussed the numerous applications that require varying degrees
of user participation. However, we are more concerned with how the hardware interacts with the
operating system than with the user from a system viewpoint. The hardware and the operating
system interact for a variety of reasons, including:

1. Resource Allocation

The hardware contains several resources like registers, caches, RAM, ROM, CPUs, I/O interaction, etc.
These are all resources that the operating system needs when an application program demands
them. Only the operating system can allocate resources, and it uses a variety of strategies to
get the most out of the hardware resources, including paging, virtual memory, caching, and so on.
Efficient allocation matters for the user view as well: inefficient resource allocation may cause the
user's system to lag or hang, reducing the user experience.

2. Control Program

The control program controls how input and output devices (hardware) interact with the operating
system. The user may request an action that can only be done with I/O devices; in this case, the
operating system must properly communicate with, control, detect, and handle such devices.

Components of Computer System


A computer system can be divided into four components, which are as follows −
Hardware − The hardware is the physical part which we can touch and feel, the central processing
unit (CPU), the memory, and the input/output (I/O) devices are the basic computing resources of a
computer system.
Application programs − Application programs are user or programmer created programs like
compilers, database systems, games, and business programs that define the ways in which these
resources can be used to solve the computing problems of the users.
Users − There are different types of users like people, machines, and even other computers which
are trying to solve different problems.
Operating system − An operating system is the interface between the user and the machine which
controls and coordinates the use of the hardware among the various application programs for the
various users.

Operating System - Services


An Operating System provides services to both the users and to the programs.

 It provides programs with an environment in which to execute.
 It provides users with services to execute programs in a convenient manner.
Following are a few common services provided by an operating system −
 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

Program execution
Operating systems handle many kinds of activities from user programs to system programs like printer
spooler, name servers, file server, etc. Each of these activities is encapsulated as a process.

A process includes the complete execution context (code to execute, data to manipulate, registers, OS
resources in use). Following are the major activities of an operating system with respect to program
management −

 Loads a program into memory.
 Executes the program.
 Handles the program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.
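The synchronization mechanism mentioned in the list above can be illustrated with a lock. This thread-level sketch shows the idea: without the lock, the two concurrent updates of the shared counter could interleave and lose increments.

```python
import threading

# A lock serializes access to a shared counter updated by two threads.
counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread may update at a time
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 200000
```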

I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the
peculiarities of specific hardware devices from the users.

An Operating System manages the communication between user and device drivers.

 An I/O operation means a read or write operation on a file or a specific I/O device.
 The operating system provides access to the required I/O device when needed.

File system manipulation


A file represents a collection of related information. Computers can store files on the disk (secondary
storage), for long-term storage purpose. Examples of storage media include magnetic tape, magnetic disk
and optical disk drives like CD, DVD. Each of these media has its own properties like speed, capacity, data
transfer rate and data access methods.

A file system is normally organized into directories for easy navigation and usage. These directories may
contain files and other directories. Following are the major activities of an operating system with respect to
file management −

 A program needs to read or write a file.
 The operating system grants the program permission to operate on the file.
 Permissions vary from read-only to read-write, denied, and so on.
 The operating system provides an interface to the user to create/delete files.
 The operating system provides an interface to the user to create/delete directories.
 The operating system provides an interface to create a backup of the file system.

Communication
In case of distributed systems which are a collection of processors that do not share memory, peripheral
devices, or a clock, the operating system manages communications between all the processes. Multiple
processes communicate with one another through communication lines in the network.

The OS handles routing and connection strategies, and the problems of contention and security. Following
are the major activities of an operating system with respect to communication −

 Two processes often require data to be transferred between them


 Both the processes can be on one computer or on different computers, but are connected
through a computer network.
 Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.
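Of the two methods above, message passing between two processes on one computer can be sketched with a pipe, using the POSIX calls wrapped by Python's `os` module (Unix only). The payload is illustrative.

```python
import os

# Message passing between a parent and a child process via a pipe.
def send_via_pipe(payload):
    read_end, write_end = os.pipe()
    pid = os.fork()
    if pid == 0:                    # child: the sender
        os.close(read_end)
        os.write(write_end, payload)
        os._exit(0)
    os.close(write_end)             # parent: the receiver
    message = os.read(read_end, 1024)
    os.close(read_end)
    os.waitpid(pid, 0)              # reap the child
    return message

print(send_via_pipe(b"hello from child"))  # → b'hello from child'
```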

Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the memory
hardware. Following are the major activities of an operating system with respect to error handling −

 The OS constantly checks for possible errors.


 The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management
In the case of a multi-user or multi-tasking environment, resources such as main memory, CPU cycles, and file
storage must be allocated to each user or job. Following are the major activities of an operating system
with respect to resource management −

 The OS manages all kinds of resources using schedulers.


 CPU scheduling algorithms are used for better utilization of CPU.

Protection
Considering a computer system having multiple users and concurrent execution of multiple processes, the
various processes must be protected from each other's activities.

Protection refers to a mechanism or a way to control the access of programs, processes, or users to the
resources defined by a computer system. Following are the major activities of an operating system with
respect to protection −

 The OS ensures that all access to system resources is controlled.


 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.
System Calls in Operating System (OS)
A system call is a way for a user program to interface with the operating system. The program requests
several services, and the OS responds by invoking a series of system calls to satisfy the request. A system
call can be written in assembly language or a high-level language like C or Pascal. System calls are
predefined functions that the operating system may directly invoke if a high-level language is used.

In this section, you will learn about system calls in the operating system, including their types and
related details.

What is a System Call?


A system call is a method for a computer program to request a service from the kernel of the operating
system on which it is running. A system call is a method of interacting with the operating system via
programs. A system call is a request from computer software to an operating system's kernel.

The Application Program Interface (API) connects the operating system's functions to user programs.
It acts as a link between the operating system and a process, allowing user-level programs to request
operating system services. The kernel can only be accessed using system calls. System calls are
required by any program that uses resources.

How are system calls made?


When a computer program needs to access the operating system's kernel, it makes a system call. The system
call uses an API to expose the operating system's services to user programs. It is the only method of accessing
the kernel. All programs or processes that require resources for execution must use system calls, as
they serve as an interface between the operating system and user programs.

Below are some examples of how a system call varies from a user function.

1. A system call function may create and use kernel processes to perform asynchronous processing.

2. A system call has greater authority than a standard subroutine. A system call with kernel-mode privilege executes
in the kernel protection domain.

3. System calls are not permitted to use shared libraries or any symbols that are not present in the kernel protection
domain.

4. The code and data for system calls are stored in global kernel memory.

Why do you need system calls in Operating System?


There are various situations in which system calls are required in the operating system. Some of these
situations are as follows:

1. A system call is required when a file system wants to create or delete a file.

2. Network connections require system calls for sending and receiving data packets.

3. If you want to read or write a file, you need system calls.

4. If you want to access hardware devices, such as a printer or scanner, you need a system call.
5. System calls are used to create and manage new processes.

How System Calls Work


Applications run in an area of memory known as user space. A system call connects to the operating
system's kernel, which executes in kernel space. When an application makes a system call, it must first obtain
permission from the kernel. It achieves this using an interrupt request, which pauses the current process and
transfers control to the kernel.

If the request is permitted, the kernel performs the requested action, such as creating or deleting a file.
When the operation is finished, the kernel moves the resulting data from kernel space to user space in
memory and returns it to the application, which then resumes execution.

A simple system call may take a few nanoseconds to provide its result, such as retrieving the system date and time.
A more complicated system call, such as connecting to a network device, may take a few seconds. Most
operating systems launch a distinct kernel thread for each system call to avoid bottlenecks. Modern operating
systems are multi-threaded, which means they can handle various system calls at the same time.

Types of System Calls


There are commonly five types of system calls. These are as follows:

1. Process Control

2. File Management

3. Device Management

4. Information Maintenance

5. Communication

Now, you will learn about all the different types of system calls one-by-one.
Process Control
Process control system calls are used to direct processes. Examples include
end, abort, load, execute, create process, and terminate process.

File Management
File management system calls are used to handle files. Examples include
create file, delete file, open, close, read, and write.

Device Management
Device management system calls are used to deal with devices. Examples
include request device, release device, read, write, and get/set device attributes.

Information Maintenance
Information maintenance system calls are used to maintain information. Examples
include get/set time or date and get/set system data.

Communication
Communication system calls are used for inter-process communication. Examples include
creating and deleting communication connections and sending and receiving messages.

Examples of Windows and Unix system calls


There are various examples of Windows and Unix system calls. These are as listed below in the table:

Category                  Windows                          Unix

Process Control           CreateProcess()                  fork()
                          ExitProcess()                    exit()
                          WaitForSingleObject()            wait()

File Manipulation         CreateFile()                     open()
                          ReadFile()                       read()
                          WriteFile()                      write()
                          CloseHandle()                    close()

Device Management         SetConsoleMode()                 ioctl()
                          ReadConsole()                    read()
                          WriteConsole()                   write()

Information Maintenance   GetCurrentProcessID()            getpid()
                          SetTimer()                       alarm()
                          Sleep()                          sleep()

Communication             CreatePipe()                     pipe()
                          CreateFileMapping()              shmget()
                          MapViewOfFile()                  mmap()

Protection                SetFileSecurity()                chmod()
                          InitializeSecurityDescriptor()   umask()
                          SetSecurityDescriptorGroup()     chown()

Here, you will learn about some methods briefly:

open()
The open() system call allows a process to access a file in a file system. It allocates resources for the file and provides a handle (file descriptor) that the process may refer to. Depending on the file system and the mode of access, a file may be opened by many processes at once or by a single process only.

read()
It is used to obtain data from a file on the file system. It accepts three arguments in general:

o A file descriptor.

o A buffer to store read data.

o The number of bytes to read from the file.

Before reading, the file must first be opened using open(); the file descriptor it returns identifies the file to be read.

wait()
In some systems, a process may have to wait for another process to complete its execution before proceeding. When a parent process creates a child process and then calls wait(), the parent's execution is suspended until the child finishes. Once the child process has completed its execution, control returns to the parent process.

write()
It is used to write data from a user buffer to a device like a file. This system call is one way for a program to
generate data. It takes three arguments in general:

o A file descriptor.

o A pointer to the buffer in which data is saved.

o The number of bytes to be written from the buffer.

fork()
Processes create clones of themselves using the fork() system call. It is one of the most common ways to create processes in operating systems. After fork(), the parent and child run concurrently; the parent may call wait() to suspend its own execution until the child completes, at which point control is returned to the parent process.

close()
It is used to end file system access. When this system call is invoked, it signifies that the program no longer
requires the file, and the buffers are flushed, the file information is altered, and the file resources are de-
allocated as a result.
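As a sketch, the open() → write() → close(), then open() → read() → close() sequence can be demonstrated with Python's os module, whose low-level functions are thin wrappers over the corresponding Unix system calls. The file name below is an arbitrary temporary file chosen for the demo, not anything from the notes:

```python
import os
import tempfile

# Arbitrary demo file in a fresh temp directory (an assumption for this sketch).
path = os.path.join(tempfile.mkdtemp(), "syscall_demo.txt")

# open(): allocate resources for the file and get back a file descriptor (handle)
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
written = os.write(fd, b"hello, kernel")  # write(): fd + buffer to write
os.close(fd)                              # close(): flush buffers, release resources

# read(): takes the fd and a maximum byte count; the data read is returned
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)
os.close(fd)

print(written, data)  # 13 b'hello, kernel'
```

Note that read() and write() work purely in terms of the file descriptor returned by open(), exactly as described above.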

exec()
This system call runs when an executable file, in the context of an already running process, replaces the earlier executable file. As a new process is not built, the old process identifier stays, but the new program replaces the process's code, data, stack, and heap.

exit()
The exit() system call is used to end program execution. In multi-threaded environments, this call indicates that the thread's execution is complete. The operating system reclaims the resources used by the process after the exit() system call.
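The fork()/wait()/exit() interplay described above can be sketched (on a Unix system) with Python's os module; the exit status 7 is an arbitrary value chosen for the demo:

```python
import os

pid = os.fork()                    # fork(): the process clones itself
if pid == 0:
    # Child process: do some work, then terminate with an arbitrary status.
    os._exit(7)                    # exit(): end execution, report status to parent
else:
    # Parent process: wait() suspends us until the child finishes.
    child_pid, status = os.waitpid(pid, 0)
    code = os.WEXITSTATUS(status)  # recover the child's exit status
```

In the parent, fork() returns the child's pid; in the child it returns 0, which is how the two copies tell themselves apart.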

Differences Between Single Processor and Multiprocessor


Systems
There are many differences between single-processor and multiprocessor systems. Some of these
are illustrated as follows −

 A single-processor system contains only one processor, while a multiprocessor system may
contain two or more processors.
 Single-processor systems use special-purpose controllers, such as the DMA (Direct Memory
Access) controller, to offload specific tasks. Multiprocessor systems, on the other hand,
have many processors that can perform different tasks, in either symmetric or asymmetric
multiprocessing.
 A single n-processor multiprocessor system is cheaper than n separate single-processor
systems, because the memory, peripherals, etc. are shared.
 It is easier to design a single-processor system than a multiprocessor system, because all
the processors in a multiprocessor system need to be synchronized, and this can be quite
complicated.
 The throughput of a multiprocessor system is greater than that of a single-processor
system. However, due to synchronization overhead, if the throughput of n single-processor
systems is T, then the throughput of an n-processor multiprocessor system will be less than T.
 Single-processor systems are less reliable than multiprocessor systems, because if the only
processor fails, the system cannot work. In a multiprocessor system, even if one processor
fails, the remaining processors can pick up the slack; at most, the throughput of the
system decreases a little.
 Historically, most personal computers were single-processor systems; modern personal
computers, however, are almost all multiprocessor (multicore) systems.
Types of Operating Systems

An Operating System performs all the basic tasks like managing files, processes, and memory.
Thus the operating system acts as the manager of all the resources, i.e., a resource manager,
and becomes an interface between the user and the machine.

Types of Operating Systems: Some widely used operating systems are as follows-

1. Batch Operating System –


This type of operating system does not interact with the computer directly. An operator takes
jobs with similar requirements and groups them into batches; it is the operator's
responsibility to sort jobs with similar needs.

Advantages of Batch Operating System:


 Processors of batch systems know how long a job will take once it is in the queue
 Multiple users can share a batch system
 The idle time of a batch system is very low
 It is easy to manage large, repetitive work in batch systems
Disadvantages of Batch Operating System:
 It is very difficult to guess or know the time required for any job to complete
 The computer operators must be well acquainted with batch systems
 Batch systems are hard to debug
 They are sometimes costly
 If any job fails, the other jobs will have to wait for an unknown time
Examples of batch-based processing: payroll systems, bank statements, etc.

2. Time-Sharing Operating Systems –


Each task is given some time to execute so that all tasks work smoothly. Each user gets a share
of CPU time, as they all use a single system. These systems are also known as Multitasking Systems.
The tasks can come from a single user or from different users. The time that each task gets to
execute is called a quantum (or time slice). After this time interval is over, the OS switches over to the next task.
Advantages of Time-Sharing OS:
 Each task gets an equal opportunity
 Fewer chances of duplication of software
 CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
 Reliability problem
 One must have to take care of the security and integrity of user programs and data
 Data communication problem
Examples of Time-Sharing OSs are: Multics, Unix, etc.

3. Distributed Operating System –


Distributed operating systems are a more recent advancement in the world of computer
technology and are being widely adopted at a rapid pace.
Various autonomous, interconnected computers communicate with each other using a shared
communication network. Each independent system possesses its own memory unit and CPU; such
systems are referred to as loosely coupled systems or distributed systems. The processors in
these systems may differ in size and function. The major benefit of this type of operating system
is that a user can always access files or software that are not actually present on his own
system but on some other system connected to the network; that is, remote access is
enabled among the devices connected to that network.

Advantages of Distributed Operating System:


 Failure of one will not affect the other network communication, as all systems are
independent from each other
 Electronic mail increases the data exchange speed
 Since resources are being shared, computation is highly fast and durable
 Load on host computer reduces
 These systems are easily scalable as many systems can be easily added to the network
 Delay in data processing reduces
Disadvantages of Distributed Operating System:
 Failure of the main network will stop the entire communication
 To establish distributed systems, the languages used are not yet well defined
 These types of systems are not readily available, as they are very expensive; moreover,
the underlying software is highly complex and not yet well understood
Examples of Distributed Operating System are- LOCUS, etc.

4. Network Operating System –


These systems run on a server and provide the capability to manage data, users, groups, security,
applications, and other networking functions. These types of operating systems allow shared
access of files, printers, security, applications, and other networking functions over a small
private network. One more important aspect of Network Operating Systems is that all the users
are well aware of the underlying configuration, of all other users within the network, their
individual connections, etc., and that is why these computers are popularly known as tightly
coupled systems.

Advantages of Network Operating System:


 Highly stable centralized servers
 Security concerns are handled through servers
 New technologies and hardware up-gradation are easily integrated into the system
 Server access is possible remotely from different locations and types of systems
Disadvantages of Network Operating System:
 Servers are costly
 User has to depend on a central location for most operations
 Maintenance and updates are required regularly
Examples of Network Operating System are: Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD, etc.
5. Real-Time Operating System –
These types of OSs serve real-time systems. The time interval required to process and respond
to inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile
systems, air traffic control systems, robots, etc.
Two types of Real-Time Operating System which are as follows:
 Hard Real-Time Systems:
These OSs are meant for applications where the time constraints are very strict and even
the shortest possible delay is not acceptable. These systems are built for life-saving
applications, like automatic parachutes or airbags, which must be readily available in case
of an accident. Virtual memory is rarely found in these systems.
 Soft Real-Time Systems:
These OSs are meant for applications where the time constraints are less strict.

Advantages of RTOS:
 Maximum Consumption: Maximum utilization of devices and the system, thus more output
from all the resources
 Task Shifting: The time assigned for shifting tasks in these systems is very small. For
example, in older systems, it takes about 10 microseconds to shift from one task to
another, while in the latest systems it takes 3 microseconds.
 Focus on Application: The focus is on running applications, with less importance given to
applications waiting in the queue.
 Real-time operating systems in embedded systems: Since the size of programs is
small, an RTOS can also be used in embedded systems, such as in transport and others.
 Error Free: These types of systems are designed to be as close to error-free as possible.
 Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
 Limited Tasks: Very few tasks run at the same time, and concentration is limited to a
few applications in order to avoid errors.
 Heavy use of system resources: These systems can consume a lot of system resources,
which are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the designer
to write.
 Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt
signals so that it can respond to interrupts as quickly as possible.
 Thread Priority: Setting thread priorities is difficult, as these systems are less
prone to switching tasks.
Examples of Real-Time Operating Systems are: Scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
Operating System - Processes

Process
A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the system.

To put it in simple terms, we write our computer programs in a text file and when we execute this program,
it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into the memory and it becomes a process, it can be divided into four sections ─
stack, heap, text and data. The following image shows a simplified layout of a process inside main memory

S.N. Component & Description

1
Stack

The process Stack contains the temporary data such as method/function parameters, return address
and local variables.

2
Heap

This is dynamically allocated memory to a process during its run time.

3
Text

This section contains the compiled program code. The current activity, represented by the value
of the Program Counter and the contents of the processor's registers, executes the instructions in this section.
4
Data

This section contains the global and static variables.

Program
A program is a piece of code which may be a single line or millions of lines. A computer program is usually
written by a computer programmer in a programming language. For example, here is a simple program
written in C programming language −

#include <stdio.h>

int main() {
printf("Hello, World! \n");
return 0;
}

A computer program is a collection of instructions that performs a specific task when executed by a
computer. When we compare a program with a process, we can conclude that a process is a dynamic
instance of a computer program.

A part of a computer program that performs a well-defined task is known as an algorithm. A collection of
computer programs, libraries and related data are referred to as a software.

Process Life Cycle


When a process executes, it passes through different states. These stages may differ in different operating
systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

S.N. State & Description

1
Start

This is the initial state when a process is first started/created.

2
Ready

The process is waiting to be assigned to a processor. Ready processes are waiting for the
operating system to allocate the processor to them so that they can run. A process may come into this
state after the Start state, or while running, when the scheduler interrupts it to assign the CPU to some
other process.

3
Running

Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.

4
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user input,
or waiting for a file to become available.

5
Terminated or Exit

Once the process finishes its execution, or it is terminated by the operating system, it is moved to the
terminated state where it waits to be removed from main memory.
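The five states and their legal transitions above can be captured in a small lookup table. A sketch in Python (the lowercase state names and the `step` helper are illustrative, not standard):

```python
# Legal transitions of the five-state process model described above.
TRANSITIONS = {
    "start":      {"ready"},
    "ready":      {"running"},
    "running":    {"ready", "waiting", "terminated"},
    "waiting":    {"ready"},
    "terminated": set(),
}

def step(state, new_state):
    """Move a process to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# A typical lifetime: created, scheduled, blocks on I/O, runs again, exits.
state = "start"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    state = step(state, nxt)
```

Note that a waiting process cannot go straight back to running; it must pass through the ready state and be picked by the scheduler again.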

Process Control Block (PCB)


A Process Control Block is a data structure maintained by the Operating System for every process. The PCB
is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a
process as listed below in the table −

S.N. Information & Description

1
Process State

The current state of the process i.e., whether it is ready, running, waiting, or whatever.

2
Process privileges

This is required to allow/disallow access to system resources.

3
Process ID

Unique identification for each of the process in the operating system.

4
Pointer

A pointer to parent process.

5
Program Counter

Program Counter is a pointer to the address of the next instruction to be executed for this process.

6
CPU registers
The various CPU registers whose contents must be saved and restored for the process to run.

7
CPU Scheduling Information

Process priority and other scheduling information which is required to schedule the process.

8
Memory management information

This includes the information of page table, memory limits, Segment table depending on memory used
by the operating system.

9
Accounting information

This includes the amount of CPU used for process execution, time limits, execution ID etc.

10
IO status information

This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System and may contain different
information in different operating systems. Here is a simplified diagram of a PCB −

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
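A PCB can be sketched as a simple record whose fields mirror the table above. This is a simplified illustration with hypothetical field types, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified Process Control Block mirroring the table above."""
    pid: int                                        # unique process ID
    state: str = "start"                            # current process state
    program_counter: int = 0                        # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register contents
    priority: int = 0                               # CPU scheduling information
    memory_limits: tuple = (0, 0)                   # memory-management information
    open_files: list = field(default_factory=list)  # I/O status information

# The OS keeps one PCB per process, typically indexed by PID.
process_table = {p.pid: p for p in (PCB(pid=1), PCB(pid=2, priority=5))}
process_table[2].state = "ready"
```

When the scheduler later switches processes, it is exactly these fields (program counter, registers, state) that are saved and restored.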
Operating System - Process Scheduling

Definition
The process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of a Multiprogramming operating systems. Such operating systems
allow more than one process to be loaded into the executable memory at a time and the loaded process
shares the CPU using time multiplexing.

Categories of Scheduling
There are two categories of scheduling:

1. Non-preemptive: Here the resource (the CPU) cannot be taken from a process until the process
completes its execution. The switch of resources occurs only when the running process terminates
or moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During
execution, a process may switch from the running state to the ready state, or from the waiting state to
the ready state. This switching occurs because the CPU may give priority to other processes and
replace the currently running process with a higher-priority one.

Process Scheduling Queues


The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the same execution state are
placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.

 Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.

 Device queues − The processes which are blocked due to unavailability of an I/O device constitute
this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler
determines how to move processes between the ready and run queues which can only have one entry per
processor core on the system; in the above diagram, it has been merged with the CPU.

Two-State Process Model


Two-state process model refers to running and non-running states which are described below −

S.N. State & Description

1
Running

When a new process is created, it enters the system in the running state.

2
Not Running

Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the
queue is a pointer to a particular process, and the queue is implemented using a linked list. The
dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the
process has completed or aborted, it is discarded. In either case, the dispatcher then selects
a process from the queue to execute.

Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the job queue and loads them into memory for
execution, where they become eligible for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of processes
leaving the system.

On some systems, the long-term scheduler may be absent or minimal; time-sharing operating
systems, for example, have no long-term scheduler. The long-term scheduler acts when a process
changes state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with
the chosen set of criteria. It carries out the change of a process from the ready state to the running
state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes the processes from the memory. It reduces the
degree of multiprogramming. The medium-term scheduler is in-charge of handling the swapped out-
processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make
any progress towards completion. In this condition, to remove the process from memory and make space
for other processes, the suspended process is moved to secondary storage. This is
called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.

Comparison among Scheduler

S.N.  Long-Term Scheduler                     Short-Term Scheduler                  Medium-Term Scheduler

1     It is a job scheduler                   It is a CPU scheduler                 It is a process-swapping
                                                                                    scheduler.

2     Speed is lower than that of the         Speed is the fastest among            Speed is in between that of the
      short-term scheduler                    the three                             short- and long-term schedulers.

3     It controls the degree of               It provides less control over         It reduces the degree of
      multiprogramming                        the degree of multiprogramming        multiprogramming.

4     It is almost absent or minimal          It is also minimal in                 It is a part of time-sharing
      in time-sharing systems                 time-sharing systems                  systems.

5     It selects processes from the pool      It selects those processes            It can re-introduce a process
      and loads them into memory for          which are ready to execute            into memory, and execution can
      execution                                                                     be continued.

Context Switching
Context switching is the mechanism used to store and restore the state (context) of a CPU in the Process Control
Block so that a process's execution can be resumed from the same point at a later time. Using this technique,
a context switcher enables multiple processes to share a single CPU. Context switching is an essential
feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to execute another, the state from the
current running process is stored into the process control block. After this, the state for the process to run
next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can
start executing.

Context switches are computationally intensive, since register and memory state must be saved and
restored. To reduce context-switching time, some hardware systems employ two or more sets
of processor registers. When the process is switched, the following information is stored for later use.

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
Operating System Scheduling algorithms
A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms which we are going to discuss
in this chapter −

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so
that once a process enters the running state, it cannot be preempted until it completes its allotted time,
whereas the preemptive scheduling is based on priority where a scheduler may preempt a low priority
running process anytime when a high priority process enters into a ready state.

First Come First Serve (FCFS)

 Jobs are executed on first come, first serve basis.


 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.

Wait time of each process is as follows (using the same process table as the SJN example below:
arrival times 0, 1, 2, 3 and execution times 5, 3, 8, 6) −

Process   Wait Time = Service Time − Arrival Time

P0        0 − 0 = 0
P1        5 − 1 = 4
P2        8 − 2 = 6
P3        16 − 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
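These figures can be reproduced with a short simulation. The process table is assumed to be the same one the notes use for the SJN example (arrival times 0, 1, 2, 3; execution times 5, 3, 8, 6), since it is not repeated in the FCFS section:

```python
# Assumed process table: name -> (arrival time, execution time)
procs = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}

def fcfs_wait_times(procs):
    """Run jobs in arrival order; wait = service time - arrival time."""
    clock, waits = 0, {}
    for name, (arrival, burst) in sorted(procs.items(), key=lambda kv: kv[1][0]):
        clock = max(clock, arrival)   # CPU idles until the job arrives
        waits[name] = clock - arrival
        clock += burst                # job runs to completion (non-preemptive)
    return waits

waits = fcfs_wait_times(procs)
average = sum(waits.values()) / len(waits)
print(waits, average)  # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13} 5.75
```

The long wait for P3 illustrates why FCFS performs poorly on average: one long job (P2) delays everything behind it.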

Shortest Job Next (SJN)


 This is also known as shortest job first, or SJF
 This is a non-preemptive scheduling algorithm; its preemptive version is the Shortest Remaining Time algorithm.
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not known.
 The processor should know in advance how much time the process will take.
Given: Table of processes, and their Arrival time, Execution time

Process Arrival Time Execution Time Service Time

P0 0 5 0

P1 1 3 5

P2 2 8 14

P3 3 6 8

Waiting time of each process is as follows −

Process Waiting Time


P0 0-0=0

P1 5-1=4

P2 14 - 2 = 12

P3 8-3=5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25
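The service and wait times above can be reproduced by simulating the table directly (a sketch; ties between equally short jobs are broken arbitrarily):

```python
# Process table from above: name -> (arrival time, execution time)
procs = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}

def sjn_service_times(procs):
    """Non-preemptive SJN: always start the shortest job that has arrived."""
    remaining = dict(procs)
    clock, service = 0, {}
    while remaining:
        ready = [n for n, (a, _) in remaining.items() if a <= clock]
        if not ready:                                 # CPU idles until an arrival
            clock = min(a for a, _ in remaining.values())
            continue
        name = min(ready, key=lambda n: remaining[n][1])  # shortest burst next
        arrival, burst = remaining.pop(name)
        service[name] = clock                          # job starts here
        clock += burst
    return service

service = sjn_service_times(procs)
waits = {n: service[n] - procs[n][0] for n in procs}
average = sum(waits.values()) / len(waits)
```

At time 8, P3 (burst 6) is chosen ahead of the earlier-arriving P2 (burst 8), which is exactly how the table above ends up with service times 0, 5, 14, 8.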

Priority Based Scheduling


 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first and so
on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
Given: Table of processes with their arrival times, execution times, and priorities. Here we consider 1
to be the lowest priority.

Process Arrival Time Execution Time Priority Service Time

P0 0 5 1 0

P1 1 3 2 11

P2 2 8 1 14

P3 3 6 3 5
Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 11 - 1 = 10

P2 14 - 2 = 12

P3 5-3=2

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6
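These figures can likewise be reproduced by simulation. Note the table's convention that 1 is the lowest priority, so the scheduler picks the ready process with the largest priority number (a sketch; ties fall back to arrival order, FCFS):

```python
# Process table from above: name -> (arrival time, execution time, priority)
# Convention from the notes: 1 is the LOWEST priority.
procs = {"P0": (0, 5, 1), "P1": (1, 3, 2), "P2": (2, 8, 1), "P3": (3, 6, 3)}

def priority_service_times(procs):
    """Non-preemptive priority scheduling: highest-priority ready job first."""
    remaining = dict(procs)
    clock, service = 0, {}
    while remaining:
        ready = [n for n, (a, _, _) in remaining.items() if a <= clock]
        if not ready:                                  # CPU idles until an arrival
            clock = min(a for a, _, _ in remaining.values())
            continue
        # Largest priority wins; for equal priorities, earlier arrival wins.
        name = max(ready, key=lambda n: (remaining[n][2], -remaining[n][0]))
        arrival, burst, _ = remaining.pop(name)
        service[name] = clock
        clock += burst
    return service

service = priority_service_times(procs)
waits = {n: service[n] - procs[n][0] for n in procs}
average = sum(waits.values()) / len(waits)
```

At time 5, P3 (priority 3) jumps ahead of P1 and P2, which is why P3's service time is 5 in the table above.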

Shortest Remaining Time


 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can be preempted by a newer
ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is not known.
 It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling


 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum (or time slice).
 Once a process has executed for the given time period, it is preempted and another process executes
for its time period.
 Context switching is used to save the states of preempted processes.
Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 (0 - 0) + (12 - 3) = 9

P1 (3 - 1) = 2

P2 (6 - 2) + (14 - 9) + (20 - 17) = 12

P3 (9 - 3) + (17 - 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
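The wait times above can be reproduced by simulating the same process set with a time quantum of 3. The quantum is not stated in the notes; 3 is the value that yields these numbers, under the common assumption that newly arrived jobs enter the ready queue ahead of a just-preempted one:

```python
from collections import deque

# Assumed process table (as before): name -> (arrival time, execution time)
procs = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}
QUANTUM = 3  # assumed quantum; not stated in the notes

def round_robin_waits(procs, quantum):
    arrivals = deque(sorted(procs.items(), key=lambda kv: kv[1][0]))
    ready = deque()
    remaining, entered = {}, {}          # burst left; time of entering ready queue
    waits = {n: 0 for n in procs}
    clock = 0

    def admit(now):
        # Move every process that has arrived by `now` into the ready queue.
        while arrivals and arrivals[0][1][0] <= now:
            name, (arr, burst) = arrivals.popleft()
            remaining[name], entered[name] = burst, arr
            ready.append(name)

    admit(0)
    while ready or arrivals:
        if not ready:                     # CPU idles until the next arrival
            clock = arrivals[0][1][0]
            admit(clock)
        name = ready.popleft()
        waits[name] += clock - entered[name]  # time just spent waiting in queue
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        admit(clock)                      # new arrivals queue before the
        if remaining[name] > 0:           # preempted process re-enters
            entered[name] = clock
            ready.append(name)
    return waits

waits = round_robin_waits(procs, QUANTUM)
average = sum(waits.values()) / len(waits)
```

Each interrupted process accumulates several wait intervals, which is exactly what the (service − arrival) sums in the table above express.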

Multiple-Level Queues Scheduling


Multiple-level queues are not an independent scheduling algorithm. They make use of other existing
algorithms to group and schedule jobs with common characteristics.

 Multiple queues are maintained for processes with common characteristics.


 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.

For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The
Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based on
the algorithm assigned to the queue.
Process Creation in Operating Systems
A process can create several new processes through create-process system calls during its
execution. The creating process is called the parent process, and the new process is its child
process.
Every new process can in turn create other processes, forming a tree-like structure. Each process
is identified by a unique process identifier, usually written as pid, which is typically an integer.
Every process needs resources such as CPU time, memory, files, and I/O devices to accomplish its task.
Whenever a process creates a sub-process, each sub-process may obtain its
resources directly from the operating system or from the resources of the parent process. The
parent process may need to partition its resources among all its children, or it may share
some resources with several children.
Restricting a child process to a subset of the parent's resources prevents any process from
overloading the system by creating too many sub-processes. A process obtains its
resources when it is created.
Let us consider a tree of process on a typical Solaris system as follows −

Whenever a process creates a new process, there are two possibilities in terms of execution, which
are as follows −
 The parent continues to execute concurrently with its children.
 The parent waits till some or all its children have terminated.
There are two more possibilities in terms of address space of the new process, which are as
follows −
 The child process is a duplicate of the parent process.
 The child process has a new program loaded into it.
Cooperating Process
Cooperating processes are those that can affect or are affected by other processes running on the
system. Cooperating processes may share data with each other.

Reasons for needing cooperating processes


There may be many reasons for the requirement of cooperating processes. Some of these are given
as follows −

 Modularity
Modularity involves dividing complicated tasks into smaller subtasks. These subtasks
can be completed by different cooperating processes. This leads to faster and more
efficient completion of the required tasks.
 Information Sharing
Sharing of information between multiple processes can be accomplished using
cooperating processes. This may include access to the same files. A mechanism is
required so that the processes can access the files in parallel to each other.
 Convenience
There are many tasks that a user needs to do such as compiling, printing, editing etc.
It is convenient if these tasks can be managed by cooperating processes.
 Computation Speedup
Subtasks of a single task can be performed parallelly using cooperating processes.
This increases the computation speedup as the task can be executed faster. However,
this is only possible if the system has multiple processing elements.

Methods of Cooperation
Cooperating processes can coordinate with each other using shared data or messages. Details
about these are given as follows −

 Cooperation by Sharing
The cooperating processes can cooperate with each other using shared data such as
memory, variables, files, databases, etc. A critical section is used to provide data
integrity, and writes are made mutually exclusive to prevent inconsistent data.
A diagram that demonstrates cooperation by sharing is given as follows −
In the above diagram, Process P1 and P2 can cooperate with each other using shared
data such as memory, variables, files, databases etc.
• Cooperation by Communication
The cooperating processes can cooperate with each other using messages. This may lead to deadlock if each process is waiting for a message from the other before performing an operation. Starvation is also possible if a process never receives a message.
[Diagram omitted: Process P1 and Process P2 cooperating by exchanging messages.]

Process Creation vs Process Termination in Operating System

Process creation and process termination are used to create and terminate processes respectively. Details about these are given as follows −
Process Creation
A process may be created in the system for different operations. Some of the events that lead to
process creation are as follows −

• User request for process creation
• System initialization
• Batch job initialization
• Execution of a process-creation system call by a running process
A process may be created by another process using fork(). The creating process is called the parent
process and the created process is the child process. A child process can have only one parent but
a parent process may have many children. Both the parent and child processes have the same
memory image, open files and environment strings. However, they have distinct address spaces.
[Diagram omitted: process creation using fork(), in which a parent process creates a child process.]
Process Termination
Process termination occurs when a process finishes execution or is forcibly ended. The exit() system call is used by most operating systems for process termination.
Some of the causes of process termination are as follows −
• A process may be terminated after its execution is naturally completed. The process leaves the processor and releases all its resources.
• A child process may be terminated if its parent process requests its termination.
• A process can be terminated if it tries to use a resource that it is not allowed to. For example, a process can be terminated for trying to write into a read-only file.
• If an I/O failure occurs for a process, it can be terminated. For example, if a process requires the printer and the printer is not working, then the process will be terminated.
• In most cases, if a parent process is terminated then its child processes are also terminated, because a child process cannot exist without its parent process.
• If a process requires more memory than is currently available in the system, then it is terminated because of memory scarcity.
Interprocess communication in Operating System
Processes in an operating system often need to communicate with each other; this is called interprocess communication (IPC). This section discusses IPC, the need for it, and the different approaches for doing it.

IPC is one of the key mechanisms used by operating systems to let processes cooperate. IPC helps processes communicate with each other without having to go through user-level routines or interfaces. It allows different parts of a program to access shared data and files without causing conflicts among them. In interprocess communication, messages are exchanged between two or more processes, which can be on the same computer or on different computers.

What is interprocess communication?

Interprocess communication (IPC) is a mechanism that allows different processes of a computer system to share information. IPC lets different programs run in parallel, share data, and communicate with each other. It is important for two reasons: first, it speeds up the execution of tasks, and second, it ensures that tasks run correctly and in the order in which they were issued.

Advantages of interprocess communication

• Interprocess communication allows one application to manage another and enables glitch-free data sharing.
• Interprocess communication helps send messages efficiently between processes.
• A program is easier to maintain and debug because it is divided into different sections of code that work separately.
• A user can perform a variety of tasks at the same time, such as editing, listening to music and compiling.
• Data can be shared between different programs at the same time.
• Tasks can be subdivided and run on special types of processors, which then exchange data via IPC.
Disadvantages of interprocess communication

• Processes or programs that use the shared-memory model must make sure that they are not writing to the same memory locations at the same time.
• The shared-memory model can cause problems such as synchronization and protection that need to be addressed.
• IPC is slower than a direct function call.

How does IPC work in computer systems?

IPC occurs when an application sends a message to an operating system process. The operating system passes the message to the designated IPC mechanism, which handles it and sends a response back to the application. IPC mechanisms can be found in the kernel or in the user space of an operating system.

IPC is essential to the operation of computer systems: it enables different programs to run in parallel, share data, and communicate with each other, and it ensures that tasks run correctly and in the order in which they were issued.

How to do interprocess communication

1. Message passing

One important way that interprocess communication takes place is via message passing. When two or more processes participate in interprocess communication, each process sends messages to the others via the kernel. Here is an example of sending a message between two processes: Process A sends a message "M" to the OS kernel, and the message is then read by Process B. A communication link is required between the two processes for successful message exchange, and there are several ways to create these links.
2. Shared memory

Shared memory is a region of memory established by two or more processes and shared between them. Access to this region must be synchronized between the processes so that they do not corrupt each other's data. Two processes A and B can set up a shared memory segment and exchange data through this shared memory area. Shared memory is important for these reasons −

• It is a way of passing data between processes.
• Shared memory is much faster than message passing and is also more reliable.
• Shared memory allows two or more processes to share the same copy of the data.

Suppose process A wants to communicate with process B. Each process attaches the shared memory segment to its own address space; process A then writes a message to the shared memory, and process B reads that message from it. The processes themselves are responsible for synchronization, so that both do not write to the same location at the same time.

3. Pipes

Pipes are a type of data channel commonly used for one-way communication between two processes. Because a pipe is a half-duplex technique, the primary process can only send data to the secondary process; an additional pipe is required to achieve full duplex. Two pipes create a bidirectional data channel between the two processes, while one pipe creates a unidirectional data channel. Pipes originated on UNIX-like operating systems and are also available on Windows. In a typical arrangement, one process writes a message into the pipe, and another process retrieves the message and writes it to the standard output.
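The one-way channel described above can be sketched with os.pipe() (an illustration of ours; a POSIX system is assumed, since it uses os.fork()). The child writes into one end of the pipe and the parent reads from the other:

```python
import os

r, w = os.pipe()  # one pipe: a unidirectional (half-duplex) data channel
pid = os.fork()   # POSIX only
if pid == 0:
    os.close(r)               # the child only writes
    os.write(w, b"ping")      # send a message into the pipe
    os.close(w)
    os._exit(0)
else:
    os.close(w)               # the parent only reads
    msg = os.read(r, 4)       # retrieve the message from the other end
    os.close(r)
    os.waitpid(pid, 0)
    print(msg.decode())       # ping
```

A second pipe, with the roles reversed, would be needed for the parent to answer, which is the full-duplex arrangement mentioned above.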
4. Signal

A signal is a facility that allows processes to communicate with each other: it is a way of telling a process that it needs to do something. A process can send a signal to another process, and a signal can also interrupt the process that receives it.
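A small sketch (ours; a POSIX system is assumed, where SIGUSR1 exists) of a process registering a signal handler and then signalling itself:

```python
import os
import signal

received = []

def handler(signum, frame):
    # Runs when the signal interrupts the process.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)  # register a handler for SIGUSR1
os.kill(os.getpid(), signal.SIGUSR1)    # send the signal to this very process
print(signal.SIGUSR1 in received)       # True: the handler was invoked
```

In the two-process case, os.kill would be given the other process's PID instead of os.getpid().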
Threads
A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.
A thread shares with its peer threads some information, such as the code segment, the data segment and open files. When one thread alters a shared data item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving operating system performance by reducing overhead; in other respects a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors.
[Figure omitted: the working of a single-threaded and a multithreaded process.]
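The shared data segment can be demonstrated with Python's threading module (a sketch of ours): all four threads update the same counter variable, and a lock keeps their read-modify-write sequences from interleaving destructively.

```python
import threading

counter = 0  # lives in the data segment shared by all peer threads
lock = threading.Lock()

def work():
    global counter
    for _ in range(10000):
        with lock:        # serialize the read-modify-write on the shared item
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every thread altered the same shared variable
```

Contrast this with the fork() example earlier: child processes get private copies of a variable, while peer threads all see one copy.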

Difference between Process and Thread

1. A process is heavyweight and resource intensive; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not need to interact with the operating system.
3. In multiple processing environments, each process executes the same code but has its own memory and file resources; all threads can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multiple threaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others; one thread can read, write or change another thread's data.
Advantages of Thread

• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Communication between threads is efficient.
• It is more economical to create and context switch threads than processes.
• Threads allow utilization of multiprocessor architectures on a greater scale and with greater efficiency.

Types of Thread
Threads are implemented in the following two ways −
• User Level Threads − threads managed by the user, without kernel involvement.
• Kernel Level Threads − threads managed by the operating system, acting on the kernel, the operating system core.

User Level Threads

In this case, the kernel is not aware of the existence of threads; thread management is done entirely by a user-level thread library. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application starts with a single thread.
Advantages
 Thread switching does not require Kernel mode privileges.
 User level thread can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.
Disadvantages
• In a typical operating system, most system calls are blocking, so when one user-level thread makes a blocking call, the kernel blocks the entire process and with it every thread inside.
• A multithreaded application cannot take advantage of multiprocessing, because the kernel sees and schedules only a single process.

Kernel Level Threads

In this case, thread management is done by the kernel; there is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded, and all of the threads within an application are supported within a single process.
The kernel maintains context information for the process as a whole and for individual threads within the process, and scheduling by the kernel is done on a per-thread basis. The kernel performs thread creation, scheduling and management in kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages
• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
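The second advantage can be seen in practice: CPython's threads are kernel-level threads on typical platforms, so a blocking call in one thread does not stop its peers. A small sketch of ours:

```python
import threading
import time

results = []

def blocker():
    time.sleep(0.2)  # a blocking call: this thread waits
    results.append("blocker done")

def worker():
    results.append("worker done")  # runs while the other thread is blocked

t1 = threading.Thread(target=blocker)
t2 = threading.Thread(target=worker)
t1.start()
t2.start()
t1.join()
t2.join()
print(results)  # ['worker done', 'blocker done']
```

Under a pure user-level (many-to-one) implementation, the blocking call would instead suspend the whole process, and "worker done" could not appear first.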
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models −

• Many-to-many relationship.
• Many-to-one relationship.
• One-to-one relationship.

Many to Many Model

The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.
[Diagram omitted: the many-to-many threading model, with 6 user-level threads multiplexed onto 6 kernel-level threads.] In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides the best level of concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Many to One Model

The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
When user-level thread libraries are implemented on an operating system whose kernel does not support threads, the many-to-one model is used.
One to One Model
There is a one-to-one relationship between each user-level thread and a kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.
Difference between User-Level & Kernel-Level Thread

1. User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
2. User-level threads are implemented by a thread library at the user level; kernel-level threads are created with operating system support.
3. A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
4. Multithreaded applications built on user-level threads cannot take advantage of multiprocessing; with kernel-level threads, kernel routines themselves can be multithreaded.