
Operating System, Computer-System Organization, Architecture, OS Structure and Operations, Process, Memory, Storage Management, Protection, Security, Distributed Systems, Computing Environments (Operating System) Introduction (Presentation)

Content: What Operating Systems Do, Computer-System Organization, Computer-System Architecture, Operating-System Structure, Operating-System Operations, Process Management, Memory Management, Storage Management, Protection and Security, Distributed Systems, Special-Purpose Systems, Computing Environments, Open-Source Operating Systems

Details
Objectives
 To provide a grand tour of the major operating-system components
 To provide coverage of basic computer system organization
What is an Operating System?
 A program that acts as an intermediary between a user of a computer and the
computer hardware
 Operating system goals:
o Execute user programs and make solving user problems easier
o Make the computer system convenient to use
o Use the computer hardware in an efficient manner
Computer System Structure
 Computer system can be divided into four components:
o Hardware – provides basic computing resources
 CPU, memory, I/O devices
o Operating system
 Controls and coordinates use of hardware among various applications
and users
o Application programs – define the ways in which the system resources are
used to solve the computing problems of the users
 Word processors, compilers, web browsers, database systems, video
games
o Users
 People, machines, other computers
Four Components of a Computer System
What Operating Systems Do
 Depends on the point of view
 Users want convenience, ease of use
o Don’t care about resource utilization
 But shared computer such as mainframe or minicomputer must keep all
users happy
 Users of dedicated systems such as workstations have dedicated resources but frequently use shared resources from servers
 Handheld computers are resource poor, optimized for usability and battery life
 Some computers have little or no user interface, such as embedded computers
in devices and automobiles
Operating System Definition
 OS is a resource allocator
o Manages all resources
o Decides between conflicting requests for efficient and fair resource use
 OS is a control program
o Controls execution of programs to prevent errors and improper use of the
computer
 No universally accepted definition
 “Everything a vendor ships when you order an operating system” is a good approximation
o But varies wildly
 “The one program running at all times on the computer” is the kernel.
Everything else is either a system program (ships with the operating system) or
an application program.
Computer Startup
 bootstrap program is loaded at power-up or reboot
o Typically stored in ROM or EPROM, generally known as firmware
o Initializes all aspects of system
o Loads operating system kernel and starts execution
Computer System Organization
 Computer-system operation
o One or more CPUs, device controllers connect through common bus
providing access to shared memory
o Concurrent execution of CPUs and devices competing for memory cycles

Computer-System Operation
 I/O devices and the CPU can execute concurrently
 Each device controller is in charge of a particular device type
 Each device controller has a local buffer
 CPU moves data from/to main memory to/from local buffers
 I/O is from the device to local buffer of controller
 Device controller informs CPU that it has finished its operation by causing an
interrupt
Common Functions of Interrupts
 Interrupt transfers control to the interrupt service routine generally, through
the interrupt vector, which contains the addresses of all the service routines
 Interrupt architecture must save the address of the interrupted instruction
 Incoming interrupts are disabled while another interrupt is being processed to
prevent a lost interrupt
 A trap is a software-generated interrupt caused either by an error or a user
request
 An operating system is interrupt driven
Interrupt Handling
 The operating system preserves the state of the CPU by storing registers and
the program counter
 Determines which type of interrupt has occurred:
o polling
o vectored interrupt system
 Separate segments of code determine what action should be taken for each
type of interrupt
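As an illustration of the vectored approach, here is a minimal user-space sketch in C. The table and handler names (interrupt_vector, timer_handler, disk_handler) are invented for the example; a real interrupt vector is set up by the hardware and the kernel, and a real handler would first save the CPU state.

#include <stdio.h>

/* Illustrative service routines; real handlers run in the kernel. */
static void timer_handler(void) { printf("timer interrupt serviced\n"); }
static void disk_handler(void)  { printf("disk interrupt serviced\n"); }

/* The "interrupt vector": an array of routine addresses indexed by interrupt number. */
static void (*interrupt_vector[])(void) = { timer_handler, disk_handler };

static void dispatch(int irq) {
    /* Vectored dispatch: jump straight to the routine registered for this number. */
    interrupt_vector[irq]();
}

int main(void) {
    dispatch(0);   /* simulate a timer interrupt */
    dispatch(1);   /* simulate a disk interrupt  */
    return 0;
}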
Interrupt Timeline
I/O Structure
 After I/O starts, control returns to user program only upon I/O completion
o Wait instruction idles the CPU until the next interrupt
o Wait loop (contention for memory access)
o At most one I/O request is outstanding at a time, no simultaneous I/O
processing
 After I/O starts, control returns to user program without waiting for I/O
completion
o System call – request to the operating system to allow user to wait for
I/O completion
o Device-status table contains entry for each I/O device indicating its
type, address, and state
o Operating system indexes into I/O device table to determine device status
and to modify table entry to include interrupt                
Direct Memory Access Structure
 Used for high-speed I/O devices able to transmit information at close to
memory speeds
 Device controller transfers blocks of data from buffer storage directly to main
memory without CPU intervention
 Only one interrupt is generated per block, rather than the one interrupt per
byte
Storage Structure
 Main memory – only large storage media that the CPU can access directly
o Random access
o Typically volatile
 Secondary storage – extension of main memory that provides
large nonvolatile storage capacity
 Magnetic disks – rigid metal or glass platters covered with magnetic recording
material
o Disk surface is logically divided into tracks, which are subdivided
into sectors
o The disk controller determines the logical interaction between the device
and the computer
Storage Hierarchy
 Storage systems organized in hierarchy
o Speed
o Cost
o Volatility
 Caching – copying information into faster storage system; main memory can
be viewed as a cache for secondary storage
Storage-Device Hierarchy

Caching
 Important principle, performed at many levels in a computer (in hardware,
operating system, software)
 Information in use copied from slower to faster storage temporarily
 Faster storage (cache) checked first to determine if information is there
o If it is, information used directly from the cache (fast)
o If not, data copied to cache and used there
 Cache smaller than storage being cached
o Cache management important design problem
o Cache size and replacement policy
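The check-the-faster-level-first idea can be sketched in a few lines of C. This is only an illustration: cache_get and backing_read are made-up names standing in for the cache lookup and the slower storage level.

#include <stdio.h>

#define CACHE_SLOTS 8                     /* the cache is smaller than the storage it fronts */

struct slot { int key, value, valid; };
static struct slot cache[CACHE_SLOTS];

/* Stand-in for the slower storage level (e.g., disk behind main memory). */
static int backing_read(int key) { return key * key; }

static int cache_get(int key) {
    struct slot *s = &cache[key % CACHE_SLOTS];
    if (s->valid && s->key == key)        /* faster storage checked first        */
        return s->value;                  /* hit: use it directly from the cache */
    s->key   = key;                       /* miss: copy into the cache...        */
    s->value = backing_read(key);
    s->valid = 1;
    return s->value;                      /* ...and use it from there            */
}

int main(void) {
    printf("%d %d\n", cache_get(5), cache_get(5));   /* the second call is a cache hit */
    return 0;
}

Because the cache is smaller than the storage being cached, two keys can map to the same slot; deciding which entry to evict in that case is exactly the replacement-policy question mentioned above.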
Computer-System Architecture
 Most systems use a single general-purpose processor (PDAs through
mainframes)
o Most systems have special-purpose processors as well
 Multiprocessors systems growing in use and importance
o Also known as parallel systems, tightly-coupled systems
o Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault tolerance
o Two types:
1. Asymmetric Multiprocessing
2. Symmetric Multiprocessing
How a Modern Computer Works
 A von Neumann architecture
Symmetric Multiprocessing Architecture

A Dual-Core Design
Clustered Systems
 Like multiprocessor systems, but multiple systems working together
 Usually sharing storage via a storage-area network (SAN)
 Provides a high-availability service which survives failures
o Asymmetric clustering has one machine in hot-standby mode
o Symmetric clustering has multiple nodes running applications,
monitoring each other
 Some clusters are for high-performance computing (HPC)
o Applications must be written to use parallelization

Operating System Structure


 Multiprogramming needed for efficiency
o Single user cannot keep CPU and I/O devices busy at all times
o Multiprogramming organizes jobs (code and data) so CPU always has one
to execute
o A subset of total jobs in system is kept in memory
o One job selected and run via job scheduling
o When it has to wait (for I/O for example), OS switches to another job
 Timesharing (multitasking) is logical extension in which CPU switches jobs
so frequently that users can interact with each job while it is running,
creating interactive computing
o Response time should be < 1 second
o Each user has at least one program executing in memory → process
o If several jobs ready to run at the same time → CPU scheduling
o If processes don’t fit in memory, swapping moves them in and out to run
o Virtual memory allows execution of processes not completely in memory
Memory Layout for Multiprogrammed System

Operating-System Operations
 Interrupt driven by hardware
 Software error or request creates exception or trap
o Division by zero, request for operating system service
 Other process problems include infinite loop, processes modifying each other or
the operating system
 Dual-mode operation allows OS to protect itself and other system components
o User mode and kernel mode
o Mode bit provided by hardware
 Provides ability to distinguish when system is running user code or
kernel code
 Some instructions designated as privileged, only executable in
kernel mode
 System call changes mode to kernel, return from call resets it to user
Transition from User to Kernel Mode
 Timer to prevent infinite loop / process hogging resources
o Set interrupt after specific period
o Operating system decrements counter
o When counter zero generate an interrupt
o Set up before scheduling process to regain control or terminate program
that exceeds allotted time
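A user program cannot program the hardware timer itself, but the idea can be approximated with the standard POSIX alarm()/SIGALRM facility: ask for an interrupt after a specific period, regain control in a handler, and terminate the runaway work. The sketch below is only an analogy for the kernel-level mechanism described above.

#include <signal.h>
#include <unistd.h>

/* Runs when the "timer interrupt" (SIGALRM) arrives after the allotted time. */
static void on_timer(int sig) {
    (void)sig;
    static const char msg[] = "time slice expired, regaining control\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(1);                 /* terminate the program that exceeded its allotment */
}

int main(void) {
    signal(SIGALRM, on_timer);
    alarm(2);                 /* request an interrupt after a specific period (2 s) */
    for (;;)                  /* a runaway loop that the timer will cut short       */
        ;
}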
Process Management
 A process is a program in execution. It is a unit of work within the system.
Program is a passive entity, process is an active entity.
 Process needs resources to accomplish its task
o CPU, memory, I/O, files
o Initialization data
 Process termination requires reclaim of any reusable resources
 Single-threaded process has one program counter specifying location of next
instruction to execute
o Process executes instructions sequentially, one at a time, until completion
 Multi-threaded process has one program counter per thread
 Typically system has many processes, some user, some operating system
running concurrently on one or more CPUs
o Concurrency by multiplexing the CPUs among the processes / threads       
Process Management Activities
The operating system is responsible for the following activities in connection with
process management:
 Creating and deleting both user and system processes
 Suspending and resuming processes
 Providing mechanisms for process synchronization
 Providing mechanisms for process communication
 Providing mechanisms for deadlock handling
Memory Management
 All data in memory before and after processing
 All instructions in memory in order to execute
 Memory management determines what is in memory when
o Optimizing CPU utilization and computer response to users
 Memory management activities
o Keeping track of which parts of memory are currently being used and by
whom
o Deciding which processes (or parts thereof) and data to move into and out
of memory
o Allocating and deallocating memory space as needed        
Storage Management
 OS provides uniform, logical view of information storage
o Abstracts physical properties to logical storage unit - file
o Each medium is controlled by a device (e.g., disk drive, tape drive)
 Varying properties include access speed, capacity, data-transfer rate,
access method (sequential or random)
 File-System management
o Files usually organized into directories
o Access control on most systems to determine who can access what
o OS activities include
 Creating and deleting files and directories
 Primitives to manipulate files and directories
 Mapping files onto secondary storage
 Backup files onto stable (non-volatile) storage media
Mass-Storage Management
 Usually disks used to store data that does not fit in main memory or data that
must be kept for a “long” period of time
 Proper management is of central importance
 Entire speed of computer operation hinges on disk subsystem and its
algorithms
 OS activities
o Free-space management
o Storage allocation
o Disk scheduling
 Some storage need not be fast
o Tertiary storage includes optical storage, magnetic tape
o Still must be managed – by OS or applications
o Varies between WORM (write-once, read-many-times) and RW (read-
write)
Performance of Various Levels of Storage
Movement between levels of storage hierarchy can be explicit or implicit

Migration of Integer A from Disk to Register


 Multitasking environments must be careful to use most recent value, no matter where it is stored in the storage hierarchy
 Multiprocessor environments must provide cache coherency in hardware such that all CPUs have the most recent value in their cache
 Distributed environment situation even more complex
o Several copies of a datum can exist
I/O Subsystem
 One purpose of OS is to hide peculiarities of hardware devices from the user
 I/O subsystem responsible for
o Memory management of I/O including buffering (storing data temporarily
while it is being transferred), caching (storing parts of data in faster
storage for performance), spooling (the overlapping of output of one job
with input of other jobs)
o General device-driver interface
o Drivers for specific hardware devices        
Protection and Security
 Protection – any mechanism for controlling access of processes or users to
resources defined by the OS
 Security – defense of the system against internal and external attacks
o Huge range, including denial-of-service, worms, viruses, identity theft,
theft of service
 Systems generally first distinguish among users, to determine who can do what
o User identities (user IDs, security IDs) include name and associated
number, one per user
o User ID then associated with all files, processes of that user to determine
access control
o Group identifier (group ID) allows set of users to be defined and controls
managed, then also associated with each process, file
o Privilege escalation allows user to change to effective ID with more
rights
Distributed Computing
 Collection of separate, possibly heterogeneous, systems networked together
o Network is a communications path
 Local Area Network (LAN)
 Wide Area Network (WAN)
 Metropolitan Area Network (MAN)
 Network Operating System provides features between systems across network
o Communication scheme allows systems to exchange messages
o Illusion of a single system
Special-Purpose Systems
 Real-time embedded systems most prevalent form of computers
o Vary considerably; special purpose, limited purpose OS, real-time OS
 Multimedia systems
o Streams of data must be delivered according to time restrictions
 Handheld systems
o PDAs, smart phones, limited CPU, memory, power
o Reduced feature set OS, limited I/O
Computing Environments
 Traditional computer
o Blurring over time
o Office environment
 PCs connected to a network, terminals attached to mainframe or
minicomputers providing batch and timesharing
 Now portals allowing networked and remote systems access to same
resources
o Home networks
 Used to be single system, then modems
 Now firewalled, networked
 Client-Server Computing
o Dumb terminals supplanted by smart PCs
o Many systems now servers, responding to requests generated by clients
 Compute-server provides an interface to client to request services
(i.e., database)
 File-server provides interface for clients to store and retrieve files

Peer-to-Peer Computing
 Another model of distributed system
 P2P does not distinguish clients and servers; instead all nodes are considered peers
o May each act as client, server or both
o Node must join P2P network
 Registers its service with central lookup service on network, or
 Broadcast request for service and respond to requests for service
via discovery protocol
o Examples include Napster and Gnutella
Web-Based Computing
 Web has become ubiquitous
 PCs most prevalent devices
 More devices becoming networked to allow web access
 New category of devices to manage web traffic among similar servers: load
balancers
 Use of operating systems like Windows 95, which were client-side only, has evolved into Linux and Windows XP, which can act as both clients and servers
Open-Source Operating System
 Operating systems made available in source-code format rather than just
binary closed-source
 Counter to the copy protection and Digital Rights Management
(DRM) movement
 Started by Free Software Foundation (FSF), which has the “copyleft” GNU General Public License (GPL)
 Examples include GNU/Linux and BSD UNIX (including core of Mac OS X),
and many more

What is an Open-Source Operating System?


The term "open source" refers to computer software or applications where the
owners or copyright holders enable the users or third parties to use, see, and edit the
product's source code. The source code of an open-source OS is publicly visible and
editable. Common operating systems such as Apple's iOS, Microsoft's Windows, and Apple's Mac OS are closed-source operating systems. Open-source software is licensed
in such a way that it is permissible to produce as many copies as you want and to use
them wherever you like. It generally uses fewer resources than its commercial
counterpart because it lacks any code for licensing, promoting other products,
authentication, attaching advertisements, etc.

The open-source operating system allows the use of code that is freely distributed
and available to anyone and for commercial purposes. Being an open-source
application or program, the program source code of an open-source OS is available.
The user may modify or change those codes and develop new applications according
to the user's requirements. Some basic examples of open-source operating systems are Linux, OpenSolaris, FreeRTOS, OpenBSD, FreeBSD, Minix, etc.

How does Open-Source Operating System work?


It works similarly to a closed operating system, except that the user may modify the
source code of the program or application. There may be a difference in function
even if there is no difference in performance.

For instance, the information is packed and stored in a proprietary (closed) operating
system. In open-source, the same thing happens. However, because the source code
is visible to you, you may better understand the process and change how data is
processed.

While a closed operating system aims to be secure and hassle-free out of the box, an open-source OS requires some technical knowledge; however, you may customize it and increase performance. There is no specific way or framework for working on an open-source OS, but it may be customized to the user's requirements.

What is a System Call?


A system call is a method for a computer program to request a
service from the kernel of the operating system on which it is
running. In other words, it is the way a program interacts with the
operating system: a request from the software to the operating
system's kernel.

The Application Program Interface (API) connects the operating
system's functions to user programs. It acts as a link between the
operating system and a process, allowing user-level programs to
request operating system services. The kernel can only be
accessed using system calls. System calls are required for any
program that uses resources.

How are system calls made?


When a piece of software needs to access the operating system's
kernel, it makes a system call. System calls are exposed to user
programs through an API (application program interface) and are
the only method of accessing the kernel. All programs or processes
that require resources for execution must use system calls, as they
serve as the interface between the operating system and user
programs.

How System Calls Work


Applications run in an area of memory known as user space. A
system call connects to the operating system's kernel, which
executes in kernel space. When an application makes a system call,
it must first obtain permission from the kernel. It achieves this using
an interrupt request, which pauses the current process and transfers
control to the kernel.

If the request is permitted, the kernel performs the requested
action, such as creating or deleting a file. When the operation is
finished, the kernel moves the resulting data from kernel space back
to user space in memory and returns the results to the application,
which then resumes execution.
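As a small concrete example, the C program below requests the same kernel service (writing to standard output) twice: once through the usual library wrapper write(), and once through the generic syscall() entry point. The second form assumes a Linux-style syscall(2) interface and is shown only to make the trap into the kernel explicit.

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>   /* SYS_write; assumes a Linux-style syscall(2) interface */

int main(void) {
    const char *a = "hello via the write() wrapper\n";
    const char *b = "hello via the raw syscall() entry point\n";

    /* Usual path: the C library wrapper traps into the kernel on our behalf. */
    write(STDOUT_FILENO, a, strlen(a));

    /* Same service requested through the generic system-call entry point. */
    syscall(SYS_write, STDOUT_FILENO, b, strlen(b));
    return 0;
}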

Types of System Calls


There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication

Process Control
Process control is the system call that is used to direct the
processes. Some process control examples include creating, load,
abort, end, execute, process, terminate the process, etc.
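A minimal POSIX sketch of this category: fork() creates a new process, exit() ends it, and waitpid() lets the parent wait for the child's termination.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* process-control call: create a new process */
    if (pid == 0) {
        printf("child %d running\n", (int)getpid());
        exit(0);                     /* process-control call: end/terminate         */
    }
    waitpid(pid, NULL, 0);           /* parent waits for the child to finish        */
    printf("parent reaped child %d\n", (int)pid);
    return 0;
}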

File Management
File management is a system call that is used to handle the files.
Some file management examples include creating files, delete files,
open, close, read, write, etc.
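A minimal POSIX sketch of this category, creating a file, writing to it, reading it back, closing it and deleting it (the file name demo.txt is just an example):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *text = "file management demo\n";

    int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);   /* create/open */
    if (fd < 0)
        return 1;
    write(fd, text, strlen(text));               /* write                          */
    lseek(fd, 0, SEEK_SET);                      /* rewind to the beginning        */

    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);       /* read back                      */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    close(fd);                                   /* close                          */
    unlink("demo.txt");                          /* delete                         */
    return 0;
}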
Device Management
Device management is a system call that is used to deal with
devices. Some examples of device management include read,
device, write, get device attributes, release device, etc.

Information Maintenance
Information maintenance is the category of system calls used to
maintain information. Examples include getting and setting the time
or date, and getting and setting system data.
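A short POSIX sketch of this category, querying process identifiers and the current time:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    /* information-maintenance calls: query identifiers and the current time */
    printf("pid = %d, parent pid = %d\n", (int)getpid(), (int)getppid());

    time_t now = time(NULL);
    printf("current time: %s", ctime(&now));
    return 0;
}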

Communication
Communication is the category of system calls used for interprocess
communication. Examples include creating and deleting
communication connections, and sending and receiving messages.
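A minimal POSIX sketch of this category, using pipe() to create a communication connection and a parent/child pair to send and receive a message:

#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                                   /* create a communication connection */

    if (fork() == 0) {                          /* child: the sender                 */
        close(fd[0]);
        const char *msg = "message through the pipe\n";
        write(fd[1], msg, strlen(msg));         /* send message                      */
        _exit(0);
    }

    close(fd[1]);                               /* parent: the receiver              */
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf);   /* receive message                   */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    wait(NULL);
    return 0;
}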
System Programs

 System programs provide OS functionality through separate applications, which are not part of the kernel or command interpreters. They are also known as system utilities or system applications.
 Most systems also ship with useful applications such as calculators and simple editors (e.g., Notepad). Some debate arises as to the border between system and non-system applications.
 System programs may be divided into these categories:
o File management - programs to create, delete, copy, rename, print,
list, and generally manipulate files and directories.
o Status information - Utilities to check on the date, time, number of
users, processes running, data logging, etc. System registries are
used to store and recall configuration information for particular
applications.
o File modification - e.g. text editors and other tools which can change
file contents.
o Programming-language support - E.g. Compilers, linkers, debuggers,
profilers, assemblers, library archive management, interpreters for
common languages, and support for make.
o Program loading and execution - loaders, dynamic loaders, overlay
loaders, etc., as well as interactive debuggers.
o Communications - Programs for providing connectivity between
processes and users, including mail, web browsers, remote logins,
file transfers, and remote command execution.
o Background services - System daemons are commonly started when
the system is booted, and run for as long as the system is running,
handling necessary services. Examples include network daemons,
print servers, process schedulers, and system error monitoring
services.
 Most operating systems today also come complete with a set
of application programs to provide additional services, such as copying files
or checking the time and date.
 Most users' view of the system is determined by their command interpreter and the application programs. Most never make system calls, even through the API (with the exception of simple file I/O in user-written programs).

Different approaches or Structures of Operating Systems

An operating system can be implemented with the help of various structures.


The structure of the OS depends mainly on how the various common
components of the operating system are interconnected and melded into the
kernel. Depending on this we have following structures of the operating
system: 
Simple structure: 
Such operating systems do not have a well-defined structure and are small,
simple and limited systems. The interfaces and levels of functionality are not
well separated. MS-DOS is an example of such an operating system. In MS-DOS,
application programs are able to access the basic I/O routines. These
types of operating systems cause the entire system to crash if one of the user
programs fails.
Diagram of the structure of MS-DOS is shown below. 
 
Advantages of Simple structure: 
 It delivers better application performance because of the few interfaces
between the application program and the hardware.
 Easy for kernel developers to develop such an operating system.
Disadvantages of Simple structure: 
 The structure is very complicated, as no clear boundaries exist between modules.
 It does not enforce data hiding in the operating system.
Layered structure: 
In this approach the OS is broken into a number of layers (levels), giving much
more control over the system. The bottom layer (layer 0) is the hardware and
the topmost layer (layer N) is the user interface. The layers are designed so
that each layer uses the functions of the lower-level layers only. This
simplifies debugging: if the lower-level layers have already been debugged and
an error occurs, the error must be in the layer currently being debugged.
The main disadvantage of this structure is that at each layer the data needs
to be modified and passed on, which adds overhead to the system. Moreover,
careful planning of the layers is necessary, as a layer can use only lower-level
layers. UNIX is an example of this structure.
 
Advantages of Layered structure:
 Layering makes it easier to enhance the operating system as
implementation of a layer can be changed easily without affecting the
other layers.
 It is very easy to perform debugging and system verification.
Disadvantages of Layered structure:
 In this structure the application performance is degraded as compared to
simple structure. 
 It requires careful planning for designing the layers as higher layers use
the functionalities of only the lower layers.
Micro-kernel: 
This structure designs the operating system by removing all non-essential
components from the kernel and implementing them as system and user
programs. This results in a smaller kernel called the micro-kernel.
The advantage of this structure is that new services are added to
user space and do not require the kernel to be modified. Thus it is more
secure and reliable: if a service fails, the rest of the operating system
remains untouched. Mac OS is an example of this type of OS.
Advantages of Micro-kernel structure:
 It makes the operating system portable to various platforms.
 As microkernels are small, they can be tested effectively.
Disadvantages of Micro-kernel structure:
 Increased level of inter module communication degrades system
performance.
Modular structure or approach: 
It is considered the best approach for an OS. It involves designing a
modular kernel. The kernel has only a set of core components, and other
services are added as dynamically loadable modules to the kernel either
at run time or boot time. It resembles the layered structure in that
each kernel module has defined and protected interfaces, but it is more flexible
than the layered structure, as a module can call any other module.
For example, the Solaris OS is organized as shown in the figure.
 

OPERATING SYSTEM GENERATIONS


Operating systems have evolved over the years, and their evolution can be mapped using
generations of operating systems. Four generations of operating systems are commonly
identified.

Booting in Operating System


Booting is the process of starting a computer. It can be initiated by hardware such as
a button press or by a software command. After it is switched on, a CPU has no
software in its main memory, so some processes must load software into memory
before execution. This may be done by hardware or firmware in the CPU or by a
separate processor in the computer system.

Restarting a computer also is called rebooting, which can be "hard", e.g., after
electrical power to the CPU is switched from off to on, or "soft", where the power is
not cut. On some systems, a soft boot may optionally clear RAM to zero. Hard and
soft booting can be initiated by hardware such as a button press or a software
command. Booting is complete when the operative runtime system, typically the
operating system and some applications, is attained.

Process
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned
in the program.
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data. The following image shows a
simplified layout of a process inside main memory −

1. Stack – The process stack contains the temporary data such as method/function parameters, return address and local variables.

2. Heap – This is dynamically allocated memory given to a process during its run time.

3. Text – This includes the current activity represented by the value of the program counter and the contents of the processor's registers.

4. Data – This section contains the global and static variables.
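The small C program below is a rough illustration of where each kind of variable lives in this layout; it is only a sketch of the convention, and the exact placement is up to the compiler and operating system.

#include <stdio.h>
#include <stdlib.h>

int global_counter = 42;                      /* data section: global/static variables */

int main(void) {                              /* text section: the program's code      */
    int local = 7;                            /* stack: parameters and local variables */
    int *dynamic = malloc(sizeof *dynamic);   /* heap: memory allocated at run time    */
    if (dynamic == NULL)
        return 1;
    *dynamic = 99;
    printf("data=%d stack=%d heap=%d\n", global_counter, local, *dynamic);
    free(dynamic);
    return 0;
}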

Program
A program is a piece of code which may be a single line or millions of lines. A
computer program is usually written by a computer programmer in a programming
language. For example, here is a simple program written in C programming
language −
#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task when
executed by a computer. When we compare a program with a process, we can
conclude that a process is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as
an algorithm. A collection of computer programs, libraries and related data is
referred to as software.

Process Life Cycle


When a process executes, it passes through different states. These stages may
differ in different operating systems, and the names of these states are also not
standardized.
In general, a process can have one of the following five states at a time.
1. Start – This is the initial state when a process is first started/created.

2. Ready – The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running it may be interrupted by the scheduler when the CPU is assigned to some other process.

3. Running – Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.

4. Waiting – The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input or waiting for a file to become available.

5. Terminated or Exit – Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state where it waits to be removed from main memory.

Process Control Block (PCB)


A Process Control Block is a data structure maintained by the Operating System for
every process. The PCB is identified by an integer process ID (PID). A PCB keeps
all the information needed to keep track of a process as listed below in the table −
1. Process State – The current state of the process, i.e., whether it is ready, running, waiting, or whatever.

2. Process privileges – Required to allow/disallow access to system resources.

3. Process ID – Unique identification for each process in the operating system.

4. Pointer – A pointer to the parent process.

5. Program Counter – A pointer to the address of the next instruction to be executed for this process.

6. CPU registers – The various CPU registers whose contents must be saved for the process while it is in the running state, so that execution can later resume correctly.

7. CPU Scheduling Information – Process priority and other scheduling information required to schedule the process.

8. Memory management information – Information such as the page table, memory limits and segment table, depending on the memory system used by the operating system.

9. Accounting information – The amount of CPU used for process execution, time limits, execution ID, etc.

10. IO status information – A list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may
contain different information in different operating systems. Here is a simplified
diagram of a PCB −

The PCB is maintained for a process throughout its lifetime, and is deleted once the
process terminates.
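Conceptually, a PCB can be pictured as a C structure like the sketch below. The field names here are invented for illustration; real kernels keep this information in their own, much larger structures (Linux's task_struct, for example).

/* Illustrative sketch of a Process Control Block; not the layout of any real kernel. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

struct pcb {
    int           pid;              /* process ID                              */
    proc_state    state;            /* current process state                   */
    unsigned long program_counter;  /* address of the next instruction         */
    unsigned long registers[16];    /* saved CPU registers                     */
    int           priority;         /* CPU-scheduling information              */
    void         *page_table;       /* memory-management information           */
    long          cpu_time_used;    /* accounting information                  */
    int           open_files[16];   /* I/O status: files/devices allocated     */
    struct pcb   *parent;           /* pointer to the parent process           */
};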

A process can be of two types:


 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes
while a co-operating process can be affected by other executing processes.
Though one might think that processes running independently will execute very
efficiently, in reality there are many situations when the co-operative nature
can be utilized for increasing computational speed, convenience, and
modularity. Inter-process
communication (IPC) is a mechanism that allows processes to communicate
with each other and synchronize their actions. The communication between
these processes can be seen as a method of co-operation between them.
Processes can communicate with each other through both:
 
1. Shared Memory
2. Message passing
Figure 1 below shows a basic structure of communication between
processes via the shared memory method and via the message passing
method. 

An operating system can implement both methods of communication. First,
we will discuss the shared memory method of communication and then
message passing. Communication between processes using shared memory
requires processes to share some variable, and it completely depends on
how the programmer will implement it. One way of communication using
shared memory can be imagined like this: Suppose process1 and process2
are executing simultaneously, and they share some resources or use some
information from another process. Process1 generates information about
certain computations or resources being used and keeps it as a record in
shared memory. When process2 needs to use the shared information, it will
check in the record stored in shared memory and take note of the information
generated by process1 and act accordingly. Processes can use shared
memory for extracting information as a record from another process as well
as for delivering any specific information to other processes. 
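A minimal sketch of the shared-memory method on a POSIX/Linux system: a region created with mmap() using MAP_SHARED | MAP_ANONYMOUS stays visible to both the parent and the child after fork(), so one process can leave a record that the other reads. A real application would use proper synchronization instead of the simple wait() used here.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Region visible to both processes after fork(): the "shared memory". */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;

    if (fork() == 0) {                        /* process1: writes a record        */
        strcpy(shared, "record written by process1");
        _exit(0);
    }

    wait(NULL);                               /* crude synchronization: wait for  */
                                              /* the child to finish writing      */
    printf("process2 read: %s\n", shared);    /* process2: reads the record       */
    munmap(shared, 4096);
    return 0;
}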

Multi Threading Models in Process Management


Multithreading is the execution of multiple threads at the same time.
Many operating systems support kernel threads and user threads in a
combined way; Solaris is an example of such a system. Multithreading
models are of three types:

Many to many model.
Many to one model.
One to one model.
Many to Many Model 
In this model, multiple user threads are multiplexed onto the same or a smaller
number of kernel-level threads. The number of kernel-level threads is
specific to the machine. The advantage of this model is that if a user thread is
blocked, other user threads can be scheduled onto other kernel threads; thus
the system does not block when a particular thread blocks.
It is considered the best multithreading model.
 

Many to One Model 


In this model, multiple user threads are mapped to one kernel thread. When a
user thread makes a blocking system call, the entire process blocks. As there is
only one kernel thread and only one user thread can access the kernel at a
time, multiple threads cannot run on multiple processors at the same time.
Thread management is done at the user level, so it is more efficient.
 

One to One Model 


In this model, there is a one-to-one relationship between kernel and user
threads, so multiple threads can run on multiple processors. The problem with
this model is that creating a user thread requires creating a corresponding
kernel thread.
As each user thread is connected to a different kernel thread, if any user thread
makes a blocking system call, the other user threads are not blocked.
 

What are thread libraries?


A thread is a lightweight process and is a basic unit of CPU utilization, which
consists of a program counter, a stack, and a set of registers.
Given below is the structure of thread in a process −

A process has a single thread of control, with one program counter, and
one sequence of instructions is carried out at any given time. By dividing an
application or a program into multiple sequential threads that run in quasi-
parallel, the programming model becomes simpler.
Threads share an address space and all of its data among
themselves. This ability is essential for some specific applications.
Threads are lighter weight than processes and are faster to create and
destroy than processes.
Thread Library
A thread library provides the programmer with an application program interface
(API) for creating and managing threads.

Ways of implementing thread library


There are two primary ways of implementing a thread library, which are as follows −

 The first approach is to provide a library entirely in user space with no kernel
support. All code and data structures for the library exist in user space;
invoking a function in the library results in a local function call in user
space and not a system call.
 The second approach is to implement a kernel level library supported
directly by the operating system. In this case the code and data structures
for the library exist in kernel space.
Invoking a function in the application program interface for the library typically
results in a system call to the kernel.
The main thread libraries which are used are given below −
 POSIX threads − Pthreads, the threads extension of the POSIX standard,
may be provided as either a user-level or a kernel-level library.
 Win32 threads − The Windows thread library is a kernel-level library
available on Windows systems.
 Java threads − The Java thread API allows threads to be created and
managed directly in Java programs.
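A minimal Pthreads example (compile with -pthread): the library API is used to create two threads and then wait for them to finish.

#include <pthread.h>
#include <stdio.h>

/* Thread body: the function each newly created thread starts executing. */
static void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t tid[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);   /* create the threads */
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);                         /* wait for them      */
    return 0;
}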
What are threading issues?
The fork() and exec() system calls
The fork() system call is used to create a duplicate process. The meaning of the fork() and
exec() system calls changes in a multithreaded program.
If one thread in a program calls fork(), does the new process duplicate all
threads, or is the new process single-threaded? Some UNIX systems
have chosen to have two versions of fork(): one that duplicates all threads and
another that duplicates only the thread that invoked the fork() system call.
If a thread calls the exec() system call, the program specified in the parameter to
exec() will replace the entire process, including all of its threads.
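The sketch below shows the two calls together in plain C: fork() duplicates the process, and exec() in the child replaces the whole process image (in a multithreaded program, all of its threads) with the named program. The use of echo here is only an example.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                        /* duplicate the calling process   */
    if (pid == 0) {
        /* In the child, exec() replaces the entire process image. */
        execlp("echo", "echo", "running in the exec'd program", (char *)NULL);
        _exit(127);                            /* reached only if exec() fails    */
    }
    waitpid(pid, NULL, 0);
    printf("parent continues after the child\n");
    return 0;
}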

Signal Handling
Generally, a signal is used in UNIX systems to notify a process that a particular
event has occurred. A signal may be received either synchronously or
asynchronously, depending on the source of and the reason for the event being signalled.
All signals, whether synchronous or asynchronous, follow the same pattern as
given below −
 A signal is generated by the occurrence of a particular event.
 The signal is delivered to a process.
 Once delivered, the signal must be handled.
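A small C example of this pattern: SIGINT is generated by pressing Ctrl-C, delivered to the process, and handled by the registered handler.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* Handler: runs when the delivered signal is handled by the process. */
static void on_sigint(int sig) {
    (void)sig;
    got_signal = 1;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigaction(SIGINT, &sa, NULL);        /* arrange to handle Ctrl-C (SIGINT)   */

    printf("waiting for SIGINT (press Ctrl-C)...\n");
    while (!got_signal)
        pause();                         /* sleep until a signal is delivered   */
    printf("signal was generated, delivered, and handled\n");
    return 0;
}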
Cancellation
Thread cancellation is the task of terminating a thread before it has completed.
For example, if multiple threads are concurrently searching through
a database and one thread returns the result, the remaining threads might be
cancelled.
A target thread is a thread that is to be cancelled. Cancellation of a target thread
may occur in two different ways −
 Asynchronous cancellation − One thread immediately terminates the
target thread.
 Deferred cancellation − The target thread periodically checks whether it
should terminate, allowing it an opportunity to terminate itself in an ordinary
fashion.
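A short Pthreads sketch of deferred cancellation (compile with -pthread): the target thread periodically reaches a cancellation point via pthread_testcancel(), so it terminates only at a safe place after pthread_cancel() has been called.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Target thread: periodically checks whether it should terminate. */
static void *searcher(void *arg) {
    (void)arg;
    for (;;) {
        /* ...do one slice of the search here... */
        pthread_testcancel();       /* deferred cancellation point              */
        usleep(1000);
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, searcher, NULL);
    usleep(10000);                  /* let the searcher run briefly              */
    pthread_cancel(tid);            /* request cancellation of the target thread */
    pthread_join(tid, NULL);        /* wait until it has actually terminated     */
    printf("target thread cancelled\n");
    return 0;
}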
Thread pools
Consider multithreading in a web server: whenever the server receives a request,
it creates a separate thread to service the request.
Some of the problems that arise in creating a thread are as follows −
 The amount of time required to create the thread prior to serving the
request together with the fact that this thread will be discarded once it has
completed its work.
 If all concurrent requests are allowed to be serviced in a new thread, there
is no bound on the number of threads concurrently active in the system.
 Unlimited threads could exhaust system resources such as CPU time or
memory.
The idea of a thread pool is to create a number of threads at process start-up and place
them into a pool, where they sit and wait for work.
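A compact Pthreads sketch of the idea (compile with -pthread): a fixed number of worker threads is created at start-up, and incoming 'requests' (here just integers) are queued for them instead of spawning a new thread per request. This is only an illustration; a production pool would add dynamic sizing, error handling and shutdown logic.

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 3
#define NUM_JOBS  6

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int queue[NUM_JOBS];
static int head = 0, tail = 0, done = 0;

/* Pool worker: created at start-up, then sits and waits for work. */
static void *worker(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail && !done)
            pthread_cond_wait(&cond, &lock);   /* wait for work to arrive      */
        if (head == tail && done) {
            pthread_mutex_unlock(&lock);
            return NULL;                       /* no more work: leave the pool */
        }
        int job = queue[head++];
        pthread_mutex_unlock(&lock);
        printf("worker %ld servicing request %d\n", id, job);
    }
}

int main(void) {
    pthread_t pool[POOL_SIZE];
    for (long i = 0; i < POOL_SIZE; i++)       /* create the pool up front   */
        pthread_create(&pool[i], NULL, worker, (void *)i);

    pthread_mutex_lock(&lock);
    for (int j = 0; j < NUM_JOBS; j++)         /* submit incoming "requests" */
        queue[tail++] = j;
    done = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);

    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}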
