
Unit-1 OPERATING SYSTEM BASICS

1.1 Operating-System Services

Figure 1.1 - A view of operating system services

OS provide environments in which programs run, and services for the users of the system,
including:

 User Interfaces - Means by which users can issue commands to the system. Depending
on the system, these may be a command-line interface (e.g. sh, csh, ksh, tcsh), a
GUI (e.g. Windows, X-Windows, KDE, Gnome), or a batch command system. The latter
are generally older systems using punched cards and job-control language (JCL), but
may still be used today for specialty systems designed for a single purpose.
 Program Execution - The OS must be able to load a program into RAM, run the program,
and terminate the program, either normally or abnormally.
 I/O Operations - The OS is responsible for transferring data to and from I/O devices,
including keyboards, terminals, printers, and storage devices.
 File-System Manipulation - In addition to raw data storage, the OS is also responsible for
maintaining directory and subdirectory structures, mapping file names to specific blocks of
data storage, and providing tools for navigating and utilizing the file system.
 Communications - Inter-process communication (IPC), either between processes running
on the same processor, or between processes running on separate processors or separate
machines. May be implemented as shared memory or message passing (some systems
offer both).
 Error Detection - Both hardware and software errors must be detected and handled
appropriately, with a minimum of harmful repercussions. Some systems may include
complex error avoidance or recovery systems, including backups, RAID drives, and other
redundant systems. Debugging and diagnostic tools aid users and administrators in tracing
down the cause of problems.

Other systems aid in the efficient operation of the OS itself:

 Resource Allocation - E.g. CPU cycles, main memory, storage space, and peripheral
devices. Some resources are managed with generic systems and others with very carefully
designed and specially tuned systems, customized for a particular resource and operating
environment.
 Accounting - Keeping track of system activity and resource usage, either for billing
purposes or for statistical record keeping that can be used to optimize future performance.

 Protection and Security - Preventing harm to the system and to resources, either
through wayward internal processes or malicious outsiders. Authentication,
ownership, and restricted access are obvious parts of this system. Highly secure
systems may log all process activity in excruciating detail, and security
regulations may dictate the storage of those records on permanent non-erasable media
for extended periods in secure (off-site) facilities.

1.2 Types of Operating Systems

An operating system is a well-organized collection of programs that manages the computer
hardware. It is a type of system software that is responsible for the smooth functioning of the
computer system.

i) Batch operating system
ii) Multiprogramming operating system
iii) Multiprocessing operating system
iv) Multitasking operating system
v) Network operating system
vi) Real time operating system
vii) Time sharing operating system
viii) Distributed operating system

i) Batch Operating System

In the 1970s, batch processing was very popular. In this technique, similar types of jobs were
batched together and executed in sequence. People used to share a single computer, which was
called a mainframe.

In Batch operating system, access is given to more than one person; they submit their respective
jobs to the system for the execution.

The system puts all of the jobs in a queue on a first-come, first-served basis and then executes
the jobs one by one. The users collect their respective output when all the jobs have been executed.
The purpose of this operating system was mainly to transfer control from one job to the next as
soon as a job was completed. It contained a small set of programs called the resident monitor
that always resided in one part of main memory. The remaining part was used for servicing
jobs.

Advantages of Batch OS

o The use of a resident monitor improves computer efficiency as it eliminates CPU idle time
between two jobs.

Disadvantages of Batch OS

1. Starvation

Batch processing suffers from starvation.

For Example:

There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very
high, then the other four jobs will never be executed, or they will have to wait for a very long time.
Hence the other processes get starved.
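The starvation effect above can be sketched in a few lines of code (the burst times below are made up for illustration):

```python
# Sketch: first-come-first-served batch execution with hypothetical
# burst times. A very long first job delays every job queued behind it.

def fcfs_waiting_times(burst_times):
    """Return the waiting time of each job under FCFS batch execution."""
    waiting, elapsed = [], 0
    for burst in burst_times:
        waiting.append(elapsed)   # a job waits for all jobs queued before it
        elapsed += burst
    return waiting

bursts = [100, 2, 3, 1, 4]        # J1 is very long; J2..J5 are short
waits = fcfs_waiting_times(bursts)
print(waits)                      # [0, 100, 102, 105, 106]
```

Here J2 through J5 each wait at least 100 time units simply because J1 arrived first.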

2. Not Interactive

Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires
the input of two numbers from the console, then it will never get it in the batch processing
scenario since the user is not present at the time of execution.

ii) Multiprogramming Operating System

Multiprogramming is an extension to batch processing where the CPU is always kept busy. Each
process needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process performs its I/O, the CPU can start the
execution of other processes. Therefore, multiprogramming improves the efficiency of the
system.
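The gain from overlapping CPU time with I/O time can be estimated with a small sketch (the burst times are illustrative):

```python
# Sketch: why multiprogramming helps. While one process waits on I/O,
# the CPU can run another process instead of sitting idle.

def serial_time(jobs):
    """Total time if each job's CPU and I/O phases run back to back."""
    return sum(cpu + io for cpu, io in jobs)

def overlapped_lower_bound(jobs):
    """Best case with full CPU/I-O overlap: bounded below by the larger
    of total CPU demand and total I/O demand."""
    return max(sum(cpu for cpu, _ in jobs), sum(io for _, io in jobs))

jobs = [(4, 6), (5, 3)]               # (cpu_time, io_time) per process
print(serial_time(jobs))              # 18: CPU idles during every I/O wait
print(overlapped_lower_bound(jobs))   # 9: CPU stays busy while I/O proceeds
```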

Advantages of Multiprogramming OS

o Throughput increases, as the CPU always has some program to execute.
o Response time can also be reduced.

Disadvantages of Multiprogramming OS

o Multiprogramming systems provide an environment in which various system resources are
used efficiently, but they do not provide any user interaction with the computer system.

iii) Multiprocessing Operating System

In multiprocessing, parallel computing is achieved. More than one processor is present
in the system, and together they can execute more than one process at the same time, which
increases the throughput of the system.

Types of multiprocessing system

i) Symmetric multiprocessing
ii) Asymmetric multiprocessing

Symmetric multiprocessing refers to the use of all processors to conduct OS tasks. It has no
master-slave relationship, unlike asymmetric multiprocessing, and all processors communicate
through shared memory. The processors take processes from a common ready queue, or every
CPU may have its private queue of ready-to-execute programs. The scheduler must ensure that
no two CPUs run the same task simultaneously.

Symmetric multiprocessing provides proper load balancing and improved fault tolerance, and
decreases the possibility of a CPU bottleneck. It is complicated, since all the CPUs share
memory, and a processor failure in symmetric multiprocessing reduces computing capacity.

Asymmetric multiprocessing

The processors in asymmetric multiprocessing have a master-slave relationship, with one
master processor controlling the other slave processors. The slave processors may receive
specified tasks or ready processes from the master processor. The master processor maintains
the data structures and manages the scheduling of processes, I/O processing, and other system
operations.

If the master processor fails, one of the slave processors assumes control of the execution; if a
slave processor fails, another slave processor takes over. The scheme is simple, since a single
processor controls the data structures and all system actions. Assume there are four CPUs named
C1, C2, C3, and C4. C4 is the master processor and allocates tasks to the other CPUs: C1 is
assigned process P1, C2 is assigned process P2, and C3 is assigned process P3. Each processor
works only on the processes assigned to it.
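A minimal sketch of the master's static assignment described above (the CPU and process names follow the example in the text; the mapping function itself is hypothetical):

```python
# Sketch: asymmetric multiprocessing. The master CPU (C4) statically
# assigns each ready process to one slave CPU, and each slave only
# works on what it was given.

def master_assign(processes, slaves):
    """Master maps process i to slave i; leftover processes would queue."""
    return {slave: proc for slave, proc in zip(slaves, processes)}

assignment = master_assign(["P1", "P2", "P3"], ["C1", "C2", "C3"])
print(assignment)   # {'C1': 'P1', 'C2': 'P2', 'C3': 'P3'}
```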

Advantages of Multiprocessing operating system:

o Increased reliability: Processing tasks can be distributed among several processors. This
increases reliability, as if one processor fails, the task can be given to another processor
for completion.
o Increased throughput: With several processors, more work can be done in less time.

Disadvantages of Multiprocessing operating System

o A multiprocessing operating system is more complex and sophisticated, as it takes care of
multiple CPUs simultaneously.
iv) Multitasking Operating System

A multitasking operating system is a logical extension of a multiprogramming system that
enables the execution of multiple programs simultaneously. It allows a user to perform more
than one computer task at the same time.

There are mainly two types of multitasking. These are as follows:

1. Preemptive Multitasking
2. Cooperative Multitasking

Preemptive Multitasking

In preemptive multitasking, the operating system decides how much CPU time one task gets
before the CPU is given to another task. Because the operating system controls the entire
process, it is referred to as 'preemptive'.

Preemptive multitasking is used in desktop operating systems. Unix was the first operating
system to use this method of multitasking. Windows NT and Windows 95 were the first versions
of Windows to use preemptive multitasking.

Cooperative Multitasking

Cooperative multitasking is also known as 'non-preemptive multitasking'. The main idea
of cooperative multitasking is that the current task runs until it voluntarily releases the CPU
to allow another process to run. This is carried out by calling taskYIELD(); when the
taskYIELD() function is called, a context switch is executed.
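taskYIELD() comes from embedded kernels such as FreeRTOS; the same idea can be sketched in Python, where each `yield` stands in for the voluntary taskYIELD() call:

```python
# Sketch: cooperative multitasking with Python generators. Each `yield`
# plays the role of taskYIELD(): the task voluntarily gives up the CPU,
# and the scheduler then switches to the next ready task.

def task(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")
        yield                      # cooperative yield point: release the CPU

def run_cooperative(tasks):
    """Run (name, generator) tasks in turn until every one has finished."""
    order = []
    while tasks:
        current = tasks.pop(0)
        try:
            next(current[1])       # resume the task until its next yield
            order.append(current[0])
            tasks.append(current)  # back of the ready queue
        except StopIteration:
            pass                   # task finished; drop it
    return order

schedule = run_cooperative([("A", task("A", 2)), ("B", task("B", 2))])
print(schedule)                    # ['A', 'B', 'A', 'B'] - tasks interleave
```

Note that if one task never yields, no other task ever runs; that is exactly the weakness of cooperative multitasking.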

Advantages of Multitasking operating system

o This operating system is better suited to supporting multiple users simultaneously.
o Multitasking operating systems have well-defined memory management.

Disadvantages of Multitasking operating system


o The processor is kept busy for longer stretches, switching among tasks to complete them
in a multitasking environment, so the CPU generates more heat.

v) Network Operating System

An Operating system, which includes software and associated protocols to communicate with
other computers via a network conveniently and cost-effectively, is called Network Operating
System.

Types of Network Operating System

Network operating systems can be specialized to serve as:

o Peer-to-Peer System
o Client-Server System

Peer To Peer Network Operating System

In a peer-to-peer network, each system has the same capabilities and responsibilities,
i.e., none of the systems in this architecture is superior to the others in terms of
functionality.

There is no master-slave relationship among the systems, i.e., every node is equal in a Peer to
Peer Network Operating System. All the nodes at the Network have an equal relationship with
others and have a similar type of software that helps the sharing of resources.

A peer-to-peer network operating system allows two or more computers to share their
resources, such as printers, scanners, and CD-ROMs, so that they are accessible from each
computer. These networks are best suited to smaller environments with 25 or fewer workstations.

To establish a peer-to-peer network, you need network adapter cards, properly installed network
cabling to connect the computers, and a network hub or switch to interconnect them.

A peer-to-peer network is organized simply as a group of computers that can share resources.
Each computer in the workgroup keeps track of its own user accounts and security settings, so no
single computer is in charge of the workgroup. Workgroups have little security, and there is no
central login process. Any user can use any shared resource once he logs into a peer on the
network. As there is no central security, access to shared resources can be controlled by a
password, or the user may block access to certain files or folders by making them not shared.

Advantages of Peer To Peer Network Operating System

o This type of system is less expensive to set up and maintain.
o Dedicated hardware is not required.
o It does not require a dedicated network administrator to set up network policies.
o It is very easy to set up, as a simple cabling scheme is used, usually twisted-pair cable.

Disadvantages of Peer To Peer Network Operating System

o Peer to Peer networks are usually less secure because they commonly use share-level
security.
o The failure of any node in the system affects the whole system.
o Its performance degrades as the Network grows.
o Peer to Peer networks cannot differentiate among network users who are accessing a
resource.
o In a peer-to-peer network, each shared resource you wish to control must have its own
password. These multiple passwords may be difficult to remember.
o Lack of central control over the Network.

Client-Server Network Operating System

In client-server systems, there are two broad categories of machines:

o The server, called the backend.
o The client, called the frontend.

A client-server network operating system is a server-based network in which the storage and
processing workload is shared among clients and servers.

Clients request services such as printing and file storage, and servers satisfy those requests.
Normally, all network services such as electronic mail and printing are routed through the
server.

Server computers are commonly more powerful than client computers. This arrangement requires
software for both the clients and the servers. The software running on the server is known as
the network operating system, which provides the network environment for the server and its
clients.

The client-server network was developed to deal with environments in which many PCs, printers,
and servers are connected via a network. The fundamental concept was to define a specialized
server with particular functionality.

Advantages of Client-Server Network Operating System

o This network is more secure than a peer-to-peer network due to centralized data
security.
o Network traffic reduces due to the division of work among clients and the server.
o The area covered is quite large, so it is valuable to large and modern organizations
because it distributes storage and processing.
o The server can be accessed remotely and across multiple platforms in the Client-Server
Network system.

Disadvantages of Client-Server Network Operating Systems

o In client-server networks, security and performance are important issues, so trained
network administrators are required for network administration.
o Implementing a client-server network can be costly, depending upon the security,
resources, and connectivity required.

Advantages of Network Operating System

o In this type of operating system, network traffic reduces due to the division between clients
and the server.
o This type of system is less expensive to set up and maintain.

Disadvantages of Network Operating System

o In this type of operating system, the failure of any node in a system affects the whole
system.
o Security and performance are important issues. So trained network administrators are
required for network administration.

vi) Real Time Operating System

In real-time systems, each job carries a certain deadline within which it is supposed to be
completed; otherwise there will be a huge loss, or, even if the result is produced, it will be
completely useless.

Applications of real-time systems include military systems: if a missile is to be launched,
it must be launched with a certain precision.

Hard Real-Time operating system:

In a hard RTOS, all critical tasks must be completed within the specified time duration, i.e.,
within the given deadline. Not meeting the deadline would result in critical failures such as
damage to equipment or even loss of human life.

Soft Real-Time operating system:

A soft RTOS tolerates a few delays. In this kind of RTOS, a deadline is assigned to a particular
job, but a delay of a small amount of time is acceptable. So, deadlines are handled softly by
this kind of RTOS.

Firm Real-Time operating system:

A firm RTOS also needs to observe deadlines. However, missing a deadline may not have a massive
impact, but it can cause undesired effects, such as a large reduction in the quality of a
product.

Advantages of Real-time operating system:

o It is easy to design, develop, and execute real-time applications under a real-time
operating system.
o A real-time operating system achieves maximum utilization of devices and system resources.

Disadvantages of Real-time operating system:

o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.

vii) Time-Sharing Operating System

In a time-sharing operating system, computer resources are allocated in a time-dependent
fashion to several programs simultaneously. Thus it helps provide a large number of users
direct access to the main computer. It is a logical extension of multiprogramming. In time
sharing, the CPU is switched among multiple programs given by different users on a scheduled
basis.

A time-sharing operating system allows many users to be served simultaneously, so
sophisticated CPU scheduling schemes and input/output management are required.
Time-sharing operating systems are very difficult and expensive to build.
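A round-robin time slice, the classic time-sharing policy, can be sketched as follows (the quantum and burst times are illustrative):

```python
# Sketch: time sharing via round-robin with a fixed time quantum.
# The CPU is switched among programs on a scheduled basis, giving
# each program at most `quantum` units of CPU time per turn.

def round_robin_completion(bursts, quantum):
    """Return the completion time of each program under round-robin."""
    remaining = dict(enumerate(bursts))
    queue = list(remaining)
    done, clock = {}, 0
    while queue:
        pid = queue.pop(0)
        slice_ = min(quantum, remaining[pid])
        clock += slice_                 # program runs for one time slice
        remaining[pid] -= slice_
        if remaining[pid] == 0:
            done[pid] = clock           # program finished
        else:
            queue.append(pid)           # back of the queue for another turn
    return [done[i] for i in range(len(bursts))]

print(round_robin_completion([5, 3], quantum=2))   # [8, 7]
```

Both programs make progress in every round, which is what gives each user the illusion of having the machine to themselves.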

Advantages of Time Sharing Operating System

o The time-sharing operating system provides effective utilization and sharing of resources.
o This system reduces CPU idle and response time.

Disadvantages of Time Sharing Operating System

o It requires very high data transmission rates in comparison to other methods.
o The security and integrity of user programs loaded in memory, and of data, need to be
maintained, as many users access the system at the same time.

viii) Distributed Operating System

The Distributed Operating system is not installed on a single machine, it is divided into parts, and
these parts are loaded on different machines. A part of the distributed Operating system is
installed on each machine to make their communication possible. Distributed Operating systems
are much more complex, large, and sophisticated than Network operating systems because they
also have to take care of varying networking protocols.

Advantages of Distributed Operating System

o The distributed operating system provides sharing of resources.


o This type of system is fault-tolerant.

Disadvantages of Distributed Operating System

o Protocol overhead can dominate computation cost.

An operating system is a program that acts as an intermediary between a user of a
computer and the computer hardware. It provides an environment in which a user can
execute programs. It also manages all the programs.

1.3 Computer-system operation


A modern computer system

Bootstrap program is the first code that is executed when the computer system is started. The
entire operating system depends on the bootstrap program to work correctly as it loads the
operating system.

- Bootstrap program is loaded at power-up or reboot


- Typically stored in ROM or EPROM, generally known as firmware
- Initializes all aspects of system
- Loads operating system kernel and starts execution

- One or more CPUs, device controllers connect through common bus providing access to
shared memory

- Concurrent execution of CPUs and devices competing for memory cycles

- I/O devices and the CPU can execute concurrently

- Each device controller is in charge of a particular device type

- Each device controller has a local buffer

- CPU moves data from/to main memory to/from local buffers

- I/O is from the device to local buffer of controller

- Device controller informs CPU that it has finished its operation by causing an interrupt

1.3.1 Common Functions of Interrupts

Interrupt transfers control to the interrupt service routine generally, through the interrupt
vector, which contains the addresses of all the service routines
Interrupt architecture must save the address of the interrupted instruction

A trap or exception is a software-generated interrupt caused either by an error or by a user
request

An operating system is interrupt driven

1.3.2 Interrupt Handling

The operating system preserves the state of the CPU by storing registers and the program
counter

Determines which type of interrupt has occurred:

Polling
The process in which the CPU constantly checks the status of the device to see if it needs
the CPU's attention. It is a protocol in which the CPU, by repeatedly testing each device's
status, services the device.

Vectored interrupt system

In a computer, a vectored interrupt is an I/O interrupt that tells the part of the computer that
handles I/O interrupts at the hardware level that a request for attention from an I/O device has
been received, and also identifies the device that sent the request.

A vectored interrupt is an alternative to a polled interrupt, which requires that the interrupt
handler poll each device in turn in order to find out which one sent the interrupt request.
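The difference is easy to see in a sketch: with a vector, the interrupt number indexes directly to its service routine (the interrupt numbers and handler names below are illustrative):

```python
# Sketch: vectored interrupt dispatch. The interrupt vector maps an
# interrupt number straight to its service routine, so the CPU does not
# have to poll every device to find out which one raised the request.

def keyboard_isr():
    return "handled keyboard"

def disk_isr():
    return "handled disk"

# The interrupt vector: index = interrupt number, entry = routine address.
interrupt_vector = {1: keyboard_isr, 14: disk_isr}

def dispatch(interrupt_number):
    """Jump through the vector to the matching service routine."""
    return interrupt_vector[interrupt_number]()

print(dispatch(14))   # handled disk
```

A polled scheme would instead loop over every device asking "was it you?", which costs time proportional to the number of devices.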

1.4 I/O Structure

The I/O structure consists of programmed I/O, interrupt-driven I/O, DMA, the CPU, memory, and
external devices, all connected with the help of peripheral I/O buses and general I/O buses.
The different types of I/O present inside the system are described below:
Programmed I/O
In programmed I/O, when we write the input, the device should be ready to take the data;
otherwise the program must wait until the device or buffer is free, and only then can the input
be taken. Once the input is taken, it is checked whether the output device or output buffer is
free, and then the output is written. This process is repeated for every transfer of data.
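The busy-waiting described above can be sketched as follows (the device model is simulated, not a real driver):

```python
# Sketch: programmed I/O. The CPU busy-waits on a (simulated) device
# status flag before each transfer, doing no useful work while it spins.

class Device:
    def __init__(self):
        self.checks_until_ready = 3   # simulated: ready after 3 status polls
        self.buffer = None

    def ready(self):
        self.checks_until_ready -= 1
        return self.checks_until_ready <= 0

def programmed_write(device, byte):
    """Busy-wait until the device is ready, then hand it the data."""
    wasted_polls = 0
    while not device.ready():         # CPU spins checking the status flag
        wasted_polls += 1
    device.buffer = byte              # device is free: perform the transfer
    return wasted_polls

dev = Device()
print(programmed_write(dev, 0x41))    # 2 wasted poll iterations
print(dev.buffer == 0x41)             # True
```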
I/O Interrupts
To initiate any I/O operation, the CPU first loads the appropriate registers in the device
controller. The device controller then examines the contents of these registers to determine
what operation to perform. There are two possibilities when an I/O operation is executed.
These are as follows −
 Synchronous I/O − Control is returned to the user process only after the I/O operation is
completed.
 Asynchronous I/O − Control is returned to the user process without waiting for the I/O
operation to finish. Here, the I/O operation and the user process run simultaneously.
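The two styles can be modeled at user level with a worker thread standing in for the device (a sketch of the idea, not how a kernel implements it):

```python
# Sketch: synchronous vs asynchronous I/O, modeled with a worker thread.
# Synchronous: the user process blocks (join) until the I/O completes.
# Asynchronous: control returns immediately and both proceed concurrently.
import threading
import time

result = []

def fake_io():
    time.sleep(0.05)             # stands in for a slow device transfer
    result.append("io done")

# Synchronous style: start the I/O, then wait for it before continuing.
t = threading.Thread(target=fake_io)
t.start()
t.join()                         # control returns only after I/O completes
print(result)                    # ['io done']

# Asynchronous style: start the I/O and keep computing while it runs.
result.clear()
t = threading.Thread(target=fake_io)
t.start()
overlap_work = sum(range(1000))  # user process keeps running meanwhile
t.join()                         # later, wait for the outstanding I/O
print(result, overlap_work)
```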
DMA Structure
Direct Memory Access (DMA) is a method of handling I/O in which the device controller
communicates directly with memory, without CPU involvement.
After the CPU sets up the resources of the I/O operation, such as buffers, pointers, and
counters, the device controller transfers blocks of data directly to or from storage without
CPU intervention.
DMA is generally used for high-speed I/O devices.
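The benefit of DMA can be sketched by counting interrupts: per-byte programmed transfer involves the CPU once per byte, while a DMA-style block transfer interrupts the CPU once per block (the counts below are simulated, not measured):

```python
# Sketch: per-byte transfer vs DMA-style block transfer.
# Programmed transfer involves the CPU for every byte; DMA moves the
# whole block and raises a single completion interrupt.

def programmed_transfer(block, memory):
    interrupts = 0
    for byte in block:            # CPU handles every single byte
        memory.append(byte)
        interrupts += 1
    return interrupts

def dma_transfer(block, memory):
    memory.extend(block)          # controller moves the whole block itself
    return 1                      # one completion interrupt for the CPU

mem_a, mem_b = [], []
print(programmed_transfer(b"hello", mem_a))   # 5 interrupts
print(dma_transfer(b"hello", mem_b))          # 1 interrupt
print(mem_a == mem_b)                         # True: same data arrived
```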

1.5 Storage Structure / Memory Hierarchy


The Computer memory hierarchy looks like a pyramid structure which is used to describe the
differences among memory types. It separates the computer storage based on hierarchy.
Level 0: CPU registers
Level 1: Cache memory
Level 2: Main memory or primary memory
Level 3: Magnetic disks or secondary memory
Level 4: Optical disks or magnetic types or tertiary Memory

In the memory hierarchy, capacity increases while speed and cost per bit decrease as we move
down the levels. The devices are arranged from fast to slow, that is, from registers to
tertiary memory.
Level-0 − Registers
The registers are present inside the CPU. As they are present inside the CPU, they have least
access time. Registers are most expensive and smallest in size generally in kilobytes. They are
implemented by using Flip-Flops.
Level-1 − Cache
Cache memory is used to store the segments of a program that are frequently accessed by the
processor. It is expensive and smaller in size generally in Megabytes and is implemented by
using static RAM.
Level-2 − Primary or Main Memory
It directly communicates with the CPU and with auxiliary memory devices through an I/O
processor. Main memory is less expensive than cache memory and larger in size generally in
Gigabytes. This memory is implemented by using dynamic RAM.
Level-3 − Secondary storage
Secondary storage devices like Magnetic Disk are present at level 3. They are used as backup
storage. They are cheaper than main memory and larger in size generally in a few TB.
Level-4 − Tertiary storage
Tertiary storage devices like magnetic tape are present at level 4. They are used to store
removable files and are the cheapest and largest in size (1-20 TB).
Memory levels in terms of size, access time, and bandwidth are as follows:

Level        Register           Cache              Primary memory     Secondary memory
Bandwidth    4K to 32K MB/sec   800 to 5K MB/sec   400 to 2K MB/sec   4 to 32 MB/sec
Size         Less than 1 KB     Less than 4 MB     Less than 2 GB     Greater than 2 GB
Access time  2 to 5 nsec        3 to 10 nsec       80 to 400 nsec     5 msec
Managed by   Compiler           Hardware           Operating system   OS or user
Why is a memory hierarchy used in systems?
Memory hierarchy is arranging different kinds of storage present on a computing device based
on speed of access. At the very top, the highest performing storage is CPU registers which are
the fastest to read and write to. Next is cache memory followed by conventional DRAM memory,
followed by disk storage with different levels of performance including SSD, optical and magnetic
disk drives.
To bridge the processor memory performance gap, hardware designers are increasingly relying
on memory at the top of the memory hierarchy to close / reduce the performance gap. This is
done through increasingly larger cache hierarchies reducing the dependency on main memory
which is slower.
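The effect of a cache layer on average access time can be computed directly (the hit ratios and latencies below are illustrative, not from the table above):

```python
# Sketch: effective access time of a two-level hierarchy. With a high
# cache hit ratio, the average stays close to cache speed even though
# main memory is much slower. Times are in nanoseconds.

def effective_access_time(hit_ratio, cache_ns, memory_ns):
    """Average access time: hits served by cache, misses by main memory."""
    return hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns

print(effective_access_time(0.95, 10, 100))   # ~14.5 ns: near cache speed
print(effective_access_time(0.50, 10, 100))   # ~55 ns: hierarchy helps less
```

This is why hardware designers rely on larger cache hierarchies to close the processor-memory gap: raising the hit ratio pulls the effective time toward the fast end.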
1.6 Common System Components

There are various components of an operating system that perform well-defined tasks. Though
most operating systems differ in structure, logically they have similar components.
Each component must be a well-defined portion of the system that appropriately describes its
functions, inputs, and outputs.
There are following 8-components of an Operating System:

1. Process Management
2. I/O Device Management
3. File Management
4. Network Management
5. Main Memory Management
6. Secondary Storage Management
7. Security Management
8. Command Interpreter System

Explanation of the above components in more detail:

Process Management
A process is a program, or a part of a program, that is loaded in main memory. A process needs
certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task.
The process management component manages the multiple processes running simultaneously on the
operating system.
A program in the running state is called a process.

The operating system is responsible for the following activities in connection with process
management:

 Create, load, execute, suspend, resume, and terminate processes.
 Switch the system among multiple processes in main memory.
 Provide communication mechanisms so that processes can communicate with each other.
 Provide synchronization mechanisms to control concurrent access to shared data, to keep
shared data consistent.
 Allocate/de-allocate resources properly to prevent or avoid deadlock situations.
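The create/execute/wait/terminate activities can be observed from user level through Python's subprocess module (a sketch of the lifecycle, not of the kernel's internals):

```python
# Sketch: process lifecycle from user level. The OS creates and loads a
# new process, executes it, and the parent waits for it to terminate.
import subprocess
import sys

# Create and load a new process that executes a small program.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('child running')"],
    stdout=subprocess.PIPE, text=True,
)

out, _ = proc.communicate()   # parent blocks until the child terminates
print(out.strip())            # child running
print(proc.returncode)        # 0 indicates normal termination
```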

I/O Device Management

One of the purposes of an operating system is to hide the peculiarities of specific hardware
devices from the user. I/O device management provides an abstraction over hardware devices and
hides the details from applications, to ensure proper use of devices, to prevent errors, and to
provide users with a convenient and efficient programming environment.
Following are the tasks of I/O Device Management component:

 Hide the details of hardware devices
 Manage main memory for the devices using caching, buffering, and spooling
 Maintain and provide custom drivers for each device

File Management
File management is one of the most visible services of an operating system. Computers can
store information in several different physical forms: magnetic tape, disk, and drum are the
most common forms.
A file is defined as a set of correlated information, and it is defined by the creator of the
file. Mostly, files represent data, source and object forms, and programs. Data files can be of
any type, such as alphabetic, numeric, or alphanumeric.
A file is a sequence of bits, bytes, lines, or records whose meaning is defined by its creator
and user.

The operating system implements the abstract concept of a file by managing mass storage
devices, such as tapes and disks. Files are normally organized into directories to ease their
use; these directories may contain files and other directories, and so on.
The operating system is responsible for the following activities in connection with file
management:

 File creation and deletion
 Directory creation and deletion
 Support of primitives for manipulating files and directories
 Mapping files onto secondary storage
 File backup on stable (nonvolatile) storage media
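These primitives are exactly what the OS exposes through system calls; as a sketch, here they are exercised through Python's standard library in a throwaway temporary directory:

```python
# Sketch: file-management primitives (directory creation, file creation,
# deletion) via the OS interfaces wrapped by Python's standard library.
import os
import tempfile

root = tempfile.mkdtemp()            # scratch area so the example is safe

# Directory creation, then file creation inside it.
subdir = os.path.join(root, "docs")
os.mkdir(subdir)
path = os.path.join(subdir, "notes.txt")
with open(path, "w") as f:           # create a file and write a record
    f.write("hello\n")

created = os.path.exists(path)
print(created)                       # True

# File deletion, then directory deletion.
os.remove(path)
os.rmdir(subdir)
print(os.path.exists(path))          # False
```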

Network Management
The definition of network management is often broad, as network management involves several
different components. Network management is the process of managing and administering a
computer network. A computer network is a collection of various types of computers connected
with each other.
Network management comprises fault analysis, maintaining the quality of service, provisioning of
networks, and performance management.
Network management is the process of keeping your network healthy for an efficient
communication between different computers.

Following are the features of network management:

 Network administration
 Network maintenance
 Network operation
 Network provisioning
 Network security
Main Memory Management
Memory is a large array of words or bytes, each with its own address. It is a repository of quickly
accessible data shared by the CPU and I/O devices.
Main memory is a volatile storage device which means it loses its contents in the case of system
failure or as soon as system power goes down.
The main motivation behind Memory Management is to maximize memory utilization on the
computer system.

The operating system is responsible for the following activities in connections with memory
management:

 Keep track of which parts of memory are currently being used and by whom.
 Decide which processes to load when memory space becomes available.
 Allocate and deallocate memory space as needed.

Secondary Storage Management

The main purpose of a computer system is to execute programs. These programs, together with
the data they access, must be in main memory during execution. Since main memory is too small
to permanently accommodate all data and programs, the computer system must provide secondary
storage to back up main memory.
Most modern computer systems use disks as the principal on-line storage medium for both
programs and data. Most programs, like compilers, assemblers, sort routines, editors, and
formatters, are stored on the disk until loaded into memory, and then use the disk as both the
source and destination of their processing.
The operating system is responsible for the following activities in connection with disk
management:

 Free-space management
 Storage allocation
 Disk scheduling

Security Management
The operating system is primarily responsible for all tasks and activities that happen in the
computer system. The various processes in an operating system must be protected from one
another's activities. For that purpose, various mechanisms ensure that files, memory segments,
the CPU, and other resources can be operated on only by those processes that have gained proper
authorization from the operating system.
Security management refers to a mechanism for controlling the access of programs, processes,
or users to the resources defined by the computer system; it specifies the controls to be
imposed, together with some means of enforcement.

For example, memory-addressing hardware ensures that a process can only execute within its
own address space. The timer ensures that no process can gain control of the CPU without
eventually relinquishing it. Finally, no process is allowed to do its own I/O, to protect the integrity
of the various peripheral devices.
Command Interpreter System
One of the most important components of an operating system is its command interpreter. The
command interpreter is the primary interface between the user and the rest of the system.
The command interpreter executes a user command by calling one or more
underlying system programs or system calls.
It allows human users to interact with the operating system and
provides a convenient programming environment to the users.

Many commands are given to the operating system by control statements. A program that
reads and interprets control statements is executed automatically. This program is called the
shell; a few examples are the Windows/DOS command window, Bash on Unix/Linux, and the
C shell on Unix/Linux.
Other Important Activities
An operating system is a complex software system. Apart from the above-mentioned
components and responsibilities, there are many other activities performed by the operating
system. A few of them are listed below:
 Security − By means of password and similar other techniques, it prevents unauthorized
access to programs and data.
 Control over system performance − Recording delays between request for a service and
response from the system.
 Job accounting − Keeping track of time and resources used by various jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and other debugging
and error detecting aids.
 Coordination between other software and users − Coordination and assignment of
compilers, interpreters, assemblers and other software to the various users of the computer
system.

1.7 System Calls

The interface between a process and an operating system is provided by system calls. In
general, system calls are available as assembly-language instructions, and they are listed in
the manuals used by assembly-level programmers. A system call is usually made when a
process running in user mode requires access to a resource; the process requests the kernel
to provide the resource via the system call.
A figure representing the execution of the system call is given as follows −
As can be seen from this diagram, the processes execute normally in the user mode until a
system call interrupts this. Then the system call is executed on a priority basis in the kernel
mode. After the execution of the system call, the control returns to the user mode and execution
of user processes can be resumed.
In general, system calls are required in the following situations −
 Creating or deleting files. Reading from and writing to files also require system calls.
 Creation and management of new processes.
 Network connections also require system calls. This includes sending and receiving packets.
 Access to hardware devices such as a printer, scanner etc. requires a system call.

1.7.1 Types of System Calls


There are mainly five types of system calls. These are explained in detail as follows −
Process Control
These system calls deal with processes such as process creation, process termination etc.
File Management
These system calls are responsible for file manipulation such as creating a file, reading a file,
writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from device buffers,
writing into device buffers etc.
Information Maintenance
These system calls handle information and its transfer between the operating system and the
user program.
Communication
These system calls are useful for interprocess communication. They also deal with creating and
deleting a communication connection.
Some of the examples of all the above types of system calls in Windows and Unix are given as
follows −

Types of System Calls     Windows                                                  Linux
Process Control           CreateProcess(), ExitProcess(), WaitForSingleObject()    fork(), exit(), wait()
File Management           CreateFile(), ReadFile(), WriteFile(), CloseHandle()     open(), read(), write(), close()
Device Management         SetConsoleMode(), ReadConsole(), WriteConsole()          ioctl(), read(), write()
Information Maintenance   GetCurrentProcessID(), SetTimer(), Sleep()               getpid(), alarm(), sleep()
Communication             CreatePipe(), CreateFileMapping(), MapViewOfFile()       pipe(), shmget(), mmap()

There are many different system calls as shown above. Details of some of those system calls are
as follows −
open()
The open() system call is used to provide access to a file in a file system. This system call
allocates resources to the file and provides a handle that the process uses to refer to the file. A
file can be opened by multiple processes at the same time or be restricted to one process. It all
depends on the file organisation and file system.
read()
The read() system call is used to access data from a file that is stored in the file system. The file
to read is identified by its file descriptor, and it must be opened using open() before it can
be read. In general, the read() system call takes three arguments: the file descriptor, the buffer
which stores the read data, and the number of bytes to be read from the file.
write()
The write() system call writes data from a user buffer into a device such as a file. This
system call is one of the ways to output data from a program. In general, the write() system call
takes three arguments: the file descriptor, a pointer to the buffer where the data is stored, and
the number of bytes to write from the buffer.
close()
The close() system call is used to terminate access to a file. Using this system call
means that the file is no longer required by the program, so the buffers are flushed, the file
metadata is updated and the file resources are de-allocated.
1.8 System Programs

In an operating system a user is able to use different types of system programs. System
programs provide a convenient environment for program development and execution. They are
built on top of system calls: each system program uses one or more system calls to carry out its
particular task.
 File management − These programs create, delete, copy, rename, print, list and generally
manipulate files and directories.
 Status information − Some programs simply ask the system for the date, time, amount of
available memory or disk space, number of users, CPU utilization, or similar status
information about processes and resources.
 Programming-language support − Compilers, assemblers, debuggers and interpreters for
common programming languages are often provided with the operating system.
 Program loading and execution − Once a program is assembled or compiled, it must be
loaded into memory to be executed; the system provides loaders and link editors for this
task, again implemented in terms of system calls.
 Communication − These programs provide the mechanism for creating virtual connections
among processes, users and computer systems, allowing them to send messages or
transfer files, over a wired or wireless link.
 Background services − System daemons or services are launched at boot time and run
constantly in the background, e.g. network daemons, print servers, and programs that scan
for and detect viruses on the computer.
1.8.1 Purpose of using system program
System programs communicate and coordinate the activities and functions of the hardware and
software of a system, and also control the operations of the hardware. An operating system is
one example of system software: it controls the computer hardware and
acts as an interface between the application software and the hardware.
1.8.2 Types of System programs
The types of system programs are as follows −
i) Utility programs
These manage, maintain and control various computer resources. Utility programs are
comparatively technical and are targeted at users with solid technical knowledge.
Example − antivirus software, backup software and disk tools.
ii) Device drivers
A device driver controls a particular device connected to the computer system. Device drivers
basically act as a translator between the operating system and the device connected to the
system.
Example − printer driver, scanner driver, storage device driver etc.
iii) Directory reporting tools
These tools are provided in an operating system to facilitate navigation through the file
system.
Example − dir, ls, Windows Explorer etc.
1.9 Operating System Design and Implementation
An operating system is a construct that allows user application programs to interact with the
system hardware. The operating system by itself does not perform any useful work; rather, it
provides an environment in which different applications and programs can do useful work.
There are many problems that can occur while designing and implementing an operating system.
These are covered in operating system design and implementation.
1.9.1 Operating System Design Goals
It is quite complicated to define all the goals and specifications of the operating system while
designing it. The design changes depending on the type of the operating system, i.e. whether it
is a batch system, time-shared system, single-user system, multi-user system, distributed
system etc.
There are basically two types of goals while designing an operating system. These are −
User Goals
The operating system should be convenient, easy to use, reliable, safe and fast according to the
users. However, these specifications are not very useful as there is no set method to achieve
these goals.
System Goals
The operating system should be easy to design, implement and maintain. These are
specifications required by those who create, maintain and operate the operating system. But
there is no specific method to achieve these goals either.
1.9.2 Operating System Mechanisms and Policies
There is no specific way to design an operating system as it is a highly creative task. However,
there are general software principles that are applicable to all operating systems.
A subtle difference between mechanism and policy is that mechanism shows how to do
something and policy shows what to do. Policies may change over time and this would lead to
changes in mechanism. So, it is better to have a general mechanism that would require few
changes even when a policy change occurs.
For example - if the mechanism and policy are independent, then few changes are required in
the mechanism when the policy changes. If a policy favours I/O-intensive processes over
CPU-intensive processes, then changing the policy to favour CPU-intensive processes should
not require changing the mechanism.
1.9.3 Operating System Implementation
The operating system needs to be implemented after it is designed. Earlier they were written in
assembly language but now higher level languages are used.
Advantages of Higher Level Language
There are multiple advantages to implementing an operating system using a higher-level
language: the code can be written faster, it is more compact, and it is easier to debug and
understand. Also, the operating system can easily be ported from one hardware platform to
another if it is written in a high-level language.
Disadvantages of Higher Level Language
Using a high-level language for implementing an operating system leads to some loss in speed
and an increase in storage requirements. However, in modern systems only a small amount of
the code is performance-critical, such as the CPU scheduler and memory manager, and the
bottleneck routines in the system can be replaced by assembly-language equivalents if required.

1.10.1 Process Concept

 A process is an instance of a program in execution.


 Batch systems work in terms of "jobs". Many modern process concepts are still expressed
in terms of jobs, ( e.g. job scheduling ), and the two terms are often used interchangeably.

1.10.2 The Process

 Process memory is divided into four sections as shown in Figure 1.10 below:
o The text section comprises the compiled program code, read in from non-
volatile storage when the program is launched.
o The data section stores global and static variables, allocated and initialized
prior to executing main.
o The heap is used for dynamic memory allocation, and is managed via calls to
new, delete, malloc, free, etc.
o The stack is used for local variables. Space on the stack is reserved for local
variables when they are declared, and the space is freed up when the
variables go out of scope. Note that the stack is also used for function return
values, and the exact mechanisms of stack management may be language-
specific.
o Note that the stack and the heap start at opposite ends of the process's free
space and grow towards each other. If they should ever meet, then either a
stack overflow error will occur, or else a call to new or malloc will fail due to
insufficient memory available.
 When processes are swapped out of memory and later restored, additional
information must also be stored and restored. Key among them are the program
counter and the value of all program registers.

Figure 1.10 - A process in memory

1.10.3 Process State

 Processes may be in one of 5 states, as shown in Figure 1.11 below.


o New - The process is in the stage of being created.
o Ready - The process has all the resources available that it needs to run, but the
CPU is not currently working on this process's instructions.
o Running - The CPU is working on this process's instructions.
o Waiting - The process cannot run at the moment, because it is waiting for
some resource to become available or for some event to occur. For example
the process may be waiting for keyboard input, disk access request, inter-
process messages, a timer to go off, or a child process to finish.
o Terminated - The process has completed.
 The load average reported by the "w" command indicates the average number of
processes in the "Ready" state over the last 1, 5, and 15 minutes, i.e. processes that
have everything they need to run but cannot because the CPU is busy doing
something else.
 Some systems may have other states besides the ones listed here.

Figure 1.11 - Diagram of process state

1.10.4 Process Control Block

For each process there is a Process Control Block, PCB, which stores the following ( types of )
process-specific information, as illustrated in Figure 1.12 .

 Process State - Running, waiting, etc., as discussed above.


 Process ID, and parent process ID.
 CPU registers and Program Counter - These need to be saved and restored when
swapping processes in and out of the CPU.
 CPU-Scheduling information - Such as priority information and pointers to
scheduling queues.
 Memory-Management information - E.g. page tables or segment tables.
 Accounting information - user and kernel CPU time consumed, account numbers,
limits, etc.
 I/O Status information - Devices allocated, open file tables, etc.
Figure 1.12 - Process control block ( PCB )

Figure 1.13 - Diagram showing CPU switch from process to process

1.11 Process Scheduling

 The two main objectives of the process scheduling system are to keep the CPU busy at all
times and to deliver "acceptable" response times for all programs, particularly for interactive
ones.
 The process scheduler must meet these objectives by implementing suitable policies for
swapping processes in and out of the CPU.

1.11.1 Scheduling Queues

 All processes are stored in the job queue.


 Processes in the Ready state are placed in the ready queue.
 Processes waiting for a device to become available or to deliver data are placed
in device queues. There is generally a separate device queue for each device.
 Other queues may also be created and used as needed.
Figure 1.14 - The ready queue and various I/O device queues

1.11.2 Schedulers

A process migrates between the various scheduling queues throughout its lifetime. The
operating system must select, for scheduling purposes, processes from these queues in some
fashion. The selection is carried out by the appropriate scheduler.

 A long-term scheduler is typical of a batch system or a very heavily loaded system.
It runs infrequently, ( such as when one process ends and another is selected to be
loaded in from disk in its place ), and can afford to take the time to implement
intelligent and advanced scheduling algorithms.
 The short-term scheduler, or CPU Scheduler, runs very frequently, on the order of
100 milliseconds, and must very quickly swap one process out of the CPU and swap
in another one.
 Some systems also employ a medium-term scheduler. When system loads get
high, this scheduler will swap one or more processes out of the ready queue system
for a few seconds, in order to allow smaller faster jobs to finish up quickly and clear
the system. See the differences in Figures 1.15 and 1.16 below.
 An efficient scheduling system will select a good process mix of CPU-
bound processes and I/O bound processes.
Figure 1.15 - Queueing-diagram representation of process scheduling

Figure 1.16 - Addition of medium-term scheduling to the queueing diagram

1.11.3 Context Switch

 Whenever an interrupt arrives, the CPU must do a state-save of the currently running
process, then switch into kernel mode to handle the interrupt, and then do a state-
restore of the interrupted process. This is known as a context switch.
 Similarly, a context switch occurs when the time slice for one process has expired
and a new process is to be loaded from the ready queue. This will be instigated by a
timer interrupt, which will then cause the current process's state to be saved and the
new process's state to be restored.
 Saving and restoring states involves saving and restoring all of the registers and
program counter(s), as well as the process control blocks described above.
 Context switching happens VERY VERY frequently, and the overhead of doing the
switching is just lost CPU time, so context switches ( state saves & restores ) need to
be as fast as possible. Some hardware has special provisions for speeding this up,
such as a single machine instruction for saving or restoring all registers at once.
1.12 Operations on Processes

1.12.1 Process Creation

 Processes may create other processes through appropriate system calls, such
as fork or spawn. The process which does the creating is termed the parent of the
other process, which is termed its child.
 Each process is given an integer identifier, termed its process identifier, or PID. The
parent PID ( PPID ) is also stored for each process.
 On typical UNIX systems the process scheduler is termed sched, and is given PID 0.
The first thing it does at system startup time is to launch init, which gives that
process PID 1. Init then launches all system daemons and user logins, and becomes
the ultimate parent of all other processes. Figure 1.12.1 shows a typical process tree
for a Linux system, and other systems will have similar though not identical trees:

Figure 1.12.1 - A tree of processes on a typical Linux system

 Depending on system implementation, a child process may receive some amount of
shared resources with its parent. Child processes may or may not be limited to a
subset of the resources originally allocated to the parent, preventing runaway children
from consuming all of a certain system resource.
 There are two options for the parent process after creating the child:
1. Wait for the child process to terminate before proceeding. The parent makes a
wait( ) system call, for either a specific child or for any child, which causes the
parent process to block until the wait( ) returns. UNIX shells normally wait for
their children to complete before issuing a new prompt.
2. Run concurrently with the child, continuing to process without waiting. This is
the operation seen when a UNIX shell runs a process as a background task. It
is also possible for the parent to run for a while, and then wait for the child later,
which might occur in a sort of a parallel processing operation. ( E.g. the parent
may fork off a number of children without waiting for any of them, then do a little
work of its own, and then wait for the children. )
 Two possibilities for the address space of the child relative to the parent:
1. The child may be an exact duplicate of the parent, sharing the same program
and data segments in memory. Each will have their own PCB, including
program counter, registers, and PID. This is the behavior of the fork system call
in UNIX.
2. The child process may have a new program loaded into its address space, with
all new code and data segments. This is the behavior of the spawn system
calls in Windows. UNIX systems implement this as a second step, using
the exec system call.
 Figure 3.10 below shows the fork process on a UNIX system. Note that
the fork system call returns a value to each process: it
returns zero to the child process and the non-zero child PID to the parent, so the
return value indicates which process is which. Process IDs can be looked up at any time
for the current process or its direct parent using the getpid( ) and getppid( ) system
calls respectively.
1.12.2 Process Termination

 Processes may request their own termination by making the exit( ) system call,
typically returning an int. This int is passed along to the parent if it is doing a wait( ),
and is typically zero on successful completion and some non-zero code in the event
of problems.
 Processes may also be terminated by the system for a variety of reasons, including:
o The inability of the system to deliver necessary system resources.
o In response to a KILL command, or other unhandled process interrupt.
o A parent may kill its children if the task assigned to them is no longer needed.
 If the parent exits, the system may or may not allow the child to continue without a
parent.
 When a process terminates, all of its system resources are freed up, open files
flushed and closed, etc. The process termination status and execution times are
returned to the parent if the parent is waiting for the child to terminate, or eventually
returned to init if the process becomes an orphan.

1.13 Interprocess Communication

 Independent Processes operating concurrently on a system are those that can neither
affect other processes nor be affected by other processes.
 Cooperating Processes are those that can affect or be affected by other processes.

There are several reasons why cooperating processes are allowed:

o Information Sharing - There may be several processes which need access to the
same file for example. ( e.g. pipelines. )
o Computation speedup - Often a solution to a problem can be solved faster if the
problem can be broken down into sub-tasks to be solved simultaneously ( particularly
when multiple processors are involved. )
o Modularity - The most efficient architecture may be to break a system down into
cooperating modules. ( E.g. databases with a client-server architecture. )
o Convenience - Even a single user may be multi-tasking, such as editing, compiling,
printing, and running the same code in different windows.
 Cooperating processes require some type of inter-process communication, which is most
commonly one of two types: Shared Memory systems or Message Passing systems. Figure
1.17 illustrates the difference between the two systems:
Figure 1.17 - Communications models: (a) Message passing. (b) Shared memory.

 Shared Memory is faster once it is set up, because no system calls are required and
access occurs at normal memory speeds. However it is more complicated to set up, and
doesn't work as well across multiple computers. Shared memory is generally preferable
when large amounts of information must be shared quickly on the same computer.
 Message Passing requires system calls for every message transfer, and is therefore
slower, but it is simpler to set up and works well across multiple computers. Message
passing is generally preferable when the amount and/or frequency of data transfers is
small, or when multiple computers are involved.

1.13.1 Shared-Memory Systems

 In general the memory to be shared in a shared-memory system is initially within the
address space of a particular process, which needs to make system calls in order to
make that memory publicly available to one or more other processes.
 Other processes which wish to use the shared memory must then make their own
system calls to attach the shared memory area onto their address space.
 Generally a few messages must be passed back and forth between the cooperating
processes first in order to set up and coordinate the shared memory access.

Producer-Consumer Example Using Shared Memory

 This is a classic example, in which one process is producing data and another
process is consuming the data. ( In this example in the order in which it is produced,
although that could vary. )
 The data is passed via an intermediary buffer, which may be either unbounded or
bounded. With a bounded buffer the producer may have to wait until there is space
available in the buffer, but with an unbounded buffer the producer will never need to
wait. The consumer may need to wait in either case until there is data available.
 This example uses shared memory and a circular queue. Note in the code below that
only the producer changes "in", and only the consumer changes "out", and that they
can never be accessing the same array location at the same time.
 First the following data is set up in the shared memory area:

#define BUFFER_SIZE 10

typedef struct {
    ...
} item;

item buffer[ BUFFER_SIZE ];
int in = 0;
int out = 0;

 Then the producer process. Note that the buffer is full when "in" is one less than "out"
in a circular sense:

item nextProduced;

while( true ) {

    /* Produce an item and store it in nextProduced */
    nextProduced = makeNewItem( . . . );

    /* Wait for space to become available */
    while( ( ( in + 1 ) % BUFFER_SIZE ) == out )
        ;   /* Do nothing */

    /* And then store the item and repeat the loop. */
    buffer[ in ] = nextProduced;
    in = ( in + 1 ) % BUFFER_SIZE;
}

 Then the consumer process. Note that the buffer is empty when "in" is equal to "out":

item nextConsumed;

while( true ) {

    /* Wait for an item to become available */
    while( in == out )
        ;   /* Do nothing */

    /* Get the next available item */
    nextConsumed = buffer[ out ];
    out = ( out + 1 ) % BUFFER_SIZE;

    /* Consume the item in nextConsumed ( Do something with it ) */
}
1.13.2 Message-Passing Systems

 Message passing systems must support at a minimum system calls for "send
message" and "receive message".
 A communication link must be established between the cooperating processes before
messages can be sent.
 There are three key issues to be resolved in message passing systems:
o Direct or indirect communication ( naming )
o Synchronous or asynchronous communication
o Automatic or explicit buffering.

1.13.2.1 Naming

 With direct communication the sender must know the name of the receiver to
which it wishes to send a message.
o There is a one-to-one link between every sender-receiver pair.
o For symmetric communication, the receiver must also know the specific
name of the sender from which it wishes to receive messages.
For asymmetric communications, this is not necessary.
 Indirect communication uses shared mailboxes, or ports.
o Multiple processes can share the same mailbox or boxes.
o Only one process can read any given message in a mailbox. Initially the
process that creates the mailbox is the owner, and is the only one allowed
to read mail in the mailbox, although this privilege may be transferred.
o The OS must provide system calls to create and delete mailboxes, and to
send and receive messages to/from mailboxes.

1.13.2.2 Synchronization

 Either the sending or receiving of messages ( or neither or both ) may be
either blocking or non-blocking.
 Blocking send: The sending process is blocked until the message is received
by the receiving process or by the mailbox.
 Non-blocking send: The sending process sends the message and resumes
operation.
 Blocking receive: The receiver blocks until a message is available.
 Nonblocking receive: The receiver retrieves either a valid message or a null.

1.13.2.3 Buffering

 Messages are passed via queues, which may have one of three capacity
configurations:
1. Zero capacity - Messages cannot be stored in the queue, so senders
must block until receivers accept the messages.
2. Bounded capacity − There is a certain pre-determined finite capacity in
the queue. Senders must block if the queue is full, until space becomes
available in the queue, but may be either blocking or non-blocking
otherwise.
3. Unbounded capacity - The queue has a theoretical infinite capacity, so
senders are never forced to block.

1.14 Communication in Client-Server Systems

1.14.1 Sockets

 A socket is an endpoint for communication.


 Two processes communicating over a network often use a pair of connected sockets
as a communication channel. Software that is designed for client-server operation
may also use sockets for communication between two processes running on the
same computer - For example the UI for a database program may communicate with
the back-end database manager using sockets. A socket is identified by an IP
address concatenated with a port number, e.g. 200.100.50.5:80.

Figure 1.18 - Communication using sockets

 Port numbers below 1024 are considered to be well-known, and are generally
reserved for common Internet services. For example, telnet servers listen to port 23,
ftp servers to port 21, and web servers to port 80.
 General purpose user sockets are assigned unused ports over 1024 by the operating
system in response to system calls such as socket( ) or socketpair( ).
 Communication channels via sockets may be of one of two major forms:
o Connection-oriented ( TCP, Transmission Control Protocol ) connections
emulate a telephone connection. All packets sent down the connection are
guaranteed to arrive in good condition at the other end, and to be delivered to
the receiving process in the order in which they were sent. The TCP layer of the
network protocol takes steps to verify all packets sent, re-send packets if
necessary, and arrange the received packets in the proper order before
delivering them to the receiving process. There is a certain amount of overhead
involved in this procedure, and if one packet is missing or delayed, then any
packets which follow will have to wait until the errant packet is delivered before
they can continue their journey.
o Connectionless ( UDP, User Datagram Protocol ) emulate individual
telegrams. There is no guarantee that any particular packet will get through
undamaged ( or at all ), and no guarantee that the packets will get delivered in
any particular order. There may even be duplicate packets delivered,
depending on how the intermediary connections are configured. UDP
transmissions are much faster than TCP, but applications must implement their
own error checking and recovery procedures.

1.14.2 Pipes

 Pipes are one of the earliest and simplest channels of communications between (
UNIX ) processes.
 There are four key considerations in implementing pipes:
1. Unidirectional or Bidirectional communication?
2. Is bidirectional communication half-duplex or full-duplex?
3. Must a relationship such as parent-child exist between the processes?
4. Can pipes communicate over a network, or only on the same machine?
 The following sections examine these issues on UNIX and Windows

1.14.2.1 Ordinary Pipes

 Ordinary pipes are uni-directional, with a reading end and a writing end. ( If
bidirectional communications are needed, then a second pipe is required. )
 In UNIX ordinary pipes are created with the system call "int pipe( int fd [ ] )".
o The return value is 0 on success, -1 if an error occurs.
o The int array must be allocated before the call, and the values are filled in by
the pipe system call:
 fd[ 0 ] is filled in with a file descriptor for the reading end of the pipe
 fd[ 1 ] is filled in with a file descriptor for the writing end of the pipe
o UNIX pipes are accessible as files, using standard read( ) and write( ) system
calls.
o Ordinary pipes are only accessible within the process that created them.
 Typically a parent creates the pipe before forking off a child.
 When the child inherits open files from its parent, including the pipe file(s),
a channel of communication is established.
 Each process ( parent and child ) should first close the ends of the pipe
that they are not using. For example, if the parent is writing to the pipe
and the child is reading, then the parent should close the reading end of
its pipe after the fork and the child should close the writing end.
 Figure 3.22 shows an ordinary pipe in UNIX.
1.14.2.2 Named Pipes

 Named pipes support bidirectional communication, communication between
processes that are not related as parent and child, and persistence after the
process which created them exits. Multiple processes can also share a named
pipe, typically one reader and multiple writers.
 In UNIX, named pipes are termed fifos, and appear as ordinary files in the file
system.
o UNIX named pipes are bidirectional, but half-duplex, so two pipes are still
typically used for bidirectional communications.
o UNIX named pipes still require that all processes be running on the same
machine. Otherwise sockets are used.
 Windows named pipes provide richer communications.
 Full-duplex is supported.
 Processes may reside on the same or different machines
 Created and manipulated using CreateNamedPipe( ), ConnectNamedPipe( ),
ReadFile( ), and WriteFile( ).

1.15 Threads

 A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU
utilization, consisting of a program counter, a stack, a set of registers, and a
thread ID.
 Traditional ( heavyweight ) processes have a single thread of control - There is one
program counter, and one sequence of instructions that can be carried out at any given
time.
 As shown in Figure 1.19, multi-threaded applications have multiple threads within a single
process, each having its own program counter, stack, and set of registers, but sharing
common code, data, and certain structures such as open files.
Figure 1.19 - Single-threaded and multithreaded processes

1.15.1 Motivation

 Threads are very useful in modern programming whenever a process has multiple
tasks to perform independently of one another.
 This is particularly true when one of the tasks may block, and it is desired to allow the
other tasks to proceed without blocking.
 For example, in a word processor, a background thread may check spelling and
grammar while a foreground thread processes user input ( keystrokes ), a
third thread loads images from the hard drive, and a fourth makes periodic automatic
backups of the file being edited.

1.15.2 Benefits

 There are four major categories of benefits to multi-threading:


1. Responsiveness - One thread may provide rapid response while other threads
are blocked or slowed down doing intensive calculations.
2. Resource sharing - By default threads share common code, data, and other
resources, which allows multiple tasks to be performed simultaneously in a
single address space.
3. Economy - Creating and managing threads ( and context switches between
them ) is much faster than performing the same tasks for processes.
4. Scalability, i.e. Utilization of multiprocessor architectures - A single
threaded process can only run on one CPU, no matter how many may be
available, whereas the execution of a multi-threaded application may be split
amongst available processors.

1.15.3 Multithreading Models

 There are two types of threads to be managed in a modern system: User threads and
kernel threads.
 User threads are supported above the kernel, without kernel support. These are the
threads that application programmers would put into their programs.
 Kernel threads are supported within the kernel of the OS itself. All modern OSes support
kernel level threads, allowing the kernel to perform multiple simultaneous tasks and/or to
service multiple kernel system calls simultaneously.
 In a specific implementation, the user threads must be mapped to kernel threads, using
one of the following strategies.

1.15.4 Many-To-One Model

 In the many-to-one model, many user-level threads are all mapped onto a single
kernel thread.
 Thread management is handled by the thread library in user space, which is very
efficient.
 However, if a blocking system call is made, then the entire process blocks, even if the
other user threads would otherwise be able to continue.
 Because a single kernel thread can operate only on a single CPU, the many-to-one
model does not allow individual processes to be split across multiple CPUs.

Figure 1.20 - Many-to-one model

1.15.5 One-To-One Model

 The one-to-one model creates a separate kernel thread to handle each user thread.
 The one-to-one model overcomes the problems listed above involving blocking
system calls and the splitting of processes across multiple CPUs.
 However, managing the one-to-one model involves greater overhead, because
creating each user thread requires creating a corresponding kernel thread,
which slows the system down.
 Most implementations of this model place a limit on how many threads can be
created.
 Linux and Windows from 95 to XP implement the one-to-one model for threads.

Figure 1.21- One-to-one model


1.15.6 Many-To-Many Model

 The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads, combining the best features of the one-to-one and
many-to-one models.
 Users have no restrictions on the number of threads created.
 Blocking kernel system calls do not block the entire process.
 Processes can be split across multiple processors.
 Individual processes may be allocated variable numbers of kernel threads, depending
on the number of CPUs present and other factors.

Figure 1.22 - Many-to-many model
