Architecture of Linux
1. Kernel: The kernel is the core of a Linux-based operating system. It virtualizes the
common hardware resources of the computer to provide each process with its own
virtual resources. This makes each process appear as if it were the sole process
running on the machine. The kernel is also responsible for preventing and
mitigating conflicts between different processes. The main types of kernel are:
Monolithic kernels
Hybrid kernels
Exokernels
Microkernels
2. System Library: These are special functions that are used to implement the
functionality of the operating system.
3. Shell: It is an interface to the kernel which hides the complexity of the kernel’s
functions from the users. It takes commands from the user and executes the
kernel’s functions.
4. Hardware Layer: This layer consists of all peripheral devices like RAM, HDD,
CPU, etc.
5. System Utility: It provides the functionalities of an operating system to the
user.
Operating System
An Operating System (OS) is an interface between a computer user and computer hardware. An
operating system is a software which performs all the basic tasks like file management, memory
management, process management, handling input and output, and controlling peripheral devices
such as disk drives and printers.
An operating system is software that enables applications to interact with a computer's hardware.
The software that contains the core components of the operating system is called the kernel.
The primary purposes of an Operating System are to enable applications (software) to interact
with a computer's hardware and to manage a system's hardware and software resources.
Definition:
An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.
An operating system provides the following major functions:
1. Process Management
2. I/O Device Management
3. File Management
4. Network Management
5. Main Memory Management
6. Secondary Storage Management
7. Security Management
8. Command Interpreter System
Process Management
A process is a program, or a fraction of a program, that is loaded into main memory. A process
needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task.
The process management component manages the multiple processes running simultaneously on the
operating system.
A program in running state is called a process.
The operating system is responsible for creating and deleting processes, scheduling them, and
providing mechanisms for process synchronization and communication.
I/O Device Management
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices
from the user. I/O Device Management provides an abstraction layer over hardware devices and
keeps the details away from applications, to ensure proper use of devices, to prevent errors, and to
provide users with a convenient and efficient programming environment.
The I/O Device Management component provides a uniform device-driver interface and manages
the buffering, caching, and driving of I/O devices.
File Management
File management is one of the most visible services of an operating system. Computers can store
information in several different physical forms; magnetic tape, disk, and drum are the most common
forms.
A file is defined as a set of correlated information, and it is defined by the creator of the file. Mostly
files represent data, source and object forms, and programs. Data files can be of any type, like
alphabetic, numeric, and alphanumeric.
A file is a sequence of bits, bytes, lines, or records whose meaning is defined by its creator and user.
The operating system implements the abstract concept of the file by managing mass storage
devices, such as tapes and disks. Also, files are normally organized into directories to ease their use.
These directories may contain files and other directories and so on.
The operating system is responsible for creating and deleting files and directories, supporting
primitives for manipulating files and directories, and mapping files onto secondary storage.
Network Management
The definition of network management is often broad, as network management involves several
different components. Network management is the process of managing and administering a
computer network. A computer network is a collection of various types of computers connected
with each other.
Network management comprises fault analysis, maintaining the quality of service, provisioning of
networks, and performance management.
Network management is the process of keeping your network healthy for an efficient communication
between different computers.
Main Memory Management
Memory is a large array of words or bytes, each with its own address. It is a repository of quickly
accessible data shared by the CPU and I/O devices.
Main memory is a volatile storage device which means it loses its contents in the case of system
failure or as soon as system power goes down.
The main motivation behind Memory Management is to maximize memory utilization on the computer
system.
The operating system is responsible for the following activities in connection with memory
management:
Keep track of which parts of memory are currently being used and by whom.
Decide which processes to load when memory space becomes available.
Allocate and deallocate memory space as needed.
Secondary Storage Management
The main purpose of a computer system is to execute programs. These programs, together with
the data they access, must be in main memory during execution. Since the main memory is too
small to permanently accommodate all data and program, the computer system must provide
secondary storage to backup main memory.
Most modern computer systems use disks as the principal on-line storage medium, for both
programs and data. Most programs, like compilers, assemblers, sort routines, editors, formatters,
and so on, are stored on the disk until loaded into memory, and then use the disk as both the source
and destination of their processing.
The operating system is responsible for the following activities in connection with disk
management:
Free-space management
Storage allocation
Disk scheduling
Security Management
The operating system is primarily responsible for all tasks and activities that happen in the computer
system. The various processes in an operating system must be protected from each other’s
activities. For that purpose, various mechanisms are used to ensure that the files, memory
segments, CPU, and other resources can be operated on only by those processes that have gained
proper authorization from the operating system.
Security Management refers to a mechanism for controlling the access of programs, processes, or
users to the resources defined by a computer system. It must provide a means of specifying the
controls to be imposed, together with some means of enforcement.
For example, memory-addressing hardware ensures that a process can execute only within its own
address space. The timer ensures that no process can gain control of the CPU without relinquishing
it. Finally, no process is allowed to do its own I/O directly, to protect the integrity of the various
peripheral devices.
Command Interpreter System
One of the most important components of an operating system is its command interpreter. The
command interpreter is the primary interface between the user and the rest of the system.
Command Interpreter System executes a user command by calling one or more underlying system
programs or system calls.
Command Interpreter System allows human users to interact with the Operating System and
provides convenient programming environment to the users.
Many commands are given to the operating system by control statements. A program that reads
and interprets control statements is executed automatically. This program is called the shell; a few
examples are the Windows DOS command window, and Bash or the C shell on Unix/Linux.
A threat is a program that is malicious in nature and leads to harmful effects for the system. Some
of the common threats that occur in a system are −
Virus
Viruses are generally small snippets of code embedded in a system. They are very dangerous and
can corrupt files, destroy data, crash systems etc. They can also spread further by replicating
themselves as required.
Trojan Horse
A trojan horse can secretly access the login details of a system. Then a malicious user can use
these to enter the system as a harmless being and wreak havoc.
Trap Door
A trap door is a security breach that may be present in a system without the knowledge of the
users. It can be exploited to harm the data or files in a system by malicious people.
Worm
A worm can destroy a system by using its resources to extreme levels. It can generate multiple
copies which claim all the resources and don't allow any other processes to access them. A worm
can shut down a whole network in this way.
Denial of Service
These types of attacks prevent legitimate users from accessing a system. The attacker floods the
system with requests so that it is overwhelmed and cannot work properly for other users.
Authentication
This deals with identifying each user in the system and making sure they are who they claim to
be. The operating system makes sure that all the users are authenticated before they access the
system. The different ways to make sure that the users are authentic are:
Username/ Password
Each user has a distinct username and password combination and they need to enter
it correctly before they can access the system.
User Key/ User Card
The users need to punch a card into the card slot or use their individual key on a
keypad to access the system.
User Attribute Identification
Different user attribute identifications that can be used are fingerprint, eye retina etc.
These are unique for each user and are compared with the existing samples in the
database. The user can only access the system if there is a match.
One Time Password
These passwords provide a lot of security for authentication purposes. A one time password can
be generated exclusively for a login every time a user wants to enter the system. It cannot be used
more than once. The various ways a one time password can be implemented are −
Random Numbers
The system can ask for numbers that correspond to a pre-arranged set of letters.
This combination can be changed each time a login is required.
Secret Key
A hardware device can create a secret key related to the user id for login. This key can
change each time.
Operating System Operations
An operating system is a construct that allows user application programs to interact with the
system hardware. The operating system by itself does not perform any useful work; it provides an
environment in which different applications and programs can do useful work.
The major operations of the operating system are process management, memory management,
device management and file management. These are given in detail as follows:
Process Management
The operating system is responsible for managing processes, i.e., assigning the processor to a
process at a time. This is known as process scheduling. The different algorithms used for process
scheduling are FCFS (first come, first served), SJF (shortest job first), priority scheduling, round
robin scheduling, etc.
There are many scheduling queues that are used to handle processes in process management.
When the processes enter the system, they are put into the job queue. The processes that are
ready to execute in the main memory are kept in the ready queue. The processes that are
waiting for the I/O device are kept in the device queue.
Memory Management
Memory management plays an important part in operating system. It deals with memory and the
moving of processes from disk to primary memory for execution and back again.
The activities performed by the operating system for memory management are −
The operating system assigns memory to processes as required. This can be
done using the best fit, first fit, and worst fit algorithms.
All the memory is tracked by the operating system, i.e., it notes which memory
parts are in use by processes and which are free.
The operating system deallocates memory from processes as required. This may
happen when a process has been terminated or when it no longer needs the memory.
Device Management
There are many I/O devices handled by the operating system such as mouse, keyboard, disk drive
etc. There are different device drivers that can be connected to the operating system to handle a
specific device. The device controller is an interface between the device and the device driver. The
user applications can access all the I/O devices using the device drivers, which are device specific
codes.
File Management
Files are used to provide a uniform view of data storage by the operating system. All the files are
mapped onto physical devices that are usually non-volatile so data is safe in the case of system
failure.
The files can be accessed by the system in two ways i.e. sequential access and direct access −
Sequential Access
The information in a file is processed in order using sequential access. The file's
records are accessed one after another. Most applications, such as editors and
compilers, use sequential access.
Direct Access
In direct access (or relative access), a file can be accessed at random for read and
write operations. The direct access model is based on the disk model of a file, since a
disk allows random access.
An operating system can be examined from two viewpoints:
1. User View
2. System View
User View
The user view depends on the system interface that is used by the users. Some systems are designed
for a single user to monopolize the resources, to maximize that user's work. In these cases, the OS is
designed primarily for ease of use, with some emphasis on performance and none on resource utilization.
The user viewpoint focuses on how the user interacts with the operating system through the usage
of various application programs. In contrast, the system viewpoint focuses on how the hardware
interacts with the operating system to complete various tasks.
Most computer users use a monitor, keyboard, mouse, printer, and other accessories to operate their
computer system. In some cases, the system is designed to maximize the output of a single user. As
a result, more attention is paid to accessibility, and resource allocation is less important. These
systems are designed for a single-user experience and meet the needs of a single user, where
performance is not given as much focus as in multi-user systems.
Another example in which user experience and performance matter is a single mainframe computer
with many users, each at their own terminal, interacting with the mainframe through its kernel. In
such circumstances, CPU time and memory must be allocated effectively to give a good user
experience. The client-server architecture is another good example, where many clients may interact
through a remote server, and the same constraints of effective use of server resources arise.
Moreover, the touchscreen era has produced the best handheld technology yet. Smartphones
interact via wireless networks to perform numerous operations, but their interfaces are more
constrained than a full computer interface, limiting their usefulness. Even so, their operating
systems are a great example of designing a device around the user's point of view.
Some systems, like embedded systems, largely lack a user point of view. The remote control used to
turn a TV on or off is part of an embedded system in which the electronic device communicates with
another program; the user viewpoint is limited to a narrow interaction with the application.
System View
The OS may also be viewed as just a resource allocator. A computer system comprises various
resources, such as hardware and software, which must be managed effectively. The operating system
manages the resources, decides between competing demands, controls program execution, etc.
According to this point of view, the operating system's purpose is to maximize performance.
The operating system is responsible for managing hardware resources and allocating them to
programs and users to ensure maximum performance.
From the user point of view, we have discussed the numerous applications that require varying
degrees of user participation. From a system viewpoint, however, we are more concerned with how
the hardware interacts with the operating system than with the user. The hardware and the operating
system interact for a variety of reasons, including:
1. Resource Allocation
The hardware contains several resources like registers, caches, RAM, ROM, CPUs, I/O interaction, etc.
These are all resources that the operating system needs when an application program demands
them. Only the operating system can allocate resources, and it uses several tactics and strategies to
maximize its processing and memory space. The operating system uses a variety of strategies to
get the most out of the hardware resources, including paging, virtual memory, caching, and so on.
These matter greatly from the user viewpoint, because inefficient resource allocation may cause
the user's system to lag or hang, reducing the user experience.
2. Control Program
The control program controls how input and output devices (hardware) interact with the operating
system. The user may request an action that can only be done with I/O devices; in this case, the
operating system must be able to communicate with, control, detect, and handle such devices properly.
Program execution
Operating systems handle many kinds of activities, from user programs to system programs such as
printer spoolers, name servers, and file servers. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate, registers, OS
resources in use). The major activities of an operating system with respect to program management
include loading programs into memory, executing them, and providing mechanisms for process
synchronization, communication, and deadlock handling.
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the
peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
I/O operation means read or write operation with any file or any specific I/O device.
Operating system provides the access to the required I/O device when required.
A file system is normally organized into directories for easy navigation and usage. These directories may
contain files and other directories. The major activities of an operating system with respect to file
management include keeping track of where information is stored, deciding who gets the resources, and
allocating and de-allocating those resources.
Communication
In case of distributed systems which are a collection of processors that do not share memory, peripheral
devices, or a clock, the operating system manages communications between all the processes. Multiple
processes communicate with one another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and security. The
major activities of an operating system with respect to communication include mapping routes and
connections and transferring messages between processes.
Error handling
Errors can occur anytime and anywhere: in the CPU, in I/O devices, or in the memory hardware. The
major activities of an operating system with respect to error handling are constantly checking for possible
errors and taking appropriate action to ensure correct and consistent computing.
Resource Management
In the case of a multi-user or multi-tasking environment, resources such as main memory, CPU cycles,
and file storage must be allocated to each user or job. The major activities of an operating system with
respect to resource management include allocating resources to users and jobs and scheduling the CPU
for better utilization.
Protection
Considering a computer system having multiple users and concurrent execution of multiple processes, the
various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users to the
resources defined by a computer system. The major activities of an operating system with respect to
protection include controlling access to system resources and protecting I/O devices from invalid access
attempts.
System Calls
In this section, you will learn about system calls in the operating system, their types, and related details.
The Application Program Interface (API) connects the operating system's functions to user programs.
It acts as a link between the operating system and a process, allowing user-level programs to request
operating system services. The kernel can be accessed only through system calls. System calls are
required by any program that uses kernel-managed resources.
Below are some examples of how a system call varies from a user function.
1. A system call function may create and use kernel processes to perform asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system call with kernel-mode privilege executes
in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that are not present in the kernel protection
domain.
4. The code and data for system calls are stored in global kernel memory.
System calls are needed, for example, in the following situations:
Network connections require system calls for sending and receiving data packets.
Access to hardware devices, such as a printer or scanner, requires a system call.
System calls are used to create and manage new processes.
If the request is permitted, the kernel performs the requested action, like creating or deleting a file.
When the operation is finished, the kernel moves the resulting data from kernel space to user space in
memory and returns the results to the application, which then resumes its procedure.
A simple system call may take a few nanoseconds to provide the result, like retrieving the system date and time.
A more complicated system call, such as connecting to a network device, may take a few seconds. Most
operating systems launch a distinct kernel thread for each system call to avoid bottlenecks. Modern operating
systems are multi-threaded, which means they can handle various system calls at the same time.
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Now, you will learn about all the different types of system calls one-by-one.
Process Control
Process control is the category of system calls used to direct processes. Some process control examples
include creating a process, loading, aborting, ending, executing, and terminating a process.
File Management
File management is the category of system calls used to handle files. Some file management examples
include creating, deleting, opening, closing, reading, and writing files.
Device Management
Device management is the category of system calls used to deal with devices. Some examples of device
management include reading from and writing to a device, getting device attributes, and releasing a device.
Information Maintenance
Information maintenance is the category of system calls used to maintain information. Some examples of
information maintenance include getting or setting the time and date and getting or setting system data.
Communication
Communication is the category of system calls used for communication. Some examples include creating
and deleting communication connections and sending and receiving messages.
open()
The open() system call allows you to access a file on a file system. It allocates resources to the file and
provides a handle that the process may refer to. A file may be opened by many processes at once or by a
single process only; it all depends on the file system and the file's structure.
read()
It is used to obtain data from a file on the file system. It accepts three arguments in general:
o A file descriptor.
o A buffer to store the read data in.
o The number of bytes to read from the file.
The file descriptor identifies the file to be read; the file must first have been opened using open().
wait()
In some systems, a process may have to wait for another process to complete its execution before proceeding.
When a parent process makes a child process, the parent process execution is suspended until the child
process is finished. The wait() system call is used to suspend the parent process. Once the child process has
completed its execution, control is returned to the parent process.
write()
It is used to write data from a user buffer to a device like a file. This system call is one way for a program to
generate data. It takes three arguments in general:
o A file descriptor.
o A buffer holding the data to be written.
o The number of bytes to write from the buffer.
fork()
Processes generate clones of themselves using the fork() system call. It is one of the most common ways
to create processes in operating systems. After the call returns, the parent and the child both continue to
execute; if the parent must pause until the child completes, it does so explicitly with the wait() system
call, as described above.
close()
It is used to end file system access. When this system call is invoked, it signifies that the program no
longer requires the file: the buffers are flushed, the file metadata is updated, and the file's resources are
de-allocated.
exec()
This system call is invoked when an executable file replaces an earlier executable file in an already
executing process. No new process is built; the old process identity stays, but the new program replaces
the process's code, data, stack, and heap.
exit()
The exit() system call is used to end program execution. It indicates that execution is complete, which is
especially useful in multi-threaded environments. The operating system reclaims the resources used by
the process after the exit() call.
Single processor system contains only one processor while multiprocessor systems may
contain two or more processors.
Single processor systems use different controllers for completing special tasks such as DMA
(Direct Memory Access) Controller. On the other hand, multiprocessor systems have many
processors that can perform different tasks. This can be done in symmetric or asymmetric
multiprocessing.
Multiprocessor systems can be cheaper than multiple single processor systems: an n-processor
multiprocessor costs less than n separate single processor systems, because the memory,
peripherals, etc. are shared.
It is easier to design a single processor system as compared to a multiprocessor system.
This is because all the processors in the multiprocessor system need to be synchronized and
this can be quite complicated.
The throughput of a multiprocessor system is higher than that of a single processor system.
However, if the combined throughput of n single processor systems is T, the throughput of an
n-processor multiprocessor system will be less than T, because of the overhead of keeping the
processors synchronized.
Single processor systems are less reliable than multiprocessor systems because if the processor
fails for some reason, the system cannot work. In multiprocessor systems, even if one processor
fails, the rest of the processors can pick up the slack; at most, the throughput of the system
decreases a little.
Traditionally, most personal computers were single processor systems; with today's multi-core
CPUs, most modern machines are effectively multiprocessor systems.
Types of Operating Systems
An Operating System performs all the basic tasks like managing files, processes, and memory. The
operating system thus acts as the manager of all the resources, i.e., a resource manager, and
becomes an interface between the user and the machine.
Types of Operating Systems: Widely used operating systems include batch, time-sharing,
distributed, network, and real-time operating systems. A Real-Time Operating System (RTOS)
serves applications that must process data within strict time constraints.
Advantages of RTOS:
Maximum Utilization: Maximum utilization of devices and the system, thus more output
from all the resources.
Task Shifting: The time taken for shifting between tasks in these systems is very short. For
example, in older systems it takes about 10 microseconds to shift from one task to
another, while in the latest systems it takes about 3 microseconds.
Focus on Application: The focus is on running applications, with less importance given to
applications waiting in the queue.
Real-time operating system in the embedded system: Since the programs are small in size,
an RTOS can also be used in embedded systems, such as in transport and others.
Error Free: These types of systems are designed to be as free of errors as possible.
Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
Limited Tasks: Very few tasks run at the same time, and concentration is kept on a
few applications to avoid errors.
Use heavy system resources: These systems sometimes need a lot of system resources,
which are expensive.
Complex Algorithms: The algorithms are very complex and difficult for the designer
to write.
Device driver and interrupt signals: An RTOS needs specific device drivers and interrupt
signals so that it can respond to interrupts as early as possible.
Thread Priority: Setting thread priorities is difficult, because these systems are not
well suited to frequent task switching.
Examples of real-time operating system use are scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
Operating System - Processes
Process
A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we execute this program,
it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections:
stack, heap, text, and data.
1. Stack — The process stack contains temporary data such as method/function parameters, the
return address, and local variables.
2. Heap — This is memory that is dynamically allocated to the process during its run time.
3. Text — This includes the current activity, represented by the value of the program counter, and
the contents of the processor's registers.
4. Data — This section contains the global and static variables.
Program
A program is a piece of code which may be a single line or millions of lines. A computer program is usually
written by a computer programmer in a programming language. For example, here is a simple program
written in C programming language −
#include <stdio.h>
int main() {
printf("Hello, World! \n");
return 0;
}
A computer program is a collection of instructions that performs a specific task when executed by a
computer. When we compare a program with a process, we can conclude that a process is a dynamic
instance of a computer program.
A part of a computer program that performs a well-defined task is known as an algorithm. A collection of
computer programs, libraries and related data are referred to as a software.
In general, a process can be in one of the following five states at a time.
1. Start — This is the initial state when a process is first started or created.
2. Ready — The process is waiting to be assigned to a processor. Ready processes are waiting to
have the processor allocated to them by the operating system so that they can run. A process may
come into this state after the Start state, or while running, if it is interrupted by the scheduler so
that the CPU can be assigned to some other process.
3. Running — Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.
4. Waiting — The process moves into the waiting state if it needs to wait for a resource, such as
user input or a file to become available.
5. Terminated or Exit — Once the process finishes its execution, or is terminated by the operating
system, it is moved to the terminated state, where it waits to be removed from main memory.
1
Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.
2
Process privileges
3
Process ID
4
Pointer
5
Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this process.
6
CPU registers
Various CPU registers where process need to be stored for execution for running state.
7
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.
8
Memory management information
This includes the information of page table, memory limits, Segment table depending on memory used
by the operating system.
9
Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID etc.
10
IO status information
This includes a list of the I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain different
information in different operating systems. Here is a simplified diagram of a PCB −
The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
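The fields above can be pictured as one record kept per process. A minimal sketch in Python follows; the field names are illustrative simplifications mirroring the list above, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                       # process ID
    state: str = "ready"                           # process state
    program_counter: int = 0                       # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    priority: int = 0                              # CPU scheduling information
    page_table: dict = field(default_factory=dict) # memory management information
    cpu_time_used: float = 0.0                     # accounting information
    open_files: list = field(default_factory=list) # I/O status information

# The OS keeps one PCB per process for the process's whole lifetime.
pcb = PCB(pid=42)
pcb.state = "running"
print(pcb)
```

When the process terminates, the OS simply discards its PCB, which matches the lifetime rule stated above.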
Operating System - Process Scheduling
Definition
The process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems
allow more than one process to be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
1. Non-preemptive: Here the resource can’t be taken from a process until the process completes
execution. The switching of resources occurs when the running process terminates and moves to a
waiting state.
2. Preemptive: Here the OS allocates the CPU to a process for a fixed amount of time. During
scheduling, a process may be switched from the running state to the ready state, or from the waiting
state to the ready state. This switching occurs because the CPU may give priority to other processes:
a higher-priority process can replace the currently running one.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device constitute
this queue.
The OS can use different policies to manage each queue (FIFO, round robin, priority, etc.). The OS scheduler
determines how to move processes between the ready queue and the run queue, which can have only one entry per
processor core on the system; in the above diagram, it has been merged with the CPU.
1
Running
The process that is currently being executed on the CPU is in the running state.
2
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute; a newly created
process is placed in this queue. Each entry in the queue is a pointer to a particular process, and the
queue can be implemented using a linked list. The dispatcher works as follows: when the running process
is interrupted, it is transferred to the waiting queue; if it has completed or aborted, it is discarded.
In either case, the dispatcher then selects a process from the queue to execute.
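This queue-plus-dispatcher arrangement can be sketched with a FIFO queue; collections.deque stands in here for the linked list of process pointers, and `dispatch` is an illustrative name, not a real OS routine:

```python
from collections import deque

not_running = deque(["P1", "P2", "P3"])  # each entry points to a process
completed = []

def dispatch():
    """Select the next process from the queue and let it run."""
    running = not_running.popleft()
    # For this sketch, assume every dispatched process runs to completion.
    completed.append(running)
    return running

while not_running:
    dispatch()
print(completed)  # ['P1', 'P2', 'P3'] -- served in FIFO order
```

A real dispatcher would also handle the interrupted case, pushing a preempted process back onto the queue instead of marking it completed.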
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the
system for processing. It selects processes from the queue and loads them into memory for execution.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of processes
leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating
systems typically have no long-term scheduler. It is the long-term scheduler that moves a process from
the new state to the ready state.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with
the chosen set of criteria. It carries out the change from the ready state to the running state: the CPU
scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, decide which process to execute next.
Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping; it removes processes from memory. A running process may
become suspended if it makes an I/O request. A suspended process cannot make any progress towards
completion. In this condition, to remove the process from memory and make space for other processes, the
suspended process is moved to secondary storage. This is called swapping, and the process is said to be
swapped out or rolled out. Swapping may be necessary to improve the process mix.
A comparison of the three schedulers:
Speed: the long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the
fastest of the three; the medium-term scheduler's speed lies between the other two.
Selection: the long-term scheduler selects processes from the pool and loads them into memory for
execution; the short-term scheduler selects from among the processes that are ready to execute; the
medium-term scheduler can re-introduce a swapped-out process into memory so that its execution can
be continued.
Context Switching
A context switch is the mechanism of storing and restoring the state (context) of a CPU in the process
control block so that a process's execution can be resumed from the same point at a later time. Using this
technique, a context switcher enables multiple processes to share a single CPU. Context switching is an
essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state of the
currently running process is stored into its process control block. After this, the state of the process to run
next is loaded from its own PCB and used to set the program counter, registers, etc. At that point, the second
process can start executing.
Context switches are computationally intensive, since register and memory state must be saved and
restored. To reduce context-switching time, some hardware systems employ two or more sets
of processor registers. When the process is switched, the following information is stored for later use.
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
Operating System Scheduling algorithms
A process scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. Six popular process scheduling algorithms are discussed in this chapter −
First Come First Serve (FCFS) Scheduling
Shortest Job Next (SJN) Scheduling
Priority Scheduling
Shortest Remaining Time Scheduling
Round Robin (RR) Scheduling
Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so
that once a process enters the running state, it cannot be preempted until it completes its CPU burst,
whereas preemptive scheduling is priority-based: the scheduler may preempt a low-priority running
process at any time when a high-priority process enters the ready state.
Consider four processes with the following arrival and execution (burst) times:

Process   Arrival Time   Execution Time
P0        0              5
P1        1              3
P2        2              8
P3        3              6

First Come First Serve (FCFS): processes are served in the order they arrive, so the service (start)
times are 0, 5, 8 and 16. Waiting time of each process (waiting time = service time - arrival time):

P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13

Average waiting time: (0 + 4 + 6 + 13) / 4 = 5.75

Shortest Job First (SJF, non-preemptive): once the CPU is free, the ready process with the shortest
execution time runs next, giving the following service times:

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8

Waiting time of each process:

P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 14 - 2 = 12
P3: 8 - 3 = 5

Average waiting time: (0 + 4 + 12 + 5) / 4 = 5.25

Priority Based Scheduling (non-preemptive; here a larger number means a higher priority):

Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5

Waiting time of each process:

P0: 0 - 0 = 0
P1: 11 - 1 = 10
P2: 14 - 2 = 12
P3: 5 - 3 = 2

Average waiting time: (0 + 10 + 12 + 2) / 4 = 6.0

Round Robin (time quantum = 3): each process gets the CPU for at most 3 time units per turn and is
then moved to the back of the ready queue. Waiting time of each process:

P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
P3: (9 - 3) + (17 - 12) = 11

Average waiting time: (9 + 2 + 12 + 11) / 4 = 8.5
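The waiting times above can be reproduced programmatically. The sketch below covers the FCFS and non-preemptive SJF cases, using the same workload; the function names are illustrative, not a standard API:

```python
# (name, arrival time, execution/burst time) for the four processes above
procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

def fcfs_waiting(procs):
    """Waiting time per process under First Come First Serve."""
    clock, waits = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        clock = max(clock, arrival)    # CPU may idle until the job arrives
        waits[name] = clock - arrival  # waiting = service time - arrival
        clock += burst
    return waits

def sjf_waiting(procs):
    """Waiting time per process under non-preemptive Shortest Job First."""
    clock, waits, pending = 0, {}, list(procs)
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                  # nothing has arrived yet
            clock = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])  # shortest burst first
        name, arrival, burst = job
        waits[name] = clock - arrival
        clock += burst
        pending.remove(job)
    return waits

print(fcfs_waiting(procs))  # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
print(sjf_waiting(procs))   # {'P0': 0, 'P1': 4, 'P2': 12, 'P3': 5}
```

Both results match the waiting-time tables worked out above.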
Multiple-level queues scheduling groups jobs by a common characteristic and applies a possibly different
algorithm to each queue. For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound
jobs in another. The process scheduler then alternately selects jobs from each queue and assigns them to
the CPU based on the algorithm assigned to that queue.
Process creation in Operating Systems
A process can create several new processes through create-process system calls during its execution.
The creating process is called the parent process and the new process is the child process.
Every new process can itself create further processes, forming a tree-like structure. Each process is
identified by a unique process identifier, usually written as pid, which is typically an integer number.
Every process needs certain resources, such as CPU time, memory, files and I/O devices, to accomplish
its task. Whenever a process creates a sub-process, that sub-process may obtain its resources directly
from the operating system, or it may be constrained to the resources of its parent. The parent process
may need to partition its resources among all its children, or it may be able to share some resources
among several children.
Restricting a child process to a subset of the parent's resources prevents any process from
overloading the system by creating too many sub-processes. A process obtains its resources when it
is created.
Let us consider a tree of processes on a typical Solaris system as follows −
Whenever a process creates a new process, there are two possibilities in terms of execution, which
are as follows −
The parent continues to execute concurrently with its children.
The parent waits till some or all its children have terminated.
There are two more possibilities in terms of address space of the new process, which are as
follows −
The child process is a duplicate of the parent process.
The child process has a new program loaded into it.
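On POSIX systems these possibilities map directly onto the fork(), exec() and wait() system calls. A minimal sketch in Python, assuming a Unix-like OS where os.fork is available:

```python
import os

pid = os.fork()                    # create a child: a duplicate of the parent
if pid == 0:
    # Child process: it begins as a copy of the parent. It could instead
    # load a new program into its address space with os.execv(...).
    print("child running, pid =", os.getpid())
    os._exit(0)                    # child terminates
else:
    # Parent process: here it waits until the child has terminated; it
    # could equally continue executing concurrently with the child.
    reaped, status = os.waitpid(pid, 0)
    print("parent reaped child", reaped)
```

The `if pid == 0` branch is how both parent and child, each holding its own copy of the code, decide which role to play after the fork.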
Cooperating Process
Cooperating processes are those that can affect or are affected by other processes running on the
system. Cooperating processes may share data with each other.
Modularity
Modularity involves dividing complicated tasks into smaller subtasks. These subtasks
can be completed by different cooperating processes. This leads to faster and more
efficient completion of the required tasks.
Information Sharing
Sharing of information between multiple processes can be accomplished using
cooperating processes. This may include access to the same files. A mechanism is
required so that the processes can access the files in parallel.
Convenience
There are many tasks that a user needs to do such as compiling, printing, editing etc.
It is convenient if these tasks can be managed by cooperating processes.
Computation Speedup
Subtasks of a single task can be performed in parallel using cooperating processes.
This speeds up the computation, as the task can be executed faster. However,
this is only possible if the system has multiple processing elements.
Methods of Cooperation
Cooperating processes can coordinate with each other using shared data or messages. Details
about these are given as follows −
Cooperation by Sharing
The cooperating processes can cooperate with each other using shared data such as
memory, variables, files and databases. A critical section is used to provide data
integrity, and writes are made mutually exclusive to prevent inconsistent data.
A diagram that demonstrates cooperation by sharing is given as follows −
In the above diagram, Process P1 and P2 can cooperate with each other using shared
data such as memory, variables, files, databases etc.
Cooperation by Communication
The cooperating processes can cooperate with each other using messages. This may
lead to deadlock if each process is waiting for a message from the other before performing
an operation. Starvation is also possible if a process never receives a message.
A diagram that demonstrates cooperation by communication is given as follows −
In the above diagram, Process P1 and P2 can cooperate with each other using messages to
communicate.
Process Termination
A process may be terminated in the system for different reasons. Some of the events that lead to
process termination are as follows −
A process may be terminated after its execution is naturally completed. This process leaves
the processor and releases all its resources.
A child process may be terminated if its parent process requests its termination.
A process can be terminated if it tries to use a resource that it is not allowed to. For
example - A process can be terminated for trying to write into a read only file.
If an I/O failure occurs for a process, it can be terminated. For example - If a process requires
the printer and it is not working, then the process will be terminated.
In most cases, if a parent process is terminated then its child processes are also terminated.
This is done because the child process cannot exist without the parent process.
If a process requires more memory than is currently available in the system, then it is
terminated because of memory scarcity.
Interprocess communication in Operating System
Processes in an operating system need to communicate with each other; this is called interprocess
communication. This article covers the different ways in which interprocess communication is done.
Interprocess communication (IPC) is one of the key mechanisms an operating system uses to let
processes coordinate. IPC helps processes communicate with each other without having to go
through user-level routines or interfaces. It allows different parts of a program to access shared
data and files without causing conflicts among them. In inter-process communication, messages
are exchanged between two or more processes, which can be on the same computer or on
different computers. In this article, we will discuss IPC, the need for it, and the different
approaches to doing it.
Advantages of interprocess communication
Interprocess communication allows one application to manage another and enables glitch-
free data sharing.
Interprocess communication helps send messages efficiently between processes.
The program is easy to maintain and debug because it is divided into different sections of
code that work separately.
Programmers can perform a variety of other tasks at the same time, including Editing,
listening to music, compiling, etc.
Data can be shared between different programs at the same time.
Tasks can be subdivided and run on special types of processors. You can then exchange
data via IPC.
IPC is an essential mechanism in the operation of computer systems. It enables different programs to run in
parallel, share data, and communicate with each other. IPC is important for the efficient operation of an
operating system and ensures that tasks run correctly and in the proper order.
1. Message passing
Another important way that inter-process communication takes place is via message passing. When two
or more processes participate in inter-process communication, each process sends messages to the others
via the kernel. Here is an example of sending messages between two processes: the sending process
submits a message, say "M", to the OS kernel; this message is then read by process B. A communication
link is required between the two processes for successful message exchange, and there are several ways
to create these links.
2. Shared memory
Shared memory is a region of memory established by two or more processes and shared between them.
Access to this memory must be synchronized so that the processes protect each other's data. Two
processes, say A and B, can set up a shared memory segment and exchange data through this shared
memory area. Shared memory is important for these reasons −
Suppose process A wants to communicate with process B; both attach the shared memory segment to
their address spaces. Process A writes a message to the shared memory and process B reads that
message from it. The processes are responsible for synchronization, so that both do not write to the
same location at the same time.
3. Pipes
Pipes are a type of data channel commonly used for one-way communication between two processes.
Because a single pipe is a half-duplex channel, one pipe creates a unidirectional data channel and two
pipes are required to achieve full-duplex (bidirectional) communication between the two processes.
Pipes originated on Unix-like operating systems, and named pipes are also available on Windows. As the
diagram shows, one process writes a message into the pipe; the other process retrieves the message and
writes it to the standard output.
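A unidirectional pipe between a parent and child can be sketched with os.pipe and os.fork (POSIX assumed). For full duplex you would create a second pipe carrying data in the opposite direction:

```python
import os

read_end, write_end = os.pipe()      # one pipe = one one-way data channel
pid = os.fork()
if pid == 0:                         # child: the writing side
    os.close(read_end)               # close the end it does not use
    os.write(write_end, b"ping")
    os.close(write_end)
    os._exit(0)
else:                                # parent: the reading side
    os.close(write_end)
    data = os.read(read_end, 4)      # blocks until the child writes
    os.close(read_end)
    os.waitpid(pid, 0)
    print(data)                      # b'ping'
```

Closing the unused ends matters: a reader only sees end-of-file once every write end of the pipe has been closed.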
4. Signal
A signal is a facility that allows processes to communicate with each other: it is a way of telling a
process that it needs to do something. A process can send a signal to another process, and a signal
can also be used to interrupt another process.
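A sketch of one process interrupting another with a signal (POSIX assumed; SIGUSR1 is a user-defined signal, and the handler name is illustrative):

```python
import os
import signal
import time

received = []

def handler(signum, frame):
    received.append(signum)          # runs when the signal is delivered

signal.signal(signal.SIGUSR1, handler)   # install the handler first

pid = os.fork()
if pid == 0:                             # child: interrupt the parent
    os.kill(os.getppid(), signal.SIGUSR1)
    os._exit(0)

os.waitpid(pid, 0)                       # reap the child
deadline = time.time() + 2
while not received and time.time() < deadline:
    time.sleep(0.01)                     # allow the signal to be delivered
print("got signals:", received)
```

The handler is installed before the fork so there is no window in which the child's signal could arrive with the default action still in place.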
Threads
A thread is a flow of execution through the process code, with its own program counter that keeps
track of which instruction to execute next, system registers which hold its current working
variables, and a stack which contains the execution history.
A thread shares some information with its peer threads, such as the code segment, data segment and open
files. When one thread alters a data segment memory item, all other threads see that change.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. Threads represent a software approach to improving operating
system performance by reducing overhead; a thread behaves, in effect, like a lightweight version of a
classical process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been successfully used in implementing
network servers and web servers. They also provide a suitable foundation for parallel execution of
applications on shared-memory multiprocessors. The following figure shows the working of a
single-threaded and a multithreaded process.
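The shared-address-space behavior described above can be sketched with Python's threading module: both threads update the same variable, so access must be synchronized with a lock. This is an illustrative sketch, not a model of the figure:

```python
import threading

counter = 0                          # shared by all threads in the process
lock = threading.Lock()

def work():
    global counter
    for _ in range(100_000):
        with lock:                   # peer threads share data, so updates
            counter += 1             # must be synchronized

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                       # 200000
```

Nothing had to be copied or sent between the threads: both simply see the same `counter`, which is exactly what distinguishes threads from separate processes.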
Difference between Process and Thread:
1. A process is heavyweight and resource-intensive; a thread is lightweight, taking fewer resources
than a process.
2. Process switching needs interaction with the operating system; thread switching does not need to
interact with the operating system.
3. In multiple processing environments, each process executes the same code but has its own memory
and file resources; all threads of a process can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked;
while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multithreaded processes use fewer
resources.
6. In multiple processes, each process operates independently of the others; one thread can read,
write or change another thread's data.
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Communication between threads is efficient.
It is more economical to create and context switch threads than processes.
Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in following two ways −
User Level Threads − User managed threads.
Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.
Advantages of Kernel-Level Threads
Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
Kernel routines themselves can be multithreaded.
Disadvantages of Kernel-Level Threads
Kernel threads are generally slower to create and manage than user threads.
Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility;
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors, and a blocking system call
need not block the entire process. There are three types of multithreading models −
Many-to-many relationship
Many-to-one relationship
One-to-one relationship
Difference between User-Level and Kernel-Level Threads:
User-level threads are faster to create and manage; kernel-level threads are slower to create
and manage.
A user-level thread is generic and can run on any operating system; a kernel-level thread is
specific to the operating system.