An Operating System (OS) is system software that acts as an interface between the computer user and
the computer hardware.
Objectives
• To manage the computer's resources, such as the central processing unit, memory, disk drives,
printers, etc., efficiently.
1. Batch OS
This is the most primitive type of OS. Batch processing required that a program, its
related data, and the relevant control commands be submitted together in the form of a job,
normally on punched cards. The batch monitor automatically batches jobs with similar needs and
executes the batches one by one without user intervention. Thus long jobs (payroll,
forecasting, statistical analysis) with little operator interaction are well serviced by batch
processing. But due to long turnaround delays and the infeasibility of online debugging, it is
not suitable for software development.
Features of batch OS
Scheduling: follows FCFS scheduling, but suffers from long average turnaround time and average
waiting time.
I/O management: since only one batch is under execution at a time, there was no contention for
the allocation of I/O devices, so simple program-controlled I/O was used to access I/O devices.
File management: since there is only one program accessing a file at a time, there was no need
to provide concurrency control.
2. Multiprogramming OS
Multiprogramming systems permit multiple programs to be loaded into memory and executed
concurrently, and thus improve the utilization of system resources. A program in
execution is called a process or a task. A multiprogramming OS will have multitasking capability
with good memory management.
Multi-user OS: a multiprogramming OS that supports simultaneous interaction with multiple users.
Multi-access OS: refers to an OS which permits simultaneous access to a computer system through
multiple terminals, but without multiprogramming (e.g., an airline reservation system).
• Problem of reliability.
• Question of security and integrity of user programs and data.
• Problem of data communication.
• With resource sharing facility, a user at one site may be able to use the resources
available at another.
• Speedup of the exchange of data with one another via electronic mail.
• If one site fails in a distributed system, the remaining sites can potentially continue
operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing.
They are designed to run on handheld machines such as smartphones, tablets, etc. that have
slower processors and less memory, so they were designed to use less memory and require
fewer resources. Handheld operating systems are also designed to work with different types of
hardware than standard desktop operating systems.
Sensos is a Sensor Node Operating System with a Device Management Scheme for Sensor
Nodes.
10.Smart Card Operating System
The smart card operating system (Card OS) is the hardware-specific firmware (firmware is a
specific class of computer software that provides the low-level control for a device's specific
hardware) that provides basic functionality such as secure access to on-card storage,
authentication and encryption.
E.g., MULTOS.
• Memory Management
• Process Management
• Device Management
• File Management
• Security
• Job accounting
Memory Management
Memory management refers to the management of main memory. Main memory provides fast
storage that can be accessed directly by the CPU. For a program to be executed, it must be in main
memory. An Operating System does the following activities for memory management −
• Keeps track of primary memory, i.e., what parts of it are in use and by whom, and what parts are not in use.
• In multiprogramming, the OS decides which process will get memory when space becomes available.
Process Management
A process is an instance of a program in execution. An Operating System does the following
activities for process management −
• Process creation, which involves loading the program from secondary storage into main memory.
Device Management
An Operating System manages device communication via their respective drivers. It does the
following activities for device management −
• Decides which process gets the device when and for how much time.
• De-allocates devices.
File Management
Computers can store information on different media like hard disks, floppy disks, CDs, magnetic tapes, etc.
All these media have different characteristics in terms of physical organization, capacity, access
methods, transfer rate, etc. But for convenient access, the OS provides a uniform logical view of
information storage; this is called a file. Files are normally organized into directories for easy navigation
and usage. An Operating System does the following activities for file management −
Security
− By means of passwords and similar other techniques, preventing unauthorized access to programs and data.
Control over system performance
− Recording delays between requests for a service and the response from the system.
Job accounting
− Keeping track of time and resources used by various jobs and users.
− Production of traces, error messages, and other debugging and error-detecting
aids. Coordination between other software and users
An operating system provides an environment for the execution of user programs. OS provides
certain services to the programs and to the users of those programs for the accessing of system
resources. Some of the common services are:
1. Program Execution
The main purpose of an OS is to provide an efficient and convenient environment for the execution of
programs. So, an OS must provide various functions for loading a program into RAM and executing it.
Each executing program must terminate, either normally or abnormally.
2. I/O Operations
A running program would need I/O operations for reading in input data and for outputting
results. This I/O may be from/to a file or from/to an I/O device. For each device some special functions
may be necessary (such as rewinding a tape, clearing the screen, etc.). All these operations are managed by the OS.
3. File Manipulation
Each executing program would need to create, delete, and manipulate files.
4. Communications
The OS manages inter-process communication between processes executing on the same computer
or running on different computers in a distributed/multiprocessor environment. An OS would provide a
mechanism for this inter-process communication, like mailboxes, shared memory, etc.
5. Error Detection
Errors may occur during the execution of a program: like divide by zero, memory access violation, etc.
The OS should provide for the detection of such errors (or exceptions) and handle recovery (called exception
handling).
6. Resource Allocation
When multiple users are logged onto the system or multiple jobs are running concurrently,
resources need to be shared among them. The OS decides on the allocation of resources:
for example, the CPU scheduling algorithm determines the allocation of the CPU among the concurrent processes.
Similarly, there are routines for the allocation and deallocation of other resources like memory, I/O
devices, files, etc.
An operating system is a construct that allows user application programs to interact
with the system hardware. The operating system by itself does not perform any useful function, but it
provides an environment in which different applications and programs can do useful work.
The operating system can be observed from the point of view of the user or the system. This is
known as the user view and the system view respectively. More details about these are given as follows
−
User View
The user view depends on the system interface that is used by the users. The different types of
user view experiences can be explained as follows −
If the user is using a personal computer, the operating system is largely designed to make the
interaction easy. Some attention is also paid to the performance of the system, but there is no need for
the operating system to worry about resource utilization. This is because the personal computer uses all
the resources available and there is no sharing.
If the user is using a system connected to a mainframe or a minicomputer, the operating system is
largely concerned with resource utilization. This is because there may be multiple terminals connected
to the mainframe and the operating system makes sure that all the resources such as CPU, memory, I/O
devices, etc. are divided fairly among them.
If the user is sitting at a workstation connected to other workstations through networks, then
the operating system needs to focus on both individual usage of resources and sharing through the
network. This happens because the workstation exclusively uses its own resources, but it also needs to
share files etc. with other workstations across the network.
If the user is using a handheld computer such as a mobile, then the operating system handles the
usability of the device including a few remote operations. The battery level of the device is also taken
into account.
There are some devices that offer little or no user view because there is no interaction with
the users. Examples are embedded computers in home devices, automobiles, etc.
System View
According to the computer system, the operating system is the bridge between applications and
hardware. It is most intimate with the hardware and is used to control it as required.
The different types of system view for operating system can be explained as follows:
The system views the operating system as a resource allocator. There are many resources such as
CPU time, memory space, file storage space, I/O devices etc. that are required by processes for
execution. It is the duty of the operating system to allocate these resources judiciously to the processes
so that the computer system can run as smoothly as possible.
The operating system can also work as a control program. It manages all the processes and I/O
devices so that the computer system works smoothly and there are no errors. It makes sure that the I/O
devices work in a proper manner without creating problems.
Operating systems can also be viewed as a way to make using hardware easier. Computers were
required to easily solve user problems. However it is not easy to work directly with the computer
hardware. So, operating systems were developed to easily communicate with the hardware.
An operating system can also be considered as a program running at all times in the background
of a computer system (known as the kernel) and handling all the application programs. This is the
definition of the operating system that is generally followed.
The Operating System as a Resource Manager
Internally an Operating System acts as a manager of the resources of the computer system, such as the
processor, memory, files, and I/O devices. In this role, the operating system keeps track of the status of
each resource, and decides who gets a resource, for how long, and when. In systems that support
concurrent execution of programs, the operating system resolves conflicting requests for resources in a
manner that preserves system integrity, and in doing so attempts to optimize the resulting performance.
The Text section is made up of the compiled program code, read in from non-volatile
storage when the program is launched.
The Data section is made up of the global and static variables, allocated and initialized
prior to executing main.
The Heap is used for the dynamic memory allocation, and is managed via calls to new,
delete, malloc, free, etc.
The Stack is used for local variables. Space on the stack is reserved for local variables
when they are declared.
WAITING- The process is waiting for some event to occur (such as an I/O completion or
the reception of a signal).
TERMINATED- The process has finished execution.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for
every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the
information needed to keep track of a process
There is a Process Control Block for each process, enclosing all the information about the
process. It is a data structure, which contains the following:
Definition
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process on the
basis of a particular strategy. The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization. The objective of time sharing is to switch
the CPU among processes so frequently that users can interact with each program while it is
running. To meet these objectives, the process scheduler selects an available process
(possibly from a set of several available processes) for program execution on the CPU.
The Operating System maintains the following important process scheduling queues −
Job queue − As processes enter the system, they are put into the job queue, which consists of all
processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue. This queue is generally stored
as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list.
Each PCB includes a pointer field that points to the next PCB in the ready queue.
Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
Schedulers
A process migrates among the various scheduling queues throughout its lifetime. The
operating system must select processes, for scheduling purposes, from these queues in some fashion.
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-Term Scheduler
The processes that enter the system are kept in a mass storage device, typically a disk
(called the job pool). The long-term scheduler, or job scheduler, selects processes from this pool and
loads them into memory for execution; that is, it moves jobs from the job queue to the ready queue.
Short-Term Scheduler
The short-term scheduler, or CPU scheduler, selects from among the processes that are
ready to execute and allocates the CPU to one of them; that is, it selects jobs from the ready queue and
allocates the CPU to them.
Medium-term scheduling is a part of swapping. The process is swapped out, and is later
swapped in, by the medium-term scheduler
Comparison between schedulers
Long-term scheduler: speed is lesser than that of the short-term scheduler.
Short-term scheduler: speed is the fastest among the three.
Medium-term scheduler: speed is in between that of the short-term and long-term schedulers.
An I/O-bound process spends more of its time doing I/O than doing computation.
A CPU-bound process generates I/O requests infrequently, using more of its time doing
computation.
It is important that the long-term scheduler select a good process mix of CPU-bound and
I/O-bound processes. If all processes are I/O bound, the ready queue will almost always be empty,
and the short-term scheduler will have little to do. If all processes are CPU bound, the I/O
waiting queue will almost always be empty, devices will go unused, and again the system will be
unbalanced. The system with the best performance will thus have a combination of CPU-bound
and I/O-bound processes
Context Switch
When an interrupt occurs, the system needs to save the current context of the
process running on the CPU so that its execution can be resumed from the same point at a
later time. The context is represented in the PCB of the process. When the scheduler switches
the CPU from executing one process to executing another, a state save of the
current process and a state restore of the new process are required; this task is known as a context switch.
When a context switch occurs, the kernel saves the context of the old process in its PCB and
loads the saved context of the new process scheduled to run.
OPERATIONS ON PROCESS
PROCESS CREATION
PROCESS TERMINATION
PROCESS CREATION
Process creation is the task of creating new processes. There are different ways to
create a new process. A new process can be created at the time of initialization of the operating
system or when system calls such as fork() are issued by other processes. The process which
creates a new process using a system call is called the parent process, while the new process that is
created is called the child process. Child processes can themselves create new processes using system calls.
A new process can also be created by the operating system based on a request received from the
user. Each process is given an integer identifier, termed its process identifier, or PID.
When a process creates a new process, two possibilities for the execution exist:
◦ The parent continues to execute concurrently with its children.
◦ The parent waits until some or all of its children have terminated.
Two possibilities for the address space of the child relative to the parent:
◦ The child may be an exact duplicate of the parent, sharing the same program and data
segments in memory. Each will have their own PCB, including program counter,
registers, and PID. This is the behavior of the fork system call in UNIX.
◦ The child process may have a new program loaded into its address space, with all new
code and data segments.
Process Termination
Processes may request their own termination by making the exit( ) system call, typically
returning an int. This int is passed along to the parent if it is doing a wait( ), and is typically zero
on successful completion and some non-zero code in the event of problems.
Processes may also be terminated by the system for a variety of reasons, including:
◦ A parent may kill its children if the task assigned to them is no longer needed.
◦ If the parent exits, the system may or may not allow the child to continue without a
parent.
When a process terminates, all of its system resources are freed up, open files are flushed
and closed, etc. The process's termination status and execution times are returned to the parent
if the parent is waiting for the child to terminate, or eventually returned to init if the process
becomes an orphan.
Interprocess Communication
Independent processes operating concurrently on a system are those that can neither
affect other processes nor be affected by other processes.
Cooperating processes are those that can affect or be affected by other processes.
◦ Information Sharing - There may be several processes which need access to the same
file for example. ( e.g. pipelines. )
◦ Modularity - The most efficient architecture may be to break a system down into
cooperating modules. ( E.g. databases with a client-server architecture. )
Message-Passing Systems
There are several methods for logically implementing a link and the send and receive
operations:
direct communication
With direct communication, the sender must know the name of the receiver to which it wishes
to send a message, using two primitives. Such a scheme has the following properties:
◦ A link is established automatically between every pair of processes that want to
communicate.
◦ There is exactly one link between every sender-receiver pair.
Indirect communication
◦ Only one process can read any given message in a mailbox. Initially the process that
creates the mailbox is the owner, and is the only one allowed to read mail in the
mailbox, although this privilege may be transferred.
( Of course the process that reads the message can immediately turn around
and place an identical message back in the box for someone else to read, but
that may put it at the back end of a queue of messages. )
◦ The OS must provide system calls to create and delete mailboxes, and to send and
receive messages to/from mailboxes.
Synchronization
Either the sending or the receiving of messages (or neither, or both) may be either blocking or non-
blocking (synchronous or asynchronous).
Blocking send − the sending process is blocked until the message is received by the receiving
process or the mailbox.
Non-blocking send − the sending process sends the message and resumes operation.
Buffering
Messages are passed via queues, which may have one of three capacity configurations:
◦ Zero capacity - Messages cannot be stored in the queue, so senders must block until
receivers accept the messages.
◦ Bounded capacity- There is a certain pre-determined finite capacity in the queue.
Senders must block if the queue is full, until space becomes available in the queue, but
may be either blocking or non-blocking otherwise.
◦ Unbounded capacity - The queue has a theoretical infinite capacity, so senders are
never forced to block.
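A bounded-capacity mailbox with blocking send and blocking receive can be sketched with Python's thread-safe queue module. This is a rough analogy, not an OS-level mailbox; the capacity of one message is an arbitrary choice for the demonstration:

```python
import queue
import threading

def demo_bounded_mailbox():
    """Sender blocks when the mailbox is full; receiver blocks when it is empty."""
    mbox = queue.Queue(maxsize=1)        # bounded capacity: one message
    received = []

    def receiver():
        for _ in range(3):
            received.append(mbox.get())  # blocking receive

    t = threading.Thread(target=receiver)
    t.start()
    for msg in ("a", "b", "c"):
        mbox.put(msg)                    # blocking send: waits while the box is full
    t.join()
    return received
```

With maxsize=0 the same queue becomes an unbounded mailbox, so the sender is never forced to block; zero capacity (a pure rendezvous) has no direct equivalent in this module.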
CPU Scheduling
A scheduling system allows one process to use the CPU while another is waiting for I/O, thereby
making full use of otherwise lost CPU cycles.
CPU Scheduler
Whenever the CPU becomes idle, it is the job of the CPU Scheduler ( the short-term scheduler )
to select another process from the ready queue to run next.
The storage structure for the ready queue and the algorithm used to select the next process are
not necessarily a FIFO queue. There are several alternatives to choose from, as well as
numerous adjustable parameters for each algorithm. CPU-scheduling decisions may take place under the
following four circumstances:
◦ When a process switches from the running state to the waiting state, such as for an I/O
request or invocation of the wait( ) system call.
◦ When a process switches from the running state to the ready state, for example in
response to an interrupt.
◦ When a process switches from the waiting state to the ready state, say at completion of
I/O or a return from wait( ).
◦ When a process terminates.
For conditions 2 and 3 there is a choice - To either continue running the current process, or
select a different one.
If scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive,
or cooperative. Under these conditions, once a process starts running it keeps running, until it
either voluntarily blocks or until it finishes. Otherwise the system is said to be preemptive.
Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the scheduler.
This function involves:
◦ Switching context.
The dispatcher needs to be as fast as possible, as it is run on every context switch. The time
consumed by the dispatcher is known as dispatch latency.
Scheduling Criteria
There are several different criteria to consider when trying to select the "best" scheduling algorithm for
a particular situation and environment, including:
◦ CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU
cycles. On a real system CPU usage should range from 40% ( lightly loaded ) to 90%
( heavily loaded. )
◦ Throughput - Number of processes completed per unit time. May range from 10 /
second to 1 / hour depending on the specific processes.
◦ Turnaround time - Time required for a particular process to complete, from submission
time to completion. ( Wall clock time. )
◦ Waiting time - How much time processes spend in the ready queue waiting their turn to
get on the CPU.
◦ Response time - The time taken in an interactive program from the issuance of a
command to the commence of a response to that command.
The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm.
With this scheme, the process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters
the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to
the process at the head of the queue. The average waiting time under the FCFS policy is often quite
long.
Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in
milliseconds:
Process   Burst Time
P1        24
P2        3
P3        3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown
in the following Gantt chart:
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27
milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
If the processes arrive in the order P2, P3, P1, however, the waiting time is 0 milliseconds for process
P2, 3 milliseconds for process P3, and 6 milliseconds for process P1. Thus, the average waiting time is
(0 + 3 + 6)/3 = 3 milliseconds.
So the average waiting time under FCFS is generally not optimal.
2. It is a non-preemptive algorithm, which means process priority doesn't matter: once the CPU
has been allocated to a process, that process keeps the CPU until it releases it, either by terminating
or by requesting I/O.
3. Resources utilization in parallel is not possible, which leads to Convoy Effect, and hence poor
resource (CPU, I/O etc) utilization.
The convoy effect is a situation where many processes, which need to use a resource only for a short
time, are blocked by one long process holding that resource for a long time.
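The two orderings above can be checked with a short waiting-time calculation. The burst times used here (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms) are recovered from the waiting times quoted in the text:

```python
def fcfs_waits(bursts):
    """Under FCFS, each process waits for the total burst time of those before it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Order P1, P2, P3: waits [0, 24, 27], average 17 ms
avg_1 = sum(fcfs_waits([24, 3, 3])) / 3
# Order P2, P3, P1: waits [0, 3, 6], average 3 ms
avg_2 = sum(fcfs_waits([3, 3, 24])) / 3
```

The same three bursts in a different arrival order change the average waiting time from 17 ms to 3 ms, which is exactly the convoy effect: the long job at the head of the queue delays everyone behind it.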
2. Shortest-Job-First Scheduling
The shortest-job-first (SJF) scheduling algorithm associates with each process the length of the
process's next CPU burst. When the CPU is available, it is assigned to the process that has the
smallest next CPU burst.
If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. It is
also called shortest-next-CPU-burst algorithm, because scheduling depends on the length of the
next CPU burst of a process, rather than its total length.
As an example of SJF scheduling, consider the following set of processes, with the length of the
CPU burst given in milliseconds:
Using SJF scheduling, we would schedule these processes according to the following Gantt chart
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for
process P3, and 0 milliseconds for process P4 . Thus, the average waiting time is (3 + 16 + 9 +
0)/4= 7 milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting
time for a given set of processes. SJF scheduling is used frequently in long- term scheduling.
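The SJF example can be reproduced in a few lines. The burst times used here (P1 = 6, P2 = 8, P3 = 7, P4 = 3 ms) are the standard textbook values consistent with the waiting times quoted above; P2's burst is the one value not directly derivable from them:

```python
def sjf_average_wait(bursts):
    """Non-preemptive SJF with all processes arriving at time 0:
    running jobs shortest-first minimizes the average waiting time."""
    waits, elapsed = [], 0
    for burst in sorted(bursts):   # schedule in order of increasing burst length
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

# Bursts for P1..P4; the schedule order becomes P4, P1, P3, P2.
average = sjf_average_wait([6, 8, 7, 3])   # 7.0 milliseconds
```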
The SJF algorithm can be either preemptive or non preemptive SJF scheduling
As an example, consider the following four processes, with the length of the CPU burst given in
milliseconds:
If the processes arrive at the ready queue at the times shown and need the indicated burst times,
then the resulting preemptive SJF schedule is as shown in the following Gantt chart:
Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time
1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process
P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting
time for this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 milliseconds.
Nonpreemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.
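The preemptive (shortest-remaining-time-first) schedule above can be simulated tick by tick. The arrival times and bursts (P1: arrival 0, burst 8; P2: 1, 4; P3: 2, 9; P4: 3, 5) follow from the numbers quoted in the example:

```python
def srtf_average_wait(procs):
    """procs: list of (arrival, burst). Simulates preemptive SJF one
    millisecond at a time and returns the average waiting time."""
    remaining = {i: burst for i, (_, burst) in enumerate(procs)}
    finish, t = {}, 0
    while remaining:
        ready = [i for i in remaining if procs[i][0] <= t]
        if not ready:
            t += 1                                   # CPU idle until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])   # shortest remaining time first
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            del remaining[i]
            finish[i] = t
    waits = [finish[i] - arr - burst for i, (arr, burst) in enumerate(procs)]
    return sum(waits) / len(waits)

average = srtf_average_wait([(0, 8), (1, 4), (2, 9), (3, 5)])   # 6.5 milliseconds
```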
3.Priority Scheduling
Each process is assigned a priority. The process with the highest priority is executed first, and so on.
Processes with the same priority are executed on a first come, first served basis. Priority can be decided
either internally or externally. Internal priority is based on factors such as memory requirements, time
requirements, or other resource requirements, whereas external priority is based on criteria
outside the system, such as the importance of the process, its type, and funding factors.
As an example, consider the following four processes, with the length of the CPU burst given in
milliseconds and priority is given:
Priority scheduling can be of two types
Preemptive Priority Scheduling: If the new process arrived at the ready queue has a higher
priority than the currently running process, the CPU is preempted, which means the processing
of the current process is stopped and the incoming new process with higher priority gets the
CPU for its execution.
To prevent starvation of any process, we can use the concept of aging, where we keep
increasing the priority of a low-priority process based on its waiting time.
For example, if we decide the aging factor to be 0.5 for each minute of waiting, then if a
process with priority value 7 comes into the ready queue, after 10 minutes of waiting its priority
value is reduced to 2 (a higher priority), and so on. Doing so, we can ensure that no process will
have to wait indefinitely for CPU time.
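The aging arithmetic above, assuming a smaller number means a higher priority, is just a linear decrease of the priority value with waiting time:

```python
def aged_priority(initial_value, waited_minutes, factor=0.5):
    """Aging: the priority value drops (i.e., the priority rises) as the
    process waits; it is clamped at 0, the highest possible priority."""
    return max(0.0, initial_value - factor * waited_minutes)

# A process entering with priority value 7 reaches value 2 after 10 minutes.
p = aged_priority(7, 10)   # 2.0
```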
4.Round-Robin Scheduling
The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is a
preemptive process scheduling algorithm. A fixed time, called the quantum, is allotted to each process
for execution. Once a process has executed for the given time period, it is
preempted and another process executes for its time period. Context switching is used to save the
states of preempted processes. To implement RR scheduling, we keep the ready queue as a
circular FIFO queue.
In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in
a row (unless it is the only runnable process). If a process's CPU burst exceeds 1 time quantum,
that process is preempted and is put back in the ready queue. The RR scheduling algorithm is
thus preemptive. If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units, and each process must wait no
longer than (n − 1) × q time units until its next time quantum.
The performance of the RR algorithm depends heavily on the size of the time quantum. If the
time quantum is extremely large, the RR policy is the same as the FCFS policy. If the time
quantum is extremely small (say, 1 millisecond), the RR approach is called processor sharing
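A round-robin schedule over a circular FIFO ready queue can be sketched as follows. For illustration it reuses the burst times from the earlier FCFS example (24, 3, 3 ms) with an assumed quantum of 4 ms; all processes are taken to arrive at time 0:

```python
from collections import deque

def round_robin_waits(bursts, quantum):
    """Each process runs for at most one quantum, then goes back to the
    tail of the ready queue if it still has CPU burst left."""
    ready = deque((i, burst) for i, burst in enumerate(bursts))
    t, finish = 0, {}
    while ready:
        i, rem = ready.popleft()
        run = min(quantum, rem)
        t += run
        if rem > run:
            ready.append((i, rem - run))   # preempted: back to the tail
        else:
            finish[i] = t
    # waiting time = finish time - burst time (all arrivals at time 0)
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = round_robin_waits([24, 3, 3], quantum=4)   # [6, 4, 7]
```

With this quantum the short jobs finish early (waits of 4 and 7 ms) instead of queueing behind the 24 ms job as they do under FCFS, which is exactly the response-time benefit RR is designed for.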
A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues. The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type.
For example: A common division is made between foreground(or interactive) processes and
background (or batch) processes. These two types of processes have different response-time
requirements, and so might have different scheduling needs. In addition, foreground processes
may have priority over background processes.
Each queue has its own scheduling algorithm. For example, separate queues might be used for
foreground(interactive) and background(batch) processes. The foreground queue might be
scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm
System Processes
Interactive Processes
Batch Processes
Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for
example, could run unless the queues for system processes, interactive processes, and interactive
editing processes were all empty. If an interactive editing process entered the ready queue while a
batch process was running, the batch process will be preempted.
Multilevel feedback queue scheduling, however, allows a process to move between queues. The
idea is to separate processes with different CPU-burst characteristics. If a process uses too much
CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a
lower-priority queue may be moved to a higher-priority queue. This form of aging prevents
starvation.
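A minimal multilevel feedback queue can be sketched as below. The two RR quanta (4 and 8 ms) and the FCFS bottom queue are illustrative assumptions; all processes arrive at time 0 and a running slice is not interrupted by higher queues, which simplifies the scheme described above:

```python
from collections import deque

def mlfq_finish_times(bursts, quanta=(4, 8)):
    """Three queues: RR with quantum 4, RR with quantum 8, then FCFS.
    A process that uses up its full quantum is demoted one level."""
    queues = [deque(), deque(), deque()]
    for i, burst in enumerate(bursts):
        queues[0].append((i, burst))          # everyone starts at the top
    t, finish = 0, {}
    while any(queues):
        level = next(l for l, q in enumerate(queues) if q)  # highest non-empty
        i, rem = queues[level].popleft()
        slice_len = rem if level == 2 else min(quanta[level], rem)
        t += slice_len
        if rem > slice_len:
            queues[level + 1].append((i, rem - slice_len))  # demote CPU hogs
        else:
            finish[i] = t
    return [finish[i] for i in range(len(bursts))]

# A short job (3 ms) finishes in the top queue; a long one (20 ms) sinks down.
times = mlfq_finish_times([3, 20])   # [3, 23]
```

The short job completes within its first quantum and never leaves the top queue, while the long job is demoted twice; adding promotion of long-waiting processes back up would implement the aging described above.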