Resident Monitor: The main purpose of this operating system was to transfer control from one job to the next as soon as a job completed. It contained a small set of programs, called the resident monitor, that always resided in one part of the main memory.
Disadvantages of Batch OS
1. Starvation
Batch processing suffers from starvation.
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very high, then the
other four jobs will never be executed, or they will have to wait for a very long time. Hence the other processes
get starved.
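The waiting-time effect can be sketched in a few lines of Python; the job lengths below are assumed purely for illustration:

```python
# Sketch (assumed burst times): first-come-first-served batch execution.
# A very long first job delays every job queued behind it.

def waiting_times(burst_times):
    """Return the waiting time of each job when run strictly in order."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # a job waits for everything before it
        elapsed += burst
    return waits

# J1 is very long (1000 time units); J2..J5 are short (10 each).
print(waiting_times([1000, 10, 10, 10, 10]))
# J2..J5 each wait at least 1000 time units behind J1
```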
2. Not Interactive
Batch processing is not suitable for jobs that depend on the user's input. If a job requires two
numbers to be read from the console, it will never get them in a batch-processing scenario, since the user is not
present at the time of execution.
Multiprogramming Operating System
Advantages of Multiprogramming OS
• Throughput increases, as the CPU
always has a program to execute.
• Response time can also be reduced.
Disadvantages of Multiprogramming OS
• Multiprogramming systems provide an environment in
which various system resources are used efficiently,
but they do not provide any user interaction with the
computer system.
Multiprocessing Operating System
In multiprocessing, parallel computing is achieved.
More than one processor is present in the
system, so more than one process can execute at
the same time. This increases the throughput of the
system.
Program Creation
The operating system offers structures and tools, including editors and debuggers, to help the programmer
create, modify, and debug programs.
Error Detection
While working with computers, errors may occur quite often. Errors may occur in the:
• Input/ Output devices: For example, connection failure in the network, lack of paper in the printer, etc.
• User program: For example: attempt to access illegal memory locations, divide by zero, use too much CPU
time, etc.
• Memory hardware: For example, Memory error, the memory becomes full, etc.
Accounting
An operating system collects usage statistics for various resources and tracks performance
parameters such as response time. These records are useful for
further upgrades and for tuning the system to improve overall performance.
Protection and Security
For example, when a user downloads something from the internet, that program may contain malicious code that may harm
the already existing programs. The operating system ensures that proper checks are applied while downloading
such programs.
If one computer system is shared among multiple users, the various processes must be protected from
one another's intrusion. For this, the operating system provides unique user IDs and passwords to each user.
File management
Computers keep data and information on secondary storage devices like magnetic tape, magnetic disk, optical
disk, etc. Each storage medium has its own capabilities, such as speed, capacity, data transfer rate, and data access
method.
Communication
The operating system manages the exchange of data and programs among different computers connected over a network.
Buffering in Operating System
A buffer is an area in main memory used to store or hold data temporarily. In other words, a buffer
temporarily stores data being transferred from one place to another, either between two devices or between a
device and an application. The act of storing data temporarily in a buffer is called buffering.
Purpose of Buffering
• It helps match the speeds of the two devices between which data is transmitted.
• It helps devices with different data-transfer sizes adapt to each other.
• It lets devices manipulate data before sending or receiving it.
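These roles can be illustrated with a small producer-consumer sketch in Python, where a bounded queue stands in for the buffer; the buffer size and item count are assumed:

```python
# Sketch: a bounded buffer lets a fast producer and a slower consumer
# cooperate without either one seeing the other's speed directly.
import queue
import threading

buf = queue.Queue(maxsize=4)        # the buffer: a fixed-size staging area
consumed = []

def producer():
    for item in range(10):          # fast side: generates data quickly
        buf.put(item)               # blocks whenever the buffer is full
    buf.put(None)                   # sentinel: no more data

def consumer():
    while True:
        item = buf.get()            # blocks whenever the buffer is empty
        if item is None:
            break
        consumed.append(item)       # slow side: takes items one at a time

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)                     # all items arrive, in order
```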
Advantages of Buffer
Buffering plays a very important role in any operating system during the execution of any process or task. It
has the following advantages.
• The use of buffers allows uniform disk access. It simplifies system design.
• The system places no data alignment restrictions on user processes doing I/O. By copying data from user
buffers to system buffers and vice versa, the kernel eliminates the need for special alignment of user
buffers, making user programs simpler and more portable.
• The use of the buffer can reduce the amount of disk traffic, thereby increasing overall system throughput
and decreasing response time.
• The buffer algorithms help ensure file system integrity.
Disadvantages of Buffer
Buffers are not better in all respects. Therefore, there are a few disadvantages as follows, such as:
• It is costly and impractical to have the buffer be the exact size required to hold the number of elements.
Thus, the buffer is slightly larger most of the time, with the rest of the space being wasted.
• Buffers have a fixed size at any point in time. When the buffer is full, it must be reallocated with a larger
size, and its elements must be moved. Similarly, when the number of valid elements in the buffer is
significantly smaller than its size, the buffer must be reallocated with a smaller size and elements be
moved to avoid too much waste.
• Use of the buffer requires an extra data copy when reading and writing to and from user processes.
What is Spooling
Spooling is a process in which data is temporarily held to be used and executed by a device,
program, or system. Data is sent to and stored in memory or on disk until the
program or computer requests it for execution.
SPOOL is an acronym for Simultaneous Peripheral Operations On-Line. Generally, the spool is
maintained in the computer's memory or in disk buffers associated with the I/O device.
The spool is processed in ascending order, working on a FIFO (first-in, first-out)
basis.
Spooling refers to putting data of various I/O jobs in a buffer. This buffer is a special area in
memory or hard disk which is accessible to I/O devices. An operating system does the following
activities related to the distributed environment:
• Handles I/O device data spooling as devices have different data access rates.
• Maintains the spooling buffer, which provides a waiting station where data can rest while the
slower device catches up.
• Maintains parallel computation: because of spooling, a computer can perform
I/O in parallel with computation. It becomes possible to have the computer read data from a tape, write data
to disk, and write out to a printer while it is doing its computing task.
Example of Spooling
The biggest example of Spooling is printing. The documents which are to be
printed are stored in the SPOOL and then added to the queue for printing. During
this time, many processes can perform their operations and use the CPU without
waiting while the printer executes the printing process on the documents one-by-
one.
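A rough sketch of this print-spool behaviour in Python, using a plain FIFO queue; the document names are made up:

```python
# Sketch: a print spool as a FIFO queue. Processes submit documents and
# continue immediately; the (slow) printer drains the queue in order.
from collections import deque

spool = deque()                     # FIFO: first document in, first printed

def submit(document):
    spool.append(document)          # enqueue and return at once; no waiting

def printer_drain():
    printed = []
    while spool:
        printed.append(spool.popleft())  # printer works one-by-one, in order
    return printed

for doc in ["report.pdf", "invoice.pdf", "notes.txt"]:
    submit(doc)                     # all three submitted without blocking

printed_docs = printer_drain()
print(printed_docs)                 # printed in submission (FIFO) order
```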
Process Management
A program in execution is called a process. In order to accomplish its task, a process
needs computer resources.
More than one process may exist in the system, and they may require the same
resource at the same time. Therefore, the operating system has to manage all the
processes and resources in a convenient and efficient way.
The operating system maintains the following attributes of each process, stored in
its process control block (PCB):
1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.
2. Program Counter
A program counter stores the address of the next instruction to be executed, saved at the point where the process
was suspended. The CPU uses this address when the execution of the process is resumed.
3. Process State
The process, from its creation to its completion, goes through various states: new,
ready, running, and waiting. We will discuss them in detail later.
4. Priority
Every process has its own priority. The process with the highest
priority among the processes gets the CPU first. This is also
stored on the process control block.
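The four attributes above can be sketched as a small record; the field names are illustrative, not an actual kernel structure:

```python
# Sketch of a process control block holding the attributes described above.
from dataclasses import dataclass

@dataclass
class ProcessControlBlock:
    pid: int              # 1. unique process ID
    program_counter: int  # 2. address of the next instruction to resume at
    state: str            # 3. new / ready / running / waiting / terminated
    priority: int         # 4. used to decide which process gets the CPU first

pcb = ProcessControlBlock(pid=42, program_counter=0x1000,
                          state="ready", priority=5)
print(pcb.pid, pcb.state)
```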
2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be
assigned. The OS picks new processes from the secondary memory and puts them all in the main memory.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the scheduling
algorithm. If we have n processors in the system, then we can have n processes running simultaneously.
4. Block or wait
From the running state, a process can make the transition to the block or wait state, typically when it
requests I/O or another resource, depending on the intrinsic behavior of the process.
5. Completion or termination
When a process finishes its execution, it comes to the termination state. The entire context of the
process (its process control block) is deleted, and the process is terminated by the
operating system.
6. Suspend ready
A process in the ready state which is moved from the main memory to secondary memory due
to a lack of resources (mainly primary memory) is said to be in the suspend ready state.
If the main memory is full and a higher-priority process arrives for execution, the OS
has to make room for it in the main memory by moving a lower-priority
process out to the secondary memory. The suspend ready processes remain in
secondary memory until main memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked
process that is waiting for some resource in the main memory. Since it is already waiting for
a resource to become available, it can just as well wait in the secondary memory and make
room for the higher-priority process. These processes complete their execution once the main
memory becomes available and their wait is finished.
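The transitions between the states above can be sketched as a small state machine; the transition table below is a simplification of what a real scheduler allows:

```python
# Sketch: allowed process-state transitions, as described in the text.
# The table is illustrative, not an exhaustive model of a real kernel.
ALLOWED = {
    "ready":         {"running", "suspend ready"},
    "running":       {"ready", "block/wait", "terminated"},
    "block/wait":    {"ready", "suspend wait"},
    "suspend ready": {"ready"},
    "suspend wait":  {"suspend ready"},
}

def transition(current, target):
    """Move to target state, rejecting transitions the table forbids."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = "ready"
state = transition(state, "running")       # scheduler dispatches the process
state = transition(state, "block/wait")    # process waits for I/O
state = transition(state, "suspend wait")  # swapped out while blocked
print(state)
```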
Operations on the Process
1. Creation
Once the process is created, it comes into the ready queue (main
memory) and waits there for execution.
2. Scheduling
Out of the many processes present in the ready queue, the operating system chooses
one process and starts executing it. Selecting the process to be executed next is
known as scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. The
process may move to the blocked or wait state during execution; in that case, the
processor starts executing other processes.
4. Deletion/killing
Once the purpose of the process is fulfilled, the OS kills the process. The context
of the process (its PCB) is deleted, and the process is terminated by the operating system.
Process Scheduling
Definition
The process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.
Categories of Scheduling
There are two categories of scheduling:
1. Non-preemptive: Here the CPU cannot be taken from a process until the process completes
execution. Switching occurs only when the running process terminates or moves to a
waiting state.
2. Preemptive: Here the OS allocates the CPU to a process for a fixed amount of time. During
execution, a process may be switched from the running state to the ready state, or from the waiting state to the
ready state. This switching occurs because the CPU may be given to a higher-priority process,
which preempts the currently running process.
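The difference between the two categories can be sketched by running the same jobs under a non-preemptive (first-come-first-served) and a preemptive (round-robin) policy; the burst times and time quantum are assumed:

```python
# Sketch: the same three jobs scheduled non-preemptively (FCFS) and
# preemptively (round-robin with a fixed time slice).
from collections import deque

def fcfs(jobs):
    """Non-preemptive: each job keeps the CPU until it finishes."""
    return [name for name, _burst in jobs]   # completion order = arrival order

def round_robin(jobs, quantum):
    """Preemptive: the CPU is taken back after each time slice."""
    ready = deque(jobs)
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining <= quantum:
            finished.append(name)                     # completes in this slice
        else:
            ready.append((name, remaining - quantum)) # preempted, requeued
    return finished

jobs = [("A", 5), ("B", 2), ("C", 9)]
print(fcfs(jobs))            # completion order follows arrival order
print(round_robin(jobs, 3))  # the short job B finishes first
```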
Process Scheduling Queues
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
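The three queues can be sketched as simple FIFO structures; the process IDs and the blocking rule are illustrative:

```python
# Sketch: the job, ready, and device queues as plain FIFO structures.
from collections import deque

job_queue = deque()       # every process in the system
ready_queue = deque()     # in main memory, waiting for the CPU
device_queue = deque()    # blocked on an unavailable I/O device

def admit(pid):
    job_queue.append(pid)
    ready_queue.append(pid)       # a new process is always put here

def block_on_io(pid):
    ready_queue.remove(pid)
    device_queue.append(pid)      # waits here until the device is free

admit(1); admit(2); admit(3)
block_on_io(2)                    # process 2 requests a busy device
print(list(ready_queue), list(device_queue))
```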