An operating system (OS) is the program that, after being initially loaded into the
computer by a boot program, manages all of the other application programs in a computer.
An operating system brings powerful benefits to computer software and software
development. Without an operating system, every application would need to
include its own UI, as well as the comprehensive code needed to handle all low-
level functionality of the underlying computer, such as disk storage, network
interfaces, and so on.
Types of Operating System

Batch Operating System
This type of operating system does not interact with the computer directly. There
is an operator which takes similar jobs and groups them into batches. It is the
responsibility of the operator to sort jobs with similar needs.
Advantages of Batch Operating System
Processors of the batch systems know how long a job will take while it is in the
queue.
Multiple users can share the batch systems.
The idle time for the batch system is very low.
It is easy to manage large amounts of work repeatedly.

Disadvantages of Batch Operating System
It is very difficult to guess or know the time required for any job to complete.

Multi-Programming Operating System
Multi-programming can be simply illustrated as more than one program being
present in main memory, any one of which can be kept in execution. This is
basically used for better utilization of resources.
Advantages of Multi-Programming
Operating Systems
Multi-programming increases the throughput of the system.
It helps in reducing the response time.
Disadvantage of Multi-programming
Operating System
There is no facility for user interaction with system resources.
Multi-Processing Operating System
This is a type of operating system in which more than one CPU is used for the
execution of processes. It improves the throughput of the system.
Advantages of Multi-Processing Operating
System
It increases the throughput of the system.
As it has several processors, if one processor fails, the system can proceed with
another processor.
Disadvantage of Multi-Processing
Operating System
Due to the multiple CPUs, the system can be more complex and somewhat
difficult to understand.
Multi-tasking Operating System

Preemptive multitasking
The operating system can initiate a context switch from the running process to
another process. In other words, the operating system allows stopping the
execution of the currently running process and allocating the CPU to some other
process. The OS uses some criteria to decide for how long a process should
execute before allowing another process to use the CPU. The mechanism of
taking control of the CPU from one process and giving it to another process is
called preemption.
Cooperative multitasking
The operating system never initiates context switching from the running process
to another process. A context switch occurs only when the processes voluntarily
yield control periodically or when idle or logically blocked to allow multiple
applications to execute simultaneously. Also, in cooperative multitasking, all the
processes cooperate for the scheduling scheme to work.
Time-Sharing Operating System
Each task is given some time to execute so that all the tasks work smoothly. Each
user gets a share of CPU time as they use a single system. These systems are also
known as multitasking systems. The tasks can be from a single user or from
different users. The time that each task gets to execute is called a quantum. After
this time interval is over, the OS switches to the next task.
Network Operating System
These systems run on a server and provide the capability to manage data, users,
groups, security, applications, and other networking functions. These types of
operating systems allow shared access to files, printers, security, applications, and
other networking functions over a small private network. One more important
aspect of Network Operating Systems is that all the users are well aware of the
underlying configuration, of all other users within the network, their individual
connections, etc. and that’s why these computers are popularly known as tightly
coupled systems.
Real-Time Operating System
These types of OSs serve real-time systems. The time interval required to process
and respond to inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict
like missile systems, air traffic control systems, robots, etc.
Types of Real-time Operating Systems
Hard Real-time Systems
Soft Real-time Systems
Hard Real-Time Systems: These are meant for applications where time constraints
are very strict and even the shortest possible delay is not acceptable. These
systems are built for saving life, like automatic parachutes or airbags, which are
required to be readily available in case of an accident. Virtual memory is rarely
found in these systems.
Soft Real-Time Systems: These OSs are for applications where the time
constraint is less strict.
Difference between Hard Real-time System and
Soft Real-time System

Hard Real-time System:
- The size of the data file is small or medium.
- Response time is in milliseconds.
- Peak load performance should be predictable.
- Safety is critical.
- It is very restrictive.
- In case of an error, the computation is rolled back.

Soft Real-time System:
- The size of the data file is large.
- Response times are higher.
- Peak load can be tolerated.
- Safety is not critical.
- It is less restrictive.
- In case of an error, the computation is rolled back to a previously established
checkpoint.
Hard Real-time System:
- Examples: satellite launch, railway signaling system, etc.
- Guarantees a response within a specific deadline.
- Failure has catastrophic or severe consequences (e.g., loss of life or property
damage).
- Focused on processing critical tasks with high priority.
- Highly predictable, with well-defined and deterministic behavior.

Soft Real-time System:
- Examples: DVD player, telephone switches, electronic games, etc.
- Does not guarantee a response within a specific deadline.
- Failure has minor consequences (e.g., degraded performance or reduced
quality).
- Focused on processing tasks with lower priority.
- Less predictable, with behavior that may vary depending on system load or
conditions.
Advantages of RTOS
Maximum Consumption: Maximum utilization of devices and systems, thus
more output from all the resources.
Task Shifting: The time assigned for shifting tasks in these systems is very short.
For example, in older systems it takes about 10 microseconds to shift from one
task to another, while in the latest systems it takes 3 microseconds.
Focus on Application: The focus is on running applications, with less importance
given to applications that are in the queue.
Real-time operating system in the embedded system: Since the size of programs
is small, an RTOS can also be used in embedded systems, such as in transport and
others.
Error Free: These types of systems are designed to be error-free.
Memory Allocation: Memory allocation is best managed in these types of
systems.
Disadvantages of RTOS
Limited Tasks: Very few tasks run at the same time, and concentration is kept on
a few applications to avoid errors.
Use heavy system resources: Sometimes the system resources are not so good,
and they are expensive as well.
Complex Algorithms: The algorithms are very complex and difficult for the
designer to write.
Device driver and interrupt signals: It needs specific device drivers and interrupt
signals to respond to interrupts as early as possible.
Thread Priority: It is not good to set thread priority, as these systems are less
prone to switching tasks.
Functions of an Operating System

Memory Management
The operating system manages the Primary Memory or Main Memory. Main
memory is made up of a large array of bytes or words, where each byte or word is
assigned a certain address. Main memory is fast storage that can be accessed
directly by the CPU. For a program to be executed, it must first be loaded into
main memory. An operating system manages the allocation and deallocation of
memory to various processes and ensures that one process does not consume the
memory allocated to another process.
An Operating System performs the following activities
for Memory Management:
It keeps track of primary memory, i.e., which bytes of memory are used by which
user program: the memory addresses that have already been allocated and those
that have not yet been used.
In multiprogramming, the OS decides the order in which processes are granted
memory access, and for how long.
It allocates memory to a process when the process requests it and deallocates the
memory when the process has terminated or is performing an I/O operation.
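The bookkeeping described above can be sketched as a small allocator. This is a hypothetical illustration, not a real OS allocator: the `MemoryManager` class, the process IDs, and the naive "bump" allocation (no reuse or compaction) are all assumptions made to keep the example short.

```python
# Hypothetical sketch of OS memory bookkeeping: track which process owns
# which address range, and refuse requests that exceed available memory.
class MemoryManager:
    def __init__(self, size):
        self.size = size
        self.allocations = {}   # pid -> (start_address, length)
        self.next_free = 0      # naive bump allocator: no reuse or compaction

    def allocate(self, pid, length):
        if self.next_free + length > self.size:
            return None                      # not enough free memory
        block = (self.next_free, length)
        self.allocations[pid] = block        # record who owns this range
        self.next_free += length
        return block

    def deallocate(self, pid):
        self.allocations.pop(pid, None)      # release on process termination

mm = MemoryManager(100)
print(mm.allocate("p1", 40))   # → (0, 40)
print(mm.allocate("p2", 70))   # → None, would exceed the 100-byte memory
```

A real OS tracks free blocks and handles fragmentation; the point here is only the allocate/deallocate bookkeeping per process.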
Processor Management
The process of saving the context of one process and loading the context of
another process is known as Context Switching. In simple terms, it is like loading
and unloading the process from the running state to the ready state.
When does Context Switching Happen?
When a high-priority process comes to a ready state (i.e. with higher priority than
the running process)
An Interrupt occurs
User and kernel-mode switch (It is not necessary though)
Preemptive CPU scheduling is used.
Scheduling Algorithms

First Come First Serve (FCFS)
This is the simplest scheduling algorithm, where processes are executed on a
first-come, first-served basis. FCFS is non-preemptive, which means that once a
process starts executing, it continues until it is finished or waiting for I/O. FCFS
automatically executes queued requests and processes in order of their arrival:
processes which request the CPU first get the CPU allocation first. This is
managed with a FIFO queue.
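The FCFS behavior described above can be sketched in a few lines. This is a minimal illustration, assuming all processes arrive at time 0 in queue order; the process names and burst times are made up for the example.

```python
# Minimal FCFS sketch: each entry is (name, burst_time), already in arrival order.
def fcfs_waiting_times(processes):
    waiting, elapsed = {}, 0
    for name, burst in processes:
        waiting[name] = elapsed   # each process waits for every job ahead of it
        elapsed += burst
    return waiting

print(fcfs_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)]))
# → {'P1': 0, 'P2': 24, 'P3': 27}
```

Note how one long job at the head of the queue (P1) forces the short jobs behind it to wait; this is the well-known convoy effect of FCFS.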
Shortest Job First (SJF)
SJF is an algorithm in which the process having the smallest execution time is
chosen for the next execution. This scheduling method can be preemptive or
non-preemptive. It significantly reduces the average waiting time for other
processes awaiting execution.
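The reduction in average waiting time can be seen in a short sketch of the non-preemptive variant, assuming all processes arrive at time 0; the names and burst times are illustrative.

```python
# Minimal non-preemptive SJF sketch: each entry is (name, burst_time).
def sjf_order(processes):
    return sorted(processes, key=lambda p: p[1])  # shortest burst runs first

def average_waiting_time(processes):
    total, elapsed = 0, 0
    for _, burst in sjf_order(processes):
        total += elapsed        # each process waits for the shorter jobs ahead
        elapsed += burst
    return total / len(processes)

procs = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]
print(average_waiting_time(procs))   # → 7.0
```

Running the same four processes under FCFS order would give a higher average wait, which is exactly the advantage SJF claims.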
Round-Robin Scheduling
The name of this algorithm comes from the round-robin principle, where each
person gets an equal share of something in turns. It is the oldest and simplest
scheduling algorithm, and it is mostly used for multitasking.
In round-robin scheduling, each ready task runs turn by turn in a cyclic queue for
a limited time slice. This algorithm also offers starvation-free execution of
processes.
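The cyclic queue and time slice (the quantum mentioned earlier) can be sketched directly. This is a minimal illustration assuming all tasks are ready at time 0; the task names, remaining times, and quantum are made-up values.

```python
from collections import deque

# Minimal round-robin sketch: each entry is (name, remaining_time).
def round_robin(tasks, quantum):
    queue = deque(tasks)        # the cyclic ready queue
    schedule = []               # (task, slice length) in execution order
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        schedule.append((name, run))
        if remaining > run:
            queue.append((name, remaining - run))  # unfinished task goes to the back
    return schedule

print(round_robin([("P1", 5), ("P2", 3)], quantum=2))
# → [('P1', 2), ('P2', 2), ('P1', 2), ('P2', 1), ('P1', 1)]
```

Because every task rejoins the back of the queue after its slice, no task can be postponed forever, which is why round-robin is starvation-free.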
Priority Scheduling
In priority scheduling, the tasks are assigned priorities. Sometimes it is important
to run a task with a higher priority before another, lower-priority task, even if the
lower-priority task is still running. The lower-priority task is put on hold for some
time and resumes when the higher-priority task finishes its execution.
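The hold-and-resume behavior described above (the preemptive variant) can be simulated one time unit at a time. This is a hypothetical sketch: the job tuples, the names, and the convention that a lower number means higher priority are all assumptions for the example.

```python
# Hypothetical preemptive priority sketch, simulated one time unit per tick.
# Jobs are (name, arrival, burst, priority); lower priority number runs first.
def preemptive_priority(jobs):
    time = 0
    remaining = {name: burst for name, _, burst, _ in jobs}
    finished = []
    while remaining:
        # jobs that have arrived and still have work left
        ready = [j for j in jobs if j[1] <= time and j[0] in remaining]
        if not ready:
            time += 1
            continue
        name = min(ready, key=lambda j: j[3])[0]  # highest priority runs this tick
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            finished.append((name, time))  # (job, completion time)
    return finished

# "B" (priority 1) arrives at t=1 and preempts "A" (priority 2)
print(preemptive_priority([("A", 0, 3, 2), ("B", 1, 2, 1)]))
# → [('B', 3), ('A', 5)]
```

In the example, A runs for one tick, is preempted when B arrives, and only resumes after B completes, matching the hold-and-resume behavior in the text.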
Non-Preemptive Scheduling
In this type of scheduling method, the CPU is allocated to a specific process. The
process that keeps the CPU busy will release the CPU either by context switching
or terminating. It is the only method that can be used for various hardware
platforms, because it does not need special hardware (for example, a timer) like
preemptive scheduling does.
Disadvantages of Priority Scheduling
If the system eventually crashes, all low-priority processes get lost.
If high-priority processes take up a lot of CPU time, then lower-priority processes
may starve and be postponed for an indefinite time.
This scheduling algorithm may leave some low-priority processes waiting
indefinitely. A process is blocked when it is ready to run but has to wait for the
CPU because some other process is running currently. If new higher-priority
processes keep arriving in the ready queue, then a process in the waiting state
may need to wait for a long duration of time.
Multilevel Queue Algorithm
The multilevel queue algorithm divides the ready queue into multiple levels or
tiers, each with a different priority. The appropriate level is then assigned to
processes based on their characteristics, such as priority, memory requirements,
and CPU usage.
Multilevel queue scheduling is a method of organizing the tasks or processes that
a computer must perform. In this method, the computer system divides tasks or
processes into different queues based on their priority. A task's priority can be
determined by factors such as memory capacity, process priority, or type.
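A strict fixed-priority version of this idea can be sketched as follows. The three queue names (system, interactive, batch), the jobs, and the "drain higher queues first" policy are illustrative assumptions, not a description of a real OS scheduler.

```python
from collections import deque

# Hypothetical fixed-priority multilevel queue sketch: three queues with
# strictly decreasing priority; a lower queue runs only when all higher
# queues are empty.
def multilevel_queue(system_jobs, interactive_jobs, batch_jobs):
    order = []
    for queue in (deque(system_jobs), deque(interactive_jobs), deque(batch_jobs)):
        while queue:                      # drain higher-priority queues first
            order.append(queue.popleft())
    return order

print(multilevel_queue(["pager"], ["editor", "shell"], ["report"]))
# → ['pager', 'editor', 'shell', 'report']
```

Notice that the batch job runs only after both higher queues are empty; with a steady stream of higher-priority work it would never run, which is exactly the starvation drawback discussed below.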
Advantages of Multilevel Queue
Algorithm
Efficient Resource Utilization: Multilevel queue scheduling allows the system to
allocate resources more efficiently by grouping processes with similar resource
requirements into separate queues.
Improved Response Time: By assigning higher priority to interactive processes
that require a fast response time, multilevel queue scheduling can reduce response
time and improve system performance.
Better Throughput: Multilevel queue scheduling can improve the overall
throughput of the system by executing processes more efficiently. The system can
execute multiple processes concurrently from different queues, improving the
overall efficiency of the system.
Flexibility: Multilevel queue scheduling can be customized to suit different types
of applications or workloads by adjusting the priority levels of the queues. This
allows the system to adapt to changing workload demands, ensuring that
resources are allocated efficiently.
Fairness: Multilevel queue scheduling can provide a fair allocation of CPU time
to all processes by ensuring that each queue is executed in turn. This can prevent
any single process from monopolizing the CPU and starving other processes of
CPU time.
Disadvantages of Multilevel Queue
Algorithm
Complexity: Multilevel queue scheduling is a relatively complex scheduling
algorithm that can be difficult to implement and manage. The system needs to
maintain multiple separate queues with different priority levels, which can be
challenging to maintain.
Overhead: Dividing the ready queue into multiple queues can increase the
overhead associated with the scheduling algorithm, which can negatively impact
system performance.
Inflexibility: The fixed priority levels of multilevel queue scheduling may not be
suitable for all types of applications or workloads. Some applications may require
more or less CPU time than the priority level assigned to them, which can result
in inefficient resource allocation.
Starvation: If a queue has a large number of processes with a higher priority than
those in other queues, the processes in the lower-priority queues may be starved
of CPU time.
File Management
A file system is organized into directories for efficient or easy navigation and
usage. These directories may contain other directories and other files. An
Operating System carries out the following file management activities. It keeps
track of where information is stored, user access settings, the status of every file,
and more. These facilities are collectively known as the file system. An OS keeps
track of information regarding the creation, deletion, transfer, copy, and storage of
files in an organized way. It also maintains the integrity of the data stored in these
files, including the file directory structure, by protecting against unauthorized
access.
User Interface or Command Interpreter
The user interacts with the computer system through the operating system. Hence,
the OS acts as an interface between the user and the computer hardware. This
user interface is offered through a set of commands or a graphical user interface
(GUI). Through this interface, the user interacts with the applications and the
machine hardware.
Job Accounting
The operating system keeps track of the time and resources used by various tasks
and users; this information can be used to track resource usage for a particular
user or group of users. In a multitasking OS, where multiple programs run
simultaneously, the OS determines which applications should run, in which order,
and how much time should be allocated to each application.
Error-Detecting Aids
The operating system constantly monitors the system to detect errors and avoid
malfunctioning of the computer system. From time to time, the operating system
checks the system for any external threat or malicious software activity. It also
checks the hardware for any type of damage. This process displays several alerts
to the user so that appropriate action can be taken against any damage caused to
the system.
Coordination Between Other Software and
Users
Operating systems also coordinate and assign interpreters, compilers, assemblers,
and other software to the various users of the computer systems. In simpler terms,
think of the operating system as the traffic cop of your computer. It directs and
manages how different software programs can share your computer’s resources
without causing chaos. It ensures that when you want to use a program, it runs
smoothly without crashing or causing problems for others. So, it’s like the
friendly officer ensuring a smooth flow of traffic on a busy road, making sure
everyone gets where they need to go without any accidents or jams.
Performs Basic Computer Tasks
The management of various peripheral devices such as the mouse, keyboard, and
printer is carried out by the operating system. Today most operating systems are
plug-and-play. These operating systems automatically recognize and configure the
devices with no user interference.
Network Management
Network management is the process of orchestrating network traffic and data
flow across the enterprise ecosystem using network monitoring, network security,
network automation, and other tools hosted on-premises or in the cloud.
There are five types of network management that look after the entire spectrum
of network-related processes: Network Fault Management; Network
Configuration Management; Network Accounting and Utilization Management;
Network Performance Management; and Network Security Management.
Network Fault Management
You can have a designated network fault management team to anticipate, detect,
and resolve network faults to minimize downtime. In addition to fault resolution,
this function is responsible for logging fault information, maintaining records,
conducting analysis, and aiding in regular audits.
There need to be clear channels so that the network fault management team can
report back to the network administrator to maintain transparency. The team will
also work closely with end users in case they report faults.
Network Security Management
As most enterprise processes move online, network security is vital for resilience,
risk management, and success. For example, in a 2020 Telia Carrier survey, 68%
of enterprises reported facing a distributed denial of service (DDoS) attack in the
previous year. In a DDoS attack, multiple connected online devices target an
enterprise website with fake traffic to block legitimate traffic. Network security
management
involves protecting a system against these and other issues. An enterprise network
also generates a regular stream of logs analyzed by the network security
management team to find any threat fingerprints.