
Operating Systems

WHAT IS AN OPERATING SYSTEM?


TYPES OF OPERATING SYSTEMS
CORE TASKS OF AN OPERATING SYSTEM
COMMON OPERATING SYSTEMS
Operating System

 Is the program that, after being initially loaded into the computer by a boot
program, manages all of the other application programs in a computer.
 An operating system brings powerful benefits to computer software and software
development. Without an operating system, every application would need to
include its own UI, as well as the comprehensive code needed to handle all low-
level functionality of the underlying computer, such as disk storage, network
interfaces, and so on.
Types of Operating System

 Batch Operating System


 Multi-Programming System
 Multi-Processing System
 Multi-Tasking Operating System
 Time-Sharing Operating System
 Distributed Operating System
 Network Operating System
 Real-Time Operating System
Batch Operating System

 This type of operating system does not interact with the computer directly. An
operator takes similar jobs and groups them into batches. It is the
responsibility of the operator to sort jobs with similar needs.
Advantages of Batch Operating System

 It is very difficult to guess or know the time required for any job to complete, but
the processors of batch systems know how long a job will take once it is in the
queue.
 Multiple users can share the batch systems.
 The idle time for the batch system is very low.
 It is easy to manage large amounts of work repeatedly in batch systems.
Disadvantages of Batch Operating System

 The computer operators should be well versed with batch systems.


 Batch Systems are hard to debug.
 It is sometimes costly.
 The other jobs will have to wait for an unknown time if any job fails.
Multi-Programming Operating System

 Can be simply described as a system in which more than one program is present in the main
memory and any one of them can be kept in execution. This is basically used for
better utilization of resources.
Advantages of Multi-Programming
Operating Systems
 Multi programming increases the throughput of the system.
 It helps in reducing the response time.
Disadvantage of Multi-programming
Operating System
 There is no facility for user interaction with the system.
Multi-Processing Operating System

 Is a type of operating system in which more than one CPU is used for the
execution of processes. It improves the throughput of the system.
Advantages of Multi-Processing Operating
System
 It increases the throughput of the system.
 As it has several processors, if one processor fails, we can proceed with
another processor.
Disadvantage of Multi-Processing
Operating System
 Due to the multiple CPUs, it can be more complex and somewhat difficult to
understand.
Multi-tasking Operating System

 Is simply a multiprogramming operating system with the added facility of a
round-robin scheduling algorithm. It can run multiple programs simultaneously.
 There are two types of Multi-Tasking Systems:
 Preemptive Multi-Tasking
 Cooperative Multi-Tasking
Preemptive Multitasking

 The operating system can initiate a context switch from the running process to
another process. In other words, the operating system allows stopping the
execution of the currently running process and allocating the CPU to some other
process. The OS uses some criteria to decide for how long a process should
execute before allowing another process to use the CPU. The mechanism of
taking control of the CPU from one process and giving it to another process is
known as preemption.
Cooperative multitasking

 The operating system never initiates context switching from the running process
to another process. A context switch occurs only when the processes voluntarily
yield control periodically or when idle or logically blocked to allow multiple
applications to execute simultaneously. Also, in cooperative multitasking, all the
processes must cooperate for the scheduling scheme to work.
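 To make the contrast concrete, here is a minimal Python sketch (purely illustrative, not how any real OS is implemented) of cooperative multitasking: each task is a generator that voluntarily yields control, and a simple scheduler resumes the tasks in turn. The task and run_cooperatively names are invented for this example.

# Toy model of cooperative multitasking: tasks are generators that
# voluntarily yield control back to the scheduler (illustrative only).
from collections import deque

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i + 1}")
        yield  # the task voluntarily gives up control here

def run_cooperatively(tasks):
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # run until the task yields
            ready.append(current)  # re-queue it for another turn
        except StopIteration:
            pass                   # task finished, drop it

run_cooperatively([task("A", 2), task("B", 3)])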
Difference between preemptive multitasking and
cooperative multitasking

 Preemptive: Preemptive multitasking is a technique used by the OS to decide for how long a task should be executed before allowing another task to use the OS.
 Cooperative: Cooperative multitasking is a type of computer multitasking in which the operating system never initiates a context switch from a running process to another process.
 Preemptive: The OS interrupts applications and gives control to other processes outside the application’s control.
 Cooperative: In cooperative multitasking, the process scheduler never interrupts a process unexpectedly.
Difference between preemptive multitasking and
cooperative multitasking

 Preemptive: The operating system can initiate a context switch from a running process to another process.
 Cooperative: The operating system does not initiate a context switch from a running process to another process.
 Preemptive: If a malicious program initiates an infinite loop, it only hurts itself without affecting other programs or threads.
 Cooperative: A malicious program can bring the entire system to a halt by busy waiting or running an infinite loop and not giving up control.
Difference between preemptive multitasking and
cooperative multitasking

 Preemptive: Preemptive multitasking forces applications to share the CPU whether they want to or not.
 Cooperative: In cooperative multitasking, all programs must cooperate for it to work; if one program does not cooperate, it can hog the CPU.
 Preemptive: UNIX, Windows 95, and Windows NT are examples of operating systems that use preemptive multitasking.
 Cooperative: Macintosh OS versions 8.0-9.2.2 and Windows 3.x are examples of operating systems that use cooperative multitasking.
Advantages of Multi-tasking Operating
System
 Multiple programs can be executed simultaneously in a multi-tasking operating
system
 It comes with proper memory management
Disadvantage of multi-tasking operating
system
 The system may get heated when multiple heavy programs are run at the same time.
Time-Sharing Operating System

 Each task is given some time to execute so that all the tasks work smoothly. Each
user gets the time of the CPU as they use a single system. These systems are also
known as multitasking systems. The task can be from a single user or different
users also. The time that each task gets to execute is called a quantum. After this
time interval is over, the OS switches over to the next task.
Advantages of Time-Sharing OS

 Each task gets an equal opportunity.


 Fewer chances of duplication of software.
 CPU idle time can be reduced.
 Resource sharing: time-sharing systems allow multiple users to share hardware
resources such as the CPU, memory, and peripherals, reducing the cost of
hardware and increasing efficiency.
 Improved Productivity: time-sharing allows users to work concurrently; the
increased productivity translates to more work getting done in less time.
 Improved User Experience: Time-sharing provides an interactive environment
that allows users to communicate with the computer in real time, providing a
better user experience than batch processing.
Disadvantage of Time-Sharing OS
 Reliability problem.
 One must have to take care of the security and integrity of user programs and
data.
 Data communication problems
 High Overhead: time-sharing systems have a higher overhead than other operating
systems due to the need for scheduling, context switching, and other overheads
that come with supporting multiple users.
 Complexity: time-sharing systems are complex and require advanced software to
manage multiple users simultaneously. This complexity increases the chance of
bugs and errors.
 Security Risks: with multiple users sharing resources, the risk of security breaches
increases. Time-sharing systems require careful management of user access,
authentication, and authorization to ensure the security of data and software.
Distributed Operating System

 These types of operating systems are a recent advancement in the world of computer
technology and are being widely accepted all over the world, and at a great pace.
Various autonomous interconnected computers communicate with each other using
a shared communication network. Independent systems possess their own memory
unit and CPU. These are referred to as loosely coupled systems or distributed
systems. These systems’ processors differ in size and function. The major benefit
of working with these types of operating systems is that a user can always access
files or software that are not actually present on his own system but on some other
system connected within the network, i.e., remote access is enabled within the
devices connected in that network.
Advantages of Distributed Operating
System
 Failure of one will not affect the other network communication, as all systems are
independent of each other.
 Electronic mail increases the data exchange speed.
 Since resources are being shared, computation is highly fast and durable.
 Load on host computer reduces.
 These systems are easily scalable as many systems can be easily added to the
network.
 Delay in data processing reduces.
Disadvantage of Distributed Operating
System
 Failure of the main network will stop the entire communication.
 The languages used to establish distributed systems are not well-defined yet.
 These types of systems are not readily available, as they are very expensive. Not
only that, the underlying software is highly complex and not yet well understood.
Network Operating System

 These systems run on a server and provide the capability to manage data, users,
groups, security, applications, and other networking functions. These types of
operating systems allow shared access to files, printers, security, applications, and
other networking functions over a small private network. One more important
aspect of Network Operating Systems is that all the users are well aware of the
underlying configuration, of all other users within the network, their individual
connections, etc. and that’s why these computers are popularly known as tightly
coupled systems.
Advantages of Network Operating System

 Highly stable centralized servers.


 Security concerns are handled through servers.
 New technologies and hardware up-gradation are easily integrated into the
system.
 Server access is possible remotely from different locations and types of systems.
Disadvantage of Network Operating
System
 Servers are costly.
 User has to depend on a central location for most operations.
 Maintenance and updates are required regularly.
Real-time Operating System

 These types of OSs serve real-time systems. The time interval required to process
and respond to inputs is very small. This time interval is called response time.
 Real-time systems are used when there are time requirements that are very strict
like missile systems, air traffic control systems, robots, etc.
 Types of Real-time Operating Systems
 Hard Real-time Systems
 Soft Real-time Systems
Types of Real-time Operating System

 Hard Real-Time Systems: are meant for applications where time constraints are
very strict and even the shortest possible delay is not acceptable. These systems
are built for life-saving uses, like automatic parachutes or airbags, which are
required to be readily available in case of an accident. Virtual memory is rarely
found in these systems.
 Soft Real-Time Systems - These OSs are for applications where time-constraint
is less strict.
Difference between Hard Real-time System and
Soft Real-time System

 Hard: In a hard real-time system, the size of the data file is small or medium.
 Soft: In a soft real-time system, the size of the data file is large.
 Hard: The response time is in milliseconds.
 Soft: The response time is higher.
 Hard: Peak load performance should be predictable.
 Soft: Peak load can be tolerated.
 Hard: Safety is critical.
 Soft: Safety is not critical.
 Hard: A hard real-time system is very restrictive.
 Soft: A soft real-time system is less restrictive.
 Hard: In case of an error, the computation is rolled back.
 Soft: In case of an error, the computation is rolled back to a previously established checkpoint.
Difference between Hard Real-time System and
Soft Real-time System

 Hard: Examples include satellite launch and railway signaling systems.
 Soft: Examples include DVD players, telephone switches, and electronic games.
 Hard: Guarantees a response within a specific deadline.
 Soft: Does not guarantee a response within a specific deadline.
 Hard: Missing a deadline has catastrophic or severe consequences (e.g., loss of life or property damage).
 Soft: Missing a deadline has minor consequences (e.g., degraded performance or reduced quality).
 Hard: Focused on processing critical tasks with high priority.
 Soft: Focused on processing tasks with lower priority.
 Hard: Highly predictable, with well-defined and deterministic behavior.
 Soft: Less predictable, with behavior that may vary depending on system load or conditions.
Advantages of RTOS
 Maximum Consumption: Maximum utilization of devices and systems, thus
more output from all the resources.
 Task Shifting: The time assigned for shifting tasks in these systems is very short.
For example, in older systems it takes about 10 microseconds to shift from
one task to another, while in the latest systems it takes 3 microseconds.
 Focus on Application: Focus on running applications and less importance on
applications that are in the queue.
 Real-time operating system in the embedded system: Since the size of programs
is small, RTOS can also be used in embedded systems like in transport and others.
 Error Free: These types of systems are error-free.
 Memory Allocation: Memory allocation is best managed in these types of
systems.
Disadvantages of RTOS

 Limited Tasks: Very few tasks run at the same time, and the system concentrates
on only a few applications at a time to avoid errors.
 Use heavy system resources: Sometimes the system resources are not so good
and they are expensive as well.
 Complex Algorithms: The algorithms are very complex and difficult for the
designer to write.
 Device driver and interrupt signals: It needs specific device drivers and
interrupt signals to respond to interrupts as quickly as possible.
 Thread Priority: It is not good to set thread priority as these systems are much
less prone to switching tasks.
Functions of an Operating System

 Memory Management
 Processor Management
 Device Management
 File Management
 User Interface or Command Interpreter
 Perform Basic Computer Tasks
 Booting the Computer
 Network Management
 Security
 Control Over System Performance
 Job Accounting
 Error-Detecting Aids
 Coordination between other Software and Users
Memory Management

 The operating system manages the Primary Memory or Main Memory. Main
memory is made up of a large array of bytes or words where each byte or word is
assigned a certain address. Main memory is fast storage and it can be accessed
directly by the CPU. For a program to be executed, it should be first loaded in the
main memory. An operating system manages the allocation and deallocation of
memory to various processes and ensures that one process does not consume
the memory allocated to another process.
An Operating System performs the following activities
for Memory Management:

 It keeps track of primary memory, i.e., which bytes of memory are used by which
user program, which memory addresses have already been allocated, and which
have not yet been used.
 In multiprogramming, the OS decides the order in which processes are granted
memory access, and for how long.
 It allocates memory to a process when the process requests it and deallocates
the memory when the process has terminated or is performing an I/O operation.
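 As a simplified sketch of the bookkeeping described above (invented block sizes and process names; real memory managers are far more elaborate), the following Python snippet tracks free address ranges and allocated blocks using a first-fit policy.

# Simplified first-fit allocator: tracks free address ranges and which
# process owns each allocated block (illustrative bookkeeping only).
free_blocks = [(0, 1024)]          # list of (start_address, size)
allocations = {}                   # pid -> (start_address, size)

def allocate(pid, size):
    for i, (start, block_size) in enumerate(free_blocks):
        if block_size >= size:
            allocations[pid] = (start, size)
            leftover = block_size - size
            if leftover:
                free_blocks[i] = (start + size, leftover)
            else:
                free_blocks.pop(i)
            return start
    return None                    # no free block is large enough

def deallocate(pid):
    start, size = allocations.pop(pid)
    free_blocks.append((start, size))   # a real allocator would also merge blocks

print(allocate("P1", 300))   # 0
print(allocate("P2", 200))   # 300
deallocate("P1")
print(allocate("P3", 100))   # 500 (first fit uses the first free block found, not the freed one at 0)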
Processor Management

 In a multi-programming environment, the OS decides the order in which


processes have access to the processor, and how much processing time each
process has.
 An operating system manages the processor’s work by allocating various jobs to it
and ensuring that each process receives enough time from the processor to
function properly.
 Keeps track of the status of processes. The program which performs this task is
known as a traffic controller. Allocates the CPU (the processor) to a process and
de-allocates the processor when a process is no longer required.
State of Process

1. New: Newly Created Process (or) being-created process.


2. Ready: After the creation process moves to the Ready state, i.e. the process is
ready for execution.
3. Run: Currently running process in CPU (only one process at a time can be under
execution in a single processor)
4. Wait (or Block): When a process requests I/O access.
5. Complete (or Terminated): The process completed its execution.
6. Suspended Ready: When the ready queue becomes full, some processes are
moved to a suspended ready state
7. Suspended Block: When the waiting queue becomes full.
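 A minimal Python sketch of these states and a few legal transitions (the names and the transition table below are simplified assumptions; a real OS records the state inside each process control block):

# Toy process-state model based on the states listed above.
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUN = auto()
    WAIT = auto()
    COMPLETE = auto()
    SUSPENDED_READY = auto()
    SUSPENDED_BLOCK = auto()

# Allowed transitions (simplified): e.g. a running process that requests
# I/O moves to WAIT; when the I/O completes it becomes READY again.
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUN, State.SUSPENDED_READY},
    State.RUN: {State.WAIT, State.READY, State.COMPLETE},
    State.WAIT: {State.READY, State.SUSPENDED_BLOCK},
}

def move(current, target):
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

s = move(State.NEW, State.READY)
s = move(s, State.RUN)
s = move(s, State.WAIT)      # the process requested I/O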
Context Switching

 The process of saving the context of one process and loading the context of
another process is known as Context Switching. In simple terms, it is like loading
and unloading the process from the running state to the ready state.
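 The toy Python sketch below illustrates the idea (the PCB and CPU structures are invented simplifications, not any real kernel layout): the scheduler saves the running process's program counter and registers into its process control block, then loads the saved context of the next process.

# Toy context switch: save the running process's context into its PCB,
# then restore the next process's saved context (illustrative only).
from dataclasses import dataclass, field

@dataclass
class PCB:                     # process control block
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

@dataclass
class CPU:
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, old, new):
    old.program_counter = cpu.program_counter   # save the outgoing context
    old.registers = dict(cpu.registers)
    cpu.program_counter = new.program_counter   # load the incoming context
    cpu.registers = dict(new.registers)

cpu = CPU(program_counter=42, registers={"r0": 7})
p1, p2 = PCB(pid=1), PCB(pid=2, program_counter=100)
context_switch(cpu, p1, p2)    # p1 is now "ready", p2 is "running"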
When does Context Switching Happen?

 When a high-priority process comes to a ready state (i.e. with higher priority than
the running process)
 An Interrupt occurs
 User and kernel-mode switch (It is not necessary though)
 Preemptive CPU scheduling is used.
Scheduling Algorithms

 First-Come, First-Served (FCFS)


 Shortest Job First (SJF)
 Round Robin (RR)
 Priority Scheduling
 Multilevel Queue
First-Come, First-Served (FCFS)

 This is the simplest scheduling algorithm, where the process is executed on a first-
come, first-served basis. FCFS is non-preemptive, which means that once a
process starts executing, it continues until it is finished or waiting for I/O.
 is an operating system scheduling algorithm that automatically executes queued
requests and processes in order of their arrival. It is the easiest and simplest CPU
scheduling algorithm. In this type of algorithm, processes that request the CPU
first get the CPU allocation first. This is managed with a FIFO queue. The full
form of FCFS is First Come First Serve.
Characteristics of FCFS Method

 It is a non-preemptive scheduling algorithm.


 Jobs are always executed on a first-come first-serve basis.
 It is easy to implement and use.
 This method is poor in performance, and the general wait time is quite high.
How to compute FCFS?
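 One way to compute FCFS times is sketched below in Python, using invented arrival and burst times: processes run in arrival order, and each process's waiting and turnaround times are derived from its start and completion times.

# FCFS: processes run in order of arrival; waiting time = start - arrival.
processes = [                     # (name, arrival_time, burst_time) - example data
    ("P1", 0, 5),
    ("P2", 1, 3),
    ("P3", 2, 8),
]

time = 0
for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
    start = max(time, arrival)            # CPU may sit idle until the job arrives
    completion = start + burst
    turnaround = completion - arrival
    waiting = turnaround - burst
    print(f"{name}: waiting={waiting}, turnaround={turnaround}")
    time = completion
# Output: P1 waits 0, P2 waits 4, P3 waits 6 time units.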
Advantages of FCFS

 The simplest form of a CPU Scheduling Algorithm


 Easy to Program
 First come First Served
Disadvantages of FCFS

 It is a Non-Preemptive CPU scheduling algorithm, so after the process has been


allocated to the CPU, it will never release the CPU until it finishes executing.
 The Average Waiting Time is high.
 Short processes that are at the back of the queue have to wait for the long process
at the front to finish.
 Not an ideal technique for time-sharing systems.
 Because of its simplicity, FCFS is not very efficient.
Shortest Job First (SJF)

 is an algorithm in which the process having the smallest execution time is chosen
for the next execution. This scheduling method can be preemptive or non-
preemptive. It significantly reduces the average waiting time for other processes
awaiting execution.
Characteristics of SJF Scheduling

 Each job is associated with a unit of time needed to complete it.


 This algorithm method is helpful for batch-type processing, where waiting for
jobs to complete is not critical.
 It can improve process throughput by making sure that shorter jobs are executed
first, and hence possibly have a short turnaround time.
 It improves job output by running shorter jobs first, which mostly have a shorter
turnaround time.
Types of SJF

 Non-Preemptive - In non-preemptive scheduling, once the CPU cycle is allocated
to a process, the process holds it till it reaches a waiting state or terminates.
 Preemptive SJF - In preemptive SJF scheduling, jobs are put into the ready
queue as they arrive. The process with the shortest burst time begins execution. If a
process with an even shorter burst time arrives, the current process is removed or
preempted from execution, and the shorter job is allocated the CPU.
How to compute SJF?
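 A possible way to compute non-preemptive SJF is sketched below in Python (arrival and burst times are invented): among the jobs that have already arrived, the scheduler always picks the one with the shortest burst time next.

# Non-preemptive SJF: among the jobs that have arrived, always pick the
# one with the shortest burst time next (example data is invented).
processes = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]

time, done = 0, []
remaining = list(processes)
while remaining:
    ready = [p for p in remaining if p[1] <= time]
    if not ready:                        # CPU idle until the next arrival
        time = min(p[1] for p in remaining)
        continue
    name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest job first
    remaining.remove((name, arrival, burst))
    time += burst
    waiting = time - arrival - burst
    done.append((name, waiting))

print(done)   # [('P1', 0), ('P3', 3), ('P2', 6), ('P4', 7)]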
Advantages of SJF

 SJF is frequently used for long term scheduling.


 It reduces the average waiting time over FIFO (First in First Out) algorithm.
 SJF method gives the lowest average waiting time for a specific set of processes.
 It is appropriate for the jobs running in batch, where run times are known in
advance.
 For the batch system of long-term scheduling, a burst time estimate can be
obtained from the job description.
 For Short-Term Scheduling, we need to predict the value of the next burst time.
 Probably optimal with regard to average turnaround time.
Disadvantage of SJF

 Job completion time must be known earlier, but it is hard to predict.


 It is often used in a batch system for long term scheduling.
 SJF can’t easily be implemented for short-term CPU scheduling, because there is
no specific method to predict the length of the upcoming CPU burst.
 This algorithm may cause very long turnaround times or starvation.
 Requires knowledge of how long a process or job will run.
 It can lead to starvation of longer jobs, which increases their turnaround time.
 It is hard to know the length of the upcoming CPU request.
 Elapsed time should be recorded, which results in more overhead on the processor.
Round Robin Scheduling

 The name of this algorithm comes from the round-robin principle, where each
person gets an equal share of something in turns. It is the oldest, simplest
scheduling algorithm, which is mostly used for multitasking.
 In Round-robin scheduling, each ready task runs turn by turn only in a cyclic
queue for a limited time slice. This algorithm also offers starvation free execution
of processes.
Characteristics of Round-Robin

 Round robin is a pre-emptive algorithm


 The CPU is shifted to the next process after a fixed interval of time, which is called
the time quantum/time slice.
 The process that is preempted is added to the end of the queue.
 Round robin is a hybrid model which is clock-driven.
 The time slice should be the minimum that is assigned to a specific task that needs
to be processed; however, it may differ from OS to OS.
 It is a real-time algorithm that responds to events within a specific time limit.
 Round robin is one of the oldest, fairest, and easiest algorithms.
 It is a widely used scheduling method in traditional OS.
How to compute Round-Robin?
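 A possible way to compute round-robin waiting times is sketched below in Python (the quantum and burst times are invented, and all processes are assumed to arrive at time 0): each process runs for at most one quantum and is then re-queued.

# Round robin: each process runs for at most one time quantum, then goes
# to the back of the ready queue (all processes assumed to arrive at t=0).
from collections import deque

quantum = 2
burst = {"P1": 5, "P2": 3, "P3": 1}        # invented burst times
remaining = dict(burst)
queue = deque(burst)
time, completion = 0, {}

while queue:
    name = queue.popleft()
    run = min(quantum, remaining[name])
    time += run
    remaining[name] -= run
    if remaining[name] == 0:
        completion[name] = time
    else:
        queue.append(name)                 # preempted: back of the queue

waiting = {n: completion[n] - burst[n] for n in burst}
print(waiting)   # {'P1': 4, 'P2': 5, 'P3': 4}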
Priority Scheduling

 is a method of scheduling processes that is based on priority. In this algorithm,


the scheduler selects the tasks to work as per the priority.
 The processes with higher priority should be carried out first, whereas jobs with
equal priorities are carried out on a round-robin or FCFS basis. Priority depends
upon memory requirements, time requirements, etc.
 There are two types of Priority Scheduling: Preemptive Scheduling and Non-
Preemptive Scheduling
Preemptive Scheduling

 In preemptive priority scheduling, the tasks are mostly assigned their priorities.
Sometimes it is important to run a task with a higher priority before another lower-
priority task, even if the lower-priority task is still running. The lower-priority task
holds for some time and resumes when the higher-priority task finishes its execution.
Preemptive Priority Scheduling
Non-Preemptive Scheduling

 In this type of scheduling method, the CPU has been allocated to a specific
process. The process that keeps the CPU busy, will release the CPU either by
switching context or terminating. It is the only method that can be used for
various hardware platforms. That’s because it doesn’t need special hardware (for
example, a timer) like preemptive scheduling.
Non-Preemptive Scheduling
Characteristics of Priority Scheduling

 A CPU algorithm that schedules processes based on priority.


 It is used in operating systems for performing batch processes.
 If two jobs having the same priority are READY, it works on a FIRST COME,
FIRST SERVED basis.
 In priority scheduling, a number is assigned to each process that indicates its
priority level.
 The lower the number, the higher the priority.
 In this type of scheduling algorithm, if a newer process arrives that has a
higher priority than the currently running process, then the currently running
process is preempted.
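 Consistent with the characteristics above (lower number = higher priority, FCFS among equal priorities), here is a minimal non-preemptive Python sketch with invented process data.

# Non-preemptive priority scheduling: always dispatch the ready process
# with the highest priority (lowest number); ties fall back to FCFS order.
import heapq

# (priority, arrival_order, name, burst_time) - example data
ready_queue = [(2, 0, "P1", 4), (1, 1, "P2", 3), (2, 2, "P3", 2)]
heapq.heapify(ready_queue)                # ordered by priority, then arrival order

time = 0
while ready_queue:
    priority, order, name, burst = heapq.heappop(ready_queue)
    time += burst
    print(f"{name} (priority {priority}) finishes at t={time}")
# Output: P2 at t=3, then P1 at t=7, then P3 at t=9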
Advantages of Priority Scheduling

 Easy to use scheduling method


 Processes are executed on the basis of priority, so high-priority processes do not
need to wait long, which saves time.
 This method provides a good mechanism by which the relative importance of each
process may be precisely defined.
 Suitable for applications with fluctuating time and resource requirements.
Disadvantages of Priority Scheduling

 If the system eventually crashes, all low priority processes get lost.
 If high priority processes take lots of CPU time, then the lower priority processes
may starve and will be postponed for an indefinite time.
 This scheduling algorithm may leave some low priority processes waiting
indefinitely.
 A process will be blocked when it is ready to run but has to wait for the CPU
because some other process is running currently.
 If a new higher priority process keeps on coming in the ready queue, then the
process which is in the waiting state may need to wait for a long duration of time.
Multilevel Queue Algorithm

 Divides the ready queue into multiple levels or tiers, each with a different priority.
Processes are then assigned to the appropriate level based on their
characteristics, such as priority, memory requirements, and CPU usage.
 A method of organizing the tasks or processes that a computer must perform:
the system divides tasks or processes into different queues based on their
priority. A task’s priority can be determined by factors such as memory capacity,
process priority, or type.
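 A minimal Python sketch of the idea (queue names and processes are invented): each class of process has its own FCFS queue, and the scheduler always serves the highest-priority non-empty queue first.

# Multilevel queue sketch: separate queues per class, served in fixed
# priority order (system > interactive > batch); each queue is FCFS.
from collections import deque

queues = {
    "system": deque(["S1"]),
    "interactive": deque(["I1", "I2"]),
    "batch": deque(["B1"]),
}
priority_order = ["system", "interactive", "batch"]

def pick_next():
    for level in priority_order:          # always scan from the top level down
        if queues[level]:
            return level, queues[level].popleft()
    return None

while True:
    chosen = pick_next()
    if chosen is None:
        break
    level, proc = chosen
    print(f"run {proc} from the {level} queue")
# Batch jobs run only when every higher-priority queue is empty,
# which is exactly the starvation risk noted in the disadvantages below.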
Advantages of Multilevel Queue
Algorithm
 Efficient Resource Utilization: Multilevel queue scheduling allows the system to
allocate resources more efficiently by grouping processes with similar resource
requirements into separate queues.
 Improved Response Time: By assigning higher priority to interactive processes
that require a fast response time, multilevel queue scheduling can reduce response
time and improve system performance.
 Better Throughput: Multilevel queue scheduling can improve the overall
throughput of the system by executing processes more efficiently. The system can
execute multiple processes concurrently from different queues, improving the
overall efficiency of the system.
Advantages of Multilevel Queue
Algorithm
 Flexibility: Multilevel queue scheduling can be customized to suit different types
of applications or workloads by adjusting the priority levels of the queues. This
allows the system to adapt to changing workload demands, ensuring that
resources are allocated efficiently.
 Fairness: Multilevel queue scheduling can provide a fair allocation of CPU time
to all processes by ensuring that each queue is executed in turn. This can prevent
any single process from monopolizing the CPU and starving other processes of
CPU time.
Disadvantage of Multilevel Queue
Algorithm
 Complexity: Multilevel queue scheduling is a relatively complex scheduling
algorithm that can be difficult to implement and manage. The system needs to
maintain multiple separate queues with different priority levels, which can be
challenging to maintain.
 Overhead: Dividing the ready queue into multiple queues can increase the
overhead associated with the scheduling algorithm, which can negatively impact
system performance.
Disadvantage of Multilevel Queue
Algorithm
 Inflexibility: The fixed priority levels of multilevel queue scheduling may not be
suitable for all types of applications or workloads. Some applications may require
more or less CPU time than the priority level assigned to them, which can result
in inefficient resource allocation.
 Starvation: If a queue has a large number of processes with a higher priority than
those in other queues, the processes in the lower-priority queues may be starved
of CPU time.
Multilevel Queue Algorithm
Device Management

 An OS manages device communication via its respective drivers. It performs the
following activities for device management: it keeps track of all devices connected
to the system; it designates a program responsible for every device, known as the
Input/Output controller; it decides which process gets access to a certain device and
for how long; it allocates devices effectively and efficiently; and it deallocates
devices when they are no longer required. There are various input and output
devices, and the OS controls their working: it receives requests from these devices,
performs the specific task, and communicates back to the requesting process.
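 As a toy illustration of this bookkeeping (device and process names are invented; real device management goes through drivers and I/O controllers), the Python sketch below records which process currently holds which device.

# Toy device allocation table: OS-style bookkeeping of which process
# currently holds which device (illustrative only).
devices = {"printer": None, "disk0": None}     # device -> owning process

def request_device(device, pid):
    if devices[device] is None:
        devices[device] = pid
        return True
    return False                               # busy: the process must wait

def release_device(device, pid):
    if devices[device] == pid:
        devices[device] = None

request_device("printer", "P1")          # granted
print(request_device("printer", "P2"))   # False - P2 must wait
release_device("printer", "P1")
print(request_device("printer", "P2"))   # True - now granted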
File Management

 A file system is organized into directories for efficient or easy navigation and
usage. These directories may contain other directories and other files. An
Operating System carries out the following file management activities. It keeps
track of where information is stored, user access settings, the status of every file,
and more. These facilities are collectively known as the file system. An OS keeps
track of information regarding the creation, deletion, transfer, copy, and storage of
files in an organized way. It also maintains the integrity of the data stored in these
files, including the file directory structure, by protecting against unauthorized
access.
User Interface or Command Interpreter

 The user interacts with the computer system through the operating system. Hence
OS acts as an interface between the user and the computer hardware. This user
interface is offered through a set of commands or a graphical user interface (GUI).
Through this interface, the user interacts with the applications and the
machine hardware.
Booting the Computer

 The process of starting or restarting the computer is known as booting. If the
computer is switched off completely and then turned on, it is called cold
booting. Warm booting is the process of using the operating system to restart the
computer.
Security

 The following security measures are used to protect user data:
 Protection against unauthorized access through login.
 Protection against intrusion by keeping the firewall active.
 Protecting the system memory against malicious access.
 Displaying messages related to system vulnerabilities.
Control Over System Performance

 Operating systems play a pivotal role in controlling and optimizing system


performance. They act as intermediaries between hardware and software, ensuring
that computing resources are efficiently utilized. One fundamental aspect is
resource allocation, where the OS allocates CPU time, memory, and I/O devices
to different processes, striving to provide fair and optimal resource utilization.
Process scheduling, a critical function, helps decide which processes or threads
should run and when, preventing any single task from monopolizing the CPU and
enabling effective multitasking.
Job Accounting

 The operating system keeps track of time and resources used by various tasks and
users, this information can be used to track resource usage for a particular user or
group of users. In a multitasking OS where multiple programs run simultaneously,
the OS determines which applications should run in which order and how time
should be allocated to each application.
Error-Detecting Aids

 The operating system constantly monitors the system to detect errors and avoid
malfunctioning computer systems. From time to time, the operating system
checks the system for any external threat or malicious software activity. It also
checks the hardware for any type of damage. This process displays several alerts
to the user so that the appropriate action can be taken against any damage caused
to the system.
Coordination Between Other Software and
Users
 Operating systems also coordinate and assign interpreters, compilers, assemblers,
and other software to the various users of the computer systems. In simpler terms,
think of the operating system as the traffic cop of your computer. It directs and
manages how different software programs can share your computer’s resources
without causing chaos. It ensures that when you want to use a program, it runs
smoothly without crashing or causing problems for others. So, it’s like the
friendly officer ensuring a smooth flow of traffic on a busy road, making sure
everyone gets where they need to go without any accidents or jams.
Performs Basic Computer Tasks

 The management of various peripheral devices such as the mouse, keyboard, and
printer is carried out by the operating system. Today most operating systems are
plug-and-play. These operating systems automatically recognize and configure the
devices with no user interference.
Network Management

 is the process of orchestrating network traffic and data flow across the enterprise
ecosystem using network monitoring, network security, network automation, and
other tools hosted on-premise or on the cloud.
 There are five types of network management to look after the entire spectrum of
network-related processes: Network Fault Management; Network Configuration
Management; Network Accounting and Utilization Management; Network
Performance Management; and Network Security Management.
Network Fault Management

 You can have a designated network fault management team to anticipate, detect,
and resolve network faults to minimize downtime. In addition to fault resolution,
this function is responsible for logging fault information, maintaining records,
conducting analysis, and aiding in regular audits.
 There need to be clear channels so that the network fault management team can
report back to the network administrator to maintain transparency. The team will
also work closely with end users when they report faults.
Network Configuration Management

 Network configurations are a key aspect of performance. These configurations are expected to change


dynamically to keep up with data and traffic demands in a large enterprise. An
example of a network configuration management task is an IT professional
remotely altering the connectivity settings to boost performance.
 Network configuration management relies heavily on automation so that the team
does not need to manually look up configuration requirements and can provision
changes automatically instead. Like network fault management, the network
configuration management team must also keep detailed records of all changes,
their outcomes, and issues, if any.
Network Accounting and Utilization
Management
 As network requirements evolve, employees will consume more network
resources and add to enterprise costs. The network accounting management team
monitors utilization, finds anomalies, and tracks utilization trends for different
departments, business functions, office locations, online products, or even
individual users.
 In some businesses (especially digital service providers), network accounting
management is directly linked to profitability. For example, an ecommerce
company might need to track network utilization and benchmark against
profitability during peak and lull periods. In large enterprises, network accounting
management is a shared service organization that leases network resources to
different branches and subsidiaries to maintain an internal profit margin.
Network Performance Management

 This is one of the most central aspects of network management. Network


performance management involves various tasks that help boost network uptime,
service availability, and concurrent bandwidth speeds. Here too, automation plays
a major role.
 A single dashboard is connected to various network components; it monitors
performance KPIs and raises an alert if a threshold is breached. For example, the
network performance management team might want to map network response
times 24/7 to avoid impacting the end-user experience. If there is an anomaly, the
network performance management team will work closely with the network fault
management team to resolve the issue.
Network Security Management

 As most enterprise processes move online, network security is vital for resilience,
risk management, and success. For example, 68% of enterprises surveyed by
Telia Carrier in 2020 reported facing a distributed denial of service (DDoS) attack in the previous year.
 In a DDOS attack, multiple connected online devices target an enterprise website
with fake traffic to block legitimate traffic. Network security management
involves protecting a system against these and other issues. An enterprise network
also generates a regular stream of logs analyzed by the network security
management team to find any threat fingerprints.
