
1. What is an operating system? Discuss the role/functions of the OS as a resource manager.


Operating System as Resource Manager
Let us understand how the operating system works as a Resource Manager.

 Nowadays all modern computers consist of processors, memories, timers, network interfaces, printers, and many other devices.

 In the bottom-up view, the operating system provides an orderly and controlled allocation of the processors, memories, and I/O devices among the various programs competing for them.

 The operating system allows multiple programs to be in memory and run at the same time.

 Resource management includes multiplexing (sharing) resources in two different ways: in time and in space.

 In time multiplexing, different programs take turns using the CPU: first one uses the resource, then the next one ready in the queue, and so on. For example: sharing the printer, one job after another.

 In space multiplexing, instead of the programs taking turns, each one gets part of the resource. For example, main memory is divided among several running programs, so each one can be resident at the same time.
The diagram given below shows the functioning of the OS as a resource manager.

2. Explain the different types of operating system. OR List out the types of operating system and explain batch OS and time-sharing OS in brief. OR What is an operating system? Explain any one type of operating system.

Types of Operating Systems: Some widely used types of operating systems are as follows –
1. Batch Operating System –
This type of operating system does not interact with the computer directly. There is an operator who takes similar jobs having the same requirements and groups them into batches. It is the operator's responsibility to sort jobs with similar needs.

Advantages of Batch Operating System:
 It is otherwise very difficult to guess the time required for a job to complete; the processors of batch systems know how long a job will take while it is in the queue
 Multiple users can share the batch system
 The idle time of the batch system is very low
 It is easy to manage large, repetitive workloads in batch systems
Disadvantages of Batch Operating System:
 The computer operators should be well acquainted with batch systems
 Batch systems are hard to debug
 They are sometimes costly
 The other jobs will have to wait for an unknown time if any job fails
Examples of batch-based operating systems: payroll systems, bank statement generation, etc.
2. Time-Sharing Operating Systems –
Each task is given some time to execute so that all tasks work smoothly. Each user gets a share of CPU time, as they all use a single system. These systems are also known as multitasking systems. The tasks can come from a single user or from different users. The time each task gets to execute is called a quantum. After this interval is over, the OS switches to the next task.

Advantages of Time-Sharing OS:


 Each task gets an equal opportunity
 Fewer chances of duplication of software
 CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
 Reliability problem
 One must have to take care of the security and integrity of user
programs and data
 Data communication problem
Examples of Time-Sharing OSs are: Multics, Unix, etc.
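The quantum-based switching described above can be illustrated with a tiny simulation; this is only a sketch, and the task names and burst times below are invented for illustration.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate time-sharing: each task runs for at most one quantum per turn."""
    queue = deque(bursts)                      # ready queue of (name, remaining) pairs
    finished = []
    while queue:
        name, remaining = queue.popleft()      # dispatch the task at the head
        if remaining > quantum:
            # quantum expired: preempt the task and move it to the back
            queue.append((name, remaining - quantum))
        else:
            finished.append(name)              # task completes within this slice
    return finished

print(round_robin([("A", 5), ("B", 2), ("C", 3)], quantum=2))  # → ['B', 'C', 'A']
```

The short job B finishes in its first slice, while A, the longest job, is preempted twice and finishes last, which is exactly the fairness behaviour a time-sharing OS aims for.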
3. Distributed Operating System –
These operating systems are a relatively recent advancement in the world of computer technology and are being widely adopted, at a great pace. Various autonomous, interconnected computers communicate with each other over a shared communication network. The independent systems possess their own memory units and CPUs and are referred to as loosely coupled (distributed) systems. Their processors may differ in size and function. The major benefit of working with this type of operating system is that a user can access files or software that are not actually present on his own system but on some other system connected to the network, i.e., remote access is enabled within the devices connected to that network.

Advantages of Distributed Operating System:
 Failure of one system does not affect the other systems' communication, as all systems are independent of each other
 Electronic mail increases the speed of data exchange
 Since resources are shared, computation is fast and durable
 The load on the host computer is reduced
 These systems are easily scalable, as many systems can be added to the network
 Delays in data processing are reduced
Disadvantages of Distributed Operating System:
 Failure of the main network stops the entire communication
 The languages used to establish distributed systems are not yet well defined
 These systems are not readily available, as they are very expensive; moreover, the underlying software is highly complex and not yet well understood
Examples of distributed operating systems: LOCUS, etc.
4. Network Operating System –
These systems run on a server and provide the capability to manage data,
users, groups, security, applications, and other networking functions. These
types of operating systems allow shared access of files, printers, security,
applications, and other networking functions over a small private network. One
more important aspect of network operating systems is that all users are well aware of the underlying configuration of all other users within the network, their individual connections, etc., which is why these computers are popularly known as tightly coupled systems.

Advantages of Network Operating System:


 Highly stable centralized servers
 Security concerns are handled through servers
 New technologies and hardware up-gradation are easily integrated
into the system
 Server access is possible remotely from different locations and types
of systems
Disadvantages of Network Operating System:
 Servers are costly
 User has to depend on a central location for most operations
 Maintenance and updates are required regularly
Examples of Network Operating System are: Microsoft Windows Server
2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell
NetWare, and BSD, etc.
5. Real-Time Operating System –
These types of OSs serve real-time systems. The time interval required to
process and respond to inputs is very small. This time interval is
called response time.
Real-time systems are used when the time requirements are very strict, as in missile systems, air traffic control systems, robots, etc.
There are two types of real-time operating systems, as follows:
 Hard Real-Time Systems:
These OSs are meant for applications where time constraints are very strict and even the shortest possible delay is not acceptable. These systems are built for life-saving tasks, like automatic parachutes or airbags, which must be readily available in case of an accident. Virtual memory is rarely found in these systems.
 Soft Real-Time Systems:
These OSs are for applications where the time constraint is less strict.

Advantages of RTOS:
 Maximum utilization: maximum utilization of devices and the system, and thus more output from all resources
 Task shifting: the time assigned to shifting tasks in these systems is very short; for example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds
 Focus on applications: the focus is on running applications, with less importance given to applications waiting in the queue
 RTOS in embedded systems: since the programs are small, an RTOS can also be used in embedded systems, e.g., in transport and elsewhere
 Error-free: these types of systems are designed to be error-free
 Memory allocation: memory allocation is best managed in these types of systems
Disadvantages of RTOS:
 Limited tasks: very few tasks run at the same time, and concentration is kept on a few applications to avoid errors
 Heavy system resources: these systems use many system resources, which are sometimes expensive as well
 Complex algorithms: the algorithms are very complex and difficult for the designer to write
 Device drivers and interrupt signals: an RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as early as possible
 Thread priority: it is not easy to set thread priorities, as these systems are very little prone to switching tasks
Examples of real-time systems: scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

3. Explain the different services provided by an operating system. OR Discuss various operating system services.
Following are a few common services provided by an operating system −
 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection
Program execution
Operating systems handle many kinds of activities from user programs to system
programs like printer spooler, name servers, file server, etc. Each of these activities
is encapsulated as a process.
A process includes the complete execution context (code to execute, data to
manipulate, registers, OS resources in use). Following are the major activities of an
operating system with respect to program management −

 Loads a program into memory.
 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.

I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.

 I/O operation means read or write operation with any file or any specific I/O device.
 Operating system provides the access to the required I/O device when required.

File system manipulation


A file represents a collection of related information. Computers can store files on the
disk (secondary storage), for long-term storage purpose. Examples of storage media
include magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of
these media has its own properties like speed, capacity, data transfer rate and data
access methods.
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories. Following are the major
activities of an operating system with respect to file management −

 Program needs to read a file or write a file.


 The operating system grants the program permission to operate on the file.
 Permission varies: read-only, read-write, denied, and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.

Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages
communications between all the processes. Multiple processes communicate with
one another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention
and security. Following are the major activities of an operating system with respect to
communication −

 Two processes often require data to be transferred between them


 Both the processes can be on one computer or on different computers, but are
connected through a computer network.
 Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.

Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices
or in the memory hardware. Following are the major activities of an operating system
with respect to error handling −

 The OS constantly checks for possible errors.


 The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management
In case of a multi-user or multi-tasking environment, resources such as main memory, CPU cycles, and file storage are to be allocated to each user or job. Following are
the major activities of an operating system with respect to resource management −

 The OS manages all kinds of resources using schedulers.


 CPU scheduling algorithms are used for better utilization of CPU.

Protection
Considering a computer system having multiple users and concurrent execution of
multiple processes, the various processes must be protected from each other's
activities.
Protection refers to a mechanism or a way to control the access of programs,
processes, or users to the resources defined by a computer system. Following are
the major activities of an operating system with respect to protection −

 The OS ensures that all access to system resources is controlled.


 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.

4. Explain the Monolithic and Layered Systems of an Operating System

The monolithic operating system is a very basic operating system in which file
management, memory management, device management, and process management
are directly controlled within the kernel. The kernel can access all the resources present
in the system. In monolithic systems, each component of the operating system is
contained within the kernel. Operating systems that use the monolithic architecture were first used in the 1970s.

The monolithic operating system is also known as the monolithic kernel. This is an old
operating system used to perform small tasks like batch processing and time-sharing
tasks in banks. The monolithic kernel acts as a virtual machine that controls all
hardware parts.

It is different from a microkernel, which has limited tasks. A microkernel is divided into
two parts, kernel space, and user space. Both parts communicate with each other
through IPC (Inter-process communication). Microkernel's advantage is that if one
server fails, then the other server takes control of it.

Monolithic kernel
A monolithic kernel is an operating system architecture where the entire operating
system is working in kernel space. The monolithic model differs from other operating
system architectures, such as the microkernel architecture, in that it alone defines a
high-level virtual interface over computer hardware.


A set of primitives or system calls implement all operating system services such as
process management, concurrency, and memory management. Device drivers can be
added to the kernel as modules.

Advantages of Monolithic Kernel

Here are the following advantages of a monolithic kernel, such as:

o The execution of the monolithic kernel is quite fast as the services such as
memory management, file management, process scheduling, etc., are
implemented under the same address space.
o A process runs completely in single address space in the monolithic kernel.
o The monolithic kernel is a static single binary file.

Disadvantages of Monolithic Kernel

Here are some disadvantages of the monolithic kernel, such as:


o If any service fails in the monolithic kernel, it leads to the failure of the entire
system.
o The entire operating system needs to be modified by the user to add any new
service.

Monolithic System Architecture


A monolithic design of the operating system architecture makes no special
accommodation for the special nature of the operating system. Although the design
follows the separation of concerns, no attempt is made to restrict the privileges
granted to the individual parts of the operating system. The entire operating system
executes with maximum privileges. The communication overhead inside the monolithic
operating system is the same as that of any other software, considered relatively low.

The operating system can be implemented with the help of various structures. The structure of the OS depends mainly on how the various common components of the operating system are interconnected and melded into the kernel. Depending on this, we get the different structures of the operating system.

The layered structure approach breaks up the operating system into different layers and retains much more control over the system. The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user interface. The layers are designed so that each layer uses the functions of the lower-level layers only. This simplifies debugging: if the lower-level layers have already been debugged and an error occurs, the error must be in the layer currently being debugged, since the lower-level layers are already verified.

o This allows implementers to change the inner workings and increases modularity.
o As long as the external interface of the routines doesn't change, developers have more freedom to change the inner workings of the routines.
o The main advantage is the simplicity of construction and debugging. The main difficulty is defining the various layers.

The main disadvantage of this structure is that the data needs to be modified and
passed on at each layer, which adds overhead to the system. Moreover, careful
planning of the layers is necessary as a layer can use only lower-level layers. UNIX is
an example of this structure.

Why Layering in Operating System?

Layering provides a distinct advantage in an operating system. All the layers can be defined separately and interact with each other as required. It is also easier to create, maintain, and update the system if it is built in the form of layers. A change in one layer's specification does not affect the rest of the layers.
Each layer in the operating system can interact only with the layers directly above and below it. The lowest layer handles the hardware, and the uppermost layer deals with the user applications.

Architecture of Layered Structure


This type of operating system was created as an improvement over the early
monolithic systems. The operating system is split into various layers in the layered
operating system, and each of the layers has different functionalities. There are some
rules in the implementation of the layers as follows.

o A particular layer can access all the layers below it, but not the layers above it. That is, layer n-1 can access all the layers from n-2 down to 0, but it cannot access layer n.
o Layer 0 deals with allocating processes and switching between processes when interrupts occur or the timer expires. It also deals with the basic multiprogramming of the CPU.

Thus, if the user layer wants to interact with the hardware layer, the request travels down through all the layers from n-1 to 1. Each layer must be designed and implemented so that it needs only the services provided by the layers below it.

There are six layers in the layered operating system. A diagram demonstrating these
layers is as follows:
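The access rule (each layer calls only the layer directly below it) can be sketched as a toy model in Python; the layer names and the "read" operation below are invented for illustration and do not come from any real OS.

```python
# Toy layered OS: a request entered at a high layer travels down,
# each function calling only the layer directly below it.
def layer0_hardware(op):
    return f"hw:{op}"                 # layer 0: the hardware itself

def layer1_cpu(op):
    return layer0_hardware(op)        # layer 1: CPU scheduling, calls layer 0 only

def layer2_memory(op):
    return layer1_cpu(op)             # layer 2: memory management, calls layer 1 only

def layer3_user(op):
    return layer2_memory(op)          # top layer: user interface

print(layer3_user("read"))            # → hw:read
```

Because `layer3_user` never touches `layer0_hardware` directly, debugging can proceed bottom-up exactly as the text describes: once a lower layer is verified, a failure must lie in the layer above it.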
5. What is the function of the Kernel? Explain System Calls with an Example.

Functions of the Kernel in Operating System


1) Process Management

Processes are continually created, executed, and terminated while the system is running. A process contains all the information about
the task that needs to be done. So, for executing any task, a process is created inside
the systems. At a time, there are many processes which are in live state inside the
system. The management of all these processes is very important to avoid deadlocks
and for the proper functioning of the system, and it is handled by the Kernel.

2) Memory management

Whenever a process is created and executed, it occupies memory, and when it gets
terminated, the memory can be used again. But the memory should be handled by
someone so that the released memory can be assigned again to the new processes.
This task is also done by the kernel. The kernel keeps track of which part of the memory is currently allocated and which part is available to be allocated to
other processes.

3) Device Management

The Kernel also manages all the different devices which are connected to the system,
like the Input and Output devices, etc.

4) Interrupt Handling

While executing processes, there are situations where tasks with higher priority need to be handled first. In these cases, the kernel has to interrupt the execution of the current process and handle the higher-priority task that has arrived in between.

5) I/O Communication

As the kernel manages all the devices connected to the system, it is also responsible for handling all input and output exchanged through these devices. So, all
the information that the system receives from the user and all the output that the user
is provided with via different applications is handled by the Kernel.
In computing, a system call is the programmatic way in which a computer
program requests a service from the kernel of the operating system it is
executed on. A system call is a way for programs to interact with the
operating system. A computer program makes a system call when it makes
a request to the operating system’s kernel. System call provides the services
of the operating system to the user programs via Application Program
Interface(API). It provides an interface between a process and operating
system to allow user-level processes to request services of the operating
system. System calls are the only entry points into the kernel system. All
programs needing resources must use system calls.
Services Provided by System Calls :
1. Process creation and management
2. Main memory management
3. File Access, Directory and File system management
4. Device handling(I/O)
5. Protection
6. Networking, etc.
Types of System Calls : There are 5 different categories of system
calls –
1. Process control: end, abort, create, terminate, allocate
and free memory.
2. File management: create, open, close, delete, read file
etc.
3. Device management
4. Information maintenance
5. Communication
Examples of Windows and Unix System Calls –

Category                    Windows                          Unix
Process Control             CreateProcess(),                 fork(),
                            ExitProcess(),                   exit(),
                            WaitForSingleObject()            wait()
File Manipulation           CreateFile(),                    open(),
                            ReadFile(),                      read(),
                            WriteFile(),                     write(),
                            CloseHandle()                    close()
Device Manipulation         SetConsoleMode(),                ioctl(),
                            ReadConsole(),                   read(),
                            WriteConsole()                   write()
Information Maintenance     GetCurrentProcessID(),           getpid(),
                            SetTimer(),                      alarm(),
                            Sleep()                          sleep()
Communication               CreatePipe(),                    pipe(),
                            CreateFileMapping(),             shmget(),
                            MapViewOfFile()                  mmap()
Protection                  SetFileSecurity(),               chmod(),
                            InitializeSecurityDescriptor(),  umask(),
                            SetSecurityDescriptorGroup()     chown()
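The Unix process-control calls in the table (fork, exit, wait) can be exercised through Python's `os` wrappers on a POSIX system. This is only a sketch; the exit status 7 is an arbitrary value chosen for illustration.

```python
import os

pid = os.fork()                      # process-control system call: duplicate this process
if pid == 0:
    os._exit(7)                      # child: terminate immediately via the exit system call
else:
    _, status = os.waitpid(pid, 0)   # parent: wait() until the child finishes
    print(os.WEXITSTATUS(status))    # → 7, the child's exit status
```

The program crosses the user/kernel boundary three times: once to clone the process, once when the child exits, and once when the parent blocks in `wait()`, matching the claim above that system calls are the only entry points into the kernel.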

6. Give the role of “Kernel” and “Shell” in UNIX.

7. Write Various States of Process.

 As a process executes, it changes state. The state of a process is defined as the current activity of the process.
 1. New: The process is being created.
 2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run.
 3. Running: Process instructions are being executed (i.e., the process is currently being executed).
 4. Waiting: The process is waiting for some event to occur (such as the completion of an I/O operation).
 5. Terminated: The process has finished execution.

A process can be in one of these five states at a time.
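The five states and the legal moves between them can be written down as a small transition table; this is a minimal sketch, with the transition set derived from the state descriptions above.

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal moves in the five-state model: admitted, dispatched, preempted,
# blocked on an event, event completed, and finished.
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},
    State.TERMINATED: set(),
}

def can_move(src, dst):
    return dst in TRANSITIONS[src]

print(can_move(State.RUNNING, State.WAITING))  # → True (process blocks on I/O)
```

Note that a waiting process cannot go straight back to running: when its I/O completes it first becomes ready and must be dispatched again by the scheduler.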

8. Write about Priority Scheduling, Round Robin Scheduling, and Shortest Job First Scheduling with proper examples.

9. Write about Process Control Block.

 Each process is represented in the operating system by a process control block (PCB), also called a task control block. The PCB is the data structure the operating system uses to group together all the information it needs about a particular process.
 The PCB contains many pieces of information associated with a specific process, covering the process life cycle and everything else known about the process, as described below.

 Pointer
The pointer points to another process control block and is used for maintaining the scheduling list.
 Process State
The process state may be new, ready, running, waiting, and so on.
 Program Counter
The program counter indicates the address of the next instruction to be executed for this process.
 CPU Registers
These include general-purpose registers, stack pointers, index registers, accumulators, etc. The number and type of registers depend entirely on the computer architecture.
 Memory Management Information
This information may include the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system. It is useful for deallocating memory when the process terminates.
 Accounting Information
This includes the amount of CPU and real time used, time limits, job or process numbers, account numbers, etc.
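The fields listed above map naturally onto a record type. The following is a hypothetical sketch (the field names are invented for illustration and match no real kernel's layout):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    pid: int
    state: str = "new"                             # new / ready / running / waiting / terminated
    program_counter: int = 0                       # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU register values
    base: int = 0                                  # memory-management info: base register
    limit: int = 0                                 # memory-management info: limit register
    cpu_time_used: float = 0.0                     # accounting information
    next: Optional["PCB"] = None                   # pointer linking PCBs in a scheduling list

# The 'next' pointer is what lets the scheduler chain PCBs into a ready queue:
ready_head = PCB(pid=2, state="ready", next=PCB(pid=3, state="ready"))
print(ready_head.next.pid)  # → 3
```

On a context switch, the kernel would save the outgoing process's registers and program counter into its PCB and restore them from the PCB of the process being dispatched.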

10. Explain Benefits of Threads. Differentiate between process and thread.

Benefits of Threads
o Enhanced throughput of the system: When the process is split into many threads,
and each thread is treated as a job, the number of jobs done in the unit time increases.
That is why the throughput of the system also increases.
o Effective Utilization of Multiprocessor system: When you have more than one
thread in one process, you can schedule more than one thread in more than one
processor.
o Faster context switch: The context switching period between threads is less than the
process context switching. The process context switch means more overhead for the
CPU.
o Responsiveness: When the process is split into several threads, and one thread completes its execution, the process can respond as soon as possible.
o Communication: Communication between multiple threads is simple, because the threads share the same address space; between two processes, by contrast, we must adopt special, exclusive communication strategies.
o Resource sharing: Resources such as code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared between threads; each thread has its own stack and registers.
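The shared-address-space benefit can be seen directly with Python's `threading` module: all four threads below update the same global variable, something separate processes could not do without explicit IPC. A minimal sketch.

```python
import threading

counter = 0                       # lives in the one address space all threads share
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:                # serialize updates to the shared variable
            counter += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                      # wait for all four threads to finish

print(counter)  # → 4000: every thread saw and updated the same memory
```

The lock is needed precisely because of the sharing: without it, two threads could read the same old value of `counter` and lose an increment.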

S.No  Process                                    Thread

1.    A process means any program in             A thread is a segment of a
      execution.                                 process.
2.    A process takes more time to               A thread takes less time to
      terminate.                                 terminate.
3.    It takes more time for creation.           It takes less time for creation.
4.    It takes more time for context             It takes less time for context
      switching.                                 switching.
5.    A process is less efficient in             A thread is more efficient in
      terms of communication.                    terms of communication.
6.    Multiprogramming holds the concept         We don't need multiprogramming for
      of multiple processes.                     multiple threads, because a single
                                                 process consists of multiple threads.
7.    Processes are isolated.                    Threads share memory.
8.    A process is called a heavyweight          A thread is lightweight, as each
      process.                                   thread in a process shares code,
                                                 data, and resources.
9.    Process switching uses an interface        Thread switching does not require
      in the operating system.                   calling the operating system and
                                                 causing an interrupt to the kernel.

11. Explain implementation of threads with advantages and disadvantages.

 A thread library provides programmers with an API for creating and managing threads. Support for threads must be provided either at the user level or by the kernel.

 Kernel-level threads are supported and managed directly by the operating system.
 User-level threads are supported above the kernel in user space and are managed without kernel support.
Kernel level threads

Kernel-level threads are supported and managed directly by the operating system.

 The kernel knows about and manages all threads.
 There is one process control block (PCB) per process.
 There is one thread control block (TCB) per thread in the system.
 The kernel provides system calls to create and manage threads from user space.

Advantages

 The kernel has full knowledge of all threads.
 The scheduler may decide to give more CPU time to a process having a large number of threads.
 Good for applications that frequently block.

Disadvantages

 The kernel must manage and schedule all threads.
 There is significant overhead and increased kernel complexity.
 Kernel-level threads are slow and inefficient compared to user-level threads.
 Thread operations are hundreds of times slower than with user-level threads.

User level threads

User-level threads are supported above the kernel in user space and are managed without kernel support.

 Threads are managed entirely by the run-time system (a user-level library).
 Ideally, thread operations should be as fast as a function call.
 The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
Advantages

 Can be implemented on an OS that does not support kernel-level threads.
 Does not require modifications of the OS.
 Simple representation: the PC, registers, stack, and a small thread control block are all stored in the user-level process address space.
 Simple management: creating, switching, and synchronizing threads is done in user space without kernel intervention.
 Fast and efficient: switching threads is not much more expensive than a function call.

Disadvantages

 Not a perfect solution (a trade-off).
 Lack of coordination between the user-level thread manager and the kernel.
 The OS may make poor decisions like:
o scheduling a process with idle threads
o blocking a process due to a blocking thread even though the process has other threads that can run
o giving a process as a whole one time slice, irrespective of whether the process has 1 or 1000 threads
o unscheduling a process with a thread holding a lock
 May require communication between the kernel and the user-level thread manager (scheduler activations) to overcome the above problems.

User-level thread models

In general, user-level threads can be implemented using one of four models:

 Many-to-one
 One-to-one
 Many-to-many
 Two-level

12. Give the difference between Multiprogramming, Multitasking, and Multiprocessing systems.
13. Explain the micro-kernel system architecture
in detail.

14. Explain the features of time sharing system

15. What are system calls? What is an application programming interface?

16. What is a thread? Explain the thread structure, and explain any one type of thread in detail.

Threads in Operating System


A thread is a single sequential flow of execution of tasks within a process, so it is also known as a thread of execution or a thread of control. Every operating system provides a way to execute threads inside a process, and a process can contain more than one thread. Each thread of the same process uses a separate program counter, a stack of activation records, and a control block. A thread is often referred to as a lightweight process.
A process can be split into many threads. For example, in a browser, many tabs can be viewed as threads. MS Word uses many threads: formatting text in one thread, processing input in another thread, etc.

Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new
process.
o Because threads can share common data, they do not need inter-process communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.

Types of Threads

In the operating system, there are two types of threads:

1. Kernel-level thread.
2. User-level thread.

User-level thread

The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented by the user. If a user performs a blocking operation in a user-level thread, the whole process is blocked. The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes. Examples: Java threads, POSIX threads, etc.


Advantages of User-level threads

1. User threads are more easily implemented than kernel threads.
2. User-level threads can be used on operating systems that do not support
threads at the kernel level.
3. They are faster and more efficient.
4. Context switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. User-level thread representation is very simple: the registers, PC, stack,
and mini thread control block are stored in the address space of the
user-level process.
7. It is simple to create, switch, and synchronize threads without the
intervention of the kernel.

Disadvantages of User-level threads

1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.

Kernel level thread

Kernel-level threads are recognized by the operating system. There is a
thread control block and a process control block in the system for each
thread and process. Kernel-level threads are implemented by the operating
system: the kernel knows about all the threads and manages them, and it
offers system calls to create and manage threads from user space. The
implementation of kernel threads is more difficult than that of user threads,
and context switch time is longer. However, if one kernel thread performs a
blocking operation, another thread of the same process can continue
execution. Examples: Windows, Solaris.
Advantages of Kernel-level threads

1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a
large number of threads.
3. Kernel-level threads are good for applications that block frequently.

Disadvantages of Kernel-level threads

1. The kernel must manage and schedule all threads.
2. The implementation of kernel threads is more difficult than that of user
threads.
3. Kernel-level threads are slower than user-level threads.

Components of Threads
Any thread has the following components.

1. Program counter
2. Register set
3. Stack space

17.Define following terms: Starvation, Process, Mutual Exclusion

 Process
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned
in the program.
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data. The following image shows a
simplified layout of a process inside main memory −

1. Stack - The process stack contains temporary data such as
method/function parameters, the return address and local variables.

2. Heap - This is memory that is dynamically allocated to a process during
its run time.

3. Text - This includes the current activity, represented by the value of
the Program Counter and the contents of the processor's registers.

4. Data - This section contains the global and static variables.

Starvation happens when a process is indefinitely delayed. This can arise
when a process needs a resource for execution that is never assigned to it.

These resources are things like:

 CPU time
 memory
 disk space
 network bandwidth
 I/O access to network or disk

Starvation is the problem that occurs when low-priority processes get jammed
for an unspecified time because high-priority processes keep executing.

A steady stream of higher-priority processes can stop a low-priority process
from ever obtaining the processor.
A mutual exclusion (mutex) is a program object that prevents simultaneous
access to a shared resource. This concept is used in concurrent programming
with a critical section, a piece of code in which processes or threads access a
shared resource. Only one thread owns the mutex at a time, thus a mutex with
a unique name is created when a program starts. When a thread holds a
resource, it has to lock the mutex from other threads to prevent concurrent
access of the resource. Upon releasing the resource, the thread unlocks the
mutex.

Mutex comes into the picture when two threads work on the same data at the
same time. It acts as a lock and is the most basic synchronization tool. When a
thread tries to acquire a mutex, it gains the mutex if it is available, otherwise
the thread is set to sleep condition. Mutual exclusion reduces latency and busy-
waits using queuing and context switches. Mutex can be enforced at both the
hardware and software levels.
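The lock/unlock behaviour described above can be sketched with Python's threading.Lock (one common mutex implementation; the counter and thread count are illustrative). Without the lock, the `counter += 1` read-modify-write could interleave between threads and lose updates.

```python
import threading

counter = 0
lock = threading.Lock()      # the mutex guarding the shared resource

def deposit(times):
    global counter
    for _ in range(times):
        with lock:           # acquire the mutex before the critical section
            counter += 1     # only one thread executes this at a time
        # the mutex is released automatically when the block exits

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no update was lost, because access was serialized
```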

20.Explain Round Robin algorithm with proper example.


The name of this algorithm comes from the round-robin principle, where each
person gets an equal share of something in turns. It is the oldest, simplest
scheduling algorithm, which is mostly used for multitasking.

In Round-robin scheduling, each ready task runs turn by turn only in a cyclic
queue for a limited time slice. This algorithm also offers starvation free
execution of processes.

Advantage of Round-robin Scheduling

Here are the pros/benefits of the Round-robin scheduling method:

 It does not suffer from starvation or the convoy effect.
 All the jobs get a fair allocation of CPU.
 It deals with all processes without any priority.
 If you know the total number of processes on the run queue, you can also
estimate the worst-case response time for a process.
 This scheduling method does not depend upon burst time, so it is easily
implementable on any system.
 Once a process has executed for a specific time period, it is preempted,
and another process executes for the same time period.
 It allows the OS to use context switching to save the states of preempted
processes.
 It gives good performance in terms of average response time.

Disadvantages of Round-robin Scheduling

Here are the drawbacks/cons of using Round-robin scheduling:

 If the slicing time of the OS is low, processor output is reduced.
 This method spends more time on context switching.
 Its performance depends heavily on the time quantum.
 Priorities cannot be set for the processes.
 Round-robin scheduling does not give special priority to more important
tasks.
 A lower time quantum results in higher context-switching overhead in the
system.
 Finding the correct time quantum is quite a difficult task in this system.

(Work out any numerical example on your own.)
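As one possible worked example, the following sketch simulates round-robin on the process set used in the scheduling tables later in these notes (arrival times 0-3, burst times 5, 3, 8, 6), with an assumed quantum of 3. A process's waiting time is the total time it sits in the ready queue.

```python
from collections import deque

def round_robin(arrival, burst, quantum):
    """Return per-process waiting times under round-robin scheduling."""
    n = len(burst)
    remaining = list(burst)
    last_ready = list(arrival)        # when each process last became ready
    wait = [0] * n
    admitted = [False] * n
    ready = deque()

    def admit(now):
        # Move every process that has arrived by `now` into the ready queue.
        for i in range(n):
            if not admitted[i] and arrival[i] <= now:
                admitted[i] = True
                ready.append(i)

    t = 0
    admit(t)
    while ready or not all(admitted):
        if not ready:                 # CPU idle until the next arrival
            t = min(arrival[i] for i in range(n) if not admitted[i])
            admit(t)
            continue
        i = ready.popleft()
        wait[i] += t - last_ready[i]  # time spent waiting since last ready
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        admit(t)                      # new arrivals queue before the preempted job
        if remaining[i] > 0:
            last_ready[i] = t
            ready.append(i)
    return wait

print(round_robin([0, 1, 2, 3], [5, 3, 8, 6], 3))  # [9, 2, 12, 11]
```

The resulting waits (9, 2, 12, 11) give an average waiting time of 8.5.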

21.Explain context switching and context switching overhead with appropriate example.

The Context switching is a technique or method used by the operating system to switch a
process from one state to another to execute its function using CPUs in the system. When
switching perform in the system, it stores the old running process's status in the form of
registers and assigns the CPU to a new process to execute its tasks. While a new process is
running in the system, the previous process must wait in a ready queue. The execution of the
old process starts at that point where another process stopped it. It defines the characteristics
of a multitasking operating system in which multiple processes shared the same CPU to
perform multiple tasks without the need for additional processors in the system.

Example of Context Switching

Suppose the states of multiple processes are stored in Process Control
Blocks (PCBs). One process is in the running state, executing its task on
the CPU, when another process with a higher priority arrives in the ready
queue. Context switching replaces the current process with the new process
that requires the CPU to finish its task. While switching, the context
switch saves the status of the old process in its PCB. When that process is
later reloaded onto the CPU, it resumes execution from the point where it
was stopped. If we did not save the state of the process, we would have to
restart its execution from the beginning. In this way, context switching
helps the operating system switch between processes, storing and reloading a
process whenever it needs to execute.

Steps for Context Switching

There are several steps involved in context switching between processes. The
following diagram represents the context switching of two processes, P1 to
P2, when an interrupt, an I/O request, or a higher-priority process occurs
in the ready queue.

As we can see in the diagram, initially the P1 process is running on the CPU
to execute its task while another process, P2, is in the ready state. If an
error or interrupt occurs, or the process requires input/output, P1 switches
from the running state to the waiting state. Before changing the state of
process P1, context switching saves the context of P1 (its registers and
program counter) into PCB1. After that, it loads the state of the P2 process
from PCB2 and moves P2 from the ready state to the running state.
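The save/restore step above is, at its core, plain data movement between the CPU and PCB structures. A minimal sketch (the dictionary fields and register values are illustrative, not a real PCB layout):

```python
def save_context(pcb, cpu):
    # Running -> waiting: copy the CPU state into the process's PCB.
    pcb["program_counter"] = cpu["pc"]
    pcb["registers"] = dict(cpu["regs"])

def load_context(pcb, cpu):
    # Ready -> running: restore the CPU state from the process's PCB.
    cpu["pc"] = pcb["program_counter"]
    cpu["regs"] = dict(pcb["registers"])

cpu = {"pc": 104, "regs": {"ax": 7}}                     # P1 currently running
pcb1 = {}
pcb2 = {"program_counter": 200, "registers": {"ax": 0}}  # P2 waiting in ready queue

save_context(pcb1, cpu)   # P1 is preempted: its state goes into PCB1
load_context(pcb2, cpu)   # P2 is dispatched: its state comes from PCB2
print(cpu["pc"])          # 200: the CPU now executes P2

load_context(pcb1, cpu)   # later, P1 resumes exactly where it stopped
print(cpu["pc"])          # 104
```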
22. Explain SJF process scheduling algorithm with example
 Shortest Job First (SJF) is an algorithm in which the process having the
smallest execution time is chosen for the next execution. This scheduling
method can be preemptive or non-preemptive. It significantly reduces the
average waiting time for other processes awaiting execution. The full form
of SJF is Shortest Job First.
There are basically two types of SJF methods:

 Non-Preemptive SJF
 Preemptive SJF

Characteristics of SJF Scheduling

 Each job is associated with a unit of time to complete.
 This algorithm is helpful for batch-type processing, where waiting for
jobs to complete is not critical.
 It can improve process throughput by making sure that shorter jobs are
executed first, and hence possibly have a shorter turnaround time.

Non-Preemptive SJF
In non-preemptive scheduling, once the CPU cycle is allocated to a process,
the process holds it until it reaches a waiting state or terminates.

Consider a set of processes, each having its own unique burst time and
arrival time.

(Work out any non-preemptive example on your own.)
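As one possible worked example, the sketch below runs non-preemptive SJF on the same process set used in the scheduling tables later in these notes (arrival times 0-3, burst times 5, 3, 8, 6).

```python
def sjf_nonpreemptive(arrival, burst):
    """Return per-process waiting times under non-preemptive SJF."""
    n = len(burst)
    done = [False] * n
    wait = [0] * n
    t = 0
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        if not ready:                 # CPU idle until the next arrival
            t = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        i = min(ready, key=lambda j: burst[j])  # shortest job among ready ones
        wait[i] = t - arrival[i]
        t += burst[i]                 # runs to completion (non-preemptive)
        done[i] = True
    return wait

print(sjf_nonpreemptive([0, 1, 2, 3], [5, 3, 8, 6]))  # [0, 4, 12, 5]
```

The waits (0, 4, 12, 5) give an average waiting time of 5.25, matching the SJN table under question 27.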

Preemptive
In Preemptive SJF scheduling, jobs are put into the ready queue as they
arrive. The process with the shortest burst time begins execution. If a
process with an even shorter burst time arrives, the current process is
preempted, and the shorter job is allocated the CPU.
(Work out any preemptive example on your own.)
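As one possible worked example, the sketch below runs preemptive SJF (Shortest Remaining Time First) on the same process set (arrival times 0-3, burst times 5, 3, 8, 6), re-deciding after every time unit.

```python
def sjf_preemptive(arrival, burst):
    """Return per-process waiting times under preemptive SJF (SRTF)."""
    n = len(burst)
    remaining = list(burst)
    wait = [0] * n
    t = 0
    finished = 0
    while finished < n:
        ready = [i for i in range(n) if arrival[i] <= t and remaining[i] > 0]
        if not ready:                 # CPU idle: advance the clock
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time wins
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finished += 1
            wait[i] = t - arrival[i] - burst[i]     # turnaround minus burst
    return wait

print(sjf_preemptive([0, 1, 2, 3], [5, 3, 8, 6]))  # [3, 0, 12, 5]
```

Here the average waiting time drops to 5.0, versus 5.25 for the non-preemptive version on the same data: preempting P0 lets the short job P1 finish without waiting at all.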

23. What is convoy effect in round robin algorithm ? explain with example

24. What is aging? Explain importance of aging in process scheduling. OR
explain how starvation of lower priority process is prevented in process
scheduling
 Aging is a technique of gradually increasing the priority of processes
that wait in the system for a long time. For example, if priorities range
from 127 (low) to 0 (high), we could increase the priority of a waiting
process by 1 every 15 minutes. Eventually, even a process with an initial
priority of 127 would take no more than 32 hours to age to a priority-0
process.

For example, suppose a process P has priority number 75 at 0 ms. Then after
every 5 ms (you can use any time quantum), we decrease the priority number
of process P by 1 (here too, you can use any step size other than 1). So,
after 5 ms the priority of process P will be 74; after 10 ms it will be 73,
and so on. After a certain period of time, process P becomes a high-priority
process as its priority number approaches 0, and it gets the CPU for
execution. In this way, lower-priority processes also get the CPU. The CPU
is certainly allocated only after a long time, but since the priority of the
process is very low, we are not very concerned about its response time; the
only thing we are taking care of is starvation.
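The countdown described above fits in a few lines. This is a sketch only: the 5 ms quantum and step size of 1 are the arbitrary values chosen in the example, and 0 is treated as the highest priority.

```python
def age(priorities, quantum_ms=5, elapsed_ms=15):
    """Decrease each waiting process's priority number (0 = highest)
    by 1 for every quantum_ms of elapsed waiting time."""
    steps = elapsed_ms // quantum_ms
    return [max(0, p - steps) for p in priorities]

# After 5 ms, priority 75 becomes 74, exactly as in the example above:
print(age([75], elapsed_ms=5))   # [74]
# After 15 ms (three quanta), three processes have each aged by 3:
print(age([75, 10, 3]))          # [72, 7, 0] - the last one reached top priority
```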

26. Explain PCB with all parameters


Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This shows the number of the particular process.
Program Counter
This contains the address of the next instruction that needs to be executed in the
process.
Registers
This specifies the registers that are used by the process. They may include
accumulators, index registers, stack pointers, general purpose registers etc.
List of Open Files
These are the different files that are associated with the process
CPU Scheduling Information
The process priority, pointers to scheduling queues etc. is the CPU scheduling
information that is contained in the PCB. This may also include any other scheduling
parameters.
Memory Management Information
The memory management information includes the page tables or the segment tables
depending on the memory system used. It also contains the value of the base
registers, limit registers etc.
I/O Status Information
This information includes the list of I/O devices used by the process, the list of files
etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are all
a part of the PCB accounting information.
Location of the Process Control Block
The process control block is kept in a memory area that is protected from the normal
user access. This is done because it contains important process information. Some of
the operating systems place the PCB at the beginning of the kernel stack for the
process as it is a safe location.

27. Discuss in brief different types of scheduler


A Process Scheduler schedules different processes to be assigned to the CPU based
on particular scheduling algorithms. There are six popular process scheduling
algorithms which we are going to discuss in this chapter −

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive
algorithms are designed so that once a process enters the running state, it cannot be
preempted until it completes its allotted time, whereas the preemptive scheduling is
based on priority where a scheduler may preempt a low priority running process
anytime when a high priority process enters into a ready state.

First Come First Serve (FCFS)

 Jobs are executed on a first come, first served basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average wait time is high.
Wait time of each process is as follows -

Process   Wait Time : Service Time - Arrival Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
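These figures can be reproduced with a short sketch, assuming the arrival times (0-3) and execution times (5, 3, 8, 6) given in the SJN table below:

```python
def fcfs(arrival, burst):
    """First come, first served: run in arrival order; wait = start - arrival."""
    order = sorted(range(len(burst)), key=lambda i: arrival[i])
    t = 0
    wait = [0] * len(burst)
    for i in order:
        t = max(t, arrival[i])   # CPU may sit idle until the process arrives
        wait[i] = t - arrival[i]
        t += burst[i]
    return wait

print(fcfs([0, 1, 2, 3], [5, 3, 8, 6]))  # [0, 4, 6, 13], average 5.75
```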

Shortest Job Next (SJN)

 This is also known as Shortest Job First, or SJF.
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known
in advance.
 Impossible to implement in interactive systems where the required CPU
time is not known.
 The processor must know in advance how much time a process will take.
Given: table of processes with their arrival time and execution time.

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8

Waiting time of each process is as follows -

Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25


Priority Based Scheduling
 Priority scheduling is a non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.
 Each process is assigned a priority. The process with the highest
priority is executed first, and so on.
 Processes with the same priority are executed on a first come, first
served basis.
 Priority can be decided based on memory requirements, time requirements
or any other resource requirement.
Given: table of processes with their arrival time, execution time, and
priority. Here we consider 1 to be the lowest priority.

Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5

Waiting time of each process is as follows -

Process   Waiting Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
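The table above can be reproduced with a short non-preemptive priority scheduler. This is a sketch under the convention stated above that a larger priority number means higher priority (1 is the lowest):

```python
def priority_nonpreemptive(arrival, burst, prio):
    """Return per-process waiting times; larger prio value = higher priority."""
    n = len(burst)
    done = [False] * n
    wait = [0] * n
    t = 0
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        if not ready:                 # CPU idle until the next arrival
            t = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        i = max(ready, key=lambda j: prio[j])   # highest priority among ready
        wait[i] = t - arrival[i]
        t += burst[i]                 # runs to completion (non-preemptive)
        done[i] = True
    return wait

print(priority_nonpreemptive([0, 1, 2, 3], [5, 3, 8, 6], [1, 2, 1, 3]))
# [0, 10, 12, 2], average 6
```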

Shortest Remaining Time

 Shortest Remaining Time (SRT) is the preemptive version of the SJN
algorithm.
 The processor is allocated to the job closest to completion, but it can
be preempted by a newly ready job with a shorter time to completion.
 Impossible to implement in interactive systems where the required CPU
time is not known.
 It is often used in batch environments where short jobs need to be given
preference.

Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for a given time period, it is preempted and
another process executes for a given time period.
 Context switching is used to save the states of preempted processes.

Wait time of each process (with a quantum of 3) is as follows -

Process   Wait Time : Service Time - Arrival Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2        (6 - 2) + (14 - 9) + (20 - 17) = 12
P3        (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5

31. List parameters to be considered while selecting a scheduling algorithm

Answer: The objectives/principles which should be kept in view while
selecting a scheduling policy are the following -
1. Fairness - All processes should be treated the same. No process should
suffer indefinite postponement.
2. Maximize throughput - Attain maximum throughput: the largest possible
number of processes per unit time should be serviced.
3. Predictability - A given job should run in about the same predictable
amount of time and at about the same cost, irrespective of the load on the
system.
4. Maximum resource usage - The system resources should be kept busy.
Indefinite postponement should be avoided by enforcing priorities.
5. Controlled time - There should be control over the different times:
o Response time
o Turnaround time
o Waiting time
The objective should be to minimize the above-mentioned times.

32. Differentiate between preemptive and non-preemptive scheduling

Parameter         Preemptive Scheduling            Non-preemptive Scheduling
Basic             Resources (CPU cycles) are       Once resources (CPU cycles) are
                  allocated to a process for a     allocated to a process, the
                  limited time.                    process holds them until it
                                                   completes its burst time or
                                                   switches to the waiting state.
Interrupt         A process can be interrupted     A process cannot be interrupted
                  in between.                      until it terminates or its time
                                                   is up.
Starvation        If high-priority processes       If a process with a long burst
                  frequently arrive in the ready   time is running on the CPU, a
                  queue, a low-priority process    later process with a shorter
                  may starve.                      burst time may starve.
Overhead          Has the overhead of scheduling   Has no such overhead.
                  the processes.
Flexibility       Flexible.                        Rigid.
Cost              Cost associated.                 No cost associated.
CPU Utilization   High.                            Low.
Examples          Round Robin and Shortest         First Come First Serve and
                  Remaining Time First.            Shortest Job First.

33. What do you mean by scheduling? Discuss in brief types of scheduling


Ans same as no 27

35. Write Short note on Schedulers.

Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded
into executable memory at a time, and the loaded processes share the CPU
using time multiplexing.
There are three types of process scheduler.
There are three types of process scheduler.

1. Long-term or job scheduler:

It brings new processes to the 'Ready' state. It controls the degree of
multiprogramming, i.e., the number of processes present in the ready state
at any point of time. It is important that the long-term scheduler makes a
careful selection of both I/O-bound and CPU-bound processes: I/O-bound
tasks spend most of their time in input and output operations, while
CPU-bound processes spend their time on the CPU. The job scheduler
increases efficiency by maintaining a balance between the two.

2. Short-term or CPU scheduler:

It is responsible for selecting one process from the ready state and
scheduling it into the running state. Note: the short-term scheduler only
selects the process; it does not load the process onto the CPU. This is
where all the scheduling algorithms are used. The CPU scheduler is
responsible for ensuring there is no starvation due to processes with high
burst times.
The dispatcher is responsible for loading the process selected by the
short-term scheduler onto the CPU (ready to running state). Context
switching is done by the dispatcher only. A dispatcher does the following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
3. Medium-term scheduler :
It is responsible for suspending and resuming the process. It mainly
does swapping (moving processes from main memory to disk and
vice versa). Swapping may be necessary to improve the process
mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up.
It is helpful in maintaining a perfect balance between the I/O bound
and the CPU bound. It reduces the degree of multiprogramming.
41. Write about Semaphores.
Semaphores are integer variables that are used to solve the critical section problem
by using two atomic operations, wait and signal that are used for process
synchronization.
The definitions of wait and signal are as follows −

 Wait

The wait operation decrements the value of its argument S if it is
positive. If S is zero or negative, the caller waits (busy-waits in this
definition) until S becomes positive.

wait(S)
{
    while (S <= 0)
        ;   // busy wait
    S--;
}

 Signal

The signal operation increments the value of its argument S.

signal(S)
{
    S++;
}

Types of Semaphores
There are two main types of semaphores, i.e. counting semaphores and binary
semaphores. Details about these are given as follows -

 Counting Semaphores

These are integer-valued semaphores with an unrestricted value domain. They
are used to coordinate resource access, where the semaphore count is the
number of available resources. If resources are added, the semaphore count
is automatically incremented, and if resources are removed, the count is
decremented.

 Binary Semaphores

Binary semaphores are like counting semaphores, but their value is
restricted to 0 and 1. The wait operation only succeeds when the semaphore
is 1, and the signal operation succeeds when the semaphore is 0. Binary
semaphores are sometimes easier to implement than counting semaphores.
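The wait/signal pair above can be demonstrated with Python's threading.Semaphore (which blocks instead of busy-waiting). In this illustrative sketch, a semaphore initialized to 0 makes the consumer wait until the producer signals that data is ready:

```python
import threading

ready = threading.Semaphore(0)   # count starts at 0: "no data yet"
data = []
out = []

def producer():
    data.append(42)
    ready.release()              # signal(S): S++, wakes a waiting consumer

def consumer():
    ready.acquire()              # wait(S): blocks until S > 0, then S--
    out.append(data[0])          # safe: runs only after the producer signaled

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()

print(out)  # [42] - the consumer waited for the producer's signal
```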

Advantages of Semaphores
Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some other
methods of synchronization.
 There is no resource wastage because of busy waiting in semaphores as
processor time is not wasted unnecessarily to check if a condition is fulfilled to
allow a process to access the critical section.
 Semaphores are implemented in the machine independent code of the
microkernel. So they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, so the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to a
loss of modularity. This happens because the wait and signal operations
prevent the creation of a structured layout for the system.
 Semaphores may lead to priority inversion, where low-priority processes
may access the critical section first and high-priority processes later.

49. Differentiate between deadlock and starvation.

Difference between Deadlock and Starvation:

S.NO  Deadlock                              Starvation
1.    All processes keep waiting for each   High-priority processes keep
      other to complete and none get        executing and low-priority
      executed.                             processes are blocked.
2.    Resources are blocked by the          Resources are continuously
      processes.                            utilized by high-priority
                                            processes.
3.    Necessary conditions: mutual          Priorities are assigned to the
      exclusion, hold and wait, no          processes.
      preemption, circular wait.
4.    Also known as circular wait.          Also known as livelock.
5.    It can be prevented by avoiding the   It can be prevented by aging.
      necessary conditions for deadlock.

67. Any example related to deadlock can be
