In the bottom-up view, the operating system provides an orderly and controlled
allocation of the processors, memories, and I/O devices among the various
programs competing for them.
In time multiplexing, different programs take turns using a resource: first one
program uses the resource, then the next one ready in the queue, and so on. For
example, several programs share a printer one after another.
In space multiplexing, instead of the programs taking turns, each one gets a
part of the resource. For example, main memory is divided among several running
programs, so each one can be resident at the same time.
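As a rough sketch (not real OS code; the job records and time slice below are made up for illustration), the two sharing styles can be mimicked in Python:

```python
from collections import deque

def time_multiplex(jobs, slice_ms=10):
    """Give each ready job the resource in turn (one after another)."""
    order = []
    queue = deque(jobs)
    while queue:
        job = queue.popleft()          # first ready job gets the resource
        order.append(job["name"])
        job["remaining"] -= slice_ms
        if job["remaining"] > 0:       # not finished: rejoin the queue
            queue.append(job)
    return order

def space_multiplex(total_memory, programs):
    """Split one resource (memory) into parts, one per resident program."""
    share = total_memory // len(programs)
    return {p: share for p in programs}

print(time_multiplex([{"name": "A", "remaining": 20}, {"name": "B", "remaining": 10}]))
print(space_multiplex(1024, ["A", "B", "C", "D"]))
```

The first call shows turn-taking (A, then B, then A again); the second shows every program holding its own share of memory at the same time.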
The diagram given below shows the functioning of OS as a resource manager −
2. Explain the different types of operating system. OR List out types of
operating system and explain batch OS and time-sharing OS in brief. OR What
is an Operating System? Explain any one type of operating system.
Advantages of RTOS:
Maximum utilization: maximum utilization of devices and the system, and thus
more output from all the resources.
Task shifting: the time assigned for shifting between tasks in these systems
is very small. For example, older systems take about 10 microseconds to shift
from one task to another, while the latest systems take about 3 microseconds.
Focus on application: the focus is on running applications, with less
importance given to applications waiting in the queue.
Real-time operating systems in embedded systems: since the programs are
small, an RTOS can also be used in embedded systems such as transport and
others.
Error free: these types of systems are largely error-free.
Memory allocation: memory allocation is best managed in these types of
systems.
Disadvantages of RTOS:
Limited tasks: very few tasks run at the same time, and the system
concentrates on only a few applications to avoid errors.
Heavy use of system resources: an RTOS sometimes requires substantial system
resources, which are expensive as well.
Complex algorithms: the algorithms are very complex and difficult for the
designer to write.
Device drivers and interrupt signals: an RTOS needs specific device drivers
and interrupt signals so that it can respond to interrupts as quickly as
possible.
Thread priority: it is not good to set thread priorities, as these systems
rarely switch tasks.
Examples of real-time systems include scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic
control systems, etc.
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the users.
An operating system manages the communication between the user and the device drivers.
An I/O operation means a read or write operation with any file or any specific I/O
device, and the operating system provides access to the required I/O device when required.
Communication
In the case of distributed systems, which are collections of processors that do not
share memory, peripheral devices, or a clock, the operating system manages the
communication between all the processes. Multiple processes communicate with
one another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention
and security.
Error handling
Errors can occur anytime and anywhere: in the CPU, in I/O devices, or in the
memory hardware. The operating system constantly checks for possible errors and
takes appropriate action to ensure correct and consistent computing.
Resource Management
In a multi-user or multi-tasking environment, resources such as main memory,
CPU cycles, and file storage must be allocated to each user or job; the
operating system decides how these resources are allocated and reclaimed.
Protection
Considering a computer system having multiple users and concurrent execution of
multiple processes, the various processes must be protected from each other's
activities.
Protection refers to a mechanism or a way to control the access of programs,
processes, or users to the resources defined by a computer system; the
operating system ensures that all such access is controlled.
The monolithic operating system is a very basic operating system in which file
management, memory management, device management, and process management
are directly controlled within the kernel. The kernel can access all the resources present
in the system. In monolithic systems, each component of the operating system is
contained within the kernel. Operating systems that use the monolithic
architecture were first used in the 1970s.
The monolithic operating system is also known as the monolithic kernel. This is an old
operating system used to perform small tasks like batch processing and time-sharing
tasks in banks. The monolithic kernel acts as a virtual machine that controls all
hardware parts.
It is different from a microkernel, which has limited tasks. A microkernel is divided into
two parts, kernel space, and user space. Both parts communicate with each other
through IPC (Inter-process communication). Microkernel's advantage is that if one
server fails, then the other server takes control of it.
Monolithic kernel
A monolithic kernel is an operating system architecture where the entire operating
system is working in kernel space. The monolithic model differs from other operating
system architectures, such as the microkernel architecture, in that it alone defines a
high-level virtual interface over computer hardware.
A set of primitives or system calls implement all operating system services such as
process management, concurrency, and memory management. Device drivers can be
added to the kernel as modules.
o The execution of the monolithic kernel is quite fast as the services such as
memory management, file management, process scheduling, etc., are
implemented under the same address space.
o A process runs completely in single address space in the monolithic kernel.
o The monolithic kernel is a static single binary file.
The operating system can be implemented with the help of various structures. The
structure of the OS depends mainly on how the various common components of the
operating system are interconnected and melded into the kernel. Depending on this,
we get the different structures of the operating system.
The layered structure approach breaks up the operating system into different layers
and retains much more control on the system. The bottom layer (layer 0) is the
hardware, and the topmost layer (layer N) is the user interface. The layers are
designed so that each layer uses only the functions of the lower-level layers.
This simplifies debugging: if the lower-level layers have already been debugged
and an error occurs while debugging a layer, the error must be in that layer,
since all the layers below it are known to be correct.
The main disadvantage of this structure is that the data needs to be modified and
passed on at each layer, which adds overhead to the system. Moreover, careful
planning of the layers is necessary as a layer can use only lower-level layers. UNIX is
an example of this structure.
All the layers can be defined separately and interact with each other as required.
It is also easier to create, maintain, and update the system if it is done in the
form of layers. A change in one layer's specification does not affect the rest of
the layers.
Each of the layers in the operating system can only interact with the above and below
layers. The lowest layer handles the hardware, and the uppermost layer deals with the
user applications.
o A particular layer can access all the layers below it, but it cannot access
the layers above it. That is, layer n-1 can access all the layers from n-2
down to 0, but it cannot access layer n.
o Layer 0 deals with allocating the processes, switching between processes when
interruptions occur or the timer expires. It also deals with the basic
multiprogramming of the CPU.
Thus, if the user layer wants to interact with the hardware layer, the request
travels down through all the layers from n-1 to 1. Each layer must be designed
and implemented so that it needs only the services provided by the layers below it.
There are six layers in the layered operating system. A diagram demonstrating these
layers is as follows:
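A minimal sketch of the layered idea (the layer names here are illustrative and fewer than the six layers of the real design): a request from the top layer can reach the hardware only by passing through each layer below it.

```python
class Layer:
    """Each layer may call only the layer directly below it."""
    def __init__(self, name, below=None):
        self.name = name
        self.below = below

    def request(self, trail=None):
        trail = trail or []
        trail.append(self.name)
        if self.below is not None:      # pass the request downward
            return self.below.request(trail)
        return trail                    # layer 0 (hardware) answers

hardware = Layer("hardware")                    # layer 0
cpu_sched = Layer("cpu scheduling", hardware)
memory = Layer("memory management", cpu_sched)
io = Layer("device drivers", memory)
user = Layer("user interface", io)              # layer N

print(user.request())   # the request visits every layer down to hardware
```

Because each layer holds a reference only to the layer below it, a call from the user interface cannot skip straight to the hardware, mirroring the restriction described above.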
5. What is the function of Kernel? Explain System Calls with Example.
1) Process management
The creation, execution, and termination of processes go on inside the system
whenever it is in the ON state. A process contains all the information about
the task that needs to be done, so for executing any task, a process is created
inside the system. At any time there are many processes live inside the system.
Managing all these processes is very important to avoid deadlocks and to keep
the system functioning properly, and this management is handled by the Kernel.
2) Memory management
Whenever a process is created and executed, it occupies memory, and when it gets
terminated, the memory can be used again. But the memory should be handled by
someone so that the released memory can be assigned again to the new processes.
This task is also done by the Kernel. The kernel keeps track of which part of
the memory is currently allocated and which part is available to be allocated
to other processes.
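The bookkeeping described above can be sketched as a toy frame tracker; all the names and sizes here are illustrative, not a real kernel allocator:

```python
class KernelMemory:
    """Toy tracker of which memory frames are allocated and which are free."""
    def __init__(self, frames):
        self.owner = [None] * frames          # None means the frame is free

    def allocate(self, pid, need):
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < need:
            return None                       # not enough free memory
        for i in free[:need]:
            self.owner[i] = pid
        return free[:need]

    def release(self, pid):
        """When a process terminates, its frames become reusable."""
        for i, o in enumerate(self.owner):
            if o == pid:
                self.owner[i] = None

mem = KernelMemory(4)
mem.allocate("P1", 3)
mem.release("P1")                 # freed memory can be assigned again
print(mem.allocate("P2", 4))      # P2 now gets all four frames: [0, 1, 2, 3]
```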
3) Device Management
The Kernel also manages all the different devices which are connected to the system,
like the Input and Output devices, etc.
4) Interrupt Handling
While executing processes, there are conditions where tasks with higher
priority need to be handled first. In these cases, the kernel has to interrupt
the execution of the current process and handle the higher-priority task that
has arrived in between.
5) I/O Communication
As the Kernel manages all the devices connected to the system, it is also
responsible for handling all input and output exchanged through these devices.
So all the information that the system receives from the user, and all the
output provided to the user via different applications, is handled by the Kernel.
In computing, a system call is the programmatic way in which a computer
program requests a service from the kernel of the operating system it is
executed on. A system call is a way for programs to interact with the
operating system. A computer program makes a system call when it makes
a request to the operating system's kernel. System calls provide the services
of the operating system to user programs via the Application Program
Interface (API). They provide an interface between a process and the operating
system, allowing user-level processes to request services of the operating
system. System calls are the only entry points into the kernel; all programs
needing resources must use system calls.
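For instance, Python's os module exposes thin wrappers over Unix system calls; the file name used below is arbitrary, and this is only a sketch of a program asking the kernel for services:

```python
import os

# A program never touches the disk or the process table directly; it asks
# the kernel through system calls, here reached via Python's os module.
pid = os.getpid()                      # information maintenance: getpid()

fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open()
os.write(fd, b"hello via system calls\n")                        # write()
os.close(fd)                                                     # close()

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)                                          # read()
os.close(fd)
os.remove("demo.txt")

print(pid, data)
```

Every line of I/O above crosses the user/kernel boundary through exactly the categories listed: information maintenance, file management, and device handling.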
Services Provided by System Calls :
1. Process creation and management
2. Main memory management
3. File Access, Directory and File system management
4. Device handling(I/O)
5. Protection
6. Networking, etc.
Types of System Calls : There are 5 different categories of system
calls –
1. Process control: end, abort, create, terminate, allocate
and free memory.
2. File management: create, open, close, delete, read file
etc.
3. Device management
4. Information maintenance
5. Communication
Examples of Windows and Unix System Calls –
Category                  Windows                           Unix
Process Control           CreateProcess()                   fork()
                          ExitProcess()                     exit()
                          WaitForSingleObject()             wait()
File Manipulation         CreateFile()                      open()
                          ReadFile()                        read()
                          WriteFile()                       write()
                          CloseHandle()                     close()
Device Manipulation       SetConsoleMode()                  ioctl()
                          ReadConsole()                     read()
                          WriteConsole()                    write()
Information Maintenance   GetCurrentProcessID()             getpid()
                          SetTimer()                        alarm()
                          Sleep()                           sleep()
Communication             CreatePipe()                      pipe()
                          CreateFileMapping()               shmget()
                          MapViewOfFile()                   mmap()
Protection                SetFileSecurity()                 chmod()
                          InitializeSecurityDescriptor()    umask()
                          SetSecurityDescriptorGroup()      chown()
Pointer
Pointer points to another process control
block. Pointer is used for maintaining the
scheduling list.
Process State
Process state may be new, ready, running,
waiting and so on.
Program Counter
Program Counter indicates the address of
the next instruction to be executed for this
process.
CPU registers
CPU registers include general-purpose registers,
stack pointers, index registers, accumulators,
etc. The number and type of registers depend
entirely on the computer architecture.
Memory management information
This information may include the value of the
base and limit registers, the page tables, or
the segment tables, depending on the memory
system used by the operating system. This
information is useful for deallocating the
memory when the process terminates.
Accounting information
This information includes the amount of
CPU and real time used, time limits, job or
process numbers, account numbers etc.
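The PCB fields listed above can be sketched as a simple record; the field names here are illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Process Control Block: one record per process, as described above."""
    pid: int
    state: str = "new"                # new, ready, running, waiting, ...
    program_counter: int = 0          # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    base: int = 0                     # memory-management information
    limit: int = 0
    cpu_time_used: float = 0.0        # accounting information
    next_pcb: "PCB | None" = None     # pointer used by the scheduling list

p = PCB(pid=1)
p.state = "ready"
print(p.pid, p.state, p.program_counter)
```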
2. Process: A process takes more time to terminate. Thread: A thread takes
less time to terminate.
8. Process: A process is called a heavyweight process. Thread: A thread is
lightweight, as each thread in a process shares code, data, and resources.
9. Process: Process switching uses an interface in the operating system.
Thread: Thread switching does not require a call to the operating system and
does not cause an interrupt to the kernel.
Multithreading models (mappings of user threads to kernel threads):
Many-to-one
One-to-one
Many-to-many
Two-level
Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new
process.
o Threads can share the common data, they do not need to use Inter- Process
communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.
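A small demonstration of the sharing point above: threads update one shared variable directly, with no inter-process communication (the thread and loop counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(10_000):
        with lock:               # threads share `counter` directly,
            counter += 1         # no inter-process communication needed

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000: all four threads updated the same shared variable
```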
Types of Threads
In the operating system, there are two types of threads: user-level threads and
kernel-level threads.
User-level thread
The operating system does not recognize the user-level thread. User threads can
be easily implemented, and they are implemented by the user. If a user performs
a blocking operation in a user-level thread, the whole process is blocked. The
kernel-level thread knows nothing about the user-level thread; the kernel
manages user-level threads as if they were single-threaded processes.
Example: Java threads.
1. User-level threads can be implemented more easily than kernel threads.
2. User-level threads can be used on operating systems that do not support
threads at the kernel level.
3. They are faster and more efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The user-level thread representation is very simple: the registers, PC,
stack, and mini thread control blocks are stored in the address space of the
user-level process.
7. It is simple to create, switch, and synchronize threads without the
intervention of the kernel.
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
Components of Threads
Any thread has the following components.
1. Program counter
2. Register set
3. Stack space
Process
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned
in the program.
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data. The following image shows a
simplified layout of a process inside main memory −
1. Stack
The process stack contains temporary data such as method/function parameters,
return addresses, and local variables.
2. Heap
This is memory dynamically allocated to the process during its run time.
3. Text
This includes the current activity, represented by the value of the program
counter and the contents of the processor's registers.
4. Data
This section contains the global and static variables.
A process needs resources to accomplish its task, for example:
CPU time
memory
disk space
network bandwidth
I/O access to network or disk
Mutex comes into the picture when two threads work on the same data at the
same time. It acts as a lock and is the most basic synchronization tool. When a
thread tries to acquire a mutex, it gains the mutex if it is available;
otherwise the thread is put to sleep. Mutual exclusion reduces latency and
busy-waiting by using queuing and context switches. Mutexes can be enforced at
both the hardware and software levels.
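A minimal illustration with Python's threading.Lock standing in for a mutex; a second acquire attempt fails while the lock is held (here a non-blocking attempt returns False instead of putting the caller to sleep):

```python
import threading

mutex = threading.Lock()                  # the most basic synchronization tool

mutex.acquire()                           # first caller gains the mutex
got_it = mutex.acquire(blocking=False)    # mutex unavailable while held
print(got_it)                             # False: a blocking caller would sleep

mutex.release()                           # once released, it can be acquired
print(mutex.acquire(blocking=False))      # True
```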
In round-robin scheduling, each ready task runs in turn in a cyclic queue for
a limited time slice. This algorithm also offers starvation-free execution of
processes.
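The cyclic queue can be sketched as follows (arrival times are ignored and the burst values are chosen for illustration):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Each ready task runs in turn for at most `quantum` time units."""
    queue = deque(bursts.items())
    finish, t = {}, 0
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        t += run
        if left - run > 0:
            queue.append((name, left - run))   # back of the cyclic queue
        else:
            finish[name] = t                   # every task finishes eventually

    return finish   # completion time of each task

print(round_robin({"P0": 5, "P1": 3, "P2": 8, "P3": 6}, quantum=3))
```

No task can be starved: each pass over the queue gives every remaining task one slice, so even the longest job (P2 here) makes steady progress.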
21.Explain context switching and context switching overhead with appropriate example.
Context switching is a technique used by the operating system to switch the
CPU from one process to another so that each can execute its function on the
CPUs of the system. When a switch is performed, the system stores the status of
the old running process in the form of its registers and assigns the CPU to a
new process to execute its tasks. While the new process is running, the
previous process waits in a ready queue. Execution of the old process later
resumes from the point where it was stopped. Context switching is what
characterizes a multitasking operating system, in which multiple processes
share the same CPU to perform multiple tasks without the need for additional
processors in the system.
As we can see in the diagram, initially, the P1 process is running on the CPU to execute
its task, and at the same time, another process, P2, is in the ready state. If an error or
interruption has occurred or the process requires input/output, the P1 process
switches its state from running to the waiting state. Before changing the state of the
process P1, context switching saves the context of the process P1 in the form of
registers and the program counter to the PCB1. After that, it loads the state of the P2
process from the ready state of the PCB2 to the running state.
22. Explain SJF process scheduling algorithm with example
Shortest Job First (SJF) is an algorithm in which the process having the
smallest execution time is chosen for the next execution. This scheduling
method can be preemptive or non-preemptive. It significantly reduces the
average waiting time for other processes awaiting execution. The full form of
SJF is Shortest Job First.
There are basically two types of SJF methods:
Non-Preemptive SJF
Preemptive SJF
Non-Preemptive SJF
In non-preemptive scheduling, once the CPU is allocated to a process, the
process holds it until it reaches a waiting state or terminates.
Consider the following processes, each having its own unique burst time and
arrival time.
Preemptive SJF
In preemptive SJF scheduling, jobs are put into the ready queue as they
arrive. The process with the shortest burst time begins execution. If a process
with an even shorter burst time arrives, the currently running process is
removed (preempted) from execution, and the shorter job is allocated the CPU.
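A minimal simulation of the non-preemptive variant; the four processes used here are illustrative (arrival, burst):

```python
def sjf_nonpreemptive(procs):
    """procs: {name: (arrival, burst)}; returns waiting time per process."""
    waiting, t, done = {}, 0, set()
    while len(done) < len(procs):
        ready = [(b, a, n) for n, (a, b) in procs.items()
                 if n not in done and a <= t]
        if not ready:                        # CPU idle until next arrival
            t = min(a for n, (a, b) in procs.items() if n not in done)
            continue
        burst, arrival, name = min(ready)    # smallest burst time runs next
        waiting[name] = t - arrival
        t += burst                           # runs to completion (no preemption)
        done.add(name)
    return waiting

print(sjf_nonpreemptive({"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}))
```

With these values, P0 runs first (it is alone at time 0), then the shortest remaining jobs in order P1, P3, P2, so the long job P2 waits longest.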
(Any example of preemptive SJF can be worked out by yourself in the same way.)
23. What is convoy effect in round robin algorithm ? explain with example
Consider four processes:

Process   Arrival Time   Burst Time
P0        0              5
P1        1              3
P2        2              8
P3        3              6

With FCFS, the long process P2 delays everything queued behind it (the convoy
effect). The waiting times are:
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13

With non-preemptive SJF, the service (start) times are P0 = 0, P1 = 5,
P2 = 14, P3 = 8, so the waiting times are:
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 14 - 2 = 12
P3: 8 - 3 = 5

With priority-based scheduling (priorities 1, 2, 1, 3, where a larger number
means higher priority), the service times are P0 = 0, P1 = 11, P2 = 14,
P3 = 5, so the waiting times are:
P0: 0 - 0 = 0
P1: 11 - 1 = 10
P2: 14 - 2 = 12
P3: 5 - 3 = 2

With round-robin scheduling (quantum 3), every process gets the CPU at regular
intervals, so the convoy effect is avoided; for example, the waiting times
include:
P1: (3 - 1) = 2
P3: (9 - 3) + (17 - 12) = 11
Overhead: Preemptive scheduling has the overhead of scheduling the processes.
Non-preemptive scheduling does not have this overhead.
Examples: Examples of preemptive scheduling are Round Robin and Shortest
Remaining Time First. Examples of non-preemptive scheduling are First Come
First Serve and Shortest Job First.
Wait
wait(S) {
   while (S <= 0)
      ;        // busy wait
   S--;
}
Signal
signal(S) {
   S++;
}
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary
semaphores. Details about these are given as follows −
Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They
are used to coordinate resource access, where the semaphore count is the
number of available resources. If resources are added, the semaphore count is
automatically incremented, and if resources are removed, the count is
decremented.
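A short sketch using Python's threading.Semaphore as a counting semaphore; the count of 2 stands for two available printers, and the five worker threads are arbitrary:

```python
import threading

# A counting semaphore initialized to the number of available resources.
printers = threading.Semaphore(2)     # two identical printers

def use_printer(results, i):
    with printers:                    # wait(): count--, blocks at 0
        results.append(i)             # at most two threads are in here
                                      # signal(): count++ on exit

results = []
threads = [threading.Thread(target=use_printer, args=(results, i))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))    # all five threads eventually got a printer
```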
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted
to 0 and 1. The wait operation only succeeds when the semaphore is 1, and the
signal operation succeeds when the semaphore is 0. It is sometimes easier to
implement binary semaphores than counting semaphores.
Advantages of Semaphores
Some of the advantages of semaphores are as follows −
Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some other
methods of synchronization.
There is no resource wastage because of busy waiting in semaphores as
processor time is not wasted unnecessarily to check if a condition is fulfilled to
allow a process to access the critical section.
Semaphores are implemented in the machine independent code of the
microkernel. So they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −