OS Question Solution-1
Q9 Define Scheduler.
Ans. A scheduler is special system software that handles process scheduling. It selects jobs and decides which process
is to be executed.
The different types of schedulers are:
1. Long Term
2. Short Term
3. Medium Term
Q10 What are the types of Schedulers?
Ans. There are three types of process scheduler:
1. Long Term or Job Scheduler
It brings the new process to the ‘Ready Queue’.
It controls the number of processes present in ready state i.e., Degree of Multiprogramming.
It is responsible for the selection of the I/O bound processes and CPU bound processes. It increases the
efficiency by maintaining the balance between them.
2. Short Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it into the running state.
The scheduling algorithms are applied here.
It ensures that there is no starvation owing to high burst time processes.
The dispatcher loads the process selected by this scheduler; its work involves:
Switching from user mode to kernel mode.
Context switching.
Jumping to the proper location in the newly loaded program.
3. Medium Term Scheduler
It is responsible for suspending and resuming processes. It moves processes from main memory to disk and
vice versa. It is used to improve the process mix and to free up over-committed memory.
It decreases the degree of multiprogramming.
Q11 Define critical section.
Ans. It is the segment of code or the program that tries to access or modify shared resources.
The section just above it is called the Entry section; to enter the critical section, a process must pass through the
entry section.
The section just below it is the Exit section; the remaining code after the critical section forms the remainder section.
Q12 Define semaphores.
Ans. These are integer variables used to solve the critical section problem via two atomic operations, wait and
signal, which synchronize concurrent processes.
There are two types of semaphores,
1. Counting Semaphores
These semaphores have an unrestricted value domain. They are used to coordinate resource access, where
the semaphore count is the number of available resources. It is incremented or decremented as resources
are released or acquired.
2. Binary Semaphores
They are similar to counting semaphores, but their value is restricted to 0 and 1. The wait operation works
only when the semaphore is 1, and the signal operation succeeds only when it is 0.
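The wait and signal operations above map onto acquire and release in most thread libraries. A minimal sketch using Python's threading.Semaphore (a counting semaphore) to limit concurrent access to two identical resources; the worker function and counters are illustrative, not part of any standard API:

```python
import threading
import time

pool = threading.Semaphore(2)   # count = number of available resources
active = 0                      # workers currently holding the semaphore
peak = 0                        # highest value 'active' ever reached
lock = threading.Lock()         # protects the two counters above

def worker():
    global active, peak
    pool.acquire()              # wait(): decrements the count, blocks at 0
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)            # simulate using the resource
    with lock:
        active -= 1
    pool.release()              # signal(): increments the count

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= 2)  # True: at most 2 workers were ever inside at once
```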
Advantages of semaphores
They allow only one process into the critical section.
They follow the mutual exclusion principle strictly and are more efficient than some other synchronization
methods.
There is no wastage of resources, as a waiting process enters the critical section only when the checked
condition allows it.
They are machine independent, as they are implemented in the machine-independent code of the microkernel.
Disadvantages of semaphores
They are complicated, so the wait and signal operations must be implemented in the correct order to prevent
deadlocks.
They may lead to priority inversion (a low-priority process getting ahead of a high-priority one).
Q13 Name some classic problem of synchronization.
Ans. Some classic problem of synchronization,
1. Bounded-Buffer(Producer-Consumer) Problem:
In this problem, the producer tries to insert data and the consumer tries to remove it. The problem occurs
when both processes run simultaneously.
It can be tackled by creating two semaphores, full and empty, to keep track of the filled and free buffer
slots.
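The two-semaphore scheme can be sketched as follows, with an added mutex guarding the buffer itself; the buffer size and item count are arbitrary choices for the example:

```python
import threading
from collections import deque

N = 3                            # buffer capacity (assumed for illustration)
buffer = deque()
empty = threading.Semaphore(N)   # counts free slots
full = threading.Semaphore(0)    # counts filled slots
mutex = threading.Lock()         # guards the buffer itself

consumed = []

def producer():
    for item in range(6):
        empty.acquire()          # wait until a slot is free
        with mutex:
            buffer.append(item)
        full.release()           # signal: one more filled slot

def consumer():
    for _ in range(6):
        full.acquire()           # wait until an item exists
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal: one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)   # items arrive in production order: [0, 1, 2, 3, 4, 5]
```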
2. Dining-Philosophers Problem
In this problem, K philosophers sit around a circular table with one chopstick placed between each pair of
philosophers. A philosopher can eat only if he can pick up both chopsticks adjacent to him.
This problem deals with the allocation of limited resources.
3. Readers and Writers Problem
The problem occurs when many threads of execution try to access the same shared resource at a time. Some
threads may read while others may write.
4. Sleeping Barber Problem
This problem is based on a hypothetical barbershop with one barber.
When there are no customers, the barber sleeps in his chair. When a customer enters, he wakes the barber up.
While the barber is serving a customer, a newly arriving customer takes an empty seat if one exists, or leaves
if there is no vacancy.
Q14 What is the use of cooperating processes?
Ans. Cooperating processes are those which can affect or get affected by other processes running on the system. They
can share data with each other.
Use of Cooperating process,
1. Modularity
It involves dividing complicated tasks into smaller subtasks. These subtasks can be completed by different
processes.
2. Information Sharing
Sharing of information between multiple processes can be accomplished using cooperating processes. This may
include access to the same files.
3. Convenience
There are many tasks that a user needs to do such as compiling, printing, editing, etc. It is convenient if these
can be managed by cooperating processes.
4. Computation Speedup
Subtasks of a single task can be performed in parallel using cooperating processes. This increases the
computation speed of the program.
Q15 Define race condition.
Ans. A race condition is an undesirable situation that occurs inside a critical section. It happens when the result of
executing multiple threads depends on the order in which they execute.
Race conditions can be avoided by treating the critical section as an atomic instruction or by synchronizing the
threads properly using locks.
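A sketch of avoiding the race with a lock: four threads increment a shared counter inside a critical section. Without the lock, lost updates could make the final value smaller than expected:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # critical section executes atomically
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: no updates are lost when the lock is held
```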
Q16 What are the requirements that a solution to the critical section problem must satisfy?
Ans. The requirements of a solution to the critical section problem are,
1. Mutual Exclusion
It implies that only one process can be inside the critical section at any time. If any other process requires the
critical section, it must wait until it is freed by the current process.
2. Progress
It implies that if a process is not using the critical section, then it should not stop any other process from
accessing it. The process should enter the critical section if it is free.
3. Bounded Waiting
It means that each process must have a limited waiting time for other processes. The process should not take
forever in the critical section.
Long Answer Type
Q1 Define Operating System and list the basic services provided by operating system.
Ans. An Operating System is a software that acts as an interface between computer hardware components and the user.
It provides the services to execute the program in convenient manner to the user.
It provides program an environment to execute.
The basic services provided by OS are:
1. Process Management
A process includes the complete execution of the task. Major activities are Loading of program into memory,
Executes the program, Handles the execution of the program, Provides the mechanism for Process
synchronization, communication, and deadlock handling.
It is also responsible for process creation, deletion, suspension, resumption, synchronization and
communication.
2. I/O System Management
It comprises I/O devices and their corresponding driver software. It hides the peculiarities of specific
hardware devices from the users. It manages the communication between the user and device drivers.
Operating system provides the access to required I/O device when needed.
3. File Management
A file is a collection of related information defined by its creator. It generally represents program and data.
The OS is responsible for creation and deletion of files and directories.
Mapping of file to secondary storage
Backup of file on secondary storage
4. Memory Management
Memory is a large array of words or bytes, with its own addresses. It is a repository of quickly accessible data.
The OS in memory management,
Keeps track of the part of memory currently in use.
Decides which process will load memory space when the space is available.
Allocation and Deallocation of memory space as needed.
5. Secondary Storage Management
Main memory is too small to store all data and programs permanently, so the computer uses secondary
storage to back up main memory.
Here the OS:
It manages the free space in memory i.e., allocation of the storage.
It also manages disk scheduling.
6. Device Management
An OS manages devices communication via their respective drivers.
It keeps the track of all devices. The program responsible for that is known as I/O controller.
It manages the allocation of device to the process and allocation of time as well.
It allocates and deallocates devices efficiently so that no process waits longer than necessary.
7. Security Management
The OS protects the confidential data stored in the system and does not allow unauthorized persons to
access it, by means of passwords or other similar techniques. It also protects the system from malware
attacks and provides a strong firewall.
Q2. How will you differentiate among the following types of OS by defining their essential properties?
Ans. a) Time Sharing system
A time-shared operating system uses CPU scheduling and multi-programming to provide each user with a
small portion of a shared computer at a time.
Each user has at least one separate program in memory.
It requires the system to send interrupt signals to the CPU so that the memory of one job can be protected
from interference by other jobs.
Advantages
Each task gets equal opportunity.
Less chance of duplication of the software.
CPU idle time can be reduced.
Disadvantages
Problem with the reliability.
Risk with the security and integrity of the data. Data communication problem may occur.
b) Parallel system
It speeds up the execution of programs by dividing them into multiple segments.
It is used for dealing with multiple processors simultaneously by using computer resources.
It includes a single computer with multiple processors as well as several computers connected by a network to
form a cluster of parallel processing.
It can be further classified by instruction and data streams (Flynn's taxonomy) into:
SISD, SIMD, MISD, MIMD.
Advantages
It saves time and allows simultaneous execution.
Solve large complex problem of the OS
Faster as compared to other OS
Disadvantages
Its architecture is very complex.
High cost since more resource is used for synchronization, data transfer, thread, etc.
Huge power consumption and high maintenance.
c) Distributed system
It is a model where distributed applications run on multiple computers linked by a communications network.
Each system has its own memory.
It is also known as loosely coupled systems.
The system communicates with one another through various communication lines.
There are mainly two types of Distributed system: Client Server Systems and Peer to Peer Systems
Advantages
Load on the system decreases.
Size of the system can be set according to requirements.
Fault in one system won’t affect the network.
Disadvantages
Cost for setup is more
Programming is complex
d) Real Time system
It is a multitasking OS intended for applications with fixed deadlines (real-time computing).
It includes some small embedded systems, automobile engine controller, large scale computing systems, etc.
The real time operating system can be classified in two categories:
1) Hard RTS
This system guarantees that critical tasks are completed on time.
It requires that all delays in the system be bounded, from the retrieval of stored data to the time the OS
takes to finish any request made to it.
These time constraints dictate the facilities that are available in hard real-time systems.
e.g., a robot welding a car body, etc.
2) Soft RTS
It is less restrictive type of Real time system.
In this system, a critical task gets priority over other tasks; missing an occasional deadline is tolerated.
It can be used with other systems as it is less restricted to the time constraints.
e.g., Multimedia, digital audio system etc.
e) Batch system
This OS accepts more than one job at a time.
These jobs are batched or grouped together according to their similar requirements.
The sorting of jobs is done by the computer operator.
This OS does not interact with the user directly.
It allows only one program at a time and the scheduling of jobs according to priority and required resources is
done by OS.
Advantages
The processor knows the time required for any job to complete.
Multiple users can share the batch systems.
Sequential access to all tasks.
Disadvantages
Systems are hard to debug.
Other jobs have to wait for the jobs ahead of it in the queue.
Q3 Describe the fields in a Process Control Block(PCB). What is Switching Overhead?
Ans. A Process Control Block is a data structure maintained by the OS for every process. It is identified by an integer
process ID. It contains,
1. Process State
It stores the current state of the process i.e., whether it is ready, running or waiting.
A process goes from different states from its creation to completion. The states are,
New, Ready, Running, Waiting, Terminated
2. Process Privileges/Priority
It is required to allow or disallow the access to the system resources.
It is a numeric value that represents the priority of each process.
It gets assigned to the process at the time of creation of PCB.
3. Process ID
When a new process is created by the user the OS assigns an ID to it.
It is the unique identification for each of the process in the OS.
This ID helps in distinguishing one process from the other process in the system.
4. Pointer
It is a pointer to parent process.
It contains the address of the next PCB, which is in ready state.
It helps in maintaining the hierarchical control flow between the Parent and Child processes.
5. Program Counter
It is a pointer to the address of the next instruction to be executed.
This attribute contains the address to it only.
6. CPU registers
It is a quickly accessible small size location available to the CPU.
These are the registers where the process context needs to be stored for execution in the running state.
7. CPU scheduling information
It includes the priority of the process and other scheduling information required to schedule the process.
8. Memory management information
This includes the information of page table, memory limits, segment table depending on memory used by OS.
9. Accounting information
It includes the amount of CPU used for the process execution, time limits, execution ID, etc.
10. IO status information
This includes a list of I/O devices allocated to the process.
Also the devices which are required by that process during its execution.
Context switching gives the user the impression that the system has multiple CPUs by interleaving the execution of
multiple processes.
It is considered overhead because the CPU remains idle during the switch. It also leads to frequent flushing of the
Translation Lookaside Buffer (TLB) and the cache.
Q4 What is thread? Explain classical thread model.
Ans. Thread is a sequential flow of tasks within a process.
It is used to increase the performance of the applications.
Each thread has its own program counter, stack and set of registers.
Threads are also termed lightweight processes, as they share common resources.
There are four classical thread models:
1. User Level Single Thread Model
Each process contains a single thread and it is the single process itself.
The process table contains an entry of every process by maintaining its PCB.
2. User Level Multi Thread Model
Each process contains multiple threads.
All the threads are scheduled by a thread library at user level.
Thread switching is independent of OS which can be done within a process.
3. Kernel Level Single Thread Model
Each process contains a single thread only but the thread used here is kernel level thread.
Process table works as thread table.
4. Kernel Level Multi Thread Model
Thread scheduling is done at kernel level.
If a thread blocks, another thread can be scheduled without blocking the whole process.
Thread scheduling is slower than the user level thread scheduling.
Q5 Explain and differentiate between user level and kernel level thread.
Ans.
User Level Thread
User threads are implemented by users.
The OS does not recognize user level threads.
Implementation is easy.
Context switch time is less and requires no hardware support.
Blocking of one user level thread blocks the entire process.
A multithreaded application cannot take advantage of multiprocessing.
Creation and management are quicker than for kernel level threads.
Any OS can support user level threads.
Advantages
These threads are simple and quick.
They perform better, as system calls are not required for thread creation.
Switching doesn't need kernel privileges.
Disadvantages
Multithreaded applications cannot use multiprocessing.
Blocking of a single thread can halt the entire process.
Kernel Level Thread
Kernel threads are implemented by the OS.
These threads are recognized by the OS.
Implementation is complicated.
Context switch time is more and requires hardware support.
Blocking of one thread doesn't block other threads; another thread can continue its execution.
The kernel itself can be multithreaded.
These threads take more time to create and manage.
Kernel level threads are OS specific.
Advantages
Multithreading of kernel routines.
Scheduling of multiple threads onto different processors.
Disadvantages
Transferring control from one thread to another within a process requires a switch to kernel mode.
Kernel threads take more time in creation.
Q6 List the main difference and similarities between threads and process.
Ans.
Process
It is any program in execution.
It takes more time in creation and termination.
It also takes more time for context switching.
It is less efficient in terms of communication.
A process gets isolated memory.
Switching requires interaction with the OS.
Changes to a parent process do not affect child processes.
A system call is involved in its creation.
Processes do not share data with each other.
Thread
It is a segment of a process.
It takes less time in creation and termination.
It takes less time for context switching.
It is more efficient in terms of communication.
Threads share memory.
Switching does not require calling the OS.
Since all threads share the address space, changes to the main thread can affect other threads.
No system calls are required.
Threads share data with each other.
Similarities:
Threads share the CPU and only one thread is active at a time, just as with processes.
Like processes, threads within a process execute sequentially.
Like processes, a thread can create child threads.
Similar to processes, if a thread is blocked, another thread can run.
Q7 What are various criteria for a good process scheduling algorithms? Differentiate between preemptive and non-
preemptive scheduling algorithms with example.
Ans. Criteria for a good process scheduling algorithms:
1. CPU utilization
An algorithm should be designed so that CPU remains as busy as possible. It should make efficient use of CPU.
2. Throughput
It is the amount of work completed in a unit of time, i.e., the number of processes that complete execution
per unit time.
3. Response time
It is the time taken to start responding to the request. The scheduler must aim to minimize the response time
for interactive users.
4. Turnaround time
It refers to the time between the moment of submission of a job and the time of its completion.
5. Waiting time
It is the time a job waits for resource allocation when several jobs are being completed in multiprogramming.
The algorithm must minimize the waiting time.
6. Fairness
A good scheduling algorithm should make sure that each process gets its fair share of the CPU.
Preemptive Scheduling
The resources are allocated to a process for a limited time.
A process can be interrupted in between.
If high priority processes arrive frequently, a low priority process may starve.
It has the overhead of scheduling the processes.
The CPU is utilized effectively.
Waiting time and response time are less.
e.g., Round Robin, Shortest Remaining Time First.
Non-Preemptive Scheduling
A process holds the allocated resources until it completes its burst time.
A process cannot be interrupted until it terminates or its time is exceeded.
If the running process has a long burst time, later-arriving processes may starve.
It has no scheduling overhead.
The CPU is not utilized as well.
Waiting time and response time are high.
e.g., FCFS and SJF.
Q8 Explain the effect of increasing the time quantum to an arbitrary large number and decreasing the time quantum to an
arbitrary small number for Round Robin scheduling algorithm with suitable examples?
Ans. If the time quantum is made arbitrarily large, every process finishes within its first quantum and Round Robin
degenerates into FCFS; response time for later-arriving processes becomes very poor. If the time quantum is made
arbitrarily small, processes are preempted so often that context switch overhead dominates and much of the CPU
time is wasted on switching instead of useful work. For example, with four processes of 5 ms each, a 20 ms quantum
behaves exactly like FCFS, while a quantum comparable to the context switch cost wastes a large fraction of the CPU.
Q9 Consider following processes with length of CPU burst time in milliseconds.
Processes Burst Time
P1 5
P2 10
P3 2
P4 1
All processes arrived in order P1, P2, P3, P4, all at time zero.
a) Draw Gantt charts illustrating execution of these processes for SJF and Round Robin (Quantum = 1).
Ans.
SJF
Gantt chart: | P4 | P3 | P1 | P2 | with boundaries at 0, 1, 3, 8, 18.
Round Robin (Quantum = 1)
Gantt chart: | P1 | P2 | P3 | P4 | P1 | P2 | P3 | P1 | P2 | P1 | P2 | P1 | P2 (12-18) |;
P4 completes at 4, P3 at 7, P1 at 12 and P2 at 18.
b) Calculate waiting time for each process for each scheduling algorithm.
SJF: P1 = 3, P2 = 8, P3 = 1, P4 = 0.
Round Robin: P1 = 7, P2 = 8, P3 = 5, P4 = 3.
c) Calculate average waiting time for each scheduling algorithm.
SJF: (3 + 8 + 1 + 0) / 4 = 3 ms.
Round Robin: (7 + 8 + 5 + 3) / 4 = 5.75 ms.
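The SJF and Round Robin results for this process set can be checked with a short simulation. This is a sketch, assuming all processes arrive at time 0 in ready-queue order P1..P4; waiting time is computed as completion time minus burst time:

```python
# bursts from the question; all processes arrive at time 0
bursts = {"P1": 5, "P2": 10, "P3": 2, "P4": 1}

# SJF: run jobs in increasing burst order; waiting time = start time
def sjf_waiting(bursts):
    wait, clock = {}, 0
    for name, b in sorted(bursts.items(), key=lambda kv: kv[1]):
        wait[name] = clock
        clock += b
    return wait

# Round Robin with quantum q: waiting time = completion - burst
def rr_waiting(bursts, q=1):
    remaining = dict(bursts)
    order = list(bursts)
    clock, done = 0, {}
    while order:
        name = order.pop(0)
        run = min(q, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = clock
        else:
            order.append(name)          # requeue unfinished process
    return {n: done[n] - bursts[n] for n in bursts}

print(sjf_waiting(bursts))  # {'P4': 0, 'P3': 1, 'P1': 3, 'P2': 8} -> average 3.0
print(rr_waiting(bursts))   # P1: 7, P2: 8, P3: 5, P4: 3 -> average 5.75
```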
Q10 Consider following processes with length of CPU burst time in milliseconds.
Processes Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
All processes arrived in order P1, P2, P3, P4, P5 all at time zero.
Ans.
1) Draw Gantt charts illustrating execution of these processes for SJF, non-preemptive Priority (smaller priority
number implies a higher priority) & Round Robin (Quantum = 1).
SJF
Gantt chart: | P2 | P4 | P3 | P5 | P1 | with boundaries at 0, 1, 2, 4, 9, 19 (the P2/P4 tie is broken by arrival order).
Priority (Non-Preemptive)
Gantt chart: | P2 | P5 | P1 | P3 | P4 | with boundaries at 0, 1, 6, 16, 18, 19 (the P1/P3 tie is broken by arrival order).
Round Robin (Quantum = 1)
Gantt chart: | P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 | P1 (14-19) |;
completion times are P2 = 2, P4 = 4, P3 = 7, P5 = 14, P1 = 19.
2) Calculate Turnaround time for each process for scheduling algorithm in part (1).
SJF: P1 = 19, P2 = 1, P3 = 4, P4 = 2, P5 = 9 (average 7 ms).
Priority: P1 = 16, P2 = 1, P3 = 18, P4 = 19, P5 = 6 (average 12 ms).
Round Robin: P1 = 19, P2 = 2, P3 = 7, P4 = 4, P5 = 14 (average 9.2 ms).
3) Calculate waiting time for each scheduling algorithm in part (1),
SJF: P1 = 9, P2 = 0, P3 = 2, P4 = 1, P5 = 4 (average 3.2 ms).
Priority: P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1 (average 8.2 ms).
Round Robin: P1 = 9, P2 = 1, P3 = 5, P4 = 3, P5 = 9 (average 5.4 ms).
Q11 Briefly explain and compare fixed and dynamic memory partitioning schemes.
Ans. According to the size of partitions, multiple-partition schemes are divided into two types:
Fixed Partitioning
The main memory is divided into fixed-size partitions.
Only one process can be placed in a partition.
It does not utilize the main memory effectively.
There can be internal or external fragmentation.
The degree of multiprogramming is less.
It is easy to implement.
There is a limitation on the size of a process.
Variable/Dynamic Partitioning
The main memory is not divided into fixed-size partitions.
Each process is allocated exactly the chunk of memory it needs.
It utilizes the main memory well.
There can be external fragmentation.
The degree of multiprogramming is higher.
It is hard to implement as compared to the fixed scheme.
There is no limitation on the size of a process.
Q12 Explain how paging supports virtual memory. With a neat diagram, explain how the logical address is translated
into a physical address.
Ans. Paging supports virtual memory:
The virtual memory gets divided into equal-size pages.
Main memory is divided into equal-size page frames; each frame can hold any page from virtual memory.
When the CPU wants to access a page, it first looks into main memory. If the page is found there, it is called a hit;
if not, it is a page fault, and the page has to be loaded from virtual memory (disk) into main memory. There are
different page replacement schemes such as FIFO, LRU, LFU, etc.
Q14 Evaluate SJF CPU scheduling algorithm for given problem
Processes P1 P2 P3 P4
Burst Time 8 4 9 5
Arrival Time 0 1 2 3
Ans. Assuming non-preemptive SJF: at time 0 only P1 is present, so it runs first; afterwards the shortest job among
the arrived processes is picked.
Gantt chart: | P1 | P2 | P4 | P3 | with boundaries at 0, 8, 12, 17, 26.
Waiting time: P1 = 0, P2 = 7, P3 = 15, P4 = 9 (average 7.75 ms).
Turnaround time: P1 = 8, P2 = 11, P3 = 24, P4 = 14 (average 14.25 ms).
Q15 Evaluate Round Robin CPU scheduling algorithm for given problem.
Ans. Time Slice = 3 ms
Processes P1 P2 P3 P4
Burst Time 10 5 18 6
Arrival Time 5 3 0 4
Q16 Consider the following set of processes with length of CPU burst time and arrival time as specified:
Ans. Processes Burst Time Arrival
Time
P1 7 0
P2 4 1
P3 8 2
P4 5 3
Draw Gantt chart illustrating the execution of these processes using preemptive SJF scheduling algorithm. Also
calculate the average waiting time (WT) and average turnaround time (TAT).
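The preemptive SJF (shortest remaining time first) schedule for this set can be computed with a unit-time simulation sketch; ties are broken by Python's min over remaining times, which does not affect this input:

```python
procs = {"P1": (0, 7), "P2": (1, 4), "P3": (2, 8), "P4": (3, 5)}  # name: (arrival, burst)

def srtf(procs):
    remaining = {n: b for n, (a, b) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        # processes that have arrived and are not finished
        ready = [n for n in remaining if procs[n][0] <= clock]
        if not ready:
            clock += 1
            continue
        n = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[n] -= 1                           # run for one time unit
        clock += 1
        if remaining[n] == 0:
            finish[n] = clock
            del remaining[n]
    tat = {n: finish[n] - procs[n][0] for n in procs}   # turnaround time
    wt = {n: tat[n] - procs[n][1] for n in procs}       # waiting time
    return tat, wt

tat, wt = srtf(procs)
print(tat)  # {'P1': 16, 'P2': 4, 'P3': 22, 'P4': 7} -> average TAT 12.25
print(wt)   # {'P1': 9, 'P2': 0, 'P3': 14, 'P4': 2} -> average WT 6.25
```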
Q17 What is Dining Philosopher problem? Explain monitor solution to dining philosopher problem.
Ans. The dining philosophers problem is a classical synchronization problem: five philosophers sit around a circular
table, and their job is to think and eat alternately. A bowl of noodles is placed at the center of the table, along
with five chopsticks, one between each pair of philosophers. To eat, a philosopher needs both the left and the
right chopstick, and can eat only if both immediately adjacent chopsticks are available. If both chopsticks are not
available, the philosopher puts down whichever chopstick was picked up (either left or right) and starts thinking
again.
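A monitor-style solution in the classic state/test style can be sketched in Python, using one Lock plus a Condition per philosopher to emulate the monitor; DiningMonitor and its method names are illustrative:

```python
import threading

N = 5
THINKING, HUNGRY, EATING = 0, 1, 2

class DiningMonitor:
    """Monitor: shared state guarded by one lock, one condition per philosopher."""
    def __init__(self):
        self.state = [THINKING] * N
        self.lock = threading.Lock()
        self.cond = [threading.Condition(self.lock) for _ in range(N)]

    def _test(self, i):
        # i may start eating only if neither neighbour is eating
        left, right = (i - 1) % N, (i + 1) % N
        if (self.state[i] == HUNGRY and
                self.state[left] != EATING and self.state[right] != EATING):
            self.state[i] = EATING
            self.cond[i].notify()

    def pickup(self, i):
        with self.lock:
            self.state[i] = HUNGRY
            self._test(i)
            while self.state[i] != EATING:
                self.cond[i].wait()      # wait until both chopsticks are free

    def putdown(self, i):
        with self.lock:
            self.state[i] = THINKING
            self._test((i - 1) % N)      # a neighbour may now be able to eat
            self._test((i + 1) % N)

monitor = DiningMonitor()
meals = [0] * N

def philosopher(i):
    for _ in range(3):                   # think/eat three times
        monitor.pickup(i)
        meals[i] += 1                    # eating
        monitor.putdown(i)

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher eats 3 times: [3, 3, 3, 3, 3]
```

Because a philosopher picks up both chopsticks atomically inside the monitor, no circular wait can form, so this scheme is deadlock-free (though a philosopher can in principle starve).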
Reader process (readers-writers problem):
wait(mutex);
readcount++; // on every entry of a reader, increment readcount
if (readcount == 1)
{
    wait(write); // the first reader locks out writers
}
signal(mutex);
// ... reading is performed ...
wait(mutex);
readcount--; // on every exit of a reader, decrement readcount
if (readcount == 0)
{
    signal(write); // the last reader lets writers in
}
signal(mutex);
Q20 What is process? Explain various states that process undergoes with the help of process state diagram.
Ans. A process is a program in execution. It is an entity which represents the basic unit of work to be implemented in the
system. When a program starts execution it becomes a process which performs all the tasks mentioned in the program.
The process gets divided into four sections:
1. Stack
It contains the temporary data such as method parameters, addresses or variables.
2. Heap
It stores the dynamically allocated memory to a process during its runtime.
3. Data
This section holds the global and static variables.
4. Text
This includes the compiled program code. The current activity is represented by the value of the Program Counter
and the contents of the processor's registers.
Process goes through different states throughout the life cycle which are known as the process states.
Different states of the process are:
1. Start
This is the initial state where a process is first created.
2. Ready
The process is waiting to be assigned a processor in order to start execution. A process comes to the ready
state directly after the Start state, or while running it may be moved back here when the scheduler preempts it.
3. Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to running
and the processor executes its instructions.
4. Waiting
The process moves into the waiting state if it needs to wait for a resource, such as a file or user input.
5. Terminated or Exit
Once a process finishes its execution, or gets terminated by the OS it goes to terminated state to get removed
from the main memory.
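The legal transitions in the state diagram can be captured as a small table; the state names follow the list above, and is_legal is an illustrative helper for checking a path through the diagram:

```python
# Legal transitions from the process state diagram (an illustrative sketch)
transitions = {
    "Start":      {"Ready"},
    "Ready":      {"Running"},
    "Running":    {"Ready", "Waiting", "Terminated"},  # preempt, block, or exit
    "Waiting":    {"Ready"},                           # I/O or event completes
    "Terminated": set(),
}

def is_legal(path):
    # every consecutive pair must be an allowed transition
    return all(b in transitions[a] for a, b in zip(path, path[1:]))

print(is_legal(["Start", "Ready", "Running", "Waiting",
                "Ready", "Running", "Terminated"]))   # True
print(is_legal(["Start", "Running"]))  # False: must be scheduled from Ready
```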
Q21 Explain in detail the concept of swapping.
Ans. Swapping in OS is one of the schemes which fulfills the goal of maximum utilization of CPU and memory management
by swapping in and swapping out process from the main memory.
Swap-in brings a process from the hard drive (secondary memory) into RAM (main memory), and swap-out
removes a process from RAM and writes it back to the hard drive.
Advantages
It helps in maximum utilization of CPU.
It ensures proper memory availability for every process.
It helps in avoiding the problem of process starvation.
Disadvantages
Heavy swapping may lose data if a power cut occurs.
There are some algorithms not up to the mark where page faults can increase.
Common resource usage may cause inefficiency.
Q22 Consider the page reference string:
Ans. 0, 9, 0, 1, 8, 1, 8, 7, 8, 7, 1, 2, 8, 2, 7, 8, 2, 3, 8, 3
How many page faults would occur for the following page replacement algorithms with three page frames?
i) FIFO
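Assuming the reference string is the digit sequence above, read one page per digit, the FIFO fault count can be checked with a short sketch:

```python
from collections import deque

def fifo_faults(reference, frames=3):
    memory, queue, faults = set(), deque(), 0
    for page in reference:
        if page not in memory:
            faults += 1
            if len(memory) == frames:        # evict the oldest loaded page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

reference = [0, 9, 0, 1, 8, 1, 8, 7, 8, 7, 1, 2, 8, 2, 7, 8, 2, 3, 8, 3]
print(fifo_faults(reference))  # 8 page faults with 3 frames
```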
The problem of internal fragmentation may arise due to the fixed sizes of memory blocks.
It may be solved by assigning space to the process via dynamic partitioning.
Dynamic partitioning allocates the desired memory to the process for execution.
Q24 Explain the concept of paging and demand paging.
Ans. Paging:
It is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This allows
the physical space of a process to be non-contiguous.
Here the generation of logical and physical addresses takes place.
Logical address is generated by the CPU.
The Physical address is available in memory unit.
The mapping of logical addresses to physical addresses is done by the Memory Management Unit (MMU); this
mapping is known as the paging technique.
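A sketch of the translation: the logical address is split into a page number and an offset, and the page table maps the page to a frame. The page size and table entries here are made-up values for illustration:

```python
PAGE_SIZE = 1024                 # assumed page size for illustration
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (hypothetical)

def translate(logical_address):
    # split the logical address into (page number, offset within page)
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        raise KeyError("page fault: page %d not in memory" % page)
    # physical address = frame base + offset
    return page_table[page] * PAGE_SIZE + offset

print(translate(2 * 1024 + 100))  # page 2 -> frame 7: 7*1024 + 100 = 7268
```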
Demand Paging:
Demand paging is a swapping scheme of the virtual memory system. Data is not all moved from the hard
drive to main memory at once; instead, a page is transferred only when it is actually demanded by the running
program. If the required data already exists in memory, there is no need to copy it again. Because demand
paging swaps pages from auxiliary storage to primary memory only on demand, it is known as a "Lazy Swapper".
Q25 Discuss in detail the process of segmentation.
Ans. Segmentation is a memory management scheme that supports user view of memory.
A logical-address space is a collection of segments. Each segment has a name and a length. The user specifies each
address by two quantities: a segment name/number and an offset.
The segment table maps the two-dimensional user-defined addresses into one-dimensional physical
addresses, and each entry in the table has:
base – contains the starting physical address where the segments reside in memory.
limit – specifies the length of the segment.
Segment-table base register (STBR) points to the segment table’s location in memory.
Segment-table length register (STLR) indicates number of segments used by a program.
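The base/limit lookup can be sketched as follows; the segment table values are illustrative numbers in the style of textbook examples, not taken from the text above:

```python
# segment table: entry = (base, limit); the numbers are illustrative
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # referencing beyond the segment's length traps to the OS
        raise ValueError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # segment 2, offset 53 -> 4300 + 53 = 4353
```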
Q26 Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page
fault occurs.
Ans. A page fault occurs when a process references a page that is not currently in main memory (its valid bit in the
page table is not set). On a page fault the OS:
1. Traps to the kernel and saves the state of the process.
2. Checks whether the reference was a valid memory access; if not, the process is terminated.
3. Finds a free frame, running a page replacement algorithm if no frame is free.
4. Schedules a disk read to bring the desired page into the frame.
5. Updates the page table and sets the valid bit.
6. Restarts the instruction that caused the fault.
Page Faults = 14
Optimal
Page Faults = 8
LRU
Page Faults = 10
Q28
Ans.
Q29 Given memory partitions of 100KB, 500KB, 200KB, 300KB and 600KB (in order), how would each of the first fit,
worst fit and best fit algorithms place processes of 212KB, 417KB, 112KB and 426KB (in order)? Which algorithm
makes the most efficient use of memory?
Ans.
P1 = 212KB
P2 = 417KB
P3 = 112KB
P4 = 426KB
First Fit:
100KB
500KB P1
200KB P3
300KB
600KB P2
P4 is not allocated any memory.
Best Fit:
100KB
500KB P2
200KB P3
300KB P1
600KB P4
Worst Fit:
100KB
500KB P2
200KB
300KB P3
600KB P1
P4 is not allocated any memory.
Best fit places all four processes, so it makes the most efficient use of memory here.
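The three placements can be verified with a short sketch; allocate and choose are illustrative helpers, and each fixed partition holds at most one process, matching the question:

```python
partitions = [100, 500, 200, 300, 600]   # KB, in order
processes = [212, 417, 112, 426]         # KB, in order

def allocate(partitions, processes, choose):
    free = list(partitions)              # one process per partition
    placement = {}
    for p in processes:
        # partitions still free and large enough for this process
        candidates = [i for i, size in enumerate(free)
                      if size is not None and size >= p]
        if candidates:
            i = choose(candidates, free)
            placement[p] = partitions[i]
            free[i] = None               # partition now occupied
    return placement

first_fit = allocate(partitions, processes, lambda c, f: c[0])
best_fit  = allocate(partitions, processes, lambda c, f: min(c, key=lambda i: f[i]))
worst_fit = allocate(partitions, processes, lambda c, f: max(c, key=lambda i: f[i]))

print(first_fit)  # {212: 500, 417: 600, 112: 200} -- 426KB is left waiting
print(best_fit)   # {212: 300, 417: 500, 112: 200, 426: 600} -- all placed
print(worst_fit)  # {212: 600, 417: 500, 112: 300} -- 426KB is left waiting
```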