
QUESTION: 01

How does DMA improve system performance?


The device driver loads the appropriate registers within the device controller, and the controller examines the information in those registers to determine what task is to be performed. The controller transfers the data from the device into its local buffer, and the data is then moved into main memory a few bits or words at a time. Once the transfer of data is completed, the device controller generates an interrupt to inform the driver that it has finished its operation.

This type of data transfer is slow for two reasons:

1. It produces high overhead when used for the transfer of bulk data.
2. The CPU has to perform multiple tasks in the system; if it is tied up with just the transfer of
data, it cannot do other operations in the meantime, which makes the whole system
inefficient and slower.

To solve this problem, the concept of Direct Memory Access (DMA) is introduced: the device
controller transfers an entire block of data directly between its local buffer and main memory
without intervention by the CPU, so the CPU can perform other tasks in the meantime. The
operating system takes back control from the device controller once the transfer is completed. In
this process, only one interrupt is generated per block, rather than one per byte or word, to let the
device driver know that the operation is finished.
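
A minimal sketch of what programming such a transfer could look like is given below. The register names, addresses and the DMA_CTRL_START bit are invented for illustration; a real controller defines its own register layout and the driver would follow its datasheet.

/* Hypothetical sketch of a driver programming a DMA controller.
 * Register names and addresses are made up for illustration. */
#include <stdint.h>

#define DMA_BASE        0x40001000u
#define DMA_SRC_REG     (*(volatile uint32_t *)(DMA_BASE + 0x00))
#define DMA_DST_REG     (*(volatile uint32_t *)(DMA_BASE + 0x04))
#define DMA_COUNT_REG   (*(volatile uint32_t *)(DMA_BASE + 0x08))
#define DMA_CTRL_REG    (*(volatile uint32_t *)(DMA_BASE + 0x0C))
#define DMA_CTRL_START  0x1u

static volatile int dma_done = 0;

/* Driver side: program one whole block, then return so the CPU can run
 * other work while the controller moves the data on its own. */
void dma_start_block(uint32_t device_buf, uint32_t mem_buf, uint32_t nbytes)
{
    DMA_SRC_REG   = device_buf;      /* controller's local buffer    */
    DMA_DST_REG   = mem_buf;         /* destination in main memory   */
    DMA_COUNT_REG = nbytes;          /* size of the entire block     */
    DMA_CTRL_REG  = DMA_CTRL_START;  /* kick off the transfer        */
}

/* Called once per block when the controller raises its interrupt. */
void dma_irq_handler(void)
{
    dma_done = 1;   /* one interrupt per block, not per byte */
}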
QUESTION: 02

Briefly explain the microkernel structure. How does it provide more reliability and scalability?
MICROKERNEL STRUCTURE:

In the microkernel structure, all non-essential components are removed from the kernel and
implemented as system-level and user-level programs. The kernel keeps only its core functionality,
while services such as device drivers, the file server, the process server and virtual memory
management run at the user level.

User mode is thus divided into two kinds of programs: the client programs and the programs that
implement these kernel services. The microkernel itself mainly provides communication between the
client programs and the various services, which is carried out as message passing.
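
The sketch below illustrates the idea in C. The message structure, the msg_call() routine standing in for the kernel's IPC system calls, and the in-process file server are all invented for illustration; a real microkernel such as Mach or L4 provides its own message-passing primitives and runs the server as a separate user-level process.

#include <stdio.h>
#include <string.h>

enum { OP_READ = 1 };

struct message {
    int  op;            /* requested operation              */
    char path[64];      /* which file the client asks about */
    char data[128];     /* reply filled in by the server    */
};

/* User-level file server: it, not the kernel, implements the service. */
static void file_server_handle(struct message *m)
{
    if (m->op == OP_READ)
        snprintf(m->data, sizeof m->data, "contents of %s", m->path);
}

/* The kernel's only role here: deliver the request to the server and
 * copy the reply back (simulated by a direct call in this sketch). */
static int msg_call(struct message *m)
{
    file_server_handle(m);
    return 0;
}

/* Client program: it never touches the disk; it only exchanges messages. */
int main(void)
{
    struct message m = { .op = OP_READ };
    strncpy(m.path, "/etc/motd", sizeof m.path - 1);

    if (msg_call(&m) == 0)
        printf("server replied: %s\n", m.data);
    return 0;
}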

ADVANTAGE:

If a program running in the kernel fails, it can crash the entire system, while programs running in
user mode can fail without bringing the rest of the system down. Since in the microkernel structure
most services run in user mode and the kernel is just the means of communication, the system
becomes more reliable. It also scales more easily: new services can be added as user-level programs
without modifying the kernel.

QUESTION: 03
Explain the following statement.
“Mid-term scheduler reduces the degree of multiprogramming”.
Before explaining the statement, we need the concepts of the mid-term scheduler and
multiprogramming.

MULTIPROGRAMMING:

“Multiprogramming is the ability to execute multiple programs on the CPU.”

In general, the CPU cannot remain busy all the time with a single program, because a program needs
resources other than the CPU for its complete execution. When a program requests, for example, an
I/O operation, the CPU would otherwise remain idle during that time, making the system inefficient.
Multiprogramming increases CPU utilization by organizing programs so that the CPU always has one to
execute; the programs run concurrently.

MID-TERM SCHEDULER:

“The mid-term scheduler is in charge of handling swapped-out processes.”

It reduces the degree of multiprogramming in the following way. A running process may require an
I/O operation and become suspended, unable to make progress towards completion. The mid-term
scheduler then swaps that process out: it removes the suspended process from main memory and moves
it to secondary memory, making space for more processes. By removing processes that the operating
system had already loaded, it decreases the degree of multiprogramming.
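
The following self-contained sketch illustrates the statement. The process table, states and in_memory flag are invented for illustration; counting the processes resident in main memory stands in for the degree of multiprogramming.

#include <stdio.h>

enum state { READY, WAITING_IO, SWAPPED_OUT };

struct process {
    int        pid;
    enum state st;
    int        in_memory;   /* 1 = resident in main memory */
};

/* Degree of multiprogramming = number of processes kept in main memory. */
static int degree_of_multiprogramming(struct process *p, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (p[i].in_memory)
            count++;
    return count;
}

/* Mid-term scheduler: move suspended (I/O-waiting) processes out to
 * secondary storage, freeing main memory for other processes. */
static void midterm_schedule(struct process *p, int n)
{
    for (int i = 0; i < n; i++) {
        if (p[i].st == WAITING_IO && p[i].in_memory) {
            p[i].st = SWAPPED_OUT;
            p[i].in_memory = 0;   /* swapped out to disk */
        }
    }
}

int main(void)
{
    struct process table[] = {
        { 1, READY,      1 },
        { 2, WAITING_IO, 1 },
        { 3, READY,      1 },
    };

    printf("before: degree = %d\n", degree_of_multiprogramming(table, 3));
    midterm_schedule(table, 3);
    printf("after:  degree = %d\n", degree_of_multiprogramming(table, 3));
    return 0;
}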

QUESTION: 04

Why context-switch takes less time in threads than processes?


Context switching is faster between threads than between processes because threads belong to the
same process and share its address space: only the registers, program counter and stack pointer need
to be changed, while the memory-management information does not need to be switched. (Switching
between kernel-level threads is still slightly expensive, since system calls are required.) The
operating system may also give more CPU time overall to processes with more threads. Context
switching between two processes is slower because all the memory-management information must be
loaded and unloaded from the Process Control Block (PCB), which itself takes time. Also, each process
has its own address space, and changing the address space makes the switch slower.
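
As a rough sketch, the structures below show the difference in what has to be saved and restored. The field names are invented for illustration and do not come from any particular operating system.

#include <stdint.h>

/* Per-thread state: enough to switch between threads of one process. */
struct thread_context {
    uint64_t registers[16];    /* general-purpose registers       */
    uint64_t program_counter;  /* where the thread resumes        */
    uint64_t stack_pointer;    /* each thread has its own stack   */
};

/* Per-process state (simplified PCB): everything above, plus the memory
 * view that must be swapped when switching to a different process. */
struct process_context {
    struct thread_context cpu; /* registers, PC, stack pointer            */
    uint64_t page_table_base;  /* new address space: reload the MMU,      */
                               /* which also discards cached translations */
    uint64_t open_files_ptr;   /* accounting, open files, signals, ...    */
};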
