MIPS stands for millions of instructions per second. It is a measure of the instruction
execution rate of a processor.
CPI stands for cycles per instruction. It is the average number of clock cycles required
to execute one instruction.
A superscalar processor can execute multiple instructions in parallel in each clock cycle.
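The two metrics above are related: MIPS = clock rate / (CPI × 10⁶). A minimal sketch of the relationship (the processor figures used here are made up for illustration):

```python
def mips(clock_hz, cpi):
    """Instruction execution rate in millions of instructions per second."""
    return clock_hz / (cpi * 1_000_000)

# A hypothetical 2 GHz processor averaging 4 cycles per instruction:
print(mips(2_000_000_000, 4))  # 500.0 MIPS
```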
3) Define: Seek time, Rotational Latency and Transfer time w.r.t hard disk drive
Seek time is the time required to move the read/write head of a hard disk drive to the
desired track where the data is stored.
Rotational latency is the time required to rotate the disk until the desired sector comes
under the read/write head.
Transfer time is the time required to transfer the data from the disk to the buffer or vice
versa.
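The three components add up to the total access time; on average the desired sector is half a revolution away, so rotational latency is half the rotation time. A small sketch with made-up drive parameters:

```python
def avg_access_time_ms(seek_ms, rpm, transfer_ms):
    """Average time to service one disk request (all figures illustrative)."""
    # On average, the desired sector is half a revolution away.
    rotational_latency_ms = 0.5 * (60_000 / rpm)
    return seek_ms + rotational_latency_ms + transfer_ms

# Hypothetical drive: 9 ms average seek, 7200 RPM, 0.5 ms transfer:
print(avg_access_time_ms(9, 7200, 0.5))  # ~13.67 ms
```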
4) Compare RISC and CISC architectures.
RISC stands for reduced instruction set computer. It is a processor design philosophy
that uses a small and simple set of instructions that can be executed in one clock cycle
each.
CISC stands for complex instruction set computer. It is a processor design philosophy
that uses a large and complex set of instructions that can perform multiple operations in
one or more clock cycles each.
There is a need for communication between two processes when they share data or
resources, or when they need to coordinate their actions or synchronize their execution.
Semaphores are synchronization primitives that can be used to control access to
shared resources or signal the occurrence of events between processes. A semaphore is
an integer variable that can be incremented or decremented only by special atomic
operations, called wait and signal.
An example of using semaphores is the producer-consumer problem, where one
process produces data and another process consumes it. A semaphore can be used to
count the number of items in the buffer, and to block the producer or the consumer
when the buffer is full or empty respectively.
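The producer-consumer scheme above can be sketched with Python threads, using one semaphore to count free slots and another to count items (wait and signal correspond to acquire and release; buffer size and workload are illustrative):

```python
import threading
from collections import deque

BUFFER_SIZE = 3
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots; blocks producer when full
full = threading.Semaphore(0)             # counts items; blocks consumer when empty
mutex = threading.Lock()                  # protects the buffer itself

def producer(items):
    for item in items:
        empty.acquire()        # wait(empty): block if no free slot
        with mutex:
            buffer.append(item)
        full.release()         # signal(full): one more item available

def consumer(n, out):
    for _ in range(n):
        full.acquire()         # wait(full): block if buffer is empty
        with mutex:
            out.append(buffer.popleft())
        empty.release()        # signal(empty): one more free slot

consumed = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, consumed))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # items arrive in FIFO order: [0, 1, ..., 9]
```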
CPU stands for central processing unit. It is the main processor of a computer that
executes instructions and coordinates other components.
GPU stands for graphics processing unit. It is a specialized processor that handles
graphics rendering and parallel computations.
-Fetch: The processor retrieves the instruction from memory and loads it into the
instruction register (IR). The program counter (PC) is incremented to point to the next
instruction.
-Decode: The processor interprets the instruction and identifies the opcode and the
operands. It may also calculate the effective address of a memory operand.
-Execute: The processor performs the operation specified by the instruction. It may use
the data register (DR) and the accumulator (AC) to hold the data and the result. It may
also update the condition codes or flags to indicate the status of the operation.
-Memory access (if needed): The processor may store the result of the operation back
to memory by writing the data from the AC or the DR to the memory location specified
by the address register (AR).
-Register write-back (if needed): The processor may store the result of the operation to
a register by copying the data from the AC or the DR to the destination register.
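The cycle above can be sketched as a toy accumulator machine; the instruction format, opcodes, and memory contents here are invented for illustration:

```python
# Toy accumulator machine: each step mirrors a stage of the instruction cycle.
# Instruction format: (opcode, operand); addresses and opcodes are illustrative.
def run(program, data):
    pc, ac = 0, 0
    while True:
        ir = program[pc]          # Fetch: load the instruction into the IR
        pc += 1                   #        and increment the program counter
        opcode, operand = ir      # Decode: identify opcode and operand
        if opcode == "LOAD":      # Execute, with memory access where needed:
            ac = data[operand]
        elif opcode == "ADD":
            ac += data[operand]
        elif opcode == "STORE":   # write the AC back to memory
            data[operand] = ac
        elif opcode == "HALT":
            return data

mem = {0: 5, 1: 7, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)], mem)
print(mem[2])  # 12
```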
8) Explain FCFS, SJF scheduling
FCFS stands for first come, first served. It is a CPU scheduling algorithm that executes
the processes in the order of their arrival time. The process that arrives first is executed
first by the CPU when it is free.
-Maintain a queue of processes in the ready state, with the front of the queue being the
oldest process and the rear of the queue being the newest process.
-When a process is ready to run, add it to the rear of the queue.
-When the CPU is free, select the process at the front of the queue and execute it until it
finishes or blocks.
-If the process finishes or blocks, remove it from the queue and select the next process.
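The FCFS steps above can be sketched as follows; the workload (the classic three-process example with bursts 24, 3, 3) is illustrative:

```python
def fcfs(processes):
    """processes: list of (arrival_time, burst_time), served in arrival order."""
    # Stable sort by arrival time only, so ties keep their submission order.
    processes = sorted(processes, key=lambda p: p[0])
    time, waits = 0, []
    for arrival, burst in processes:
        time = max(time, arrival)          # CPU may sit idle until arrival
        waits.append(time - arrival)       # time spent waiting in the queue
        time += burst                      # run to completion (no preemption)
    return waits

# All arrive at time 0 with bursts 24, 3, 3:
print(fcfs([(0, 24), (0, 3), (0, 3)]))  # [0, 24, 27] -> average wait 17
```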
SJF stands for shortest job first. It is a CPU scheduling algorithm that executes the
processes based on their burst time, which is the estimated time required for a process
to complete. The process with the shortest burst time is executed first by the CPU.
-Maintain a queue of processes in the ready state, sorted by their burst time in
ascending order.
-The process with the shortest burst time is at the front of the queue
and the process with the longest burst time is at the rear of the queue.
-When a process is ready to run, add it to the queue in the appropriate position
according to its burst time.
-When the CPU is free, select the process at the front of the queue and execute it until it
finishes or blocks.
-If the process finishes or blocks, remove it from the queue and select the next process.
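The SJF steps above can be sketched with a min-heap as the sorted ready queue; the workload is the same illustrative one used for FCFS, so the two can be compared:

```python
import heapq

def sjf(processes):
    """Non-preemptive SJF. processes: list of (arrival_time, burst_time).
    Returns waiting times in order of execution."""
    pending = sorted(processes, key=lambda p: p[0])  # not yet arrived
    ready, time, waits = [], 0, []                   # ready: heap keyed by burst
    while pending or ready:
        while pending and pending[0][0] <= time:
            arrival, burst = pending.pop(0)
            heapq.heappush(ready, (burst, arrival))
        if not ready:                      # CPU idle until the next arrival
            time = pending[0][0]
            continue
        burst, arrival = heapq.heappop(ready)  # shortest burst runs first
        waits.append(time - arrival)
        time += burst
    return waits

# Same workload as the FCFS example; the short jobs now run first:
print(sjf([(0, 24), (0, 3), (0, 3)]))  # [0, 3, 6] -> average wait 3
```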
9) Explain FIFO page replacement algorithm
FIFO stands for first in, first out. It is a page replacement algorithm that replaces the
oldest page in the memory when a page fault occurs.
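FIFO replacement can be sketched by pairing a set of resident pages with a queue that remembers load order; the reference string below is an illustrative example:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO page replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                      # hit: resident page, nothing changes
        faults += 1
        if len(frames) == num_frames:     # memory full: evict the oldest page
            frames.discard(queue.popleft())
        frames.add(page)
        queue.append(page)                # newest page goes to the rear
    return faults

# Illustrative reference string with 3 frames:
print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))  # 10 faults
```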
1) List the difference between deadlock avoidance and prevention? Explain one
deadlock prevention method.
Deadlock avoidance is a technique that ensures that the system does not enter an
unsafe state that can lead to a deadlock. It requires information about the existing,
available, and requested resources of each process. It uses algorithms such as the
banker’s algorithm or the safety algorithm to check if a resource request can be granted
without causing a deadlock.
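The safety check at the heart of the banker's algorithm can be sketched as follows: repeatedly find a process whose remaining need fits in the available resources, let it finish, and reclaim its allocation. The state below is an illustrative example, not part of the original notes:

```python
def is_safe(available, allocation, need):
    """Banker's safety check: can every process finish in some order?"""
    work = list(available)
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion and return its resources.
                work = [w + a for w, a in zip(work, alloc)]
                finish[i] = True
                progress = True
    return all(finish)

# Illustrative 5-process, 3-resource state (a safe one):
print(is_safe(available=[3, 3, 2],
              allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
              need=[[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]))  # True
```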
Deadlock prevention is a technique that ensures that at least one of the necessary
conditions for a deadlock does not hold. It does not require any information about the
resources, but it imposes restrictions on how the processes can request and release
resources. It uses methods such as spooling, non-blocking synchronization, or
preemption to prevent deadlocks.
One deadlock prevention method is to eliminate the hold-and-wait condition. This can be
done by allocating all required resources to the process before the start of its execution,
or by releasing all held resources before requesting new ones. This way, a process
cannot hold some resources while waiting for others, and thus cannot cause a
deadlock. However, this method may lead to low device utilization or starvation.
2) Explain pre-emptive and non pre-emptive scheduling. Give an example of each
type
Pre-emptive scheduling is a type of scheduling in which the CPU can be taken away
from a running process before it completes its execution, usually by a higher priority
process or a timer interrupt.
Non pre-emptive scheduling is a type of scheduling in which the CPU can be released
by a running process only after it completes its execution or by a voluntary action, such
as an I/O request or a system call.
Some of the differences between pre-emptive and non pre-emptive scheduling are:
-Pre-emptive scheduling is more responsive and fair than non pre-emptive scheduling,
as it can handle dynamic and urgent situations better.
-Non pre-emptive scheduling is simpler and has lower overhead than pre-emptive
scheduling, as it does not require context switching and synchronization mechanisms.
-Pre-emptive scheduling may cause problems such as starvation, race condition, and
inconsistency, while non pre-emptive scheduling may cause problems such as blocking,
convoy effect, and underutilization.
A hardwired control unit is a control unit that uses a fixed set of logic gates and circuits
to generate control signals for each instruction. The control signals are hardwired into
the control unit, so the control unit has a dedicated circuit for each possible instruction.
Hardwired control units are simple and fast, but they can be inflexible and difficult to
modify.
One method to implement a hardwired control unit is the sequence counter method.
-This method uses a sequence counter to generate a sequence number for each step of
the instruction cycle. The sequence number is then decoded by a decoder to produce
the control signals for that step.
-The decoder outputs are connected to the control logic gates, which are programmed
with the operation codes of the instructions. The control logic gates produce the final
control signals for the execution units of the processor.
Pipeline hazards are situations that prevent the next instruction in the instruction stream
from executing in its designated clock cycle. They cause stalls or bubbles in the
pipeline, which reduce the performance and efficiency of the pipeline.
3) Branch Hazards:
-Occur with branch instructions, which create control dependencies
-Some time is required to flush the pipeline & fetch instructions from the target location
-This wasted time is called the branch penalty
The performance metrics for instruction pipeline are:
-Throughput: Measures the number of instructions executed per unit time (IPS or
MIPS). Dependent on clock rate and pipeline depth. Deeper pipelines can boost clock
rate but may increase latency and hazard frequency.
-Speedup: Ratio of pipelined processor performance to non-pipelined processor. Ideally
equals pipeline depth but can be lower due to overhead and hazards.
-Efficiency: Ratio of useful work done by the pipeline to total work done, measured in
percentage. Relies on pipeline utilization and stall rate. Higher utilization and lower stall
rate enhance efficiency.
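The speedup metric above has a standard idealized form: a k-stage pipeline takes k + (n - 1) cycles for n instructions versus n × k without pipelining, so speedup approaches k as n grows. A small sketch (assuming no stalls and equal stage delays):

```python
def pipeline_speedup(k, n):
    """Ideal speedup of a k-stage pipeline over n instructions (no stalls)."""
    # Non-pipelined: n*k cycles; pipelined: k cycles to fill, then 1 per instruction.
    return (n * k) / (k + n - 1)

print(pipeline_speedup(5, 100))     # ~4.81: already close to the depth k
print(pipeline_speedup(5, 10_000))  # ~5.0: approaches k as n grows
```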
File access is the way of locating and retrieving the records of a file from a secondary
storage device. It depends on the file organization and the access method used by the
file system.
Sequential Access:
Pile:
-Simplest Organization, data stored in order of arrival.
-Records may have diverse fields in varying orders.
-No specific structure; exhaustive search required for access.
Sequential File:
-Commonly used, fixed-format records with a key field.
-Records stored in a logical order of the key.
-Suitable for batch applications; on disk, often organized as a linked list.
Indexed File:
-Supports searching on multiple fields, not just the key.
-Uses multiple indexes for various attributes.
-Supports variable-length records.
A multi-core processor is a processor that has more than one core on the same die or
chip. A core is a separate processing unit that can execute instructions independently.
Multi-core processor architecture is the way of organizing and interconnecting the cores
on the processor. It involves design choices such as the number and type of cores, the
memory hierarchy, the communication network, and the synchronization mechanisms.
-SMP is an architecture where all cores are identical and have equal access to the
shared memory and resources. It is simple and scalable, but it may suffer from
contention and overhead.
-AMP is an architecture where cores are different and have unequal access to the
shared memory and resources. It is flexible and customizable, but it may suffer from
complexity and compatibility.
-CMP is an architecture where cores are integrated on the same die or chip, and
communicate through a high-speed bus or a crossbar switch. It is fast and
power-efficient, but it may suffer from heat dissipation and die size.
-HMP is an architecture where cores are diverse and have different capabilities and
power consumption. It is adaptive and energy-efficient, but it may suffer from scheduling
and load balancing.