
Q.1 Attempt Any FOUR.

1) Define MIPS, CPI and MFLOPS.

MIPS stands for millions of instructions per second. It is a measure of the instruction
execution rate of a processor.

CPI stands for cycles per instruction. It is the average number of clock cycles required
to execute one instruction.

MFLOPS stands for millions of floating-point operations per second. It is a measure of
the arithmetic performance of a processor, especially for scientific applications.
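
These three metrics are related: for a given clock rate, MIPS = clock rate / (CPI × 10^6). A minimal Python sketch of both calculations, using illustrative numbers assumed here rather than taken from any real benchmark:

clock_rate_hz = 2_000_000_000      # 2 GHz processor (assumed value)
cpi = 1.25                         # average cycles per instruction (assumed)

mips = clock_rate_hz / (cpi * 1_000_000)
print(f"MIPS = {mips:.0f}")        # 2e9 / (1.25 * 1e6) = 1600 MIPS

# MFLOPS counts only the floating-point operations executed per second.
fp_ops = 400_000_000               # floating-point ops in a run (assumed)
run_time_s = 2.0                   # execution time of the run (assumed)
mflops = fp_ops / (run_time_s * 1_000_000)
print(f"MFLOPS = {mflops:.0f}")    # 400e6 / (2 * 1e6) = 200 MFLOPS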

2) Why does a superscalar processor use dynamic branch prediction? Justify.

A superscalar processor can execute multiple instructions in parallel in each clock cycle.

-Dynamic branch prediction is a technique to predict the outcome of a conditional
branch instruction before it is executed, and to fetch instructions from the predicted
path.
-A superscalar processor uses dynamic branch prediction to reduce the branch penalty
and increase the instruction level parallelism. By predicting the branch outcome, the
processor can avoid stalling the pipeline and wasting cycles on fetching and executing
wrong instructions.

3) Define: Seek time, Rotational Latency and Transfer time w.r.t. a hard disk drive

Seek time is the time required to move the read/write head of a hard disk drive to the
desired track where the data is stored.

Rotational latency is the time required to rotate the disk until the desired sector comes
under the read/write head.

Transfer time is the time required to transfer the data from the disk to the buffer, or
vice versa.

4) Compare RISC and CISC architectures.

RISC stands for reduced instruction set computer. It is a processor design philosophy
that uses a small and simple set of instructions that can be executed in one clock cycle
each.
CISC stands for complex instruction set computer. It is a processor design philosophy
that uses a large and complex set of instructions that can perform multiple operations in
one or more clock cycles each.

Some of the differences between RISC and CISC architectures are:


-RISC processors have more registers and less memory accesses than CISC
processors.
-RISC processors have fixed-length and simple instruction formats, while CISC
processors have variable-length and complex instruction formats.
-RISC processors use load/store instructions to access memory, while CISC processors
use memory operands in arithmetic and logic instructions.
-RISC processors rely on compiler optimization to generate efficient code, while CISC
processors rely on hardware optimization to execute complex instructions.

5) Why is there a need for communication between two processes? Explain
semaphores with an example. Also write techniques to implement IPC.

There is a need for communication between two processes when they share data or
resources, or when they need to coordinate their actions or synchronize their execution.

Semaphores are synchronization primitives that can be used to control access to
shared resources or to signal the occurrence of events between processes. A
semaphore is an integer variable that can be incremented or decremented by special
atomic operations, called wait and signal.
An example of using semaphores is the producer-consumer problem, where one
process produces data and another process consumes it. A semaphore can be used to
count the number of items in the buffer, and to block the producer or the consumer
when the buffer is full or empty respectively.
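
A minimal Python sketch of this bounded-buffer pattern, using threading.Semaphore (acquire and release play the roles of wait and signal); the buffer size and item count are arbitrary choices:

import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)   # counts free slots
full = threading.Semaphore(0)              # counts filled slots
mutex = threading.Lock()                   # guards the buffer itself

def producer():
    for item in range(8):
        empty.acquire()                    # wait: block if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()                     # signal: one more item available

def consumer():
    for _ in range(8):
        full.acquire()                     # wait: block if the buffer is empty
        with mutex:
            item = buffer.popleft()
        empty.release()                    # signal: one more free slot
        print("consumed", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()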

Some of the techniques to implement IPC are:


-Message passing: Processes exchange data or commands by sending and receiving
messages through a communication channel.
-Shared memory: Processes access a common memory region to read or write data.
-Pipes: Processes use a special file to transfer data in a FIFO manner.
-Sockets: Processes use network protocols to communicate over a network.
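
As a concrete illustration of one of these techniques, here is a minimal sketch of message passing through a pipe with Python's multiprocessing module:

import multiprocessing as mp

def child(conn):
    conn.send("hello from child")              # write a message into the pipe
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()        # one two-ended channel
    p = mp.Process(target=child, args=(child_conn,))
    p.start()
    print(parent_conn.recv())                  # read the message (FIFO order)
    p.join()
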
6) Compare CPU and GPU

CPU stands for central processing unit. It is the main processor of a computer that
executes instructions and coordinates other components.
GPU stands for graphics processing unit. It is a specialized processor that handles
graphics rendering and parallel computations.

Some of the differences between CPU and GPU are:


-CPU has fewer cores but higher clock speeds than GPU, which has more cores but
lower clock speeds.
-CPU is better at sequential tasks that require complex logic and branching, while GPU
is better at parallel tasks that require simple arithmetic and data processing.
-CPU has larger and faster caches and more control logic than GPU, which has smaller
and slower caches and less control logic.
-CPU can access the main memory directly, while GPU uses its own dedicated memory
or a region of memory shared with the CPU.

7) Explain typical instruction cycle in a processor

A typical instruction cycle is the basic operation performed by a processor to execute an
instruction. It consists of several steps, each of which performs a specific function in the
execution of the instruction.

The major steps in the instruction cycle are:

-Fetch: The processor retrieves the instruction from memory and loads it into the
instruction register (IR). The program counter (PC) is incremented to point to the next
instruction.
-Decode: The processor interprets the instruction and identifies the opcode and the
operands. It may also calculate the effective address of a memory operand.
-Execute: The processor performs the operation specified by the instruction. It may use
the data register (DR) and the accumulator (AC) to hold the data and the result. It may
also update the condition codes or flags to indicate the status of the operation.
-Memory access (if needed): The processor may store the result of the operation back
to memory by writing the data from the AC or the DR to the memory location specified
by the address register (AR).
-Register write-back (if needed): The processor may store the result of the operation to
a register by copying the data from the AC or the DR to the destination register.
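
A toy Python loop can make these steps concrete; the three-instruction program, opcodes, and memory layout below are invented purely for illustration:

# Toy fetch-decode-execute loop for an accumulator machine.
memory = [("LOAD", 10), ("ADD", 11), ("HALT", 0),   # program at addresses 0-2
          None, None, None, None, None, None, None,
          5, 7]                                     # data at addresses 10, 11
pc, ac, running = 0, 0, True

while running:
    ir = memory[pc]            # fetch: load the instruction into the IR
    pc += 1                    # increment the PC to the next instruction
    opcode, operand = ir       # decode: split into opcode and operand
    if opcode == "LOAD":       # execute (with a memory access if needed)
        ac = memory[operand]
    elif opcode == "ADD":
        ac += memory[operand]
    elif opcode == "HALT":
        running = False

print("AC =", ac)              # 5 + 7 = 12
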
8) Explain FCFS, SJF scheduling

FCFS stands for first come, first served. It is a CPU scheduling algorithm that executes
the processes in the order of their arrival time. The process that arrives first is executed
first by the CPU when it is free.

The algorithm works as follows:

-Maintain a queue of processes in the ready state, with the front of the queue being the
oldest process and the rear of the queue being the newest process.
-When a process is ready to run, add it to the rear of the queue.
-When the CPU is free, select the process at the front of the queue and execute it until it
finishes or blocks.
-If the process finishes or blocks, remove it from the queue and select the next process.

SJF stands for shortest job first. It is a CPU scheduling algorithm that executes the
processes based on their burst time, which is the estimated time required for a process
to complete. The process with the shortest burst time is executed first by the CPU.

The algorithm works as follows:

-Maintain a queue of processes in the ready state, sorted by their burst time in
ascending order.
-The process with the shortest burst time is at the front of the queue and the process
with the longest burst time is at the rear of the queue.
-When a process is ready to run, add it to the queue in the appropriate position
according to its burst time.
-When the CPU is free, select the process at the front of the queue and execute it until it
finishes or blocks.
-If the process finishes or blocks, remove it from the queue and select the next process.
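
A minimal Python sketch comparing the average waiting time under both policies, assuming every process arrives at time 0 and using invented burst times:

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}   # burst times (assumed)

def avg_waiting(order):
    clock, total = 0, 0
    for name in order:
        total += clock           # a process waits until the CPU is free
        clock += bursts[name]    # then runs to completion
    return total / len(order)

fcfs = list(bursts)                        # arrival order
sjf = sorted(bursts, key=bursts.get)       # shortest burst first
print("FCFS:", avg_waiting(fcfs))          # (0+6+14+21)/4 = 10.25
print("SJF: ", avg_waiting(sjf))           # (0+3+9+16)/4 = 7.0
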
9) Explain FIFO page replacement algorithm

FIFO stands for first in, first out. It is a page replacement algorithm that replaces the
oldest page in the memory when a page fault occurs.

The algorithm works as follows:


-Maintain a queue of pages in the memory, with the front of the queue being the oldest
page and the rear of the queue being the newest page.
-When a page is referenced, check if it is in the memory. If yes, do nothing. If no,
perform a page fault.
-To handle a page fault, check if the memory is full. If not, add the new page to the rear
of the queue. If yes, remove the page at the front of the queue and add the new page to
the rear of the queue.
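
A minimal Python sketch of this algorithm, run on an invented reference string:

from collections import deque

def fifo_faults(references, frames):
    memory, faults = deque(), 0
    for page in references:
        if page not in memory:           # page fault
            faults += 1
            if len(memory) == frames:    # memory full: evict the oldest page
                memory.popleft()
            memory.append(page)          # the new page goes to the rear
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4]          # reference string (assumed)
print(fifo_faults(refs, frames=3))       # 7 page faults for this string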

Q.2 Attempt any THREE questions out of the remaining five.

1) List the differences between deadlock avoidance and deadlock prevention. Explain
one deadlock prevention method.

Deadlock avoidance is a technique that ensures that the system does not enter an
unsafe state that can lead to a deadlock. It requires information about the existing,
available, and requested resources of each process. It uses algorithms such as the
banker’s algorithm or the safety algorithm to check if a resource request can be granted
without causing a deadlock.
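
A minimal Python sketch of the safety check at the heart of the banker's algorithm; the available, allocation, and need matrices below are example values, not part of the question:

available = [3, 3, 2]                                  # free units per resource
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2],
              [2, 1, 1], [0, 0, 2]]                    # held per process
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0],
        [0, 1, 1], [4, 3, 1]]                          # max demand - allocation

def is_safe(available, allocation, need):
    work, finished = available[:], [False] * len(need)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion, then releases what it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)        # safe only if every process can finish

print(is_safe(available, allocation, need))   # True: a safe sequence exists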

Deadlock prevention is a technique that ensures that at least one of the necessary
conditions for a deadlock does not hold. It does not require any information about the
resources, but it imposes restrictions on how the processes can request and release
resources. It uses methods such as spooling, non-blocking synchronization, or
preemption to prevent deadlocks.

One deadlock prevention method is to eliminate the hold-and-wait condition. This can be
done by allocating all required resources to a process before it starts execution, or by
requiring it to release all held resources before requesting new ones. This way, a
process cannot hold some resources while waiting for others, and thus cannot cause a
deadlock. However, this method may lead to low device utilization or starvation.

2) Explain pre-emptive and non pre-emptive scheduling. Give an example of each
type

Pre-emptive scheduling is a type of scheduling in which the CPU can be taken away
from a running process before it completes its execution, usually by a higher priority
process or a timer interrupt.
Non pre-emptive scheduling is a type of scheduling in which the CPU can be released
by a running process only after it completes its execution or by a voluntary action, such
as an I/O request or a system call.

Some of the differences between pre-emptive and non pre-emptive scheduling are:

-Pre-emptive scheduling is more responsive and fair than non pre-emptive scheduling,
as it can handle dynamic and urgent situations better.
-Non pre-emptive scheduling is simpler and has less overhead than pre-emptive
scheduling, as it does not require context switching and synchronization mechanisms.
-Pre-emptive scheduling may cause problems such as starvation, race condition, and
inconsistency, while non pre-emptive scheduling may cause problems such as blocking,
convoy effect, and underutilization.

An example of pre-emptive scheduling is round robin scheduling, which allocates the
CPU to each process for a fixed time quantum and pre-empts the process if it does not
finish within the quantum.
An example of non pre-emptive scheduling is first come, first served scheduling, which
allocates the CPU to each process in the order of their arrival and does not pre-empt
the process until it finishes or requests an I/O operation.
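
A minimal Python sketch of the round robin dispatch order described above, with invented burst times and time quantum:

from collections import deque

def round_robin(bursts, quantum):
    queue = deque(bursts.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                             # run for one quantum
        if remaining > quantum:
            queue.append((name, remaining - quantum))  # pre-empted: requeue
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']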

3) Explain in detail Hardwired control unit. Discuss one method to implement it

A hardwired control unit is a control unit that uses a fixed set of logic gates and circuits
to generate control signals for each instruction. The control signals are hardwired into
the control unit, so the control unit has a dedicated circuit for each possible instruction.
Hardwired control units are simple and fast, but they can be inflexible and difficult to
modify.
One method to implement a hardwired control unit is the sequence counter method.

-This method uses a sequence counter to generate a sequence number for each step of
the instruction cycle. The sequence number is then decoded by a decoder to produce
the control signals for that step.
-The decoder outputs are connected to the control logic gates, which are programmed
with the operation codes of the instructions. The control logic gates produce the final
control signals for the execution units of the processor.
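
The idea can be sketched in software: a step counter indexes fixed tables that stand in for the decoder and the hardwired control logic gates. The opcodes and control-signal names below are invented for illustration:

# (step, opcode) -> control signals; fetch and decode are opcode-independent.
CONTROL = {
    (0, None):   ["PC_to_MAR", "read_memory", "MDR_to_IR"],   # fetch
    (1, None):   ["decode_IR", "increment_PC"],               # decode
    (2, "ADD"):  ["read_operand", "ALU_add", "write_AC"],     # execute ADD
    (2, "LOAD"): ["read_operand", "write_AC"],                # execute LOAD
}

def control_signals(step, opcode):
    # The execute step is the only one that depends on the opcode fed
    # to the control logic gates.
    return CONTROL.get((step, None)) or CONTROL.get((step, opcode), [])

for step in range(3):                     # one pass of the sequence counter
    print(step, control_signals(step, "ADD"))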

4) Explain various pipeline hazards. Explain the performance metrics for instruction
pipeline.

Pipeline hazards are situations that prevent the next instruction in the instruction stream
from executing in its designated clock cycle. They cause stalls or bubbles in the
pipeline, which reduce the performance and efficiency of the pipeline.

There are three main types of pipeline hazards:

1)Structural Hazards (Resource Conflicts):

-Caused by two instructions needing the same hardware resource at the same time, for
example simultaneous memory access
-Largely resolved by using separate instruction and data memories
-Rare in modern processors, because the instruction set architecture is designed to
support pipelining

2)Data Hazards (Data Dependency):

-Arise when an instruction depends on the result of a previous instruction, but that
result is not available yet
-Four categories:
i)RAR (Read after Read): two instructions read the same register; this causes no actual
hazard
ii)RAW (Read after Write): an instruction reads a register that a previous instruction
writes
iii)WAR (Write after Read) & WAW (Write after Write): an instruction writes a register
that a previous instruction reads or writes

3)Branch Hazards (Control Hazards):
-Occur with branch instructions, which create control dependencies
-Time is lost flushing the pipeline and fetching instructions from the target location
-This wasted time is called the branch penalty

The performance metrics for an instruction pipeline are:

-Throughput: Measures the number of instructions executed per unit time (IPS or
MIPS). Dependent on clock rate and pipeline depth. Deeper pipelines can boost clock
rate but may increase latency and hazard frequency.
-Speedup: Ratio of pipelined processor performance to non-pipelined processor. Ideally
equals pipeline depth but can be lower due to overhead and hazards.
-Efficiency: Ratio of useful work done by the pipeline to the total work done, expressed
as a percentage. It depends on pipeline utilization and stall rate; higher utilization and a
lower stall rate improve efficiency.
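
For an ideal k-stage pipeline executing n instructions, speedup and efficiency follow directly from the cycle counts; a minimal Python sketch with assumed values for k and n:

k, n = 5, 100                      # stage count and instruction count (assumed)

cycles_nonpipelined = n * k        # each instruction takes all k stages alone
cycles_pipelined = k + (n - 1)     # fill the pipeline once, then 1 per cycle
speedup = cycles_nonpipelined / cycles_pipelined
efficiency = speedup / k           # fraction of the ideal k-fold speedup

print(f"speedup    = {speedup:.2f}")     # 500 / 104 = 4.81, below the ideal 5
print(f"efficiency = {efficiency:.1%}")  # about 96.2%, before any stalls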

5) Describe File organization and access.

File access is the way of locating and retrieving the records of a file from a secondary
storage device. It depends on the file organization and the access method used by the
file system.

Sequential Access:

-Process reads records in order, starting at the beginning.
-Skipping or reading out of order is not possible.
-Suitable for a storage medium like magnetic tape.
-Reading involves the read operation; appending uses the write operation.

Random or Direct Access:

-Allows reading out of order on disks.
-Access is by key instead of position.
-Requires block numbers in file operations.
-The seek operation sets the current position for reading.

Other Access Methods:

-Built on top of random access, often with an index.
-Index pointers are used to access the file directly.

Factors in choosing a file organization:

-Minimum access time
-Ease of update
-Economy of storage
-Simple maintenance
-Reliability

File organization is the way of arranging and storing the records of a file on a secondary
storage device, such as a disk or a tape. It affects the performance, reliability, and
maintainability of the file system.

Pile:
-Simplest Organization, data stored in order of arrival.
-Records may have diverse fields in varying orders.
-No specific structure; exhaustive search required for access.

Sequential File:
-Commonly used; fixed-format records with a key field.
-Records are stored in the logical order of the key.
-Suitable for batch applications; may be stored as a linked list of blocks.

Indexed Sequential File:


-Organized like a sequential file but with an index for random access.
-An overflow file is used for additional records
-Index helps locate required records quickly.

Indexed File:
-Allows searching based on multiple fields.
-Uses multiple indexes for various attributes.
-Supports variable-length records.

Direct or Hashed File:


-No sequential order; fast access.
-A hashing function on the key gives the block address directly.
-Suitable for fast access to individual fixed-length records.
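
A minimal Python sketch of hashed access, where the key alone determines which block to search; the key format and block count are invented:

NUM_BLOCKS = 8
blocks = [[] for _ in range(NUM_BLOCKS)]   # each block holds a few records

def block_address(key):
    return hash(key) % NUM_BLOCKS          # hashing function -> block number

def insert(key, record):
    blocks[block_address(key)].append((key, record))

def lookup(key):
    for k, record in blocks[block_address(key)]:   # search one block only
        if k == key:
            return record

insert("emp042", {"name": "A"})
print(lookup("emp042"))                    # {'name': 'A'}
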
6) Explain Multi-core processor architecture.

A multi-core processor is a processor that has more than one core on the same die or
chip. A core is a separate processing unit that can execute instructions independently.
Multi-core processor architecture is the way of organizing and interconnecting the cores
on the processor. It involves design choices such as the number and type of cores, the
memory hierarchy, the communication network, and the synchronization mechanisms.

Some of the common multi-core architectures are symmetric multiprocessing (SMP),
asymmetric multiprocessing (AMP), chip multiprocessor (CMP), and heterogeneous
multiprocessor (HMP).

-SMP is an architecture where all cores are identical and have equal access to the
shared memory and resources. It is simple and scalable, but it may suffer from
contention and overhead.
-AMP is an architecture where cores are different and have unequal access to the
shared memory and resources. It is flexible and customizable, but it may suffer from
complexity and compatibility.
-CMP is an architecture where cores are integrated on the same die or chip, and
communicate through a high-speed bus or a crossbar switch. It is fast and
power-efficient, but it may suffer from heat dissipation and die size.
-HMP is an architecture where cores are diverse and have different capabilities and
power consumption. It is adaptive and energy-efficient, but it may suffer from scheduling
and load balancing.

A multi-core processor architecture can have advantages such as higher throughput,
lower power consumption, better scalability, and more flexibility. It can also have
challenges such as complexity, overhead, compatibility, and parallelization.
