
COMPUTER ORGANISATION

(LONG ANSWERS)

Q1) Write a note on the following:


a) Memory interleaving
b) 2D memory organization

Ans) Memory Interleaving and 2D Memory Organization


a) Memory Interleaving
Memory interleaving is a technique used to improve the overall memory bandwidth of a
system. It works by dividing the total memory capacity into multiple smaller modules (banks)
and spreading consecutive addresses across them. When the processor issues a stream of
requests, several banks can service them in parallel, effectively increasing the data transfer rate.
Imagine a highway with multiple lanes. Interleaving is like having separate memory banks
act as these lanes. If you need to transport a large amount of data, you can utilize several
lanes at once, speeding up the overall process compared to a single lane.
Benefits:
• Increased memory bandwidth: By accessing multiple banks concurrently, interleaving
allows for faster data transfer compared to a single memory module.
• Improved performance for certain workloads: Applications that require frequent
memory access, such as video editing or scientific computing, can benefit
significantly from interleaving.
Considerations:
• Complexity: Implementing interleaving requires additional hardware and control
logic, increasing design complexity.
• Cost: Systems with interleaving tend to be more expensive than those with a single
memory module.
• Effectiveness: Low-order interleaving is most effective for sequential access patterns,
since consecutive addresses fall in different banks and can be fetched in parallel.
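The bank-selection arithmetic behind low-order interleaving can be sketched in Python (an illustrative sketch only; the four-bank configuration is an assumption, not from the text):

```python
# Sketch of low-order memory interleaving.
# NUM_BANKS = 4 is an assumed configuration for illustration.

NUM_BANKS = 4  # a power of two, so the modulo is just the low-order address bits

def map_address(addr):
    """Split a word address into (bank number, offset within that bank)."""
    bank = addr % NUM_BANKS        # low-order bits pick the bank
    offset = addr // NUM_BANKS     # remaining bits index inside the bank
    return bank, offset

# Consecutive addresses land in different banks, so a sequential burst
# can keep all four banks busy at once.
for addr in range(8):
    print(addr, map_address(addr))
```

Because addresses 0, 1, 2, 3 map to banks 0, 1, 2, 3 respectively, a sequential read touches every bank before revisiting any of them.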
b) 2D Memory Organization
2D memory organization is a fundamental approach for arranging memory cells on a chip.
The cells are laid out as a two-dimensional grid, and the address is split into two parts: the
high-order bits drive a row decoder that selects one row of the grid, while the low-order bits
drive a column decoder (or multiplexer) that selects the desired word within that row.
Splitting the address this way replaces one very large decoder with two much smaller ones,
allowing efficient addressing.
Benefits:
• Simplicity: 2D organization is a straightforward and easy-to-implement design.
• Cost-effectiveness: It requires minimal additional circuitry, making it a cost-efficient
choice for many applications.
Limitations:
• Scalability: As memory capacity increases, the number of address lines needed to
access individual words also increases, posing a challenge for scalability.
• Access time: While simple, 2D organization might not offer the fastest access times
compared to more complex memory structures.
In conclusion, memory interleaving and 2D memory organization represent two important
concepts in computer architecture. Interleaving enhances memory bandwidth by dividing
memory into banks, while 2D organization offers a simple and cost-effective way to structure
memory chips. The choice between these approaches depends on factors like performance
requirements, cost constraints, and the specific application.

Q2) Discuss the Booth's algorithm, taking the example of multiplying two numbers: 3 x (-4)
Ans) Booth's Algorithm for Signed Multiplication
Booth's algorithm is a technique for efficiently multiplying two signed integers represented in
two's complement form. It offers an advantage over traditional multiplication algorithms by
reducing the number of addition and subtraction operations required, especially when dealing
with negative numbers.
Understanding Two's Complement:
Two's complement is a method for representing signed integers in binary. To find the two's
complement of a negative number, take its one's complement (invert all bits) and add 1. This
allows for efficient negation and arithmetic operations.
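As a quick illustration, the two's complement of 4 in a 4-bit word can be computed as follows (a minimal Python sketch; the 4-bit width is an assumption for this example):

```python
# Computing the 4-bit two's complement of 4.

BITS = 4
MASK = (1 << BITS) - 1   # 0b1111

x = 4                                           # 0100
ones_complement = ~x & MASK                     # invert all bits -> 1011
twos_complement = (ones_complement + 1) & MASK  # add 1 -> 1100

print(format(twos_complement, "04b"))  # 1100, i.e. -4 in 4-bit two's complement
```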
Steps of Booth's Algorithm:
1. Set up registers:
o M: the multiplicand in two's complement form (n bits).
o A: an n-bit accumulator, initially 0.
o Q: the multiplier in two's complement form (n bits).
o Q-1: a single extra bit appended to the right of Q, initially 0.
o SC: a counter, initially set to n.
2. Examine the bit pair (Q0, Q-1), i.e. the least significant bit of Q together with
the appended bit:
o If the pair is 00 or 11, perform no arithmetic operation.
o If the pair is 01 (end of a run of 1s), add the multiplicand: A = A + M.
o If the pair is 10 (start of a run of 1s), subtract the multiplicand: A = A - M.
3. Arithmetic right shift: shift the combined register (A, Q, Q-1) one bit to the
right, replicating the sign bit of A, and decrement SC.
4. Repeat: return to step 2 until SC reaches 0.
5. Result: the 2n-bit product is the combined contents of A and Q.
Example: Multiplying 3 x (-4)
1. Set up (n = 4):
o Multiplicand M = 3 = 0011, so -M = 1101
o Multiplier Q = -4 = 1100 (two's complement of 0100)
o A = 0000, Q-1 = 0, SC = 4
2. Iterations:
o Cycle 1: (Q0, Q-1) = (0, 0), no operation; shift right: A = 0000, Q = 0110,
Q-1 = 0
o Cycle 2: (Q0, Q-1) = (0, 0), no operation; shift right: A = 0000, Q = 0011,
Q-1 = 0
o Cycle 3: (Q0, Q-1) = (1, 0), so A = A - M = 0000 + 1101 = 1101; shift
right: A = 1110, Q = 1001, Q-1 = 1
o Cycle 4: (Q0, Q-1) = (1, 1), no operation; shift right: A = 1111, Q = 0100,
Q-1 = 1
3. Result: AQ = 1111 0100, which is -12 in 8-bit two's complement, the correct
product of 3 x (-4).
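Booth's algorithm can also be sketched in Python using the standard A, Q, Q-1 registers (a minimal illustration; the function name and the default n = 4 bit width are assumptions for this sketch):

```python
# A minimal Booth's-algorithm sketch for n-bit two's-complement operands.

def booth_multiply(multiplicand, multiplier, n=4):
    """Return the signed product of two n-bit signed integers."""
    mask = (1 << n) - 1
    M = multiplicand & mask          # multiplicand in n-bit two's complement
    A, Q, Q_1 = 0, multiplier & mask, 0

    for _ in range(n):
        pair = (Q & 1, Q_1)
        if pair == (1, 0):           # 10: start of a run of 1s -> A = A - M
            A = (A - M) & mask
        elif pair == (0, 1):         # 01: end of a run of 1s -> A = A + M
            A = (A + M) & mask
        # 00 / 11: no arithmetic, shift only

        # Arithmetic right shift of the combined (A, Q, Q-1) register
        Q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        A = ((A >> 1) | (A & (1 << (n - 1)))) & mask  # replicate sign bit

    product = (A << n) | Q            # 2n-bit result in AQ
    if product & (1 << (2 * n - 1)):  # interpret AQ as a signed value
        product -= 1 << (2 * n)
    return product

print(booth_multiply(3, -4))  # -12
```

Running the sketch on the example above reproduces the hand trace: AQ ends as 1111 0100, i.e. -12.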
Benefits of Booth's Algorithm:
• Reduces the number of addition/subtraction operations compared to traditional
multiplication.
• Efficiently handles negative numbers using two's complement.
In conclusion, Booth's algorithm provides a powerful approach for signed multiplication in
computers. It leverages the properties of two's complement and minimizes the number of
arithmetic operations, leading to faster and more efficient computation.

Q3) Discuss in detail ISA and its two parts RISC, CISC along with their characteristics.
Ans) ISA: The Language of Processors and its Design Choices
The Instruction Set Architecture (ISA) acts as a communication bridge between a processor
and its programs. It defines the set of instructions a processor can understand and execute,
including:
• Instruction types: Basic operations like calculations, data movement, and control
flow.
• Instruction format: The structure of an instruction, specifying the operation
(opcode) and data locations (operands).
• Addressing modes: Techniques for specifying memory locations for data operands.
However, there are two main design philosophies for constructing an ISA, leading to distinct
processor architectures: Reduced Instruction Set Computing (RISC) and Complex Instruction
Set Computing (CISC).
1. RISC: Simplicity for Speed
RISC processors prioritize efficiency by focusing on simple instructions:
• Limited scope: Instructions are restricted to basic operations like load, store, add, and
subtract.
• Uniformity: All instructions have the same size, simplifying decoding for the
processor.
• Register focus: Most operands reside in registers within the processor for faster
access.
• Software complexity: Complex operations requiring multiple instructions are
handled by the compiler, not dedicated hardware.
Benefits of RISC:
• Faster execution: Simple instructions decode quickly, and register emphasis reduces
memory access time.
• Efficient pipelining: Fixed-length instructions enable smoother instruction fetching
and execution pipelines.
• Simpler design: Easier to implement due to less complex hardware requirements.
• Scalability: RISC designs adapt well to new technologies and instruction sets.
Examples: ARM processors (powering smartphones and tablets), MIPS processors (used in
some embedded systems).
2. CISC: Versatility with Trade-offs
CISC processors prioritize versatility by offering a wider range of instructions:
• Multifunctional instructions: A single instruction can carry out several low-level
operations, such as loading data from memory, operating on it, and storing the
result, and may take multiple clock cycles to complete.
• Variable size: Instructions can be of different sizes depending on complexity.
• Hardware support: Complex instructions are often implemented directly in
hardware, reducing software overhead.
Benefits of CISC:
• Potential speed for specific tasks: Complex instructions can handle certain
operations more efficiently.
• Backward compatibility: CISC architectures strive to maintain compatibility with
older instructions.
Drawbacks of CISC:
• Slower decoding: Variable-length instructions take longer to decode.
• Pipelining challenges: Different instruction sizes can disrupt pipeline flow.
• Complex design: CISC processors require more intricate hardware implementation.
• Limited scalability: Adding new instructions becomes cumbersome.
Examples: x86 processors from Intel and AMD (dominant in desktop, laptop, and server
computers).
Choosing the Right Fit:
The choice between RISC and CISC depends on the application. RISC excels in tasks
requiring high instruction throughput (e.g., embedded systems, multimedia processing) while
CISC might be preferred for legacy code compatibility or specific instruction sets critical to
certain workloads.
In recent years, the gap has narrowed. Modern processors often borrow elements from both
philosophies, implementing a mix of simple and complex instructions to achieve optimal
performance.
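The contrast between the two philosophies can be illustrated with a toy Python sketch (not real ISAs; the instruction names and the task mem[C] = mem[A] + mem[B] are invented for illustration):

```python
# Toy illustration of CISC-style vs RISC-style execution of the same task:
# mem[C] = mem[A] + mem[B]. Instruction names are hypothetical.

mem = {"A": 3, "B": 4, "C": 0}
regs = {}

# CISC-style: one complex instruction touches memory directly.
def add_mem(dst, src1, src2):
    mem[dst] = mem[src1] + mem[src2]

# RISC-style: only loads and stores touch memory; arithmetic is register-only.
def load(r, addr):   regs[r] = mem[addr]
def add(rd, r1, r2): regs[rd] = regs[r1] + regs[r2]
def store(r, addr):  mem[addr] = regs[r]

add_mem("C", "A", "B")            # CISC: 1 multifunctional instruction

load("r1", "A"); load("r2", "B")  # RISC: 4 simple instructions,
add("r3", "r1", "r2")             # each easy to decode and pipeline
store("r3", "C")

print(mem["C"])  # 7
```

The CISC path does the job in one instruction that the hardware must decode and sequence internally; the RISC path uses four uniform instructions that a compiler schedules and a pipeline can overlap.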
