
When the CPU uses multiple general-purpose registers instead of a single accumulator register, the organization is known as General register-based CPU organization. In this type of organization, the computer uses two or three address fields in its instruction format. Each address field may specify a general register or a memory word. If many CPU registers are available for heavily used variables and intermediate results, memory references can be avoided much of the time, which greatly increases program execution speed and reduces program size.

For example:
MULT R1, R2, R3
This is an arithmetic multiplication instruction written in assembly language. It
uses three address fields: R1, R2, and R3. The meaning of this instruction is:
R1 <-- R2 * R3
This instruction can also be written using only two address fields as
MULT R1, R2
In this instruction, the destination register is the same as one of the source registers, which means the operation performed is
R1 <-- R1 * R2
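To make the three-address and two-address forms concrete, here is a minimal Python sketch (not tied to any real instruction set; register names and values are invented for illustration) that models a small register file and executes both forms of MULT:

```python
# Minimal register-file model: register names map to values.
registers = {"R1": 0, "R2": 6, "R3": 7}

def mult3(dest, src1, src2):
    """Three-address form: dest <- src1 * src2."""
    registers[dest] = registers[src1] * registers[src2]

def mult2(dest, src):
    """Two-address form: dest <- dest * src (destination doubles as a source)."""
    registers[dest] = registers[dest] * registers[src]

mult3("R1", "R2", "R3")   # R1 <- R2 * R3  -> 42
mult2("R1", "R2")         # R1 <- R1 * R2  -> 252
print(registers["R1"])    # 252
```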

Features of a General Register based CPU organization:

Registers: In this organization, the CPU contains a set of registers, which are small,
high-speed memory locations used to store data that is being processed by the CPU.
The general-purpose registers can be used to store any type of data, including
integers, floating-point numbers, addresses, and control information.
Operand access: The CPU accesses operands directly from the registers, rather than
having to load them from memory each time they are needed. This can significantly
improve performance, as register access is much faster than memory access.
Data processing: The CPU can perform arithmetic and logical operations directly
on the data stored in the registers. This eliminates the need to transfer data between
the registers and memory, which can further improve performance.
Instruction format: The instruction format used in a General Register based CPU
typically includes fields for specifying the operands and operation to be performed.
The operands are identified by register numbers, rather than memory addresses.
Context switching: Context switching in a General Register based CPU involves
saving the contents of the registers to memory, and then restoring them when the
process resumes. This is necessary to allow multiple processes to share the CPU.
The advantages of General register-based CPU organization –
 The efficiency of the CPU increases because a large number of registers is available for operands and intermediate results.
 Less memory space is used to store the program, since the instructions are written in a compact way.
The disadvantages of General register-based CPU organization –
 Care should be taken to avoid unnecessary use of registers; compilers therefore need to be more intelligent in this respect.
 Since a large number of registers is used, extra hardware cost is incurred in this organization.

The computers which use Stack-based CPU Organization are based on a data structure called a stack. The stack is a list of data words. It uses the Last In, First Out (LIFO) access method, which is the most common access method in most CPUs. A register known as the Stack Pointer (SP) stores the address of the topmost element of the stack. In this organization, ALU operations are performed on stack data, which means both operands are always taken from the stack; after manipulation, the result is placed back on the stack.
The two main operations performed on the stack operands are Push and Pop. These two operations are performed from one end only (the top of the stack).
1. Push –
This operation inserts one operand at the top of the stack and increments the stack pointer register. The format of the PUSH instruction is:
PUSH
// Increment SP by 1
SP <-- SP + 1

// Store the content of the specified memory address
// at the location pointed to by SP, i.e., at the top of the stack
M[SP] <-- (memory address)

2. Pop –
This operation deletes one operand from the top of the stack and decrements the stack pointer register. The format of the POP instruction is:
POP
// Transfer the content of the top of the stack (the word pointed to by SP)
// into the specified memory location
(memory address) <-- M[SP]

// Decrement SP by 1
SP <-- SP - 1
It moves the data word at the top of the stack to the specified address and removes it from the stack.
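As an illustration of how a stack-organized CPU evaluates an expression, here is a small Python sketch (hypothetical operation names, not a real ISA) of a stack machine computing (3 + 4) * 5 using only push, pop, and stack-based ALU operations:

```python
stack = []          # the operand stack; the implicit SP is len(stack) - 1

def push(value):
    stack.append(value)             # SP <- SP + 1; M[SP] <- value

def pop():
    return stack.pop()              # value <- M[SP]; SP <- SP - 1

def alu(op):
    # Stack ALU operation: pop the two top operands, push the result.
    b, a = pop(), pop()
    push(op(a, b))

# Evaluate (3 + 4) * 5 in reverse Polish (postfix) order: 3 4 + 5 *
push(3)
push(4)
alu(lambda a, b: a + b)             # ADD: top of stack becomes 7
push(5)
alu(lambda a, b: a * b)             # MUL: top of stack becomes 35
print(pop())                        # 35
```

Because each ALU operation finds its operands implicitly at the top of the stack, no address fields are needed, which is why stack-machine instructions can be so short.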
The advantages of Stack-based CPU organization –
 Efficient computation of complex arithmetic expressions.
 Execution of instructions is fast because operand data are stored in
consecutive memory locations.
 Instructions are short, as they do not need an address field.
The disadvantages of Stack-based CPU organization –
 The size of the program increases.

In computer organization, addressing modes are techniques used to specify operands for instructions in a computer program. An addressing mode defines how the processor interprets the operand's address or data location during the execution of an instruction. The choice of addressing mode can significantly affect the flexibility, efficiency, and complexity of a computer's instruction set architecture.

Here are some common addressing modes:

1. **Immediate Addressing Mode:**
   - Operand is specified explicitly in the instruction.
   - Example: `MOV A, #5` (Move the immediate value 5 into register A).

2. **Register Addressing Mode:**
   - Operand is located in a processor register.
   - Example: `ADD B, C` (Add the contents of register C to register B).

3. **Direct Addressing Mode:**
   - Operand's memory address is directly specified in the instruction.
   - Example: `MOV A, 0x1000` (Move the contents of memory location 0x1000 into register A).

4. **Indirect Addressing Mode:**
   - The instruction specifies a location (a memory word or a register) that holds the address of the operand; the actual data is at the memory location given by that address.
   - Example: `MOV A, [B]` (Move the contents of the memory location whose address is stored in register B into register A).

5. **Register Indirect Addressing Mode:**
   - Similar to indirect addressing, but the operand's address is held in a register named by the instruction.
   - Example: `MOV A, [BX]` (Move the contents of the memory location whose address is in the BX register into register A).

6. **Indexed Addressing Mode:**
   - The content of an index register is added to an address (or displacement) given in the instruction to form the effective memory address.
   - Example: `MOV A, [SI + 10]` (Move the contents of the memory location at address (SI + 10) into register A).

7. **Relative Addressing Mode:**
   - The operand's address is specified relative to the current program counter or instruction pointer.
   - Example: `JUMP 20` (Jump to the instruction 20 addresses away from the current instruction).

8. **Base-Register Addressing Mode:**
   - The operand address is obtained by adding a constant value to the content of a base register.
   - Example: `LOAD R1, 100(R2)` (Load the contents of memory location (R2 + 100) into register R1).

These addressing modes provide flexibility in designing instruction sets for
various types of operations and memory access patterns. The choice of
addressing mode depends on factors such as the architecture's design goals,
performance considerations, and the complexity of the instruction set.
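The sketch below is a simplified Python model (invented toy memory contents and register names, not a real ISA) showing how a few of these modes resolve an operand differently:

```python
# Toy machine state: a small memory array and a register file.
memory = [0] * 32
registers = {"A": 0, "B": 12, "SI": 8}
memory[12] = 99        # word at address 12
memory[18] = 77        # word at address SI + 10 = 18

def immediate(value):            # operand is the value itself
    return value

def register(reg):               # operand is in a register
    return registers[reg]

def direct(address):             # instruction gives the operand's address
    return memory[address]

def register_indirect(reg):      # register holds the operand's address
    return memory[registers[reg]]

def indexed(reg, offset):        # effective address = index register + offset
    return memory[registers[reg] + offset]

print(immediate(5))              # 5   (like MOV A, #5)
print(register("B"))             # 12  (like MOV A, B)
print(direct(12))                # 99  (like MOV A, 12)
print(register_indirect("B"))    # 99  (like MOV A, [B])
print(indexed("SI", 10))         # 77  (like MOV A, [SI + 10])
```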

Data transfer instructions move data from one place in the computer to another
without changing the data content. The most common transfers are between
memory and processor registers, between processor registers and input or
output, and between the processor registers themselves.

The load instruction is used to transfer data from memory to a processor register, usually an accumulator. The store instruction is used to transfer data from a register to memory. The move instruction is used to transfer data from one register to another; it has also been used for transfers between CPU registers and memory or between two memory words. The exchange instruction swaps information between two registers or between a register and a memory word. The input and output instructions transfer data between processor registers and input or output terminals. The push and pop instructions transfer data between processor registers and a memory stack.
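As a rough illustration (toy register and memory names, not tied to any real machine), these transfers can be modeled in Python like this:

```python
registers = {"AC": 0, "R1": 5, "R2": 9}
memory = {0x100: 42}
stack = []

def load(reg, addr):        # register <- memory word
    registers[reg] = memory[addr]

def store(addr, reg):       # memory word <- register
    memory[addr] = registers[reg]

def move(dst, src):         # register <- register
    registers[dst] = registers[src]

def exchange(r1, r2):       # swap the contents of two registers
    registers[r1], registers[r2] = registers[r2], registers[r1]

def push(reg):              # memory stack <- register
    stack.append(registers[reg])

def pop(reg):               # register <- memory stack
    registers[reg] = stack.pop()

load("AC", 0x100)           # AC becomes 42
exchange("R1", "R2")        # R1 and R2 swap values
push("AC"); pop("R1")       # R1 becomes 42 via the stack
```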

Data manipulation instructions perform operations on data and provide the computational capabilities for the computer. The data manipulation instructions in a typical computer are usually divided into three basic types:
1. Arithmetic instructions
2. Logical and bit manipulation instructions
3. Shift instructions

Arithmetic Instructions

The four basic arithmetic operations are addition, subtraction, multiplication, and division. The increment instruction adds 1 to the value stored in a register or memory word. The decrement instruction subtracts 1 from a value stored in a register or memory word. The instruction "add with carry" performs the addition on two operands plus the value of the carry from the previous computation. Similarly, the "subtract with borrow" instruction subtracts two words and a borrow which may have resulted from a previous subtract operation. The negate instruction forms the 2's complement of a number, effectively reversing the sign of an integer when represented in signed-2's complement form.
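A short Python sketch (assuming 8-bit words purely for illustration) of how add-with-carry and negate behave on fixed-width values:

```python
WORD = 0xFF          # assume an 8-bit word for illustration

def add_with_carry(a, b, carry_in):
    """Add two 8-bit words plus the carry from a previous addition."""
    total = a + b + carry_in
    return total & WORD, 1 if total > WORD else 0   # (result, carry out)

def negate(a):
    """Form the 2's complement of an 8-bit word, reversing its sign."""
    return (~a + 1) & WORD

lo, carry = add_with_carry(0xF0, 0x20, 0)      # low word: 0x10, carry out = 1
hi, _     = add_with_carry(0x01, 0x00, carry)  # carry propagates into the high word
print(hex(lo), hex(hi))                        # 0x10 0x2
print(hex(negate(0x05)))                       # 0xfb, i.e. -5 in signed 2's complement
```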

Logical and Bit Manipulation Instructions

Logical instructions perform binary operations on strings of bits stored in registers. They are useful for manipulating individual bits or a group of bits that represent binary-coded information. The AND instruction is used to clear a bit or a selected group of bits of an operand. The OR instruction is used to set a bit or a selected group of bits of an operand. Similarly, the XOR instruction is used to selectively complement bits of an operand.
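For example, in Python (using 8-bit values for illustration), masks drive the clear/set/complement behaviour described above:

```python
value = 0b1011_0110

clear_mask      = 0b1111_0000   # AND clears the bits where the mask has 0s
set_mask        = 0b0000_1111   # OR sets the bits where the mask has 1s
complement_mask = 0b1010_1010   # XOR flips the bits where the mask has 1s

print(bin(value & clear_mask))       # 0b10110000 : low four bits cleared
print(bin(value | set_mask))         # 0b10111111 : low four bits set
print(bin(value ^ complement_mask))  # 0b11100 : selected bits complemented
```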
Shift Instructions
Shifts are operations in which the bits of a word are moved to the left or right. Shift instructions may specify logical shifts, arithmetic shifts, or rotate-type operations. In each case the shift may be to the right or to the left.
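The difference between the three kinds of right shift can be illustrated in Python (again assuming an 8-bit word for the sketch):

```python
WORD_BITS = 8
MASK = (1 << WORD_BITS) - 1

def logical_shift_right(x):
    return (x & MASK) >> 1                      # 0 is shifted into the sign bit

def arithmetic_shift_right(x):
    sign = x & 0x80                             # sign bit is preserved
    return ((x & MASK) >> 1) | sign

def rotate_right(x):
    # The bit shifted out on the right reappears in the leftmost position.
    return (((x & MASK) >> 1) | ((x & 1) << (WORD_BITS - 1))) & MASK

x = 0b1001_0011
print(bin(logical_shift_right(x)))     # 0b1001001  (vacated sign bit filled with 0)
print(bin(arithmetic_shift_right(x)))  # 0b11001001 (sign bit copied in)
print(bin(rotate_right(x)))            # 0b11001001 (equal to the arithmetic result here
                                       #  only because the sign bit and the outgoing bit are both 1)
```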

PROGRAM CONTROL
https://www.geeksforgeeks.org/types-of-program-control-instructions/
https://www.youtube.com/watch?v=OXz7wKHr0_I
RISC and CISC in Computer Organization

https://www.geeksforgeeks.org/computer-organization-risc-and-cisc/
https://www.youtube.com/watch?v=ZW1gb3h-f9k

Flynn’s taxonomy
Parallel computing is computing where the jobs are broken into discrete parts
that can be executed concurrently. Each part is further broken down into a
series of instructions. Instructions from each piece execute simultaneously on
different CPUs. The breaking up of different parts of a task among
multiple processors will help to reduce the amount of time to run a program.
Parallel systems deal with the simultaneous use of multiple computer
resources that can include a single computer with multiple processors, a
number of computers connected by a network to form a parallel processing
cluster, or a combination of both. Parallel systems are more difficult to program than computers with a single processor because the architecture of parallel computers varies widely and the processes running on multiple CPUs must be coordinated and synchronized. A further difficult problem of parallel processing is portability.

There are four categories in Flynn’s taxonomy:


1. Single Instruction Single Data (SISD): In a SISD architecture, there
is a single processor that executes a single instruction stream and
operates on a single data stream. This is the simplest type of
computer architecture and is used in most traditional computers.
2. Single Instruction Multiple Data (SIMD): In a SIMD architecture,
there is a single processor that executes the same instruction on
multiple data streams in parallel. This type of architecture is used in
applications such as image and signal processing.
3. Multiple Instruction Single Data (MISD): In a MISD architecture,
multiple processors execute different instructions on the same data
stream. This type of architecture is not commonly used in practice, as
it is difficult to find applications that can be decomposed into
independent instruction streams.
4. Multiple Instruction Multiple Data (MIMD): In a MIMD
architecture, multiple processors execute different instructions on
different data streams. This type of architecture is used in distributed
computing, parallel processing, and other high-performance
computing applications.
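As a rough conceptual illustration of the SISD versus SIMD distinction (this is only an analogy, assuming NumPy is available: the plain loop handles one data element per step, while the vectorized operation applies the same instruction to many elements at once):

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# SISD-style: a single instruction stream operating on one data element at a time.
result_sisd = np.empty_like(a)
for i in range(len(a)):
    result_sisd[i] = a[i] + b[i]

# SIMD-style: the same operation applied across many data elements in one call.
result_simd = a + b

assert np.array_equal(result_sisd, result_simd)
```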
PIPELINING
https://www.geeksforgeeks.org/computer-organization-and-architecture-pipelining-set-1-execution-stages-and-throughput/?ref=lbp

An arithmetic pipeline and an instruction pipeline are concepts related to the organization and execution of operations within a processor. Let's delve into each concept:

### Arithmetic Pipeline:

An arithmetic pipeline is a specialized form of a pipeline that focuses on executing arithmetic operations. In modern processors, the execution of arithmetic operations is divided into multiple stages to improve throughput. Here are the basic stages of an arithmetic pipeline:

1. **Fetch Operand Stage:**
   - Retrieve the operands from the register file or memory.

2. **Decode Stage:**
   - Decode the operation to be performed.

3. **Execute Stage:**
   - Perform the actual arithmetic operation (addition, subtraction, multiplication, etc.).

4. **Write Back Stage:**
   - Write the result back to the register file.

Each stage in the arithmetic pipeline can be working on a different instruction
simultaneously. This parallelism allows for faster execution of arithmetic
operations, enhancing the overall performance of the processor.
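A hedged sketch of why this overlap pays off: with a k-stage pipeline in which every stage takes one clock cycle, n operations complete in roughly k + (n - 1) cycles instead of n * k. The toy calculation below is idealized and ignores hazards and stalls:

```python
def pipelined_cycles(n_ops, n_stages):
    # The first operation takes n_stages cycles; each later one finishes one cycle apart.
    return n_stages + (n_ops - 1)

def unpipelined_cycles(n_ops, n_stages):
    return n_ops * n_stages

n, k = 100, 4   # 100 arithmetic operations through a 4-stage pipeline
print(unpipelined_cycles(n, k))                           # 400 cycles
print(pipelined_cycles(n, k))                             # 103 cycles
print(unpipelined_cycles(n, k) / pipelined_cycles(n, k))  # speedup of roughly 3.9x
```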

### Instruction Pipeline:

An instruction pipeline, on the other hand, is a broader concept that encompasses the execution of all types of instructions, including arithmetic, logic, and control instructions. The instruction pipeline is designed to overlap the execution of multiple instructions, breaking down the instruction execution into stages. Here are the basic stages of an instruction pipeline:

1. **Instruction Fetch (IF):**
   - Fetch the instruction from memory.

2. **Instruction Decode (ID):**
   - Decode the instruction to determine the operation and operand.

3. **Execution (EX):**
   - Execute the operation or calculate the effective address.

4. **Memory Access (MEM):**
   - Access memory if required.

5. **Write Back (WB):**
   - Write the result back to the register file.

Each of these stages corresponds to a specific part of the instruction execution process. By allowing different instructions to be at different stages simultaneously, the processor achieves a higher throughput.
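The overlap can be visualized with a small Python sketch that prints a space-time diagram for an idealized 5-stage pipeline (no hazards or stalls assumed):

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def timing_diagram(n_instructions):
    # Row i shows which stage instruction i occupies in each clock cycle.
    total_cycles = len(STAGES) + n_instructions - 1
    for i in range(n_instructions):
        row = ["    "] * total_cycles
        for s, stage in enumerate(STAGES):
            row[i + s] = f"{stage:<4}"
        print(f"I{i + 1}: " + " ".join(row))

timing_diagram(4)
# I1: IF   ID   EX   MEM  WB
# I2:      IF   ID   EX   MEM  WB
# I3:           IF   ID   EX   MEM  WB
# I4:                IF   ID   EX   MEM  WB
```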
### Combined Pipeline:

In modern processors, arithmetic operations are often part of the overall instruction pipeline. This means that while arithmetic operations have their specialized stages within the pipeline, they share the pipeline with other types of instructions. This approach allows for a more balanced and efficient use of resources within the processor.

### Advantages of Pipelining:

- **Increased Throughput:**
- Pipelining allows for the parallel execution of multiple instructions,
improving overall throughput.

- **Resource Utilization:**
- Different stages of the pipeline can work on different instructions
concurrently, making better use of the processor's resources.

- **Reduced Cycle Time:**
  - Pipelining can reduce the overall cycle time, leading to faster instruction execution.

- **Efficient Use of Hardware:**
  - Specialized hardware for each pipeline stage can be optimized for its specific function, contributing to more efficient use of resources.

While pipelining brings numerous benefits, it also introduces challenges such as hazards (data hazards, control hazards) that need to be addressed for optimal performance.
