
Addressing modes specify the way in which an operand's effective address is represented in a given instruction. Some addressing modes allow referring to a large range of locations efficiently, such as a linear array of addresses or a list of addresses.

Types of Addressing Modes

The various types of addressing modes are discussed below:

Implied mode: In this mode, the operand is specified implicitly in the definition of the instruction. An example of an implied mode instruction is CMA (complement accumulator). Here, the operand (the accumulator) is implied by the instruction itself.

Immediate addressing mode: In this mode, the instruction contains both the opcode and the operand. It can be said that an instruction that uses the immediate addressing mode contains an operand field in place of an address field. The operation, as well as the operand, is mentioned in the instruction. An example of an immediate addressing mode instruction is ADD 10. Here, ADD, which is the operation, and 10, which is the operand, are specified.

Register mode: In this mode, the instruction specifies a register. This register stores the operand.
An example of register mode instruction is:

AC = AC + [R]

This will add the operand stored at register R to the operand stored in the accumulator.

Register indirect mode: In this mode, the instruction specifies a register. This register stores the
effective address of the operand. An instruction that uses register indirect addressing mode is:

AC = AC + [[R]]

Direct addressing mode: In this mode, the instruction specifies an address. This address is the
address of the operand. An example of a direct addressing mode instruction is:

AC = AC + [X]

This will add the operand stored at address X to the operand stored in the accumulator. This mode
is also referred to as absolute addressing mode.

Indirect addressing Mode: In this mode, the instruction specifies an address. The memory
location specified by the address contains the address of the operand. An example of an indirect
addressing mode instruction is:

AC = AC + [[X]]

Auto-increment/decrement mode: In this mode, the instruction specifies a register which points to a memory address that contains the operand. After the address stored in the register is accessed, it is incremented or decremented, as specified. The next operand is then found at the new address stored in the register.
Relative address mode: In this mode, the contents of the address field are added to the contents of the program counter. The result of the addition gives the address of the operand. For example, suppose the address field contains 850 and the program counter contains 20; then the operand will be at memory location 850 + 20 = 870.

Indexed addressing mode: In this mode, the address of the operand is determined by adding
the contents of the address field and the contents of the index register.

Base register addressing mode: In this mode, the address of the operand is determined by adding the contents of the address field and the contents of the base register.
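The immediate, direct, indirect, and register-indirect modes above differ only in how many lookups stand between the instruction and its operand. A minimal Python sketch makes this concrete; the memory map, register file, and values here are invented for illustration:

```python
# Hypothetical memory and register contents for the example.
memory = {100: 25, 200: 100, 300: 7}   # address -> value
registers = {"R1": 300}                 # register file
AC = 0                                  # accumulator

# Immediate: the operand itself is in the instruction.
AC = AC + 10                            # ADD #10   -> AC = 10

# Direct: the instruction holds the operand's address.
AC = AC + memory[100]                   # ADD 100   -> AC = 10 + 25 = 35

# Indirect: the addressed location holds the operand's address.
AC = AC + memory[memory[200]]           # ADD @200  -> AC = 35 + memory[100] = 60

# Register indirect: the named register holds the operand's address.
AC = AC + memory[registers["R1"]]       # ADD (R1)  -> AC = 60 + 7 = 67

print(AC)  # 67
```

Each extra level of indirection costs one more memory access, which is exactly the trade-off the modes make between flexibility and speed.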

Explain the multi-bus architecture with the help of a diagram.


In a multiple-bus system, many processors may try to access the shared memory simultaneously. To deal with this problem, a policy might be implemented that allocates the available buses to the processors making requests to memory. In particular, the policy must deal with the case where the number of requesting processors exceeds the number of buses, B. From a performance point of view, this allocation has to be performed by hardware arbiters which, as we will see, add significantly to the complexity of multiple-bus interconnection networks.
Example: PCout, R=B, MARin, Read, IncPC
WMFC
MDRoutB, R=B, IRin
R4out, R5outB, SelectA, Add, R6in, End.
1. Zero Address Instructions –

A stack-based computer does not use the address field in the instruction. To evaluate an expression, it is first converted to Reverse Polish Notation, i.e., postfix notation.

Expression: X = (A+B)*(C+D)

Postfixed : X = AB+CD+*

TOP means top of stack

M[X] is any memory location

PUSH A TOP = A

PUSH B TOP = B

ADD TOP = A+B

PUSH C TOP = C

PUSH D TOP = D

ADD TOP = C+D

MUL TOP = (C+D)*(A+B)

POP X M[X] = TOP
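The stack trace above can be run as a small Python sketch of a zero-address machine; the values A=1, B=2, C=3, D=4 are chosen here for illustration and are not from the text:

```python
stack = []
mem = {"A": 1, "B": 2, "C": 3, "D": 4}  # illustrative memory contents

def push(name): stack.append(mem[name])                      # PUSH name
def add():      b, a = stack.pop(), stack.pop(); stack.append(a + b)  # ADD
def mul():      b, a = stack.pop(), stack.pop(); stack.append(a * b)  # MUL
def pop(name):  mem[name] = stack.pop()                      # POP name

# Program for the postfix form AB+CD+*
push("A"); push("B"); add()
push("C"); push("D"); add()
mul()
pop("X")

print(mem["X"])  # (1+2)*(3+4) = 21
```

Note that every arithmetic instruction finds its operands implicitly on the top of the stack, which is why no address field is needed.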

2. One Address Instructions –

This uses an implied ACCUMULATOR register for data manipulation. One operand is in the accumulator and the other is in a register or memory location. Implied means that the CPU already knows that one operand is in the accumulator, so there is no need to specify it.

Expression: X = (A+B)*(C+D)

AC is accumulator

M[] is any memory location

M[T] is temporary location

LOAD A AC = M[A]

ADD B AC = AC + M[B]

STORE T M[T] = AC

LOAD C AC = M[C]

ADD D AC = AC + M[D]

MUL T AC = AC * M[T]

STORE X M[X] = AC
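The one-address sequence above can be simulated directly; as before, the memory contents M[A]=1, M[B]=2, M[C]=3, M[D]=4 are illustrative values, not from the text:

```python
M = {"A": 1, "B": 2, "C": 3, "D": 4}  # illustrative memory contents
AC = 0                                 # the implied accumulator

AC = M["A"]           # LOAD A
AC = AC + M["B"]      # ADD B
M["T"] = AC           # STORE T   (temporary location)
AC = M["C"]           # LOAD C
AC = AC + M["D"]      # ADD D
AC = AC * M["T"]      # MUL T
M["X"] = AC           # STORE X

print(M["X"])  # (1+2)*(3+4) = 21
```

Because the accumulator is the implicit second operand of every instruction, the partial result A+B must be spilled to the temporary location T before C+D can be computed.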
3. Two Address Instructions –

This is common in commercial computers. Here, two addresses can be specified in the instruction. Unlike one-address instructions, where the result was stored in the accumulator, here the result can be stored at different locations, but this requires more bits to represent each address.

Here destination address can also contain operand.

Expression: X = (A+B)*(C+D)

R1, R2 are registers

M[] is any memory location

MOV R1, A R1 = M[A]

ADD R1, B R1 = R1 + M[B]

MOV R2, C R2 = M[C]

ADD R2, D R2 = R2 + M[D]

MUL R1, R2 R1 = R1 * R2

MOV X, R1 M[X] = R1

4. Three Address Instructions –

This has three address fields to specify a register or a memory location. Programs created are much shorter in size, but the number of bits per instruction increases. These instructions make the creation of programs much easier, but this does not mean that the programs will run much faster, because each instruction only contains more information; each micro-operation (changing the content of a register, loading an address into the address bus, etc.) is still performed in one cycle.

Zero-address instructions: Advantages: They are simple and can be executed quickly since they
do not require any operand fetching or addressing. They also take up less memory space.

Disadvantages: They can be limited in their functionality and do not allow for much flexibility in terms
of addressing modes or operand types.

One-address instructions: Advantages: They allow for a wide range of addressing modes,
making them more flexible than zero-address instructions. They also require less memory space than
two- or three-address instructions.

Disadvantages: They can be slower to execute since they require operand fetching and addressing.

Two-address instructions: Advantages: They allow for more complex operations and can be
more efficient than one-address instructions since they allow for two operands to be processed in a
single instruction. They also allow for a wide range of addressing modes.
Disadvantages: They require more memory space than one-address instructions and can be slower to
execute since they require operand fetching and addressing.

Three-address instructions: Advantages: They allow for even more complex operations and
can be more efficient than two-address instructions since they allow for three operands to be
processed in a single instruction. They also allow for a wide range of addressing modes.

Disadvantages: They require even more memory space than two-address instructions and can be
slower to execute since they require operand fetching and addressing.
Write short notes on the following:

Rounding: Rounding is almost unavoidable when reporting many computations – especially when dividing two numbers in integer or fixed-point arithmetic; when computing mathematical functions such as square roots, logarithms, and sines; or when using a floating-point representation with a fixed number of significant digits.

Guard bits: You can eliminate the possibility of overflow by appending the appropriate number of
guard bits to a binary word. For a two’s complement signed value, the guard bits are filled with either
0’s or 1’s depending on the value of the most significant bit (MSB).
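As a sketch of the idea (the 4-bit and 8-bit widths here are arbitrary choices for the example), sign extension fills the guard bits with copies of the MSB so the two's complement value is preserved:

```python
def sign_extend(value, from_bits, to_bits):
    """Extend a two's complement value from `from_bits` wide to `to_bits` wide."""
    sign = value & (1 << (from_bits - 1))          # isolate the MSB
    if sign:
        # Fill the guard bits with 1s for a negative value.
        value |= ((1 << to_bits) - 1) ^ ((1 << from_bits) - 1)
    return value

print(bin(sign_extend(0b0101, 4, 8)))  # 0b101       (+5: guard bits stay 0)
print(bin(sign_extend(0b1011, 4, 8)))  # 0b11111011  (-5: guard bits become 1)
```

Intermediate results of additions can then grow into the guard bits without overflowing the word.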

Bit-pair recoding

This is derived from Booth's algorithm. It pairs the multiplier bits and gives one multiplier digit per pair, thus reducing the number of summands by half.
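A rough Python sketch of the recoding step, assuming an even number of multiplier bits: each pair (together with the bit to its right) yields one digit in {-2, -1, 0, +1, +2}, computed as -2·b(i+1) + b(i) + b(i-1):

```python
def bit_pair_recode(multiplier, n_bits):
    """Return the radix-4 Booth digits of a two's complement multiplier,
    least significant digit first. Assumes n_bits is even."""
    digits = []
    prev = 0  # implicit 0 to the right of bit 0
    for i in range(0, n_bits, 2):
        b0 = (multiplier >> i) & 1
        b1 = (multiplier >> (i + 1)) & 1
        digits.append(-2 * b1 + b0 + prev)
        prev = b1
    return digits

def recoded_value(digits):
    # Each digit carries a weight of 4**k, so this reconstructs the multiplier.
    return sum(d * (4 ** k) for k, d in enumerate(digits))

digits = bit_pair_recode(0b011011, 6)   # 6-bit two's complement: +27
print(digits)                           # [-1, -1, 2]
print(recoded_value(digits))            # -1 + (-1)*4 + 2*16 = 27
```

A 6-bit multiplier thus produces only three signed digits (three summands) instead of six, which is the halving the text describes.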

Performance metrics are defined as figures and data representative of an organization’s actions,
abilities, and overall quality. There are many different forms of performance metrics, including sales,
profit, return on investment, customer happiness, customer reviews, personal reviews, overall
quality, and reputation in a marketplace. Performance metrics can vary considerably when viewed
through different industries.

Performance metrics are integral to an organization's success. It is important that organizations select their chief performance metrics and focus on these areas, because these metrics help guide and gauge an organization's success. Key success factors are only useful if they are acknowledged and tracked. Business measurements must also be carefully managed to make sure that they give the right answers, and that the right questions are being asked.

A superscalar processor is designed to produce an execution rate of more than one instruction per clock cycle for a single sequential program. Superscalar processor design is defined as a set of methods that enable the central processing unit (CPU) of a computer to achieve a throughput of more than one instruction per cycle while executing a single sequential program.

While there is not a global agreement on the interpretation, superscalar design techniques involve
parallel instruction decoding, parallel register renaming, speculative execution, and out-of-order
execution. These techniques are usually employed along with complementing design techniques
including pipelining, caching, branch prediction, and multi-core in current microprocessor designs.

Superscalar processors emerged in three consecutive phases: first, the idea was conceived; then, a few architecture proposals and prototype machines appeared; and finally, commercial products reached the market.
What is multi-core architecture?
Multicore refers to an architecture in which a single physical processor incorporates the core logic of
more than one processor. A single integrated circuit is used to package or hold these processors.
These single integrated circuits are known as a die. Multicore architecture places multiple processor
cores and bundles them as a single physical processor. The objective is to create a system that can
complete more tasks at the same time, thereby gaining better overall system performance.

The concept of multicore technology is mainly centered on the possibility of parallel computing, which
can significantly boost computer speed and efficiency by including two or more central processing
units (CPUs) in a single chip. This reduces the system’s heat and power consumption. This means
much better performance with less or the same amount of energy.

The architecture of a multicore processor enables communication between all available cores to
ensure that the processing tasks are divided and assigned accurately. At the time of task completion,
the processed data from each core is delivered back to the motherboard by means of a single shared
gateway. This technique significantly enhances performance compared to a single-core processor of
similar speed.
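As a hedged illustration of the idea in Python (the prime-counting workload and the input sizes are invented for this example), a process pool spreads independent CPU-bound tasks across the available cores:

```python
from multiprocessing import Pool

def count_primes(limit):
    """Naive prime count below `limit` — a CPU-bound task that can run on its own core."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # The pool creates one worker process per available core by default;
    # map() divides the tasks among them and gathers the results back.
    with Pool() as pool:
        results = pool.map(count_primes, [10_000, 20_000, 30_000, 40_000])
    print(results)
```

The pattern mirrors the architecture described above: independent tasks are divided among the cores, and the processed results are delivered back through a single shared collection point.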

Explain the magnetic disk in detail.


Magnetic disk devices.
Magnetic disks are flat circular plates of metal or plastic, coated on both sides with iron
oxide. Input signals, which may be audio, video, or data, are recorded on the surface of a disk
as magnetic patterns or spots in spiral tracks by a recording head while the disk is rotated by
a drive unit. The heads, which are also used to read the magnetic impressions on the disk, can
be positioned anywhere on the disk with great precision. For computer data-storage
applications, a collection of as many as 20 disks (called a disk pack) is mounted vertically on
the spindle of a drive unit. The drive unit is equipped with multiple reading/writing heads.

These features give magnetic disk devices an advantage over tape recorders. A disk unit has
the ability to read any given segment of an audio or video recording or block of data without
having to pass over a major portion of its content sequentially; locating desired information
on tape may take many minutes. In a magnetic disk unit, direct access to a precise track on a
specific disk reduces retrieval time to a fraction of a second.

Magnetic disk technology was applied to data storage in 1962. The random accessibility of
data stored in disk units made these devices particularly suitable for use as auxiliary
memories in high-speed computer systems. Small, flexible plastic disks called floppy disks
were developed during the 1970s. Although floppy disks cannot store as much information as
conventional disks or retrieve data as rapidly, they are adequate for applications such as
those involving minicomputers and microcomputers where low cost and ease of use are of
primary importance.
Explain the following:
1. Programmed I/O
Programmed input–output (also programmable input/output, programmed input/output,
programmed I/O, PIO) is a method of data transmission, via input/output (I/O), between a
central processing unit (CPU) and a peripheral device, such as a Parallel ATA storage device.
Each data item transfer is initiated by an instruction in the program, involving the CPU for
every transaction. In contrast, in direct memory access (DMA) operations, the CPU is
uninvolved in the data transfer.
The term can refer to either memory-mapped I/O (MMIO) or port-mapped I/O (PMIO). PMIO
refers to transfers using a special address space outside of normal memory, usually accessed
with dedicated instructions, such as IN and OUT in x86 architectures. MMIO refers to
transfers to I/O devices that are mapped into the normal address space available to the
program. PMIO was very useful for early microprocessors with small address spaces, since the
valuable resource was not consumed by the I/O devices.
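The defining feature of programmed I/O — the CPU polling a status register and moving every word itself — can be sketched in Python. The device model here (a status flag plus a one-word data register) is hypothetical, standing in for real device registers:

```python
class FakeDevice:
    """Hypothetical peripheral with a status register and a data register."""
    def __init__(self, data):
        self._data = list(data)

    @property
    def status_ready(self):
        # Status register: is a word of data available?
        return bool(self._data)

    def read_data_register(self):
        # Data register: each read hands over one word.
        return self._data.pop(0)

device = FakeDevice([0x41, 0x42, 0x43])
buffer = []
while device.status_ready:                  # busy-wait: the CPU does nothing else
    buffer.append(device.read_data_register())

print([hex(b) for b in buffer])  # ['0x41', '0x42', '0x43']
```

The busy-wait loop is exactly what interrupt-driven I/O and DMA (discussed next) are designed to eliminate.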

2. Interrupt-driven I/O

Interrupt-driven I/O is an alternative scheme for dealing with I/O. Interrupt I/O is a way of controlling input/output activity whereby a peripheral or terminal that needs to make or receive a data transfer sends a signal. This causes a program interrupt to be set. At a time appropriate to the priority level of the I/O interrupt relative to the total interrupt system, the processor enters an interrupt service routine. The function of the routine will depend upon the system of interrupt levels and priorities that is implemented in the processor. The interrupt technique requires more complex hardware and software, but makes far more efficient use of the computer's time and capacities.

Explain DMA (Direct Memory Access).


Direct Memory Access (DMA) is a capability provided by some computer bus architectures
that enables data to be sent directly from an attached device, such as a disk drive, to the
main memory on the computer’s motherboard. The microprocessor, or central processing
unit (CPU), is freed from involvement with the data transfer, speeding up overall computer
operation. DMA enables devices – such as disk drives, external memory, graphics cards,
network cards and sound cards – to share and receive data from the main memory in a
computer. It does this while still allowing the CPU to perform other tasks.
Without a process such as DMA, the computer’s CPU becomes preoccupied with data
requests from an attached device and is unable to perform other operations during that time.
With DMA, a CPU initiates a data transfer with an attached device and can still perform other
operations while the data transfer is in progress. DMA enables a computer to transfer data to
and from devices with less CPU overhead.
What is the concept of operand forwarding?

Operand forwarding (or data forwarding) is an optimization in pipelined CPUs to limit the performance deficits that occur due to pipeline stalls. A data hazard can lead to a pipeline stall when the current operation has to wait for the result of an earlier operation which has not yet finished.
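A toy Python sketch of the decision a forwarding unit makes; the encoding of an instruction as a (dest, src1, src2) register tuple is invented for illustration:

```python
# i1 occurs before i2 in program order; both are modelled as register tuples.
i1 = ("R2", "R5", "R3")   # R2 <- R5 + R3
i2 = ("R4", "R2", "R3")   # R4 <- R2 + R3  (reads R2 before i1's write-back)

def needs_forwarding(producer, consumer):
    """True if the consumer reads the producer's destination (a RAW dependency)."""
    dest, *_ = producer
    _, src1, src2 = consumer
    return dest in (src1, src2)

if needs_forwarding(i1, i2):
    # Instead of stalling until i1 completes write-back, the ALU result of i1
    # is routed straight to i2's ALU input in the cycle it is produced.
    print("forward ALU result of i1 to i2")
```

In hardware, this check is a set of comparators between pipeline-stage registers, and the "routing" is a multiplexer in front of the ALU inputs.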

What is meant by hazards? Explain the types of hazards.


In the domain of central processing unit (CPU) design, hazards are problems with the
instruction pipeline in CPU microarchitectures when the next instruction cannot execute in
the following clock cycle, and can potentially lead to incorrect computation results.
Three situations in which a data hazard can occur:

1. Read after write (RAW), a true dependency
2. Write after read (WAR), an anti-dependency
3. Write after write (WAW), an output dependency

Read after read (RAR) is not a hazard case.

Consider two instructions i1 and i2, with i1 occurring before i2 in program order.

Read after write (RAW)


(i2 tries to read a source before i1 writes to it) A read after write (RAW) data hazard refers to
a situation where an instruction refers to a result that has not yet been calculated or
retrieved. This can occur because even though an instruction is executed after a prior
instruction, the prior instruction has been processed only partly through the pipeline.
For example:
I1. R2 <- R5 + R3
I2. R4 <- R2 + R3
The first instruction calculates a value to be saved in register R2, and the second is going to use this value to compute a result for register R4. However, in a pipeline, when operands are fetched for the second operation, the results from the first have not yet been saved, and hence a data dependency occurs.

Write after read (WAR)


(i2 tries to write a destination before it is read by i1) A write after read (WAR) data hazard
represents a problem with concurrent execution.

For example:
I1. R4 <- R1 + R5
I2. R5 <- R1 + R2
In any situation with a chance that i2 may finish before i1 (i.e., with concurrent execution), it
must be ensured that the result of register R5 is not stored before i1 has had a chance to
fetch the operands.

Write after write (WAW)


(i2 tries to write an operand before it is written by i1) A write after write (WAW) data hazard
may occur in a concurrent execution environment
For example:
I1. R2 <- R4 + R7
I2. R2 <- R1 + R3
The write back (WB) of i2 must be delayed until i1 finishes executing.
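All three hazard types can be checked mechanically from each instruction's destination and source registers. This Python sketch (using an invented (destination, set-of-sources) encoding) classifies the three examples above:

```python
def classify_hazard(i1, i2):
    """Classify data hazards between i1 (earlier) and i2 (later).
    Each instruction is (destination register, set of source registers)."""
    d1, s1 = i1
    d2, s2 = i2
    hazards = []
    if d1 in s2:
        hazards.append("RAW")   # i2 reads what i1 writes (true dependency)
    if d2 in s1:
        hazards.append("WAR")   # i2 writes what i1 reads (anti-dependency)
    if d1 == d2:
        hazards.append("WAW")   # both write the same register (output dependency)
    return hazards or ["none"]

print(classify_hazard(("R2", {"R5", "R3"}), ("R4", {"R2", "R3"})))  # ['RAW']
print(classify_hazard(("R4", {"R1", "R5"}), ("R5", {"R1", "R2"})))  # ['WAR']
print(classify_hazard(("R2", {"R4", "R7"}), ("R2", {"R1", "R3"})))  # ['WAW']
```

A real pipeline's hazard-detection unit performs essentially these register comparisons in hardware and decides whether to stall, forward, or rename.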
