
Addressing Modes

Addressing Modes – The term addressing mode refers to the way in which the operand of an instruction is specified. The addressing mode specifies a rule for interpreting or modifying the address field of the instruction before the operand is actually referenced.
Addressing modes for 8086 instructions are divided into two categories:
1) Addressing modes for data
2) Addressing modes for branch
The 8086 memory addressing modes provide flexible access to memory, allowing you to easily
access variables, arrays, records, pointers, and other complex data types. The key to good
assembly language programming is the proper use of memory addressing modes.
An assembly language program instruction consists of two parts: an operation code (opcode) and an operand.

The memory address of an operand consists of two components: the starting address of the memory segment and the offset within that segment.

IMPORTANT TERMS
• Segment address: Starting address of the memory segment.
• Effective address or Offset: An offset is determined by adding any combination of three address elements: displacement, base and index.
• Displacement: An 8-bit or 16-bit immediate value given in the instruction.
• Base: Contents of the base register, BX or BP.
• Index: Contents of the index register, SI or DI.
The 8086 uses different addressing modes according to the different ways in which an operand can be specified. These addressing modes are discussed below:
• Implied mode: In implied addressing the operand is specified in the instruction itself. In this mode the data is 8 or 16 bits long and is part of the instruction. Zero-address instructions are designed with the implied addressing mode.

Example: CLC (used to reset Carry flag to 0)


• Immediate addressing mode (symbol #): In this mode the data is present in the address field of the instruction, designed like the one-address instruction format.
• Note: A limitation of the immediate mode is that the range of constants is restricted by the size of the address field.

Example: MOV AL, 35H (move the data 35H into AL register)
• Register mode: In register addressing the operand is placed in one of the 8-bit or 16-bit general purpose registers. The data is in the register that is specified by the instruction.
Here one register reference is required to access the data.

Example: MOV AX,CX (move the contents of CX register to AX register)


• Register Indirect mode: In this addressing the operand's offset is placed in any one of the registers BX, BP, SI or DI, as specified in the instruction. The effective address of the data is in the base register or an index register that is specified by the instruction.
Here one register reference and one memory reference are required to access the data.

The 8086 CPU lets you access memory indirectly through a register using the register indirect addressing modes.
• MOV AX, [BX] (move the contents of the memory location addressed by register BX to register AX)
• Auto Indexed (increment mode): The effective address of the operand is the contents of a register specified in the instruction. After accessing the operand, the contents of this register are automatically incremented to point to the next consecutive memory location: (R1)+.
Here one register reference, one memory reference and one ALU operation are required to access the data.
Example:
Add R1, (R2)+   // equivalent to:
R1 = R1 + M[R2]
R2 = R2 + d
This is useful for stepping through arrays in a loop: R2 holds the start of the array and d is the size of an element.
• Auto indexed (decrement mode): The effective address of the operand is the contents of a register specified in the instruction. Before accessing the operand, the contents of this register are automatically decremented to point to the previous memory location: -(R1).
Here one register reference, one memory reference and one ALU operation are required to access the data.
Example:
Add R1, -(R2)   // equivalent to:
R2 = R2 - d
R1 = R1 + M[R2]
Auto-decrement mode works analogously to auto-increment mode. Together the two modes can implement a stack's push and pop operations, and both are useful for implementing "Last-In-First-Out" data structures, as the sketch below illustrates.
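Auto-indexing is not one of the 8086 data addressing modes, so purely as an illustration of the register-transfer description above, here is a minimal Python sketch (the array contents and element size d are made up for the example) of auto-increment stepping through an array:

memory = [10, 20, 30, 40]   # M: the array in memory
d = 1                       # d: size of an element (one slot here)
R1 = 0                      # accumulator
R2 = 0                      # pointer register, starts at the array base

for _ in range(len(memory)):
    # Add R1, (R2)+ performs both steps below in one instruction:
    R1 = R1 + memory[R2]    # R1 = R1 + M[R2]
    R2 = R2 + d             # R2 = R2 + d (auto-increment)

print(R1)                   # 100: the array has been summed
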
• Direct addressing / Absolute addressing mode (symbol [ ]): The operand's offset is given in the instruction as an 8-bit or 16-bit displacement element. In this addressing mode the 16-bit effective address of the data is part of the instruction.
Here only one memory reference operation is required to access the data.

Example: ADD AL,[0301] //add the contents of offset address 0301 to AL


• Indirect addressing mode (symbol @ or ( )): In this mode the address field of the instruction contains the address of the effective address. Here two references are required:
1st reference to get the effective address.
2nd reference to access the data.
Based on the availability of the effective address, indirect mode is of two kinds:
1. Register Indirect: In this mode the effective address is in a register, and the corresponding register name is given in the address field of the instruction.
Here one register reference and one memory reference are required to access the data.
2. Memory Indirect: In this mode the effective address is in memory, and the corresponding memory address is given in the address field of the instruction.
Here two memory references are required to access the data.
• Indexed addressing mode: The operand's offset is the sum of the content of an index register (SI or DI) and an 8-bit or 16-bit displacement.
Example: MOV AX, [SI+05]
• Based Indexed Addressing: The operand's offset is the sum of the contents of a base register (BX or BP) and an index register (SI or DI).
Example: ADD AX, [BX+SI]
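
To summarize the offset arithmetic shared by the direct, register indirect, indexed and based indexed modes, here is a minimal Python sketch. The helper names (effective_address, physical_address) are illustrative, not part of any assembler; the rules modelled are the standard 8086 ones: the offset is a 16-bit sum of displacement, base and index, and the physical address is the segment value shifted left 4 bits plus the offset.

def effective_address(displacement=0, base=0, index=0):
    # Any combination of displacement, base (BX/BP) and index (SI/DI),
    # truncated to 16 bits as on the 8086.
    return (displacement + base + index) & 0xFFFF

def physical_address(segment, offset):
    # The 8086 shifts the segment value left 4 bits and adds the offset,
    # producing a 20-bit physical address.
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(effective_address(displacement=0x0301)))           # direct: [0301]
print(hex(effective_address(base=0x1000)))                   # register indirect: [BX]
print(hex(effective_address(index=0x0100, displacement=5)))  # indexed: [SI+05]
print(hex(effective_address(base=0x1000, index=0x0100)))     # based indexed: [BX+SI]
print(hex(physical_address(0x2000, 0x0301)))                 # 0x20301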
Based on transfer of control, the addressing modes are:
• PC relative addressing mode: PC relative addressing mode is used to implement intra-segment transfer of control. In this mode the effective address is obtained by adding a displacement to the PC:
EA = PC + address field value
PC = PC + relative value

• Base register addressing mode: Base register addressing mode is used to implement inter-segment transfer of control. In this mode the effective address is obtained by adding the base register value to the address field value:
EA = base register + address field value
PC = base register + relative value

Note:
1. Both PC-relative and base-register addressing modes are suitable for program relocation at runtime.
2. Base-register addressing mode is best suited for writing position-independent code.
Advantages of Addressing Modes
1. To give programmers facilities such as pointers, counters for loop control, indexing of data and program relocation.
2. To reduce the number of bits in the addressing field of the instruction.
Carry Look-Ahead Adder
Adders suffer from carry propagation delay, and the delay compounds in other arithmetic operations such as multiplication and division, which use several addition or subtraction steps. This is a major problem for the adder: improving the speed of addition improves the speed of all other arithmetic operations, so reducing the carry propagation delay of adders is of great importance. Different logic design approaches have been employed to overcome the carry propagation problem. One widely used approach is to employ carry look-ahead, which solves the problem by calculating the carry signals in advance, based on the input signals. This type of adder circuit is called a carry look-ahead adder.
Here a carry signal will be generated in two cases:
1. When input bits A and B are both 1.
2. When one of the two bits is 1 and the carry-in is 1.
In ripple carry adders, for each adder block, the two bits to be added are available instantly. However, each adder block waits for the carry to arrive from the previous block, so it is not possible to generate the sum and carry of any block until the input carry is known. The i-th block waits for the (i-1)-th block to produce its carry, which results in a considerable time delay: the carry propagation delay.

Consider a 4-bit ripple carry adder. The sum S3 is produced by the corresponding full adder as soon as the input signals are applied to it, but the carry input C4 does not settle at its final steady-state value until C3 is available at its steady-state value. Similarly C3 depends on C2, and C2 on C1. The carry therefore has to propagate through all the stages before the output S3 and carry C4 settle at their final steady-state values.
The propagation time is equal to the propagation delay of each adder block multiplied by the number of adder blocks in the circuit. For example, if each full adder stage has a propagation delay of 20 nanoseconds, then S3 will reach its final correct value after 60 (20 × 3) nanoseconds. The situation gets worse as we extend the number of stages to add more bits.
Carry Look-ahead Adder:
A carry look-ahead adder reduces the propagation delay by introducing more complex hardware. In this design, the ripple carry design is suitably transformed such that the carry logic over fixed groups of bits of the adder is reduced to two-level logic. Let us discuss the design in detail.
Consider the full adder circuit with its corresponding truth table. We define two variables, 'carry generate' Gi and 'carry propagate' Pi, as:

Gi = Ai · Bi
Pi = Ai ⊕ Bi

The sum output and carry output can be expressed in terms of carry generate Gi and carry propagate Pi as:

Si = Pi ⊕ Ci
Ci+1 = Gi + Pi · Ci

where Gi produces a carry when both Ai and Bi are 1, regardless of the input carry, and Pi is associated with the propagation of the carry from Ci to Ci+1.
The carry output Boolean function of each stage in a 4-stage carry look-ahead adder can be expressed as:

C2 = G1 + P1·C1
C3 = G2 + P2·G1 + P2·P1·C1
C4 = G3 + P3·G2 + P3·P2·G1 + P3·P2·P1·C1
From the above Boolean equations we can observe that C4 does not have to wait for C3 and C2 to propagate; C4 is produced at the same time as C3 and C2. Since the Boolean expression for each carry output is a sum of products, each can be implemented with one level of AND gates followed by an OR gate.
The implementation of the three Boolean functions for the carry outputs (C2, C3 and C4) forms the carry look-ahead carry generator.
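
To make the two-level carry logic concrete, here is a short Python sketch, a behavioural model rather than hardware, that computes C2, C3 and C4 directly from the generate and propagate signals so that each carry depends only on the input bits and C1:

def cla_4bit(a, b, c1):
    """4-bit carry look-ahead addition.
    a, b: four input bits each, index 0 = stage 1 (least significant).
    c1: carry into stage 1. Returns (sum bits, carry out)."""
    g = [x & y for x, y in zip(a, b)]   # Gi = Ai AND Bi (carry generate)
    p = [x ^ y for x, y in zip(a, b)]   # Pi = Ai XOR Bi (carry propagate)

    # Two-level sum-of-products: no carry waits for the previous stage.
    c2 = g[0] | (p[0] & c1)
    c3 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c1)
    c4 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c1)

    carries = [c1, c2, c3, c4]
    s = [p[i] ^ carries[i] for i in range(4)]   # Si = Pi XOR Ci
    cout = g[3] | (p[3] & c4)                   # carry out of stage 4
    return s, cout

# 7 + 5 = 12: sum bits (LSB first) give 1100, carry out 0
print(cla_4bit([1, 1, 1, 0], [1, 0, 1, 0], 0))   # ([0, 0, 1, 1], 0)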
Time Complexity Analysis:
We can think of a carry look-ahead adder as made up of two parts:
1. The part that computes the carry for each bit.
2. The part that adds the input bits and the carry for each bit position.
The log(n) complexity arises from the part that generates the carry, not the circuit that adds the bits.
Now, for the generation of the n-th carry bit, we need to perform an AND of (n+1) inputs. The complexity of the adder comes down to how we perform this AND operation. If we have AND gates, each with a fan-in (number of inputs accepted) of k, then we can find the AND of all the bits in log_k(n+1) gate delays. This is represented in asymptotic notation as Θ(log n).
Advantages and Disadvantages of Carry Look-Ahead Adder:
Advantages –
• The propagation delay is reduced.
• It provides the fastest addition logic.
Disadvantages –
• The carry look-ahead adder circuit gets more complicated as the number of variables increases.
• The circuit is costlier, as it involves more hardware.
NOTE:
For an n-bit carry look-ahead adder to evaluate all the carry bits, it requires [n(n + 1)]/2 AND gates and n OR gates.
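For example, a 4-bit carry look-ahead adder requires (4 × 5)/2 = 10 AND gates and 4 OR gates.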

Multiplication Algorithm in Signed Magnitude Representation


Multiplication of two fixed-point binary numbers in signed magnitude representation is done by a process of successive shift and add operations.

❖ In the multiplication process we consider successive bits of the multiplier, least significant bit first.
❖ If the multiplier bit is 1, the multiplicand is copied down; otherwise 0's are copied down.
❖ The numbers copied down in successive lines are shifted one position to the left from the previous number.
❖ Finally the numbers are added, and their sum forms the product.
❖ The sign of the product is determined from the signs of the multiplicand and multiplier. If they are alike, the sign of the product is positive; otherwise it is negative.
Hardware Implementation:
The following components are required for the hardware implementation of the multiplication algorithm:
1. Registers:
Two registers B and Q are used to store the multiplicand and multiplier respectively.
Register A is used to store the partial product during multiplication.
The sequence counter register (SC) is used to store the number of bits in the multiplier.
2. Flip flops:
To store the sign bits of the registers we require three flip flops (As, Bs and Qs).
Flip flop E is used to store the carry bit generated during partial product addition.
3. Complementer and parallel adder:
This hardware unit is used in calculating the partial product, i.e., it performs the additions required.
Flowchart of Multiplication:

1. Initially the multiplicand is stored in register B and the multiplier in register Q.
2. The signs of registers B (Bs) and Q (Qs) are compared using XOR functionality (i.e., if both signs are alike, the output of the XOR operation is 0, otherwise 1) and the output is stored in As (the sign of register A).
Note: Initially 0 is assigned to register A and to flip flop E. The sequence counter is initialized with the value n, where n is the number of bits in the multiplier.
3. Now the least significant bit of the multiplier is checked. If it is 1, the content of register A is added to the multiplicand (register B); the result is stored in register A, with the carry bit in flip flop E. The content of E A Q is then shifted right by one position, i.e., the content of E is shifted into the most significant bit (MSB) of A and the least significant bit of A is shifted into the most significant bit of Q.
4. If Qn = 0, only the shift-right operation on the content of E A Q is performed, in a similar fashion.
5. The content of the sequence counter is decremented by 1.
6. Check the content of the sequence counter (SC): if it is 0, end the process; the final product is present in registers A and Q. Otherwise repeat from step 3.
Example:
Multiplicand = 10111 (23₁₀)
Multiplier = 10011 (19₁₀)
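
Since the worked addition/shift table for this example is not reproduced here, the following Python sketch, a software model of the flowchart above with illustrative names, runs the E A Q shift-and-add loop and confirms the product for these values:

def multiply_magnitudes(b, q, n):
    """Shift-and-add multiplication of two n-bit magnitudes.
    Returns the 2n-bit product that ends up in registers A (high) and Q (low)."""
    a, e = 0, 0                      # partial product register A and carry flip flop E
    mask = (1 << n) - 1
    for _ in range(n):               # SC counts down from n to 0
        if q & 1:                    # Qn = 1: A = A + B, carry into E
            total = a + b
            e, a = total >> n, total & mask
        # shift E A Q right: E -> MSB of A, LSB of A -> MSB of Q
        q = (q >> 1) | ((a & 1) << (n - 1))
        a = (a >> 1) | (e << (n - 1))
        e = 0
    return (a << n) | q

p = multiply_magnitudes(0b10111, 0b10011, 5)   # 23 x 19
print(bin(p), p)                               # 0b110110101 437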

Booth Multiplication Algorithm


The Booth multiplication algorithm is a multiplication algorithm that can multiply two signed binary numbers in two's complement notation. The algorithm is an important topic in the study of computer architecture.
Booth's algorithm repeatedly adds one of two predetermined values (A and S) to a product (P) and then performs a rightward arithmetic shift on P. Let the predetermined values be A and S, and the product be P. Let the multiplicand and multiplier be m and r respectively, and let the number of bits in m and r be x and y respectively.
Booth's multiplication algorithm involves the following steps −
Step 1 − The values of A and S and the initial value of P are determined. These values should each be (x + y + 1) bits long.

• For A, the most significant x bits are filled with the value of m, and the remaining (y + 1) bits are filled with zeros.
• For S, the most significant x bits are filled with the value of (−m) in two's complement notation, and the remaining (y + 1) bits are filled with zeros.
• For P, the most significant x bits are filled with zeros. To the right of this, the value of r is appended. Then the LSB is filled with a zero.
Step 2 − The two least significant bits of P are examined.

• In case they are 01, find the value of P + A, and ignore any overflow or carry.
• In case they are 10, find the value of P + S, and ignore any overflow or carry.
• In case they are 00, use P directly in the next step.
• In case they are 11, use P directly in the next step.
Step 3 − The value obtained in the second step is arithmetically shifted one place to the right. P is now assigned the new value.
Step 4 − Steps 2 and 3 are repeated y times.
Step 5 − The LSB is dropped from P, which gives the product of m and r.
Example − Find the product of 3 × (−4), where m = 3, r = −4, x = 4 and y = 4.
A = 001100000
S = 110100000
P = 000011000
The loop has to be performed four times since y = 4.
P = 000011000
Here, the last two bits are 00.
Therefore, P = 000001100 after performing the arithmetic right shift.
P = 000001100
Here, the last two bits are 00.
Therefore, P = 000000110 after performing the arithmetic right shift.
P = 000000110
Here, the last two bits are 10.
Therefore, P = P + S, which is 110100110.
P = 111010011 after performing the arithmetic right shift.
P = 111010011
Here, the last two bits are 11.
Therefore, P = 111101001 after performing the arithmetic right shift.
The product is 11110100 after dropping the LSB from P.
11110100 is the binary representation of -12.
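
The walkthrough can also be checked mechanically. Here is a minimal Python sketch that transcribes Steps 1 to 5 directly (the names A, S, P and the widths x and y follow the description above; this is an illustration, not an optimized multiplier):

def booth_multiply(m, r, x, y):
    """Booth's algorithm: multiply two's-complement m and r.
    x, y: bit widths of m and r. Returns the (x + y)-bit product pattern."""
    width = x + y + 1
    mask = (1 << width) - 1
    a = (m & ((1 << x) - 1)) << (y + 1)       # A: m, then y+1 zeros
    s = ((-m) & ((1 << x) - 1)) << (y + 1)    # S: -m in two's complement, then y+1 zeros
    p = (r & ((1 << y) - 1)) << 1             # P: x zeros, r, trailing 0

    for _ in range(y):
        if (p & 0b11) == 0b01:                # last two bits 01: P = P + A
            p = (p + a) & mask
        elif (p & 0b11) == 0b10:              # last two bits 10: P = P + S
            p = (p + s) & mask
        sign = p & (1 << (width - 1))         # arithmetic right shift keeps the sign
        p = (p >> 1) | sign
    return (p >> 1) & ((1 << (x + y)) - 1)    # drop the LSB

print(bin(booth_multiply(3, -4, 4, 4)))       # 0b11110100, i.e. -12 in 8 bits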

Array Multiplier in Digital Logic


An array multiplier is a digital combinational circuit used for multiplying two binary
numbers by employing an array of full adders and half adders. This array is used for the nearly
simultaneous addition of the various product terms involved. To form the various product
terms, an array of AND gates is used before the Adder array.
Checking the bits of the multiplier one at a time and forming partial products is a sequential
operation that requires a sequence of add and shift micro-operations. The multiplication of two
binary numbers can be done with one micro-operation by means of a combinational circuit that
forms the product bits all at once. This is a fast way of multiplying two numbers since all it
takes is the time for the signals to propagate through the gates that form the multiplication
array. However, an array multiplier requires a large number of gates, and for this reason it was
not economical until the development of integrated circuits.
For the implementation of an array multiplier with a combinational circuit, consider the multiplication of two 2-bit numbers. The multiplicand bits are b1 and b0, the multiplier bits are a1 and a0, and the product is c3 c2 c1 c0.

Assuming A = a1a0 and B = b1b0, the various bits of the final product term P can be written as:
1. P(0) = a0·b0
2. P(1) = a1·b0 + b1·a0
3. P(2) = a1·b1 + c1, where c1 is the carry generated during the addition for the P(1) term.
4. P(3) = c2, where c2 is the carry generated during the addition for the P(2) term.
For the above multiplication, an array of four AND gates is required to form the various product terms like a0·b0, and then an adder array is required to calculate the sums involving the product terms and carry combinations mentioned in the above equations in order to get the final product bits.
1. The first partial product is formed by multiplying a0 by b1 and b0. The multiplication of two bits such as a0 and b0 produces a 1 if both bits are 1; otherwise it produces a 0. This is identical to an AND operation and can be implemented with an AND gate.
2. The first partial product is thus formed by means of two AND gates.
3. The second partial product is formed by multiplying a1 by b1 b0 and is shifted one position to the left.
4. The two partial products are added with two half-adder (HA) circuits. Usually there are more bits in the partial products, and it is then necessary to use full adders to produce the sum.
5. Note that the least significant bit of the product does not have to go through an adder, since it is formed by the output of the first AND gate.
A combinational binary multiplier with more bits can be constructed in a similar fashion. Each bit of the multiplier is ANDed with each bit of the multiplicand, in as many levels as there are bits in the multiplier. The binary output of each level of AND gates is added in parallel with the partial product of the previous level to form a new partial product. The last level produces the final product. For j multiplier bits and k multiplicand bits we need j × k AND gates and (j − 1) k-bit adders to produce a product of (j + k) bits.
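
As a check on the 2-bit equations above, here is a small Python sketch that models the gate network behaviourally, four AND gates plus two half adders, rather than reproducing the figure's exact wiring:

def half_adder(x, y):
    return x ^ y, x & y          # (sum, carry)

def array_multiply_2x2(a1, a0, b1, b0):
    """2x2 array multiplier: returns the product bits (c3, c2, c1, c0)."""
    # AND-gate array forms the partial product bits
    p00, p01 = a0 & b0, a0 & b1  # first partial product (a0 times b1 b0)
    p10, p11 = a1 & b0, a1 & b1  # second partial product (a1 times b1 b0)

    c0 = p00                     # LSB comes straight from the first AND gate
    c1, carry1 = half_adder(p01, p10)
    c2, carry2 = half_adder(p11, carry1)
    c3 = carry2
    return c3, c2, c1, c0

print(array_multiply_2x2(1, 1, 1, 1))   # 3 x 3 = 9 -> (1, 0, 0, 1)
print(array_multiply_2x2(1, 0, 1, 1))   # 2 x 3 = 6 -> (0, 1, 1, 0)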

Fixed Point Representation


Real numbers have a fractional component. This article explains the real number
representation method using fixed points. In digital signal processing (DSP) and gaming
applications, where performance is usually more important than precision, fixed point data
encoding is extensively used.
The Binary Point: Fractional values such as 26.5 are represented using the binary point
concept. The decimal point in a decimal numeral system and a binary point are comparable. It
serves as a divider between a number’s integer and fractional parts.
For instance, the weight of the coefficient 6 in the number 26.5 is 10⁰, or 1. The weight of the coefficient 5 is 10⁻¹ (5/10 = 1/2 = 0.5).
2 × 10¹ + 6 × 10⁰ + 5 × 10⁻¹ = 26.5
2 × 10 + 6 × 1 + 0.5 = 26.5
A "binary point" can be created using our binary representation and the same decimal point concept. A binary point, like in the decimal system, represents the coefficient of the expression 2⁰ = 1. The weight of each digit (or bit) to the left of the binary point is 2⁰, 2¹, 2², and so forth. The digits (or bits) to the right of the binary point have weights of 2⁻¹, 2⁻², 2⁻³, and so on.
For illustration, the number 11010.1₂ represents the value:
11010.1₂
= 1 × 2⁴ + 1 × 2³ + 0 × 2² + 1 × 2¹ + 0 × 2⁰ + 1 × 2⁻¹
= 16 + 8 + 2 + 0.5
= 26.5
Shifting Pattern:
Shifting an integer right by one bit in a binary system is equivalent to dividing it by two. Since an integer has no fractional portion, we cannot represent a digit to the right of the binary point, so this shifting operation is an integer division.
• A number is always divided by two when its bit pattern is shifted to the right by one bit.
• A number is multiplied by two when it is shifted left by one bit.
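
In Python, for example (plain integer shifts on a non-negative value):

n = 26
print(n >> 1)    # 13: a right shift by one bit divides by two
print(n << 1)    # 52: a left shift by one bit multiplies by two
print(13 >> 1)   # 6: the half is lost, since integer shifting truncates the fraction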
How to write a Fixed Point Number?
Understanding fixed point number representation requires knowledge of the shifting process described above. Simply by implicitly fixing the binary point at a specific position within a numeral, we can define a fixed point number type to represent a real number in computers (or any hardware, in general). We then just use this implicit convention to express numbers.
Two parameters are all that are required to define a fixed point type theoretically:
1. The width of the number representation.
2. The binary point position within the number.
We use the notation fixed<w,b>, where 'w' stands for the total number of bits used (the width of the number) and 'b' stands for the position of the binary point, counting from the least significant bit (counting from 0).

Unsigned representation:

For example, fixed<8,3> signifies an 8-bit fixed-point number, the rightmost 3 bits of which
are fractional.
Representation of a real number:
00010.110₂
= 1 × 2¹ + 1 × 2⁻¹ + 1 × 2⁻²
= 2 + 0.5 + 0.25
= 2.75
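
A fixed<w,b> value is simply an integer read with an implicit scale factor of 2⁻ᵇ. The helpers below are a minimal Python sketch of this convention; to_fixed and from_fixed are illustrative names, not a standard API:

def to_fixed(value, w, b):
    """Encode a non-negative real value as an unsigned fixed<w,b> bit pattern."""
    raw = round(value * (1 << b))            # scale by 2^b and round
    assert 0 <= raw < (1 << w), "out of range for fixed<w,b>"
    return raw

def from_fixed(raw, b):
    """Decode a fixed<w,b> bit pattern back to a real value."""
    return raw / (1 << b)

raw = to_fixed(2.75, 8, 3)                   # fixed<8,3>
print(format(raw, '08b'))                    # 00010110, i.e. 00010.110
print(from_fixed(raw, 3))                    # 2.75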
Signed representation:
Negative integers in a binary number system must be encoded using a signed number representation. In mathematics, negative numbers are denoted by a minus sign ("−") before them. In contrast, numbers in computer hardware are represented exclusively as bit sequences, with no additional symbols.
Signed binary numbers (+ve or -ve) can be represented in one of three ways:
1. Sign-Magnitude form
2. 1’s complement form
3. 2’s complement form
Sign-Magnitude form: In sign-magnitude form, the number's sign is represented by the MSB (Most Significant Bit, also called the leftmost bit), while its magnitude is given by the remaining bits (in the case of 8-bit representation, the leftmost bit is the sign bit and the remaining bits are magnitude bits).
55₁₀ = 00110111₂
−55₁₀ = 10110111₂
1's complement form: The 1's complement of a number is derived by complementing each bit in the signed binary number. Complementing a positive number in this way yields the corresponding negative number, and, similarly, complementing a negative number yields the corresponding positive number.
55₁₀ = 00110111₂
−55₁₀ = 11001000₂
2's complement form: A binary number is converted to its 2's complement by adding one to its 1's complement. The 2's complement of a positive number therefore gives the corresponding negative number, and the 2's complement of a negative number gives the corresponding positive number.
55₁₀ = 00110111₂; its 1's complement is 11001000₂
−55₁₀ = 11001000₂ + 1 = 11001001₂ (1's complement + 1 = 2's complement)

Fixed Point representation of a negative number:

Consider the number −2.5 as fixed<4,1>: width w = 4 bits, binary point at position b = 1. First represent 2.5 in binary, then find its 2's complement, and you get the binary fixed-point representation of −2.5.
2.5₁₀ = 0101₂
1's complement: 1010₂; 1010₂ + 1 = 1011₂ (1's complement + 1 = 2's complement)
−2.5₁₀ = 1011₂
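
Continuing the sketch above, the two's-complement encoding of a negative fixed-point value can be checked the same way (again an illustrative helper, not a library function):

def to_fixed_signed(value, w, b):
    """Encode a real value as a two's-complement fixed<w,b> bit pattern."""
    # Masking to w bits yields the two's-complement pattern for negatives.
    return round(value * (1 << b)) & ((1 << w) - 1)

print(format(to_fixed_signed(-2.5, 4, 1), '04b'))   # 1011
print(format(to_fixed_signed(2.5, 4, 1), '04b'))    # 0101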
1's complement representation range:
One bit is essentially used as a sign bit in 1's complement numbers, leaving only 7 bits to store the actual number in an 8-bit value.
Therefore, the biggest number is 127 (anything greater would require 8 magnitude bits, making it appear to be a negative number).
The smallest number would then seem to be −127 or −128.
1's complement:
127 = 01111111 : 1's complement is 10000000
128 = 10000000 : 1's complement is 01111111
We can see that storing −128 in 1's complement is impossible (since the top bit is unset, the pattern looks like a positive number).
The 1's complement range is −127 to 127.

2’s complement representation range:


Likewise, one bit in 2's complement numbers is effectively used as a sign bit, leaving only 7 bits to store the actual number in an 8-bit integer.
2's complement:
127 = 01111111 : 2's complement is 10000001
128 = 10000000 : 2's complement is 10000000
We can see that −128 can be stored in 2's complement.
The 2's complement range is −128 to 127.

Advantages of Fixed Point Representation:


• Fixed point numbers are indeed close relatives of integer representation.
• Because of this, fixed point numbers can be calculated using all the arithmetic operations a computer can perform on integers.
• They are just as simple and efficient as computer integer arithmetic.
• To conduct real-number arithmetic using the fixed point format, we can reuse all the hardware designed for integer arithmetic.

Disadvantages of Fixed Point Representation:


• Loss of range and precision compared with floating point number representations.
