
Unit 3

Block Diagram of Hardware for Addition and Subtraction


Floating-Point Arithmetic
• Fixed-point representation has limitations. Very large numbers cannot be
represented, nor can very small fractions. Furthermore, the fractional
part of the quotient in a division of two large numbers could be lost.
• For decimal numbers, we get around this limitation by using scientific
notation. Thus, 976,000,000,000,000 can be represented as 9.76 × 10^14,
and 0.0000000000000976 can be represented as 9.76 × 10^-14.
• This same approach can be taken with binary numbers. We can
represent a number in the form ±S × B^±E, where S is the significand,
B is the base, and E is the exponent.
Contd..
• This number can be stored in a binary word with three fields:
■ Sign: plus or minus
■ Significand S
■ Exponent E
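The three fields above can be seen by unpacking a real 32-bit value. Below is a minimal sketch (Python is our choice here; the slides give no code) that extracts the sign, biased exponent, and significand fields of the IEEE 754 single format discussed on the following slides:

```python
import struct

def decode_fields(x: float):
    """Unpack a 32-bit float into its sign, exponent, and significand."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                 # 1-bit sign field
    exponent = (bits >> 23) & 0xFF    # 8-bit exponent, biased by 127
    fraction = bits & 0x7FFFFF        # 23 stored significand bits
    # For normalized numbers the significand is 1.fraction
    significand = 1 + fraction / 2**23
    return sign, exponent - 127, significand

print(decode_fields(9.5))   # (0, 3, 1.1875), since 9.5 = +1.1875 * 2^3
```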
Contd..
• Any floating-point number can be expressed in many ways; numbers are
typically stored in normalized form, with a single nonzero digit before
the radix point.
• The range of numbers that can be represented in a 32-bit word is
limited by the sizes of the significand and exponent fields.
IEEE 754 Formats
Floating-Point Arithmetic
• A floating-point operation may produce one of these conditions:
Exponent overflow: A positive exponent exceeds the maximum
possible exponent value. In some systems, this may be designated as
+∞ or - ∞.
Exponent underflow: A negative exponent is less than the minimum
possible exponent value (e.g., -200 is less than -127). This means that
the number is too small to be represented, and it may be reported as 0.
Floating-Point Numbers and Arithmetic Operations
Addition and Subtraction
• In floating-point arithmetic, addition and subtraction are more
complex than multiplication and division because of the need for
alignment.
• There are four basic phases of the algorithm for addition and
subtraction:
1. Check for zeros.
2. Align the significands.
3. Add or subtract the significands.
4. Normalize the result.
Contd..
• Phase 1. Zero check: Because addition and subtraction are identical
except for a sign change, the process begins by changing the sign of
the subtrahend if it is a subtract operation. Next, if either operand is
0, the other is reported as the result.
• Phase 2. Significand alignment: The next phase is to manipulate the
numbers so that the two exponents are equal
Contd..
• Phase 3. Addition: Next, the two significands are added together,
taking into account their signs. Because the signs may differ, the
result may be 0. There is also the possibility of significand overflow by
1 digit. If so, the significand of the result is shifted right and the
exponent is incremented. An exponent overflow could occur as a
result; this would be reported and the operation halted.
• Phase 4. Normalization: The final phase normalizes the result.
Normalization consists of shifting significand digits left until the most
significant digit (bit, or 4 bits for base-16 exponent) is nonzero. Each
shift causes a decrement of the exponent and thus could cause an
exponent underflow. Finally, the result must be rounded off and then
reported. We defer a discussion of rounding until after a discussion of
multiplication and division.
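The four phases above can be sketched in code. This Python illustration is ours, not from the slides; it assumes signed decimal integer significands with a 4-digit precision, so a value is s × 10^e:

```python
DIGITS = 4   # assumed significand precision, in decimal digits

def fp_add(s1, e1, s2, e2, base=10):
    """Four-phase addition of values s1*base**e1 and s2*base**e2."""
    # Phase 1: zero check -- if either operand is 0, report the other
    if s1 == 0:
        return s2, e2
    if s2 == 0:
        return s1, e1
    # Phase 2: align -- shift the smaller-exponent significand right
    while e1 < e2:
        s1 = int(s1 / base); e1 += 1   # each right shift loses one digit
    while e2 < e1:
        s2 = int(s2 / base); e2 += 1
    # Phase 3: add the significands, signs carried in the values
    s, e = s1 + s2, e1
    if s == 0:
        return 0, 0
    if abs(s) >= base ** DIGITS:       # significand overflow by 1 digit
        s = int(s / base); e += 1      # shift right, increment exponent
    # Phase 4: normalize -- shift left until the leading digit is nonzero
    while abs(s) < base ** (DIGITS - 1):
        s *= base; e -= 1              # each shift decrements the exponent
    return s, e

# 123.4 + 5.678: the smaller operand loses digits during alignment
print(fp_add(1234, -1, 5678, -3))   # (1290, -1), i.e. 129.0
```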
Floating-Point Addition and Subtraction (Z ← X ± Y)
Multiplication and Division
• Floating-point multiplication and division are much simpler processes
than addition and subtraction.
• We first consider multiplication, if either operand is 0, 0 is reported as
the result.
• The next step is to add the exponents. If the exponents are stored in
biased form, the exponent sum would have doubled the bias. Thus,
the bias value must be subtracted from the sum. The result could be
either an exponent overflow or underflow, which would be reported,
ending the algorithm.
Contd..
• If the exponent of the product is within the proper range, the next
step is to multiply the significands, taking into account their signs.
• The multiplication is performed in the same way as for integers. In
this case, we are dealing with a sign magnitude representation, but
the details are similar to those for twos complement representation.
• The product will be double the length of the multiplier and
multiplicand. The extra bits will be lost during rounding. After the
product is calculated, the result is then normalized and rounded, as
was done for addition and subtraction. Note that normalization could
result in exponent underflow
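The exponent-addition and bias-subtraction steps above can be sketched as follows (our illustration in Python; it assumes an 8-bit biased exponent as in IEEE 754 single format and significands in [1, 2)):

```python
BIAS = 127   # bias of the assumed 8-bit exponent field

def fp_mul(sign1, sig1, be1, sign2, sig2, be2):
    """Multiply two operands given as (sign bit, significand, biased exp)."""
    # Zero check: if either operand is 0, report 0
    if sig1 == 0 or sig2 == 0:
        return 0, 0, 0
    # Add the exponents; the sum carries the bias twice, so subtract it once
    be = be1 + be2 - BIAS
    if be < 1 or be > 254:             # outside the representable range
        raise OverflowError("exponent overflow or underflow")
    # Multiply the significands, taking the signs into account
    sign = sign1 ^ sign2
    sig = sig1 * sig2
    # Normalize: a product of [1,2) values lies in [1,4), so at most one shift
    if sig >= 2:
        sig /= 2
        be += 1
    return sign, sig, be

# (1.5 * 2^2) * (1.25 * 2^1) = 1.875 * 2^3, i.e. 6 * 2.5 = 15
print(fp_mul(0, 1.5, 2 + BIAS, 0, 1.25, 1 + BIAS))   # (0, 1.875, 130)
```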
Multiplication: Block diagram
Floating-Point Multiplication
ADDRESSING MODES
• The address field or fields in a typical instruction format are relatively
small.
• We would nevertheless like to be able to reference a large range of
locations in main memory or, for some systems, virtual memory. To
achieve this objective, a variety of addressing techniques has been
employed.
The most common addressing techniques, or modes:
Immediate
Direct
Indirect
Register
Register indirect
Displacement
Stack
Contd..
These modes are illustrated in the figure on the next slide. In this
section, we use the following notation:
• A = contents of an address field in the instruction
• R = contents of an address field in the instruction that refers to a
register
• EA = actual (effective) address of the location containing the
referenced operand.
• (X) = contents of memory location X or register X
Figure: Addressing Modes
Note: Key Points
• Virtually all computer architectures provide more than one of these
addressing modes.
• Different op codes will use different addressing modes.
• One or more bits in the instruction format can be used as a mode
field. The value of the mode field determines which addressing mode
is to be used.
• In a system without virtual memory, the effective address will be
either a main memory address or a register.
• In a virtual memory system, the effective address is a virtual address
or a register.
• The actual mapping to a physical address is a function of the memory
management unit (MMU) and is invisible to the programmer.
Table: Basic Addressing Modes
 Immediate Addressing:
• The simplest form of addressing is immediate addressing, in which
the operand value is present in the instruction
Operand = A
• This mode can be used to define and use constants or set initial
values of variables.
• The advantage of immediate addressing is that no memory reference
other than the instruction fetch is required to obtain the operand.
• The disadvantage is that the size of the number is restricted to the
size of the address field, which, in most instruction sets, is small
compared with the word length.
 Direct Addressing
• The address field contains the effective address of the operand:
EA = A
• The technique was common in earlier generations of computers but is
not common on contemporary architectures.
• It requires only one memory reference and no special calculation.
• The obvious limitation is that it provides only a limited address space.
 Indirect Addressing
• With direct addressing, the length of the address field is usually less
than the word length, thus limiting the address range
• One solution is to have the address field refer to the address of a
word in memory, which in turn contains a full-length address of the
operand. This is known as indirect addressing:
EA = (A)
• The advantage of this approach is that for a word length of N, an
address space of 2^N is now available.
• The disadvantage is that instruction execution requires two memory
references to fetch the operand: one to get its address and a second
to get its value.
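The difference between EA = A and EA = (A) can be shown with a toy memory (our sketch in Python; the addresses and values are made up):

```python
# Toy memory: location 100 holds 300, and location 300 holds 42
memory = {100: 300, 300: 42}

A = 100                                 # address field of the instruction
direct_operand   = memory[A]            # direct:   EA = A   -> 300
indirect_operand = memory[memory[A]]    # indirect: EA = (A) -> 42
print(direct_operand, indirect_operand) # note: indirect costs two references
```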
 Register Addressing
• Register addressing is similar to direct addressing. The only difference
is that the address field refers to a register rather than a main
memory address:
EA = R
Advantages:
1. Only a small address field is needed in the instruction, and
2. No time- consuming memory references are required.

Disadvantage:
1. The address space is very limited.
 Register Indirect Addressing
Just as register addressing is analogous to direct addressing, register
indirect addressing is analogous to indirect addressing. In both cases,
the only difference is whether the address field refers to a memory
location or a register. Thus, for register indirect address,
EA = (R)
• The advantages and limitations of register indirect addressing are
basically the same as for indirect addressing. In both cases, the
address space limitation of the address field is overcome by having
that field refer to a word length location containing an address.
• In addition, register indirect addressing uses one less memory
reference than indirect addressing.
 Displacement Addressing
• A very powerful mode of addressing combines the capabilities of direct
addressing and register indirect addressing.
• It is known by a variety of names depending on the context of its use, but
the basic mechanism is the same.
• We will refer to this as displacement addressing:
EA = A + (R)
• Displacement addressing requires that the instruction have two address
fields, at least one of which is explicit.
• The value contained in one address field (value = A) is used directly.
• The other address field, or an implicit reference based on opcode, refers to
a register whose contents are added to A to produce the effective address.
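The effective-address calculation EA = A + (R) can be sketched as below (our Python illustration; the memory contents and register value are invented for the example):

```python
# Toy machine state: memory[i] holds 10*i, so results are easy to check
memory = [10 * i for i in range(1024)]
R = 500          # contents of the referenced (base or index) register
A = 24           # displacement value taken directly from the instruction

EA = A + R       # displacement addressing: EA = A + (R)
operand = memory[EA]
print(EA, operand)   # 524 5240
```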
Contd..
• Three of the most common uses of displacement addressing:
Relative addressing
Base-register addressing
Indexing
 Stack Addressing
• A stack is a linear array of locations. It is sometimes referred to as a
pushdown list or last-in-first-out queue.
• The stack is a reserved block of locations. Items are appended to the
top of the stack so that, at any given time, the block is partially filled.
• Associated with the stack is a pointer whose value is the address of
the top of the stack.
• Alternatively, the top two elements of the stack may be in processor
registers, in which case the stack pointer references the third element
of the stack.
• The stack pointer is maintained in a register. Thus, references to stack
locations in memory are in fact register indirect addresses.
Stack Organization
• A stack is a data storage structure in which the most recent thing
deposited is the most recent item retrieved.
• It is based on the LIFO concept (Last-in-first-out).
• In digital computers, the stack is a collection of memory locations
together with a register that stores the address of the top element.
Stack's operations are:
Push: Adds an item to the top of the stack.
Pop: Removes one item from the stack's top.
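The two operations above can be sketched directly (a minimal Python illustration of ours, using a list as the reserved block of locations):

```python
stack = []            # the reserved block of locations, initially empty

def push(item):
    """Push: add an item to the top of the stack."""
    stack.append(item)

def pop():
    """Pop: remove and return the item at the top of the stack."""
    return stack.pop()

push(1); push(2); push(3)
print(pop(), pop())   # 3 2 -- last in, first out
```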
Contd..
What is Stack Organization?
• The Last In First Out (LIFO) list is another name for the stack. It is one
of the CPU's most useful structures. It saves information so that the
last element stored is the first element retrieved. A stack is a memory
space with an address register. This register, known as the Stack
Pointer (SP), holds the address of the element at the top of the stack.
Implementation of Stack
The stack can be implemented using two ways:
1. Register Stack
2. Memory Stack
Stack Organization
Contd..
Register Stack
• The stack can be arranged as a set of memory words or registers.
Consider a 64-word register stack arranged as displayed in the figure.
The stack pointer register holds a binary number, which is the
address of the element at the top of the stack. Three elements, A, B,
and C, are in the stack. See the figure on the next slide.
• The element C is at the top of the stack, and the stack pointer holds
the address of C, which is 3. The top element is popped from the stack
by reading the memory word at address 3 and decrementing the
stack pointer by 1. Then B is at the top of the stack, and SP holds
the address of B, which is 2. To insert a new word, the stack is
pushed by incrementing the stack pointer by 1 and writing the word
at the new address.
Figure: Register Stack
Contd..
• Because 2^6 = 64, the stack pointer holds 6 bits and cannot exceed 63
(111111 in binary). Adding 1 to 63 wraps the result to 0
(111111 + 1 = 1000000, and only the six least significant bits are kept
in SP). Similarly, decrementing 000000 by 1 gives 111111.
• The one-bit register FULL is set to 1 when the stack is full. The binary
information written into or read out of the stack is held in the data
register DR.
• Initially, SP is set to 0, EMTY to 1, and FULL to 0. The push operation
is used to insert a new item when the stack is not yet full (FULL = 0).
Contd..
Memory Stack:
A stack may be implemented in a computer's random access memory
(RAM). A stack is implemented in the CPU by allocating a chunk of
memory to a stack operation and utilizing a processor register as a
stack pointer. The stack pointer is a CPU register that specifies the
stack's initial memory address.
Reverse Polish Notation In Stack
• Reverse Polish notation is also known as postfix notation. A stack can
be used to evaluate a postfix expression.
• Scanning the postfix expression, when an operand is found, we push it
onto the stack; when an operator is found, we pop its operands from
the stack, apply the operator in the correct sequence, and push the
result back onto the stack.
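The push-operand, pop-and-apply-operator procedure above can be sketched as a small evaluator (our Python illustration; the sample expression is invented):

```python
def eval_postfix(tokens):
    """Evaluate a postfix (reverse Polish) expression using a stack."""
    stack = []
    for tok in tokens:
        if tok in "+-*/":
            b = stack.pop()             # popped first -> right operand
            a = stack.pop()             # popped second -> left operand
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a / b}[tok])
        else:
            stack.append(int(tok))      # operand: push it onto the stack
    return stack.pop()                  # the result is left on the stack

# (3 + 4) * 5 written in postfix: 3 4 + 5 *
print(eval_postfix(["3", "4", "+", "5", "*"]))   # 35
```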
Contd..
For example, we are given this expression in the form of an array

From the given array we can deduce expression as,


Advantages of Stack Organization
• Complex arithmetic statements may be rapidly calculated.
• Instruction execution is rapid because operand data is stored in
consecutive memory areas.
• The instructions are minimal since they don't contain an address field.
Disadvantages of Stack Organization
• The size of the program increases when we use a stack.
• It's in memory, and memory is slower in several ways than CPU
registers. It generally has a lesser bandwidth and a longer latency.
Memory accesses are more difficult to accelerate.
