
8086 Pipeline Architecture

Prof. Upendra Patil


Pipeline Architecture
• 8086 has two independent functional units
  – Bus Interface Unit (BIU)
  – Execution Unit (EU)
• BIU is responsible for
  – Transferring addresses to memory / IO devices
  – Transferring data to/from memory / IO devices
  – Fetching instructions from code memory
• BIU has a 6-byte instruction queue


Pipeline Architecture
• EU receives instructions from the queue one by one
• EU decodes and executes each instruction it receives
• While the EU is decoding and executing an instruction, the BIU fetches the
  next instructions from external memory and stores them in the queue
• Conclusion:
  – BIU and EU have different tasks
  – They operate in parallel

This type of architecture is called a pipelined architecture.


Pipeline Architecture

Clock Cycles                 |   1    |   2    |   3    |   4
-----------------------------|--------|--------|--------|--------
Non-pipelined Architecture   | fetch1 | exe1   | fetch2 | exe2
8086 pipelined Architecture  |        |        |        |
  By BIU                     | fetch1 | fetch2 | fetch3 |
  By EU                      |        | exe1   | exe2   | exe3
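The cycle savings in this diagram can be summarised with a quick calculation. The following sketch is not from the slides; it assumes an idealised machine in which every fetch and every execute takes exactly one clock cycle.

def non_pipelined_cycles(n_instructions: int) -> int:
    # Fetch and execute happen strictly one after the other.
    return n_instructions * (1 + 1)

def pipelined_cycles(n_instructions: int) -> int:
    # The BIU fetches instruction i+1 while the EU executes instruction i,
    # so after the first fetch each further instruction costs one more cycle.
    return 1 + n_instructions

if __name__ == "__main__":
    for n in (1, 3, 10):
        print(f"{n} instructions: non-pipelined = {non_pipelined_cycles(n)} cycles, "
              f"pipelined = {pipelined_cycles(n)} cycles")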



Advantages of Pipelining
• Execution time of a program is reduced.
• It increases the throughput of the system
  – Throughput is the number of instructions executed per unit time
• It makes the system reliable.



Disadvantages of Pipelining
• The design of a pipelined processor is complex and costly to manufacture.
• The instruction latency is higher
  – Latency is the number of processor clocks it takes for an instruction to
    have its data available for use by another instruction.
  – Therefore, an instruction with a latency of 6 clocks will have its data
    available to another instruction 6 clocks after the start of its execution.
  – Latency = time from the start of the instruction until the result is available
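To make the latency and throughput definitions concrete, here is a small illustration added for this write-up (not part of the original slides); the 5 MHz clock and the 1-clock fetch / 1-clock execute split are assumptions chosen only for the example.

def latency_cycles(fetch: int = 1, execute: int = 1) -> int:
    # Latency: clocks from the start of an instruction until its result is ready.
    return fetch + execute

def throughput(n_instructions: int, total_cycles: int, clock_hz: float) -> float:
    # Throughput: instructions completed per unit of time.
    return n_instructions / (total_cycles / clock_hz)

if __name__ == "__main__":
    n = 100
    cycles_pipelined = 1 + n      # fetches overlap with executes
    cycles_serial = 2 * n         # fetch then execute, strictly in sequence
    clock = 5_000_000             # assumed 5 MHz clock
    print("latency per instruction:", latency_cycles(), "clocks")
    print("pipelined throughput   :", throughput(n, cycles_pipelined, clock), "instr/s")
    print("serial throughput      :", throughput(n, cycles_serial, clock), "instr/s")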
Pipelining Scheme
• Initially, CS:IP is loaded with the required address.
• Now the queue is empty.
• If CS:IP is at an odd address, the µp fetches one byte of instruction code.
• If it is at an even address, the µp fetches two bytes at a time.
• For a one-byte instruction, the first byte is the complete op code.
• For a two-byte instruction, the remaining part of the op code resides in the second byte.
• The op codes, along with data, are fetched and arranged in the queue.
• The µp does not perform the next fetch cycle until at least two bytes of the
  queue are emptied (a simulation sketch of this rule follows this list).
• After decoding the first byte, the decoding circuit decides whether the
  instruction has a single op-code byte or a double op-code byte.
• If it is a single op code, the next bytes are data.
• If it is a double op code, the next byte is the second byte of the op code.
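The fetch rules above can be modelled with a small simulation. This is a simplified sketch added for illustration, not part of the original slides: it assumes a 6-byte queue, a single-byte fetch when the address is odd, a word (two-byte) fetch when it is even, and a BIU that refetches only when at least two queue bytes are free.

from collections import deque

QUEUE_SIZE = 6

def fetch(queue, memory, addr):
    # One BIU bus cycle: 1 byte from an odd address, otherwise a 2-byte word.
    if addr % 2 == 1:
        queue.append(memory[addr])
        return addr + 1
    queue.append(memory[addr])
    queue.append(memory[addr + 1])
    return addr + 2

def simulate(memory, start, reads):
    queue = deque()
    addr = start
    for _ in range(reads):
        # The BIU fetches whenever at least two bytes of the queue are free
        # (the first fetch from an odd address needs only one free byte).
        while addr < len(memory) and (
            QUEUE_SIZE - len(queue) >= 2
            or (addr % 2 == 1 and len(queue) < QUEUE_SIZE)
        ):
            addr = fetch(queue, memory, addr)
        byte = queue.popleft()            # the EU takes bytes from the front
        print(f"EU reads {byte:#04x}, queue depth now {len(queue)}")

if __name__ == "__main__":
    code = bytes(range(0x10, 0x20))       # dummy "instruction stream"
    simulate(code, start=1, reads=8)      # start at an odd address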
Pipelining Scheme

[Flowchart] Bytes arrive from memory into the six-byte instruction queue. The
first op-code byte is decoded. If the instruction has a one-byte op code, it is
executed by accepting its data from the queue. Otherwise the second op-code
byte is read from the queue and decoded, and the instruction is then executed
by accepting its data from the queue. The same procedure is repeated for
successive instructions.
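As an illustration only (the op-code values and the TWO_BYTE_OPCODES set below are hypothetical, not real 8086 encodings), the decision loop in the flowchart could be sketched like this:

TWO_BYTE_OPCODES = {0x0F}            # hypothetical set of two-byte op codes

def decode_loop(queue):
    while queue:
        first = queue.pop(0)                 # decode the 1st op-code byte
        if first in TWO_BYTE_OPCODES:
            second = queue.pop(0)            # read and decode the 2nd op-code byte
            print(f"two-byte op code {first:#04x} {second:#04x}")
        else:
            print(f"one-byte op code {first:#04x}")
        # Execute by accepting any data bytes from the queue (not modelled here),
        # then repeat the same procedure for the successive instruction.

decode_loop([0x90, 0x0F, 0x31, 0xF4])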
Pipelining Scheme
• The queue is updated after every byte is read from the queue, but a fetch
  cycle is initiated by the BIU only when at least two bytes of the queue are
  emptied, while the EU may be concurrently executing an already fetched
  instruction.

• The important point to note is that the fetch cycle is overlapped with the
  execution of the current instruction.



Questions?

