
Computer Organization and Architecture
Pipelining
Anugrah Jain
CSE, LNMIIT
Acknowledgement

• Carl Hamacher, Zvonko Vranesic, and Safwat Zaky, "Computer Organization", Tata McGraw Hill, Fifth Edition, 2002.
• Kai Hwang, "Advanced Computer Architecture: Parallelism, Scalability, Programmability", Tata McGraw Hill Edition, 2001.
The Idea of Pipelining in Computer

Fetch + Execution

[Figure 8.1. Basic idea of instruction pipelining: (a) sequential execution; (b) hardware organization, with an interstage buffer B1 between the instruction fetch unit and the execution unit; (c) pipelined execution.]
The Idea of Pipelining in Computer

Fetch + Decode + Execution + Write

[Figure 8.2. (a) Instruction execution divided into four steps: F (fetch instruction), D (decode instruction and fetch operands), E (execute operation), W (write results); (b) hardware organization, with interstage buffers B1, B2, and B3.]
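As a rough illustration of the timing in Figure 8.2a, here is a minimal Python sketch (the schedule function and output layout are invented for this illustration):

# Print the stage occupied by each instruction in each clock cycle
# of an ideal four-stage pipeline: instruction i enters stage s in
# cycle i + s + 1, so k instructions take (k - 1) + 4 cycles
# instead of 4k.

STAGES = ["F", "D", "E", "W"]

def schedule(num_instructions):
    total = num_instructions - 1 + len(STAGES)
    for i in range(num_instructions):
        cells = []
        for cycle in range(1, total + 1):
            s = cycle - 1 - i   # stage index occupied in this cycle
            cells.append(f"{STAGES[s]}{i+1}" if 0 <= s < len(STAGES) else "--")
        print(f"I{i+1}: " + " ".join(cells))

schedule(4)   # reproduces the timing diagram of Figure 8.2a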


Role of Cache Memory

• Each pipeline stage is expected to complete in one clock cycle.
• The clock period must therefore be long enough to allow the slowest pipeline stage to complete.
• Faster stages can do nothing but wait for the slowest one to complete.
• Since main memory is very slow compared with execution, the pipeline would be nearly useless if every instruction had to be fetched from main memory.
• Fortunately, we have cache memory.
Pipeline Performance

• The potential increase in performance resulting from pipelining is proportional to the number of pipeline stages.
• However, this increase would be achieved only if all pipeline stages require the same time to complete, and there is no interruption throughout program execution.
• Unfortunately, this is not true in practice.
Pipeline Performance

[Figure 8.3. Effect of an execution operation taking more than one clock cycle.]
Pipeline Performance
• The previous pipeline is said to have been stalled for two
clock cycles.
• Any condition that causes a pipeline to stall is called a
hazard.
• Data hazard – any condition in which either the source or
the destination operands of an instruction are not available
at the time expected in the pipeline. So, some operation
must be delayed, and the pipeline stalls.
• Instruction (control) hazard – a delay in the availability of an
instruction causes the pipeline to stall.
• Structural hazard – the situation when two instructions
require the use of a given hardware resource at the same
time.
Pipeline Performance

• Pipelining does not result in individual instructions being executed faster; rather, it is the throughput that increases.
• Throughput is measured by the rate at which instruction execution is completed.
• Pipeline stalls cause degradation in pipeline performance.
• We need to identify all hazards that may cause the pipeline to stall and find ways to minimize their impact.
Data Hazards
• We must ensure that the results obtained when instructions are executed in a pipelined processor are identical to those obtained when the same instructions are executed sequentially.
• Example: R2 = 3, R3 = 4, R4 = 0, R5 = 10, R6 = 0
  Mul R2, R3, R4 (R4 is destination)
  Add R5, R4, R6 (R6 is destination)
• No hazard:
  R4 ← 3 × 4 = 12
  R6 ← 10 + 12 = 22
• Hazard occurs (Add reads R4 before Mul has written it):
  R4 ← 3 × 4 = 12
  R6 ← 10 + 0 = 10
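A minimal Python sketch of this example; the point is only that reading R4 one cycle too early yields 10 + 0 instead of 10 + 12:

# Demonstrates the RAW hazard of the Mul/Add example numerically.
regs = {"R2": 3, "R3": 4, "R4": 0, "R5": 10, "R6": 0}

# Correct sequential execution: Add sees Mul's result.
r4 = regs["R2"] * regs["R3"]      # Mul R2, R3, R4  -> 12
r6_ok = regs["R5"] + r4           # Add R5, R4, R6  -> 22

# Hazard: Add reads R4 before Mul has written it back.
r6_bad = regs["R5"] + regs["R4"]  # stale R4 = 0    -> 10

print(r6_ok, r6_bad)              # 22 10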
Data Hazards

[Figure 8.6. Pipeline stalled by data dependency between D2 and W1: I2 (Add) remains in the Decode stage (D2, D2A) until I1 (Mul) writes its result in W1.]
Operand Forwarding

• Instead of reading from the register file, the second instruction can get the data directly from the output of the ALU.
• A special arrangement needs to be made to "forward" the output of the ALU back to its input.
[Figure 8.7. Operand forwarding in a pipelined processor: (a) datapath, with source registers SRC1 and SRC2 feeding the ALU and the result register RSLT holding its output; (b) position of the source and result registers in the processor pipeline, with a forwarding path from the output of the E (Execute/ALU) stage back to its input, bypassing the W (Write/register file) stage.]
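As a rough sketch of the idea in Figure 8.7b (the function and names here are invented for illustration), forwarding amounts to a selector at each ALU input that takes the previous instruction's RSLT value whenever its destination matches the current source register:

# Sketch of an operand-forwarding mux at one ALU input.
def read_source(src, regfile, prev_dst, prev_rslt):
    if prev_dst is not None and src == prev_dst:
        return prev_rslt        # forwarding path: E-stage output
    return regfile[src]         # normal register-file read

regfile = {"R2": 3, "R3": 4, "R4": 0, "R5": 10}
mul_result = regfile["R2"] * regfile["R3"]          # Mul R2, R3, R4

# Add R5, R4, R6 executes in the next cycle, before R4 is written:
op1 = read_source("R5", regfile, "R4", mul_result)  # 10
op2 = read_source("R4", regfile, "R4", mul_result)  # 12, forwarded
print(op1 + op2)                                    # 22, the correct result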
Handling Data Hazards in Software

• Let the compiler detect and handle the hazard (see the sketch below):
  I1: Mul R2, R3, R4 (R4 is destination)
      NOP
      NOP
  I2: Add R5, R4, R6 (R6 is destination)
• The compiler can reorder the instructions to perform some useful work during the NOP slots.
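A toy sketch of the detection step; encoding instructions as (op, sources, destination) tuples and using two delay cycles are assumptions of this illustration:

# Toy compiler pass: insert NOPs when an instruction reads a
# register written by the immediately preceding instruction.
def insert_nops(prog, delay=2):
    out = []
    for ins in prog:
        prev = out[-1] if out else None
        if prev is not None and prev != "NOP" and prev[2] in ins[1]:
            out.extend(["NOP"] * delay)   # fill the hazard window
        out.append(ins)
    return out

prog = [("Mul", ("R2", "R3"), "R4"), ("Add", ("R5", "R4"), "R6")]
for x in insert_nops(prog):
    print(x)    # Mul, NOP, NOP, Add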
Side Effects

• The previous example is explicit and easily detected.
• Sometimes an instruction changes the contents of a register other than the one named as the destination.
• Example: the carry flag
  Add R1, R3
  AddWithCarry R2, R4
• Instructions designed for execution on pipelined hardware should have few side effects.
Read After Write (RAW) Hazards

• A read-after-write (RAW) data hazard refers to a situation where an instruction refers to a result that has not yet been calculated or retrieved.
• Example:
  Mul R2, R3, R4 (R4 ← R2 × R3)
  Add R5, R4, R6 (R6 ← R5 + R4)
• Already discussed; operand forwarding can be used to resolve it.
Write After Read (WAR) Hazards

• Occurs when the next instruction tries to write data before the previous instruction reads it.
• Example:
  Add R1, R3, R2 (R2 ← R1 + R3)
  Add R4, R5, R3 (R3 ← R4 + R5)
• Can occur during out-of-order execution of instructions.
Write After Write (WAW) Hazards

• Occurs when the next instruction tries to write its output before the previous instruction writes its own; the final value of the data changes because the writes complete out of order.
• Example:
  Add R1, R3, R2 (R2 ← R1 + R3)
  Add R4, R5, R2 (R2 ← R4 + R5)
• Can occur during out-of-order execution of instructions (see the classification sketch below).
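A minimal sketch that classifies the dependence between two instructions I1 → I2 from their source and destination register sets; the set-based encoding is an assumption of this illustration:

# RAW: I2 reads something I1 writes.
# WAR: I2 writes something I1 reads.
# WAW: I2 writes something I1 also writes.
def hazards(i1_reads, i1_writes, i2_reads, i2_writes):
    found = set()
    if i1_writes & i2_reads:
        found.add("RAW")
    if i1_reads & i2_writes:
        found.add("WAR")
    if i1_writes & i2_writes:
        found.add("WAW")
    return found

# Add R1,R3,R2 followed by Add R4,R5,R2 (the WAW example above):
print(hazards({"R1", "R3"}, {"R2"}, {"R4", "R5"}, {"R2"}))  # {'WAW'}
# Add R1,R3,R2 followed by Add R4,R5,R3 (the WAR example):
print(hazards({"R1", "R3"}, {"R2"}, {"R4", "R5"}, {"R3"}))  # {'WAR'}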
Instruction (Control) Hazards

• Whenever the stream of instructions supplied by the instruction fetch unit is interrupted, the pipeline stalls. Two common causes:
• Cache miss
• Branch
Unconditional Branches

[Figure 8.8. An idle cycle caused by a branch instruction: I3, fetched after the branch I2, is discarded (X), leaving the execution unit idle for one cycle before the branch target Ik executes.]
Branch Timing

• Branch penalty: the cycles lost fetching instructions that must be discarded when a branch is taken.
• Reducing the penalty: computing the branch address earlier in the pipeline means fewer instructions are discarded.

[Figure 8.9. Branch timing: (a) branch address computed in the Execute stage, so two instructions (I3, I4) are discarded (X) and the penalty is two cycles; (b) branch address computed in the Decode stage, so only I3 is discarded and the penalty is one cycle.]
Instruction Queue and Prefetching

[Figure 8.10. Use of an instruction queue in the hardware organization of Figure 8.2b: the instruction fetch unit (F) places instructions in an instruction queue, from which the dispatch/decode unit (D) feeds the Execute (E) and Write (W) stages.]
Conditional Branches

• A conditional branch instruction introduces the added hazard caused by the dependency of the branch condition on the result of a preceding instruction.
• The decision to branch cannot be made until the execution of that instruction has been completed.
• Branch instructions represent about 20% of the dynamic instruction count of most programs.
Delayed Branch

• The instruction slots immediately following a branch instruction are called branch delay slots.
• The instructions in the delay slots are always fetched. Therefore, we would like to arrange for them to be fully executed whether or not the branch is taken.
• The objective is to place useful instructions in these slots.
• The effectiveness of the delayed branch approach depends on how often it is possible to reorder instructions.
Delayed Branch

LOOP  Shift_left R1
      Decrement  R2
      Branch>0   LOOP
NEXT  Add        R1,R3

(a) Original program loop

LOOP  Decrement  R2
      Branch>0   LOOP
      Shift_left R1
NEXT  Add        R1,R3

(b) Reordered instructions

Figure 8.12. Reordering of instructions for a delayed branch.
Delayed Branch

[Figure 8.13. Execution timing showing the delay slot being filled during the last two passes through the loop in Figure 8.12: Decrement, Branch, and Shift_left (in the delay slot) execute on the taken pass; on the final pass the branch is not taken and Add follows the delay slot.]
Branch Prediction

• Idea: predict whether or not a particular branch will be taken.
• Simplest form: assume the branch will not take place and continue to fetch instructions in sequential address order.
• Until the branch is evaluated, instruction execution along the predicted path must be done on a speculative basis.
• Speculative execution: instructions are executed before the processor is certain that they are in the correct execution sequence.
• We must be careful that no processor registers or memory locations are updated until it is confirmed that these instructions should indeed be executed.
Incorrectly Predicted Branch

[Figure 8.14. Timing when a branch decision has been incorrectly predicted as not taken: I3 and I4, fetched speculatively after the branch I2 (predicted in D2/P2), are discarded (X) when the branch resolves in E2, and fetching resumes at the branch target Ik.]
Branch Prediction

• Better performance can be achieved if we arrange for some branch instructions to be predicted as taken and others as not taken.
• Hardware can observe whether the target address is lower or higher than that of the branch instruction (backward branches, typical of loops, are usually taken).
• Or the compiler can include a branch prediction bit to indicate the desired behavior.
• So far, the branch prediction decision is always the same every time a given instruction is executed – static branch prediction.
Dynamic Branch Prediction

• To improve prediction accuracy further, the processor can use actual branch behavior to influence the prediction.
• The processor hardware assesses the likelihood of a given branch being taken by keeping track of branch decisions every time that branch instruction is executed.
• In the simplest form, dynamic prediction uses the result of the most recent execution of the branch instruction.
• The processor assumes that the next time the instruction is executed, the branch decision is likely to be the same as last time (see the sketch below).
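A minimal sketch of this simplest, one-bit scheme; the class and the table keyed by branch address are assumptions of this illustration:

# One-bit dynamic branch predictor: predict that a branch does
# whatever it did the last time it executed.
class OneBitPredictor:
    def __init__(self):
        self.last = {}                   # branch address -> last outcome

    def predict(self, pc):
        return self.last.get(pc, False)  # default: predict not taken

    def update(self, pc, taken):
        self.last[pc] = taken

# A loop branch taken 9 times and then falling through mispredicts
# exactly twice: on the first iteration and on loop exit.
p = OneBitPredictor()
misses = 0
for outcome in [True] * 9 + [False]:
    if p.predict(0x400) != outcome:
        misses += 1
    p.update(0x400, outcome)
print(misses)                            # 2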
Structural Hazards

• Pipelining enables overlapped execution of instructions.
• But the pipeline stalls when there are insufficient hardware resources to permit all actions to proceed concurrently.
• If two instructions need to access the same resource in the same clock cycle, one instruction must be stalled to allow the other to use the resource.
• This can be prevented by providing additional hardware.
Structural Hazards
• For example, such stalls can occur in a computer that has a single cache that supports only one access per cycle.
• If both the Fetch and Memory stages of the pipeline are connected to the cache, it is not possible for activity in both stages to proceed simultaneously.
• Normally, the Fetch stage accesses the cache in every cycle.
• This activity must be stalled for one cycle when there is a Load/Store instruction in the Memory stage.
• If 25% of all instructions executed are Load or Store instructions, these stalls increase execution time by 25%.
• Using separate caches for instructions and data allows the Fetch and Memory stages to proceed simultaneously without stalling.
Structural Hazards

[Figure 8.5. Effect of a Load instruction on pipeline timing: I2 (Load) requires an extra memory-access step (M2) between Execute and Write.]
Superscalar Operation
• Equip the processor with multiple execution units, each of which may be pipelined, to increase the processor's ability to handle several instructions in parallel.
• Several instructions start execution in the same clock cycle, but in different execution units; the processor is said to use multiple issue.
• Such processors can achieve an instruction execution throughput of more than one instruction per cycle; they are known as superscalar processors.
• Many modern high-performance processors use this approach.
Superscalar Operation
• To enable multiple-issue execution, the fetch unit fetches two or more instructions per cycle, before they are needed, and places them in an instruction queue.
• A separate unit, called the dispatch unit, takes two or more instructions from the front of the queue, decodes them, and sends them to the appropriate execution units (see the sketch below).
• At the end of the pipeline, another unit is responsible for writing multiple results into the register file.
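A toy sketch of the dispatch step; the instruction kinds, the mix of one integer and one floating-point unit, and the issue width of two are all assumptions of this illustration:

# Toy dual-issue dispatch: take up to two instructions from the
# front of the queue per cycle and send each to a free execution
# unit of the matching type ("int" or "fp").
from collections import deque

queue = deque([("I1", "int"), ("I2", "fp"), ("I3", "int"), ("I4", "int")])
cycle = 0
while queue:
    cycle += 1
    free = {"int": 1, "fp": 1}          # one unit of each type per cycle
    issued = []
    while queue and len(issued) < 2 and free[queue[0][1]] > 0:
        name, kind = queue.popleft()
        free[kind] -= 1
        issued.append(name)
    print(f"cycle {cycle}: issued {issued}")
# I3 and I4 cannot issue together: both need the single integer unit.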
Superscalar Operation

• Also, the floating-point unit is organized internally as a three-stage pipeline.
Execution Completion
• It is desirable to use out-of-order execution, so that an execution unit is
freed to execute other instructions as soon as possible.
• At the same time, instructions must be completed in program order to
allow precise exceptions.
• Precise Exception: If an exception occurs during an instruction, all
subsequent instructions that may have been partially executed are
discarded.
Execution Completion
• These two goals are reconciled through the use of temporary registers and register renaming (see the sketch below).
• Commitment unit: performs in-order commitment of results.
• Instructions are executed out of order but committed in program order.
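A toy sketch of register renaming; the (op, sources, destination) encoding and the physical-register pool are assumptions of this illustration. Each write to R2 gets a fresh physical register, so the WAW hazard from the earlier example disappears:

# Map each architectural destination to a fresh physical register,
# and rewrite later sources to read the renamed value.
def rename(prog):
    mapping = {}                 # architectural -> current physical
    next_phys = 0
    out = []
    for op, srcs, dst in prog:
        srcs = tuple(mapping.get(s, s) for s in srcs)
        phys = f"P{next_phys}"
        next_phys += 1
        mapping[dst] = phys
        out.append((op, srcs, phys))
    return out

# The WAW example: both writes to R2 get distinct physical registers.
prog = [("Add", ("R1", "R3"), "R2"), ("Add", ("R4", "R5"), "R2")]
print(rename(prog))
# [('Add', ('R1', 'R3'), 'P0'), ('Add', ('R4', 'R5'), 'P1')]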
Performance Considerations

• The execution time T of a program that has a dynamic instruction count N is given by:
  T = (N × S) / R
  where S is the average number of clock cycles it takes to fetch and execute one instruction, and R is the clock rate.
• Instruction throughput is defined as the number of instructions executed per second:
  Ps = R / S
Performance Considerations

• In the absence of stalls, S = 1, and the ideal throughput with pipelining is R.
• A 5-stage pipeline can potentially increase throughput by a factor of five.
• The operations with the longest delay dictate the cycle time, and hence the clock rate R.
• For a processor with on-chip caches, the ALU delay is likely to be critical.
• If it is 2 ns, then R = 500 MHz and the ideal throughput is 500 MIPS.
Performance Considerations

• Effect of stalls: an example.
• Assume that 30 percent of the instructions access data in memory, with a 95-percent instruction hit rate and a 90-percent data hit rate.
• The average increase in the value of S as a result of cache misses is given by
  δmiss = (0.05 + 0.3 × 0.1) × 17 = 1.36 cycles
  where 17 is the cache miss penalty in cycles.
• Taking this delay into account, S = 1 + 1.36 = 2.36, and the processor's throughput would be
  Ps = R / S = 500 / 2.36 ≈ 210 MIPS.
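The same arithmetic as a quick check in Python (R = 500 MHz from the previous slide; the exact value is about 212 MIPS, which the slide rounds to 210):

# Throughput with cache-miss stalls, using the slide's numbers.
R = 500e6                 # clock rate: 500 MHz
miss_penalty = 17         # cycles per cache miss
i_miss = 1 - 0.95         # instruction miss rate
d_miss = 1 - 0.90         # data miss rate
mem_frac = 0.30           # fraction of instructions accessing data

delta = (i_miss + mem_frac * d_miss) * miss_penalty   # 1.36 cycles
S = 1 + delta                                         # 2.36 cycles
Ps = R / S
print(round(delta, 2), round(Ps / 1e6))               # 1.36 212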
Number of Pipeline Stages
• An n-stage pipeline may increase instruction throughput by up to n times.
• But as the number of stages increases, the probability of the pipeline being stalled increases.
• The inherent delay in the basic operations also increases.
• Hardware considerations (area, power, complexity, ...) apply as well.
• Performance/cost ratio (PCR) by Larson: PCR = f / (c + kh), where f = maximum throughput, c = cost of all logic stages, k = number of stages, and h = cost of each latch (see the sketch below).
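In Hwang's treatment, the maximum throughput itself depends on k as f = 1/(t/k + d), where t is the total flow-through delay of the logic and d is the latch delay; a small sketch with made-up parameter values that locates the PCR-optimal stage count:

# Larson's performance/cost ratio PCR = f / (c + k*h), with the
# maximum throughput modelled as f = 1 / (t/k + d).
t, d = 32.0, 1.0      # delays (arbitrary units, illustrative only)
c, h = 64.0, 2.0      # costs  (arbitrary units, illustrative only)

def pcr(k):
    f = 1.0 / (t / k + d)
    return f / (c + k * h)

best = max(range(1, 65), key=pcr)
print(best)   # 32; the optimum is near k0 = sqrt(t*c / (d*h)) = 32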
Thank You
