11.3 Pipelining
We have discussed this concept in Units 4 and 5, but we need to recap it in
order to get a better idea of the next sections.
What is Pipelining?
Pipelining is an implementation technique by which the execution of multiple
instructions can be overlapped. In other words, it is a method that breaks a
sequential process down into several sub-operations, each of which is then
executed concurrently in its own dedicated segment. The main advantage of
pipelining is that it increases instruction throughput, that is, the number of
instructions completed per unit time. Thus, a program runs faster. In
pipelining, several computations can run in distinct segments simultaneously.
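The throughput gain described above can be sketched with a simple cycle-count model. This is an illustrative calculation, not from the text; it assumes an ideal k-stage pipeline with one stage per clock cycle and no stalls.

```python
# Hypothetical illustration: cycle counts for n instructions on a
# k-stage pipeline, assuming one stage per clock and no stalls.

def sequential_cycles(n: int, k: int) -> int:
    """Without pipelining, each instruction takes k cycles in turn."""
    return n * k

def pipelined_cycles(n: int, k: int) -> int:
    """With pipelining, the first instruction takes k cycles and each
    remaining instruction completes one cycle later."""
    return k + (n - 1)

n, k = 100, 5
print(sequential_cycles(n, k))   # 500 cycles without overlap
print(pipelined_cycles(n, k))    # 104 cycles with overlap
print(round(sequential_cycles(n, k) / pipelined_cycles(n, k), 2))  # 4.81
```

As n grows, the speedup approaches k, which is why deeper pipelines (more stages) can raise throughput.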
e) Instruction execution cycle: In the last cycle, the result is written into
the register file.
Pipelines are of two types: linear and non-linear. A linear pipeline performs
only one pre-defined, fixed function, with data moving in a forward direction
from one stage to the next. On the other hand, a dynamic pipeline, which
allows feed-forward and feedback connections in addition to the streamline
connections, is called a non-linear pipeline.
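The distinction above can be modelled in a few lines. This is a toy sketch (names and schedule format are assumptions, not from the text): a linear pipeline applies its stages once in fixed forward order, while a non-linear pipeline may revisit a stage (feedback) or repeat one out of order.

```python
# Toy model: stages are plain functions applied to a value.

def linear_pipeline(x, stages):
    """Fixed forward order: each stage runs exactly once."""
    for stage in stages:
        x = stage(x)
    return x

def nonlinear_pipeline(x, stages, schedule):
    """A schedule of stage indices allows repeats (feedback loops)
    and out-of-order visits (feed-forward connections)."""
    for i in schedule:
        x = stages[i](x)
    return x

stages = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3]
print(linear_pipeline(5, stages))                   # ((5+1)*2)-3 = 9
print(nonlinear_pipeline(5, stages, [0, 1, 1, 2]))  # stage 1 reused: 21
```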
An Instruction pipeline operates on a stream of instructions by overlapping
and decomposing the three phases of the instruction cycle. Super pipeline
design is an approach that makes use of more and more fine-grained
pipeline stages in order to have more instructions in the pipeline. As RISC
instructions are simpler than those used in CISC processors, they are more
conducive to pipelining.
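The overlapping that an instruction pipeline performs can be visualised as a stage-occupancy timeline. The sketch below assumes the common five-stage RISC naming (IF, ID, EX, MEM, WB), which the text does not specify; it prints which instruction occupies each stage in each clock cycle.

```python
# Sketch with assumed 5-stage names common in RISC textbooks.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def timeline(n_instructions: int):
    """Return, per clock cycle, a map of instruction -> current stage.
    Instruction i enters the pipeline one cycle after instruction i-1."""
    rows = []
    total_cycles = len(STAGES) + n_instructions - 1
    for cycle in range(total_cycles):
        row = {}
        for i in range(n_instructions):
            s = cycle - i          # stage index for instruction i
            if 0 <= s < len(STAGES):
                row[f"I{i+1}"] = STAGES[s]
        rows.append(row)
    return rows

for c, row in enumerate(timeline(3), start=1):
    print(f"cycle {c}: {row}")
```

By cycle 3, three instructions are in flight at once (I1 in EX, I2 in ID, I3 in IF), which is exactly the overlap that raises throughput.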
Self Assessment Questions
3. _____________ specifies the count of instructions completed per unit
time.
4. Pipelining is also called ________________ as it provides an essence
of parallelism only at the instruction level.
5. A linear pipeline performs only one pre-defined fixed function at specific
times in a forward direction. (True/False)
[Figure: a shared-memory multiprocessor in which each processor P1 ... Pn
receives an instruction stream (IS) and exchanges a data stream (DS) with
the memory. P: Processor, IS: Instruction Stream, DS: Data Stream.]
Basically, the memory is divided into several modules, and this is the basis
on which large multiprocessors are classified into different categories. Let's
discuss them in detail.
UMA (Uniform Memory Access): In this category, every processor has the
same access time to every memory module. Hence each memory word can be
read as quickly as any other memory word. Where this is not physically the
case, the fast references are slowed down to match the slow ones, so that
programmers cannot observe any difference; this is what uniformity means
here. Uniformity makes performance predictable, which is a significant
aspect of writing code. Figure 11.4 shows uniform memory access from the
CPU on the left.
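The defining property of UMA can be stated as a tiny model. This is a toy illustration only; the latency figure and function names are assumptions, not from the text.

```python
# Toy model: in a UMA machine, access latency does not depend on which
# processor issues the request or which memory module it targets.

UMA_LATENCY_NS = 100  # single illustrative figure (assumed)

def uma_access_time(processor_id: int, module_id: int) -> int:
    """Uniform by definition: same latency for every (CPU, module) pair."""
    return UMA_LATENCY_NS

# Every combination of 4 processors and 4 modules sees one latency value.
latencies = {uma_access_time(p, m) for p in range(4) for m in range(4)}
print(latencies)  # {100}
```

In a NUMA machine, by contrast, this function would return different values depending on how far the module is from the requesting processor.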