
Al-Iraqia University

Engineering College
Computer Engineering Department

CLASSIFICATION OF COMPUTER
ARCHITECTURES

BY

Hiba Abdulwahed Gumar

SUPERVISOR

SUPERVISOR NAME

2018-2019

ABSTRACT

In the past two decades, many parallel computers have been designed and built. They can be categorized according to common features, and such a classification scheme lets us study one or a few machines as representatives of each category, giving a better understanding of the category as a whole. Unfortunately, researchers have not found a convincing classification scheme that covers all types of parallel machines. Over the years there have been many attempts to find an efficient and convenient way to classify computers in terms of architecture. Although no classification is complete, the most widespread today is the one proposed by Michael J. Flynn in 1966. Flynn's classification takes into account two factors: the number of instruction streams and the number of data streams flowing to the processor. A stream here means a sequence of items (instructions or data) as executed or operated upon by the processor. For example, some machines execute a single instruction stream while others execute several, and likewise some machines operate on a single data stream while others operate on multiple streams. Flynn therefore places a machine in one of four categories depending on whether each stream is single or multiple.

TABLE OF CONTENTS

Abstract
1. Introduction
2. Flynn's classification categories
2.1 Single-instruction single-data streams (SISD)
2.2 Single-instruction multiple-data streams (SIMD)
2.3 Multiple-instruction single-data streams (MISD)
2.4 Multiple-instruction multiple-data streams (MIMD)
3. Conclusion
References

Classification of Computer Architecture

1. Introduction

In 1966 Michael Flynn proposed a characterization of computer systems that was slightly expanded in 1972; today it is known as Flynn's taxonomy. The proposal is a methodology for classifying the general forms of parallel operation available within a processor. Its aim is to describe the types of parallelism either supported in hardware by a processing system or available in an application; the classification can therefore be made from the point of view of the machine or from that of the machine-language programmer.

From the point of view of the assembly-language programmer, parallel computers are classified according to the multiplicity of their instruction and data streams. The essential computing process is the execution of a sequence of instructions on a set of data. The term stream is used here to denote a sequence of items (instructions or data) as executed or operated upon by a single processor; instructions and data are defined with respect to a referenced machine. An instruction stream is a sequence of instructions as executed by the machine; a data stream is a sequence of data, including input, partial, or temporary results, called for by the instruction stream. The classification is as follows:

 Single-instruction single-data streams (SISD)
 Single-instruction multiple-data streams (SIMD)
 Multiple-instruction single-data streams (MISD)
 Multiple-instruction multiple-data streams (MIMD)

Figures 1 and 2 show the orthogonal organization of the streams according to Flynn's
classification.

Figure 1. Flynn’s classification

Figure 2. Parallel computer architecture

2. Flynn's classification categories

2.1 Single Instruction Stream, Single Data Stream (SISD)

These are the conventional systems that contain one CPU (a uniprocessor) and hence execute one instruction at a time (a single instruction stream) and fetch or store one item of data at a time (a single data stream); von Neumann computers are classified as SISD systems. They are therefore sequential computers, which exploit no parallelism in either the instruction or the data stream. All SISD computers use a single register, called the program counter, which enforces serial execution of instructions: as each instruction is fetched from memory, this register is updated to the address of the next instruction to be fetched and executed, resulting in a serial order of execution. A SISD computer can be seen as a finite state machine in which moving to the next instruction is a transition between states. Early CPU designs of the 8086 family with a single execution unit belong to this category; other examples are the IBM 704, the VAX 11/780, and the CRAY-1. Figure 3 shows the general structure of the SISD architecture.

Figure 3. Model of an SISD architecture.
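As a rough software analogy (not a model of any actual processor), the serial fetch-execute behavior described above can be sketched as a loop driven by a single program counter. The instruction format and operations here are hypothetical, invented only for illustration.

```python
# Minimal sketch of the SISD fetch-execute cycle: a single program
# counter (pc) enforces strictly serial execution of one instruction
# stream, touching one data item per instruction.

def run_sisd(program, memory):
    """Execute instructions one at a time, advancing a single program counter."""
    pc = 0  # the single register that enforces serial order
    while pc < len(program):
        op, addr, value = program[pc]  # fetch the next instruction
        if op == "STORE":
            memory[addr] = value       # one data item per instruction
        elif op == "ADD":
            memory[addr] += value
        pc += 1  # transition to the next state (next instruction)
    return memory

# One instruction stream, one data stream, executed strictly in order.
result = run_sisd([("STORE", 0, 5), ("ADD", 0, 3)], [0, 0])  # memory becomes [8, 0]
```

The finite-state-machine view in the text corresponds to the `pc += 1` step: each update of the program counter is one state transition.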

2.2 Single Instruction Stream, Multiple Data Stream (SIMD)

This architecture is essential in the parallel computing world. It can manipulate large vectors and matrices, so it offers greater flexibility than SISD and opportunities for better performance in video, audio, and communications. Common application areas are 3-D graphics, video processing, theatre-quality audio, and high-performance scientific computing. SIMD units are present on all G4 and G5 processors, the Xbox CPU, and Intel Core i7 processors. To take advantage of SIMD, an application must typically be reprogrammed or at least recompiled, although it is usually unnecessary to rewrite the entire application. The power of this architecture is best appreciated when the number of processor units equals the size of the vector: in that situation, component-wise addition and multiplication of vector elements can be done simultaneously. Even when the vector is larger than the number of processors, the advantage over a sequential architecture is huge. Figure 4 shows a block diagram of this architecture. SIMD architectures come in two forms, true SIMD and pipelined SIMD, each with its own advantages and disadvantages.
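The component-wise operation described above can be illustrated with a toy function in which each vector index plays the role of one processing element. This is only a simulation of the lock-step semantics; Python executes the additions sequentially, whereas real SIMD hardware applies the single broadcast instruction to all elements simultaneously.

```python
# Toy illustration of SIMD: one instruction ("add") is conceptually
# broadcast by the control unit, and every processing element applies
# it to its own element of the data in lock-step.

def simd_add(vec_a, vec_b):
    """Component-wise addition; each index stands in for one PE."""
    assert len(vec_a) == len(vec_b), "each PE needs one element from each vector"
    # Real hardware performs all these additions in a single step;
    # here the loop merely simulates the per-element semantics.
    return [a + b for a, b in zip(vec_a, vec_b)]

sums = simd_add([1, 2, 3, 4], [10, 20, 30, 40])  # -> [11, 22, 33, 44]
```

When the number of PEs equals the vector length, as in this four-element example, the whole addition takes one instruction issue rather than four.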

There are two types of true SIMD architecture: true SIMD with distributed memory and true SIMD with shared memory. In the distributed-memory case, the SIMD architecture is composed of a single control unit (CU) with multiple processing elements (PEs); each PE works as an arithmetic unit (AU), so the PEs are slaves of the control unit. The only processor that can fetch and interpret instruction codes is the CU; the AUs can only add, subtract, multiply, and divide, and each AU has access only to its own memory. If one AU requires information held by a different AU, it must request that information through the CU, which manages the transfer. A disadvantage is thus that the CU is responsible for handling all communication between the AUs' memories. In the shared-memory case, the true SIMD architecture is designed with a conveniently configurable association between the PEs and the memory modules.

Here, the local memories that were attached to each AU are replaced by memory modules shared by all the PEs, so that the PEs can share memory without going through the control unit. In this respect the shared-memory true SIMD architecture is superior to the distributed-memory case.

The pipelined SIMD architecture (the vector machine) consists of a pipeline of AUs with shared memory, where the pipeline operates in first-in, first-out (FIFO) order. Pipelining an arithmetic operation divides it into many smaller functions that execute in parallel on different data; to take advantage of the pipeline, the data to be evaluated must be stored in different memory modules and must be fed to the pipeline as fast as possible. These computers can be further distinguished by the number and types of pipelines, the processor/memory interaction, and the arithmetic implementation.

Figure 4. Model of an SIMD architecture.
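The stage decomposition used by pipelined SIMD can be sketched in software. Note the limits of the analogy: the sketch shows how one operation is split into smaller stage functions applied in FIFO order, but Python runs the stages sequentially, whereas in a real pipeline successive data items occupy different stages at the same time. The stage functions are invented for illustration.

```python
# Sketch of the pipelined-SIMD idea: one arithmetic operation is
# decomposed into small stage functions, and data items pass through
# the stages in first-in, first-out order.

def pipeline(data, stages):
    """Feed each item through every stage in order; output keeps FIFO order."""
    results = []
    for item in data:            # in hardware, items overlap across stages
        for stage in stages:     # each stage performs one small sub-function
            item = stage(item)
        results.append(item)
    return results

# Two toy stages standing in for sub-steps of a single operation.
out = pipeline([1, 2, 3], [lambda x: x * 2, lambda x: x + 1])  # -> [3, 5, 7]
```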

2.3 Multiple Instruction Stream, Single Data Stream (MISD)

In the MISD architecture, a single data stream is fed into n processing elements (PEs), each of which operates on the data individually using independent instruction streams. Figure 5 shows the architecture of this kind of computer. MISD machines did not exist when Flynn devised his categories; the class may have been added simply for symmetry in his chart. Its applications are very limited and expensive, and currently there seems to be no commercial implementation, although it remains a topic of research interest. One example is a systolic array performing matrix-multiplication-like computation, with rows of data-processing units (cells) sharing information with their neighbors immediately after processing.

Machines in this category execute several different programs on the same data item, meaning that several instructions operate on a single piece of data. The architecture can be illustrated by two different categories:

 A class of machines that would require distinct processing units receiving distinct instructions to be performed on the same data. This was a big challenge for many designers, and there are currently no machines of this type in the world.

 A class of machines in which data flows through a series of processing elements. Pipelined architectures such as systolic arrays fall into this group. Pipeline architectures perform vector processing through a series of stages, each of which performs a particular function and produces an intermediate result. The reason such architectures are grouped as MISD machines is that the elements of a vector may be considered to belong to the same piece of data, and all pipeline stages represent multiple instructions being applied to that vector.
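The defining MISD property, several independent instruction streams applied to one data item, can be sketched with a toy function in which each "program" is a plain Python function. The three example programs are invented purely for illustration.

```python
# Toy sketch of MISD: several independent "instruction streams"
# (here, ordinary functions) all operate on the same single datum.

def misd(item, programs):
    """Apply each program (a separate instruction stream) to one data item."""
    # Real MISD hardware would run the programs on parallel processing
    # elements; this loop only simulates the one-datum/many-programs idea.
    return [prog(item) for prog in programs]

# Three different programs, one piece of data.
outputs = misd(6, [lambda x: x + 1, lambda x: x * 2, lambda x: x ** 2])  # -> [7, 12, 36]
```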

Figure 5. Model of an MISD architecture.

2.4 Multiple Instruction Stream, Multiple Data Stream (MIMD)

An extension of Flynn's taxonomy was introduced by D. J. Kuck in 1978. In his classification, Kuck further divided the instruction streams into single (scalar and array) and multiple (scalar and array) streams, and likewise for the data streams; the combinations of these streams give a total of 16 categories of architecture. MIMD is a kind of parallel computer in which independent, asynchronous processors execute different instruction streams on different data streams. Figure 6 shows its architecture. Such processors are common in modern supercomputers, clusters, SMP multiprocessors, and multicore processors.

MIMD permits multiple instruction streams to execute simultaneously, each interacting with its own data stream. While MIMD machines composed of completely independent pairs of instruction and data streams are mainly of use for trivially parallel applications, the defining property is that they have more than one processor and each processor can execute a different program (multiple instruction streams) on its own data items (multiple data streams).

In most MIMD systems, each processor has access to a global memory, which
may reduce processor communication delay. In addition, each processor possesses a
private memory, which assists in avoiding memory contention. It is generally
necessary to use a network to connect the processors together in a way that allows a
given processor’s data stream to be supplemented by data computed by other
processors. MIMD machines are also called multiprocessors.

An MIMD computer of essentially any size can be built by repeatedly adding inexpensive microprocessors and simple network elements. This feature, combined with programming flexibility, has made MIMD the principal architecture for large-scale parallel computers. MIMD architectures take advantage of medium- and large-grain parallelism. In current MIMD parallel architectures, the number of processors is smaller than in SIMD systems. MIMD computers are the most complex, but they hold great promise for efficiency achieved through concurrent processing. It is very likely that in the future, small MIMD systems with a limited number of processors will be built with complete connectivity, meaning that each processor is connected to every other one.
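The MIMD idea of independent processors each running its own program on its own data can be sketched with threads. This is only an analogy: Python threads illustrate the asynchronous, per-worker independence, but real MIMD machines use separate processors, and the example tasks are invented for illustration.

```python
# Minimal sketch of MIMD: independent workers, each executing a
# different program (multiple instruction streams) on its own data
# (multiple data streams), asynchronously.

import threading

def run_mimd(tasks):
    """tasks: list of (function, data) pairs - one instruction stream
    and one data stream per worker. Results are collected per worker."""
    results = [None] * len(tasks)

    def worker(i, fn, data):
        results[i] = fn(data)  # each worker has its own program and datum

    threads = [threading.Thread(target=worker, args=(i, fn, d))
               for i, (fn, d) in enumerate(tasks)]
    for t in threads:
        t.start()              # workers run asynchronously
    for t in threads:
        t.join()               # wait for all instruction streams to finish
    return results

# Three different programs on three different data items.
out = run_mimd([(sum, [1, 2, 3]), (max, [4, 5]), (len, "abc")])  # -> [6, 5, 3]
```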

Figure 6. Model of an MIMD architecture.

3. Conclusion

 Flynn’s classification is among the first of its kind to be introduced and as such
it must have inspired subsequent classifications.
 The classification helped in categorizing architectures that were available and
those that have been introduced later. For example, the introduction of the
SIMD and MIMD machine models in the classification must have inspired
architects to introduce these new machine models.
 The classification stresses the architectural relationship at the memory-
processor level. Other architectural levels are totally overlooked.
 The classification stresses the external (morphological) features of
architectures. No information is included on the evolutionary relationship of
architectures that belong to the same category.
 Owing to its pure abstractness, no practically viable machine has exemplified
the MISD model introduced by the classification (at least so far). It should,
however, be noted that some architects have considered pipelined machines
(and perhaps systolic-array computers) as examples for MISD.
 A very important aspect lacking in Flynn's classification is the issue of
machine performance. Although the classification gives the impression that
machines in the SIMD and MIMD categories are superior to their SISD and MISD
counterparts, it gives no information on the relative performance of SIMD and
MIMD machines.

References

[1] O. M. Ross and R. S. Cruz (Eds.), High Performance Programming for Soft Computing, National Polytechnic Institute, Research Center and Development of Digital Technology, Tijuana, Mexico (2014).

[2] C. L. Janssen and M. B. Nielsen, Parallel Computing in Quantum Chemistry, Sandia National Laboratories, U.S. (2008).

[3] M. Abd-El-Barr and H. El-Rewini, Fundamentals of Computer Organization and Architecture, King Fahd University of Petroleum & Minerals and Southern Methodist University, Hoboken, New Jersey (2005).

[4] S. H. Roosta, Parallel Processing and Parallel Algorithms, Department of Computer Science, Oswego, NY, USA (2000).

