
Parallel processing

CoSc-6512

Dr. Basant Tiwari


basanttiw@gmail.com
0904423939

Department of Computer Science, Hawassa University


Parallel processing
• A computer system is said to be a parallel processing system, or parallel computer, if it provides facilities for the simultaneous processing of various sets of data or the simultaneous execution of multiple instructions.
• A parallel computer (or multiple-processor system) is a collection of communicating processing elements (processors) that cooperate to solve large computational problems quickly by dividing them into parallel tasks, exploiting Thread-Level Parallelism (TLP).
• Multiple tasks run at once.
• Work is distributed across multiple execution units.
• There are two main components of a parallel processing system:
• Processing nodes
• Interconnection network
Parallel processing
1. Processing Nodes:
• Each processing node contains one or more processing elements (PEs) or processors, a memory system, plus a communication assist (network interface and communication controller).
2. Parallel Machine Network (System Interconnects):
• The function of a parallel machine network is to transfer information (data, results, etc.) efficiently, at low communication cost, from a source node to a destination node as needed.
• It allows cooperation among parallel processing nodes to solve large computational problems after they have been divided into a number of parallel computational tasks.
Various views of processing
• Basically, there are three views of processing: 1) serial processing, 2) parallel processing in a multicomputer system, and 3) parallel processing on a uniprocessor system, i.e. simulated or virtual parallel processing.

[Figure: (a) serial processing; (b) true parallel processing with multiple processors; (c) parallel processing simulated by switching]

• Fig. (a) represents serial processing: the next process starts only after the previous process has completed.
• In Fig. (b), all three processes run within one clock cycle on three processors.
• In Fig. (c), all three processes also run within one clock cycle, but each process gets only 1/3 of the actual clock cycle, as the CPU switches from one process to another within its clock cycle.
Serial Processing vs. Parallel Processing
Why Parallel Computing?
• Save time (wall-clock time)
• Solve larger problems
• Some problems are parallel by nature, so parallel models fit them best
• Provide concurrency (do multiple things at the same time)
• Take advantage of non-local resources
• Cost savings
• Overcome memory constraints
• Can be made highly fault-tolerant (through replication)
What are the applications?
Traditional HPC
• Nuclear physics
• Fluid dynamics
• Weather forecasting
• Image processing
• Image synthesis
• Virtual reality
• Petroleum
• Biology and genomics
Enterprise Applications
• J2EE and web servers
• Business intelligence
• Banking, finance, insurance, risk analysis
• Regression tests for large software
• Storage of and access to large logs
• Security: fingerprint matching
How to parallelize?
• Three steps:
1. Breaking the task up into smaller tasks
2. Assigning the smaller tasks to multiple workers to work on simultaneously
3. Coordinating the workers
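These three steps can be sketched with Python's standard `multiprocessing` module; the squaring task and the worker count here are illustrative choices, not from the slides:

```python
from multiprocessing import Pool

def square(n):
    # Step 2: each worker executes this task on its own input.
    return n * n

def parallel_squares(numbers, workers=4):
    # Step 1: Pool.map breaks `numbers` into smaller chunks.
    # Step 3: the pool coordinates the workers and reassembles
    # the partial results in their original order.
    with Pool(processes=workers) as pool:
        return pool.map(square, numbers)

if __name__ == "__main__":
    print(parallel_squares([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

The coordination step is hidden inside `Pool.map`, which both distributes the chunks and gathers results in order.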
Additional definitions

• Concurrency: simultaneous access to a resource, physically or logically; concurrent access to variables, resources, remote data.
• Distribution: several address spaces.
• Locality: data located on several hard disks.


Performance? – Parallel Processing
► Performance as time: the time spent between the start and the end of a computation.
► Performance as rate: MIPS (millions of instructions per second); not equivalent across architectures.
► Peak performance: the maximal (theoretical) performance of a resource.
Parallel Computer Architectures
• In 1966, Flynn proposed Flynn's classical taxonomy, based on the number of instruction and data streams.
• Flynn's taxonomy classifies parallel computer architectures by the number of concurrent instruction streams (single or multiple) and data streams (single or multiple) available in the architecture.
• Single Instruction, Single Data stream (SISD): your single-core uniprocessor PC.
• Single Instruction, Multiple Data streams (SIMD): special-purpose, low-granularity multiprocessor machines with a single control unit relaying the same instruction to all processors (each with different data).
• Multiple Instruction, Single Data stream (MISD): pipelining is a major example.
• Multiple Instruction, Multiple Data streams (MIMD): the most prevalent parallel model. SPMD (Single Program, Multiple Data) is a very useful subset.
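As a small illustration of the SPMD subset, the sketch below runs the same program in several processes, each selecting its own slice of the data by rank; the rank-based slicing scheme is a hypothetical choice for this example:

```python
from multiprocessing import Process, Queue

def spmd_kernel(rank, size, data, results):
    # Single Program: every process runs this same function.
    # Multiple Data: `rank` selects the slice this process owns.
    chunk = data[rank::size]
    results.put(sum(chunk))

def run_spmd(data, size=3):
    results = Queue()
    procs = [Process(target=spmd_kernel, args=(r, size, data, results))
             for r in range(size)]
    for p in procs:
        p.start()
    total = sum(results.get() for _ in range(size))  # drain before join
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(run_spmd(list(range(10))))  # 45
```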
Parallel Computer Architectures contd…
SISD
• No instruction parallelism
• No data parallelism
• Example of an SISD architecture: a personal computer processing instructions and data on a single processor.

IS: Instruction Stream
DS: Data Stream
CU: Control Unit
PU: Processing Unit
MU: Memory Unit

• The single processing element executes instructions sequentially on a single data stream. The operations are thus ordered in time and may be easily traced from start to finish.
SIMD
• An SIMD computer consists of a single control unit fetching instructions from an instruction store.
• Instructions are sent to the PUs (processing units) for simultaneous execution. A PU consists of a processing element (PE), which is an ALU with registers, and a private data memory (the PEM). Some SIMD models are:
• Vector computers and special-purpose computation
• Distributed-memory SIMD (MPP, DAP, CM-1 & 2, MasPar)
• Shared-memory SIMD (STARAN, vector computers)
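The lockstep behavior can be sketched in plain Python: a control unit broadcasts one opcode per step, and every PE applies it to its own private memory. The opcodes here are invented purely for illustration:

```python
def simd_step(opcode, pems):
    # The control unit broadcasts a single instruction (opcode);
    # every PE applies it, in lockstep, to its private memory (PEM).
    ops = {"dbl": lambda v: v * 2, "inc": lambda v: v + 1}
    return [ops[opcode](v) for v in pems]

pems = [1, 2, 3, 4]            # one private data memory per PE
pems = simd_step("dbl", pems)  # every PE doubles its element
pems = simd_step("inc", pems)  # every PE increments its element
print(pems)  # [3, 5, 7, 9]
```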
MISD
• In MISD, multiple processing units operate on one single data stream. Each processing unit operates on the data independently via a separate instruction stream.
• In this system there are 'n' processor units, each receiving distinct instructions operating on the same data stream. The result of one processor becomes the input of the next processor.
• One of the closest architectures to this concept is the pipelined computer. This structure has received much less attention and has no real implementations.
MIMD
• An MIMD system is a multiprocessor machine capable of executing multiple instructions on multiple data sets.
• The term "parallel computer" is often reserved for MIMD machines.
• MIMD systems provide a separate stream of instructions for each processor. This allows the processors to work on different parts of a problem asynchronously and independently.
• Such systems may consist of a number of interconnected, dedicated processor and memory nodes, or of interconnected "stand-alone" workstations.
Parallel Processor Architectures – At a Glance
Parallel/Vector Computers
• Parallel computers are those that execute programs in MIMD mode. There are two major classes of parallel computers, namely:
• Shared-memory multiprocessors, and
• Message-passing multicomputers (distributed-memory multicomputers).
• These architectures are also referred to as the memory architectures of parallel computers.
• The major distinction between multiprocessors and multicomputers lies in memory sharing and the mechanisms used for inter-processor communication.
• In a shared-memory multiprocessor system, the processors share a common memory, different processors are connected by an interconnection network, and no processor has its own private main memory.
• In a multicomputer, each processor has its own local memory, and the processors are connected through an interconnection network. Inter-processor communication is done by message passing among nodes.
Parallel/Vector Computers contd…

• In a multiprocessor, each processor is not a complete computer, while in a multicomputer, each processor is a complete computer.
• Shared-memory multiprocessors share data by means of a common memory, which is shared by every processor.
• In a message-passing multicomputer, each processor (having its own local memory) computes its result from the input data and communicates that result directly to other computers through the interconnection network.
SHARED-MEMORY MULTIPROCESSOR
• In a shared-memory architecture, multiple processors operate independently but share the same memory resources. Only one processor can access a given shared memory location at a time.
• Changes made to a memory location by one processor are visible to all other processors.
• Synchronization is achieved by controlling how tasks read from and write to the shared memory.
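This synchronization discipline can be sketched with threads sharing one counter; a lock enforces the "one processor at a time" rule on the shared location (the thread and iteration counts are arbitrary):

```python
import threading

counter = 0              # the shared memory location
lock = threading.Lock()  # serializes access to it

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:       # only one thread updates at a time
            counter += 1

def run(threads=4, iterations=1000):
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(iterations,))
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter       # every update was visible to all threads

if __name__ == "__main__":
    print(run())  # 4000
```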
Shared-memory Multiprocessor Contd…

• There are three shared-memory multiprocessor models:
1. The Uniform Memory Access (UMA) model,
2. The Non-Uniform Memory Access (NUMA) model, and
3. The Cache-Only Memory Architecture (COMA) model.
• These models differ in how the memory and peripheral resources are shared or distributed.
Uniform Memory Access (UMA)
• In this model, all the processors share the physical memory uniformly. All the
processors have equal access time to all the memory words. Each processor may
have a private cache memory.

• When all the processors have equal access to all the peripheral devices, the system is
called a symmetric multiprocessor.

• When only one or a few processors can access the peripheral devices, the system is
called an asymmetric multiprocessor.
UMA contd…
• The simplest multiprocessor system has a single bus to which at least two CPUs and a memory (shared among all processors) are connected.
• When a CPU wants to access a memory location, it checks whether the bus is free, then sends the request to the memory interface module and waits for the requested data to become available on the bus.
• It is a tightly coupled system (high degree of resource sharing).
• Suitable for general-purpose and time-sharing applications by multiple users.
• It can be used to speed up the execution of a single large program in time-critical applications.
Non-Uniform Memory Access (NUMA)
• In the NUMA multiprocessor model, the access time varies with the location of the memory word. Here, the shared memory is physically distributed among all the processors as local memories.
• The collection of all local memories forms a global address space which can be accessed by all the processors.
NUMA Contd…
• A local memory can be shared with other processors only through the interconnection network, via the processor that owns it.
• It is faster for a processor to access its own local memory. Access to remote memory takes longer due to the added delay of the interconnection network.
• NUMA-based machines can be extremely cost-effective and scalable while preserving the semantics of a shared-memory symmetric multiprocessor.
• NUMA is a means of implementing a distributed shared-memory system that can make processor/memory interaction appear transparent to application software.
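The local-versus-remote distinction can be captured in a toy cost model; the cycle counts below are invented purely for illustration:

```python
LOCAL_CYCLES = 1    # hypothetical cost of hitting the local memory
REMOTE_CYCLES = 5   # hypothetical cost via the interconnection network

def access_cost(requesting_node, memory_node):
    # NUMA: the access time depends on where the word physically lives.
    if requesting_node == memory_node:
        return LOCAL_CYCLES
    return REMOTE_CYCLES

# Node 0 reading its own memory vs. node 2's memory:
print(access_cost(0, 0), access_cost(0, 2))  # 1 5
```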
Cache Only Memory Architecture (COMA)
• The COMA model is a special case of the NUMA model. Here, all the distributed main memories are converted into cache memories.
• The local memory of each processor is converted into a cache. The model has no memory hierarchy; all the caches together form a global address space.
COMA Contd…
• Remote cache access is assisted by a distributed cache directory.
• Depending upon the interconnection network used, hierarchical directories may sometimes be used to help locate copies of cache blocks.
• It is very fast in execution.
• However, the cache coherence problem arises in such systems.
Message Passing Multicomputer
Distributed-Memory Multicomputer
• A distributed-memory multicomputer system consists of multiple computers, known as nodes, interconnected by a message-passing network.
• Each node acts as an autonomous computer having a processor, a local memory, and sometimes I/O devices.
• All local memories are private and are accessible only to the local processors.
• That is why traditional machines of this kind are called no-remote-memory-access (NORMA) machines.
• This architecture provides point-to-point static connections among nodes.
• Inter-node communication is carried out by passing messages through the static connection network.
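Inter-node message passing can be sketched with two OS processes and a pipe: each "node" computes on private data and communicates only by sending messages (the data values are illustrative):

```python
from multiprocessing import Process, Pipe

def node(conn):
    # This node's memory is private: no other node can read it.
    local_data = [1, 2, 3]
    conn.send(sum(local_data))  # communicate by message, not memory
    conn.close()

def run_message_passing():
    parent_end, child_end = Pipe()
    p = Process(target=node, args=(child_end,))
    p.start()
    result = parent_end.recv()  # receive the remote node's message
    p.join()
    return result

if __name__ == "__main__":
    print(run_message_passing())  # 6
```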
Vector Super-Computer
• A vector computer is often built on top of a scalar processor; the vector processor is attached to the scalar processor as an optional feature.
• A vector processor is a processor that supports the input, output, and processing of large collections of data elements in the form of matrices.
• A vector supercomputer consists of the following:
1. Host computer
2. Main memory
3. Scalar processor
4. Vector processor
5. Vector registers
Architecture of Vector Super-Computer
Vector Super-Computer Contd…
1. Host Computer: the host computer acts as an interface between the user and the computer system. It also interfaces with mass storage or auxiliary memory.
2. Main Memory: the main memory contains data and programs; it feeds instructions and data into the scalar processor, and data into the vector processor.
3. Scalar Processor: it consists of scalar functional pipelines and a scalar control unit. It accepts instructions from main memory, and also accepts data when the data is in scalar form; at this time the vector unit is disabled. The scalar processor processes the scalar data and returns the result to main memory.
When the scalar processor's control unit detects an instruction that should be handled by the vector processor, the scalar control unit passes the instruction to the vector control unit, activating it. The vector control unit in turn activates the vector functional pipelines and vector registers to accept the vector data from main memory.
Vector Super-Computer Contd…
4. Vector Processor: it becomes active when the scalar unit receives data and instructions that it cannot handle itself. At that time, the vector control unit receives an activation instruction from the scalar control unit. Vector data and vector instructions are then accepted; the processing is controlled by the vector control unit, and the vector output is written back to main memory.
5. Vector Registers: vector registers are the register set necessary for processing inside the vector processor of a vector computer. They can be arranged in two configurations:
a. Static registers: a fixed number of registers, each with an equal number of bits, with that arrangement used throughout processing. For example, the Cray-1 used eight vector registers, each holding sixty-four 64-bit elements.
b. Dynamic registers: these registers can be reconfigured during program execution to match the requirements of the vector operands, e.g. the Fujitsu VP2000 series.
SIMD Supercomputers
• In SIMD computers, 'N' processors are connected to a control unit, and all the processors have their own individual memory units. All the processors are connected by an interconnection network.
• SIMD processors are especially designed for performing vector computations. SIMD has two basic architectural organizations:
A. Array processors using random-access memory
B. Associative processors using content-addressable memory.
SIMD Supercomputers Contd…

• SIMD Machine Model: an operational model of an SIMD computer is specified by the 5-tuple (N, C, I, M, R), where
• N = the number of processing elements (PEs).
• C = the set of instructions directly executed by the control unit, including scalar and program-flow-control instructions.
• I = the set of instructions broadcast to all PEs for parallel execution.
• M = the set of masking schemes used to partition the PEs into enabled/disabled states; a method for determining which PEs are active in a given cycle.
• R = the set of data-routing functions enabling inter-PE communication through the interconnection network.
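The masking component M of the model can be sketched as follows: disabled PEs simply skip the broadcast instruction and keep their values. The mask and the operation are made up for this example:

```python
def masked_broadcast(op, pems, mask):
    # M: the mask partitions PEs into enabled/disabled states for
    # this cycle; only enabled PEs execute the broadcast instruction.
    return [op(v) if enabled else v for v, enabled in zip(pems, mask)]

pems = [10, 20, 30, 40]
mask = [True, False, True, False]  # enable PE0 and PE2 only
pems = masked_broadcast(lambda v: v + 1, pems, mask)
print(pems)  # [11, 20, 31, 40]
```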
THANKS
