
Parallel Architecture & Parallel Programming

Contents:
Introduction
- Von Neumann Architecture
- Serial (Single) Computation
- Concepts and Terminology
Parallel Architecture
- Definition
- Benefits & Advantages
- Distinguishing Parallel Processors
- Multiprocessor Architecture Classifications
- Parallel Computer Memory Architectures
Parallel Programming
- Definition
- Parallel Programming Models
- Designing Parallel Programs
- Parallel Algorithm Examples
Case Study
Conclusion

Introduction:
Von Neumann Architecture
Since it was first described by John von Neumann in 1945, virtually all computers have followed this basic design, which comprises four main components:
- Memory
- Control Unit
- Arithmetic Logic Unit
- Input/Output

Introduction
Serial Computation:

Traditionally, software has been written for serial computation, to be run on a single computer having a single Central Processing Unit (CPU):
- A problem is broken into a discrete series of instructions.
- Instructions are executed one after another.
- Only one instruction may execute at any moment in time.


Parallel Architecture

Definition:
Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem, using multiple CPUs, in which:
- A problem is broken into discrete parts that can be solved concurrently.
- Each part is further broken down into a series of instructions.
- Instructions from each part execute simultaneously on different CPUs.


Concepts and Terminology: General Terminology

Task: a logically discrete section of computational work.
Parallel Task: a task that can be executed safely by multiple processors.
Communications: the exchange of data between parallel tasks.
Synchronization: the coordination of parallel tasks in real time.

Benefits & Advantages:
- Save time and money.
- Solve larger problems.

How to Distinguish Parallel Processors:


Resource Allocation:
- How large a collection of processing elements?
- How powerful are the elements?
- How much memory?

Data Access, Communication and Synchronization:
- How do the elements cooperate and communicate?
- How are data transmitted between processors?
- What are the abstractions and primitives for cooperation?

Performance and Scalability:
- How does it all translate into performance?
- How does it scale?

Multiprocessor Architecture Classification:

Flynn's taxonomy distinguishes multiprocessor architectures by their instruction and data streams:
- SISD: Single Instruction, Single Data
- SIMD: Single Instruction, Multiple Data
- MISD: Multiple Instruction, Single Data
- MIMD: Multiple Instruction, Multiple Data

Flynn's Classical Taxonomy: SISD

A serial (non-parallel) computer: only one instruction stream and one data stream are acted on during any one clock cycle.

Flynn's Classical Taxonomy: SIMD


All processing units execute the same instruction at any given clock cycle. Each processing unit operates on a different data element.

Flynn's Classical Taxonomy: MISD

Different instructions operate on a single data element. There are very few practical uses for this class of machine. Example: multiple cryptography algorithms attempting to crack a single coded message.

Flynn's Classical Taxonomy: MIMD

Processors can execute different instructions on different data elements. This is the most common type of parallel computer.

Parallel Computer Memory Architectures: Shared Memory Architecture

All processors access all memory as a single global address space. Data sharing is fast, but adding CPUs increases traffic on the shared memory path, so scalability between memory and CPUs is limited.

Parallel Computer Memory Architectures: Distributed Memory

Each processor has its own local memory. Memory is scalable with the number of processors, and there is no overhead for maintaining cache coherency. However, the programmer is responsible for many of the details of communication between processors.

Parallel Programming

Parallel Programming Models

Parallel programming models exist as an abstraction above hardware and memory architectures.
Examples:
- Shared Memory
- Threads
- Message Passing
- Data Parallel

Parallel Programming Models: Shared Memory Model

The program appears to the user as a single shared memory, regardless of the underlying hardware implementation. Locks and semaphores may be used to control access to the shared memory. Program development can be simplified, since there is no need to explicitly specify communication between tasks.
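A minimal Java sketch of this idea (illustrative only, not part of the original slides): several threads update one shared counter, with an explicit lock standing in for the locks and semaphores mentioned above.

import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch: several threads update one shared counter.
// The lock ensures that only one thread modifies the shared data at a time.
public class SharedCounter {
    private static long counter = 0;                      // shared memory
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int n = 0; n < 100_000; n++) {
                    lock.lock();                          // acquire exclusive access
                    try {
                        counter++;                        // critical section
                    } finally {
                        lock.unlock();                    // release the lock
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();                // wait for all threads
        System.out.println("Final count: " + counter);    // always 400000
    }
}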

Parallel Programming Models: Threads Model

A single process may have multiple concurrent execution paths. This model is typically used with a shared memory architecture. The programmer is responsible for determining all parallelism.
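A small Java sketch, assuming a simple array-summing workload (not from the slides): one process spawns two concurrent execution paths that share the same memory, and the programmer explicitly creates, starts, and joins the threads.

// Illustrative sketch of the threads model: one process, two concurrent
// execution paths, each summing half of the same in-memory array.
public class ThreadsModelDemo {
    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);

        long[] partial = new long[2];                      // one slot per thread
        Thread low  = new Thread(() -> partial[0] = sum(data, 0, data.length / 2));
        Thread high = new Thread(() -> partial[1] = sum(data, data.length / 2, data.length));

        low.start();
        high.start();                                      // both paths run concurrently
        low.join();
        high.join();                                       // the programmer coordinates the parallelism

        System.out.println("Total: " + (partial[0] + partial[1]));
    }

    static long sum(int[] a, int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += a[i];
        return s;
    }
}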

Parallel Programming Models: Message Passing Model

Tasks exchange data by sending and receiving messages. This model is typically used with distributed memory architectures. Data transfer requires cooperative operations to be performed by each process; for example, every send operation must have a matching receive operation. MPI (Message Passing Interface) is the industry standard interface for message passing.
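The sketch below is not MPI; it only illustrates the cooperative send/receive pattern in plain Java, using a blocking queue as a stand-in for the message channel between two tasks.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch only (not MPI): two tasks exchange data by explicit
// send/receive operations. A blocking queue stands in for the message channel,
// so every send must be matched by a receive, as in the message passing model.
public class MessagePassingSketch {
    public static void main(String[] args) {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(1);

        Thread sender = new Thread(() -> {
            try {
                channel.put(42);                 // "send": blocks until space is available
                System.out.println("sender: sent 42");
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread receiver = new Thread(() -> {
            try {
                int value = channel.take();      // "receive": blocks until a message arrives
                System.out.println("receiver: got " + value);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        sender.start();
        receiver.start();
    }
}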

Parallel Programming Models: Data Parallel Model

Tasks perform the same operation on a set of data, with each task working on a separate piece of the set. This model works well with either shared memory or distributed memory architectures.
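A brief Java sketch of the data parallel model, assuming a simple element-wise operation (not from the slides): every piece of the array receives the same operation, and the parallel stream partitions the index range across the available processors.

import java.util.stream.IntStream;

// Illustrative sketch of the data parallel model: every task applies the same
// operation to a different piece of the data set. The parallel stream splits
// the index range across the available processors automatically.
public class DataParallelDemo {
    public static void main(String[] args) {
        double[] data = new double[1_000_000];
        IntStream.range(0, data.length)
                 .parallel()                     // partition the range across CPUs
                 .forEach(i -> data[i] = Math.sqrt(i) * Math.sqrt(i));
        System.out.println("data[999999] = " + data[999_999]);
    }
}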

Designing Parallel Programs: Automatic Parallelization

A parallelizing compiler analyzes the source code and identifies opportunities for parallelism. The analysis includes attempting to determine whether the parallelism would actually improve performance. Loops are the most frequent target for automatic parallelization.
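The Java fragment below is illustrative only (Java compilers do not auto-parallelize loops): it shows the kind of loop a parallelizing compiler looks for, where no iteration depends on another, together with an explicit parallel rewrite that computes the same result.

import java.util.stream.IntStream;

// The kind of loop a parallelizing compiler targets: each iteration is
// independent of every other, so iterations can safely run on different CPUs.
public class LoopParallelism {
    public static void main(String[] args) {
        double[] a = new double[100_000];
        double[] b = new double[100_000];

        // Serial version: no iteration reads a value written by another iteration.
        for (int i = 0; i < a.length; i++) {
            a[i] = b[i] * 2.0 + 1.0;
        }

        // Equivalent parallel version: the independent iterations are distributed.
        IntStream.range(0, a.length).parallel().forEach(i -> a[i] = b[i] * 2.0 + 1.0);
    }
}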

Designing Parallel Programs: Manual Parallelization
Understand the problem:

A Parallelizable Problem:
Calculate the potential energy for each of several thousand independent conformations of a molecule; when done, find the minimum-energy conformation. Each conformation can be evaluated independently, as in the sketch below.
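A Java sketch of this embarrassingly parallel search; potentialEnergy() is a hypothetical placeholder for the real energy calculation, and each independent conformation is evaluated concurrently before taking the minimum.

import java.util.stream.IntStream;

// Sketch of the parallel energy search described above.
// potentialEnergy() is a hypothetical stand-in for the real physics code;
// each conformation is evaluated independently, then the minimum is taken.
public class EnergySearch {
    public static void main(String[] args) {
        int conformations = 10_000;
        double minEnergy = IntStream.range(0, conformations)
                                    .parallel()                              // independent evaluations
                                    .mapToDouble(EnergySearch::potentialEnergy)
                                    .min()
                                    .getAsDouble();
        System.out.println("Minimum energy found: " + minEnergy);
    }

    // Hypothetical placeholder for the real energy calculation.
    static double potentialEnergy(int conformation) {
        return Math.sin(conformation) + conformation * 1e-6;
    }
}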

A Non-Parallelizable Problem:
The Fibonacci series.
Each term depends on the previously computed terms, so the calculations cannot proceed independently.

Designing Parallel Programs: Domain Decomposition


Each task handles a portion of the data set.
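A possible Java sketch of domain decomposition, assuming the data set is a plain array (illustrative only): the array is split into contiguous chunks, and each task works only on its own chunk.

import java.util.concurrent.*;

// Sketch of domain decomposition: the data set (an array) is split into
// contiguous chunks, and each task is responsible for its own chunk only.
public class DomainDecomposition {
    public static void main(String[] args) throws Exception {
        double[] data = new double[1_000_000];
        int tasks = 4;
        int chunk = data.length / tasks;

        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        for (int t = 0; t < tasks; t++) {
            final int from = t * chunk;
            final int to = (t == tasks - 1) ? data.length : from + chunk;
            pool.submit(() -> {
                for (int i = from; i < to; i++) {
                    data[i] = Math.sqrt(i);      // each task touches only its own portion
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("done, data[42] = " + data[42]);
    }
}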

Designing Parallel Programs: Functional Decomposition

The problem is decomposed according to the work that must be done; each task then performs a part of the overall work, as in the sketch below.
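A possible Java sketch of functional decomposition (illustrative only): three tasks perform different functions (generate, square, print) and form a small pipeline, passing intermediate results through queues.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of functional decomposition: each task performs a *different* part of
// the overall work (generate -> square -> print), forming a small pipeline.
public class FunctionalDecomposition {
    public static void main(String[] args) {
        BlockingQueue<Integer> raw = new ArrayBlockingQueue<>(16);
        BlockingQueue<Integer> squared = new ArrayBlockingQueue<>(16);
        int count = 10;

        Thread generate = new Thread(() -> {
            try { for (int i = 1; i <= count; i++) raw.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread square = new Thread(() -> {
            try { for (int i = 0; i < count; i++) { int v = raw.take(); squared.put(v * v); } }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread print = new Thread(() -> {
            try { for (int i = 0; i < count; i++) System.out.println(squared.take()); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        generate.start(); square.start(); print.start();
    }
}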

Conclusion
Parallel computing can dramatically reduce the time needed to solve large computational problems. There are many different approaches and models of parallel computing. Parallel computing is the future of computing.

References
- A Library of Parallel Algorithms, www2.cs.cmu.edu/~scandal/nesl/algorithms.html
- Internet Parallel Computing Archive, wotug.ukc.ac.uk/parallel
- Introduction to Parallel Computing, www.llnl.gov/computing/tutorials/parallel_comp/#Whatis
- Parallel Programming in C with MPI and OpenMP, Michael J. Quinn, McGraw-Hill Higher Education, 2003
- The New Turing Omnibus, A. K. Dewdney, Henry Holt and Company, 1993

Case Study
Developing Parallel Applications On the Web using Java mobile agents and Java threads

My References:
- Parallel Computing Using JAVA Mobile Agents, by Panayiotou Christoforos, George Samaras, Evaggelia Pitoura, Paraskevas Evripidou
- An Environment for Parallel Computing on Internet Using JAVA, by P. C. Saxena, S. Singh, K. S. Kahlon
