KONERU LAKSHMAIAH COLLEGE OF ENGINEERING

(AUTONOMOUS)
Approved by AICTE, Accredited by the National Board of Accreditation and ISO 9001:2000 certified. Green Fields, Vaddeswaram, Guntur Dist., A.P., India - 522 502. Affiliated to Acharya Nagarjuna University.

STRATEGY OF PARALLEL COMPUTING

PRESENTED BY
NAME: K. NAGA TEJA, KLCE
PH NO: 9030686681
Email: naga.tejarocks@gmail.com

NAME: K. IMMANUEL, KLCE
PH NO: 9642672002
Email: immanuel.kota@gmail.com

ABSTRACT:
This paper presents the very basics of parallel computing, and is intended for someone who is just becoming acquainted with the subject. It begins with a brief overview, including concepts and terminology associated with parallel computing. The topics of parallel memory architectures and programming models are then explored. These topics are followed by a discussion on a number of issues related to designing parallel programs.

INTRODUCTION:
Serial/scalar computing: Traditionally, software has been written for serial computation:
• To be run on a single computer having a single Central Processing Unit (CPU).
• A problem is broken into a discrete series of instructions.
• Instructions are executed one after another.
• Only one instruction may execute at any moment in time.
Parallel computing: The compute resources can include:
• A single computer with multiple processors.
• An arbitrary number of computers connected by a network.
• A combination of both.
Parallel clusters can be built from cheap, commodity components, which offers significant cost savings.
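To make the contrast concrete, the listing below is a minimal sketch (added for illustration, not part of the original text) that writes one loop serially and then breaks a second loop into discrete parts executed concurrently. The use of OpenMP, the array size, and the build command are assumptions.

/* Illustrative only: a serial loop and the same kind of work decomposed
 * across threads with OpenMP. Build with: gcc -fopenmp serial_vs_parallel.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* Serial computation: one instruction stream, one element at a time. */
    for (int i = 0; i < N; i++)
        a[i] = 0.5 * i;

    /* Parallel computation: the iteration space is broken into discrete
     * parts, and each part is executed on a different processor core. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.1f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}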

DEF: The simultaneous use of more than one processor or computer to solve a problem or task is referred to as PARALLEL COMPUTING.

There are many types of computers available today, from single processor or 'scalar' computers to machines with vector processors to massively parallel computers with thousands of microprocessors. Each platform has its own unique characteristics, and understanding the differences is important to understanding how best to program each. However, the real trick is to try to write programs that will run reasonably well on a wide range of computers.

TYPES OF PARALLEL COMPUTERS:
1. SHARED MEMORY ARCHITECTURE
2. DISTRIBUTED MEMORY ARCHITECTURE
3. CLUSTER COMPUTER
4. METACOMPUTING

1. SHARED MEMORY COMPUTER MODEL:
• Shared memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as a global address space.
• Multiple processors can operate independently but share the same memory resources.
• Changes in a memory location effected by one processor are visible to all other processors.
• Shared memory machines can be divided into two main classes based upon memory access times: UMA and NUMA.
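The listing below is a rough sketch of this global address space, again assuming OpenMP as the shared memory programming model (the paper itself does not name one). Every thread writes into the same array, and the shared counter hints at the synchronization responsibility noted under the disadvantages below.

/* Illustrative shared memory sketch: all threads see one global address
 * space, so a change made by one thread is visible to the others.
 * Build with: gcc -fopenmp shared_memory.c */
#include <stdio.h>
#include <omp.h>

#define N 16

int main(void) {
    int data[N];          /* lives in memory shared by every thread   */
    int updates = 0;      /* shared counter: needs synchronized access */

    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        data[i] = i * i;  /* each thread writes a different element   */

        #pragma omp atomic  /* synchronization construct: without it  */
        updates++;          /* concurrent increments could be lost    */
    }

    /* After the parallel region the program can read every element,
     * because all of memory is globally addressable. */
    printf("updates = %d, data[N-1] = %d\n", updates, data[N - 1]);
    return 0;
}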

Hence. the concept of cache coherency does not apply. Memory addresses in one processor do not map to another processor. Because each processor has its own local memory. Adding more CPUs can geometrically increases traffic on the shared memoryCPU path. Increase the number of processors and the size of . Programmer responsibility for synchronization constructs that insure "correct" access of global memory. it operates independently. Distributed memory systems require a communication network to connect inter-processor memory.DISTRIBUTED MEMORY MODEL: Advantages: • Like shared memory systems. Changes it makes to its local memory have no effect on the memory of other processors. Expense: it becomes increasingly difficult and expensive to design and produce shared memory machines with ever increasing numbers of processors. 2. distributed memory systems vary widely but share a common Memory is scalable with number of processors. so there is no concept of global address space across all processors. • • Processors have their own local memory.• • Global address space provides a user-friendly programming perspective to memory Data sharing between tasks is both fast and uniform due to the proximity of memory to CPUs characteristic. and for cache coherent systems. geometrically increase traffic associated with cache/memory management. Disadvantages: • • • Primary disadvantage is the lack of scalability between memory and CPUs.

Advantages:
• Memory is scalable with the number of processors. Increase the number of processors and the size of memory increases proportionately.
• Each processor can rapidly access its own memory without interference and without the overhead incurred with trying to maintain cache coherency.
• Cost effectiveness: can use commodity, off-the-shelf processors and networking.

Disadvantages:
• The programmer is responsible for many of the details associated with data communication between processors.
• It may be difficult to map existing data structures, based on global memory, to this memory organization.
• Non-uniform memory access (NUMA) times.

3. CLUSTER COMPUTERS:
Distributed memory computers can also be built from scratch using mass-produced PCs and workstations. These cluster computers are referred to by many names, from a poor man's supercomputer to COWs (clusters of workstations) and NOWs (networks of workstations). They are much cheaper than traditional MPP systems, and often use the same processors, but are more difficult to use since the network capabilities are currently much lower. This is in part because the networking and software infrastructure for cluster computing is less mature, making it difficult to make use of very large systems at this time. Cluster computers are also usually much smaller, most often involving fewer than 100 processors.

4. METACOMPUTING:
Metacomputing is a similar idea, but with loftier goals. Supercomputers that may be geographically separated can be combined to run the same program. However, the goal in metacomputing is usually to provide very high bandwidths between the supercomputers so that these connections do not produce a bottleneck for the communications. Scheduling exclusive time on many supercomputers at the same time can also pose a problem. This is still an area of active research.

REAL TIME APPLICATION FOR PARALLEL COMPUTING:
An IFS TL2047L149 forecast model takes about 5000 seconds of wall time for a 10-day forecast using 128 nodes of an IBM Power6 cluster. How long would this model take using a fast PC with sufficient memory (e.g. a dual-core Dell)?
Ans: About 1 year, i.e. a slowdown of roughly 6000 times, since one year is about 3 x 10^7 seconds versus 5000 seconds on the cluster. The PC would also need ~2000 Gbytes of memory.

CONCLUSION:
Applications need to scale to ever increasing numbers of processors, but the problem areas are:
• Serial sections (illustrated in the sketch below)
• Load imbalance
• Global communications
Parallel computers also require and produce a lot of data (I/O):
• They require parallel file systems (GPFS, Lustre) plus an archive store.
• They need more power (MWs = $).
• They need more space (new computer halls = $).
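To see why serial sections are listed as a problem area, the short sketch below evaluates Amdahl's law, S(p) = 1 / (s + (1 - s)/p), for a hypothetical code with a 5% serial fraction. Both the law and the 5% figure are illustrative additions, not results from this paper; with s = 0.05 the speedup can never exceed 20, no matter how many processors are added.

/* Illustrative only: Amdahl's law showing how a serial fraction s limits
 * speedup regardless of the processor count p. The 5% value is made up. */
#include <stdio.h>

int main(void) {
    const double s = 0.05;                     /* assumed serial fraction */
    const int procs[] = {1, 16, 128, 1024, 8192};

    for (int i = 0; i < 5; i++) {
        int p = procs[i];
        double speedup = 1.0 / (s + (1.0 - s) / p);
        printf("p = %5d  ->  speedup = %6.2f (ceiling = %.0f)\n",
               p, speedup, 1.0 / s);
    }
    return 0;
}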
