Published by nikhilesh walde on Jan 10, 2009
Copyright: Attribution Non-commercial
Parallel Processing
Parallel processing is a computer technique in which multiple operations are carried out simultaneously. Parallelism reduces computational time. For this reason, it is used for many computationally intensive applications, such as predicting economic trends or generating visual special effects for feature films.

Two common ways that parallel processing is accomplished are multiprocessing and instruction-level parallelism. Multiprocessing links several processors (computers or microprocessors, the electronic circuits that provide the computational power and control of computers) together to solve a single problem. Instruction-level parallelism uses a single computer processor that executes multiple instructions simultaneously.

If a problem is divided evenly into ten independent parts that are solved simultaneously on ten computers, the solution requires one tenth of the time it would take on a single nonparallel computer, where each part is solved in sequential order. Many large problems are easily divisible for parallel processing; however, some problems are difficult to divide because their parts are interdependent, requiring the results from one part of the problem before the others can be solved.

Portions of a problem that cannot be calculated in parallel are called serial. These serial portions determine the computation time for the whole problem. For example, suppose a problem has nine million computations that can be done in parallel and one million computations that must be done serially. Theoretically, nine million computers could perform nine tenths of the total computation simultaneously, leaving one tenth of the total problem to be computed serially. Even with nine million additional processors, the total execution time is therefore still one tenth of what it would be on a single nonparallel computer.
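The arithmetic above is an instance of Amdahl's law. A minimal Python sketch (the function name amdahl_speedup is ours, not from the source) makes the nine-million-processor example concrete:

```python
def amdahl_speedup(parallel_fraction, processors):
    """Overall speedup when parallel_fraction of the work is spread
    across `processors` and the rest must run serially (Amdahl's law)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# Nine million parallel computations and one million serial ones:
# 90% of the work is parallelizable. With nine million processors the
# parallel part takes negligible time, so the total time approaches
# the serial tenth -- a speedup approaching 10x, never more.
print(amdahl_speedup(0.9, 9_000_000))
```

No matter how many processors are added, the speedup here can never exceed 10, because the serial tenth of the work sets the floor on execution time.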
In 1966 American electrical engineer Michael Flynn distinguished four classes of processor architecture (the design of how processors manipulate data and instructions). Data can be sent to a computer's processor either one piece at a time, in a single data stream, or several pieces at a time, in multiple data streams. Similarly, instructions can be carried out either one at a time, in a single instruction stream, or several at a time, in multiple instruction streams.

Serial computers have a Single Instruction stream, Single Data stream (SISD) architecture. One piece of data is sent to one processor. For example, if 100 numbers had to be multiplied by the number 3, each number would be sent to the processor, multiplied, and the result stored; then the next number would be sent and calculated, until all 100 results were calculated. Applications suited to SISD architectures include those that require complex interdependent decisions, such as word processing.
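The SISD style can be sketched in a few lines of Python: the loop body is the single instruction stream, and each of the 100 numbers passes through it one at a time:

```python
# SISD: one instruction stream, one data stream. Each number is
# fetched, multiplied by 3, and its result stored before the next
# number is touched -- strictly sequential execution.
numbers = list(range(100))
results = []
for n in numbers:          # one piece of data at a time
    results.append(n * 3)  # one multiply at a time
```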
A Multiple Instruction stream, Single Data stream (MISD) processor replicates a stream of data and sends it to multiple processors, each of which then executes a separate program. For example, the contents of a database could be sent simultaneously to several processors, each of which would search for a different value. Problems well suited to MISD parallel processing include computer vision systems that extract multiple features, such as vegetation, geological features, or manufactured objects, from a single satellite image.

A Single Instruction stream, Multiple Data stream (SIMD) architecture has multiple processing elements that carry out the same instruction on separate data. For example, a SIMD machine with 100 processing elements can simultaneously multiply 100 numbers each by the number 3. SIMD processors are programmed much like SISD processors, but their operations occur on arrays of data instead of individual values, so they are also known as array processors. Examples of applications that use SIMD architecture are image-enhancement processing and radar processing for air-traffic control.

A Multiple Instruction stream, Multiple Data stream (MIMD) processor has separate instructions for each stream of data. This architecture is the most flexible, but it is also the most difficult to program because it requires additional instructions to coordinate the actions of the processors. It can also simulate any of the other architectures, but with less efficiency. MIMD designs are used for complex simulations, such as projecting city growth and development patterns, and in some artificial-intelligence programs.
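As a rough analogy for the SIMD style in Python (assuming NumPy is available), a single whole-array multiply stands in for one instruction applied to many data elements at once; NumPy often maps such array operations onto real SIMD instructions on the CPU:

```python
import numpy as np

# SIMD style: one instruction ("multiply by 3") applied to a whole
# array of data in a single operation, rather than a Python loop
# issuing one multiply per element as an SISD machine would.
data = np.arange(100)   # the 100 numbers from the example above
result = data * 3       # one array-level instruction, 100 results
```

Contrast this with the SISD loop earlier: the program expresses the operation once for the whole array, matching how array processors are programmed.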
Parallel Communication
Another factor in parallel-processing architecture is how processors communicate with each other. One approach is to let processors share a single memory and communicate by reading each other's data. This is called shared memory. In this architecture, all the data can be accessed by any processor, but care must be taken to prevent the linked processors from inadvertently overwriting each other's results.

An alternative method is to connect the processors and allow them to send messages to each other. This technique is known as message passing or distributed memory. Data are divided and stored in the memories of different processors. This makes it more difficult to share information, because the processors are not connected to the same memory, but it is also safer because results cannot be overwritten.

In shared memory systems, as the number of processors increases, access to the single memory becomes difficult and a bottleneck forms. To address this limitation, and the problem of isolated memory in distributed memory systems, distributed memory processors can also be constructed with circuitry that allows different processors to access each other's memory. This hybrid approach, known as distributed shared memory, eliminates the bottleneck and sharing problems of both architectures.
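The two communication styles can be sketched with Python threads standing in for processors (a simplification: real machines use hardware interconnects, not threads). The shared-memory workers update one common counter, guarded by a lock to avoid the overwriting problem noted above; the message-passing workers never touch shared data and instead send results through a queue:

```python
import threading
import queue

# Shared memory: both workers update one counter; the lock prevents
# them from overwriting each other's intermediate results.
counter = 0
lock = threading.Lock()

def shared_worker(n):
    global counter
    for _ in range(n):
        with lock:         # serialize access to the shared value
            counter += 1

# Message passing: each worker keeps its data private and sends its
# result as a message; the receiver combines the messages.
mailbox = queue.Queue()

def message_worker(n):
    mailbox.put(n)         # send a private result over the "network"

threads = [threading.Thread(target=shared_worker, args=(1000,)) for _ in range(2)]
threads += [threading.Thread(target=message_worker, args=(500,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(mailbox.get() for _ in range(2))
print(counter, total)
```

Without the lock, the two shared-memory workers could interleave their read-modify-write steps and lose updates, which is exactly the overwriting hazard the shared-memory design must guard against.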
Parallel processing is more costly than serial computing: multiple processors are expensive, and the speedup in computation is rarely proportional to the number of additional processors.

MIMD processors require complex programming to coordinate their actions. Finding MIMD programming errors is also complicated by time-dependent interactions between processors. For example, one processor might request a result from a second processor's memory before that processor has produced the result and put it into its memory. This leads to errors that are difficult to identify.

Programs written for one parallel architecture seldom run efficiently on another. As a result, using one program on two different parallel processors often requires a costly and time-consuming rewrite of the program.
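The time-dependent error described above can be sketched with Python threads: a consumer needs a producer's result, and without synchronization it might read before the result exists. Here an Event makes the dependency explicit (the names producer and consumer are illustrative, not from the source):

```python
import threading

result = {}
ready = threading.Event()

def producer():
    result["value"] = 42   # stand-in for the first processor's computation
    ready.set()            # signal that the result is now in memory

def consumer(out):
    ready.wait()           # block until the producer has written the result
    out.append(result["value"] * 2)

out = []
a = threading.Thread(target=producer)
b = threading.Thread(target=consumer, args=(out,))
b.start()                  # even started first, the consumer must wait
a.start()
a.join()
b.join()
print(out)
```

Remove the Event and the outcome depends on which thread happens to run first: sometimes correct, sometimes a KeyError. Bugs that appear only under certain timings are what makes MIMD errors so hard to reproduce and identify.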
When a parallel processor performs more than 1000 operations at a time, it is said to be massively parallel. In most cases, problems suited to massive parallelism involve large amounts of data, such as weather forecasting, simulating the properties of hypothetical pharmaceuticals, and code breaking. Massively parallel processors today are large and expensive, but technology soon will permit an SIMD processor with 1024 processing elements to reside on a single integrated circuit.

Researchers are finding that the serial portions of some problems can be processed in parallel, but on different architectures. For example, 90 percent of a problem may be suited
