A Multiple Instruction stream, Single Data stream (MISD) processor replicates a stream of data and sends it to multiple processors, each of which then executes a separate program. For example, the contents of a database could be sent simultaneously to several processors, each of which would search for a different value. Problems well-suited to MISD parallel processing include computer vision systems that extract multiple features, such as vegetation, geological features, or manufactured objects, from a single satellite image.

A Single Instruction stream, Multiple Data stream (SIMD) architecture has multiple processing elements that carry out the same instruction on separate data. For example, a SIMD machine with 100 processing elements can simultaneously multiply 100 numbers each by the number 3. SIMD processors are programmed much like SISD processors, but their operations occur on arrays of data instead of individual values, as sketched in the short example below. SIMD processors are therefore also known as array processors. Examples of applications that use SIMD architecture are image-enhancement processing and radar processing for air-traffic control.

A Multiple Instruction stream, Multiple Data stream (MIMD) processor has separate instructions for each stream of data. This architecture is the most flexible, but it is also the most difficult to program because it requires additional instructions to coordinate the actions of the processors. It also can simulate any of the other architectures, but with less efficiency. MIMD designs are used in complex simulations, such as projecting city growth and development patterns, and in some artificial-intelligence programs.
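The SIMD example above, multiplying 100 numbers by 3 in a single step, can be sketched in Python using the NumPy library, whose whole-array operations mirror the idea of one instruction applied to many data elements at once (and are often executed with hardware vector instructions). The variable names and values are illustrative only.

    import numpy as np

    # 100 data elements, one per (conceptual) processing element.
    data = np.arange(100, dtype=np.int64)

    # A single instruction -- "multiply by 3" -- is applied to every
    # element at once, rather than looping over the values one at a
    # time as a SISD processor would.
    result = data * 3

    print(result[:5])   # first five results: 0 3 6 9 12

A SISD version of the same task would instead step through the 100 values in a loop, multiplying one number per instruction.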
Another factor in parallel-processing architecture is how processors communicate with each other. One approach is to let processors share a single memory and communicate by reading each other's data. This is called shared memory. In this architecture, all the data can be accessed by any processor, but care must be taken to prevent the linked processors from inadvertently overwriting each other's results.

An alternative method is to connect the processors and allow them to send messages to each other. This technique is known as message passing or distributed memory. Data are divided and stored in the memories of different processors. This makes it difficult to share information because the processors are not connected to the same memory, but it is also safer because the results cannot be overwritten.

In shared memory systems, as the number of processors increases, access to the single memory becomes difficult, and a bottleneck forms. To address this limitation, and the problem of isolated memory in distributed memory systems, distributed memory processors also can be constructed with circuitry that allows different processors to access each other's memory. This hybrid approach, known as distributed shared memory, eliminates the bottleneck and sharing problems of both architectures.
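The two communication styles can be illustrated with a small Python sketch using the standard multiprocessing module; the worker functions and names below are hypothetical, chosen only to show the contrast. In the shared memory style, all workers update one counter and must use a lock so that concurrent writes do not overwrite each other; in the message passing style, each worker computes on its own private data and communicates only by sending a message through a queue.

    from multiprocessing import Process, Value, Queue, Lock

    def shared_memory_worker(counter, lock):
        # Shared memory: every process sees the same counter, so a lock
        # is needed to keep concurrent updates from clobbering each other.
        for _ in range(1000):
            with lock:
                counter.value += 1

    def message_passing_worker(worker_id, queue):
        # Message passing: the worker's data stays private; only the
        # result is sent as a message.
        partial_sum = sum(range(worker_id * 10, (worker_id + 1) * 10))
        queue.put((worker_id, partial_sum))

    if __name__ == "__main__":
        # Shared-memory style: one counter, guarded by a lock.
        counter, lock = Value("i", 0), Lock()
        procs = [Process(target=shared_memory_worker, args=(counter, lock))
                 for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print("shared counter:", counter.value)   # 4000

        # Message-passing style: results arrive as messages on a queue.
        queue = Queue()
        procs = [Process(target=message_passing_worker, args=(i, queue))
                 for i in range(4)]
        for p in procs:
            p.start()
        results = [queue.get() for _ in range(4)]
        for p in procs:
            p.join()
        print("partial sums:", results)

Distributed shared memory systems aim to combine the two: each processor keeps its own memory, but added circuitry (or a software layer) lets it read and write the memory of the others as if it were local.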
COST OF PARALLEL COMPUTING