
In parallel computing, multiple compute resources are used simultaneously to solve computational problems: multiple processes and calculations are carried out at the same time. To achieve parallel computing, the compute resources should fulfill the following requirements:

• The computing resource may be a single computer containing multiple cores.

• A network connecting an arbitrary number of such computers together.

The computational problem to be executed should be:

• Broken down into discrete fragments of work that can be performed simultaneously.
• Able to execute multiple instructions at the same moment in time.
• Solvable in less time with multiple compute resources than with a single one.

The above diagram illustrates the idea of parallel computing: a problem is broken down into chunks that are then solved concurrently. Each chunk is further divided into a series of instructions, and these instructions are processed simultaneously by different cores. The whole mechanism is performed under overall coordination and control.
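As a concrete sketch of this decomposition, the following minimal Python example (using only the standard multiprocessing module; the work function square is purely illustrative) splits a problem into independent fragments and lets a pool of worker processes solve them simultaneously:

    # Break a problem into discrete fragments and process them concurrently.
    from multiprocessing import Pool

    def square(n):
        # Stand-in for one discrete fragment of work.
        return n * n

    if __name__ == "__main__":
        data = range(1_000)
        with Pool() as pool:                  # one worker per core by default
            results = pool.map(square, data)  # fragments run simultaneously
        print(sum(results))

The pool plays the role of the overall coordination and control described above: it distributes the fragments to the cores and collects the results.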

Merits of parallel computing:


Advantages offered by parallel computing include:

• Efficient execution of code and applications in a shorter time.


• Parallel programming offers concurrency: multiple actions are performed at the same time.
• Parallel computing devices can be built easily from cheap, commodity components.
• It can effectively solve large and complex problems that simple computers cannot handle; for example, web search engines must execute millions of transactions every second.
• It can take advantage of non-local resources.
• As illustrated in the diagram given below, a network can connect multiple stand-alone computers to form larger parallel computer clusters.

Challenges in parallel computing:


Parallelizing a program or piece of software for a multicore processor is a difficult task for programmers and algorithm developers. Some tasks cannot be fully parallelized, because the necessary subject expertise or suitable algorithms are unavailable. Parallelization can raise challenges of time complexity and space complexity, but it can clearly improve a program's turnaround time: the output of a parallel program can be obtained in less time than that of an equivalent sequential program. In the past, the idea of parallel programming was not accessible to most developers. To convert existing algorithms into parallel form, a subject expert is required to verify that the newly created algorithm works correctly under parallel execution. Domain specialists need to create parallel algorithms for parallel programming; otherwise the output of the parallel program may differ from the sequential version, or may simply be wrong.

Most existing programs are sequential, and because such programs are very long they are typically developed by multiple developers. Developers of sequential programs are easy to find, but developers of parallel programs are not: many are self-taught rather than formally trained professionals. This creates a large gap between parallel and sequential program developers. Converting a large sequential program into a parallel one is difficult, and developers need special training to become expert enough to write their own parallel programs. This is one of the most important challenges faced in parallel computing.

Several challenges faced in parallel computing are described below:

Transportability:
Transportability (portability) is the ability to transfer a parallel program from one computing environment to another. For instance, a parallel program written for a 5-core processor may not perform well on a processor with a different number of cores.
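One way to soften this problem, sketched below on the assumption that the program uses Python's multiprocessing module, is to query the number of cores at run time instead of hardcoding it:

    import os
    from multiprocessing import Pool

    if __name__ == "__main__":
        # Hardcoding Pool(5) ties the program to a 5-core machine; asking
        # the system at run time keeps the code usable across environments.
        workers = os.cpu_count() or 1
        with Pool(processes=workers) as pool:
            print(pool.map(abs, [-3, -2, -1]))

This does not make the program perform equally well everywhere, but it removes one obvious source of environment dependence.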

Compatibility:

Incompatibility between the program and the underlying hardware or software also causes issues for the user.

Unavailability of resources:
Techniques and tools for debugging, testing, and tracing parallel programs are scarce; considerable research work is still required in this area.

Data dependence:
Data dependence arises between the subprograms of a parallel program. The output of one subprogram may be the input of another, in which case the second subprogram must wait until the first has completely executed. This condition increases the running time of the program. Another data-dependence problem is parallel access to data held in a single sequential store; vector storage, or several parallel stores, is required to store and retrieve data in parallel.
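The following sketch (plain Python with the standard multiprocessing module; step is an illustrative placeholder) contrasts independent work, which parallelizes freely, with a dependent chain, which is forced back into sequential execution:

    from multiprocessing import Pool

    def step(x):
        return x + 1

    if __name__ == "__main__":
        with Pool() as pool:
            # Independent items: all four calls can run simultaneously.
            print(pool.map(step, [10, 20, 30, 40]))

            # Dependent chain: each call needs the previous result, so the
            # calls run one after another despite the worker pool.
            value = 0
            for _ in range(4):
                value = pool.apply(step, (value,))  # blocks until complete
            print(value)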

Cache memory:
On a multicore processor, parallel computing is difficult if all the cores share the same cache memory. If, on the other hand, each core has its own cache memory, the computation process is accelerated.

Algorithmic overhead:

Some computations require more total work when performed in parallel, e.g. parallel prefix (scan).
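To illustrate why, here is a sketch of the Hillis-Steele inclusive scan, written in plain sequential Python for clarity rather than actually run on multiple cores: it performs O(n log n) additions where a sequential prefix sum needs only n - 1, and that extra work is the algorithmic overhead:

    def hillis_steele_scan(xs):
        # log2(n) rounds; each round combines elements a fixed stride apart.
        # Total additions are O(n log n) versus n - 1 sequentially.
        out = list(xs)
        stride = 1
        while stride < len(out):
            out = [out[i] + out[i - stride] if i >= stride else out[i]
                   for i in range(len(out))]
            stride *= 2
        return out

    print(hillis_steele_scan([1, 2, 3, 4, 5]))  # [1, 3, 6, 10, 15]

The pay-off for the extra additions is that each of the log2(n) rounds can execute in parallel across all n elements.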

Speculative loss:
Consider a scenario in which two tasks, X and Y, are started in parallel, but task Y's result ultimately turns out not to be needed; the work spent on Y is wasted.
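A minimal sketch of speculative loss, assuming Python's concurrent.futures module and purely illustrative task names:

    from concurrent.futures import ProcessPoolExecutor

    def task_x():
        return "x-result"

    def task_y():
        return "y-result"   # this work may be thrown away

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            fx = pool.submit(task_x)  # both tasks start before we know
            fy = pool.submit(task_y)  # which result will be needed
            if fx.result() == "x-result":
                fy.cancel()           # if Y already ran, its work is wasted
                print(fx.result())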

Load imbalance:
All cores must wait for the slowest one to finish its chunk of work; the imbalance typically arises from the dynamic behavior of the workload.
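The effect can be seen in a small sketch (standard-library Python; the sleep stands in for uneven amounts of computation):

    import time
    from multiprocessing import Pool

    def work(seconds):
        time.sleep(seconds)  # stand-in for an uneven chunk of computation
        return seconds

    if __name__ == "__main__":
        chunks = [0.1, 0.1, 0.1, 2.0]  # one chunk far heavier than the rest
        start = time.perf_counter()
        with Pool(processes=4) as pool:
            pool.map(work, chunks)
        # Elapsed time is about 2.0 s, dictated by the slowest chunk: three
        # cores sit idle while the fourth finishes.
        print(f"elapsed: {time.perf_counter() - start:.2f}s")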

Communication overhead:
An increasing proportion of the running time is spent on communication rather than computation.
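One common mitigation, sketched here with Python's multiprocessing module (the double function is illustrative), is to batch work so that the communication cost is amortised over many items per message:

    from multiprocessing import Pool

    def double(n):
        return 2 * n

    if __name__ == "__main__":
        data = list(range(100_000))
        with Pool() as pool:
            # chunksize=1 ships every item to a worker individually, so the
            # run is dominated by inter-process communication; a large
            # chunksize sends far fewer, larger messages.
            slow = pool.map(double, data, chunksize=1)
            fast = pool.map(double, data, chunksize=10_000)
        assert slow == fast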
Critical paths:

Chains of dependent computations spread across processors limit the achievable speedup.

Bottlenecks:

A single processor can hold all the other processes up.

Complex algorithms:

Existing algorithms were developed for sequential programming, so algorithm developers must create parallel algorithms for parallel programming. This is difficult to achieve because the execution steps of a parallel program differ from those of a sequential program. The added complexity is the price of faster turnaround, and algorithm developers have to accept it. In mathematical computations, one module often depends on another, which adds further complexity; such dependencies can be handled with asynchronous communication between the subprograms, though this may increase computation time.

[Diagram: the challenges of parallel computing, including visualization, principle of persistence, complex algorithms, measurement of performance, memory load balancing, parallel performance issues, and communication cost.]
Vendors producing parallel computers:
Various vendors produce computers that support parallel computing. This is a growing field, and vendors compete to provide users with the best computers at minimum cost.

Future goals:
Trends over the past 20 years, with ever-faster networks, distributed systems, and multi-processor computer architectures, indicate that parallelism is the future of computing. Over the same period, supercomputer performance has improved by a factor of more than 500,000.

“The race is already on for Exascale Computing!”


where an exaflop is 10^18 calculations per second.

CONCLUSIONS:

These challenges expose limitations in the currently available range of hardware and software. To reduce them, developers and researchers need to work in coordination to build research software, so that researchers can obtain results efficiently with parallel computing. As technologies and software advance, algorithm developers will need to implement parallel algorithms in a form that application developers can readily adopt. For this purpose, domain experts are required to develop parallel algorithms, and researchers should be motivated to do the same, in order to address both the existing challenges and those to come.
