Chapter 2

Introduction to Parallel Computation

Selim G. Akl and Marius Nagy

Abstract
This chapter is intended to provide an overview of the fundamental concepts and
ideas shaping the field of parallel computation. While serial (or sequential) algorithms
are designed for the generic uni-processor architecture of the Random Access Machine
(RAM), parallel algorithms are supported by a variety of models and architectures:
shared-memory models, interconnection networks, combinational circuits, clusters,
and grids.
Sometimes, the methods used in designing sequential algorithms can also lead to
efficient parallel algorithms, as is the case with divide-and-conquer techniques.
In other cases, the particularities of a certain model or architecture impose specific
tools and methods that need to be used in order to fully exploit the potential
offered by that model. In all situations, however, we seek an improvement either in
the running time of the parallel algorithm or in the quality of the solution produced
by the parallel algorithm with respect to the best sequential algorithm dealing with
the same problem.
The improvement in performance can even become superlinear with respect to
the number of processors employed by the parallel model under consideration. This
is the case, for example, with computations performed under real-time constraints,
where the deadlines imposed on the availability of the input and/or output data leave
little room for sequentially simulating the parallel approach. Furthermore, in the
examples presented at the end of the chapter, the impossibility of simulating a parallel
solution on a sequential machine is due to the intrinsically parallel nature of the
computation, rather than being an artifact of externally imposed time constraints.
In this respect, parallelism proves to be the vehicle leading to a Non-Universality
result in computing: there is no finite computational device, sequential or parallel,
conventional or unconventional, that is able to simulate all others.

Selim G. Akl
School of Computing, Queen’s University, Kingston, Ontario, Canada,
e-mail: akl@cs.queensu.ca
Marius Nagy
School of Computing, Queen’s University, Kingston, Ontario, Canada,
e-mail: marius@cs.queensu.ca


2.1 Introduction

In our sophisticated modern world, time is perhaps the most precious commodity.
We live our lives in the fast lane, always trying to buy more time. In this world, speed
is of the essence and efficiency translates naturally into how fast (and sometimes
how well) we can solve the problems we face. To this end, parallel computing, the
central theme of this book, is perhaps our greatest ally.
Indeed, the main motivation for parallel computing is to speed up computation.
The pervasive nature of computers nowadays makes it possible for huge amounts of
data to be acquired and stored in large databases for future analysis, data mining,
referencing, etc. In some cases, the amount of information that needs to be processed
is so huge that the time required to complete the job becomes prohibitively long.
As an illustrative example, imagine you are charged with the following task: given a
phone number, you are required to look in the phone book for the name and address
of the person whose phone number you were given. If you live in a big city, that is,
if the phone book is big, then this task is a tedious one if you are to perform it all
by yourself. But if you decide to call your friends and each one agrees to look only
at the names beginning with a certain letter, for example, then the task is completed
much faster.
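To make the idea concrete, here is a minimal sketch of this partitioned search in
Python. The record layout, the worker count, and all function names below are
illustrative assumptions, not anything prescribed by the chapter.

    from concurrent.futures import ProcessPoolExecutor

    def scan_partition(args):
        # Sequentially scan one slice of the phone book for the target number.
        records, target = args
        for number, name, address in records:
            if number == target:
                return (name, address)
        return None

    def parallel_lookup(book, target, workers=4):
        # Split the (non-empty) book into roughly equal slices, one per
        # worker, and scan all slices concurrently.
        size = (len(book) + workers - 1) // workers
        slices = [(book[i:i + size], target) for i in range(0, len(book), size)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for result in pool.map(scan_partition, slices):
                if result is not None:
                    return result
        return None

    if __name__ == "__main__":
        book = [("555-0100", "Alice", "1 Main St"),
                ("555-0101", "Bob", "2 Oak Ave"),
                ("555-0102", "Carol", "3 Elm Rd")]
        print(parallel_lookup(book, "555-0102"))  # -> ('Carol', '3 Elm Rd')

Each worker performs an ordinary sequential scan; the speedup comes entirely from
partitioning the data among the workers, exactly as in the friends-and-phone-book
analogy above.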
The simplicity of the example above is intentional, so that the main message is
not obscured by unnecessary details. Often, and this will become apparent from
the applications addressed throughout the book, splitting a job among the available
processors is not a trivial task and the overhead incurred by parallelization may
become significant. Regardless, the message conveyed by the parallel computing
paradigm remains the same: If several processors work together (cooperate) to solve
a given computational problem, then the time required to complete the task may be
greatly reduced.
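This gain is conventionally quantified by the speedup and efficiency of a parallel
algorithm; the following are the standard textbook definitions, not notation
introduced in this chapter:

    S(p) = T_1 / T_p ,        E(p) = S(p) / p

where T_1 is the running time of the best sequential algorithm for the problem and
T_p is the running time of the parallel algorithm using p processors. In conventional
settings one expects S(p) <= p, although, as this chapter argues, the improvement
can be superlinear in some circumstances.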
But time is not the only measure of the advantage gained by using a parallel
approach. Sometimes it is the quality of the solution computed within a fixed amount
of time that is greatly improved when more processors are available. Furthermore,
computational scenarios have been identified in which the only chance to terminate
a computation and reach a solution is to have the required number of processors
working simultaneously on the task at hand. We call such problems inherently
parallel, because the ability of a parallel computer to be “in more than one place
at a time” through its multiple processing elements is a necessary condition to
successfully tackle these problems.
The renewed interest in various forms of parallel computing that we are witnessing
today can be largely explained by the availability and affordability of computing
power. When it becomes increasingly difficult and costly to build faster processors,
having several inexpensive, off-the-shelf processors cooperate on a task becomes a
natural alternative.
