Computer science
Student's name
Institutional affiliation
Professor's name
Course
Date
This article, "An Analytical Study of Amdahl's and Gustafson's Law," outlines parallel
computing and highlights the importance of Amdahl's and Gustafson's laws within it. The two
laws are compared and contrasted using several examples from parallel computing. The paper
concludes with suggestions for further work that could be done to enhance this performance
measure.
Parallel computing is a computational model in which multiple processes are carried out
simultaneously. Its main objectives are to increase calculation speed, reduce costs, and
overcome the limitations of serial computing. Although the two laws are distinct, both describe
how effectively processors can share a workload. One performance metric used in parallel
computing to determine how much a sequential program can be parallelized is speedup: the ratio
between the time required by a single-processor system and the time required by a parallel
processing system (Murray, 2022).
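As a concrete illustration of this metric (a sketch with hypothetical timings, not figures from the paper), speedup is simply the ratio of the serial execution time to the parallel execution time:

```python
def speedup(t_serial, t_parallel):
    """Return the parallel speedup S = T_serial / T_parallel."""
    return t_serial / t_parallel

# Hypothetical example: a job taking 120 s on one processor
# and 20 s on a parallel system achieves a speedup of 6.
print(speedup(120.0, 20.0))  # 6.0
```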
In general, parallel processing refers to the division of a job between at least two
microprocessors. Using specialized software created for the job, a computer scientist breaks
a complicated problem down into component parts and designates a specific processor for each
part. Each processor completes its portion of the overall computing task, and the program then
reassembles the partial results to solve the difficult initial challenge. It is a high-tech way
of saying that splitting the workload simplifies things. The load might be distributed among
many processors housed in the same computer, or among separate networked computers.
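The divide-and-reassemble workflow described above can be sketched with Python's standard process pool; the specific task (summing squares) and the worker count are illustrative assumptions, not details from the paper:

```python
from concurrent.futures import ProcessPoolExecutor

def part_sum(chunk):
    """Work done by one processor: sum the squares of its portion."""
    return sum(x * x for x in chunk)

def parallel_sum_squares(data, n_workers=4):
    # Break the complicated problem into component parts...
    chunks = [data[i::n_workers] for i in range(n_workers)]
    # ...designate a separate process for each part...
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(part_sum, chunks)
    # ...then reassemble the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum_squares(list(range(1000))))
```

The same pattern works across networked machines with a distributed framework instead of a local process pool.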
Amdahl's Law states that in a program with parallel processing, the relatively few
instructions that must be completed in sequence limit the program's speedup, so adding more
processors may not make the program run faster. This is an argument against parallel processing
for specific applications and, more broadly, against exaggerated claims for parallel computing
(Scott, 2022). Gustafson's Law, in computer architecture, describes the potential speedup in
the execution time of a job that benefits from parallel computing, using a hypothetical
workload run on a single-core machine as the baseline. In other words, it measures the
theoretical "slowdown" of an already-parallelized task if it were run on a serial system
(Stoker, 2022).
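The two laws can be written as short formulas (a sketch; the serial fraction and processor count used below are illustrative assumptions). With serial fraction s and N processors, Amdahl's fixed-workload speedup is 1 / (s + (1 - s)/N), while Gustafson's scaled speedup is N - s(N - 1):

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Fixed-workload speedup: S = 1 / (s + (1 - s) / N)."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_processors)

def gustafson_speedup(serial_fraction, n_processors):
    """Scaled-workload speedup: S = N - s * (N - 1)."""
    s = serial_fraction
    return n_processors - s * (n_processors - 1)

# With 10% serial work and 16 processors:
print(amdahl_speedup(0.1, 16))     # ~6.4
print(gustafson_speedup(0.1, 16))  # ~14.5
```

The gap between 6.4 and 14.5 shows how much more optimistic Gustafson's scaled-workload view is for the same serial fraction.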
Amdahl's law applies only in situations where the problem size is fixed. In reality, as
computing power increases, larger problems (larger datasets) tend to be tackled, which causes
the time spent on the parallelizable portion to grow much more quickly than the fundamentally
serial portion. Gustafson's law therefore provides a less pessimistic and more realistic
evaluation of parallel performance.
Gustafson's formulation specifies the serial percentage relative to the overall parallel
processing time, which makes it challenging to use the formulation to quantify directly how
the serial fraction limits speedup, since adding processors may increase the amount of
computation performed to fit the amount of parallelization available.
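The scaling effect described above, where a growing problem spends proportionally more time in the parallelizable part, can be illustrated numerically (the timing model and all values below are assumptions for illustration):

```python
def observed_speedup(serial_time, parallel_time, n_processors):
    """Speedup when the serial part stays fixed while the parallel part grows."""
    total = serial_time + parallel_time
    return total / (serial_time + parallel_time / n_processors)

# Fixed 10 s of serial work on 64 processors, with ever-larger
# parallel workloads: the speedup climbs toward 64 as the problem grows.
for parallel_time in (90, 900, 9000):
    print(round(observed_speedup(10, parallel_time, 64), 1))
```

As the parallel portion dominates, the effective serial fraction shrinks and the achievable speedup approaches the processor count, which is the intuition behind Gustafson's less pessimistic view.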
Amdahl's Law is more suitable if the quantity of computation to be done is fixed and cannot
be altered by parallelization. In parallel computing, Amdahl's Law is most helpful when there
are few processors or the task is nearly perfectly parallel. Depending on how the workload
changes with scale, Gustafson's Law and Amdahl's Law can be used as upper and lower bounds on
the achievable speedup.
References
HowStuffWorks, Tech, Computer, Hardware, & CPU. (2022). How parallel processing works. processing.htm
Gustafson's law. DBpedia. https://dbpedia.org/page/Gustafson%27s_law
Amdahl's law. law#:text=In%20computer%20programmingAmdahllaw,maketheprogramfaster.
What is parallel processing? Spiceworks. https://www.spiceworks.com/tech/iot/articles/what-is-parallel-processing/