
Development of Car Rental Management System

Literature Review
Existing models for performance optimization in compilers often struggle to identify profitable parallelism. Effective model-based heuristics and profitability estimates are therefore crucial for choosing among optimization strategies. Combining empirical search over code-motion possibilities with model-based mechanisms for tiling, vectorization, and parallelization is central to developing automatic parallelization frameworks [1].

Some approaches to the automatic parallelization of programs that use pointer-based dynamic data structures, particularly in Java, exploit parallelism among methods by initiating an asynchronous thread of execution for each method invocation [2].

Comparative studies of prevalent tools indicate that PLUTO is more efficient than the others. Although CETUS excels at dependence analysis and parallel-loop detection, it makes errors when detecting and parallelizing nested loops. GASPARD, while effective for certain workloads such as matrix multiplication (MM), struggles with model-to-source parallelization, which limits its flexibility across scenarios. A common limitation of auto-parallelization tools is that they generate parallel OpenMP code and therefore rely heavily on OpenMP API, compiler, and OS runtime support, which may be lacking in embedded contexts [3]. Future work could explore an automatic accelerator-generation flow that integrates PLUTO and tailors applications for embedded environments [4].

The evolution of parallel computing, driven by the quest for high-performance computing benefits, has produced diverse multiprocessor designs. Even so, predicting the direction of parallel computing remains difficult while research is still ongoing. On complex machine architectures, efficient programming becomes daunting, especially when critical decisions, such as determining data dependences or optimizing nested parallel loops, require information available only at run time [5].

The effectiveness of traditional compilers has been studied extensively, demonstrating the value of techniques such as common subexpression elimination, code motion, and dead code elimination [6].

References

[1] L.-N. Pouchet et al., "Combined Iterative and Model-driven Optimization in an Automatic
Parallelization Framework," in Conference on Supercomputing (SC’10), New Orleans, LA, USA, 2010.

[2] B. Chan, "Run-Time Support for the Automatic Parallelization of Java Programs," M.S. thesis, Dept. of Elect. Comput. Eng., University of Toronto, 2002.

[3] G. Tian and O. Hammami, "Performance measurements of synchronization mechanisms on 16PE NOC based multi-core with dedicated synchronization and data NOC," in International Conference on Electronics, Circuits, and Systems (ICECS'09), 2009, pp. 988–991.

[4] E. Kallel, Y. Aoudni, M. Abid, "OpenMP automatic parallelization tools: An empirical comparative evaluation," IJCSI International Journal of Computer Science Issues, 2013.

[5] R. Eigenmann, D. Padua, "On the Automatic Program Parallelization," 1993.

[6] N. Jones, S. Muchnick, "Flow analysis and optimization of Lisp-like structures," in Program Flow
Analysis, Theory and Applications, Prentice-Hall, Englewood Cliffs, NJ, USA, 1981, ch. 4, pp. 102–131.
