Non-CIE Component
Submitted by
Prashanth S 1MS18CS096
Rithvik S Shetty 1MS18CS100
T Sai Chandu 1MS18CS125
Tharun E 1MS18CS127
Submitted to
Mallegowda M
Assistant Professor
M S RAMAIAH INSTITUTE OF TECHNOLOGY
(Autonomous Institute, Affiliated to VTU)
BANGALORE-560054
www.msrit.edu
TABLE OF CONTENTS
S.No. Title
1 Serial Processing vs Parallel Processing
2 Types of Parallelism
4 Implementation
5 Input Format
6 Analysis
7 Results
8 Snapshots
1. Serial Processing vs Parallel Processing
In serial processing, the processor completes one task at a time, then moves on to the next in sequence. An operating system runs many programs, each of which consists of multiple tasks. The processor has to complete all of these tasks, but it completes them one at a time; the remaining tasks wait in a queue until the processor finishes the current one.
In parallel processing there are multiple processors, and each processor executes the tasks assigned to it simultaneously with the others. The processors use the bus to communicate with each other and to access main memory, while each processor operates on its own local data. Because the processors work independently, a failure in one processor does not affect the functionality of another. Parallel processing therefore increases throughput as well as reliability, and most modern computers support it to improve performance.
Number of processors
A major difference between the two is that serial processing uses a single processor, whereas parallel processing uses multiple processors.
Work Load
In serial processing, the workload of the processor is higher. However, in parallel processing,
the workload per processor is lower.
Data transferring
Moreover, in serial processing data is transferred bit by bit, whereas in parallel processing data is transferred a byte (8 bits) at a time.
Required time
Time taken is another difference: serial processing requires more time than parallel processing to complete the same task.
Cost
Furthermore, parallel processing is more costly than serial processing as it uses multiple
processors.
2. Types of Parallelism
The four types of parallelism are bit-level, instruction-level, task-level and superword-level parallelism.
Parallel applications are typically classified as fine-grained parallelism, in which subtasks communicate many times per second; coarse-grained parallelism, in which subtasks communicate far less often; or embarrassingly parallel, in which subtasks rarely or never communicate.
GPUs have increased the use of parallel computing, since a GPU can process suitable workloads faster than a CPU by exploiting massive parallelism.
4. Implementation
1. As a first step, we initialize the solution matrix to be the same as the input graph matrix.
2. We then update the solution matrix by considering every vertex as an intermediate vertex.
3. The idea is to pick the vertices one by one and update all shortest paths that include the
picked vertex as an intermediate vertex.
4. When we pick vertex k as the intermediate vertex, we have already considered
vertices {0, 1, 2, ..., k-1} as intermediate vertices.
5. For every pair (i, j) of source and destination vertices, there are two possible cases.
1) k is not an intermediate vertex on the shortest path from i to j. We keep the value of
dist[i][j] as it is.
2) k is an intermediate vertex on the shortest path from i to j. We update the value of
dist[i][j] to dist[i][k] + dist[k][j] if dist[i][j] > dist[i][k] + dist[k][j].
5. Input Format
The graph is directed and weighted. The first two integers are the number of vertices and the number of edges, followed by one line per edge giving the pair of vertices it connects and the edge weight.
● maxVertices represents the maximum number of vertices that can be present in the
graph.
● vertices represent the number of vertices and edges represent the number of edges in
the graph.
● graph[i][j] represents the weight of edge joining i and j.
● size[maxVertices], initialized to {0}, represents the size of every vertex, i.e. the
number of edges incident on that vertex.
● visited[maxVertices], initialized to {0}, marks the vertices that have been visited.
● distance[maxVertices][maxVertices] represents the weight of the edge between two
vertices, i.e. the distance between them.
● The distance between every pair of vertices is initialized using the init() function.
● init() function - it takes the distance matrix as its argument.
6. Analysis:
The running time of the Floyd-Warshall algorithm is O(V³), where V is the number of
vertices, since each of the three nested loops runs V times.
7. Results:
The following table (execution time against number of nodes N) shows the results of the
serial Floyd-Warshall algorithm.
N (nodes) t1 (serial, seconds)
10.0 0.000062
100.0 0.016550
200.0 0.054458
300.0 0.137234
400.0 0.419064
500.0 0.573445
600.0 0.987226
700.0 1.563266
800.0 3.200907
900.0 3.297816
1000.0 4.633267
1500.0 15.544000
The following table (execution time against number of nodes N) in Fig 7.1 shows the
results of the parallel Floyd-Warshall algorithm. The columns t1, t2 and t4 are the
execution times with one thread (serial), 2 threads and 4 threads respectively.
Fig 7.1 shows the time of execution with the number of threads (N)
The graph plot in Fig 7.2 below shows the variation of the number of nodes (N) against the
time of execution, with one plot each for one thread, 2 threads and 4 threads.
Fig 7.2 shows the plot of the number of nodes (N) against the time of execution
8. Snapshots:
The screenshots below show the execution and output of the Floyd-Warshall algorithm.