
Graphs II Module Summary

The shortest path varies depending on whether the edges are weighted or not. If the edges are unweighted, you can determine the length of a path just by counting the number of edges you must cross. If they are weighted, you sum the weights of the edges along the path to determine its total cost.

The shortest path is then the path with the lowest total cost to travel.

Dijkstra’s Algorithm
Dijkstra's algorithm finds the shortest path from one node to all other nodes, which is why it is called a Single-Source Shortest Path algorithm. From a chosen root node it builds a table that stores the length of the shortest path to every other node, as well as what path to take to get there. To build a map of this information for every possible source, you run the algorithm once per node and combine the results.

It starts by initializing an array of length n, where n is the number of nodes in the graph. The initial node is given a shortest-path distance of 0 to itself, and all other nodes start at infinity. We also keep track of each node's exploration status, as seen in the image below.

From the current node we travel to the "I'm not sure yet" node with the lowest cost to reach. We then update the d value of each of that node's neighbours with the cost of travelling to it through the current node, and determine which is closest. This continues in a best-first manner: once all of a node's neighbours have been explored, we move on to the next closest node and repeat until every node is marked sure.
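The steps above can be sketched in Python. This is an illustrative array-based version, not code from the module; the dictionary-of-dictionaries graph format and the function name are my own choices:

```python
import math

def dijkstra(graph, source):
    """Single-source shortest paths on a graph given as
    {node: {neighbour: weight, ...}} with non-negative weights."""
    # d[v] = best known distance from source to v
    d = {v: math.inf for v in graph}
    d[source] = 0
    not_sure = set(graph)  # nodes whose shortest distance is not yet final
    while not_sure:
        # pick the "I'm not sure yet" node with the lowest cost to reach
        u = min(not_sure, key=lambda v: d[v])
        not_sure.remove(u)  # u is now "sure"
        # update the d value of all of u's neighbours
        for v, w in graph[u].items():
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

g = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 6}, "C": {"D": 3}, "D": {}}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```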

Bellman-Ford
Bellman-Ford's algorithm is very similar to Dijkstra's; the main difference is that instead of repeatedly picking the closest unexplored node, we relax every edge, one by one, over repeated passes. The BF algorithm is also capable of dealing with negative edge weights.
The handling of negative weights means that BF will find the correct shortest path in graphs where Dijkstra's greedy behaviour would give an incorrect answer. However, if there is a negative cycle, whose total value drops lower and lower the more times you go around it, there is no well-defined shortest path, and naively you could loop forever and never reach an answer. To handle this we run one extra pass after the usual n − 1 passes: if any distance still decreases, we exit and report that the graph contains a negative cycle.
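A minimal sketch of this in Python, assuming the graph is given as a list of (u, v, weight) edge triples over nodes 0..n−1 (my own representation, not the module's):

```python
import math

def bellman_ford(n, edges, source):
    """Shortest paths from source; returns None if a negative
    cycle is reachable, otherwise the list of distances."""
    d = [math.inf] * n
    d[source] = 0
    # relax every edge n-1 times: a shortest simple path has at most n-1 edges
    for _ in range(n - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    # one extra pass: any further improvement means a negative cycle
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return None
    return d

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 5]
```

Note how the negative edge (1, 2, −3) makes the route through node 1 cheaper than the direct edge of weight 5, which Dijkstra's greedy choice could get wrong.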

Running Time Analysis


In Dijkstra's algorithm we keep track of the un-sure nodes. If we store them in a plain array, selecting the node with the minimum distance (findMin) costs O(n), removing that node or marking it as sure (removeMin) costs O(n), and updating a d value (updateKey) costs O(1).

This results in O(n · (O(n) + O(n)) + m · O(1)) = O(n²), since the n² term dominates. This is very slow and can be improved by using a "heap" data structure:

findMin = O(1)

removeMin = O(log n)

updateKey = O(1) (amortised; this bound needs a Fibonacci heap, a plain binary heap gives O(log n))

which gives us a complexity of O(n log n + m).
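In Python a heap-based Dijkstra can be sketched with the standard heapq module. This is an illustrative version: heapq has no updateKey/decrease-key operation, so the usual workaround is to push duplicate entries and skip stale ones on pop ("lazy deletion"), giving O((n + m) log n) rather than the Fibonacci-heap bound:

```python
import heapq
import math

def dijkstra_heap(graph, source):
    """Heap-based Dijkstra; graph is {node: {neighbour: weight}}."""
    d = {v: math.inf for v in graph}
    d[source] = 0
    heap = [(0, source)]  # (distance, node); heappop plays findMin + removeMin
    while heap:
        du, u = heapq.heappop(heap)
        if du > d[u]:
            continue  # stale entry: u was already finalised via a shorter path
        for v, w in graph[u].items():
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(heap, (d[v], v))  # push instead of updateKey
    return d

g = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 6}, "C": {"D": 3}, "D": {}}
print(dijkstra_heap(g, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```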

Bellman-Ford has a time complexity of O(m · n): we relax each of the m edges n − 1 times. There is little optimization that can be done, so it is slower than Dijkstra's.
Dynamic Programming
Dynamic programming is an algorithm design paradigm used for solving optimization problems. Dynamic programming should be used when the problem has an optimal sub-structure and overlapping sub-problems, i.e. sub-problems that show up over and over.

This image shows the smaller sub-problems inside the larger problems for Fibonacci and Bellman-Ford.

To design a dynamic programming algorithm we need to:

- Keep a table of solutions to sub-problems
- Use the solution table to solve bigger problems
- Use the information collected to find the solution to the whole problem.

When taking the bottom-up approach we build the solution starting from the smallest sub-problems and work upwards to solve the larger ones. For Fibonacci, we solve F(0), then F(1), then F(2), and so on until F(n−1) and finally F(n).
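The bottom-up Fibonacci idea can be sketched in a few lines of Python (my own illustration, not from the module):

```python
def fib_bottom_up(n):
    """Bottom-up DP: solve F(0), F(1), ... up to F(n), each new
    value read straight from the table of solved sub-problems."""
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_bottom_up(10))  # 55
```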

The top-down approach starts from the whole problem, which is broken down into smaller problems that recurse down into smaller problems still. This is similar to divide and conquer, except we make use of memoization: keeping track of sub-problems that have already been solved so we do not solve them again when the same sub-problem arises.
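The same Fibonacci problem done top-down with memoization, again as an illustrative sketch using Python's standard lru_cache decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_top_down(n):
    """Top-down DP: recurse from the whole problem; the cache
    ensures each sub-problem is solved only once."""
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

print(fib_top_down(10))  # 55
```

Without the cache the naïve recursion re-solves the same sub-problems exponentially many times; with it, each F(k) is computed once.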
This shows the speed at which the different approaches can be computed, with top-down and bottom-up being very similar and both much better than the naïve option.

Floyd-Warshall
The current algorithms we have used allow us to find the shortest path from a single source node to all
other nodes. We now want to find the shortest path between any 2 pairs of nodes, known as the All-
Pairs Shortest Paths (APSP). We can build a table with weights between all pairs with an immediate
neighbour but nodes that are not immediately connected are assumed as infinite, as seen in the table
below.

As the Floyd-Warshall algorithm is a dynamic programming algorithm, we first check whether there is a sub-structure that lets us make use of the DP properties.

When introducing a new node k into an existing path we have to check whether the path from u to v improves by passing through k or not. If it does not lead to a shorter distance, we ignore k and keep the path that only uses the first k−1 nodes:

D^(k)[u, v] = D^(k-1)[u, v]

If k does shorten the path, we join the path from u to k with the path from k to v to update the path from u to v:

D^(k)[u, v] = D^(k-1)[u, k] + D^(k-1)[k, v]

Then we combine the two cases to develop an iterative solution using the following formula:

D^(k)[u, v] = min( D^(k-1)[u, v], D^(k-1)[u, k] + D^(k-1)[k, v] )

This shows that there is an optimal sub-structure, where we can solve the bigger problem using smaller problems, as well as overlapping sub-problems, since a value such as D^(k-1)[u, k] is reused to compute D^(k)[u, v] for every choice of v.
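The recurrence translates directly into a triple loop. A sketch in Python, assuming edge weights are given as a dictionary mapping (u, v) pairs to weights over nodes 0..n−1 (my own input format):

```python
import math

def floyd_warshall(n, weights):
    """All-pairs shortest paths; weights maps (u, v) -> edge weight,
    missing pairs start at infinity."""
    # D[u][v] starts as the direct edge weight (0 on the diagonal)
    D = [[0 if u == v else weights.get((u, v), math.inf) for v in range(n)]
         for u in range(n)]
    # introduce intermediate node k one at a time and apply the recurrence
    for k in range(n):
        for u in range(n):
            for v in range(n):
                if D[u][k] + D[k][v] < D[u][v]:
                    D[u][v] = D[u][k] + D[k][v]
    return D

w = {(0, 1): 3, (1, 2): 1, (0, 2): 6, (2, 0): 2}
print(floyd_warshall(3, w))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```

Updating D in place like this is a common space optimisation; it uses one n × n array per step instead of keeping every D^(k) layer.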
The running-time complexity of Floyd-Warshall is O(n³), which is better than running Bellman-Ford n times. It is not better than running Dijkstra n times, but it is simpler to implement and it handles negative weights. For space, you also need to store two n × n arrays as well as the graph.
