
Dijkstra's algorithm is an algorithm we can use to find shortest distances or minimum costs, depending on what is
represented in the graph. In this version, you work backwards from the end to the beginning, finding the shortest leg
each time. The steps of the algorithm are as follows:
Step 1: Start at the ending vertex by marking it with a distance of 0, because it's 0 units from the end. Call this vertex
your current vertex, and put a circle around it to indicate this.
Step 2: Identify all of the vertices that are connected to the current vertex by an edge. Calculate their distance to
the end by adding the weight of the edge to the mark on the current vertex. Mark each of those vertices with the
corresponding distance, but only change a vertex's mark if the new distance is less than its previous mark. Each time
you mark a vertex, keep track of the path that resulted in that mark.
Step 3: Label the current vertex as visited by putting an X over it. Once a vertex is visited, we won't look at it again.
Step 4: Of the vertices you just marked, find the one with the smallest mark, and make it your current vertex. Now,
you can start again from step 2.
Step 5: Once you've labeled the beginning vertex as visited, stop. The distance of the shortest path is the mark of
the starting vertex, and the shortest path is the path that resulted in that mark.

The Bellman-Ford algorithm is a graph search algorithm that finds the shortest path between a given source vertex
and all other vertices in the graph. This algorithm can be used on both weighted and unweighted graphs.
The Bellman-Ford algorithm is guaranteed to find the shortest path in a graph. Though it is slower than Dijkstra's
algorithm, Bellman-Ford is capable of handling graphs that contain negative edge weights, so it is more versatile. It is
worth noting that if there exists a negative cycle in the graph, then there is no shortest path: going around the
negative cycle an infinite number of times would continue to decrease the cost of the path (even though the path
length is increasing). Because of this, Bellman-Ford can also detect negative cycles, which is a useful feature.
Imagine a scenario where you need to get to a baseball game from your house. Along the way, on each road, one of
two things can happen. First, sometimes the road you're using is a toll road, and you have to pay a certain amount of
money. Second, sometimes someone you know lives on that street (like a family member or a friend). Those people
can give you money to help you restock your wallet. You need to get across town, and you want to arrive across
town with as much money as possible so you can buy hot dogs. Given that you know which roads are toll roads and
which roads have people who can give you money, you can use Bellman-Ford to help plan the optimal route.

Graphical representation of routes to a baseball game

Instead of your home, a baseball game, and streets that either take money away from you or give money to you,
Bellman-Ford looks at a weighted graph. The graph is a collection of edges that connect different vertices in the
graph, just like roads. The edges have a cost to them. Either it is a positive cost (like a toll) or a negative cost (like a
friend who will give you money). So, in the above graphic, a red arrow means you have to pay money to use that
road, and a green arrow means you get paid money to use that road. In the graph, the source vertex is your home,
and the target vertex is the baseball stadium. On your way there, you want to maximize the number and absolute
value of the negatively weighted edges you take. Conversely, you want to minimize the number and value of the
positively weighted edges you take.
Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input and finds the subset of the
edges of that graph which
- forms a tree that includes every vertex
- has the minimum sum of weights among all the trees that can be formed from the graph
How Kruskal's algorithm works
It falls under a class of algorithms called greedy algorithms which find the local optimum in the hopes of finding a
global optimum.
We start from the edges with the lowest weight and keep adding edges until we reach our goal.
The steps for implementing Kruskal's algorithm are as follows:
1. Sort all the edges from low weight to high
2. Take the edge with the lowest weight and add it to the spanning tree. If adding the edge creates a cycle,
then reject this edge.
3. Keep adding edges until we reach all vertices.
Example of Kruskal's algorithm
Prim's algorithm is a minimum spanning tree algorithm that takes a graph as input and finds the subset of the edges
of that graph which
- forms a tree that includes every vertex
- has the minimum sum of weights among all the trees that can be formed from the graph
How Prim's algorithm works
It falls under a class of algorithms called greedy algorithms which find the local optimum in the hopes of finding a
global optimum.
We start from one vertex and keep adding edges with the lowest weight until we reach our goal.
The steps for implementing Prim's algorithm are as follows:
1. Initialize the minimum spanning tree with a vertex chosen at random.
2. Find all the edges that connect the tree to new vertices, find the minimum and add it to the tree
3. Keep repeating step 2 until we get a minimum spanning tree
Example of Prim's algorithm
A binary search tree is a data structure that allows us to quickly maintain a sorted list of numbers.
- It is called a binary tree because each tree node has a maximum of two children.
- It is called a search tree because it can be used to search for the presence of a number in O(log(n)) time.
The properties that separate a binary search tree from a regular binary tree are:
1. All nodes of the left subtree are less than the root node
2. All nodes of the right subtree are more than the root node
3. Both subtrees of each node are also BSTs, i.e. they have the above two properties

The binary tree on the right isn't a binary search tree because the right subtree of the node "3" contains a value
smaller than it.
There are two basic operations that you can perform on a binary search tree:
1. Check if number is present in binary search tree
The algorithm depends on the property of a BST that each left subtree has values below the root and each right
subtree has values above the root.
If the value is below the root, we can say for sure that the value is not in the right subtree, so we need to search
only the left subtree; if the value is above the root, we can say for sure that the value is not in the left subtree, so
we need to search only the right subtree.
Algorithm:
If root == NULL
    return NULL
If number == root->data
    return root->data
If number < root->data
    return search(root->left, number)
If number > root->data
    return search(root->right, number)
Let us try to visualize this with a diagram.
If the value is found, we return the value so that it gets propagated through each recursion step, as shown in the
image below.
As you might have noticed, we have called return search(struct node*) four times. When we return either the found
node or NULL, the value gets returned again and again until search(root) returns the final result.
If the value is not found, we eventually reach the left or right child of a leaf node, which is NULL, and NULL gets
propagated and returned.
2. Insert value in Binary Search Tree(BST)
Inserting a value in the correct position is similar to searching, because we try to maintain the rule that the left
subtree is less than the root and the right subtree is larger than the root.
We keep going to either the right subtree or the left subtree depending on the value, and when we reach a point
where the left or right subtree is null, we put the new node there.
Algorithm:
If node == NULL
    return createNode(data)
If data < node->data
    node->left = insert(node->left, data)
Else if data > node->data
    node->right = insert(node->right, data)
Return node
The algorithm isn't as simple as it looks. Let's try to visualize how we add a number to an existing BST.
We have attached the node but we still have to exit from the function without doing any damage to the rest of the
tree. This is where the return node; at the end comes in handy. In the case of NULL, the newly created node is
returned and attached to the parent node, otherwise the same node is returned without any change as we go up
until we return to the root.
This makes sure that as we move back up the tree, the other node connections aren't changed.

The complete code for Binary Search Tree insertion and searching in C programming language is posted below:
#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;
    struct node* left;
    struct node* right;
};

struct node* createNode(int value){
    struct node* newNode = malloc(sizeof(struct node));
    newNode->data = value;
    newNode->left = NULL;
    newNode->right = NULL;

    return newNode;
}

struct node* insert(struct node* root, int data)
{
    if (root == NULL) return createNode(data);

    if (data < root->data)
        root->left = insert(root->left, data);
    else if (data > root->data)
        root->right = insert(root->right, data);

    return root;
}

struct node* search(struct node* root, int number)
{
    if (root == NULL) return NULL;

    if (number == root->data) return root;

    if (number < root->data)
        return search(root->left, number);

    return search(root->right, number);
}

void inorder(struct node* root){
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ->", root->data);
    inorder(root->right);
}

int main(){
    struct node *root = NULL;
    root = insert(root, 8);
    insert(root, 3);
    insert(root, 1);
    insert(root, 6);
    insert(root, 7);
    insert(root, 10);
    insert(root, 14);
    insert(root, 4);

    inorder(root);
}
The output of the program will be
1 ->3 ->4 ->6 ->7 ->8 ->10 ->14 ->

Divide and Conquer Method


In the divide and conquer approach, the problem is divided into several small sub-problems. Then the sub-problems
are solved recursively and combined to get the solution of the original problem.
The divide and conquer approach involves the following steps at each level:
- Divide: The original problem is divided into sub-problems.
- Conquer: The sub-problems are solved recursively.
- Combine: The solutions of the sub-problems are combined together to get the solution of the original problem.
The divide and conquer approach is applied in the following algorithms:
- Binary search
- Quick sort
- Merge sort
- Integer multiplication
- Matrix inversion
- Matrix multiplication
Greedy Method
In a greedy algorithm, the choice that looks best at the moment is the one that is made. Greedy algorithms are often
easy to apply to complex problems: at each step, the algorithm picks whichever option looks best for the next step.
The algorithm is called greedy because, when it chooses the optimal solution to the smaller instance, it does not
consider the problem as a whole. Once a choice is made, a greedy algorithm never reconsiders it.
A greedy algorithm works by recursively building a solution from the smallest possible component parts. Recursion
is a procedure for solving a problem in which the solution to a specific problem depends on the solution of a smaller
instance of that problem.
Dynamic Programming
Dynamic programming is an optimization technique that divides the problem into smaller sub-problems and, after
solving each sub-problem, combines their solutions to get the ultimate solution. Unlike the divide and conquer
method, dynamic programming reuses the solutions to sub-problems many times.
A memoized recursive algorithm for the Fibonacci series is a classic example of dynamic programming.
Backtracking Algorithm
Backtracking is an optimization technique for solving combinatorial problems. It is applied to both programmatic and
real-life problems. The eight queens problem, Sudoku puzzles, and finding a way through a maze are popular examples
where a backtracking algorithm is used.
In backtracking, we build up a partial solution that satisfies all the required conditions. Then we move to the
next level, and if that level does not produce a satisfactory solution, we return one level back and start with a new
option.
Branch and Bound
A branch and bound algorithm is an optimization technique for finding an optimal solution to a problem. It searches
for the best solution over the entire space of candidate solutions, but it compares a bound on the function being
optimized against the value of the best solution found so far. This allows the algorithm to discard parts of the
solution space entirely.
The purpose of a branch and bound search is to maintain the lowest-cost path to a target. Once a solution is found, it
can keep improving that solution. Branch and bound search is typically implemented on top of depth-bounded or
depth-first search.
Linear Programming
Linear programming describes a wide class of optimization problems where both the optimization criterion and the
constraints are linear functions. It is a technique for getting the best outcome, such as maximum profit, shortest
path, or lowest cost.
In linear programming, we have a set of variables, and we have to assign real values to them so as to satisfy a set of
linear equations or inequalities and to maximize or minimize a given linear objective function.
