
Unit 2.3 Algorithms

Algorithms
• A set of instructions used to solve a given problem.
• Inputs must be clearly defined.
• Must always produce a valid output.
• Must be able to handle invalid inputs.
• Must always reach a stopping condition.
• Must be well-documented for reference.
• Must be well-commented.

Designing Algorithms
• The first priority for an algorithm is to achieve the given task.
• The second priority is to reduce time and space complexity.
• There may be a conflict between space and time complexity; the requirements and situation for an algorithm will dictate which is more important.
• To reduce space complexity, make as many changes as possible on the original data. Do not create copies.
• To reduce time complexity, reduce the number of loops.

Queues
• FIFO (First In First Out).
• Often implemented using an array.
• The front pointer marks the position of the first element.
• The back pointer marks the position of the next available space.

Queue Functions
• Check size: size()
• Check if empty: isEmpty()
• Return the element at the front of the queue without removing it: peek()
• Add an element to the back of the queue: enqueue(element)
• Remove the element at the front of the queue and return it: dequeue()
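
As a rough illustration of the queue functions above, here is a minimal Python sketch using a list with front and back pointers. The class name, default size and the lack of wrap-around handling are simplifications for illustration, not an exam reference implementation.

```python
class Queue:
    """Array-backed FIFO queue tracked by front and back pointers."""

    def __init__(self, max_size=10):
        self.items = [None] * max_size   # fixed-size underlying array
        self.front = 0                   # index of the first element
        self.back = 0                    # index of the next free space

    def size(self):
        return self.back - self.front

    def isEmpty(self):
        return self.size() == 0

    def peek(self):
        return None if self.isEmpty() else self.items[self.front]

    def enqueue(self, element):
        self.items[self.back] = element  # place at the next free space
        self.back += 1

    def dequeue(self):
        element = self.items[self.front]
        self.front += 1                  # move the front pointer on
        return element


q = Queue()
q.enqueue("a")
q.enqueue("b")
print(q.peek())     # "a" - returned but not removed
print(q.dequeue())  # "a"
print(q.size())     # 1
```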
Big-O Notation
• O(1) - Constant time complexity - the amount of time is not affected by the number of inputs.
• O(n) - Linear time complexity - the amount of time is directly proportional to the number of inputs.
• O(n^k) - Polynomial time complexity - the amount of time is proportional to the number of inputs raised to a constant power k (for example O(n^2)).
• O(2^n) - Exponential time complexity - the amount of time will double with every additional input.
• O(log n) - Logarithmic time complexity - the amount of time will increase at a smaller rate as the number of inputs increases.
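
As an illustration (not part of the original notes), each of the following small Python functions falls into one of the complexity classes above.

```python
def first_item(data):
    # O(1): one step regardless of how long data is
    return data[0]

def total(data):
    # O(n): a single loop over every element
    s = 0
    for x in data:
        s += x
    return s

def all_pairs(data):
    # O(n^2), a simple polynomial case: nested loops over the data
    return [(a, b) for a in data for b in data]

def halving_steps(n):
    # O(log n): the problem size halves on every pass
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps
```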



Sorting Algorithms
• Place elements into a logical order.
• Usually numerical or alphabetical.
• Usually in ascending order.
• Can be set to work in descending order.

Bubble Sort
• Compares elements and swaps them as needed.
• Compares element 1 to element 2.
• If they are in the wrong order, they are swapped.
• This process is repeated with elements 2 and 3, 3 and 4, and so on until the end of the list is reached.
• This process must be repeated as many times as there are elements in the array.
• Each repeat is referred to as a “pass”.
• Can be modified to improve efficiency by using a flag to indicate whether a swap has occurred during the pass (see the sketch below).
• If no swaps are made during a pass, the list must already be in the correct order and so the algorithm stops.
• A slow algorithm.
• Time complexity of O(n^2).
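
A minimal Python sketch of bubble sort with the swap flag described above; the function name and test data are illustrative.

```python
def bubble_sort(items):
    """Sort a list in ascending order using bubble sort with a swap flag."""
    n = len(items)
    for _ in range(n - 1):           # at most n - 1 passes
        swapped = False              # flag: has this pass made a swap?
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:              # no swaps means the list is sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```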
Insertion Sort
• Places elements into a sorted list.
• Starts at element 2 and compares it with the element directly to its left.
• When compared, elements are inserted into the correct position in the list.
• This repeats until the last element is inserted into the correct position.
• In the 1st iteration 1 element is sorted, in the 2nd iteration 2 are sorted, and so on.
• Time complexity of O(n^2).
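
A short illustrative Python sketch of insertion sort as described above.

```python
def insertion_sort(items):
    """Sort a list in ascending order by growing a sorted region on the left."""
    for i in range(1, len(items)):       # start at element 2 (index 1)
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]      # shift larger elements to the right
            j -= 1
        items[j + 1] = current           # insert into the correct position
    return items

print(insertion_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
```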
Merge Sort
• A divide and conquer algorithm.
• Formed of a Merge and a MergeSort function.
• MergeSort divides the input into two parts.
• It then recursively calls MergeSort on each part until their length is 1.
• Merge is then called.
• Merge puts the groups of elements back together in a sorted order.
• You will not be asked about the detailed implementation of this algorithm but do need to know how it works.
• It is more efficient than bubble sort and insertion sort.
• It has a worst case time complexity of O(n log n).
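
The notes say the detailed implementation is not examined, but a compact Python sketch may help show how MergeSort and Merge work together; it is purely illustrative.

```python
def merge_sort(items):
    """Recursively split the list, then merge the sorted halves."""
    if len(items) <= 1:                  # a list of length 1 is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # sort each half recursively
    right = merge_sort(items[mid:])
    return merge(left, right)

def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])              # append whatever remains
    result.extend(right[j:])
    return result

print(merge_sort([5, 1, 4, 2, 8]))       # [1, 2, 4, 5, 8]
```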
Quick Sort
• Selects an element and divides the input around it.
• Often selects the central element, which is known as the pivot.
• Elements smaller than the pivot are listed to its left.
• Larger elements are listed to its right.
• The process is repeated recursively.
• Can be slow in the worst case: worst case time complexity is O(n^2), although the average case is O(n log n).
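
An illustrative Python sketch of quick sort using the central element as the pivot. It builds new lists around the pivot rather than partitioning in place, which keeps the idea clear at the cost of extra memory.

```python
def quick_sort(items):
    """Sort by partitioning around a pivot, then sorting each side."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]               # pick the central element
    smaller = [x for x in items if x < pivot]    # elements left of the pivot
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]     # elements right of the pivot
    return quick_sort(smaller) + equal + quick_sort(larger)

print(quick_sort([5, 1, 4, 2, 8]))               # [1, 2, 4, 5, 8]
```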
Searching Algorithms
• Used to locate an element within a data structure.
• Many different forms exist.
• Each is suited to different purposes and data structures.

Linear Search
• The most basic search algorithm.
• Works through the elements one at a time until the requested element is found.
• Does not need the data to be sorted.
• Easy to implement.
• Not very efficient.
• Time complexity is O(n).

Binary Search
• Only works with sorted data.
• Finds the middle element, then decides on which side of the data the requested element is.
• The unneeded half is discarded and the process repeats until either the requested element is found or it is determined that the requested element does not exist.
• A very efficient algorithm.
• Time complexity is O(log n).
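
Illustrative Python sketches of both searches; the example data is made up.

```python
def linear_search(items, target):
    """Check each element in turn; works on unsorted data. O(n)."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1                                # not found

def binary_search(items, target):
    """Repeatedly halve a sorted list until the target is found. O(log n)."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1                    # discard the left half
        else:
            high = mid - 1                   # discard the right half
    return -1                                # target is not in the list

data = [2, 5, 8, 12, 16, 23]
print(linear_search(data, 12))               # 3
print(binary_search(data, 12))               # 3
```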
Stacks
• FILO (First In Last Out).
• Often implemented using an array.
• Uses a single pointer (the top pointer) to track the top of the stack.
• The top pointer is initialised at -1, with the first element being 0, the second 1 and so on.

Stack Functions
• Check size: size()
• Check if empty: isEmpty()
• Return the top element without removing it: peek()
• Add an element to the stack: push(element)
• Remove the top element from the stack and return it: pop()
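
A minimal Python sketch of the stack functions above, using a list and a single top pointer initialised at -1; the class name and default size are illustrative.

```python
class Stack:
    """Array-backed FILO stack tracked by a single top pointer."""

    def __init__(self, max_size=10):
        self.items = [None] * max_size
        self.top = -1                    # -1 means the stack is empty

    def size(self):
        return self.top + 1

    def isEmpty(self):
        return self.top == -1

    def peek(self):
        return None if self.isEmpty() else self.items[self.top]

    def push(self, element):
        self.top += 1
        self.items[self.top] = element

    def pop(self):
        element = self.items[self.top]
        self.top -= 1
        return element


s = Stack()
s.push("a")
s.push("b")
print(s.pop())   # "b" - last in, first out
print(s.peek())  # "a"
```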
Linked Lists
• Contain several nodes.
• Each node has a pointer to the next item in the list.
• For node N, N.next will access the next item.
• The first node is the head.
• The last node is the tail.
• Searched using a linear search.
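
A small illustrative Python sketch of a linked list node and a linear search that follows the next pointers; the values are made up.

```python
class Node:
    """A linked-list node holding a value and a pointer to the next node."""
    def __init__(self, value):
        self.value = value
        self.next = None       # N.next accesses the next item


# Build a small list: head -> 3 -> 7 -> 9 (tail)
head = Node(3)
head.next = Node(7)
head.next.next = Node(9)

def contains(head, target):
    """Linear search: follow next pointers from the head until found or the end is reached."""
    current = head
    while current is not None:
        if current.value == target:
            return True
        current = current.next
    return False

print(contains(head, 7))   # True
print(contains(head, 4))   # False
```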
Time Complexity
• How much time an algorithm needs to solve a problem.
• Measured using Big-O notation.
• Shows the amount of time taken relative to the number of inputs.
• Allows the required time to be predicted.

Space Complexity
• The amount of storage space the algorithm takes up.
• Does not have a defined notation.
• Copying data increases the storage used.
• Storage space is expensive, so this should be avoided.

Logarithms
• The inverse of an exponential.
• An operation which determines how many times a certain number is multiplied by itself to reach another number.
• If x = 2^y, then y = log2(x). For example:
  • log2(1) = 0, since 1 = 2^0
  • log2(8) = 3, since 8 = 2^3
  • log2(1024) = 10, since 1024 = 2^10
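
The worked examples above can be checked directly in Python (purely illustrative):

```python
import math

# log2 gives how many times 2 is multiplied by itself to reach the input
print(math.log2(1))      # 0.0
print(math.log2(8))      # 3.0
print(math.log2(1024))   # 10.0
print(2 ** 10)           # 1024 - the exponential is the inverse operation
```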
Path Finding Algorithms

Dijkstra’s Algorithm
• Finds the shortest path between two points.
• The problem is depicted as a weighted graph.
• Nodes represent the items in the scenario, such as places.
• Edges connect the nodes together.
• Each edge has a cost.
• The algorithm will calculate the best way, known as the least cost path, between two nodes.

A* Algorithm
• Provides a faster solution than Dijkstra’s Algorithm for finding the shortest path between two nodes.
• Uses a heuristic element to decide which node to consider when choosing a path.
• Unlike Dijkstra’s Algorithm, A* only looks for the shortest path between two nodes, instead of the shortest path from the start node to all other nodes.
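
A compact, illustrative Python sketch of Dijkstra’s Algorithm using a priority queue (heapq); the example graph and its costs are made up. A* would additionally add a heuristic estimate of the remaining cost when ordering the queue.

```python
import heapq

def dijkstra(graph, start, goal):
    """Return the least cost from start to goal in a weighted graph.

    graph: dict mapping each node to a list of (neighbour, cost) pairs.
    """
    queue = [(0, start)]            # priority queue of (cost so far, node)
    best = {start: 0}               # cheapest known cost to reach each node
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue                # stale queue entry, skip it
        for neighbour, edge_cost in graph[node]:
            new_cost = cost + edge_cost
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return None                     # the goal cannot be reached

# Example weighted graph: edges are labelled with their costs
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 1)],
    "D": [("B", 4), ("C", 1)],
}
print(dijkstra(graph, "A", "D"))    # 4  (least cost path A -> B -> C -> D)
```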
Trees
• Consist of nodes and edges.
• Cannot contain cycles.
• Edges are not directed.
• Can be traversed using depth first or breadth first.
• Both methods can be implemented recursively.

Depth First (Post Order) Traversal
• Moves as far as possible through the tree before backtracking.
• Uses a stack.
• Moves to the left child node wherever possible.
• Will use the right child node if no left child node exists.
• If there are no child nodes, the current node is used.
• The algorithm then backtracks to the next node, moving right.

Breadth First
• Starts from the left.
• Visits all children of the starting node.
• Then visits all nodes directly connected to each of these nodes in turn.
• Continues until all nodes have been visited.
• Uses a queue.
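
An illustrative Python sketch of both traversals on a small hand-built binary tree (the tree and values are made up). Note that this post order version uses recursion, so the call stack plays the role of the explicit stack mentioned above.

```python
from collections import deque

class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def post_order(node, visited):
    """Depth first (post order): left subtree, right subtree, then the node itself."""
    if node is None:
        return
    post_order(node.left, visited)    # go left wherever possible
    post_order(node.right, visited)   # then right
    visited.append(node.value)        # use the current node last

def breadth_first(root):
    """Visit nodes level by level, starting from the left, using a queue."""
    visited, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        visited.append(node.value)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return visited

#        D
#       / \
#      B   E
#     / \
#    A   C
root = TreeNode("D", TreeNode("B", TreeNode("A"), TreeNode("C")), TreeNode("E"))

order = []
post_order(root, order)
print(order)                 # ['A', 'C', 'B', 'E', 'D']
print(breadth_first(root))   # ['D', 'B', 'E', 'A', 'C']
```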
