1 - Time Complexity Analysis in Data Structure
Overview
Whenever we think of a solution to a problem, there can be any number of
solutions: most problems can be solved in multiple different ways. The question
that arises is: how do we recognize the most efficient solution among a set of
different solutions?
Let us consider a simple example to understand what we are trying to figure out.
Say we want to find the square of a number n, and suppose we have two approaches.
The first approach is simply to use the mathematical operator * to compute n * n.
The second approach is to run a loop n times, starting from zero and adding n to
a running total on every iteration. If we execute both approaches, they will take
different amounts of time to produce the output. In this case, the first approach
takes less time and is therefore said to be the more efficient one.
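The two approaches above can be sketched as follows (a minimal sketch; the function names are illustrative, and the loop version assumes n is a non-negative integer):

```javascript
// Approach 1: a single multiplication -- one operation regardless of n.
function squareByMultiplication(n) {
  return n * n;
}

// Approach 2: add n to a running total, n times -- n operations.
// Assumes n is a non-negative integer.
function squareByAddition(n) {
  let result = 0;
  for (let i = 0; i < n; i++) {
    result += n;
  }
  return result;
}
```

Both return the same answer, but the second performs n additions where the first performs one multiplication.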
Takeaways
Time complexity is the amount of time taken by an algorithm to execute, from its
first statement to its completion, expressed as a function of the length of the
input.
To determine which solution is the most efficient, the concepts of the space and
time complexity of algorithms come into the picture.
Space and time complexity form a measurement scale for algorithms: we compare
algorithms on the basis of their space complexity (the amount of memory an
algorithm utilises) and their time complexity (the number of operations it runs
to find the solution).
There can be more than one way to solve a problem in programming, and knowing
which algorithm works efficiently adds value to the way we program.
So, to dive deep into the effectiveness of a program, we must know how to
evaluate it using space and time complexity. These measurements help us make a
program behave under the required optimal conditions, which in turn makes us more
efficient programmers. As this module focuses on time complexity, let us take a
look at its definition.
Summing up the points above, we can conclude that time complexity is the number
of operations an algorithm performs to complete a task, with respect to the size
of the input. To find the most efficient algorithm, we pick the one that performs
the smallest number of operations in terms of time complexity.
Asymptotic Notations
Now we shall understand the notations of time complexity. There are 3 major
notations we will hear about: Big-O (O), Big-Omega (Ω), and Big-Theta (Θ).
NOTE:
When we do something with every item in one dimension, the work is considered
linear; with every item in two dimensions, it is considered quadratic; and when
we keep dividing the sample space in half, it is considered logarithmic.
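The three growth patterns in the note can be illustrated by counting loop iterations (a sketch; the helper names are illustrative):

```javascript
// Linear: touch every item once -- n operations.
function linearOps(n) {
  let count = 0;
  for (let i = 0; i < n; i++) count++;
  return count;
}

// Quadratic: touch every pair of items -- n * n operations.
function quadraticOps(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) count++;
  }
  return count;
}

// Logarithmic: halve the range on each step -- about log2(n) operations.
function logarithmicOps(n) {
  let count = 0;
  for (let i = n; i > 1; i = Math.floor(i / 2)) count++;
  return count;
}
```

For n = 16, these count 16, 256, and 4 operations respectively, which shows how quickly the three shapes diverge.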
Let us dive deep into the following example to understand more about the time
complexity of searching algorithms. Here we shall compare two different
algorithms used to solve the same problem: searching.
Let us assume here that we have a sorted array in ascending order and we need to
search for a particular element in this array. Now to solve this problem we have two
algorithms:
1. Linear Search.
2. Binary Search.
Consider an array which contains five elements, and we have to find the
number 24 in the array.
When we search via the linear search algorithm, it compares each element of the
array to the search value in turn, and returns true when the search value is
found. Seems simple, right?
Now let us count the number of operations it needs to perform to return its
output. The answer is 5, since it compares every element of the array one by one
and 24 is the last element. So linear search uses five operations here (the
maximum number of operations) to find the given element. This is the worst case
for linear search.
Summing up, linear search takes n operations in its worst case, where n is the
size of the array.
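A minimal sketch of linear search as described above (the function name is an assumption, not from the original):

```javascript
// Linear search: compare each element to the search value in turn.
// Worst case: n comparisons, when the value is last or absent.
function linearSearch(array, value) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === value) return true; // found after i + 1 comparisons
  }
  return false;
}
```

On the five-element example above, searching for 24 (the last element) costs five comparisons, matching the worst case described.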
Now let’s examine the Binary search algorithm by diving deep into a simple example.
const array = [3, 7, 11, 21, 24, 36, 47, 61, 99];
const searchValue = 21;
For the given array, binary search first checks the middle element. The middle of
the array is 24; since 24 > 21, we continue searching in the left half
[3, 7, 11, 21]. The middle of that half is 7 (taking the lower middle); since
7 < 21, we move to its right portion, [11, 21]. Its middle is 11; since 11 < 21,
we move right once more and finally compare 21 = 21 and return true.
Counting the operations, binary search took four comparisons to find the desired
element. This reflects the logarithmic behaviour of binary search.
For an array of size n, the number of operations performed by binary search is
about log2(n).
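The walkthrough above can be sketched as code (an illustrative implementation using the lower-middle convention from the example):

```javascript
// Binary search on an array sorted in ascending order.
// Each step halves the search range, so it makes about log2(n) comparisons.
function binarySearch(array, value) {
  let low = 0;
  let high = array.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2); // lower middle of the range
    if (array[mid] === value) return true;
    if (array[mid] > value) high = mid - 1;   // search the left half
    else low = mid + 1;                       // search the right half
  }
  return false;
}

const array = [3, 7, 11, 21, 24, 36, 47, 61, 99];
binarySearch(array, 21); // true, after comparing 24, 7, 11, 21
```

Tracing the call reproduces the four comparisons counted above: 24, 7, 11, then 21.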
1. The Execution Time: Execution time can be one measure, but it depends heavily
on the particular computer we are using, so it can be one aspect but not the only
aspect for defining and comparing algorithms.
3. The Ideal Solution: We express the running time of a given algorithm as a
function of the input size, say f(n). Comparing these functions corresponding to
running times gives a comparison that is independent of machine speed,
programming style, etc.
Conclusion
1. Time complexity is the amount of time taken by an algorithm to execute,
from its first statement to its completion, expressed as a function of the
length of the input.
Ω: This notation describes functions that grow faster than, or at the same rate
as, the given expression; it gives a lower bound on the running time. (Of the
other two major notations, O describes functions that grow slower than or at the
same rate, giving an upper bound, and Θ describes functions that grow at the
same rate, giving a tight bound.)