
Definition of Algorithm:

 An algorithm is a finite set of precise instructions designed to solve a specific problem or perform a computation.

 Algorithms provide a step-by-step process to achieve a desired outcome.

Nature of Problems:

 Problems that algorithms aim to solve can take various forms, including business challenges
(e.g., logistical planning) and personal tasks (e.g., ordering food).

Key Aspects of Algorithmic Study:

1. Analysis of Algorithms:

 Definition:

 Analysis involves predicting the cost of an algorithm in terms of resources and performance.

 Objective:

 Understand and evaluate the efficiency of an algorithm in terms of time and space complexity.

 Outcome:

 Identify how the algorithm scales with input size and resource usage.

2. Design of Algorithms:

 Definition:

 Designing algorithms involves creating step-by-step procedures that minimize computational cost.

 Objective:

 Develop efficient algorithms for solving specific problems.

 Considerations:

 Minimize time complexity, optimize resource utilization, and enhance overall performance.

Representation of Algorithms:

 Forms:

 Algorithms can be represented using pseudocode, flowcharts, or actual code.

 These representations aid in understanding the logic and structure of the algorithm.
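
As a small illustration (not from the original notes), the same trivial algorithm for finding the largest value in a list can be written first as pseudocode and then as actual C code; FIND-MAX and find_max are illustrative names:

FIND-MAX(A, n)            ⊳ pseudocode form, A[1 . . n]
  max ← A[1]
  for i ← 2 to n
    do if A[i] > max
         then max ← A[i]
  return max

/* The same algorithm as actual C code (zero-indexed). */
int find_max(const int a[], int n)
{
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    return max;
}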

Characteristics of a Well-Defined Algorithm:

1. Correctness:

 The algorithm must produce the correct result for all valid inputs.

2. Termination:

 The algorithm must finish execution in some finite time.

3. Representation:

 Algorithms can be expressed using various forms such as pseudocode, flowcharts, or actual programming code.

Application Examples:

1. Business Scenario:

 Problem: "Allocate manpower to maximize profit."

 Algorithmic Solution: Design an algorithm that efficiently assigns available manpower to tasks to achieve maximum profit.

2. Life Scenario:

 Problem: "I am hungry. How do I order pizza?"

 Algorithmic Solution: Develop an algorithm that outlines the steps to order pizza,
considering factors like preferences and location.

3. Instructional Scenario:

 Problem: "Explain how to tie shoelaces to a five-year-old child."

 Algorithmic Solution: Design an algorithm that provides simple, step-by-step instructions suitable for a young child.

So, the theoretical study of algorithms involves both analysis and design, aiming to understand the
efficiency of algorithms and create solutions that minimize computational cost. Algorithms are
applicable in various domains, addressing problems from business optimization to everyday life
scenarios. The correctness, termination, and representation of algorithms are crucial aspects in their
study.

Sorting Problem Overview:

 Input: A sequence of n numbers a1, a2, ..., an.

 Output: A permutation (reordering) a1′, a2′, ..., an′ of the input sequence such that a1′ ≤ a2′ ≤ ... ≤ an′.


Sorting Algorithms: Sorting algorithms are procedures used to organize elements in a specified order.
They play a crucial role in various computer science applications, as arranging data simplifies searching,
retrieval, and analysis.

Insertion Sort Algorithm:

INSERTION-SORT(A, n)        ⊳ A[1 . . n]
  for j ← 2 to n
    do key ← A[j]
       i ← j − 1
       while i > 0 and A[i] > key
         do A[i+1] ← A[i]
            i ← i − 1
       A[i+1] ← key

Explanation:

1. The algorithm starts with the second element (j=2) and iterates through the sequence.

2. For each element, it compares it with the elements on its left side and inserts it into the correct
position in the sorted subsequence.

3. The inner while loop shifts elements to the right until the correct position for the current
element is found.

4. The sorted subsequence grows with each iteration, resulting in a sorted permutation.
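
For readers who prefer actual code, the following is a direct C translation of the INSERTION-SORT pseudocode above, a sketch assuming zero-based C arrays; insertion_sort is an illustrative name, not part of the original notes.

/* Insertion sort: C translation of the INSERTION-SORT pseudocode above,
 * adapted to zero-based arrays. Sorts a[0 .. n-1] in place. */
void insertion_sort(int a[], int n)
{
    for (int j = 1; j < n; j++) {          /* pseudocode: for j <- 2 to n     */
        int key = a[j];                    /* element to be inserted          */
        int i = j - 1;                     /* last index of the sorted prefix */
        while (i >= 0 && a[i] > key) {     /* shift larger elements right ... */
            a[i + 1] = a[i];
            i--;
        }
        a[i + 1] = key;                    /* ... and drop key into place     */
    }
}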

Example using Insertion Sort with the sequence 8, 2, 4, 9, 3, 6:

1. Pass 1: 2 8 4 9 3 6

2. Pass 2: 2 4 8 9 3 6

3. Pass 3: 2 4 8 9 3 6

4. Pass 4: 2 3 4 8 9 6

5. Pass 5: 2 3 4 6 8 9

Algorithm Notes:

 Maintains a sorted subsequence and inserts each element into its correct position.

 Time complexity of O(n²) in the worst case.

 In-place sorting algorithm, suitable for small or partially sorted datasets.
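
As a minimal usage sketch (assuming the insertion_sort function from the C translation shown earlier), the following sorts the example sequence and prints the result:

#include <stdio.h>

/* Assumes the insertion_sort() sketch shown earlier in these notes. */
void insertion_sort(int a[], int n);

int main(void)
{
    int a[] = {8, 2, 4, 9, 3, 6};
    int n = 6;

    insertion_sort(a, n);

    for (int k = 0; k < n; k++)
        printf("%d ", a[k]);    /* prints: 2 3 4 6 8 9 */
    printf("\n");
    return 0;
}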

What is Algorithm Analysis?

 Definition: Algorithm analysis involves estimating the performance of an algorithm in terms of time and memory consumption.

 Objective: To predict how much time an algorithm may take to finish or how much memory it
might use during execution.

 Comparison: Algorithm analysis allows us to compare different algorithms and choose the most
efficient one for a given problem.

Running Time Analysis:

 Alternative Term: Time-complexity analysis.

 Purpose: To determine how the running time of an algorithm increases as the size of the
problem it solves increases.

 Key Focus: Understanding the relationship between the input size and the time required for the
algorithm to complete.

Size of the Problem:

 Variability: The size of the problem can manifest in various forms, including:

 Size of an array.

 Polynomial degree of an equation.

 Number of elements in a matrix.


 Number of bits in the binary representation of the input.

 And other relevant parameters.

Running Time vs. Memory Consumption:

 Running Time Analysis: Focuses on understanding how the execution time of an algorithm scales with the input size.

 Memory Consumption Analysis: Investigates the amount of memory an algorithm requires during its execution.

 Balancing Act: Efficient algorithms strike a balance between minimizing running time and
optimizing memory usage.

Significance of Analysis:

 Algorithm Selection: Allows for informed decision-making in choosing the most suitable
algorithm for a particular problem.

 Performance Prediction: Provides insights into how an algorithm will behave with larger or
more complex inputs.

In short, algorithm analysis is a critical aspect of designing and implementing efficient algorithms.
Running time analysis, in particular, helps us understand the behavior of algorithms as the size of the
problem varies. The size of the problem, which can take various forms, is a key factor in determining
algorithmic efficiency. Analyzing both running time and memory consumption enables practitioners to
make informed choices and optimizations in algorithm design.

Let's delve into a more detailed explanation of the process of analyzing the running time of algorithms,
focusing on objective measures and cost association.

Objective Measures in Algorithm Analysis:

1. Comparing Execution Times:

 Challenge: Execution times can vary significantly between different computers or hardware configurations.

 Issue: Directly comparing execution times is not a reliable metric for algorithm analysis.

2. Counting Statements:

 Alternative Approach: Counting the number of statements within an algorithm.

 Cost Association: Assigning a "cost" to each statement to quantify its impact on overall
performance.

Counting Statements and Cost Association:

Objective:

 Understand how the running time of an algorithm scales with the size of the problem it solves.

Challenges:

 The number of statements can vary based on factors like programming language and coding
style.

Example Algorithms:

1. Algorithm 1:

 Cost Analysis (cost of each statement, each executed once):

 arr[0] = 0;      cost c1

 arr[1] = 0;      cost c1

 ...

 arr[N−1] = 0;    cost c1

 Total Cost:

 c1 × N

2. Algorithm 2:

 Cost Analysis:

 for(i=0; i<N; i++)    cost c1 + c2(N+1) + c3N (initialization once, test N+1 times, increment N times)

 arr[i] = 0;           cost c1 × N (body executed N times)

 Total Cost:

 c1 + c2(N+1) + c3N + c1N

 Simplified: (c1 + c2 + c3)N + (c1 + c2)
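
For concreteness, here is a sketch of the two fragments as compilable C, with the assumed per-statement costs (c1 for an assignment, c2 for the loop test, c3 for the increment) noted in comments; the function names and the value of N are illustrative:

#define N 100    /* illustrative problem size */

/* Algorithm 1: N separate assignment statements, each costing c1.
 * Total cost: c1 * N.                                              */
void algorithm1(int arr[])
{
    arr[0] = 0;        /* c1 */
    arr[1] = 0;        /* c1 */
    /* ... one such line per index ... */
    arr[N - 1] = 0;    /* c1 */
}

/* Algorithm 2: a single loop.
 * i = 0 runs once (c1), the test i < N runs N + 1 times (c2),
 * i++ runs N times (c3), and the body runs N times (c1), so the
 * total is c1 + c2(N+1) + c3*N + c1*N = (c1 + c2 + c3)N + (c1 + c2). */
void algorithm2(int arr[])
{
    for (int i = 0; i < N; i++)
        arr[i] = 0;    /* c1, executed N times */
}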

Key Observations:

 Total Cost Dependency: The total cost of an algorithm depends on both the cost associated with
each statement and the number of times each statement is executed.

 Expressing in Terms of N: Expressing the total cost in terms of the input size (N) facilitates a
clearer understanding of how the running time scales with the problem size.

So, defining objective measures and associating costs provide a structured approach to running time
analysis. The total cost emerges as a critical metric for evaluating and comparing the performance of
algorithms. Analyzing the relationship between the input size and the total cost helps in making
informed decisions about the efficiency of algorithms.

The estimated running time of the Insertion Sort algorithm depends on the arrangement of numbers in
the input array. Specifically, we are often interested in understanding the runtime in the worst-case
scenario. Analyzing the worst-case scenario provides a guarantee that the algorithm won't take any
longer than this for any type of input.

Worst-Case Analysis of Insertion Sort:

Algorithm Steps:

1. The algorithm iterates through each element in the array (from the second element onward).

2. For each element, it compares it with the elements on its left side and inserts it into the correct
position in the sorted subsequence.

Worst-Case Scenario:

 Occurs when the input array is sorted in descending (reverse) order, i.e., each element is smaller than the one before it.

Analysis:

 For each element in the input array, the algorithm needs to compare it with all the previous elements (in the sorted subsequence) before finding its correct position.

 The number of comparisons for the j-th element grows with its position (up to j − 1 comparisons), so the total number of comparisons grows quadratically with the size of the input array (the exact count is worked out below).

Time Complexity (Big O notation):

 The worst-case time complexity of Insertion Sort is O(n²), where n is the size of the input array.

 This indicates that the running time of the algorithm grows quadratically with the size of the input array in the worst-case scenario.
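
The quadratic bound can be seen from a direct count of worst-case comparisons (a standard calculation): element j is compared with up to j − 1 earlier elements, so the total is 1 + 2 + ... + (n − 1) = n(n − 1)/2 ≈ n²/2, which grows as O(n²).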

Significance of Worst-Case Analysis:

 The worst-case analysis provides a guarantee on the upper bound of the running time, ensuring that the algorithm won't exceed a certain time limit for any input.

 Although O(n²) may not be the most efficient for large datasets, Insertion Sort has advantages for small or partially sorted arrays.

In summary, the estimated running time of Insertion Sort in the worst-case scenario is O(n²), providing a
reliable upper bound on its performance regardless of the input arrangement.
To make the Insertion Sort algorithm most inefficient (resulting in the worst-case scenario), you should
arrange the input numbers in descending order. This means placing the largest element first, followed
by the second-largest, and so on. In other words, the input array should be sorted in reverse order.

By arranging the numbers in descending order, each new element being considered in the algorithm will
need to be compared with all the previous elements in the sorted subsequence. This leads to the
maximum number of comparisons and shifts, making the algorithm take the longest time to sort the
input.

Let's illustrate this with an example:

Original Input: [5, 4, 3, 2, 1]

Steps of Insertion Sort:

1. The first element (5) forms the initial sorted subsequence; no comparison is needed.

2. For the second element (4), it is compared with 5, and 5 is shifted right (1 comparison).

3. For the third element (3), it is compared with 5 and 4, which are shifted right (2 comparisons).

4. For the fourth element (2), it is compared with 5, 4, and 3, which are shifted right (3 comparisons).

5. For the fifth element (1), it is compared with 5, 4, 3, and 2, which are shifted right (4 comparisons).

Result After Sorting: [1, 2, 3, 4, 5]

In the worst-case scenario, where the input array is sorted in descending order, the number of
comparisons and shifts is maximized (here, 1 + 2 + 3 + 4 = 10 = 5·4/2), resulting in a time complexity of O(n²) for Insertion Sort.
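
As a quick check of this count, a small C sketch (illustrative, not from the original notes) that tallies the shifts performed while insertion-sorting this reverse-ordered input reports 10 shifts for n = 5, i.e., n(n − 1)/2:

#include <stdio.h>

/* Counts element shifts while insertion-sorting the worst-case
 * (reverse-ordered) input; for n = 5 this prints 10 = n(n-1)/2. */
int main(void)
{
    int a[] = {5, 4, 3, 2, 1};
    int n = 5;
    int shifts = 0;

    for (int j = 1; j < n; j++) {
        int key = a[j];
        int i = j - 1;
        while (i >= 0 && a[i] > key) {
            a[i + 1] = a[i];
            i--;
            shifts++;                      /* one shift per displaced element */
        }
        a[i + 1] = key;
    }

    printf("shifts = %d\n", shifts);       /* prints: shifts = 10 */
    return 0;
}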
