
MCQ:

1. b
2. b
3. b

ESSAY:
1. Greedy algorithms build a solution step by step, choosing the locally optimal option at
each step of the process. If at one step a better-looking choice is available, a greedy
algorithm takes it immediately rather than searching for the best solution overall.
Because no choice is ever reconsidered, the result is usually not globally optimal, but
sometimes by chance it is, in which case the greedy approach can be preferable to a
dynamic programming algorithm. Greedy algorithms are usually fast and use little memory.
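As an illustration of the greedy idea (this example is not part of the original question), here is a sketch of the classic activity-selection problem, where always taking the activity that finishes earliest happens to be globally optimal:

```python
def select_activities(intervals):
    """Greedy activity selection: repeatedly take the activity
    that finishes earliest among those that don't overlap."""
    chosen = []
    last_finish = float('-inf')
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:      # locally optimal, never reconsidered
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9)]))
# [(1, 4), (5, 7)]
```

Each step commits to the earliest-finishing compatible activity and never backtracks, which is exactly the greedy pattern described above.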
2. Brute force algorithms try to find the optimal solution by systematically trying every
possible candidate and checking whether it solves the problem, without regard for
technique or speed. They are usually tailored to one specific problem and cannot easily
be reused elsewhere. This is a simple and straightforward approach, but it can be very
time-consuming, especially for problems with a large number of possible solutions.
Greedy algorithms, on the other hand, make the locally optimal choice at each step in
the hope of reaching a global optimum, selecting whatever option looks best at the
current moment without considering the consequences of that choice for future steps. In
general, brute force algorithms are more thorough and are guaranteed to find the optimal
solution, but they can be very slow; greedy algorithms are faster but may not always find
the optimal solution.
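To make the contrast concrete, here is a hypothetical brute-force sketch for the coin-change problem from question 3: it recursively tries every coin at every step, so it is exhaustive (and exponential) but guaranteed to find the true minimum:

```python
def brute_force_min_coins(amount, denominations):
    """Try every possible combination of coins and keep the best.
    Exhaustive, so always correct, but exponential in the amount."""
    if amount == 0:
        return 0
    best = float('inf')
    for coin in denominations:
        if coin <= amount:
            # Recurse on the remainder after spending this coin.
            best = min(best, brute_force_min_coins(amount - coin, denominations) + 1)
    return best

print(brute_force_min_coins(6, [1, 3, 4]))  # 2  (two $3 coins)
```

A greedy version would make a single pass and never revisit a choice, which is why it runs fast but can miss combinations like the two $3 coins here.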

3. i. Using the greedy algorithm:

The greedy algorithm starts by selecting the largest denomination and works its way
down to the smaller denominations. For the given denominations ($1, $3, $4), it first
selects a $4 coin, which leaves a remainder of $2. The $3 coin is now too large, so it
selects a $1 coin (remainder $1) and then another $1 coin to reach the desired amount
of $6. Therefore, the greedy algorithm requires 3 coins (4 + 1 + 1) to make $6.
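The greedy procedure described above can be sketched in Python (assuming the denominations are $1, $3, and $4):

```python
def greedy_coin_count(amount, denominations):
    """Repeatedly take the largest coin that still fits."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_coin_count(6, [1, 3, 4]))  # [4, 1, 1] -> 3 coins
```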

ii. Using dynamic programming:


Dynamic programming is a method of solving problems by breaking them down into
smaller subproblems and storing the solutions to these subproblems in a table, so they
can be reused later. To find the minimum number of coins needed to make $6 using
dynamic programming, we can build a table with the following steps: Initialize a table of
size 7 (amounts $0 through $6) with all values set to infinity, except for the first cell,
which is set to 0, since zero coins are needed to make $0. Then iterate through the
amounts and coin denominations and fill in the table as follows: For each cell, and for
each denomination less than or equal to that amount, update the cell with the minimum
of its current value and 1 plus the minimum number of coins needed to reach the
remainder (the amount in the cell minus the coin denomination). For example, to fill in
the cell for $5 using the $4 coin, we take the value in the cell for $1 (one coin) plus 1
(the additional $4 coin), giving 2 coins. The resulting table looks like this:
Amount: 0 1 2 3 4 5 6
Coins:  0 1 2 1 1 2 2
The minimum number of coins needed to make $6 is 2 (a $3 coin plus a $3 coin), which
is the value in the last cell.
Therefore, using dynamic programming, the minimum number of coins needed to make
$6 is 2, one fewer than the greedy answer of 3; this is why dynamic programming is
needed for these denominations.
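The table-filling procedure can be sketched in Python (again assuming denominations of $1, $3, and $4):

```python
def min_coins(amount, denominations):
    """Bottom-up DP: table[a] = fewest coins that sum to amount a."""
    INF = float('inf')
    table = [0] + [INF] * amount      # 0 coins needed to make $0
    for a in range(1, amount + 1):
        for coin in denominations:
            if coin <= a and table[a - coin] + 1 < table[a]:
                table[a] = table[a - coin] + 1
    return table

print(min_coins(6, [1, 3, 4]))  # [0, 1, 2, 1, 1, 2, 2]
```

The last entry of the table is the answer: 2 coins for $6.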

4. Dynamic programming is a method of solving problems by breaking them down into
smaller subproblems and storing the solutions to these subproblems in a table, so they
can be reused later.
Make a table of size 5 (lengths 0 through 4) with all values set to 0.
Iterate through the lengths of the rod from 1 to 4 and fill in the table as follows:
For each length, consider all possible ways to cut the rod and select the one that gives
the maximum price. The price for a rod of length i can be obtained by adding the price
of a cut piece of length j (for j ≤ i) to the best price already computed for the remaining
rod of length i − j.
We need to consider all possible ways to cut the rod and select the one that gives the
maximum price. Using the given prices (length 1 = $1, length 2 = $5, length 3 = $8),
the possible ways to cut a rod of length 4 include:
Cut the rod into two pieces of length 2: this gives $5 + $5 = $10.
Cut the rod into a piece of length 3 and a piece of length 1: this gives $8 + $1 = $9.
The results:
Length:    0 1 2 3 4
Max price: 0 1 5 8 10
The maximum price for a rod of length 4 is $10, which is the value in the last cell of
the table.

Therefore, the optimal strategy for cutting and selling a rod of length 4 at the highest
price is to cut it into two pieces of length 2.
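The bottom-up procedure can be sketched in Python. The prices here are inferred from the table above (length 1 = $1, length 2 = $5, length 3 = $8, length 4 uncut = $9) and should be replaced with the actual assignment values:

```python
def rod_cut(prices, n):
    """Bottom-up rod cutting: best[i] = max revenue for a rod of length i.
    prices[j] is the price of a single piece of length j (prices[0] unused)."""
    best = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            # Sell one piece of length j, cut the rest optimally.
            best[i] = max(best[i], prices[j] + best[i - j])
    return best

prices = [0, 1, 5, 8, 9]   # assumed values, inferred from the table
print(rod_cut(prices, 4))  # [0, 1, 5, 8, 10]
```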
5. Huffman Tree (built by repeatedly merging the two lowest-frequency nodes):

              (100)
             /     \
         (40)       (60)
         /  \       /   \
     G(18)  B(22) (27)   (33)
                  /  \   /   \
              E(12) F(15) (15) D(18)
                          /  \
                       C(5)  A(10)

Huffman Table (0 for a left branch, 1 for a right branch):

Char Code Freq Total Bits
A    1101 10   (4 x 10) = 40
B    01   22   (2 x 22) = 44
C    1100 5    (4 x 5)  = 20
D    111  18   (3 x 18) = 54
E    100  12   (3 x 12) = 36
F    101  15   (3 x 15) = 45
G    00   18   (2 x 18) = 36

Therefore, the total number of bits needed to represent the characters A, B, C, D, E, F,
and G using the Huffman codes is 40 + 44 + 20 + 54 + 36 + 45 + 36 = 275 bits. This is
the total number of bits needed to represent the text using the Huffman codes. To
compare this to the old text we can multiply the number of occurrences of each character
by 8 (since each character is represented using 8 bits originally). There are
10 + 22 + 5 + 18 + 12 + 15 + 18 = 100 characters, so the total number of bits needed to
represent the original text is 100 x 8 = 800 bits. Therefore, the Huffman codes allow you
to represent the text using 800 - 275 = 525 fewer bits than the original representation.
This represents a compression of approximately (525/800) = 65.6%.
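A quick way to check the total bit count is a sketch using Python's heapq module: the cost of a Huffman code equals the sum of the combined weights created at each merge. Ties in the merge order can change individual codes but never this total:

```python
import heapq
import itertools

def huffman_cost(freqs):
    """Total encoded bits: sum of the merged weights at every step."""
    counter = itertools.count()            # tie-breaker for equal weights
    heap = [(w, next(counter)) for w in freqs.values()]
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        w1, _ = heapq.heappop(heap)        # two lowest-frequency nodes
        w2, _ = heapq.heappop(heap)
        total += w1 + w2                   # every symbol below gains one bit
        heapq.heappush(heap, (w1 + w2, next(counter)))
    return total

freqs = {'A': 10, 'B': 22, 'C': 5, 'D': 18, 'E': 12, 'F': 15, 'G': 18}
print(huffman_cost(freqs))  # 275
```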

6. Memoization is when a dynamic programming algorithm stores the results of previous
calculations in some form of memory, usually an array, matrix, or hash table, so they do
not have to be recomputed. For example, if I am traversing a weighted graph and
repeatedly pass through the same path, the cost of that path is not recalculated each
time; the stored value is reused and only the new portion of the route is computed. In
simpler terms, if I walk through hallway 1 at a cost of 4 and then through hallway 2 at a
cost of 6, that total of 10 is calculated once; if the next day I decide to continue from
hallway 2 to hallway 3, the algorithm remembers that moving from hallway 1 to hallway 2
costs 10 and reuses that value when proceeding with the new calculation.
Another example of memoization is the Fibonacci sequence. To compute the nth term
of the Fibonacci sequence using a recursive algorithm, you can write a function that
takes an integer n as input and returns the nth Fibonacci number, like this:

def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        # Recomputes the same subproblems over and over.
        return fibonacci(n - 1) + fibonacci(n - 2)

The function has exponential time complexity, O(2^n), because it recomputes the same
subproblems over and over, and is therefore very slow. We can use memoization to store
the results of these expensive function calls and return the stored result when the
same calculation is requested again.
Here is an example of the fibonacci function with memoization:

def fibonacci(n, memo={}):
    # Return the stored result if this n was already computed.
    if n in memo:
        return memo[n]
    if n == 0:
        result = 0
    elif n == 1:
        result = 1
    else:
        result = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    memo[n] = result
    return result

The memoized version produces the same results in much less time: each subproblem is
computed only once, so the running time drops from exponential to O(n).
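For completeness, Python's standard library can apply this kind of memoization automatically with functools.lru_cache, so the recursive function does not need to manage its own table:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # cache every result, keyed by the argument n
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50))  # 12586269025
```

Without the decorator this call would take an impractically long time; with it, each of the 51 subproblems is computed exactly once.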
