
CSE 450/598, Design and Analysis of Algorithms

Homework #4 Solution


Problem 1: One of the basic motivations behind the Minimum Spanning Tree Problem is the
goal of designing a spanning network for a set of nodes with minimum total cost. Here we explore
another type of objective: designing a spanning network for which the most expensive edge is as
cheap as possible.
Specifically, let G = (V, E) be a connected graph with n vertices, m edges, and positive edge
costs that you may assume are all distinct. Let T = (V, E') be a spanning tree of G; we define the
bottleneck edge of T to be the edge of T with the greatest cost.
A spanning tree T of G is a minimum-bottleneck spanning tree if there is no spanning tree T' of
G with a cheaper bottleneck edge.
(a) Is every minimum-bottleneck tree of G a minimum spanning tree of G? Prove or give a counterexample.

Solution: False. We can see a counterexample as follows (see Figure 1).

Figure 1: counterexample for part (a)
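Since Figure 1 itself is not reproduced in this extract, here is a minimal Python sketch of one graph of
the kind such a counterexample uses (the particular vertices and costs are an assumption for illustration,
not necessarily those of the original figure). It enumerates every spanning tree of a small 4-vertex graph
and reports which trees are minimum spanning trees and which are minimum-bottleneck trees:

    from itertools import combinations

    # Hypothetical counterexample graph: vertices 0..3 with distinct edge costs.
    # Edge (2, 3) is a bridge, so every spanning tree has bottleneck cost 4, and
    # therefore every spanning tree is a minimum-bottleneck tree; only one of
    # them, however, is a minimum spanning tree.
    edges = [(0, 1, 1), (1, 2, 2), (0, 2, 3), (2, 3, 4)]
    n = 4

    def is_spanning_tree(tree):
        # union-find check: n - 1 edges with no cycle must connect all n vertices
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v, _ in tree:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True

    trees = [t for t in combinations(edges, n - 1) if is_spanning_tree(t)]
    min_total = min(sum(c for _, _, c in t) for t in trees)
    min_bottleneck = min(max(c for _, _, c in t) for t in trees)
    for t in trees:
        total = sum(c for _, _, c in t)
        bottleneck = max(c for _, _, c in t)
        print([e[:2] for e in t], "total =", total, "bottleneck =", bottleneck,
              "MST" if total == min_total else "",
              "MBST" if bottleneck == min_bottleneck else "")

On this graph the tree {(0,1), (0,2), (2,3)} attains the minimum bottleneck cost 4 but has total cost 8,
while the minimum spanning tree {(0,1), (1,2), (2,3)} has total cost 7; so a minimum-bottleneck tree
need not be a minimum spanning tree.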


(b) Is every minimum spanning tree of G a minimum-bottleneck tree of G? Prove or give a counterexample.
Solution:


True. Suppose, for contradiction, that a minimum spanning tree T is not a minimum-bottleneck
tree of G, and let Tb be a minimum-bottleneck tree. Let (a, b) be the bottleneck edge of T; by
assumption its cost is larger than that of every edge of Tb. Removing (a, b) from T splits it into
two components, one containing a and the other containing b. Tb contains a path from a to b, say
(a, p1, . . . , pk, b), and some edge e on this path must cross between the two components. Since e
belongs to Tb, its cost is smaller than that of (a, b). Adding e to T - (a, b) therefore yields a new
spanning tree T' with a smaller total cost than T, contradicting the assumption that T is a minimum
spanning tree. Thus every minimum spanning tree is also a minimum-bottleneck tree.
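As a small experimental check of this fact (a sketch only; the random graph generation and helper names
are assumptions), the following compares the bottleneck of the MST produced by Kruskal's algorithm
against the minimum achievable bottleneck, computed independently as the smallest cost c such that the
edges of cost at most c already connect the graph:

    import random

    def find(parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def kruskal_mst(n, edges):
        # standard Kruskal's algorithm: scan edges in increasing cost order
        parent, mst = list(range(n)), []
        for u, v, c in sorted(edges, key=lambda e: e[2]):
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv
                mst.append((u, v, c))
        return mst

    def connected(n, edge_subset):
        parent = list(range(n))
        for u, v, _ in edge_subset:
            parent[find(parent, u)] = find(parent, v)
        return len({find(parent, x) for x in range(n)}) == 1

    def min_bottleneck(n, edges):
        # smallest cost c such that the edges of cost <= c connect the graph
        for _, _, c in sorted(edges, key=lambda e: e[2]):
            if connected(n, [e for e in edges if e[2] <= c]):
                return c

    random.seed(0)
    for _ in range(200):
        n = 6
        pairs = [(u, v) for u in range(n) for v in range(u + 1, n)]
        chosen = set(random.sample(pairs, 8)) | {(i, i + 1) for i in range(n - 1)}
        costs = random.sample(range(1, 10 ** 6), len(chosen))      # distinct positive costs
        edges = [(u, v, c) for (u, v), c in zip(sorted(chosen), costs)]
        assert max(c for _, _, c in kruskal_mst(n, edges)) == min_bottleneck(n, edges)
    print("in every trial the MST's bottleneck equals the minimum possible bottleneck")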
Grading Keys:
10 points for each subproblem;
5 points for correct answer;
5 points for correct reasoning.

Problem 2: Let A1, . . . , An be the matrices where the dimensions of Ai are d_{i-1} × di, for
i = 1, . . . , n. Here is a strategy for determining the best order in which to perform the matrix
multiplications to compute A1 · A2 · · · An:
At each step, choose the largest remaining dimension (from among d1, . . . , d_{n-1}), and multiply
two adjacent matrices which share that dimension.
(a) What is the order of the running time of this algorithm (only to determine the order in which
to multiply the matrices, not including the actual multiplications)?
Solution:

Determining the order of multiplying the matrices is equivalent to sorting the sequence of given
dimensions d1, . . . , d_{n-1} (except the first dimension d0 and the last dimension dn). The time
complexity will be O(n log n).
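A minimal sketch of this step (the function name and list representation are assumptions): the greedy
order is obtained simply by sorting the internal dimensions in decreasing order.

    def greedy_multiplication_order(dims):
        # dims = [d0, d1, ..., dn]; matrix Ai has dimensions d_{i-1} x di.
        # The strategy eliminates the largest remaining internal dimension first,
        # so the order is the internal indices 1..n-1 sorted by decreasing
        # dimension, which costs O(n log n).
        return sorted(range(1, len(dims) - 1), key=lambda i: dims[i], reverse=True)

    # Example from part (b): A1 is 1x2, A2 is 2x3, A3 is 3x4, so dims = [1, 2, 3, 4]
    print(greedy_multiplication_order([1, 2, 3, 4]))   # [2, 1]: multiply A2*A3 first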
(b) Either give a convincing argument that this strategy will always minimize the number of multiplications, or give an example where it fails to do so.
Solution:
The strategy does not work for all the cases. Following is a counterexample:
A1 · A2 · A3, where the dimensions of A1, A2 and A3 are 1 × 2, 2 × 3 and 3 × 4.
The strategy will give the order A1 · (A2 · A3), which requires 32 multiplications.

2 × 3 × 4 + 1 × 2 × 4 = 32
The optimal order is (A1 · A2) · A3, which requires only 18 multiplications:
1 × 2 × 3 + 1 × 3 × 4 = 18
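The two operation counts can be checked with a small helper (an illustrative sketch, not part of the
original solution) that charges d_left × di × d_right each time an internal dimension di is eliminated,
i.e. each time the two matrices sharing di are multiplied:

    def chain_cost(dims, order):
        # dims = [d0, ..., dn]; order lists the internal indices 1..n-1 in the
        # order their dimensions are eliminated (the multiplication order).
        alive = list(range(len(dims)))      # dimension indices still present
        total = 0
        for i in order:
            pos = alive.index(i)
            total += dims[alive[pos - 1]] * dims[i] * dims[alive[pos + 1]]
            alive.pop(pos)
        return total

    dims = [1, 2, 3, 4]                  # A1: 1x2, A2: 2x3, A3: 3x4
    print(chain_cost(dims, [2, 1]))      # greedy  A1*(A2*A3): 2*3*4 + 1*2*4 = 32
    print(chain_cost(dims, [1, 2]))      # optimal (A1*A2)*A3: 1*2*3 + 1*3*4 = 18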
Grading Keys:
10 points for each subproblem;


Problem 3: Let's consider a long, quiet country road with houses scattered very sparsely along
it. (Picture the road as a long line segment with an eastern endpoint and a western endpoint.)
Further, let's suppose that despite the bucolic setting, the residents of all these houses are avid
cell phone users. You want to place cell phone base stations at certain points along the road, so
that every house is within four miles of one of the base stations. Give an efficient algorithm that
achieves this goal using as few base stations as possible. Prove its correctness and explain its time
complexity.
Solution:
Here is a greedy algorithm for this problem. Start at the western end of the road and begin
moving east until the first moment when there is a house h exactly four miles to the west. We place
a base station at this point (if we went any further east without placing a base station, we would
not cover h). We then delete all the houses covered by this base station and iterate this process on
the remaining houses.
Here is another way to view this algorithm. For any point on the road define its position to be
the number of miles it is from the western end. We place the first base station at the easternmost
(i.e. largest) position s1 with the property that all houses between 0 and s1 will be covered by s1 .
In general, having placed {s1, s2, . . . , si}, we place base station i + 1 at the largest position s_{i+1}
such that all houses between si and s_{i+1} will be covered by si or s_{i+1}. The time complexity of
this algorithm is O(n), where n is the number of houses (assuming the house positions are given in
increasing order; otherwise an initial O(n log n) sort is needed).
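A minimal Python sketch of this greedy sweep (the function name and the sample house positions are
assumptions for illustration):

    def place_base_stations(houses, radius=4.0):
        # Greedy sweep from west to east: place each station `radius` miles east
        # of the westernmost house that is not yet covered.
        stations = []
        last = float("-inf")                 # position of the most recent station
        for h in sorted(houses):             # scan houses in increasing position
            if h - last > radius:            # h is not covered by the last station
                last = h + radius            # go as far east as possible while covering h
                stations.append(last)
        return stations

    # toy example: houses at mile markers 1, 7, 9 and 23
    print(place_base_stations([1, 7, 9, 23]))   # -> [5.0, 27.0]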
Let S = {s1, s2, . . . , sk} denote the full set of base station positions that our greedy algorithm
places, and let T = {t1, t2, . . . , tm} denote the set of base station positions in an optimal solution,
sorted in increasing order (i.e. from west to east). We must show that k = m.
We do this by showing a sense in which our algorithm's solution S stays ahead of the optimal
solution T. Specifically, we claim that si ≥ ti for each i, and prove this by induction. The claim
is true for i = 1, since we go as far as possible to the east before placing the first base station.
Assume now it is true for some value i ≥ 1; this means that our algorithm's first i base stations
{s1, s2, . . . , si} cover all the houses covered by the first i stations {t1, t2, . . . , ti}. As a result, if we
add t_{i+1} to {s1, s2, . . . , si}, we will not leave any house between si and t_{i+1} uncovered. But the greedy algorithm
chooses s_{i+1} to be as large as possible subject to the condition that all houses between si and s_{i+1}
are covered; so we have s_{i+1} ≥ t_{i+1}. This proves the claim by induction.
Finally, if k > m, then {s1, s2, . . . , sm} fails to cover all houses. But sm ≥ tm, and so T also fails
to cover all houses, a contradiction.
Grading Keys:
10 points for correct algorithm;
2 points for correct time complexity;
8 points for correctness proof;
Problem 4: For each of the following two statements, decide whether it is true or false. If it is
true, give a short explanation. If it is false, give a counterexample.
(a) Suppose we are given an instance of the Minimum Spanning Tree Problem on a graph G, with
edge costs that are all positive and distinct. Let T be a minimum spanning tree for this instance.
Now suppose we replace each edge cost ce by its square, ce^2, thereby creating a new instance of
the problem with the same graph but different costs.
True or false? T must still be a minimum spanning tree for this new instance.
Solution:

True. There are minimum spanning tree algorithms, like Kruskal's algorithm, that only care
about the relative order of the costs, not their actual values. Therefore, if we feed the costs ce^2
into Kruskal's algorithm, it will sort them in the same way and put the same subset of edges
in the MST.
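Because squaring is strictly increasing on positive numbers, the sorted order that Kruskal's algorithm
works through never changes; a tiny illustrative snippet (the cost values are made up):

    costs = [5.0, 1.5, 9.0, 2.0, 7.5]   # hypothetical distinct positive edge costs
    by_cost = sorted(range(len(costs)), key=lambda i: costs[i])
    by_square = sorted(range(len(costs)), key=lambda i: costs[i] ** 2)
    assert by_cost == by_square         # same order, so Kruskal selects the same edge set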

(b) Suppose we are given an instance of the Shortest s-t Path Problem on a directed graph G. We
assume that all edge costs are positive and distinct. Let P be a minimum-cost s-t path for this
instance. Now suppose we replace each edge cost ce by its square, ce^2, thereby creating a new
instance of the problem with the same graph but different costs.
True or false? P must still be a minimum-cost s-t path for this new instance.
Solution:

False. Let G have edges (s, v), (v, t) and (s, t) with costs 2, 3 and 4, respectively.
Then the shortest path is the single edge (s, t), since 4 < 2 + 3 = 5; but after squaring the costs,
the path through v is shorter, since 4 + 9 = 13 < 16.
Grading Keys:
10 points for each subproblem;

5 points for correct answer;


5 points for correct reasoning.


Problem 5: Your friend is working as a camp counselor, and he is in charge of organizing
activities for a set of junior-high-school-age campers. One of his plans is the following mini-triathlon
exercise: each contestant must swim 20 laps of a pool, then bike 10 miles, then run 3 miles. The
plan is to send the contestants out in a staggered fashion, via the following rule: the contestants
must use the pool one at a time. In other words, first one contestant swims the 20 laps, gets out, and
starts biking. As soon as the first person is out of the pool, a second contestant begins swimming
the 20 laps; as soon as he or she is out and starts biking, a third contestant begins swimming... and
so on.
Each contestant has a projected swimming time (the expected time it will take him or her to
complete the 20 laps), a projected biking time (the expected time it will take him or her to complete
the 10 miles of bicycling) and a projected running time (the time it will take him or her to complete
the 3 miles of running). Your friend wants to decide on a schedule for the triathlon: an order in
which to sequence the starts of the contestants. Let's say that the completion time of a schedule
is the earliest time at which all contestants will be finished with all three legs of the triathlon,
assuming that they each spend exactly their projected swimming, biking, and running times on
the three parts. (Again, note that participants can bike and run simultaneously, but at most one
person can be in the pool at any time.) What's the best order for sending people out, if one wants
the whole competition to be over as early as possible? More precisely, give an efficient algorithm
that produces a schedule whose completion time is as small as possible. Prove its correctness and
explain its time complexity.
Solution:
Let contestants be numbered 1, 2, . . . , n and let si, bi, ri denote the swimming, biking and running
times of contestant i. Here is an algorithm to produce a schedule: arrange the contestants in order
of decreasing bi + ri and send them out in this order. We claim that this order minimizes the
completion time.
We prove this in the following way. Consider any optimal solution, and suppose it does not
use this order. Then the optimal solution must contain two contestants i and j so that j is sent
out directly after i but bi + ri < bj + rj . We will call such a pair (i, j) an inversion. Consider the
solution obtained by swapping the orders of i and j. In this swapped schedule, j completes earlier
than he/she used to. Also, in the swapped schedule i gets out of the pool when j previously got
out of the pool; but since bi + ri < bj + rj , i finishes sooner in the swapped schedule than j finished
in the previous schedule. Hence our swapped schedule does not have a greater completion time and

https://www.coursehero.com/file/9692310/HW4S12sol1/

Th

sh is
ar stu
ed d
vi y re
aC s
o
ou urc
rs e
eH w
er as
o.
co
m

so it is optimal too. Continuing in this way, we can eliminate all inversions without increasing the
completion time. At the end of this process we will have a schedule in the order produced by our
algorithm whose completion time is no greater than that of the original optimal order we considered.
Thus the order produced by our algorithm must also be optimal.
Time complexity of this algorithm is O(n log n) (for sorting the contestants in order of decreasing
bi + ri ).
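A short sketch of the schedule construction and of how its completion time is evaluated (the function
name and the sample projected times are assumptions for illustration):

    def best_schedule(contestants):
        # contestants: list of (si, bi, ri) projected swim/bike/run times.
        # Send contestants out in decreasing order of bi + ri.
        order = sorted(range(len(contestants)),
                       key=lambda i: contestants[i][1] + contestants[i][2],
                       reverse=True)                      # O(n log n)
        pool_free = 0.0                                   # time the pool becomes free
        completion = 0.0
        for i in order:
            s, b, r = contestants[i]
            pool_free += s                                # contestant i leaves the pool here
            completion = max(completion, pool_free + b + r)
        return order, completion

    # three hypothetical contestants with projected (swim, bike, run) times
    print(best_schedule([(2, 10, 3), (1, 4, 2), (3, 7, 7)]))   # -> ([2, 0, 1], 18.0)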
Grading Keys:
10 points for correct algorithm;
2 points for correct time complexity;
8 points for correctness proof;


