
Reference:

Johnsonbaugh, R., Discrete Mathematics, 6th edition, Pearson Prentice Hall, 2005.

The 5th edition of Johnsonbaugh may be used, but the 6th edition has some notation changes and some different problem numbering.

Logic and Proofs

Propositions

Logic is the study of reasoning. We can look at examples involving everyday sentences,

and proceed to more formal mathematical approaches.

Consider the following sentences:

Adelaide is the capital of South Australia.

There are 30 hours in a day.

The square of 12 is 144.

Every even number greater than 2 can be expressed as the sum of two prime numbers

(Goldbach’s conjecture).

Each of the statements is either true or false. The ﬁrst and third are obviously true, and

the second is obviously false. What do you think about the last?

A proposition is a statement that is either true or false (but not both). Whichever of

these (true or false) is the case is called the truth value of the proposition.

Some statements cannot be considered as propositions e.g.

Fred is a nerd.

The truth value is not well defined. However, propositions in mathematics are well defined.

Deﬁnition. Let p and q be propositions.

The conjunction of p and q, denoted p ∧ q, is the proposition

p and q.

The disjunction of p and q, denoted p ∨ q, is the proposition

p or q.


Example 1. If

p : It is raining,

q : It is cold,

then the conjunction of p and q is

p ∧ q : It is raining and it is cold.

The disjunction of p and q is

p ∨ q : It is raining or it is cold.

A binary operator on a set X assigns to each pair of elements in X an element of X. The

operator ∧ assigns to each pair of propositions p and q the proposition p∧q. Thus, ∧ and

∨ are binary operators on propositions.

Deﬁnition. The truth value of the proposition p ∧ q is deﬁned by the truth table

p q p ∧ q

T T T

T F F

F T F

F F F

In essence, p ∧ q is true provided that both p and q are true, and is false otherwise.

Deﬁnition. The truth value of the proposition p ∨ q is deﬁned by the truth table

p q p ∨ q

T T T

T F T

F T T

F F F

In essence, p ∨ q is true provided that p or q (or both) are true, and is false otherwise.

Deﬁnition. The negation of p, denoted ¬p, is the proposition

not p.

The truth value of the proposition ¬p is deﬁned by the truth table

p ¬p

T F

F T

In English, we sometimes write ¬p as “It is not the case that p.”


Example 2. We have

p : The digit 1 occurs twice in the ﬁrst 13 digits of π,

q : The digit 7 does not occur in the ﬁrst 13 digits of π,

r : The ﬁrst 13 digits of π sum to 60.

The compound proposition is

“Either 1 occurs twice in the ﬁrst 13 digits of π and the digit 7 occurs at least once in

the ﬁrst 13 digits of π or the ﬁrst 13 digits of π sum to 60.”

The proposition can be written symbolically as

(p ∧ ¬q) ∨ r.

The first digits of π are

π = 3.141592653589 79323846 . . . .

Then

(p ∧ ¬q) ∨ r = (T ∧ ¬T) ∨ F
             = (T ∧ F) ∨ F
             = F ∨ F
             = F,

and so the compound proposition is false.
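Truth-value computations like this can be checked mechanically. The following Python sketch (an illustration added to these notes, not from Johnsonbaugh) models the propositions as booleans, with the truth values read off from the digits of π:

```python
# Truth values read off from the first 13 digits of pi:
p = True   # the digit 1 occurs twice
q = True   # the digit 7 does not occur
r = False  # the digits sum to 61, not 60

result = (p and not q) or r   # (p AND (NOT q)) OR r
print(result)  # False
```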

Conditional Propositions and Logical Equivalence

Deﬁnition. If p and q are propositions, the proposition

if p then q

is called a conditional proposition and is denoted

p → q.

The proposition p is called the hypothesis (or antecedent) and the proposition q is called

the conclusion (or consequent).

Example 3. The lecturer states that if a student gets more than 50% then the student

will pass the course.

p : The student gets more than 50%,

q : The student passes the course.

If p and q are both true then p → q is true.

If p is true and q is false then p → q is false.


If p is false then p → q does not depend on the conclusion’s truth value, and so is regarded

as true.

This last case often causes some difficulty. We can think of it in this way: if the student does not get more than 50%, we have no grounds to regard p → q as false, and so it is considered true. This gives the following truth table.

Deﬁnition. The truth value of the conditional proposition p → q is deﬁned by the

following truth table:

p q p → q

T T T

T F F

F T T

F F T

Note that

p only if q

is considered logically the same as

if p then q.

An example of this is the two statements

“The student is eligible to take Maths 3 only if the student has passed Maths 2”

and

“If the student is eligible to take Maths 3 then the student has passed Maths 2,”

which are logically equivalent.

Deﬁnition. If p and q are propositions, the proposition

p if and only if q

is called a biconditional proposition and is denoted

p ↔ q.

The truth value of the proposition p ↔ q is deﬁned by the following truth table:

p q p ↔ q

T T T

T F F

F T F

F F T

Note that p ↔ q means that p is a necessary and suﬃcient condition for q. The proposition

“p if and only if q” can be written “p iﬀ q”.

Definition. Suppose that the propositions P and Q are made up of the propositions p_1, . . . , p_n. We say that P and Q are logically equivalent and write

P ≡ Q,

provided that, given any truth values of p_1, . . . , p_n, either P and Q are both true, or P and Q are both false.


Example 4. Verify the first of De Morgan's laws,

¬(p ∨ q) ≡ ¬p ∧ ¬q.

(Verifying the second, ¬(p ∧ q) ≡ ¬p ∨ ¬q, will be a tutorial exercise.)

Let

P = ¬(p ∨ q),

Q = ¬p ∧ ¬q.

p q p ∨ q ¬p ¬q ¬(p ∨ q) ¬p ∧ ¬q

T T T F F F F

T F T F T F F

F T T T F F F

F F F T T T T

Thus P and Q are logically equivalent.
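Logical equivalences over a handful of variables can also be verified by brute force. This Python sketch (illustrative, not from the text) checks both De Morgan laws by exhausting the four truth assignments:

```python
from itertools import product

# Check a two-variable logical equivalence by exhausting all four truth assignments.
def equivalent(f, g):
    return all(f(p, q) == g(p, q) for p, q in product([True, False], repeat=2))

# First De Morgan law: not(p or q)  ==  (not p) and (not q)
print(equivalent(lambda p, q: not (p or q),
                 lambda p, q: (not p) and (not q)))  # True

# Second law (the tutorial exercise): not(p and q)  ==  (not p) or (not q)
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q)))   # True
```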

Deﬁnition. The converse of the conditional proposition p → q is the proposition q → p.

The contrapositive (or transposition) of the conditional proposition p → q is the proposi-

tion ¬q → ¬p.

Theorem 1. The conditional proposition p → q and its contrapositive ¬q → ¬p are

logically equivalent.

Proof:

The truth table

p q p → q ¬q ¬p ¬q → ¬p

T T T F F T

T F F T F F

F T T F T T

F F T T T T

shows that p → q and ¬q → ¬p are logically equivalent.

Some theorems in mathematics are best proved by using the contrapositive. It is likely

that you have seen some in matriculation mathematics or Engineering Mathematics 1 or

2E.

Exercise: Show that p → q ≡ ¬p ∨ q.

Deﬁnition. A compound proposition is a tautology if it is true regardless of the truth

values of its component propositions.

A compound proposition is a contradiction if it is false regardless of the truth values of

its component propositions.

Example 5.

p ¬p p ∨ ¬p p ∧ ¬p

T F T F

F T T F

So p ∨ ¬p is a tautology, and p ∧ ¬p is a contradiction.

Exercise: Show that (p ∧ (p → q)) → q is a tautology.
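The same brute-force idea settles such questions mechanically. This Python sketch (illustrative) confirms that the proposition in the exercise is a tautology and that p ∧ ¬p is a contradiction:

```python
from itertools import product

def implies(a, b):
    # Truth table of the conditional: false only when a is true and b is false.
    return (not a) or b

# (p and (p -> q)) -> q is true under every assignment: a tautology.
tautology = all(implies(p and implies(p, q), q)
                for p, q in product([True, False], repeat=2))
print(tautology)  # True

# p and (not p) is false under every assignment: a contradiction.
contradiction = all(not (p and not p) for p in [True, False])
print(contradiction)  # True
```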


Quantiﬁers

Consider the statement

p : n is a prime number.

The statement p is not a proposition, because a proposition is either true or false. We

have that p is true if n = 7, and false if n = 8.

Deﬁnition. Let P(x) be a statement involving the variable x and let D be a set. We call

P a propositional function or predicate (with respect to D) if for each x in D, P(x) is a

proposition. We call D the domain of discourse of P.

Example 6. Let P(n) be the statement

P(n) : n is a prime number,

and let D be the set of positive integers.

Then P is a propositional function with domain of discourse D since for each n in D,

P(n) is a proposition which is either true or false. P is true for n = 2, 3, 5, 7, . . . , and is

false for n = 1, 4, 6, 8, . . . .

A propositional function P by itself is neither true nor false, but is true or false for each x

in its domain of discourse.

Example 7. Let P(x) be the statement

P(x) : x^2 − 5x + 6 = 0,

and let D be the set of positive integers.

Then P is a propositional function and is true for x = 2 or x = 3, and is false for all other

positive integers.

Deﬁnition. Let P be a propositional function with domain of discourse D. The statement

for every x, P(x)

is said to be a universally quantiﬁed statement. The symbol ∀ means “for every”. Thus

the statement

for every x, P(x)

may be written

∀xP(x).

The symbol ∀ is called a universal quantiﬁer.

The statement

∀xP(x)

is true if P(x) is true for every x in D. The statement

∀xP(x)

is false if P(x) is false for at least one x in D.


Example 8. The universally quantified statement “for every positive integer n greater than 1, 2^n − 1 is prime” is false.

n = 2:  2^2 − 1 = 3
n = 3:  2^3 − 1 = 7
n = 4:  2^4 − 1 = 15.

Since 15 = 3 × 5 is not prime, n = 4 is a counterexample.

We only need a counterexample to prove a universally quantified statement false. To prove the statement true, we must establish P(x) for every x in the domain of discourse.
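A counterexample search is easy to mechanise. The following Python sketch (illustrative) finds the first n > 1 for which 2^n − 1 fails to be prime:

```python
# Search for a counterexample to "2**n - 1 is prime for every integer n > 1".
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

n = 2
while is_prime(2 ** n - 1):
    n += 1
print(n, 2 ** n - 1)  # 4 15 -- the first counterexample, since 15 = 3 * 5
```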

Deﬁnition. Let P be a propositional function with domain of discourse D. The statement

there exists x, P(x)

is said to be an existentially quantiﬁed statement. The symbol ∃ means “there exists”.

Thus the statement

there exists x, P(x)

may be written

∃xP(x).

The symbol ∃ is called an existential quantifier.

The statement

∃xP(x)

is true if P(x) is true for at least one x in D. The statement

∃xP(x)

is false if P(x) is false for every x in D.

Example 9. The existentially quantified statement “for some positive integer n, 2^n − 1 is divisible by 11” is true.

n = 1:   2^1 − 1 = 1
n = 2:   2^2 − 1 = 3
n = 3:   2^3 − 1 = 7
n = 4:   2^4 − 1 = 15
n = 5:   2^5 − 1 = 31
n = 6:   2^6 − 1 = 63
n = 7:   2^7 − 1 = 127
n = 8:   2^8 − 1 = 255
n = 9:   2^9 − 1 = 511
n = 10:  2^10 − 1 = 1023.

The first case where 2^n − 1 is divisible by 11 is n = 10, since 1023 = 11 × 93.
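The same kind of search produces a witness for the existential statement. A Python sketch (illustrative):

```python
# Witness the existential statement by searching for the first suitable n.
n = next(n for n in range(1, 100) if (2 ** n - 1) % 11 == 0)
print(n, 2 ** n - 1)  # 10 1023, and 1023 = 11 * 93
```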


The variable x in the propositional function P(x) is called a free variable; that is, x is

“free” to roam over the domain of discourse.

The variable x in the universally quantiﬁed statement

∀xP(x)

or in the existentially quantiﬁed statement

∃xP(x)

is a bound variable, in that x is “bound” by the quantiﬁer.

Example 10. Verify that the existentially quantified statement “for some real number x, 1/(x^2 + 1) > 1” is false.

We must show that 1/(x^2 + 1) > 1 is false for all real numbers x.

Now 1/(x^2 + 1) > 1 is false when 1/(x^2 + 1) ≤ 1 is true. Then

0 ≤ x^2
1 ≤ x^2 + 1
1/(x^2 + 1) ≤ 1,

and so 1/(x^2 + 1) ≤ 1 is true for all real numbers x.

Theorem 2. Generalised De Morgan Laws for Logic

If P is a propositional function, each pair of propositions in (a) and (b) has the same

truth values (i.e. either both are true or both are false).

(a) ¬(∀xP(x)); ∃x¬P(x)

(b) ¬(∃xP(x)); ∀x¬P(x)

Proof of (a):

If ¬(∀xP(x)) is true, then ∀xP(x) is false.

Hence P(x) is false for at least one x in the domain of discourse, and ¬P(x) is true for

at least one x in the domain of discourse. Hence ∃x¬P(x) is true.

Similarly, if ¬(∀xP(x)) is false, then ∃x¬P(x) is false.


Example 10. (Continued)

We have that P(x) is the statement 1/(x^2 + 1) > 1, and aim to show that for all real numbers x, P(x) is false, i.e.

∃xP(x)

is false. We do this by verifying that for every real number x, ¬P(x) is true, i.e.

∀x¬P(x)

is true. Then

∀x¬P(x) is true
¬(∀x¬P(x)) is false
∃x¬(¬P(x)) is false (by De Morgan's laws)
∃xP(x) is false.

Example 11. Consider the well-known proverb

All that glitters is not gold.

This can be interpreted in English in two ways:

Every object that glitters is not gold.

Some object that glitters is not gold.

The intention is that the second is correct.

If we let P(x) be the propositional function “x glitters” and Q(x) be the propositional

function “x is gold”, then the ﬁrst interpretation is represented as

∀x(P(x) → ¬Q(x)),

and the second interpretation is represented as

∃x(P(x) ∧ ¬Q(x)).

In a similar way in which the logical equivalence of the Exercise

p → q ≡ ¬p ∨ q

was shown, we can show that

¬(p → q) ≡ p ∧ ¬q.

Hence

∃x(P(x) ∧ ¬Q(x)) ≡ ∃x¬(P(x) → Q(x))

≡ ¬(∀x(P(x) → Q(x)))

by De Morgan's laws.

We can read this last line as “it is not true that for all x, if x glitters then x is gold”.

This has been shown to be logically equivalent to “some object that glitters is not gold”.

The ambiguity comes from applying the negative to Q(x), rather than to the whole

statement.


Discrete Mathematics: Week 2

Nested Quantiﬁers

Example 1. Consider the statement

The sum of any two positive real numbers is positive.

This can be restated as: If x > 0 and y > 0, then x + y > 0. We need two universal

quantiﬁers, and can write the statement symbolically as

∀x∀y ((x > 0) ∧ (y > 0) → (x + y > 0)).

The domain of discourse is the set of all real numbers. Multiple quantiﬁers such as ∀x∀y

are said to be nested quantiﬁers.

The statement

∀x∀y P(x, y),

with domain of discourse D, is true if, for every x and for every y in D, P(x, y) is true.

The statement

∀x∀y P(x, y),

is false if there is at least one x and at least one y in D such that P(x, y) is false.

Example 2. Consider the statement

For any real number x, there is at least one real number y such that x + y = 0.

We know that this is true, as we can always choose y to be −x. We can write the statement

symbolically as

∀x ∈ R(∃y ∈ R, x + y = 0).

The domain of discourse is the set of all real numbers.

The statement

∀x∃y P(x, y),

with domain of discourse D, is true if, for every x in D, there is at least one y in D for

which P(x, y) is true.

The statement

∀x∃y P(x, y),

is false if there is at least one x in D such that P(x, y) is false for every y in D.


Example 3. Consider the nested quantiﬁer

∃y ∈ R, (∀x ∈ R, x + y = 0).

This can be stated as

There is some real number y such that x + y = 0 for all real numbers x.

This is false, for example choose x to be 1 −y.

The statement

∃x∀y P(x, y),

with domain of discourse D, is true if there is at least one x in D such that P(x, y) is true

for every y in D.

The statement

∃x∀y P(x, y),

is false if, for every x in D, there is at least one y in D such that P(x, y) is false.

Example 4. Consider the statement

∃x∃y ((x > 1) ∧ (y > 1) ∧ (xy = 6)),

with domain of discourse the set of positive integers. This statement is true as there is at

least one positive integer x and at least one positive integer y, both greater than 1, such

that xy = 6 e.g. x = 2, y = 3.

The statement

∃x∃y P(x, y),

with domain of discourse D, is true if there is at least one x in D and at least one y in D

such that P(x, y) is true.

The statement

∃x∃y P(x, y),

is false if, for every x in D and for every y in D, P(x, y) is false.

Example 5. Using the generalized De Morgan laws for logic, the negation of

∀x∃y P(x, y)

is

¬(∀x∃y P(x, y)) ≡ ∃x¬(∃y P(x, y)) ≡ ∃x∀y ¬P(x, y).

Note that in the negation, ∀ and ∃ are interchanged.
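Over a finite domain of discourse, ∀ corresponds to Python's all() and ∃ to any(), so the interchange of quantifiers under negation can be observed directly. A sketch (illustrative; the small domain D is an arbitrary choice):

```python
# Over a finite domain, "for all" is all() and "there exists" is any(), so the
# generalized De Morgan laws for quantifiers can be observed directly.
D = range(-3, 4)              # a small stand-in domain (an arbitrary choice)
P = lambda x, y: x + y == 0

forall_exists = all(any(P(x, y) for y in D) for x in D)
negation      = any(all(not P(x, y) for y in D) for x in D)

print(forall_exists)                     # True: every x in D has a negative in D
print(negation == (not forall_exists))   # True: the two sides always agree
```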


Proofs

A mathematical system consists of

axioms which are assumed true;

definitions which are used to create new concepts in terms of existing ones;

undeﬁned terms which are not speciﬁcally deﬁned but which are implicitly deﬁned

by the axioms.

Within a mathematical system we can derive theorems.

A theorem is a proposition that has been proved to be true.

A lemma is a theorem that is not too interesting in its own right but is useful in

proving another theorem.

A corollary is a theorem that follows quickly from another theorem.

A proof is an argument that establishes the truth of a theorem.

Example 6. The real numbers furnish an example of a mathematical system. Among

the axioms are:

• For all real numbers x and y, xy = yx.

• There is a subset P of real numbers satisfying

(a) If x and y are in P, then x + y and xy are in P.

(b) If x is a real number, then exactly one of the following statements is true:

x is in P, x = 0, −x is in P.

Multiplication is implicitly deﬁned by the ﬁrst axiom.

The elements of P are called positive real numbers.

The absolute value |x| of a real number x is deﬁned to be x if x is positive or 0, and −x

otherwise.

Example 7. Theorems about real numbers are

• x · 0 = 0 for every real number x.

• For all real numbers x, y and z, if x ≤ y and y ≤ z, then x ≤ z.

Example 8. An example of a lemma about real numbers is

• If n is a positive integer, then either n −1 is a positive integer or n −1 = 0.

Not too interesting in its own right, but can be used to prove other results.


Theorems are often of the form

For all x_1, x_2, . . . , x_n, if p(x_1, x_2, . . . , x_n), then q(x_1, x_2, . . . , x_n).

This universally quantified statement is true provided that the conditional proposition

if p(x_1, x_2, . . . , x_n), then q(x_1, x_2, . . . , x_n)

is true for all x_1, x_2, . . . , x_n in the domain of discourse.

A direct proof assumes that p(x_1, x_2, . . . , x_n) is true and, using this and other axioms, definitions and previously derived theorems, shows directly that q(x_1, x_2, . . . , x_n) is true.

Example 9. A particular lemma is

The product of two odd integers is odd.

Proof:

Let the two odd integers be 2m+1 and 2n+1, where m and n are integers. Their product

is

(2m + 1)(2n + 1) = 4mn + 2m + 2n + 1

= 2(2mn + m + n) + 1,

which is odd.

A second technique of proof is proof by contradiction.

A proof by contradiction establishes p → q by assuming that the hypothesis p is true and that the conclusion q is false, and then, using p and ¬q as well as axioms, definitions and theorems, deriving a contradiction. A contradiction is a proposition of the form r ∧ ¬r.

This is sometimes called an indirect proof.

Proof by contradiction is justiﬁed by noting that the propositions

p → q and p ∧ ¬q → r ∧ ¬r

are equivalent. The truth table is

p q r p → q p ∧ ¬q r ∧ ¬r p ∧ ¬q → r ∧ ¬r

T T T T F F T

T T F T F F T

T F T F T F F

T F F F T F F

F T T T F F T

F T F T F F T

F F T T F F T

F F F T F F T


Example 10. Prove that the root mean square of two numbers a and b, a > 0 and b > 0, is equal to or greater than the arithmetic mean, i.e.

√((a^2 + b^2)/2) ≥ (a + b)/2.

Proof:

Assume

√((a^2 + b^2)/2) < (a + b)/2.

Since both sides are positive, we can square without changing the direction of the inequality:

(a^2 + b^2)/2 < (a + b)^2/4
2(a^2 + b^2) < a^2 + 2ab + b^2
a^2 − 2ab + b^2 < 0
(a − b)^2 < 0.

This is a contradiction, and hence

√((a^2 + b^2)/2) ≥ (a + b)/2.

Example 11.

Theorem: √2 is irrational, that is, √2 cannot be represented as m/n, where m and n are integers.

The hypotheses are that rational numbers (m/n, where m and n are integers with no common factors and n ≠ 0) and square roots are defined.

Proof:

Assume that √2 is rational, so that √2 = m/n, where m and n are integers with no common factors and n ≠ 0. Then

√2 = m/n
2 = m^2/n^2
m^2 = 2n^2.

It is an easily proved lemma that if m^2 is even, then m is even. Since m is even, m = 2k for some integer k, and m^2 = 4k^2. Then

4k^2 = 2n^2
2k^2 = n^2,

and so n^2 is even, hence n is even. Then m and n have a common factor, namely 2, and so there is a contradiction. Hence √2 is irrational.


Proof by contrapositive is based on the fact that

p → q ≡ ¬q → ¬p.

The idea is to show that the negation of the conclusion implies the negation of the hypothesis.

Example 12.

Theorem: If x and y are real numbers and x + y ≥ 2, then either x ≥ 1 or y ≥ 1.

Proof:

Let p be “x + y ≥ 2” and q be “either x ≥ 1 or y ≥ 1”.

Assume ¬q: x < 1 and y < 1.

Then x + y < 1 + 1, or x + y < 2, which is ¬p. Proven.

Proof by cases is used when the original hypothesis naturally divides into various cases.

Example 13.

Theorem: |x + y| ≤ |x| + |y| for all real x and y.

Proof:

Consider the four cases as follows, where each of x, y is nonnegative or negative.

1. x, y ≥ 0:

Then x + y ≥ 0, so |x + y| = x + y = |x| +|y|.

2. x, y < 0:

Then x + y < 0, so |x + y| = −(x + y) = −x −y = |x| +|y|.

3. x ≥ 0, y < 0:

Then |x + y| is either x + y or −(x + y). In the first case, x + y < x = |x| ≤ |x| + |y|, and in the second,

−(x + y) = −x − y ≤ −y = |y| ≤ |x| + |y|.

4. x < 0, y ≥ 0:

The same as case 3, swapping the roles of x and y.

Another form of proof is called an existence proof. An example of this is if we wanted

to show that

∃xP(x)

is true. It is only necessary to ﬁnd a member x in the domain of discourse for which P(x)

is true.


Definition. An argument is a sequence of propositions written

p_1
p_2
.
.
.
p_n
∴ q

or

p_1, p_2, . . . , p_n / ∴ q.

The propositions p_1, p_2, . . . , p_n are called the hypotheses and the proposition q is called the conclusion. The argument is valid provided that if p_1, p_2, . . . , p_n are all true, then q must be true; otherwise the argument is invalid (or a fallacy).

Example 14. Determine whether the argument

p → q
¬q
∴ ¬p

is valid.

(a) Using a truth table:

p q p → q ¬q ¬p

T T T F F

T F F T F

F T T F T

F F T T T

Note that the important line is the last. Why?

(b) A verbal argument proceeds: ¬q is true when q is false, so p → q is only true when

p is false (as q is false). Hence ¬p is true.
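Validity checks like (a) can be automated: an argument is valid when every truth assignment that makes all the hypotheses true also makes the conclusion true. A Python sketch (illustrative) for the argument above, the form known as modus tollens:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# The argument p -> q, not q, therefore not p (modus tollens) is valid:
# every row of the truth table with all hypotheses true has a true conclusion.
valid = all(
    (not p)                          # the conclusion
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and (not q)     # keep rows where both hypotheses hold
)
print(valid)  # True
```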

Example 15. Is the following argument valid?

If I don't study hard, then I don't get high distinctions.
I study hard.
∴ I get high distinctions.

Let p be “I study hard”, and let q be “I get high distinctions”. Then the argument is

¬p → ¬q
p
∴ q

(a) The truth table is

p q ¬p ¬q ¬p → ¬q
T T F F T
T F F T T
F T T F F
F F T T T


The first two lines are the important ones, since both hypotheses are true there. The second line has a false conclusion, which shows that the argument is invalid.

(b) Alternatively, assume that p is true and q is false. Then ¬p → ¬q ≡ F → T ≡ T, so that both hypotheses can be true with a false conclusion.

Mathematical Induction

Example 16. An arithmetic teacher sets his class the problem of adding up all the

integers from 1 to 100. Can this be done in under 10 seconds? There is a piece of folklore

relating this to the mathematician Carl Friedrich Gauss (1777–1855) as a boy.

If we pair the numbers

1 + 2 + 3 + · · · + 49 + 50

100 + 99 + 98 + · · · + 52 + 51

we can see that each vertical pair adds to 101. Since there are 50 pairs, the sum is

50 ×101 = 5050.

The general case, the sum of all integers from 1 to n, is

1 + 2 + 3 + · · · + n = n(n + 1)/2,  n = 1, 2, 3, . . . .

The Principle of Mathematical Induction is a process by which a set of theorems

corresponding to the non-negative integers can be proven.

Deﬁnition. Suppose that we have a propositional function S(n) whose domain of dis-

course is the set of positive integers. Suppose that

S(1) is true;

for all n ≥ 1, if S(n) is true, then S(n + 1) is true.

Then S(n) is true for every positive integer n.

The ﬁrst part, S(1) is true, is called the Basis Step, and the second part is called the

Inductive Step. It is not necessary to start with n = 1, sometimes n = 0, 2, 3, . . . will

occur.

Example 16. (Continued)

Basis step: For n = 1, LHS = 1 and RHS = (1 × 2)/2 = 1. True.

Inductive step: Assume that

1 + 2 + 3 + · · · + n = n(n + 1)/2.


Then

1 + 2 + 3 + · · · + n + (n + 1)
    = n(n + 1)/2 + (n + 1), by the assumption
    = (1/2)(n + 1)(n + 2), factorise whenever possible
    = (n + 1)(n + 2)/2, correct form.

Hence, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.
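A numerical spot-check is no substitute for the induction proof, but it is a useful safeguard against algebra slips. A Python sketch (illustrative):

```python
# Spot-check the closed form proved by induction (a check, not a proof --
# induction is what covers all n at once).
for n in range(1, 101):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("1 + 2 + ... + n == n(n + 1)/2 holds for n = 1..100")
```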

Example 17. Show that

1 + 4 + 7 + 10 + · · · + (3n − 2) = n(3n − 1)/2,  n ≥ 1.

Basis step: For n = 1, LHS = 1 and RHS = (1 × 2)/2 = 1. True.

Inductive step: Assume that

1 + 4 + 7 + · · · + (3n − 2) = n(3n − 1)/2.

Then

1 + 4 + 7 + · · · + (3n − 2) + (3n + 1)
    = n(3n − 1)/2 + (3n + 1), by the assumption
    = (1/2)(3n^2 − n + 6n + 2), can't factorise here
    = (1/2)(3n^2 + 5n + 2)
    = (n + 1)(3n + 2)/2
    = (n + 1)(3(n + 1) − 1)/2, correct form.

Hence, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.


Discrete Mathematics: Week 3

Mathematical Induction (Continued)

Correct formulae are given in advance. How do we know the correct formula?

Experiment to find a pattern, e.g. the sum of the odd integers

S_{2n−1} = 1 + 3 + 5 + 7 + · · · + (2n − 1).

n   S_{2n−1}
1   1
2   4
3   9
4   16

It appears that S_{2n−1} = n^2.

Example 1. Show that

S(n) : 1^2 + 2^2 + 3^2 + · · · + n^2 = n(n + 1)(2n + 1)/6,  n ≥ 1,

is true.

Basis Step: For n = 1, LHS = 1 and RHS = (1 × 2 × 3)/6 = 1. True.

Inductive Step: Assume that

1^2 + 2^2 + 3^2 + · · · + n^2 = n(n + 1)(2n + 1)/6.

Then

1^2 + 2^2 + 3^2 + · · · + n^2 + (n + 1)^2
    = n(n + 1)(2n + 1)/6 + (n + 1)^2, by the assumption
    = (1/6)(n + 1)[n(2n + 1) + 6(n + 1)], factorise whenever possible
    = (1/6)(n + 1)(2n^2 + 7n + 6)
    = (n + 1)(n + 2)(2n + 3)/6
    = (n + 1)(n + 2)[2(n + 1) + 1]/6, correct form.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.


Example 2. Show that

S(n) : 1/(1·2) + 1/(2·3) + 1/(3·4) + · · · + 1/(n(n + 1)) = n/(n + 1),  n ≥ 1,

is true.

Work a few terms:

1/(1·2) = 1/2
1/(1·2) + 1/(2·3) = 1/2 + 1/6 = 2/3
1/(1·2) + 1/(2·3) + 1/(3·4) = 1/2 + 1/6 + 1/12 = 3/4

Basis Step: For n = 1, LHS = 1/(1·2) = 1/2 and RHS = 1/2. True.

Inductive Step: Assume that

1/(1·2) + 1/(2·3) + 1/(3·4) + · · · + 1/(n(n + 1)) = n/(n + 1).

Then

1/(1·2) + 1/(2·3) + · · · + 1/(n(n + 1)) + 1/((n + 1)(n + 2))
    = n/(n + 1) + 1/((n + 1)(n + 2)), by the assumption
    = [n(n + 2) + 1]/((n + 1)(n + 2)), factorise whenever possible
    = (n + 1)^2/((n + 1)(n + 2))
    = (n + 1)/(n + 2), correct form.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.

Example 3. Divisibility example, Johnsonbaugh 1.7.5.

Show that 5^n − 1 is divisible by 4 for all n ≥ 1.

Basis Step: For n = 1, 5^1 − 1 = 4 is divisible by 4. True.

Inductive Step: Assume that 5^n − 1 is divisible by 4.

Then we wish to prove that 5^(n+1) − 1 is divisible by 4.

5^(n+1) − 1 = 5 × 5^n − 1
            = (5^n − 1) + 4 × 5^n.

The first part is divisible by 4 by the assumption, and 4 × 5^n is divisible by 4, hence true.

Alternatively, put 5^n − 1 = 4m, where m is an integer. Then

5^(n+1) − 1 = 5 × 5^n − 1
            = 5 × (4m + 1) − 1, by the assumption
            = 4(5m) + 4,

which is divisible by 4.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.
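The divisibility result and the key identity in the inductive step can be spot-checked numerically (a check, not a proof). A Python sketch (illustrative):

```python
# Spot-check the divisibility result.
assert all((5 ** n - 1) % 4 == 0 for n in range(1, 50))

# The identity behind the inductive step: 5**(n+1) - 1 == (5**n - 1) + 4 * 5**n.
assert all(5 ** (n + 1) - 1 == (5 ** n - 1) + 4 * 5 ** n for n in range(1, 50))
print("ok")
```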

Example 4. Geometric sum, Johnsonbaugh 1.7.4.

Use induction to show that, if r ≠ 1,

a + ar^1 + ar^2 + · · · + ar^n = a(r^(n+1) − 1)/(r − 1)

for all n ≥ 0.

Basis Step: For n = 0, LHS = a and RHS = a(r − 1)/(r − 1) = a. True.

Inductive Step: Assume that

a + ar^1 + ar^2 + · · · + ar^n = a(r^(n+1) − 1)/(r − 1).

Then

a + ar^1 + ar^2 + · · · + ar^n + ar^(n+1)
    = a(r^(n+1) − 1)/(r − 1) + ar^(n+1), by the assumption
    = [a(r^(n+1) − 1) + ar^(n+1)(r − 1)]/(r − 1)
    = a(r^(n+1) − 1 + r^(n+2) − r^(n+1))/(r − 1)
    = a(r^(n+2) − 1)/(r − 1).

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 0.
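A spot-check with exact rational arithmetic (so no floating-point error intrudes) can back up the induction. A Python sketch, with a and r chosen arbitrarily:

```python
from fractions import Fraction

# Spot-check the geometric-sum formula with exact rational arithmetic,
# so floating-point error cannot intrude (requires r != 1).
a, r = Fraction(3), Fraction(1, 2)   # arbitrary choices for the check
for n in range(20):
    lhs = sum(a * r ** k for k in range(n + 1))
    rhs = a * (r ** (n + 1) - 1) / (r - 1)
    assert lhs == rhs
print("formula verified for n = 0..19")
```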

Example 5. Show that

S(n) : 4^n > 5n^2,  n ≥ 3,

is true.

This time the statement only holds from n = 3 onwards: for n = 1 we have 4^1 = 4 < 5 × 1^2 = 5, and for n = 2, 4^2 = 16 < 5 × 2^2 = 20.

Basis Step: For n = 3, LHS = 4^3 = 64 and RHS = 5 × 3^2 = 45. True.

Inductive Step: Assume that 4^n > 5n^2, n ≥ 3.

We want to show that 4^(n+1) > 5(n + 1)^2. Then

4^(n+1) = 4(4^n)
        > 4(5n^2), by the assumption
        = 5(n^2 + 2n^2 + n^2)
        > 5(n^2 + 2n + 1), since n ≥ 3
        = 5(n + 1)^2.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 3.

Example 6. Tiling with trominos.

Solomon W. Golomb introduced polyominos in 1954. They can be used in tiling problems.

For example, the Tetris game uses tetrominos, which will not tile a rectangle but which can tile the plane. There are five free tetrominos, and seven if they are considered to be one-sided (see figure).

The Dutch artist M.C. Escher produced many woodcut and lithograph art works of tiling

problems.

There are two trominos, the right and straight trominos. We will henceforth refer to the

right tromino just as a tromino.


Trominos can tile a deficient board, that is, an n × n board with one square missing, provided n ≠ 3k (if n = 3k, then n^2 − 1 is not divisible by 3). We can see that if n = 3k + 1, then

n^2 − 1 = (3k + 1)^2 − 1
        = 9k^2 + 6k,

which is divisible by 3.

If n = 3k + 2, then

n^2 − 1 = (3k + 2)^2 − 1
        = 9k^2 + 12k + 3,

which is divisible by 3.

We can tile all deficient boards with n ≠ 3k, except for some deficient 5 × 5 boards – see the figure below. Here we will prove by mathematical induction that all 2^n × 2^n deficient boards can be tiled with trominos.

Basis Step: For n = 1, the 2 × 2 deficient board is a tromino and can be tiled.

Inductive Step: Assume that any 2^n × 2^n deficient board can be tiled.

Then we can divide the 2^(n+1) × 2^(n+1) deficient board into four 2^n × 2^n boards, with one board having the missing square anywhere, and the other three boards given missing squares in their corners by placing one tromino as shown in the figure.

We can tile all four 2^n × 2^n deficient boards by the hypothesis, and can hence tile the 2^(n+1) × 2^(n+1) deficient board.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, we can tile any 2^n × 2^n deficient board.

All examples so far have involved the Weak Form of Mathematical Induction, where if S(n) is true, then S(n + 1) is true. The Strong Form of Mathematical Induction allows us to assume the truth of all of the preceding statements.

Definition. Strong Form of Mathematical Induction

Suppose that we have a propositional function S(n) whose domain of discourse is the set of integers greater than or equal to n_0. Suppose that

S(n_0) is true;

for all n > n_0, if S(k) is true for all k with n_0 ≤ k < n, then S(n) is true.

Then S(n) is true for every integer n ≥ n_0.

Example 7. Recurrence relation example.

Consider the sequence a_1, a_2, a_3, . . . with a_1 = 1, a_2 = 3 and a_{n+1} = 3a_n − 2a_{n−1}. Then

a_3 = 3 × 3 − 2 × 1 = 7
a_4 = 3 × 7 − 2 × 3 = 15
a_5 = 3 × 15 − 2 × 7 = 31.

It would seem that a_n = 2^n − 1.

Basis Steps: For n = 1, 2, a_1 = 1 = 2^1 − 1 and a_2 = 3 = 2^2 − 1. We require the two preceding statements to be true.

Inductive Step: Assume that a_i = 2^i − 1 for 1 ≤ i ≤ n.


We want to prove that a_{n+1} = 2^{n+1} − 1. Now

a_{n+1} = 3a_n − 2a_{n−1}, using the definition
        = 3(2^n − 1) − 2(2^{n−1} − 1), using the assumption
        = 2^n(3 − 1) − 3 + 2
        = 2^{n+1} − 1.

Since the Basis Steps and Inductive Step have been verified, by the Principle of Mathematical Induction, the formula a_n = 2^n − 1 is true for all n ≥ 1.
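The recurrence and the closed form can be compared directly. A Python sketch (illustrative):

```python
# Generate a_1 = 1, a_2 = 3, a_{n+1} = 3 a_n - 2 a_{n-1} and compare with
# the closed form 2**n - 1 proved by the strong form of induction.
a = [1, 3]
for _ in range(30):
    a.append(3 * a[-1] - 2 * a[-2])

assert all(a[n - 1] == 2 ** n - 1 for n in range(1, len(a) + 1))
print(a[:5])  # [1, 3, 7, 15, 31]
```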

Example 8. Show that postage of six cents or more can be achieved by using only 2-cent

and 7-cent stamps.

Basis Steps: For n = 6, 7. For six cent postage, use three 2-cent stamps and for seven

cent postage, use one 7-cent stamp.

Inductive Step: Assume n ≥ 8 and assume that postage of k cents or more can be achieved

by using only 2-cent and 7-cent stamps for 6 ≤ k < n.

By the assumption, we can make postage of n − 2 cents. Then add a 2-cent stamp to

make postage of n cents. The inductive step is complete.

Since the Basis Steps and Inductive Step have been veriﬁed, by the Principle of Mathe-

matical Induction we can make postage for all n ≥ 6.
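The inductive argument also yields an explicit construction: use one 7-cent stamp exactly when n is odd, and make up the rest with 2-cent stamps. A Python sketch (illustrative):

```python
# An explicit construction behind the postage result: one 7-cent stamp
# exactly when n is odd, 2-cent stamps for the rest (valid for n >= 6).
def stamps(n):
    sevens = n % 2
    twos = (n - 7 * sevens) // 2
    return twos, sevens

for n in range(6, 200):
    twos, sevens = stamps(n)
    assert twos >= 0 and 2 * twos + 7 * sevens == n
print(stamps(11))  # (2, 1): two 2-cent stamps and one 7-cent stamp
```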

Example 9. Prime numbers example.

S(n) : Every positive integer greater than 1 is the product of primes.

Basis Step: For n = 2, 2 is prime, and is the product of one number, itself.

Inductive Step: Assume that i is the product of primes for 2 ≤ i ≤ n.

If n + 1 is prime, it is the product of one prime.

If n + 1 is not prime, then n + 1 = pq where p and q are integers,

2 ≤ p ≤ q ≤ n.

Since by the assumption, p and q are the product of primes, then n+1 = pq is the product

of primes.

Since the Basis Step and Inductive Step have been veriﬁed, by the Principle of Mathe-

matical Induction S(n) is true for all n ≥ 2.


The Language of Mathematics

Sets

A set is a collection of objects, known as elements. It is described by listing the elements in braces, e.g.

A = { 1, 2, 3, 4 }.

Order does not matter e.g.

A = { 1, 2, 3, 4 } = { 2, 1, 4, 3 }.

Elements are assumed diﬀerent e.g.

A = { 1, 2, 3, 4 } = { 1, 2, 2, 3, 4 }.

A large or inﬁnite set can be deﬁned by a property e.g.

B = { 1, 3, 5, 7, 9, 11, 13, 15, 17, 19 },

or

B = { x| x is a positive odd integer less than 20 } ,

and

C = { x| x is a positive integer divisible by 3 } .

Read the symbol | as “such that”.

If X is a ﬁnite set, we let

|X| = the number of elements of X.

e.g. |A| = 4, |B| = 10.

If x is an element of A, we write x ∈ A. If not, we write x ∉ A. e.g.

4 ∈ A, 4 ∉ B, 4 ∉ C.

The empty set is the set with no elements, denoted ∅. Hence ∅ = { }, |∅| = 0. e.g.

{ x | x ∈ R and x² + 1 = 0 } = ∅.

What is |{ ∅ }|?

Two sets X and Y are equal if they have the same elements.

X = Y if for every x, if x ∈ X then x ∈ Y and for every x, if x ∈ Y then x ∈ X, i.e.

X = Y if ∀x((x ∈ X → x ∈ Y ) ∧ (x ∈ Y → x ∈ X)).


Example 1.

X = { x | x² + x − 6 = 0 }, Y = { 2, −3 }.

If x ∈ X, then

x² + x − 6 = 0
(x − 2)(x + 3) = 0
x = 2, −3
x ∈ Y.

If x ∈ Y , then if x = 2,

x² + x − 6 = 0
x ∈ X,

and if x = −3, then

x² + x − 6 = 0
x ∈ X.

X is a subset of Y if every element of X is an element of Y . We write X ⊆ Y .

In symbols, X is a subset of Y if

∀x(x ∈ X → x ∈ Y ),

e.g.

{ 1, 2 } ⊆ { 1, 2, 3, 4 }.

We have

X ⊆ X

∅ ⊆ X

i.e. ∀x(x ∈ ∅ → x ∈ X)

x ∈ ∅ is false, hence x ∈ ∅ → x ∈ X is true.

If X ⊆ Y and X ≠ Y , then X is a proper subset of Y , denoted X ⊂ Y . We have

If X ⊆ Y and Y ⊆ X, then X = Y .

If X ⊆ Y and Y is ﬁnite, then |X| ≤ |Y |.

If X ⊆ Y , Y is ﬁnite, and |X| = |Y |, then X = Y .


Discrete Mathematics: Week 4

Sets (Continued)

The power set of the set X, denoted P(X), is the set of all subsets of X.

Example 2. B = { a, b }, A = { a, b, c }.

P(B) = { ∅, { a }, { b }, { a, b } }, |P(B)| = 4.

P(A) = { ∅, { a }, { b }, { c }, { a, b }, { a, c }, { b, c }, { a, b, c } }, |P(A)| = 8.

Theorem. If |X| = n, then |P(X)| = 2ⁿ.

Proof: Johnsonbaugh uses a proof by Mathematical Induction. The idea is that exactly

half of the subsets of X contain a particular element of X. This can be seen by pairing

the subsets e.g. for the set A in Example 2.

∅ { a }

{ b } { a, b }

{ c } { a, c }

{ b, c } { a, b, c }

Basis Step: If n = 0, we have the empty set, which has only one subset, itself. Then

|∅| = 0, and 2⁰ = 1. True.

Inductive Step: Assume that a set with n elements has a power set of size 2ⁿ.

Let X be a set with n + 1 elements. Remove one element, x, from X to form a set Y .

Then Y has n elements, and by the assumption

|P(Y )| = 2ⁿ.

But since by the pairing argument

|P(X)| = 2 |P(Y )| ,

then

|P(X)| = 2ⁿ⁺¹.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, if |X| = n, then |P(X)| = 2ⁿ is true for all n ≥ 0.

An alternative proof is to suppose that a 1 represents the presence of an element in a subset, and a 0 represents its absence.

Then all subsets of X can be represented by a binary string of length |X| = n, and there are 2ⁿ such strings.
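The bit-string argument translates directly into code: a sketch that builds P(X) by letting bit i of a counter k decide whether the i-th element belongs to the k-th subset (the function name is illustrative).

```python
def power_set(xs):
    """All subsets of the list xs, one per binary string of length len(xs):
    bit i of k decides whether xs[i] belongs to the k-th subset."""
    n = len(xs)
    return [{xs[i] for i in range(n) if k >> i & 1} for k in range(2 ** n)]

subsets = power_set(['a', 'b', 'c'])
print(len(subsets))   # → 8, i.e. 2^3
```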


Set Operations

Given two sets X and Y :

Their union is X ∪ Y = { x| x ∈ X or x ∈ Y }.

Their intersection is X ∩ Y = { x| x ∈ X and x ∈ Y }.

Their diﬀerence is X − Y = { x | x ∈ X and x ∉ Y }.

Example 3. A = { a, b, c, d, e } and B = { 1, 2, 3, 4, a, b }.

A∪ B = { a, b, c, d, e, 1, 2, 3, 4 }

A∩ B = { a, b }

A−B = { c, d, e }

B −A = { 1, 2, 3, 4 }
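Python's built-in set type supports these operations directly, so Example 3 can be checked as:

```python
A = {'a', 'b', 'c', 'd', 'e'}
B = {1, 2, 3, 4, 'a', 'b'}

print(A | B)   # union
print(A & B)   # intersection:  {'a', 'b'}
print(A - B)   # difference:    {'c', 'd', 'e'}
print(B - A)   # difference:    {1, 2, 3, 4}
```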

Sets X and Y are disjoint if X ∩ Y = ∅. For example,

X = { 1, 2, 3 } and Y = { 4, 5 }

are disjoint.

A collection of sets S is pairwise disjoint if X and Y are disjoint for distinct X, Y in S.

For example,

S = { { a, b }, { c, d }, { e, f } }

is pairwise disjoint.

If we deal with sets which are subsets of a set U, then U is called the universal set.

The set U − X is the complement of X, denoted X̄.

Example 4. U = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }

A = { 1, 3, 5, 7, 9 }, B = { 1, 2, 3, 8 }, C = { 3, 6, 8, 9 }

Then

A ∪ B = { 1, 2, 3, 5, 7, 8, 9 }
(A ∪ B)̄ = { 0, 4, 6 }
A ∪ B ∪ C = { 1, 2, 3, 5, 6, 7, 8, 9 }
(A ∪ B ∪ C)̄ = { 0, 4 }
C ∩ (A ∪ B)̄ = { 6 }

Venn diagrams provide a pictorial view of sets. U, the universal set, is depicted as a

rectangle. Sets A, B, C, contained in U, are drawn as circles.

[Venn diagrams of A ∪ B, A ∩ B, and A − B.]

[Venn diagrams of Ā and C ∩ (A ∪ B)̄.]

In Example 4, we have

[Venn diagram of A, B, C in U with regions labelled: 5, 7 in A only; 2 in B only; 6 in C only; 1 in A ∩ B only; 9 in A ∩ C only; 8 in B ∩ C only; 3 in A ∩ B ∩ C; 0, 4 outside all three sets.]

Theorem 2.1.12 Let U be a universal set and let A, B, and C be subsets of U. The

following properties hold.

(a) Associative laws:

(A∪ B) ∪ C = A ∪ (B ∪ C), (A∩ B) ∩ C = A ∩ (B ∩ C)

(b) Commutative laws:

A∪ B = B ∪ A, A∩ B = B ∩ A

(c) Distributive laws:

A∩ (B ∪ C) = (A∩ B) ∪ (A∩ C), A∪ (B ∩ C) = (A ∪ B) ∩ (A∪ C)

(d) Identity laws:

A ∪ ∅ = A, A∩ U = A

(e) Complement laws:

A ∪ Ā = U, A ∩ Ā = ∅

(f) Idempotent laws:

A ∪ A = A, A∩ A = A


(g) Bound laws:

A∪ U = U, A∩ ∅ = ∅

(h) Absorption laws:

A∪ (A∩ B) = A, A∩ (A ∪ B) = A

(i) Involution law:

Ā̄ = A

(j) 0/1 laws:

∅̄ = U, Ū = ∅

(k) De Morgan's laws for sets:

(A ∪ B)̄ = Ā ∩ B̄, (A ∩ B)̄ = Ā ∪ B̄

Proof of the ﬁrst distributive law.

We have the Venn diagrams:

[Venn diagrams of B ∪ C, A ∩ (B ∪ C), A ∩ B, and A ∩ C.]

Mathematically, the proof is as follows. Let

x ∈ A∩ (B ∪ C).

Then

x ∈ A and x ∈ B ∪ C
x ∈ A and (x ∈ B or x ∈ C)
(x ∈ A and x ∈ B) or (x ∈ A and x ∈ C)
x ∈ A ∩ B or x ∈ A ∩ C,

so that

x ∈ (A∩ B) ∪ (A∩ C).


This only proves that

A ∩ (B ∪ C) ⊆ (A∩ B) ∪ (A ∩ C).

Let

x ∈ (A∩ B) ∪ (A∩ C).

Then

x ∈ A ∩ B or x ∈ A ∩ C
(x ∈ A and x ∈ B) or (x ∈ A and x ∈ C)
x ∈ A and (x ∈ B or x ∈ C)
x ∈ A and x ∈ B ∪ C,

so that

x ∈ A∩ (B ∪ C).

Proof of the ﬁrst De Morgan law.

We have the Venn diagrams:

[Venn diagrams of A ∪ B, (A ∪ B)̄ = Ā ∩ B̄, Ā, and B̄.]

Mathematically, the proof is as follows. Let

x ∈ (A ∪ B)̄.

Then

x ∉ A ∪ B
x ∉ A and x ∉ B
x ∈ Ā and x ∈ B̄
x ∈ Ā ∩ B̄.

This only proves that

(A ∪ B)̄ ⊆ Ā ∩ B̄.

Let

x ∈ Ā ∩ B̄.

Then

x ∈ Ā and x ∈ B̄
x ∉ A and x ∉ B
x ∉ A ∪ B
x ∈ (A ∪ B)̄.
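Both De Morgan laws can be spot-checked on finite sets. A sketch over a small universal set (the helper name is illustrative):

```python
U = set(range(10))
A = {1, 3, 5, 7, 9}
B = {1, 2, 3, 8}

def comp(X):
    """Complement of X relative to the universal set U."""
    return U - X

# Complement of a union is the intersection of complements, and dually.
print(comp(A | B) == comp(A) & comp(B))   # → True
print(comp(A & B) == comp(A) | comp(B))   # → True
```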

If S = { A₁, A₂, . . . , Aₙ }, then the union of many sets is

∪S = { x | x ∈ Aᵢ for some Aᵢ ∈ S }.

The intersection of many sets is

∩S = { x | x ∈ Aᵢ for all Aᵢ ∈ S }.

We write

∪S = ⋃ᵢ₌₁ⁿ Aᵢ, ∩S = ⋂ᵢ₌₁ⁿ Aᵢ.

For inﬁnitely many sets

S = { A₁, A₂, A₃, . . . }

this becomes

∪S = ⋃ᵢ₌₁^∞ Aᵢ, ∩S = ⋂ᵢ₌₁^∞ Aᵢ.

Example 5. Let

S = { A₁, A₂, A₃, . . . }

where

Aₙ = { n, n + 1, n + 2, . . . }.

That is,

A₁ = { 1, 2, 3, . . . }
A₂ = { 2, 3, 4, . . . }
A₃ = { 3, 4, 5, . . . },

etc.

Obviously

∪S = { 1, 2, 3, . . . } = A₁,

and

∩S = ∅,

as

1 ∉ ∩S as 1 ∉ A₂, A₃, . . .
2 ∉ ∩S as 2 ∉ A₃, A₄, . . .

etc.

A collection S of non-empty subsets of X is a partition of X if each element of X belongs to exactly one member of S. That is, S is pairwise disjoint and

∪S = X.

Example 6.

(a) X = { 1, 2, 3, 4, 5, 6, 7, 8 }

Then

S = { { 2, 4, 8 }, { 1, 3, 5, 7 }, { 6 } }

is a partition of X.

(b) X = { x| x ∈ R}

S = { { x| x is rational }, { x| x is real } }

is not a partition of X.

T = { { x| x is rational }, { x| x is irrational } }

is a partition of X.
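Checking the definition (non-empty parts, pairwise disjoint, union equal to X) is mechanical. A sketch (the function name is illustrative):

```python
def is_partition(S, X):
    """True if the collection S of sets is a partition of the set X:
    non-empty parts, pairwise disjoint, with union equal to X."""
    parts = list(S)
    if any(len(p) == 0 for p in parts):
        return False
    if any(parts[i] & parts[j]
           for i in range(len(parts)) for j in range(i + 1, len(parts))):
        return False
    return set().union(*parts) == X

X = {1, 2, 3, 4, 5, 6, 7, 8}
print(is_partition([{2, 4, 8}, {1, 3, 5, 7}, {6}], X))   # → True
print(is_partition([{2, 4, 8}, {1, 3, 5}], X))           # → False (6, 7 missing)
```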

Sets are unordered collections of elements. Sometimes order is important.

An ordered pair of elements, written (a, b), is diﬀerent from the ordered pair (b, a) (unless

a = b).

Alternatively, (a, b) = (c, d) if and only if a = c and b = d.

If X and Y are sets, we let X ×Y denote the set of all ordered pairs (x, y) where x ∈ X

and y ∈ Y .

We call X ×Y the Cartesian product of X and Y .

Example 7. The Last Duck Vietnamese Restaurant and Takeaway sells

Entrees (set E)

a: Chicken Cold Roll

b: Vietnamese Spring Roll

c: Steamed Pork Dumplings

Mains (set M)

1: Twice Cooked Duck Leg Curry

2: Char-grilled Lemongrass Pork

3: Steamed Ginger Infused Chicken

4: Whole Prawns with Fanta Fish Sauce


Then E = { a, b, c } and M = { 1, 2, 3, 4 }.

The Cartesian product lists the 12 possible dinners consisting of one entree and one main

course. So

E ×M = { (a, 1), (a, 2), (a, 3), (a, 4), (b, 1), (b, 2), (b, 3), (b, 4), (c, 1), (c, 2), (c, 3), (c, 4) }.

This is actually a cut-down version of the menu. In fact, there are 5 entrees and 8 main

courses. How many dinners are possible?

For the general case,

|X ×Y | = |X| · |Y |.
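In Python, itertools.product enumerates X × Y, so the dinner count (and the general rule |X × Y| = |X| · |Y|) can be checked directly:

```python
from itertools import product

E = ['a', 'b', 'c']    # entrees
M = [1, 2, 3, 4]       # mains

dinners = list(product(E, M))
print(len(dinners))    # → 12, i.e. |E| * |M|
print(dinners[:3])     # → [('a', 1), ('a', 2), ('a', 3)]
```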


Relations

Relations Johnsonbaugh 3.1

We can consider a relation as a table linking the elements of two sets e.g. product vs price.

Deﬁnition. A (binary) relation R from a set X to a set Y is a subset of the Cartesian

product X ×Y .

If (x, y) ∈ R, we write xRy and say that x is related to y. If X = Y , we call R a (binary)

relation on X.

The set

{ x ∈ X | (x, y) ∈ R for some y ∈ Y }

is called the domain of R.

The set

{ y ∈ Y | (x, y) ∈ R for some x ∈ X }

is called the range of R.

In simpler terms, a (binary) relation R connects elements in X to elements in Y . For

example, we have pictorially

[Arrow diagram: X = { 1, 2, 3, 4 } on the left, Y = { a, b, c } on the right, with arrows 1 → a, 2 → c, 3 → b.]

R = { (1, a), (2, c), (3, b) }.

This is called an arrow diagram.

The domain is all elements of X that occur in R i.e. { 1, 2, 3 }.

The range is all elements of Y that occur in R i.e. { a, b, c }.

Example 1. Johnsonbaugh 3.1.3

X = { 2, 3, 4 } and Y = { 3, 4, 5, 6, 7 }

Deﬁne a relation from X to Y by (x, y) ∈ R if x divides y (with 0 remainder). Therefore

R = { (2, 4), (2, 6), (3, 3), (3, 6), (4, 4) }.


We could write R as a table:

X Y

2 4

2 6

3 3

3 6

4 4

The domain of R is { 2, 3, 4 }, and the range of R is { 3, 4, 6 }.

Example 2. Johnsonbaugh 3.1.4

Let R be a relation on X = { 1, 2, 3, 4 } deﬁned by (x, y) ∈ R if x ≤ y, x, y ∈ X.

R = { (1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 3), (2, 4), (3, 3), (3, 4), (4, 4) }.

The domain and range of R are both X.

A relation on a set can be described by its digraph.

Draw dots or vertices as elements of X. If (x, y) ∈ R, draw an arrow from x to y – a

directed edge.

An element (x, x) is called a loop.

The digraph of Example 2 is:

[Digraph on vertices 1, 2, 3, 4 with a loop at each vertex and a directed edge from x to y whenever x < y.]

Example 3. The relation R on X = { a, b, c, d } is

R = { (a, a), (b, c), (c, b), (d, d) }.

The digraph is:

[Digraph on vertices a, b, c, d with loops at a and d and directed edges b → c and c → b.]


Deﬁnition. A relation R on a set X is called reﬂexive if (x, x) ∈ R for every x ∈ X.

Example 2 is reﬂexive. There is a loop on every vertex.

Example 3 is not reﬂexive.

Deﬁnition. A relation R on a set X is called symmetric if for all x, y ∈ X, if (x, y) ∈ R

then (y, x) ∈ R.

Example 3 is symmetric. Directed edges go both ways between vertices.

Deﬁnition. A relation R on a set X is called antisymmetric if for all x, y ∈ X, if (x, y) ∈ R and x ≠ y, then (y, x) ∉ R.

Example 2 is antisymmetric. Between any two distinct vertices there is at most one

directed edge.

Can a relation R be both symmetric and antisymmetric?

Deﬁnition. A relation R on a set X is called transitive if for all x, y, z ∈ X, if (x, y) ∈ R

and (y, z) ∈ R, then (x, z) ∈ R.

Example 2 is transitive. We need to list all pairs to verify.

(x, y) (y, z) (x, z)

(1, 1) (1, 1) (1, 1)

(1, 1) (1, 2) (1, 2)

(1, 1) (1, 3) (1, 3)

(1, 1) (1, 4) (1, 4)

(1, 2) (2, 2) (1, 2)

(1, 2) (2, 3) (1, 3)

etc.

Do we really need to list all pairs? If x = y, and (x, y), (y, z) ∈ R, then (x, z) ∈ R is

automatically true. Hence the table need be only

(x, y) (y, z) (x, z)

(1, 2) (2, 3) (1, 3)

(1, 2) (2, 4) (1, 4)

(1, 3) (3, 4) (1, 4)

(2, 3) (3, 4) (2, 4)

Example 3 is not transitive e.g. (b, c), (c, b) ∈ R, but (b, b) ∉ R.

The digraph of a transitive relation has the property that whenever there are directed

edges from x to y and from y to z, there is a directed edge from x to z.
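The four properties can be tested mechanically from the set of pairs. A sketch, applied to Examples 2 and 3 (function names are illustrative):

```python
def is_reflexive(R, X):
    return all((x, x) in R for x in X)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    return all(not ((y, x) in R and x != y) for (x, y) in R)

def is_transitive(R):
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

X = {1, 2, 3, 4}
R2 = {(x, y) for x in X for y in X if x <= y}            # Example 2
R3 = {('a', 'a'), ('b', 'c'), ('c', 'b'), ('d', 'd')}    # Example 3

print(is_reflexive(R2, X), is_antisymmetric(R2), is_transitive(R2))  # → True True True
print(is_symmetric(R3), is_transitive(R3))                           # → True False
```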


Discrete Mathematics: Week 5

Relations (Continued)

Relations can be used to order elements of a set. For example, the relation R on the

positive integers deﬁned by

(x, y) ∈ R if x ≤ y

orders the integers, and is reﬂexive, antisymmetric and transitive i.e.

reﬂexive : x ≤ x
antisymmetric : if x ≤ y and x ≠ y, then y ≤ x is false
transitive : if x ≤ y and y ≤ z, then x ≤ z.

Deﬁnition. A relation R on a set X is called a partial order if R is reﬂexive, antisymmetric

and transitive.

Example 4. X = { 2, 3, 4, 5, 6 }

R is the relation deﬁned by (x, y) ∈ R if y is larger than x by an even number or zero.

Hence

R = { (2, 2), (2, 4), (2, 6), (3, 3), (3, 5), (4, 4), (4, 6), (5, 5), (6, 6) }.

The digraph is

2

3

4

5

6

This is

reﬂexive : loops on all vertices

antisymmetric : at most one directed arc between each pair of vertices

transitive : need only check (2, 4), (4, 6), and (2, 6) ∈ R.

If R is a partial order on X, we often write x ⪯ y when (x, y) ∈ R.

If x, y ∈ X and either x ⪯ y or y ⪯ x, then we say x and y are comparable.

If x, y ∈ X, x ⋠ y and y ⋠ x, then we say x and y are incomparable.


If every pair of elements in X is comparable, we call R a total order.

Example 2 is a total order. Either (x, y) ∈ R or (y, x) ∈ R for all x, y = 1, 2, 3, 4.

More generally, the less than or equals relation on the positive integers is a total order,

since either x ≤ y or y ≤ x (or both if x = y).

Example 4 is not a total order. Why?

Partial orders can be used in task scheduling.

Example 5. Johnsonbaugh 3.1.21

The set T of tasks in taking an indoor ﬂash photograph is as follows.

1. Remove lens cap.

2. Focus camera.

3. Turn oﬀ safety lock.

4. Turn on ﬂash unit.

5. Push photo button.

Some tasks must be done before others, some can be done in either order.

Deﬁne the relation R on T by

i Rj if i = j or task i must be done before task j.

Then

R = { (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (1, 2), (1, 5), (2, 5), (3, 5), (4, 5) }.

R is reﬂexive, antisymmetric and transitive, and is a partial order. Why is R not a total

order?

Possible solutions are 1, 2, 3, 4, 5 or 3, 4, 1, 2, 5.

Deﬁnition. Let R be a relation from X to Y . The inverse of R, denoted R⁻¹, is the relation from Y to X deﬁned by

R⁻¹ = { (y, x) | (x, y) ∈ R } .

Example 1. (Continued)

X = { 2, 3, 4 } and Y = { 3, 4, 5, 6, 7 }

We have (x, y) ∈ R if x divides y, so

R = { (2, 4), (2, 6), (3, 3), (3, 6), (4, 4) },

then

R⁻¹ = { (4, 2), (6, 2), (3, 3), (6, 3), (4, 4) }

and might be described as (y, x) ∈ R⁻¹ if y is divisible by x.


Deﬁnition. Let R₁ be a relation from X to Y and R₂ be a relation from Y to Z. The composition of R₁ and R₂, denoted R₂ ∘ R₁, is the relation from X to Z deﬁned by

R₂ ∘ R₁ = { (x, z) | (x, y) ∈ R₁ and (y, z) ∈ R₂ for some y ∈ Y } .

Example 6.

A = { 1, 2, 3, 4 } B = { a, b, c, d } C = { x, y, z }

R = { (1, a), (2, d), (3, a), (3, b), (3, d) }

S = { (b, x), (b, z), (c, y), (d, z) }

The arrow diagram represents this as:

[Arrow diagram: A = { 1, 2, 3, 4 }, B = { a, b, c, d }, C = { x, y, z }, with the arrows of R from A to B and the arrows of S from B to C.]

The composition of the relations is

S ◦ R = { (2, z), (3, x), (3, z) }.
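Composition follows the definition directly: pair (x, z) whenever some middle element y links them. A sketch reproducing S ∘ R (the function name is illustrative):

```python
def compose(R1, R2):
    """R2 ∘ R1 = { (x, z) : (x, y) in R1 and (y, z) in R2 for some y }."""
    return {(x, z) for (x, y) in R1 for (y2, z) in R2 if y == y2}

R = {(1, 'a'), (2, 'd'), (3, 'a'), (3, 'b'), (3, 'd')}
S = {('b', 'x'), ('b', 'z'), ('c', 'y'), ('d', 'z')}

print(sorted(compose(R, S)))   # → [(2, 'z'), (3, 'x'), (3, 'z')]
```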


Matrices of Relations

For a quick revision on matrices, read Johnsonbaugh Appendix A.

A matrix is a convenient way to represent a relation R from X to Y .

Label the rows with the elements of X, label the columns with the elements of Y , and

make the entry in row x column y a 1 if xRy and a 0 otherwise.

This is the matrix of the relation R.

Example 2. (from §3.1, revisited)

X = { 1, 2, 3, 4 } and (x, y) ∈ R if x ≤ y.

R = { (1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 3), (2, 4), (3, 3), (3, 4), (4, 4) }.

Then the matrix is

        1 2 3 4
    1 ( 1 1 1 1 )
    2 ( 0 1 1 1 )
    3 ( 0 0 1 1 )
    4 ( 0 0 0 1 )

The matrices depend on the ordering of the elements in the sets X and Y .

A relation R on X has a square matrix.

The relation R on a set X which has a matrix A is

• reﬂexive if and only if (iﬀ) A has 1’s on the main diagonal. Recall

(x, x) ∈ R for all x ∈ X.

• symmetric if and only if A is symmetric. Recall

If (x, y) ∈ R then (y, x) ∈ R.

If the (i, j) th element of A is 1, so is the (j, i) th element.

• antisymmetric if and only if any 1 in the (i, j) th entry of A is matched by a 0 in the (j, i) th entry in any position oﬀ the main diagonal. Recall

If (x, y) ∈ R then (y, x) ∉ R for x ≠ y.

Example 6. (from §3.1, revisited)

A = { 1, 2, 3, 4 } B = { a, b, c, d } C = { x, y, z }

R = { (1, a), (2, d), (3, a), (3, b), (3, d) }

S = { (b, x), (b, z), (c, y), (d, z) }


A₁ = matrix of R =

        a b c d
    1 ( 1 0 0 0 )
    2 ( 0 0 0 1 )
    3 ( 1 1 0 1 )
    4 ( 0 0 0 0 )

A₂ = matrix of S =

        x y z
    a ( 0 0 0 )
    b ( 1 0 1 )
    c ( 0 1 0 )
    d ( 0 0 1 )

A₁A₂ =

        x y z
    1 ( 0 0 0 )
    2 ( 0 0 1 )
    3 ( 1 0 2 )
    4 ( 0 0 0 )

What does the 2 in the (3, 3) entry of A₁A₂ mean?

The composition of the relations is

S ◦ R = { (2, z), (3, x), (3, z) }.

So if we convert the ‘2’ to a ‘1’, we have the matrix of S ◦ R.
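The same computation in matrix form: multiply A₁A₂ and replace the nonzero entries by 1. A sketch without external libraries, using the row and column orderings of the example (the function name is illustrative):

```python
def mat_mult(A, B):
    """Ordinary matrix product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A1 = [[1, 0, 0, 0],   # matrix of R: rows 1..4, columns a..d
      [0, 0, 0, 1],
      [1, 1, 0, 1],
      [0, 0, 0, 0]]
A2 = [[0, 0, 0],      # matrix of S: rows a..d, columns x..z
      [1, 0, 1],
      [0, 1, 0],
      [0, 0, 1]]

P = mat_mult(A1, A2)
print(P)   # → [[0, 0, 0], [0, 0, 1], [1, 0, 2], [0, 0, 0]]
print([[1 if e else 0 for e in row] for row in P])   # matrix of S ∘ R
```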

Theorem 3.3.6 Johnsonbaugh

Let R₁ be a relation from X to Y and let R₂ be a relation from Y to Z. Choose the orderings of X, Y , and Z. Let A₁ be the matrix of R₁ and let A₂ be the matrix of R₂ with respect to the orderings selected. The matrix of the relation R₂ ∘ R₁ with respect to the orderings selected is obtained by replacing each nonzero term in the matrix product A₁A₂ by 1.

Discussion: Suppose that the (i, j) th entry in A₁A₂ is nonzero. We obtain this entry by multiplying the i th row of A₁ by the j th column of A₂. Therefore there must be at least one element (i, k) in the i th row of A₁ and at least one element (k, j) in the j th column of A₂ which are both 1.

Then (i, k) ∈ R₁ and (k, j) ∈ R₂, so (i, j) ∈ R₂ ∘ R₁.

Note that the matrix sizes are automatically correct for matrix multiplication, since the number of columns of A₁ and the number of rows of A₂ are equal to the number of elements of Y .

Matrix Test for Transitivity

The theorem gives a test for a relation R on a set X being transitive.

Compute A² and compare A and A². The relation R is transitive if and only if whenever entry (i, j) in A² is nonzero, entry (i, j) in A is also nonzero.

Suppose that the entry (i, j) in A² is nonzero. Then there is at least one element (i, k) in the i th row of A and at least one element (k, j) in the j th column of A which are both 1. Hence (i, k) ∈ R and (k, j) ∈ R. If the (i, j) th entry of A is zero, then (i, j) ∉ R, and R is not transitive.


Example 2. (from §3.1, revisited)

R is a relation on X = { 1, 2, 3, 4 } deﬁned by (x, y) ∈ R if x ≤ y, for x, y ∈ X.

R = { (1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 3), (2, 4), (3, 3), (3, 4), (4, 4) }.

A =

        1 2 3 4
    1 ( 1 1 1 1 )
    2 ( 0 1 1 1 )
    3 ( 0 0 1 1 )
    4 ( 0 0 0 1 )

A² =

        1 2 3 4
    1 ( 1 2 3 4 )
    2 ( 0 1 2 3 )
    3 ( 0 0 1 2 )
    4 ( 0 0 0 1 )

Then A² has nonzero entries only in positions where A has 1's, and the relation is transitive.

Example 3. (from §3.1, revisited)

R = { (a, a), (b, c), (c, b), (d, d) }

A =

        a b c d
    a ( 1 0 0 0 )
    b ( 0 0 1 0 )
    c ( 0 1 0 0 )
    d ( 0 0 0 1 )

A² =

        a b c d
    a ( 1 0 0 0 )
    b ( 0 1 0 0 )
    c ( 0 0 1 0 )
    d ( 0 0 0 1 )

Then A² has 1's in the (b, b) and (c, c) positions, whereas A does not. The relation is not transitive i.e. (b, c) ∈ R and (c, b) ∈ R, but (b, b) ∉ R and (c, c) ∉ R.
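The matrix test, as a sketch: compute A² and check that every nonzero entry of A² sits over a 1 in A (the function name is illustrative).

```python
def is_transitive_matrix(A):
    """R (with square 0/1 matrix A) is transitive iff every nonzero entry
    of A^2 corresponds to a 1 in A."""
    n = len(A)
    A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return all(A[i][j] == 1 for i in range(n) for j in range(n) if A2[i][j])

# Example 2: x <= y on {1, 2, 3, 4} -- transitive
A_le = [[1, 1, 1, 1], [0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]]
# Example 3: {(a,a), (b,c), (c,b), (d,d)} -- not transitive
A_ex3 = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]

print(is_transitive_matrix(A_le))    # → True
print(is_transitive_matrix(A_ex3))   # → False
```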


Functions Johnsonbaugh 2.2

Deﬁnition. Let X and Y be sets. A function f from X to Y is a subset of the Cartesian

product X ×Y having the property that for each x ∈ X, there is exactly one y ∈ Y with

(x, y) ∈ f. We sometimes denote a function f from X to Y as f : X → Y . (Can also

write y = f(x).)

The set X is called the domain of f. The set

{ y | (x, y) ∈ f }

(which is a subset of Y ) is called the range of f.

Example 1. The relation f = { (1, a), (2, b), (3, a) } from X = { 1, 2, 3 } to Y = { a, b, c }

is a function. Why?

The relation R = { (1, a), (2, b), (3, c) } from X = { 1, 2, 3, 4 } to Y = { a, b, c } is not a

function. Why?

The relation R = { (1, a), (1, b), (2, b), (3, c) } from X = { 1, 2, 3 } to Y = { a, b, c } is not a

function. Why?

We can depict the situations using arrow diagrams. For the three cases we have:

[Arrow diagrams for the three relations: in the ﬁrst, each of 1, 2, 3 has exactly one outgoing arrow; in the second, 4 has no arrow; in the third, 1 has two arrows.]


There must be exactly one arrow from every element in the domain. There cannot be no

arrow, or more than one arrow.

Some Useful Functions

Deﬁnition. If x is an integer and y is a positive integer, we deﬁne x mod y to be the

remainder when x is divided by y.

Some simple examples:

6 mod 3 = ?

9 mod 10 = ?

14 mod 3 = ?

365 mod 7 = ?

This last result tells us that 29th March next year will be a Thursday.

Deﬁnition. The ﬂoor of x, denoted ⌊ x⌋, is the greatest integer less than or equal to x.

The ceiling of x, denoted ⌈ x⌉, is the least integer greater than or equal to x.

Some simple examples:

⌊ 1.4 ⌋ = ? ⌈ 1.4 ⌉ = ?

⌊ 6 ⌋ = ? ⌈ 6 ⌉ = ?

⌊ −5.7 ⌋ = ? ⌈ −5.7 ⌉ = ?
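These functions are available directly in Python (% for mod, math.floor and math.ceil), which answers the examples above:

```python
import math

print(6 % 3, 9 % 10, 14 % 3, 365 % 7)        # → 0 9 2 1
print(math.floor(1.4), math.ceil(1.4))       # → 1 2
print(math.floor(6), math.ceil(6))           # → 6 6
print(math.floor(-5.7), math.ceil(-5.7))     # → -6 -5
```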

The graphs of the ﬂoor and ceiling functions are shown below.

[Graph of the ﬂoor function: a step function taking the value n on the interval [n, n + 1).]

[Graph of the ceiling function: a step function taking the value n on the interval (n − 1, n].]

Example 2. Hash Functions Johnsonbaugh Example 2.2.14

We wish to store nonnegative integers in computer memory cells. A hash function com-

putes the location from the data item.

For example, if the cells are labelled 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, we might use

h(n) = n mod 11.

Store the numbers 15, 558, 32, 132, 102, 5 in the eleven cells.

15 = 1 × 11 + 4 so 15 mod 11 = 4

558 = 50 × 11 + 8 so 558 mod 11 = 8

32 = 2 × 11 + 10 so 32 mod 11 = 10

132 = 12 × 11 + 0 so 132 mod 11 = 0

102 = 9 × 11 + 3 so 102 mod 11 = 3

5 = 0 × 11 + 5 so 5 mod 11 = 5

These all store in the appropriate cells as shown in the diagram below.

[Cells 0–10 with 132 in cell 0, 102 in cell 3, 15 in cell 4, 5 in cell 5, 558 in cell 8, and 32 in cell 10.]

Now store 257 = 23 × 11 + 4.

But location 4 is occupied. A collision has occurred.

The collision resolution policy is to use the next highest unoccupied cell. But if all higher

numbered cells are occupied, start looking at the lowest numbered cell. In this case,

10 → 0.

To ﬁnd the data item n, compute m = h(n) and look at location m. If n is not there,

look at higher numbered cells.
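A sketch of this storage scheme, with the wrap-around linear-probing collision policy described above (the function name is illustrative, and a free cell is assumed to exist):

```python
def store(cells, n):
    """Store n using h(n) = n mod len(cells); on a collision, probe the next
    higher cell, wrapping from the top cell back to cell 0."""
    size = len(cells)
    i = n % size
    while cells[i] is not None:   # assumes the table is not full
        i = (i + 1) % size
    cells[i] = n

cells = [None] * 11
for n in [15, 558, 32, 132, 102, 5]:
    store(cells, n)

store(cells, 257)        # h(257) = 4, but cells 4 and 5 are occupied
print(cells.index(257))  # → 6
```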


Deﬁnition. A function f from X to Y is said to be one-to-one (or injective) if for each

y ∈ Y , there is at most one x ∈ X with f(x) = y.

Example 3. f = { (1, a), (2, b), (3, c) } from X = { 1, 2, 3 } to Y = { a, b, c, d } is a one-

to-one function.

f = { (1, a), (2, b), (3, a) } from X = { 1, 2, 3 } to Y = { a, b, c, d } is a function, but not

one-to-one.

The arrow diagrams illustrate this.

[Arrow diagrams: on the left, 1 → a, 2 → b, 3 → c, with at most one arrow into each element of Y ; on the right, 1 → a, 2 → b, 3 → a, with two arrows into a.]

Deﬁnition. If f is a function from X to Y and the range of f is Y , f is said to be onto

Y (or an onto function or a surjective function).

Example 4. f = { (1, a), (2, b), (3, c) } from X = { 1, 2, 3 } to Y = { a, b, c } is onto. The

arrow diagram is:

[Arrow diagram: 1 → a, 2 → b, 3 → c; every element of Y has an incoming arrow.]

If Y = { a, b, c, d }, f is not onto.

Deﬁnition. A function that is both one-to-one and onto is called a bijection.

In the preceding example, the function

f = { (1, a), (2, b), (3, c) }

is a bijection.

Inverse Function

Suppose that f is a one-to-one, onto function from X to Y . The inverse relation

{ (y, x) | (x, y) ∈ f }

is a one-to-one, onto function from Y to X, the inverse function f⁻¹.

Since f is onto, the range of f is Y , and hence the domain of f⁻¹ is Y .

Since f is one-to-one, there is only one x ∈ X for which (y, x) ∈ f⁻¹, so f⁻¹ is a function.

We can obtain the arrow diagram for f⁻¹ by reversing the directions of the arrows for f.

Since f is one-to-one, f⁻¹ is one-to-one. Since f is a function, the domain of f is X and hence f⁻¹ is onto.

Example 5. R₁ = { (1, b), (2, a), (3, c) } from X = { 1, 2, 3 } to Y = { a, b, c, d }.

Or we could say

R₁ = { (1, b), (2, a), (3, c) } ⊆ { 1, 2, 3 } × { a, b, c, d }.

R₁ is a function, is one-to-one, but is not onto. This is evident also from the arrow diagram.

[Arrow diagram: 1 → b, 2 → a, 3 → c; d has no incoming arrow.]

R₁⁻¹ = { (a, 2), (b, 1), (c, 3) } ⊆ { a, b, c, d } × { 1, 2, 3 }

is not a function. Why?

R₂ = { (1, b), (2, c), (3, a), (4, d) } ⊆ { 1, 2, 3, 4 } × { a, b, c, d }.

The arrow diagram is

[Arrow diagram: 1 → b, 2 → c, 3 → a, 4 → d.]

R₂ is a function, is one-to-one, and is onto.

R₂⁻¹ = { (a, 3), (b, 1), (c, 2), (d, 4) } ⊆ { a, b, c, d } × { 1, 2, 3, 4 }.

R₂⁻¹ is a function, is one-to-one, and is onto.
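Representing a function as a dict (element → image), the properties and the inverse can be checked with a short sketch (function names are illustrative):

```python
def is_one_to_one(f):
    """No two elements of the domain map to the same image."""
    return len(set(f.values())) == len(f)

def is_onto(f, Y):
    """The range of f is all of Y."""
    return set(f.values()) == set(Y)

def inverse(f):
    """Inverse of a one-to-one, onto function given as a dict."""
    return {y: x for x, y in f.items()}

R2 = {1: 'b', 2: 'c', 3: 'a', 4: 'd'}
Y = {'a', 'b', 'c', 'd'}

print(is_one_to_one(R2), is_onto(R2, Y))   # → True True (a bijection)
print(inverse(R2))                         # → {'b': 1, 'c': 2, 'a': 3, 'd': 4}
```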


Discrete Mathematics: Week 6

Algorithms

Introduction

An algorithm is a step by step method of solving a problem i.e. a recipe.

Algorithms typically have the following characteristics.

• Input The algorithm receives input.

• Output The algorithm produces output.

• Precision The steps are precisely stated.

• Determinism The intermediate results of each step of execution are unique

and are determined only by the inputs and the results of the preceding steps.

• Finiteness The algorithm terminates; that is, it stops after ﬁnitely many

instructions have been executed.

• Correctness The output produced by the algorithm is correct; that is, the

algorithm correctly solves the problem.

• Generality The algorithm applies to a set of inputs.

Algorithms are written in pseudocode, which resembles real computer code. Johnsonbaugh

in the 6th edition has rewritten algorithms to be more like Java.

Algorithm 4.1.1 Finding the Maximum of Three Numbers

This algorithm ﬁnds the largest of the numbers a, b, and c.

Input: a, b, c

Output: large (the largest of a, b, and c)

1. max3(a, b, c) {

2. large = a

3. if (b > large) // if b is larger than large, update large

4. large = b

5. if (c > large) // if c is larger than large, update large

6. large = c

7. return large

8. }


Algorithms consist of a title, a brief description of the algorithm, the input to and output

from the algorithm, and functions containing the instructions of the algorithm. This

algorithm has one function.

Sometimes lines are numbered to make it convenient to refer to them.

line 1 max3 is the name of the function, and a, b, c are input parameters or variables.

line 2 = is the assignment operator. Testing equality uses ==. large is assigned the value

of a.

line 3 This introduces the if statement. The structure is

if (condition)

action

If condition is true, action is executed and control passes to the statement following

action.

If condition is false, action is not executed and control passes immediately to the

statement following action. For example,

if (x == 0)

y = 0

z = x + y

If action consists of multiple statements, enclose them in braces.

if (x ≥ 0) {

y = 0

z = x + 3

}

We can use

arithmetic operators +, −, ∗, /, ( , )

relational operators ==, ¬=, <, >, ≤, ≥

logical operators ∧, ∨, ¬

The Matlab equivalent command is

if expression

commands

end

There is also the if else statement, which has the structure

if (condition)

action 1

else

action 2


The notation // signals that the rest of the line is a comment. A more common

notation is to use %, which is used, for example, by Matlab and postscript. The

Matlab equivalent command is

if expression

commands if true

else

commands if false

end

Matlab can extend the if structure further with elseif.

line 4 This assigns large the value of b if b > large.

line 6 This assigns large the value of c if c > large.

line 7 The return statement simply terminates the function.

The return large statement terminates the function and returns the value of large.

If there is no return statement, the closing brace terminates the function.

We can assign values to the input variables and use a simulation called a trace to evaluate

the operation of the algorithm. For example, we set

a = 6, b = 1, c = 9.

Then

line 2 Set large to a, namely 6.

line 3 The if statement b > large is false, so line 4 is not executed.

line 5 The if statement c > large is true, so at line 6 large is set to the value of c, namely 9.

Algorithm 4.1.2 Finding the Maximum Value in a Sequence

This algorithm ﬁnds the largest of the numbers s₁, . . . , sₙ.

Input: s, n
Output: large (the largest value in the sequence s)

max(s, n) {
    large = s₁
    for i = 2 to n
        if (sᵢ > large)
            large = sᵢ
    return large
}


The structure of the for loop is

for var = init to limit

action

If action consists of multiple statements, enclose them in braces.

The for loop speciﬁes the initial and ﬁnal integer values between which the operations are

processed. Alternatively, we can use a while loop.

while (condition)

action

action is repeated as long as condition is true.

The Matlab equivalent command is

for i = m:n

commands

end

Using a while loop, Algorithm 4.1.2 would become

max(s, n) {
    large = s₁
    i = 2
    while (i ≤ n) {
        if (sᵢ > large)
            large = sᵢ
        i = i + 1
    }
    return large
}
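Both forms translate directly to Python. A sketch of the while-loop version, using 0-based indexing instead of the pseudocode's 1-based indexing (the function name is illustrative):

```python
def find_max(s):
    """Largest value in the non-empty sequence s, mirroring Algorithm 4.1.2."""
    large = s[0]
    i = 1
    while i < len(s):
        if s[i] > large:
            large = s[i]
        i += 1
    return large

print(find_max([6, 1, 9, 3]))   # → 9
```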

The Matlab equivalent command is

while expression

commands

end


Examples of Algorithms

Algorithm 4.2.1 Text Search

This algorithm searches for an occurrence of the pattern p in text t. It returns the smallest

index i such that p occurs in t starting at index i. If p does not occur in t, it returns 0.

Input: p (indexed from 1 to m), m, t (indexed from 1 to n), n

Output: i

text search(p, m, t, n) {
    for i = 1 to n − m + 1 {
        j = 1
        // i is the index in t of the ﬁrst character of the substring
        // to compare with p, and j is the index in p
        // the while loop compares tᵢ · · · tᵢ₊ₘ₋₁ and p₁ · · · pₘ
        while (tᵢ₊ⱼ₋₁ == pⱼ) {
            j = j + 1
            if (j > m)
                return i
        }
    }
    return 0
}

The algorithm indexes the characters in the text with i and those in the pattern with j.

The search starts with the ﬁrst character in the text, and if the pattern is not found,

ﬁnishes with character n −m+1 in the text, when the last characters in the pattern and

text coincide.

If there is a match for the ﬁrst indices in the pattern and text, then the pattern index j

is incremented by 1, and the next characters compared. This continues until either the

pattern is found, or there is not a match. In the latter case, the text index i is incremented

by 1, the pattern index j is reset to 1, and the match process repeats.
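A direct Python translation of the text-search algorithm, returning a 1-based index as in the pseudocode, or 0 when the pattern is absent (the function name is illustrative):

```python
def text_search(p, t):
    """Smallest 1-based index i such that p occurs in t starting at i; 0 if absent."""
    m, n = len(p), len(t)
    for i in range(1, n - m + 2):          # i = 1 .. n - m + 1
        j = 1
        while t[i + j - 2] == p[j - 1]:    # compare t_{i+j-1} with p_j (1-based)
            j += 1
            if j > m:
                return i
    return 0

print(text_search("001", "010001"))   # → 4
print(text_search("111", "010001"))   # → 0
```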


Example 1. This shows the operation of the text search algorithm in a search for the

string “001” in the string “010001”.

[Trace diagram: nine panels showing the pattern 001 sliding along the text 010001, with arrows marking the pattern index j and the text index i at each comparison, and Y/N marking match or mismatch.]

1. The text index i and the pattern index j are initially set to 1, and the ﬁrst characters

compared. There is a match.

2. The pattern index j is incremented to 2, and the pattern and text characters j and

i + j −1 compared. There is not a match.

3. The text index i is incremented to 2 and the pattern index j reset to 1. There is

not a match in the corresponding characters.

4. The text index i is incremented to 3 and the pattern index j left at 1. There is a

match.

5. The pattern index j is incremented to 2, and the pattern and text characters j and

i + j −1 compared. There is a match.

6. The pattern index j is incremented to 3, and the pattern and text characters j and

i + j −1 compared. There is not a match.

7. The text index i is incremented to 4 and the pattern index j reset to 1. There is a

match.

8. The pattern index j is incremented to 2, and the pattern and text characters j and

i + j −1 compared. There is a match.

9. The pattern index j is incremented to 3, and the pattern and text characters j and

i + j −1 compared. There is a match, and the pattern is found.


Example 2. Algorithm Testing Whether a Positive Integer is Prime

This algorithm ﬁnds whether a positive integer is prime or composite.

Input: m, a positive integer

Output: true, if m is prime; false, if m is not prime

is prime(m) {
    for i = 2 to ⌊√m⌋
        if (m mod i == 0)
            return false
    return true
}

Note that the modulus operation need only go up to √m, since if m is not prime, one factor will be less than or equal to √m and one factor will be greater than or equal to √m. If m mod i is 0, then i divides m, false is returned, and the function is terminated. If the for loop runs to its conclusion, control passes to the ﬁfth line and true is returned.

Example 3. Algorithm Finding a Prime Larger Than a Given Integer

This algorithm ﬁnds the ﬁrst prime larger than a given positive integer.

Input: n, a positive integer

Output: m, the smallest prime greater than n

large prime(n) {

m = n + 1

while (¬is prime(m) )

m = m + 1

return m

}

The algorithm starts with m = n + 1, calls the function is prime to test if m is prime,

and continually increments m in a while loop as long as m is not prime. If the function

is prime returns false, then ¬false is true, and the while loop continues. If the function

is prime returns true, then ¬true is false, the while loop terminates, and m is returned.
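A direct Python transcription, with a compact one-line is_prime repeated here so the sketch is self-contained:

```python
import math

def is_prime(m):
    return m >= 2 and all(m % i for i in range(2, math.isqrt(m) + 1))

def large_prime(n):
    """Smallest prime strictly greater than n, as in the pseudocode."""
    m = n + 1
    while not is_prime(m):
        m += 1
    return m

print(large_prime(10))  # 11
```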


Analysis of Algorithms

Analysis of an algorithm involves deriving estimates of the time or storage space needed to execute the algorithm. Time is usually the more crucial. The following table shows how time varies in the execution

of an algorithm with the size of the input and the complexity of the algorithm. Note that

lg n means the logarithm to the base 2 of n. We can see from the table, for example, that

with an input of size n = 10^6, the time to execute is 20 seconds if the algorithm behaves as n lg n, and twelve days if the algorithm behaves as n^2.

This has real implications. Cooley and Tukey derived the Fast Fourier Transform algorithm in the 1960s, which reduced the time for a numerical discrete Fourier transform from behaving as n^2 to behaving as n lg n.

Real-life algorithms are checked for the amount of time to execute. For example, it is

pointless to have an air traﬃc control algorithm that takes two hours to run if an update

is required every 15 minutes.

Suppose that we measure the size of the “input” as n.

Best-case time

The minimum time needed to execute the algorithm among all inputs of size n.

Worst-case time

The maximum time needed to execute the algorithm among all inputs of size n.

Average-case time

The average time needed to execute the algorithm among all inputs of size n.

Example 1.

(a) The algorithm is to ﬁnd the largest element in a sequence.

The number of iterations of a while loop (or comparisons) is n −1 for input of size

n in all three cases.

(b) Search for a key word in a list of size n.

The best-case time is 1 if the key word is ﬁrst in the list.

The worst-case time is n if the key word is last in the list, or not there at all.

The average-case time is the average over all n + 1 cases, where the key word is in position i, i = 1, 2, . . . , n, in the list or not there at all. This is

(1 + 2 + 3 + · · · + n + n)/(n + 1) = (n(n + 1)/2 + n)/(n + 1) = (n^2 + 3n)/(2(n + 1)).

(c) A set X contains red items and black items. Count all subsets that contain at least

one red item. Since there are 2^n subsets, the time taken behaves as 2^n.

(d) In the travelling salesperson problem, the salesperson visits n towns in some order.

There are (1/2)(n − 1)! possible tours of n towns.

Question: Which grows the faster, 2^n or (1/2)(n − 1)!?
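The question can be settled numerically: the factorial soon overtakes the exponential. A quick sketch (the helper name tours is ours):

```python
import math

def tours(n):
    """Number of possible tours of n towns: (1/2)(n - 1)!"""
    return math.factorial(n - 1) // 2

# 2^n starts ahead, but (n - 1)!/2 overtakes it from n = 7 onwards.
for n in (5, 7, 15):
    print(n, 2 ** n, tours(n))
```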


Number of Steps              Time to Execute if n =
for Input of Size n     3             6             9             12
1                       10^-6 sec     10^-6 sec     10^-6 sec     10^-6 sec
lg lg n                 10^-6 sec     10^-6 sec     2×10^-6 sec   2×10^-6 sec
lg n                    2×10^-6 sec   3×10^-6 sec   3×10^-6 sec   4×10^-6 sec
n                       3×10^-6 sec   6×10^-6 sec   9×10^-6 sec   10^-5 sec
n lg n                  5×10^-6 sec   2×10^-5 sec   3×10^-5 sec   4×10^-5 sec
n^2                     9×10^-6 sec   4×10^-5 sec   8×10^-5 sec   10^-4 sec
n^3                     3×10^-5 sec   2×10^-4 sec   7×10^-4 sec   2×10^-3 sec
2^n                     8×10^-6 sec   6×10^-5 sec   5×10^-4 sec   4×10^-3 sec

Number of Steps              Time to Execute if n =
for Input of Size n     50            100           1000
1                       10^-6 sec     10^-6 sec     10^-6 sec
lg lg n                 2×10^-6 sec   3×10^-6 sec   3×10^-6 sec
lg n                    6×10^-6 sec   7×10^-6 sec   10^-5 sec
n                       5×10^-5 sec   10^-4 sec     10^-3 sec
n lg n                  3×10^-4 sec   7×10^-4 sec   10^-2 sec
n^2                     3×10^-3 sec   0.01 sec      1 sec
n^3                     0.13 sec      1 sec         16.7 min
2^n                     36 yr         4×10^16 yr    3×10^287 yr

Number of Steps              Time to Execute if n =
for Input of Size n     10^5          10^6
1                       10^-6 sec     10^-6 sec
lg lg n                 4×10^-6 sec   4×10^-6 sec
lg n                    2×10^-5 sec   2×10^-5 sec
n                       0.1 sec       1 sec
n lg n                  2 sec         20 sec
n^2                     3 hr          12 days
n^3                     32 yr         31,710 yr
2^n                     3×10^30089 yr 3×10^301016 yr

TABLE 4.3.1 Time to execute an algorithm if one step takes 1 microsecond to execute.


Often we are less interested in the best- or worst-case times for an algorithm to execute

than in how the times grow as n increases.

Example 2. Suppose the worst-case time is

t(n) = n^3/3 + n^2 − n/3.

Then for n = 10, 100, 1,000, 10,000 we have the table

n          n^3/3 + n^2 − n/3     n^3/3
10         430                   333
100        343,300               333,333
1,000      334,333,000           333,333,333
10,000     3.334×10^11           3.333×10^11

For large n,

t(n) ≈ n^3/3.

Hence t(n) is of order n^3, ignoring the constant 1/3.

Deﬁnition. Let f and g be functions with domain { 1, 2, 3, . . . }.

We write

f(n) = O(g(n))

and say that f(n) is of order at most g(n) or f(n) is big oh of g(n) if there exists a positive constant C_1 such that

|f(n)| ≤ C_1 |g(n)|

for all but finitely many positive integers n.

We write

f(n) = Ω(g(n))

and say that f(n) is of order at least g(n) or f(n) is omega of g(n) if there exists a positive constant C_2 such that

|f(n)| ≥ C_2 |g(n)|

for all but finitely many positive integers n.

We write

f(n) = Θ(g(n))

and say that f(n) is of order g(n) or f(n) is theta of g(n) if f(n) = O(g(n)) and f(n) = Ω(g(n)).


Example 3. f(n) = 4n + 3. Then

f(n) ≤ 4n + 3n

= 7n,

so f(n) = O(n). Also

f(n) ≥ 4n,

so f(n) = Ω(n).

Therefore f(n) = Θ(n).

Example 4. f(n) = 2n^2 + 3n + 1. Then

f(n) ≤ 2n^2 + 3n^2 + n^2 = 6n^2,

so f(n) = O(n^2). Also

f(n) ≥ 2n^2,

so f(n) = Ω(n^2).

Therefore f(n) = Θ(n^2).

Example 2. (Continued) t(n) = n^3/3 + n^2 − n/3.

t(n) ≤ n^3/3 + n^2 ≤ n^3/3 + n^3 = (4/3)n^3,

so t(n) = O(n^3). Also

t(n) ≥ n^3/3 + n^2 − n^2/3 = n^3/3 + (2/3)n^2 ≥ n^3/3,

so t(n) = Ω(n^3).

Therefore t(n) = Θ(n^3).

Exercise: Try f(n) = 4n^3 − n^2 + 2n.
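Working the exercise numerically can build confidence in the constants. This sketch is illustrative: the bounds C_2 = 3 and C_1 = 6 are one valid choice for n ≥ 1, found by the same over- and under-estimating used in Examples 3 and 4:

```python
def f(n):
    return 4 * n**3 - n**2 + 2 * n

# For n >= 1: f(n) <= 4n^3 + 2n <= 4n^3 + 2n^3 = 6n^3, and
#             f(n) >= 4n^3 - n^2 >= 4n^3 - n^3 = 3n^3,
# so f(n) = Theta(n^3) with C1 = 6 and C2 = 3.
for n in range(1, 1000):
    assert 3 * n**3 <= f(n) <= 6 * n**3
```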


Theorem 4.3.4 Let

p(n) = a_k n^k + a_{k−1} n^{k−1} + · · · + a_1 n + a_0

be a polynomial in n of degree k, where each a_i is nonnegative (and a_k ≠ 0). Then

p(n) = Θ(n^k).

Proof:

p(n) = a_k n^k + a_{k−1} n^{k−1} + · · · + a_1 n + a_0
     ≤ a_k n^k + a_{k−1} n^k + · · · + a_1 n^k + a_0 n^k
     = (a_k + a_{k−1} + · · · + a_1 + a_0) n^k
     = C_1 n^k,

so p(n) = O(n^k). Also

p(n) ≥ a_k n^k,

so p(n) = Ω(n^k).

Therefore p(n) = Θ(n^k).


[Figure 4.3.1 Growth of some common functions: plots of y = 1, y = lg n, y = n, y = n lg n, y = n^2 and y = 2^n against n.]


Example 5. f(n) = 2n + 3 lg n.

Remember that lg n represents log_2 n. Does your calculator have an lg button? If not, set y = lg n, so that n = 2^y. Then

ln n = ln(2^y) = y ln 2, so y = ln n / ln 2.

Now lg n < n for all n ≥ 1. (See the preceding graph.) Therefore

f(n) = 2n + 3 lg n < 2n + 3n = 5n,

so f(n) = O(n). Also

f(n) ≥ 2n,

so f(n) = Ω(n).

Therefore f(n) = Θ(n).
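If your calculator (or language) lacks an lg button, the change of base above is all you need. A small sketch:

```python
import math

def lg(n):
    """Base-2 logarithm via the change of base: lg n = ln n / ln 2."""
    return math.log(n) / math.log(2)

# Agrees with a dedicated log2 up to floating-point rounding.
print(lg(1024))                                   # approximately 10
print(abs(lg(1024) - math.log2(1024)) < 1e-12)    # True
```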

Example 6. f(n) = 1 + 2 + 3 + · · · + (n − 1) + n.

This example assumes that we don't know that the sum is n(n + 1)/2.

1 + 2 + 3 + · · · + (n − 1) + n ≤ n + n + n + · · · + n + n = n^2,

so f(n) = O(n^2).

Also

1 + 2 + 3 + · · · + (n − 1) + n ≥ 1 + 1 + 1 + · · · + 1 + 1 = n,

so f(n) = Ω(n).

This seems to be too low an estimate. We can do better. Let's be trickier, and throw away approximately the first half of the series.

1 + 2 + 3 + · · · + (n − 1) + n ≥ ⌈n/2⌉ + (⌈n/2⌉ + 1) + · · · + (n − 1) + n
                               ≥ ⌈n/2⌉ + ⌈n/2⌉ + · · · + ⌈n/2⌉.

How many terms are there?

If n = 8: 4 + 5 + 6 + 7 + 8, i.e. 5 terms, i.e. ⌈(n + 1)/2⌉.
If n = 9: 5 + 6 + 7 + 8 + 9, i.e. 5 terms, i.e. ⌈(n + 1)/2⌉.
If n = 2k: k + (k + 1) + · · · + 2k gives 2k − (k − 1) = k + 1 terms.
If n = 2k + 1: (k + 1) + (k + 2) + · · · + (2k + 1) gives (2k + 1) − k = k + 1 terms.

Hence there are ⌈(n + 1)/2⌉ terms. Therefore

f(n) ≥ ⌈(n + 1)/2⌉ ⌈n/2⌉ ≥ (n/2)(n/2) = n^2/4,

so f(n) = Ω(n^2).

Therefore f(n) = Θ(n^2).

If we know that f(n) = n(n + 1)/2, then

f(n) = (1/2)n^2 + (1/2)n ≤ (1/2)n^2 + (1/2)n^2 = n^2,

so f(n) = O(n^2).

Also

f(n) ≥ (1/2)n^2,

so f(n) = Ω(n^2).

Therefore f(n) = Θ(n^2).


Discrete Mathematics: Week 7

Analysis of Algorithms (Continued)

Example 7. We can generalize Example 6 to n^k.

f(n) = 1^k + 2^k + 3^k + · · · + (n − 1)^k + n^k.

Now

f(n) = 1^k + 2^k + 3^k + · · · + (n − 1)^k + n^k
     ≤ n^k + n^k + n^k + · · · + n^k + n^k
     = n × n^k = n^{k+1}.

So f(n) = O(n^{k+1}).

f(n) ≥ ⌈n/2⌉^k + (⌈n/2⌉ + 1)^k + · · · + (n − 1)^k + n^k
     ≥ ⌈n/2⌉^k + ⌈n/2⌉^k + · · · + ⌈n/2⌉^k
     = ⌈(n + 1)/2⌉ ⌈n/2⌉^k
     ≥ ((n + 1)/2)(n/2)^k
     = (1/2^{k+1})(n + 1)n^k
     = (1/2^{k+1})(n^{k+1} + n^k)
     ≥ (1/2^{k+1}) n^{k+1}.

So f(n) = Ω(n^{k+1}).
∴ f(n) = Θ(n^{k+1}).

Example 8. What is the order of n!?

What is n!? The basic definition is

n! = n × (n − 1) × (n − 2) × · · · × 3 × 2 × 1.

Hence

1! = 1, 2! = 2, 3! = 6, 4! = 24, 5! = 120, 6! = 720, 7! = 5040,

etc.

My calculator runs out at 69!, as 70! > 10^100. What is the limit on your calculator?

Then 0! is defined to be 1. What is 1½!?

Stirling's formula gives an approximation for n!:

n! ≈ √(2πn) (n/e)^n.

Hence

ln n! ≈ (1/2) ln 2π + (1/2) ln n + n ln n − n.

So we suspect that

lg n! = Θ(n lg n).

Then, taking lg of n!,

lg n! = lg n + lg(n − 1) + · · · + lg 2 + lg 1
      ≤ lg n + lg n + · · · + lg n + lg n
      = n lg n.

So lg n! = O(n lg n). Finding a lower limit,

lg n! ≥ lg n + lg(n − 1) + · · · + lg ⌈n/2⌉
      ≥ lg ⌈n/2⌉ + lg ⌈n/2⌉ + · · · + lg ⌈n/2⌉
      = ⌈(n + 1)/2⌉ lg ⌈n/2⌉
      ≥ (n/2) lg(n/2)
      = (n/2)(lg n − lg 2)
      = (n/2)((lg n)/2 + (lg n)/2 − 1)
      ≥ (n/4) lg n, as lg n ≥ 2 for n ≥ 4.

So lg n! = Ω(n lg n).
∴ lg n! = Θ(n lg n).
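The sandwich (n/4) lg n ≤ lg n! ≤ n lg n can be spot-checked numerically. This is an illustrative sketch (the helper name lg_factorial is ours):

```python
import math

def lg_factorial(n):
    """lg n! computed as lg 1 + lg 2 + ... + lg n."""
    return sum(math.log2(i) for i in range(1, n + 1))

# Check the derived sandwich for n >= 4.
for n in range(4, 300):
    assert (n / 4) * math.log2(n) <= lg_factorial(n) <= n * math.log2(n)
```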

Deﬁnition. If an algorithm requires t(n) units of time to terminate in the best case for

an input of size n, and

t(n) = O(g(n)) ,

we say that the best-case time required by the algorithm is of order at most g(n) or that

the best-case time required by the algorithm is O(g(n)).

If an algorithm requires t(n) units of time to terminate in the worst case for an input of

size n, and

t(n) = O(g(n)) ,


we say that the worst-case time required by the algorithm is of order at most g(n) or that

the worst-case time required by the algorithm is O(g(n)).

If an algorithm requires t(n) units of time to terminate in the average case for an input

of size n, and

t(n) = O(g(n)) ,

we say that the average-case time required by the algorithm is of order at most g(n) or

that the average-case time required by the algorithm is O(g(n)).

Replace O by Ω and “at most” by “at least” to obtain the deﬁnition of what it means

for the best-case, worst-case or average-case time of an algorithm to be of order at least

g(n).

If the best case time is O(g(n)) and Ω(g(n)), then the best-case time required by the

algorithm is Θ(g(n)).

Similar deﬁnitions apply for the worst-case and average-case times.

Example 1. (b) (Continued)

Johnsonbaugh Algorithm 4.3.17. Searching for a key in an unordered sequence.

The best-case time is 1 i.e. Θ(1).

The worst-case time is n i.e. Θ(n).

The average-case time is

((1 + 2 + 3 + · · · + n) + n)/(n + 1) = (n^2 + 3n)/(2(n + 1)),

i.e. Θ(n).

Example 9. Consider the pseudocode

for i = 1 to n

for j = 1 to i

x = x + 1

Find a theta notation for the number of times the statement x = x + 1 is executed.

The number of times is

1 + 2 + 3 + · · · + n = (1/2)n^2 + (1/2)n,

which is Θ(n^2).
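The pseudocode translates directly to Python, and counting executions confirms the closed form:

```python
def count(n):
    """Count how often x = x + 1 runs in the doubly nested loop."""
    times = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            times += 1
    return times

# Matches 1 + 2 + ... + n = n(n + 1)/2, which is Theta(n^2).
for n in (1, 2, 10, 100):
    assert count(n) == n * (n + 1) // 2
```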

Example 10. Consider the pseudocode

i = n

while (i ≥ 1) {

x = x + 1

i = ⌊ i/2 ⌋

}

Find a theta notation for the number of times the statement x = x + 1 is executed.


Suppose that n = 8. Then for

i = 8 x = x + 1 is executed

i = 4 x = x + 1 is executed

i = 2 x = x + 1 is executed

i = 1 x = x + 1 is executed

i = 0 x = x + 1 is not executed.

So the statement is executed 4 times.

Suppose n = 2^k. Then the statement is executed for

i = 2^k, 2^{k−1}, 2^{k−2}, . . . , 2, 2^0 = 1,

i.e. k + 1 = 1 + lg n times.

If

2^k ≤ n < 2^{k+1},

then after k iterations

1 ≤ i = ⌊n/2^k⌋ < 2,

and after one more iteration i = 0. So for all n, x = x + 1 is executed 1 + ⌊lg n⌋ times, which is Θ(lg n).
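Again the claim is easy to confirm by running the loop and counting:

```python
import math

def count(n):
    """Count how often x = x + 1 runs in the halving loop."""
    times, i = 0, n
    while i >= 1:
        times += 1
        i = i // 2       # floor division, as in i = floor(i/2)
    return times

# Matches 1 + floor(lg n) for every n >= 1.
for n in range(1, 1024):
    assert count(n) == 1 + math.floor(math.log2(n))
```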

Example 11. Find a theta notation for the number of times the statement x = x + 1 is

executed in the following pseudocode.

j = n

while (j ≥ 1) {

for i = 1 to j

x = x + 1

j = ⌊ j/2 ⌋

}

Let t(n) denote the number of times the statement x = x + 1 is executed.

The first time through the while loop, the statement is executed n times.

∴ t(n) ≥ n, and t(n) = Ω(n).

The second time through the while loop, the statement is executed ⌊n/2⌋ times, and so on. Hence

t(n) ≤ n + n/2 + n/4 + n/8 + · · · + n/2^{k−1},

where n/2^k < 1. So

t(n) ≤ n (1 − (1/2)^k)/(1 − 1/2) = 2n (1 − (1/2)^k) ≤ 2n.

∴ t(n) = O(n).
∴ t(n) = Θ(n).
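Running the pseudocode and counting confirms the sandwich n ≤ t(n) ≤ 2n:

```python
def t(n):
    """Count how often x = x + 1 runs in the pseudocode of Example 11."""
    times, j = 0, n
    while j >= 1:
        for i in range(1, j + 1):
            times += 1
        j = j // 2
    return times

# The derivation's sandwich: n <= t(n) <= 2n for all n >= 1.
for n in range(1, 2000):
    assert n <= t(n) <= 2 * n
```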

A “good” algorithm has a worst-case polynomial time, and such problems are called feasible or tractable problems. Exponential or factorial worst-case time problems are called intractable, and require a long time to execute even for relatively small n.

There are problems for which there is no algorithm. These are called unsolvable. Such a

problem is the halting problem: given an arbitrary program and a set of inputs, will the

program ever halt?

A large number of solvable problems have an undetermined status. They are thought to

be intractable, but none have ever been proved to be intractable. Such a problem is the

travelling salesperson problem.


Recursive Algorithms

A recursive function invokes itself. A recursive algorithm is an algorithm that contains a

recursive function.

Example 1. Factorials. Johnsonbaugh Example 4.4.1

We know that n! = n(n − 1)!, and that 0! is deﬁned as 1. We can resolve the problem

of computing n! into the simpler problem of computing (n − 1)!, then into computing

(n −2)! and so on, until we get to 0! which is known i.e.

Problem Simpliﬁed Problem

5! 5 · 4!

4! 4 · 3!

3! 3 · 2!

2! 2 · 1!

1! 1 · 0!

0! None

Table 1: Decomposing the factorial problem.

Problem Solution

0! 1

1! 1 · 0! = 1

2! 2 · 1! = 2

3! 3 · 2! = 6

4! 4 · 3! = 24

5! 5 · 4! = 120

Table 2: Combining subproblems of the factorial problem.

Algorithm 4.4.2 Computing n Factorial

This recursive algorithm computes n!.

Input: n, an integer greater than or equal to 0

Output: n!

1. factorial (n) {

2. if (n == 0)

3. return 1

4. return n∗factorial (n −1)

5. }


We can see how the algorithm computes n!.

If n = 1, proceed to line 4 since n ≠ 0, and compute

n · (n − 1)! = 1 · 0! = 1 · 1 = 1.

If n = 2, proceed to line 4 since n ≠ 0, and compute

n · (n − 1)! = 2 · 1! = 2 · 1 = 2,

and so on.
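Algorithm 4.4.2 is a one-to-one translation into Python:

```python
def factorial(n):
    """Recursive n!, following Algorithm 4.4.2."""
    if n == 0:          # basis step
        return 1
    return n * factorial(n - 1)   # n! = n * (n - 1)!

print(factorial(5))  # 120
```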

Theorem: Algorithm 4.4.2 returns the value of n! , n ≥ 0.

Proof: Basis Step: For n = 0, the algorithm correctly returns 1.

Inductive Step: Assume that the algorithm correctly returns the value of (n−1)! , n > 0.

Suppose n is input to the algorithm. Since n > 0, proceed to line 4. By the assumption,

the algorithm correctly computes (n −1)! . Hence the algorithm correctly computes n! =

n · (n −1)! .

Recursive algorithms and their proof go hand-in-hand with mathematical induction.

Example 2. This algorithm recursively ﬁnds the smallest of a ﬁnite sequence of numbers.

Algorithm Recursively Finding the Minimum Value in a Sequence

This algorithm finds the smallest of the numbers s_1, s_2, . . . , s_n.

Input: s, n
Output: small (the smallest value in the sequence s)

min(s, n) {
    if (n == 1) {
        small = s_1
        return small
    }
    else {
        small = min(s, n − 1)
        if (small ≤ s_n)
            return small
        else {
            small = s_n
            return small
        }
    }
}


If n = 1, there is only one number in the sequence, and it is returned.

If n = 2, then min(s, 1) is recursively called, and this returns s_1. This is compared with s_2, and the smaller returned.

If n = 3, then min(s, 2) is recursively called, and this recursively calls min(s, 1), which returns s_1. Then min(s, 2) returns the smaller of s_1 and s_2, and this is compared with s_3 and the smaller returned.

Theorem: This algorithm correctly returns the value of the smallest of a ﬁnite sequence

of numbers.

Proof: Basis Step: For n = 1, the algorithm returns the only number in the sequence.

Inductive Step: Assume that the algorithm correctly returns the smallest value in a sequence of length n − 1.

Then for a sequence of length n, n > 1, the algorithm correctly computes the smallest of s_1, s_2, . . . , s_{n−1}, compares this with s_n, and returns the smaller.

Therefore the algorithm correctly returns the smallest number in a sequence of length n.
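The same recursion in Python (the name seq_min avoids shadowing Python's built-in min; Python lists are 0-based, so the pseudocode's s_n is s[n - 1]):

```python
def seq_min(s, n):
    """Smallest of the first n numbers of s, mirroring the pseudocode."""
    if n == 1:
        return s[0]
    small = seq_min(s, n - 1)          # smallest of s_1, ..., s_{n-1}
    return small if small <= s[n - 1] else s[n - 1]

print(seq_min([9, 2, 7, 4], 4))  # 2
```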

Example 3. Robot walk. Johnsonbaugh Example 4.4.5

A robot can take steps of 1 metre or 2 metres. In how many ways can the robot walk n

metres? Let walk(n) denote the number of ways. Then walk(1) = 1 and walk(2) = 2.

Distance   Sequence of Steps                          Number of Ways to Walk
1          1                                          1
2          1, 1 or 2                                  2
3          1, 1, 1 or 1, 2 or 2, 1                    3
4          1, 1, 1, 1 or 1, 1, 2                      5
           or 1, 2, 1 or 2, 1, 1 or 2, 2

Suppose n > 2. Then

walk(n) = walk(n −1) + walk(n −2).

Algorithm 4.4.6 Robot Walking

This algorithm computes the function deﬁned by

walk(n) =

1, n = 1

2, n = 2

walk(n −1) + walk(n −2), n > 2.

Input: n

Output: walk(n)

walk(n) {
    if (n == 1 ∨ n == 2)
        return n
    return walk(n − 1) + walk(n − 2)
}
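Algorithm 4.4.6 in Python; note that this naive recursion recomputes the same values many times, which is part of why such algorithms are analysed for running time:

```python
def walk(n):
    """Number of ways the robot can walk n metres (Algorithm 4.4.6)."""
    if n == 1 or n == 2:
        return n
    return walk(n - 1) + walk(n - 2)

print([walk(n) for n in range(1, 10)])  # [1, 2, 3, 5, 8, 13, 21, 34, 55]
```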

We can see that the sequence generated is

1, 2, 3, 5, 8, 13, 21, 34, 55, . . . .

This is the Fibonacci sequence {f_n}, and is defined by the equations

f_1 = 1
f_2 = 1
f_n = f_{n−1} + f_{n−2}, for all n ≥ 3.

The Fibonacci sequence arises in many natural situations, as well as mathematical ones.

For example, a pine cone can have 13 clockwise spirals and 8 counterclockwise spirals.

The ratio of successive terms in the Fibonacci sequence has a limit of the golden ratio,

lim_{n→∞} f_n / f_{n−1} = φ = (1 + √5)/2 = 1.6180339887498 . . . .

For example, 55/34 = 1.6176 . . . .

A classical formula for the Fibonacci numbers is

f_n = (1/√5) [ ((1 + √5)/2)^n − ((1 − √5)/2)^n ].

All those √5 occurrences, and the answer is an integer!
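A quick sketch checking the closed form against the recurrence (round() absorbs the floating-point error in √5, which is safe for small n):

```python
import math

def fib(n):
    """f_1 = f_2 = 1, f_n = f_{n-1} + f_{n-2}, computed iteratively."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def binet(n):
    """The classical closed form, rounded back to an integer."""
    s5 = math.sqrt(5)
    return round((((1 + s5) / 2) ** n - ((1 - s5) / 2) ** n) / s5)

assert all(binet(n) == fib(n) for n in range(1, 40))
print(fib(10), binet(10))  # 55 55
```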


Discrete Mathematics: Week 8

Graph Theory

Introduction to Graphs

Graph theory was introduced by Leonhard Euler in 1736, as a means of solving the Königsberg bridge problem. There was a revival of graph theory in the 1920s, with the first text produced in 1936. Graph theory involves a lot of terminology, as will be seen.

A graph is drawn with dots and lines, the dots being vertices and the lines are edges. The

important information in the graph is the connections, not the positions of the vertices

and edges.

Example 1. The following two graphs contain the same information.

[Figure: two different drawings of the same graph, with vertices v_1, . . . , v_5 and edges e_1, . . . , e_6.]

A path starts at one vertex v_1, travels along an edge to vertex v_2, and so on, arriving at v_n.

For a path to traverse every edge exactly once, and return to the original vertex, an even

number of edges must touch each vertex, for example, the graph in Example 1. In the

graph shown below, a path can traverse each edge exactly once, but will not return to the

original vertex.

[Figure: a graph on vertices v_1, v_2, v_3, v_4.]

Deﬁnition 8.1.1

A graph (or undirected graph) G consists of a set V of vertices (or nodes) and a set E of

edges (or arcs) such that each edge e ∈ E is associated with an unordered pair of vertices.

If there is a unique edge e associated with the vertices v and w, we write e = (v, w) or e = (w, v). In this context, (v, w) denotes an edge between v and w in an undirected graph and not an ordered pair.

A directed graph (or digraph) G consists of a set V of vertices (or nodes) and a set E of

edges (or arcs) such that each edge e ∈ E is associated with an ordered pair of vertices. If

there is a unique edge e associated with ordered pair (v, w) of vertices, we write e = (v, w),

which denotes an edge from v to w.

An edge e in a graph (undirected or directed) that is associated with the pair of vertices

v and w is said to be incident on v and w, and v and w are said to be incident on e and

to be adjacent vertices.

If G is a graph (undirected or directed) with vertices V and edges E, we write G = (V, E).

Unless speciﬁed otherwise, the sets E and V are assumed to be ﬁnite and V is assumed

to be nonempty.

Example 1. (Continued)

V = {v_1, v_2, v_3, v_4, v_5}
E = {e_1, e_2, e_3, e_4, e_5, e_6}
e_2 = (v_2, v_3) = (v_3, v_2)

Edge e_4 is incident on vertices v_2 and v_4, and vertices v_2 and v_4 are adjacent.

Example 2. The graph shown below is a directed graph.

[Figure: a directed graph with vertices v_1, . . . , v_7 and edges e_1, . . . , e_7.]

Directed edges are indicated by arrows.

Edge e_1 is associated with the ordered pair (v_2, v_1) of vertices.

Distinct edges can be associated with the same pair of vertices e.g. e_3 and e_4. These are parallel edges, and can also occur in undirected graphs.

[Figure: two parallel edges e_1 and e_2 between vertices v_1 and v_2.]


An edge incident on a single vertex is called a loop e.g. e_7.

A vertex not incident on any edge is called isolated e.g. v_7.

A graph with neither loops nor parallel edges is called a simple graph.

Example 3. Johnsonbaugh 8.1.5

Holes are being bored in a sheet of metal by a drill press, and these can be considered

as the vertices of a graph. There is a travel time associated with every edge between

vertices, so that we have a weighted graph. This is shown in the diagram below.

[Figure: a weighted graph on vertices a, b, c, d, e, with edge weights drawn from 2, 3, 4, 5, 6, 8, 9, 12.]

If an edge e is labelled k, we say that the weight of edge e is k.

The length of a path is the sum of the weighted edges in a path. A path of minimum

length that visits each vertex exactly once represents the optimum path for the drill press.

For example, for a path starting at b and ﬁnishing at e, we have path lengths:

Path Length

b, a, c, d, e 17

b, a, d, c, e 20

b, c, a, d, e 16

b, c, d, a, e 19

b, d, a, c, e 23

b, d, c, a, e 23

Additionally, all starting and ﬁnishing vertices need to be checked:

(a, b), (a, c), (a, d), (a, e), (b, c), (b, d), (b, e), (c, d), (c, e), (d, e).

It is expected that the reverse path will be of the same length.


Example 4. Johnsonbaugh Example 8.1.7: Similarity Graphs.

A particular algorithm in C++ is implemented by a number of persons. We wish to group

“like” programs into classes based on program properties. The properties are:

1. The number of lines in the program

2. The number of return statements in the program

3. The number of function calls in the program.

            Number of        Number of            Number of
Program     program lines    return statements    function calls
1           66               20                   1
2           41               10                   2
3           68               5                    8
4           90               34                   5
5           75               12                   14

A similarity graph is constructed with vertices corresponding to the programs. A vertex

is denoted (p_1, p_2, p_3) where p_i is the value of property i.

A dissimilarity function s is defined as follows. For each pair of vertices v = (p_1, p_2, p_3) and w = (q_1, q_2, q_3) set

s(v, w) = |p_1 − q_1| + |p_2 − q_2| + |p_3 − q_3|.

In this example, we have

s(v_1, v_2) = 36    s(v_2, v_3) = 38    s(v_3, v_4) = 54
s(v_1, v_3) = 24    s(v_2, v_4) = 76    s(v_3, v_5) = 20
s(v_1, v_4) = 42    s(v_2, v_5) = 48    s(v_4, v_5) = 46
s(v_1, v_5) = 30

For a fixed number S, insert an edge between vertices v and w if s(v, w) < S. For example, if S = 25 we have the graph:

[Figure: vertices v_1, . . . , v_5 with edges (v_1, v_3) and (v_3, v_5), the only pairs with s(v, w) < 25.]

We say that v and w are in the same class if v = w or if there is a path from v to w. Here

the classes are { 1, 3, 5 }, { 2 } and { 4 }. What are the classes if S = 40?
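The classes can be computed mechanically. This sketch builds the similarity graph from the table and finds its connected pieces by breadth-first search (the helper names are ours); it confirms the S = 25 classes and answers the S = 40 question:

```python
from collections import deque

# Property triples (lines, returns, calls) from the table above.
programs = {1: (66, 20, 1), 2: (41, 10, 2), 3: (68, 5, 8),
            4: (90, 34, 5), 5: (75, 12, 14)}

def s(v, w):
    """Dissimilarity: sum of absolute property differences."""
    return sum(abs(p - q) for p, q in zip(programs[v], programs[w]))

def classes(S):
    """Connected classes of the similarity graph with threshold S."""
    seen, result = set(), []
    for v in programs:
        if v in seen:
            continue
        comp, queue = {v}, deque([v])
        while queue:
            u = queue.popleft()
            for w in programs:
                if w not in comp and s(u, w) < S:
                    comp.add(w)
                    queue.append(w)
        seen |= comp
        result.append(comp)
    return result

print(classes(25))  # [{1, 3, 5}, {2}, {4}]
print(classes(40))  # [{1, 2, 3, 5}, {4}]
```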


Deﬁnition 8.1.9

The complete graph on n vertices, denoted K_n, is the simple graph with n vertices in which there is an edge between each pair of distinct vertices.

Example 5. The complete graph, K_4, on 4 vertices is: [Figure: K_4.]

Deﬁnition 8.1.11

A graph G = (V, E) is bipartite if there exist subsets V_1 and V_2 (either possibly empty) of V such that V_1 ∩ V_2 = ∅, V_1 ∪ V_2 = V, and each edge in E is incident on one vertex in V_1 and one vertex in V_2.

Example 6. The graph in the figure below is bipartite for

V_1 = {v_1, v_2, v_3} and V_2 = {v_4, v_5}.

[Figure: a bipartite graph on vertices v_1, . . . , v_5.]

Note that it is not required that there is an edge between every vertex in V_1 and every vertex in V_2.

Example 7. The graph in the following ﬁgure is not bipartite.

[Figure: a graph on vertices v_1, . . . , v_9.]


It is often easier to prove that a graph is not bipartite by arguing a contradiction.

Suppose the graph is bipartite. Then we can partition the set of vertices into two subsets V_1 and V_2.

Consider v_4, v_5 and v_6. Since v_4 and v_5 are adjacent, v_4 is in V_1 (say) and v_5 is in V_2. Since v_5 and v_6 are adjacent, v_6 is in V_1.

But v_4 and v_6 are adjacent and both are in V_1. Contradiction. Hence the graph is not bipartite.

Deﬁnition 8.1.15

The complete bipartite graph on m and n vertices, denoted K_{m,n}, is the simple graph whose vertex set is partitioned into sets V_1 with m vertices and V_2 with n vertices in which the edge set consists of all edges of the form (v_1, v_2) with v_1 ∈ V_1 and v_2 ∈ V_2.

Example 8. The complete bipartite graph on two and four vertices, K_{2,4}, is: [Figure: K_{2,4}.]


Paths and Cycles

Deﬁnition 8.2.1

Let v_0 and v_n be vertices in a graph. A path from v_0 to v_n of length n is an alternating sequence of n + 1 vertices and n edges beginning with vertex v_0 and ending with vertex v_n,

(v_0, e_1, v_1, e_2, v_2, . . . , v_{n−1}, e_n, v_n),

in which edge e_i is incident on vertices v_{i−1} and v_i for i = 1, . . . , n.

The formalism in Definition 8.2.1 means: start at vertex v_0; go along edge e_1 to v_1; go along edge e_2 to v_2; and so on.

Deﬁnition 8.2.4

A graph G is connected if given any vertices v and w in G, there is a path from v to w.

Example 1. Consider the following graph.

[Figure: a graph with vertices v_1, . . . , v_9 and edges e_1, . . . , e_12.]

Paths joining v_1 and v_2 are (v_1, v_4, v_2) of length 2 and (v_1, v_4, v_5, v_6, v_7, v_2) of length 5.

The graph is connected.

The graph

[Figure: vertices v_1, . . . , v_5 with edges e_1, e_2, e_3 forming two separate pieces]

is not connected.

A connected graph consists of one “piece”, and a graph that is not connected consists of two or more “pieces”. These “pieces” are subgraphs of the original graph called components.


Deﬁnition 8.2.8

Let G = (V, E) be a graph. We call (V′, E′) a subgraph of G if

(a) V′ ⊆ V and E′ ⊆ E.

(b) For every edge e′ ∈ E′, if e′ is incident on v′ and w′, then v′, w′ ∈ V′.

Example 1. (Continued) A subgraph is

[Figure: vertices v_1, v_2, v_3, v_4, v_5, v_7 and edges e_1, e_2, e_3, e_4, e_11.]

Deﬁnition 8.2.11

Let G be a graph and let v be a vertex in G. The subgraph G′ of G consisting of all edges and vertices in G that are contained in some path beginning at v is called the component of G containing v.

A connected graph, such as Example 1, has only one component, itself.

A graph such as

[Figure: a graph G on vertices v_1, . . . , v_5 with edges e_1, e_2, e_3, redrawn as its two pieces G_1 and G_2]

has two components G_1 and G_2.

We describe G_2 as G_2 = (V_2, E_2) with V_2 = {v_2, v_3, v_5} and E_2 = {e_2, e_3}.

Deﬁnition 8.2.14

Let v and w be vertices in a graph G.

A simple path from v to w is a path from v to w with no repeated vertices.

A cycle (or circuit) is a path of nonzero length from v to v with no repeated edges.


A simple cycle is a cycle from v to v, in which, except for the beginning and ending

vertices that are both equal to v, there are no repeated vertices.

Example 1. (Continued)

Path                                                Simple Path?   Cycle?   Simple Cycle?
(v_1, v_4, v_5, v_6, v_5)
(v_1, v_4, v_6, v_7, v_2)
(v_4, v_1, v_3, v_5, v_4, v_6, v_7, v_2, v_4)
(v_4, v_5, v_6, v_7, v_2, v_4)
(v_4)

The Königsberg Bridge Problem

Two islands in the Pregel River in Königsberg (now Kaliningrad in Russia) were connected to each other and the river banks by seven bridges, as shown in the diagram below.

[Figure: the Pregel River with land regions A, B, C, D and seven bridges.]

The problem is to start at any location, walk over each bridge exactly once, and ﬁnish at

the starting location.

Leonhard Euler solved the problem in 1736. The problem can be represented as the

following graph, where the vertices represent the locations and the edges represent the

bridges.

[Figure: a graph with vertices A, B, C, D and seven edges representing the bridges.]


Euler showed that there is no solution, as all vertices have an odd number of incident

edges.

A cycle in graph G that includes all of the edges and all of the vertices of G is called an

Euler cycle, in honour of Euler.

We introduce the idea of the degree of a vertex v, δ(v). This is the number of edges

incident on v.

Theorem 8.2.17

If a graph G has an Euler cycle, then G is connected and every vertex has even degree.

Proof: Suppose that G has an Euler cycle.

We have seen the argument that during the cycle, the path leaves each vertex for every

time that it is entered, and so all vertices must be of even degree.

If v and w are vertices of G, then the portion of the Euler cycle between v and w is a

path from v to w, and so G is connected.

Theorem 8.2.18

If G is a connected graph and every vertex has even degree, then G has an Euler cycle.

Proof: Johnsonbaugh gives a mathematical induction proof. See the text.

Example 2. For the Königsberg bridge problem,

δ(A) = 3, δ(B) = 5, δ(C) = 3, δ(D) = 3,

so there is not an Euler cycle.

If the graph G is as shown below, then G is connected and every vertex has even degree.

δ(v_1) = δ(v_2) = δ(v_3) = δ(v_5) = 4, δ(v_4) = 6, δ(v_6) = δ(v_7) = 2.

[Figure: a connected graph on vertices v_1, . . . , v_7, with every vertex of even degree.]

Hence an Euler cycle exists. By inspection, one is

(v_6, v_4, v_7, v_5, v_1, v_3, v_4, v_1, v_2, v_5, v_4, v_2, v_3, v_6).


Theorem 8.2.21

If G is a graph with m edges and vertices {v_1, v_2, . . . , v_n}, then

Σ_{i=1}^{n} δ(v_i) = 2m.

In particular, the sum of degrees of all of the vertices of the graph is even.

Proof: Each edge is counted twice, once from v_i to v_j, and once from v_j to v_i.
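Theorem 8.2.21 is easy to verify mechanically. The edge list below is read off the Euler cycle found in Example 2, so it reproduces that graph; treat the listing itself as illustrative:

```python
# Unordered edge pairs of the Example 2 graph (read off its Euler cycle).
edges = [(6, 4), (4, 7), (7, 5), (5, 1), (1, 3), (3, 4), (4, 1),
         (1, 2), (2, 5), (5, 4), (4, 2), (2, 3), (3, 6)]

degree = {}
for v, w in edges:
    degree[v] = degree.get(v, 0) + 1
    degree[w] = degree.get(w, 0) + 1

# The degrees match the example, and their sum is 2m (Theorem 8.2.21).
assert degree[4] == 6 and degree[6] == degree[7] == 2
assert sum(degree.values()) == 2 * len(edges)  # 26 = 2 x 13
print(degree)
```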

Corollary 8.2.22

In any graph, there are an even number of vertices of odd degree.

Theorem 8.2.23

A graph has a path with no repeated edges from v to w (v ≠ w) containing all the edges and vertices if and only if it is connected and v and w are the only vertices of odd degree.

For example, the graph below. [Figure: a connected graph on vertices v_1, . . . , v_5 with exactly two vertices of odd degree.]

Proof: Add an edge from v to w. Now there is an Euler cycle, as all vertices are of even degree. Removing the added edge from that cycle leaves the required path from v to w.

Theorem 8.2.24

If a graph G contains a cycle from v to v, G contains a simple cycle from v to v.

Proof: If C is a cycle from v_0 to v_n (v_0 = v_n), and C is not a simple cycle, then there must be vertices v_i = v_j in the path where i < j < n. Remove the portion of the path between v_i and v_j, and repeat the procedure if necessary. The cycle is eventually reduced to a simple cycle.


Discrete Mathematics: Week 9

Trees

Introduction

Example 1. Johnsonbaugh example on the draw for the semiﬁnals and ﬁnals at Wim-

bledon (many years ago).

[Figure: the draw. Semifinals: Graf vs Sabatini and Navratilova vs Seles; finals: Graf vs Seles; Wimbledon champion: Graf.]

The draw can be represented as a graph called a tree. It is rotated clockwise through 90° so as to appear as is shown on the right. If it is rotated through 90° the other way then it would appear as a natural tree.

[Figure: the tree drawn two ways, with vertices v1, . . . , v7; levels 0, 1 and 2 are marked, and the height is 2]

Deﬁnition 9.1.1

A (free) tree T is a simple graph satisfying the following: If v and w are vertices in T,

then there is a unique simple path from v to w.

A rooted tree is a tree in which a particular vertex is designated the root.

The level of a vertex v is the length of the simple path from the root to v.

The height of a rooted tree is the maximum level number that occurs.


Example 2. Johnsonbaugh 9.1.4 The tree T shown below will become the rooted tree T′ if we designate vertex e as the root.

[Figure: the tree T on vertices a, . . . , j, and the rooted tree T′ obtained by taking e as the root]

Huﬀman Codes

Character representation is often by fixed-length bit strings e.g. ASCII, where eight-bit strings are used for the 256-character extended character set.

Huﬀman codes represent characters by variable-length bit strings. Short bit strings are

used to represent the most frequently used characters.

A Huﬀman code is most easily deﬁned by a rooted tree. To decode a bit string, begin at

the root and move down the tree until a character is encountered. The bit, 0 or 1, means

move right or left. When a character is encountered, begin again at the root.

Given a tree that deﬁnes a Huﬀman code, any bit string can be uniquely decoded even

though the characters are represented by variable-length bit strings.
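The decoding procedure can be sketched in Python. The tree below is a nested-dict stand-in for the coding tree of Example 3; its exact shape is an assumption, chosen so that A = 010, R = 1, S = 0111, T = 0110 and O = 00.

```python
# A Huffman tree as nested dicts: an internal node maps each bit to a
# subtree, and a leaf is just the decoded character.
tree = {'1': 'R',
        '0': {'0': 'O',
              '1': {'0': 'A',
                    '1': {'0': 'T', '1': 'S'}}}}

def decode(bits, tree):
    out, node = [], tree
    for b in bits:
        node = node[b]             # follow one edge per bit
        if isinstance(node, str):  # reached a character: emit, restart at root
            out.append(node)
            node = tree
    return ''.join(out)

print(decode('010101110110', tree))  # ARST
```

Because no code is a prefix of another, the bit string splits unambiguously even though the codes have different lengths.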

Example 3. Decode the string 010101110110 from the following tree.

[Figure: a Huffman coding tree; the characters A, O, R, S and T are at the leaves, and each edge is labelled 0 or 1]

The string decodes as:

A R S T

010 1 0111 0110


Algorithm 9.1.9 Constructing an Optimal Huﬀman Code

This algorithm constructs an optimal Huﬀman code from a table giving the frequency

of occurrence of the characters to be represented. The output is a rooted tree with the

vertices at the lowest levels labelled with the frequencies and with the edges labelled with

bits. The coding tree is obtained by replacing each frequency by a character having that

frequency.

Input: A sequence of n frequencies, n ≥ 2

Output: A rooted tree that deﬁnes an optimal Huﬀman code

huffman(f, n) {
   if (n == 2) {
      let f1 and f2 denote the frequencies
      let T be as in the figure (on the left)
      return T
   }
   let fi and fj denote the smallest frequencies
   replace fi and fj in the list by fi + fj
   T′ = huffman(f, n − 1)
   replace a vertex in T′ labelled fi + fj by the
      tree shown in the figure (on the right) to obtain the tree T
   return T
}

[Figure: the two-frequency base tree with leaves f1 and f2 (left), and the replacement subtree with leaves fi and fj (right); edges labelled 0 and 1]

Example 4. We have the following table of frequencies.

Letter   Frequency
A        10
B        12
C        17
D        8
E        22


The algorithm begins by repeatedly replacing the two smallest frequencies with the sum

until a two element sequence is obtained.

10, 12, 17, 8, 22 −→ 18, 12, 17, 22

A, B, C, D, E A + D, B, C, E

12, 17, 18, 22 −→ 29, 18, 22

B, C, A + D, E B + C, A + D, E

18, 22, 29 −→ 40, 29

A + D, E, B + C A + D + E, B + C

The algorithm then constructs trees working backward, beginning with the two element

sequence 29, 40.

[Figure: the trees constructed at each stage, working backward; the leaf frequencies 8, 10, 12, 17 and 22 are combined into internal vertices 18, 29 and 40, and edges are labelled 0 and 1]

Now replace each frequency by a character having that frequency.

[Figure: the final coding tree, with the characters A, B, C, D and E at the leaves and edges labelled 0 and 1]

Then BED is coded as 11 | 00 | 011.
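The merge-the-two-smallest step above is naturally expressed with a priority queue. The sketch below follows the spirit of Algorithm 9.1.9 rather than its exact recursion; with ties broken by insertion order the bit strings may differ from the example, but the code lengths agree (A and D get 3 bits; B, C and E get 2).

```python
import heapq
from itertools import count

def huffman(freqs):
    """Build a prefix code (char -> bit string) by repeatedly merging
    the two smallest frequencies, in the spirit of Algorithm 9.1.9."""
    tick = count()  # tie-breaker so heapq never compares subtrees
    heap = [(f, next(tick), ch) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    code = {}
    def walk(node, bits):
        if isinstance(node, str):
            code[node] = bits or '0'   # single-character edge case
            return
        walk(node[0], bits + '0')
        walk(node[1], bits + '1')
    walk(heap[0][2], '')
    return code

code = huffman({'A': 10, 'B': 12, 'C': 17, 'D': 8, 'E': 22})
# The rarer letters get the longer codes.
print(sorted((ch, len(bits)) for ch, bits in code.items()))
# [('A', 3), ('B', 2), ('C', 2), ('D', 3), ('E', 2)]
```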


Terminology and Characterizations of Trees

Deﬁnition 9.2.1

Let T be a tree with root v0. Suppose that x, y and z are vertices in T and that (v0, v1, . . . , vn) is a simple path in T. Then

(a) vn−1 is the parent of vn.

(b) v0, . . . , vn−1 are ancestors of vn.

(c) vn is a child of vn−1.

(d) If x is an ancestor of y, y is a descendant of x.

(e) If x and y are children of z, x and y are siblings.

(f) If x has no children, x is a terminal vertex (or a leaf).

(g) If x is not a terminal vertex, x is an internal (or branch) vertex.

(h) The subtree of T rooted at x is the graph with vertex set V and edge set E, where V is x together with the descendants of x and

E = {e | e is an edge on a simple path from x to some vertex in V }.

Example 1. Consider the following tree.

[Figure: a rooted tree on the vertices a, . . . , j]

The parent of b is: e

The ancestors of a are: b, e, g

The children of b are: a, c

The descendants of e are: d, f, b, a, c

The siblings of d are: f, b

The terminal vertices are: d, f, a, c, h, j

The internal vertices are: g, e, i, b


The subtree rooted at e is:

[Figure: the subtree rooted at e, containing the vertices a, b, c, d, e and f]

Example 2. Greek gods. Johnsonbaugh Example 9.2.2

Uranus

Aphrodite Kronos Atlas Prometheus

Eros Zeus Poseidon Hades Ares

Apollo Athena Hermes Heracles

The parent of Eros is:

The ancestors of Hermes are:

The children of Zeus are:

The descendants of Kronos are:

The siblings of Atlas are:

The terminal vertices are:

The internal vertices are:

The subtree rooted at Kronos is:

Kronos

Zeus Poseidon Hades Ares

Apollo Athena Hermes Heracles


Theorem 9.2.3

Let T be a graph with n vertices. The following are equivalent.

(a) T is a tree.

(b) T is connected and acyclic.

(c) T is connected and has n − 1 edges.

(d) T is acyclic and has n − 1 edges.

Partial proof: The proof is as follows. We show

if (a), then (b); if (b), then (c); if (c), then (d); if (d), then (a),

and so all must be equivalent.

We will show:

If (a), then (b).

Let T be a tree. Then T is connected as there is a simple path from any vertex to any

other vertex.

Suppose T contains a cycle. Then by a previous theorem, Theorem 8.2.24, T contains a

simple cycle i.e.

C = (v0, v1, . . . , vn)

with v0 = vn.

C cannot be a loop since T is a simple graph (no loops or parallel edges). Hence C contains at least two distinct vertices vi and vj with i < j.

The paths

(vi, vi+1, . . . , vj) and (vi, vi−1, . . . , v0, vn−1, . . . , vj)

are distinct simple paths from vi to vj, contradicting the definition of a tree.

Therefore, a tree cannot contain a cycle.


Spanning Trees Johnsonbaugh 9.3

Deﬁnition 9.3.1

A tree T is a spanning tree of a graph G if T is a subgraph of G that contains all of the

vertices of G.

Example 1. Johnsonbaugh Example 9.3.2

A spanning tree of the graph below is shown in black. Other spanning trees are possible.

[Figure: a graph on the vertices a, . . . , h, with the edges of a spanning tree shown in black]

Theorem 9.3.4

A graph G has a spanning tree if and only if G is connected.

Proof:

If G has a spanning tree, G is connected.

If G is connected, progressively remove edges from cycles until G is acyclic.

This procedure is ineﬃcient in practice.

Algorithm 9.3.6 Breadth-First Search for a Spanning Tree

This algorithm is formally given in Johnsonbaugh, but a more informal description of the

procedure is as follows.

Select an ordering, e.g. abcdefgh of the vertices of G.

Select the ﬁrst vertex as the root e.g. a.

The tree T initially consists of the single vertex a and no edges.

Add to T all edges (a, x) and the vertices on which they are incident, x = b to h, that do

not produce a cycle when added to T. This gives all level 1 vertices.

Repeat with the vertices on level 1, then level 2, and so on, until no further edges can be

added.
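The informal procedure above can be sketched in Python. The adjacency lists below are an assumption, reconstructed so that the search behaves as in Example 1.

```python
from collections import deque

# Assumed adjacency lists for the graph of Example 1.
G = {'a': ['b', 'c', 'g'], 'b': ['a', 'd'], 'c': ['a', 'd', 'e'],
     'd': ['b', 'c', 'f'], 'e': ['c', 'f', 'g'], 'f': ['d', 'e', 'h'],
     'g': ['a', 'e'], 'h': ['f']}

def bfs_spanning_tree(G, root):
    tree, seen, q = [], {root}, deque([root])
    while q:
        v = q.popleft()
        for w in sorted(G[v]):    # respect the chosen vertex ordering
            if w not in seen:     # adding (v, w) cannot create a cycle
                seen.add(w)
                tree.append((v, w))
                q.append(w)
    return tree

print(bfs_spanning_tree(G, 'a'))
# [('a','b'), ('a','c'), ('a','g'), ('b','d'), ('c','e'), ('d','f'), ('f','h')]
```

The edges come out level by level, matching the run described in Example 1 (continued) below.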


Example 1. (Continued)

Select the ordering abcdefgh.

Select a as the root.

Add edges (a, b), (a, c), (a, g).

Add edges for vertices on level 1: b add (b, d)

c add (c, e)

g none

Add edges for vertices on level 2: d add (d, f)

e none

Add edges for vertices on level 3: f add (f, h)

Add edges for vertices on level 4: h none

The spanning tree is shown in the previous diagram.

Algorithm 9.3.7 Depth-First Search for a Spanning Tree

This algorithm is formally given in Johnsonbaugh, but a more informal description of the

procedure is as follows.

Select an ordering, e.g. abcdefgh of the vertices of G.

Select the ﬁrst vertex as the root e.g. a, the current vertex.

At each step, add to the tree an edge incident to the current vertex that doesn’t create a

cycle, using the vertex order to prioritize. Make the new vertex the current vertex.

If an edge can’t be added, backtrack by making the parent the current vertex.

Continue until the current vertex is again the root.
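A recursive sketch, where the function-call stack does the backtracking; the adjacency lists are the same assumed ones as for breadth-first search.

```python
# Assumed adjacency lists for the graph of Example 1.
G = {'a': ['b', 'c', 'g'], 'b': ['a', 'd'], 'c': ['a', 'd', 'e'],
     'd': ['b', 'c', 'f'], 'e': ['c', 'f', 'g'], 'f': ['d', 'e', 'h'],
     'g': ['a', 'e'], 'h': ['f']}

def dfs_spanning_tree(G, root):
    tree, seen = [], {root}
    def visit(v):                 # returning from visit() is the backtrack
        for w in sorted(G[v]):    # use the vertex order to prioritize
            if w not in seen:
                seen.add(w)
                tree.append((v, w))
                visit(w)
    visit(root)
    return tree

print(dfs_spanning_tree(G, 'a'))
# [('a','b'), ('b','d'), ('d','c'), ('c','e'), ('e','f'), ('f','h'), ('e','g')]
```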

Example 1. (Continued)

[Figure: the same graph on the vertices a, . . . , h]


Select the ordering abcdefgh.

Select a as the root.

Add edges (a, b), (b, d), (d, c), (c, e), (e, f), (f, h).

Backtrack to f, then e. Add (e, g).

Backtrack to e, c, d, b, a. Ends.

Minimal Spanning Trees Johnsonbaugh 9.4

Deﬁnition 9.4.1

Let G be a weighted graph. A minimal spanning tree of G is a spanning tree of G with

minimum weight.

Example 1. There are six cities 1–6, and the costs of building roads between certain

pairs of them are shown on the following graph.

[Figure: a weighted graph on the vertices 1, . . . , 6; the edge weights appear in the tables below]

Breadth-First Search Select the order 123456.

Select 1 as the root.

Add edges (1, 2), (1, 3), (1, 5).

Add edges for vertices on level 1: 2 add (2, 4)

3 add (3, 6)

5 none

Add edges for vertices on level 2: 4 none

6 none

The weight is 17, and the spanning tree is as shown below.


[Figure: the breadth-first spanning tree, of weight 17]

Prim’s Algorithm Johnsonbaugh Algorithm 9.4.3

This algorithm is formally given in Johnsonbaugh, but a more informal description of the

procedure is as follows.

The algorithm begins with a single vertex. At each iteration, add to the current tree a

minimum weight edge that does not complete a cycle.
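A minimal sketch of this greedy step, using the edge weights from the tables of Example 1. At each pass it scans all edges with exactly one endpoint in the tree and takes the cheapest; a real implementation would use a priority queue instead of rescanning.

```python
# Edge weights of Example 1, read off the tables in the notes.
w = {(1, 2): 4, (1, 3): 2, (1, 5): 3, (3, 4): 1, (3, 5): 6,
     (3, 6): 3, (2, 4): 5, (4, 6): 6, (5, 6): 2}

def prim(weights, start):
    vertices = {v for e in weights for v in e}
    in_tree, tree = {start}, []
    while in_tree != vertices:
        # cheapest edge with exactly one endpoint in the current tree
        u, v = min((e for e in weights
                    if (e[0] in in_tree) != (e[1] in in_tree)),
                   key=lambda e: weights[e])
        in_tree |= {u, v}
        tree.append((u, v))
    return tree, sum(weights[e] for e in tree)

tree, total = prim(w, 1)
print(tree, total)  # total = 12, as in the worked example
```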

Example 1. (Continued)

Begin with vertex 1. Edges with one vertex in the tree and one vertex not in the tree are:

Edge Weight

(1, 2) 4

(1, 3) 2

(1, 5) 3

Select edge (1, 3) with minimum weight.

Edges with one vertex in the tree and one vertex not in the tree are:

Edge Weight

(1, 2) 4

(1, 5) 3

(3, 4) 1

(3, 5) 6

(3, 6) 3

Select edge (3, 4) with minimum weight.

Edges with one vertex in the tree and one vertex not in the tree are:

Edge Weight

(1, 2) 4

(1, 5) 3

(3, 5) 6

(3, 6) 3

(4, 2) 5

(4, 6) 6

A minimal spanning tree will be constructed whether (1, 5) or (3, 6) is selected, as both have minimum weight 3. Select edge (1, 5).


Edges with one vertex in the tree and one vertex not in the tree are:

Edge Weight

(1, 2) 4

(3, 6) 3

(4, 2) 5

(4, 6) 6

(5, 6) 2

Select edge (5, 6) with minimum weight.

Edges with one vertex in the tree and one vertex not in the tree are:

Edge Weight

(1, 2) 4

(4, 2) 5

Select edge (1, 2) with minimum weight.

The minimal spanning tree, shown below, has weight 12.

[Figure: the minimal spanning tree, of weight 12]


Discrete Mathematics: Week 10

Binary Trees

Deﬁnition 9.5.1

A binary tree is a rooted tree in which each vertex has either no children, one child, or

two children. If a vertex has one child, that child is designated as either a left child or

a right child (but not both). If a vertex has two children, one child is designated a left

child and the other child is designated a right child.

Example 1. Johnsonbaugh Example 9.5.2

In the binary tree below, b is the left child of vertex a, and c is the right child of vertex a.

Vertex d is the right child of vertex b; vertex b has no left child.

Vertex e is the left child of vertex c; vertex c has no right child.

[Figure: a binary tree on the vertices a, . . . , g]

A full binary tree is a binary tree in which each vertex has either two children or zero

children.

Theorem 9.5.4

If T is a full binary tree with i internal vertices, then T has i + 1 terminal vertices and

2i + 1 total vertices.

Proof:

The i internal vertices each have two children, so there are 2i children.

Only the root is a nonchild.

Therefore the total number of vertices is 2i + 1, and the number of terminal vertices is

(2i + 1) − i = i + 1.
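The count in Theorem 9.5.4 can be spot-checked on randomly generated full binary trees, here represented as nested pairs with the string 'leaf' marking terminal vertices (a representation chosen only for this sketch).

```python
import random

def random_full_tree(depth=4):
    # every vertex has either 0 or 2 children, so the tree is full
    if depth == 0 or random.random() < 0.4:
        return 'leaf'
    return (random_full_tree(depth - 1), random_full_tree(depth - 1))

def counts(t):
    """Return (internal vertices, terminal vertices) of a full tree."""
    if t == 'leaf':
        return (0, 1)
    (i1, t1), (i2, t2) = counts(t[0]), counts(t[1])
    return (i1 + i2 + 1, t1 + t2)

for _ in range(1000):
    i, t = counts(random_full_tree())
    assert t == i + 1          # Theorem 9.5.4: t = i + 1
print('t = i + 1 held for 1000 random full binary trees')
```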

Example 2. In a single elimination tournament, each contestant is eliminated after one

loss. The tree structure is as shown below. Winners progress to the right, and there is a

single winner eventually at the root.

Some contestants receive byes if there are not initially 2^n contestants.


[Figure: a single-elimination draw for Contestants 1 to 7, with a single Winner at the root]

Theorem 9.5.6

If a binary tree of height h has t terminal vertices, then

lg t ≤ h.

(Or equivalently, t ≤ 2^h.)

Proof: We will prove t ≤ 2^h by mathematical induction.

Basis Step: If h = 0, the binary tree has a single vertex, and t = 1. Then t = 2^h = 2^0 = 1.

Inductive Step: Assume the result is true for any binary tree of height less than h.

Let T be a binary tree of height h > 0 with t terminal vertices. Suppose that the root of T has one child. Eliminate the root and the edge incident on the root. The remaining tree has height h − 1 and t terminal vertices. By the assumption,

t ≤ 2^(h−1) < 2^h

and case h is true.

Suppose the root of T has children v1 and v2, and the subtrees rooted at v1 and v2 have heights h1 and h2 and numbers of terminal vertices t1 and t2 respectively. Then

h1 ≤ h − 1 and h2 ≤ h − 1,

and

t = t1 + t2 ≤ 2^h1 + 2^h2 ≤ 2^(h−1) + 2^(h−1) = 2^h.

Hence the inductive step is veriﬁed.

Since the Basis Step and Inductive Step have been veriﬁed, the Principle of Mathematical

Induction tells us that the theorem is true.


Deﬁnition 9.5.8

A binary search tree is a binary tree T in which data are associated with the vertices.

The data are arranged so that, for each vertex v in T, each data item in the left subtree

of v is less than the data item in v, and each data item in the right subtree of v is greater

than the data item in v.

Algorithm 9.5.10 in Johnsonbaugh gives a formal approach for constructing a binary

search tree. A less formal approach is as follows.

• Start with an empty tree (no vertices or edges).

• Inspect the items in order.

• Place the ﬁrst item in the root.

• Compare each following item in turn with the current vertex, beginning with the

root.

• If item < vertex, move to the left child.

• If item > vertex, move to the right child.

• Continue to compare with each new vertex.

• If there is no child in that position, place the item there.

• Move to the next item and start over, comparing with the root.
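The insertion procedure can be sketched as follows, building the tree of Example 3 below.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)              # empty spot: place the item here
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root                       # duplicates are ignored

root = None
for item in 'onpdujtlm':              # the items of Example 3, in order
    root = insert(root, item)

print(root.key, root.left.key, root.right.key)  # o n p
```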

Example 3. Build a binary search tree for items in order

o, n, p, d, u, j, t, l, m,

using lexicographic order.

[Figure: the resulting binary search tree, with root o, left child n and right child p]


Example 4. Build a binary search tree for the words

“by nineteen ninety no Australian child will be living in poverty”.

[Figure: the binary search tree built from these words, with root "by"]

Searching a Binary Search Tree

The algorithm is as follows.

• Given a data item D.

• Begin at the root as the current vertex.

• If D < vertex, go to the left child.

• If D > vertex, go to the right child.

• If D = vertex, the data item is found.

• If the child to move to is missing, D is not in the tree.
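A sketch of the search, counting comparisons. It rebuilds the tree of Example 3 using immutable (key, left, right) triples, a representation chosen here for brevity.

```python
def insert(t, key):
    if t is None:
        return (key, None, None)
    k, left, right = t
    if key < k:
        return (k, insert(left, key), right)
    if key > k:
        return (k, left, insert(right, key))
    return t

def search(t, key, steps=0):
    """Return (found, comparisons): walk left or right at each vertex
    until the key or a missing child is met."""
    if t is None:
        return False, steps
    k, left, right = t
    if key == k:
        return True, steps + 1
    return search(left if key < k else right, key, steps + 1)

t = None
for item in 'onpdujtlm':
    t = insert(t, item)

print(search(t, 'm'))   # (True, 6)
print(search(t, 'z'))   # (False, 3): fell off the tree after o, p, u
```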

Worst-case Search Time

The worst-case search is to search the longest path from the root when the item is not

present.

Suppose that there are n internal vertices, or data items. The terminal vertices would

correspond to missing children. So if the item is not in the tree, we need to check down

to the appropriate terminal vertex.

For a full binary tree with n internal vertices, the number of terminal vertices is t = n+1.

We know that lg t ≤ h, where h is the height of the tree. Hence the worst-case search

time is

⌈ lg t ⌉ = ⌈ lg (n + 1) ⌉.


For example, if a tree contains a million items, then

⌈ lg 1, 000, 000 ⌉ = 20,

so that a search will ﬁnd an item, or determine if it is not present, in at most 20 steps.

There are algorithms to minimize the height of a binary search tree.

Tree Traversals Johnsonbaugh 9.6

Breadth-first search and depth-first search are ways to traverse a tree in a systematic way
such that each vertex is visited exactly once. This section considers three other tree-
traversal methods, which are defined recursively. Johnsonbaugh gives formal algorithms
for each of these; we will consider simpler formulations.

Preorder Traversal Recursive algorithm 9.6.1

Preorder the root: process the root, preorder the left child, preorder the right child.

Inorder Traversal Recursive algorithm 9.6.3

Inorder the root: inorder the left child, process the root, inorder the right child.

Postorder Traversal Recursive algorithm 9.6.5

Postorder the root: postorder the left child, postorder the right child, process the root.
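The three recursive definitions translate directly into Python. The small tree below is an assumption for illustration (it is not the tree of Example 5): each vertex is a (label, left, right) triple and None marks a missing child.

```python
# a has children b and c; b has children d and e.
tree = ('a', ('b', ('d', None, None), ('e', None, None)),
             ('c', None, None))

def preorder(t):
    return [] if t is None else [t[0]] + preorder(t[1]) + preorder(t[2])

def inorder(t):
    return [] if t is None else inorder(t[1]) + [t[0]] + inorder(t[2])

def postorder(t):
    return [] if t is None else postorder(t[1]) + postorder(t[2]) + [t[0]]

print(preorder(tree))   # ['a', 'b', 'd', 'e', 'c']
print(inorder(tree))    # ['d', 'b', 'e', 'a', 'c']
print(postorder(tree))  # ['d', 'e', 'b', 'c', 'a']
```

The only difference between the three is where the root is processed relative to the two recursive calls.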

Example 5. Consider the following tree.

A

B

C D

E

F

G

H

I J

Preorder: A B F

ABC D F G

ABCDEFG H

ABCDEFGHI J


Inorder: B A F

CB D AF G

CBDEAF H G

CBDEAFI HJ G

Postorder: B F A

C D B G FA

CEDB H GFA

CEDBI J HGFA

Arithmetic Expressions

The operators +, −, × and ÷ ( or +, −, ∗, / ) operate on pairs of operands or expressions,

where the operator appears between its operands. An example is

(A + B) × C − D ÷ E.

This is the inﬁx form of an expression.

An arithmetic expression can be represented as a binary tree.

The terminal vertices correspond to the operands.

The internal vertices correspond to the operators.

An operator operates on its left and right subtrees.

Example 6. The expression (A+B) ×C − D ÷E can be represented as the tree in the

diagram below.

[Figure: the expression tree; the internal vertices are −, ×, + and ÷, and the leaves are A, B, C, D and E]

For inorder traversal, parentheses are put around each operation. The parentheses dictate
the order of the operations, and the hierarchy of the operators need not be specified. Some
pairs of parentheses may not be necessary.

Inorder:

× − ÷

+ × C

− ( D ÷ E )

( ( ( A + B ) × C ) − ( D ÷ E ) )


The preorder traversal is as follows. This is known as the preﬁx form of the expression or

Polish notation. No parentheses are required for unambiguous evaluation.

Preorder: − × ÷

− × + C ÷ DE

− × +AB C ÷ DE

The postorder traversal is as follows. This is known as the postﬁx form of the expression or

reverse Polish notation. Again, no parentheses are required for unambiguous evaluation.

Postorder: × ÷ −

+ C × DE ÷ −

AB + C × DE ÷ −
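Postfix form is convenient precisely because it can be evaluated left to right with a stack and no parentheses: operands are pushed, and each operator pops its two operands. A sketch, with hypothetical variable values supplied in env:

```python
def eval_postfix(tokens, env):
    """Evaluate a postfix (reverse Polish) expression with a stack."""
    ops = {'+': lambda x, y: x + y, '-': lambda x, y: x - y,
           '*': lambda x, y: x * y, '/': lambda x, y: x / y}
    stack = []
    for tok in tokens:
        if tok in ops:
            y = stack.pop()          # right operand is on top
            x = stack.pop()
            stack.append(ops[tok](x, y))
        else:
            stack.append(env[tok])   # operand: look up its value
    return stack.pop()

# (A + B) * C - D / E  in postfix is  A B + C * D E / -
env = {'A': 1, 'B': 2, 'C': 3, 'D': 8, 'E': 4}
print(eval_postfix('AB+C*DE/-', env))  # (1+2)*3 - 8/4 = 7.0
```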




Definition. The truth value of the proposition p ∨ q is defined by the truth table

p q p ∨ q
T T T
T F T
F T T
F F F

In essence, p ∨ q is true provided that p or q (or both) are true, and is false otherwise.

Definition. The negation of p, denoted ¬ p, is the proposition

not p.

The truth value of the proposition ¬ p is defined by the truth table

p ¬ p
T F
F T

In English, we sometimes write ¬ p as "It is not the case that p."


Example 2. We have

p : The digit 1 occurs twice in the first 13 digits of π,
q : The digit 7 does not occur in the first 13 digits of π,
r : The first 13 digits of π sum to 60.

The compound proposition is "Either 1 occurs twice in the first 13 digits of π and the digit 7 occurs at least once in the first 13 digits of π, or the first 13 digits of π sum to 60." The proposition can be written symbolically as (p ∧ ¬ q) ∨ r.

The first digits of π are

π = 3.141592653589 79323846 . . . .

Then

(p ∧ ¬ q) ∨ r = (T ∧ ¬ T) ∨ F
             = (T ∧ F) ∨ F
             = F ∨ F
             = F,

and so the compound proposition is false.

Conditional Propositions and Logical Equivalence

Definition. If p and q are propositions, the proposition

if p then q

is called a conditional proposition and is denoted p → q. The proposition p is called the hypothesis (or antecedent) and the proposition q is called the conclusion (or consequent).

Example 3. The lecturer states that if a student gets more than 50% then the student will pass the course.

p : The student gets more than 50%,
q : The student passes the course.

If p and q are both true then p → q is true. If p is true and q is false then p → q is false.

If p is false then p → q does not depend on the conclusion's truth value, and so is regarded as true. This last often presents some difficulty in comprehending. We can think of it in this way: if the student does not get more than 50%, we cannot regard p → q as false, and so it is considered true. This gives the following truth table.

Definition. The truth value of the conditional proposition p → q is defined by the following truth table:

p q p → q
T T T
T F F
F T T
F F T

Note that p only if q is considered logically the same as if p then q. An example of this is the two statements "The student is eligible to take Maths 3 only if the student has passed Maths 2" and "If the student takes Maths 3 then the student has passed Maths 2," which are logically equivalent.

Definition. If p and q are propositions, the proposition p if and only if q is called a biconditional proposition and is denoted p ↔ q. The truth value of the proposition p ↔ q is defined by the following truth table:

p q p ↔ q
T T T
T F F
F T F
F F T

Note that p ↔ q means that p is a necessary and sufficient condition for q. The proposition "p if and only if q" can be written "p iff q".

Definition. Suppose that the propositions P and Q are made up of the propositions p1, . . . , pn. We say that P and Q are logically equivalent and write P ≡ Q, provided that, given any truth values of p1, . . . , pn, either P and Q are both true, or P and Q are both false.

Example 4. Verify the first of De Morgan's laws,

¬ (p ∨ q) ≡ ¬ p ∧ ¬ q;

the second, ¬ (p ∧ q) ≡ ¬ p ∨ ¬ q, will be a tutorial exercise. Here P = ¬ (p ∨ q) and Q = ¬ p ∧ ¬ q.

p q p ∨ q ¬ p ¬ q ¬ (p ∨ q) ¬ p ∧ ¬ q
T T  T    F   F     F         F
T F  T    F   T     F         F
F T  T    T   F     F         F
F F  F    T   T     T         T

Thus P and Q are logically equivalent.

Definition. The converse of the conditional proposition p → q is the proposition q → p. The contrapositive (or transposition) of the conditional proposition p → q is the proposition ¬ q → ¬ p.

Theorem 1. The conditional proposition p → q and its contrapositive ¬ q → ¬ p are logically equivalent.

Proof: The truth table

p q p → q ¬ q ¬ p ¬ q → ¬ p
T T   T    F   F      T
T F   F    T   F      F
F T   T    F   T      T
F F   T    T   T      T

shows that p → q and ¬ q → ¬ p are logically equivalent.

Some theorems in mathematics are best proved by using the contrapositive. It is likely that you have seen some in matriculation mathematics or Engineering Mathematics 1 or 2E.

Exercise: Show that p → q ≡ ¬ p ∨ q.

Definition. A compound proposition is a tautology if it is true regardless of the truth values of its component propositions. A compound proposition is a contradiction if it is false regardless of the truth values of its component propositions.

Example 5.

p ¬ p p ∨ ¬ p p ∧ ¬ p
T  F     T       F
F  T     T       F

So p ∨ ¬ p is a tautology, and p ∧ ¬ p is a contradiction.

Exercise: Show that (p ∧ (p → q)) → q is a tautology.
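Truth-table checks like these exercises can be automated by enumerating all assignments. A sketch verifying that (p ∧ (p → q)) → q is a tautology:

```python
from itertools import product

def implies(p, q):
    # the conditional p -> q is false only when p is true and q is false
    return (not p) or q

# Evaluate (p ∧ (p → q)) → q for every truth assignment of p and q.
rows = [implies(p and implies(p, q), q)
        for p, q in product([True, False], repeat=2)]
print(all(rows))  # True: a tautology
```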

Quantifiers

Consider the statement

p : n is a prime number.

The statement p is not a proposition, because a proposition is either true or false. We have that p is true if n = 7, and false if n = 8.

Definition. Let P(x) be a statement involving the variable x and let D be a set. We call P a propositional function or predicate (with respect to D) if for each x in D, P(x) is a proposition. We call D the domain of discourse of P.

Example 6. Let P(n) be the statement

P(n) : n is a prime number,

and let D be the set of positive integers. Then P is a propositional function with domain of discourse D since for each n in D, P(n) is a proposition which is either true or false. P is true for n = 2, 3, 5, 7, . . . , and is false for n = 1, 4, 6, 8, . . . .

A propositional function P by itself is neither true nor false, but is true or false for each x in its domain of discourse.

Example 7. Let P(x) be the statement

P(x) : x^2 − 5x + 6 = 0,

and let D be the set of positive integers. Then P is a propositional function and is true for x = 2 or x = 3, and is false for all other positive integers.

Definition. Let P be a propositional function with domain of discourse D. The statement

for every x, P(x)

is said to be a universally quantified statement. The symbol ∀ means "for every". Thus the statement for every x, P(x) may be written ∀x P(x). The symbol ∀ is called a universal quantifier.

The statement ∀x P(x) is true if P(x) is true for every x in D. The statement ∀x P(x) is false if P(x) is false for at least one x in D.

Example 8. The universally quantified statement "for every positive integer n greater than 1, 2^n − 1 is prime" is false. We need to prove a universally quantified statement for all x to prove it true, but we only need a counterexample to prove it false.

n = 2   2^2 − 1 = 3
n = 3   2^3 − 1 = 7
n = 4   2^4 − 1 = 15.

Since 15 = 3 × 5 is not prime, n = 4 is a counterexample.

Definition. Let P be a propositional function with domain of discourse D. The statement

there exists x, P(x)

is said to be an existentially quantified statement. The symbol ∃ means "there exists". Thus the statement there exists x, P(x) may be written ∃x P(x). The symbol ∃ is called an existential quantifier.

The statement ∃x P(x) is true if P(x) is true for at least one x in D. The statement ∃x P(x) is false if P(x) is false for every x in D.

Example 9. The existentially quantified statement "for some positive integer n, 2^n − 1 is divisible by 11" is true. The first case where 2^n − 1 is divisible by 11 is for n = 10.

n = 1    2^1 − 1 = 1
n = 2    2^2 − 1 = 3
n = 3    2^3 − 1 = 7
n = 4    2^4 − 1 = 15
n = 5    2^5 − 1 = 31
n = 6    2^6 − 1 = 63
n = 7    2^7 − 1 = 127
n = 8    2^8 − 1 = 255
n = 9    2^9 − 1 = 511
n = 10   2^10 − 1 = 1023.
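Both quantified claims about 2^n − 1 can be checked by brute force:

```python
# Exhibit the counterexample showing "2**n - 1 is prime for all n > 1"
# is false: 2**4 - 1 = 15 = 3 * 5.
print(2**4 - 1)  # 15

# Find the first n for which 2**n - 1 is divisible by 11.
first = next(n for n in range(1, 100) if (2**n - 1) % 11 == 0)
print(first, 2**first - 1)  # 10 1023  (and 1023 = 11 * 93)
```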

The variable x in the propositional function P(x) is called a free variable, in that x is "free" to roam over the domain of discourse. The variable x in the universally quantified statement ∀x P(x) or in the existentially quantified statement ∃x P(x) is a bound variable, in that x is "bound" by the quantifier.

Theorem 2. Generalised De Morgan Laws for Logic

If P is a propositional function, each pair of propositions in (a) and (b) has the same truth values (i.e. either both are true or both are false).

(a) ¬ (∀x P(x)); ∃x ¬ P(x)
(b) ¬ (∃x P(x)); ∀x ¬ P(x)

Proof of (a): If ¬ (∀x P(x)) is true, then ∀x P(x) is false. Hence P(x) is false for at least one x in the domain of discourse, and so ¬ P(x) is true for at least one x in the domain of discourse. Hence ∃x ¬ P(x) is true. Similarly, if ¬ (∀x P(x)) is false, then ∃x ¬ P(x) is false.

Example 10. Verify that the existentially quantified statement "for some real number x, 1/(x^2 + 1) > 1" is false.

We must show that 1/(x^2 + 1) > 1 is false for all real numbers x. Now 1/(x^2 + 1) > 1 is false when 1/(x^2 + 1) ≤ 1 is true. For any real number x,

0 ≤ x^2
1 ≤ x^2 + 1

and so

1/(x^2 + 1) ≤ 1.

Hence 1/(x^2 + 1) ≤ 1 is true for all real numbers x.

Example 10. (Continued) We have that P(x) is the statement 1/(x^2 + 1) > 1, and aim to show that ∃x P(x) is false. We do this by verifying that for every real number x, P(x) is false i.e. ¬ P(x) is true i.e. ∀x ¬ P(x) is true. Then

∀x ¬ P(x)      is true
¬ (∀x ¬ P(x))  is false
∃x ¬ (¬ P(x))  is false   (by De Morgan's laws)
∃x P(x)        is false.

Example 11. Consider the well-known proverb

All that glitters is not gold.

This can be interpreted in English in two ways:

Every object that glitters is not gold.
Some object that glitters is not gold.

The intention is that the second is correct. The ambiguity comes from applying the negative to Q(x) rather than to the whole statement.

If we let P(x) be the propositional function "x glitters" and Q(x) be the propositional function "x is gold", then the first interpretation is represented as ∀x (P(x) → ¬ Q(x)), and the second interpretation is represented as ∃x (P(x) ∧ ¬ Q(x)).

In a similar way in which the logical equivalence of the Exercise p → q ≡ ¬ p ∨ q was shown, we can show that ¬ (p → q) ≡ p ∧ ¬ q. Hence

∃x (P(x) ∧ ¬ Q(x)) ≡ ∃x ¬ (P(x) → Q(x)) ≡ ¬ (∀x (P(x) → Q(x)))

by De Morgan's laws. We can read this last line as "it is not true that for all x, if x glitters then x is gold". This has been shown to be logically equivalent to "some object that glitters is not gold".

Discrete Mathematics: Week 2

Nested Quantifiers

Example 1. Consider the statement

For any real number x, there is at least one real number y such that x + y = 0.

The domain of discourse is the set of all real numbers. We can write the statement symbolically as

∀x ∈ R (∃y ∈ R, x + y = 0).

We know that this is true, as we can always choose y to be −x.

Multiple quantifiers such as ∀x∀y are said to be nested quantifiers.

Example 2. Consider the statement

The sum of any two positive real numbers is positive.

This can be restated as: If x > 0 and y > 0, then x + y > 0. The domain of discourse is the set of all real numbers. We need two universal quantifiers, and can write the statement symbolically as

∀x∀y ((x > 0) ∧ (y > 0) → (x + y > 0)).

The statement ∀x∀y P(x, y), with domain of discourse D, is true if, for every x and for every y in D, P(x, y) is true. The statement ∀x∀y P(x, y), with domain of discourse D, is false if there is at least one x and at least one y in D such that P(x, y) is false.

The statement ∀x∃y P(x, y), with domain of discourse D, is true if, for every x in D, there is at least one y in D for which P(x, y) is true. The statement ∀x∃y P(x, y), with domain of discourse D, is false if there is at least one x in D such that P(x, y) is false for every y in D.

Example 3. Consider the nested quantifier

∃y ∈ R (∀x ∈ R, x + y = 0).

This can be stated as

There is some real number y such that x + y = 0 for all real numbers x.

This is false: for every y there is at least one x with x + y ≠ 0, for example choose x to be 1 − y.

The statement ∃x∀y P(x, y), with domain of discourse D, is true if there is at least one x in D such that P(x, y) is true for every y in D. The statement ∃x∀y P(x, y), with domain of discourse D, is false if, for every x in D, there is at least one y in D such that P(x, y) is false.

The statement ∃x∃y P(x, y), with domain of discourse D, is true if there is at least one x in D and at least one y in D such that P(x, y) is true. The statement ∃x∃y P(x, y), with domain of discourse D, is false if, for every x in D and for every y in D, P(x, y) is false.

Example 4. Consider the statement

∃x∃y ((x > 1) ∧ (y > 1) ∧ (xy = 6)),

with domain of discourse the set of positive integers. This statement is true as there is at least one positive integer x and at least one positive integer y, both greater than 1, such that xy = 6, e.g. x = 2, y = 3.

Example 5. Using the generalized De Morgan laws for logic, the negation of ∀x∃y P(x, y) is

¬ (∀x∃y P(x, y)) ≡ ∃x ¬ (∃y P(x, y)) ≡ ∃x∀y ¬ P(x, y).

Note that in the negation, ∀ and ∃ are interchanged.

Proofs

A mathematical system consists of axioms, which are assumed true; undefined terms, which are not specifically defined but which are implicitly defined by the axioms; and definitions, which are used to create new concepts in terms of existing ones. Within a mathematical system we can derive theorems.

A theorem is a proposition that has been proved to be true. A lemma is a theorem that is not too interesting in its own right but is useful in proving another theorem. A corollary is a theorem that follows quickly from another theorem. A proof is an argument that establishes the truth of a theorem.

Example 6. The real numbers furnish an example of a mathematical system. Among the axioms are:

• For all real numbers x and y, xy = yx.
• There is a subset P of real numbers satisfying
(a) If x and y are in P, then x + y and xy are in P.
(b) If x is a real number, then exactly one of the following statements is true: x is in P, x = 0, −x is in P.

Multiplication is implicitly defined by the first axiom. The elements of P are called positive real numbers. The absolute value |x| of a real number x is defined to be x if x is positive or 0, and −x otherwise.

Example 7. Theorems about real numbers are

• x · 0 = 0 for every real number x.
• For all real numbers x, y and z, if x ≤ y and y ≤ z, then x ≤ z.

Example 8. An example of a lemma about real numbers is

• If n is a positive integer, then either n − 1 is a positive integer or n − 1 = 0.

Not too interesting in its own right, but it can be used to prove other results.

Theorems are often of the form

For all x1, x2, . . . , xn, if p(x1, x2, . . . , xn), then q(x1, x2, . . . , xn).

This universally quantified statement is true provided that the conditional proposition

if p(x1, x2, . . . , xn), then q(x1, x2, . . . , xn)

is true for all x1, x2, . . . , xn in the domain of discourse.

A direct proof assumes that p(x1, x2, . . . , xn) is true, and using this and other axioms, definitions and previously derived theorems, shows directly that q(x1, x2, . . . , xn) is true.

Example 9. A particular lemma is: The product of two odd integers is odd.

Proof: Let the two odd integers be 2m + 1 and 2n + 1, where m and n are integers. Their product is

(2m + 1)(2n + 1) = 4mn + 2m + 2n + 1 = 2(2mn + m + n) + 1,

which is odd.

A second technique of proof is proof by contradiction. A proof by contradiction establishes the theorem by assuming that the hypothesis p is true and that the conclusion q is false, and then, using p and ¬ q as well as axioms, definitions and theorems, derives a contradiction. A contradiction is a proposition of the form r ∧ ¬ r. This is sometimes called an indirect proof.

Proof by contradiction is justified by noting that the propositions

p → q and p ∧ ¬ q → r ∧ ¬ r

are equivalent. The truth table is

p q r  p → q  p ∧ ¬ q  r ∧ ¬ r  p ∧ ¬ q → r ∧ ¬ r
T T T    T       F        F             T
T T F    T       F        F             T
T F T    F       T        F             F
T F F    F       T        F             F
F T T    T       F        F             T
F T F    T       F        F             T
F F T    T       F        F             T
F F F    T       F        F             T

and the columns for p → q and p ∧ ¬ q → r ∧ ¬ r agree.

Example 10. Theorem: √2 is irrational, that is, √2 cannot be represented as m/n, where m and n are integers.

The hypotheses are that rational numbers (m/n, where m and n are integers with no common factors and n ≠ 0) and square roots are defined.

Proof: Assume that √2 is rational, so that √2 = m/n, where m and n are integers with no common factors and n ≠ 0. Then

√2 = m/n
2 = m²/n²
m² = 2n².

It is an easily proved lemma that if m² is even, then m is even. Hence m is even, i.e. m = 2k, and m² = 4k². Then

4k² = 2n²
2k² = n²,

and so n is even. Hence m and n have a common factor, namely 2, and so there is a contradiction. Hence √2 is irrational.

Example 11. Prove that the root mean square of two numbers a and b, a > 0 and b > 0, is equal to or greater than the arithmetic mean, i.e.

√((a² + b²)/2) ≥ (a + b)/2.

Proof: Assume

√((a² + b²)/2) < (a + b)/2.

Since both sides are positive, we can square without changing the direction of the inequality:

(a² + b²)/2 < (a + b)²/4
2(a² + b²) < a² + 2ab + b²
a² − 2ab + b² < 0
(a − b)² < 0.

This is a contradiction, and hence

√((a² + b²)/2) ≥ (a + b)/2.
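The inequality of Example 11 can also be probed numerically before proving it. A Python sketch (function names are ours), checking the claim on a grid of positive values, with equality exactly when a = b:

```python
# Numerical check of Example 11: sqrt((a^2 + b^2)/2) >= (a + b)/2
# for positive a and b.  Equality holds exactly when a = b.
import math

def rms(a, b):
    return math.sqrt((a * a + b * b) / 2)

def am(a, b):
    return (a + b) / 2

for a in [0.5, 1.0, 2.0, 3.7, 10.0]:
    for b in [0.5, 1.0, 2.0, 3.7, 10.0]:
        assert rms(a, b) >= am(a, b) - 1e-12   # small tolerance for rounding
        if a == b:
            assert abs(rms(a, b) - am(a, b)) < 1e-12
```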

Proof by contrapositive is based on the fact that p → q ≡ ¬ q → ¬ p. The idea is to show that the opposite of the conclusion implies the opposite of the hypothesis.

Example 12. Theorem: If x and y are real numbers and x + y ≥ 2, then either x ≥ 1 or y ≥ 1.

Proof: Let p be “x + y ≥ 2” and q be “either x ≥ 1 or y ≥ 1”. Assume ¬ q: x < 1 and y < 1. Then x + y < 1 + 1, or x + y < 2, which is ¬ p. Proven.

Proof by cases is used when the original hypothesis naturally divides into various cases.

Example 13. Theorem: |x + y| ≤ |x| + |y| for all real x and y.

Proof: Consider the four cases, where each of x, y is nonnegative or negative.

1. x ≥ 0, y ≥ 0: Then x + y ≥ 0, so |x + y| = x + y = |x| + |y|.
2. x < 0, y < 0: Then x + y < 0, so |x + y| = −(x + y) = −x − y = |x| + |y|.
3. x ≥ 0, y < 0: Then x + y < x ≤ |x| ≤ |x| + |y|, and −(x + y) = −x − y ≤ −y = |y| ≤ |x| + |y|. Since |x + y| is either x + y or −(x + y), in both cases |x + y| ≤ |x| + |y|.
4. x < 0, y ≥ 0: The same as case 3, swapping the roles of x and y.

Another form of proof is called an existence proof. An example of this is if we wanted to show that ∃x P (x) is true. It is only necessary to find a member x in the domain of discourse for which P (x) is true.
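The four cases of Example 13 can be exercised in code. A sketch (my_abs mirrors the definition of absolute value in Example 6; the names are ours):

```python
# Case-by-case check of Example 13: |x + y| <= |x| + |y|.
# my_abs follows Example 6: x if x is positive or 0, and -x otherwise.

def my_abs(x):
    return x if x >= 0 else -x

values = [-3.5, -2, -1, 0, 0.5, 1, 2.25, 4]   # covers all four sign cases
for x in values:
    for y in values:
        assert my_abs(x + y) <= my_abs(x) + my_abs(y)
```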

Definition. An argument is a sequence of propositions written

p1
p2
.
.
.
pn
∴ q

or p1, p2, . . . , pn / ∴ q. The propositions p1, p2, . . . , pn are called the hypotheses and the proposition q is called the conclusion. The argument is valid provided that if p1, p2, . . . , pn are all true, then q must be true; otherwise the argument is invalid (or a fallacy).

Example 14. Determine whether the argument

p → q
¬q
∴ ¬p

is valid.

(a) Using a truth table:

p q   p → q   ¬q   ¬p
T T     T     F    F
T F     F     T    F
F T     T     F    T
F F     T     T    T

Note that the important line is the last. Why?

(b) A verbal argument proceeds: ¬ q is true when q is false, so p → q is only true when p is false (as q is false). Hence ¬ p is true, and the argument is valid.

Example 15. Is the following argument valid?

If I don’t study hard, then I don’t get high distinctions.
I study hard.
∴ I get high distinctions.

Let p be “I study hard”, and let q be “I get high distinctions”. Then the argument is

¬p → ¬q
p
∴ q

(a) The truth table is

p q   ¬p → ¬q   p   q
T T      T      T   T
T F      T      T   F
F T      F      F   T
F F      T      F   F

The first two lines are the important ones. The second line shows both hypotheses true with a false conclusion, which implies that it is an invalid argument.

(b) Alternatively, assume that p is true and q is false. Then ¬ p → ¬ q ≡ F → T ≡ T, so that both hypotheses can be true with a false conclusion.

Mathematical Induction

Example 16. An arithmetic teacher sets his class the problem of adding up all the integers from 1 to 100. Can this be done in under 10 seconds? There is a piece of folklore relating this to the mathematician Karl Friedrich Gauss (1777–1855) as a boy. If we pair the numbers

1 + 2 + 3 + · · · + 49 + 50
100 + 99 + 98 + · · · + 52 + 51

we can see that each vertical pair adds to 101. Since there are 50 pairs, the sum is 50 × 101 = 5050.

The general case, the sum of all integers from 1 to n, is

1 + 2 + 3 + · · · + n = n(n + 1)/2,   for all n ≥ 1.

The Principle of Mathematical Induction is a process by which a set of theorems corresponding to the non-negative integers can be proven.

Definition. Suppose that we have a propositional function S(n) whose domain of discourse is the set of positive integers. Suppose that S(1) is true, and that if S(n) is true, then S(n + 1) is true. Then S(n) is true for every positive integer n.

The first part is called the Basis Step, and the second part is called the Inductive Step. It is not necessary to start with n = 1; sometimes n = 0, 2, 3, . . . will occur.

Example 16. (Continued) Basis step: For n = 1, LHS = 1 and RHS = (1 × 2)/2 = 1. True.

Inductive step: Assume that

1 + 2 + 3 + · · · + n = n(n + 1)/2.

Then

1 + 2 + 3 + · · · + n + (n + 1)
= n(n + 1)/2 + (n + 1),   by the assumption
= (1/2)[n(n + 1) + 2(n + 1)],   factorise whenever possible
= (n + 1)(n + 2)/2,   the correct form.

Hence, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.

Example 17. Show that

1 + 4 + 7 + 10 + · · · + (3n − 2) = n(3n − 1)/2,   n ≥ 1.

Basis step: For n = 1, LHS = 1 and RHS = (1 × 2)/2 = 1. True.

Inductive step: Assume that

1 + 4 + 7 + · · · + (3n − 2) = n(3n − 1)/2.

Then

1 + 4 + 7 + · · · + (3n − 2) + (3n + 1)
= n(3n − 1)/2 + (3n + 1),   by the assumption (we can’t factorise here)
= (1/2)(3n² − n + 6n + 2)
= (1/2)(3n² + 5n + 2)
= (n + 1)(3n + 2)/2
= (n + 1)(3(n + 1) − 1)/2,   the correct form.

Hence, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.
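Closed forms like those of Examples 16 and 17 can be sanity-checked against direct summation before attempting an induction proof. A Python sketch (function names are ours):

```python
# Check the closed forms of Examples 16 and 17 against direct summation.

def sum_1_to_n(n):
    return n * (n + 1) // 2          # Example 16: 1 + 2 + ... + n

def sum_3k_minus_2(n):
    return n * (3 * n - 1) // 2      # Example 17: 1 + 4 + 7 + ... + (3n - 2)

for n in range(1, 200):
    assert sum(range(1, n + 1)) == sum_1_to_n(n)
    assert sum(3 * k - 2 for k in range(1, n + 1)) == sum_3k_minus_2(n)
```

Again, a passing loop is evidence, not proof; the induction argument above is what establishes the formulae for all n.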

Discrete Mathematics: Week 3

Mathematical Induction (Continued)

So far, correct formulae have been given in advance. How do we know the correct formula? Experiment to find a pattern, e.g. for the sum of the odd integers

S_{2n−1} = 1 + 3 + 5 + 7 + · · · + (2n − 1):

n          1   2   3   4   . . .
S_{2n−1}   1   4   9   16  . . .

It appears that S_{2n−1} = n².

Example 1. Show that

S(n) : 1² + 2² + 3² + · · · + n² = n(n + 1)(2n + 1)/6,   n ≥ 1,

is true.

Basis Step: For n = 1, LHS = 1 and RHS = (1 × 2 × 3)/6 = 1. True.

Inductive Step: Assume that

1² + 2² + 3² + · · · + n² = n(n + 1)(2n + 1)/6.

Then

1² + 2² + 3² + · · · + n² + (n + 1)²
= n(n + 1)(2n + 1)/6 + (n + 1)²,   by the assumption
= (1/6)(n + 1)[2n² + n + 6(n + 1)],   factorize whenever possible
= (1/6)(n + 1)(2n² + 7n + 6)
= (n + 1)(n + 2)(2n + 3)/6
= (n + 1)(n + 2)[2(n + 1) + 1]/6,   the correct form.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.
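The "experiment to find a pattern" step is easy to mechanize. A sketch (names are ours) that tabulates the sum of the first n odd integers and also checks the closed form of Example 1:

```python
# "Experiment to find a pattern": the sum of the first n odd integers,
# plus a check of Example 1's closed form for the sum of squares.

def sum_of_odds(n):
    return sum(2 * k - 1 for k in range(1, n + 1))

for n in range(1, 100):
    assert sum_of_odds(n) == n * n                                   # the conjectured pattern
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
```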

Example 2. Show that

S(n) : 1/(1·2) + 1/(2·3) + 1/(3·4) + · · · + 1/(n(n + 1)) = n/(n + 1),   n ≥ 1,

is true. Work a few terms:

1/(1·2) = 1/2
1/(1·2) + 1/(2·3) = 1/2 + 1/6 = 2/3
1/(1·2) + 1/(2·3) + 1/(3·4) = 1/2 + 1/6 + 1/12 = 3/4

Basis Step: For n = 1, LHS = 1/(1·2) = 1/2 and RHS = 1/2. True.

Inductive Step: Assume that

1/(1·2) + 1/(2·3) + 1/(3·4) + · · · + 1/(n(n + 1)) = n/(n + 1).

Then

1/(1·2) + 1/(2·3) + · · · + 1/(n(n + 1)) + 1/((n + 1)(n + 2))
= n/(n + 1) + 1/((n + 1)(n + 2)),   by the assumption
= [n(n + 2) + 1] / ((n + 1)(n + 2))
= (n + 1)² / ((n + 1)(n + 2)),   factorize whenever possible
= (n + 1)/(n + 2),   the correct form.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.

Example 3. Divisibility example. Show that 5^n − 1 is divisible by 4 for all n ≥ 1.

Basis Step: For n = 1, 5¹ − 1 = 4 is divisible by 4. True.

Inductive Step: Assume that 5^n − 1 is divisible by 4. Then we wish to prove that 5^(n+1) − 1 is divisible by 4. Now

5^(n+1) − 1 = 5 × 5^n − 1 = (5^n − 1) + 4 × 5^n.

The first part is divisible by 4 by the assumption, and 4 × 5^n is divisible by 4, hence 5^(n+1) − 1 is divisible by 4.

Alternatively, put 5^n − 1 = 4m, where m is an integer. Then

5^(n+1) − 1 = 5 × 5^n − 1 = 5 × (4m + 1) − 1 = 4(5m) + 4 = 4(5m + 1),

which is divisible by 4.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 1.

Example 4. Geometric sum. Use induction to show that, if r ≠ 1,

a + ar¹ + ar² + · · · + ar^n = a(r^(n+1) − 1)/(r − 1)

for all n ≥ 0.

Basis Step: For n = 0, LHS = a and RHS = a(r − 1)/(r − 1) = a. True.

Inductive Step: Assume that

a + ar¹ + ar² + · · · + ar^n = a(r^(n+1) − 1)/(r − 1).

Then

a + ar¹ + ar² + · · · + ar^n + ar^(n+1)
= a(r^(n+1) − 1)/(r − 1) + ar^(n+1),   by the assumption
= [a(r^(n+1) − 1) + ar^(n+1)(r − 1)] / (r − 1)
= a(r^(n+1) − 1 + r^(n+2) − r^(n+1)) / (r − 1)
= a(r^(n+2) − 1)/(r − 1).

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 0.

Example 5. Show that

S(n) : 4^n > 5n²,   n ≥ 3,

is true. This time the formula is true only for n ≥ 3: for n = 1 we have 4¹ = 4 < 5, and for n = 2, 4² = 16 < 5 × 2² = 20.

Basis Step: For n = 3, LHS = 4³ = 64 and RHS = 5 × 3² = 45. True.

Inductive Step: Assume that 4^n > 5n², n ≥ 3. We want to show that 4^(n+1) > 5(n + 1)². Then

4^(n+1) = 4 (4^n)
> 4 (5n²),   by the assumption
= 5 (n² + 2n² + n²)
> 5 (n² + 2n + 1),   since n ≥ 3
= 5(n + 1)².

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 3.

Example 6. Tiling with trominos. Solomon W. Golomb introduced polyominos in 1954. They can be used in tiling problems; for example, the tetris game uses tetrominos. There are five free tetrominos, and seven if they are considered to be one-sided (see figure); some of these will not tile a rectangle but can tile the plane. The Dutch artist M.C. Escher produced many woodcut and lithograph art works of tiling problems.

There are two trominos, the right and straight trominos. We will henceforth refer to the right tromino just as a tromino.

Trominos can tile a deficient board, that is an n × n board with one square missing, providing n ≠ 3k (so that the number of squares, n² − 1, is divisible by 3). We can see that if n = 3k + 1, then n² − 1 = (3k + 1)² − 1 = 9k² + 6k, which is divisible by 3. If n = 3k + 2, then n² − 1 = (3k + 2)² − 1 = 9k² + 12k + 3, which is divisible by 3. We can tile all such deficient boards, except for some deficient 5 × 5 boards (see the figure below).

Here we will prove by mathematical induction that all 2^n × 2^n deficient boards can be tiled with trominos.

Basis Step: For n = 1, the 2 × 2 deficient board is a tromino and can be tiled.

Inductive Step: Assume that any 2^n × 2^n deficient board can be tiled. Then we can divide the 2^(n+1) × 2^(n+1) deficient board into four 2^n × 2^n boards, with one board containing the missing square; the other three boards are given missing squares in their corners by placement of one tromino as shown in the figure.
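The inductive step is constructive, so it can be written as a recursive program. A sketch under our own naming (board cells hold 0 for empty, −1 for the missing square, and a tromino id otherwise):

```python
# Recursive tiling of a 2^n x 2^n deficient board with right trominoes,
# following the inductive step: place one tromino around the centre,
# covering the corner cell of each quadrant not containing the hole.
from itertools import count

def tile(board, top, left, size, hole_r, hole_c, ids):
    if size == 1:
        return                       # single cell: the hole (or a covered cell)
    half = size // 2
    mid_r, mid_c = top + half, left + half
    tid = next(ids)                  # id of the central tromino
    subproblems = []
    for q_top in (top, mid_r):
        for q_left in (left, mid_c):
            if q_top <= hole_r < q_top + half and q_left <= hole_c < q_left + half:
                subproblems.append((q_top, q_left, hole_r, hole_c))
            else:
                # the quadrant's cell nearest the centre, covered by tromino tid
                cr = q_top + half - 1 if q_top == top else q_top
                cc = q_left + half - 1 if q_left == left else q_left
                board[cr][cc] = tid
                subproblems.append((q_top, q_left, cr, cc))
    for q_top, q_left, hr, hc in subproblems:
        tile(board, q_top, q_left, half, hr, hc, ids)

size = 2 ** 3                        # an 8 x 8 board
board = [[0] * size for _ in range(size)]
board[5][2] = -1                     # the missing square
tile(board, 0, 0, size, 5, 2, count(1))
assert all(v != 0 for row in board for v in row)   # every cell tiled or the hole
```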

We can tile all four 2^n × 2^n deficient boards by the hypothesis, and can hence tile the 2^(n+1) × 2^(n+1) deficient board. Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, we can tile any 2^n × 2^n deficient board.

Strong Form of Mathematical Induction

All examples so far have involved the Weak Form of Mathematical Induction. The Strong Form of Mathematical Induction allows us to assume the truth of all of the preceding statements.

Definition. Suppose that we have a propositional function S(n) whose domain of discourse is the set of integers greater than or equal to n0. Suppose that S(n0) is true, and that for all n > n0, if S(k) is true for all k, n0 ≤ k < n, then S(n) is true. Then S(n) is true for every integer n ≥ n0.

Example 7. Recurrence relation example. Consider the sequence a1, a2, a3, . . . with a1 = 1, a2 = 3 and a_(n+1) = 3a_n − 2a_(n−1). Then

a3 = 3 × 3 − 2 × 1 = 7
a4 = 3 × 7 − 2 × 3 = 15
a5 = 3 × 15 − 2 × 7 = 31.

It would seem that a_n = 2^n − 1. We require the two preceding statements to be true.

Basis Steps: For n = 1, 2: a1 = 1 = 2¹ − 1 and a2 = 3 = 2² − 1. True.

Inductive Step: Assume that a_i = 2^i − 1 for 2 ≤ i ≤ n.

We want to prove that a_(n+1) = 2^(n+1) − 1. Now

a_(n+1) = 3a_n − 2a_(n−1),   using the definition
= 3 (2^n − 1) − 2 (2^(n−1) − 1),   using the assumption
= 2^n (3 − 1) − 3 + 2
= 2^(n+1) − 1.

The inductive step is complete. Since the Basis Steps and Inductive Step have been verified, by the Principle of Mathematical Induction, the formula a_n = 2^n − 1 is true for all n ≥ 1.

Example 8. Show that postage of six cents or more can be achieved by using only 2-cent and 7-cent stamps.

Basis Steps: For six cent postage, use three 2-cent stamps; for seven cent postage, use one 7-cent stamp.

Inductive Step: Assume n ≥ 8 and assume that postage of k cents can be achieved by using only 2-cent and 7-cent stamps for 6 ≤ k < n. Since by the assumption we can make postage of n − 2 cents, add a 2-cent stamp to make postage of n cents. Since the Basis Steps and Inductive Step have been verified, by the Principle of Mathematical Induction we can make postage for all n ≥ 6.

Example 9. Prime numbers example.

S(n) : Every positive integer greater than 1 is the product of primes.

Basis Step: For n = 2: 2 is prime, and is the product of one number, itself.

Inductive Step: Assume that i is the product of primes for 2 ≤ i ≤ n. If n + 1 is prime, it is the product of one prime. If n + 1 is not prime, then n + 1 = pq, where p and q are integers with 2 ≤ p ≤ q ≤ n. Since by the assumption p and q are each the product of primes, n + 1 = pq is the product of primes.

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, S(n) is true for all n ≥ 2.
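The postage argument of Example 8 is also constructive: it tells us how to build the stamps for n cents from the solution for n − 2 cents. A sketch (names are ours):

```python
# Constructive version of Example 8: express n >= 6 as 2*twos + 7*sevens,
# following the induction: 6 and 7 are base cases, otherwise add a 2-cent
# stamp to the solution for n - 2.

def postage(n):
    if n == 6:
        return (3, 0)
    if n == 7:
        return (0, 1)
    twos, sevens = postage(n - 2)
    return (twos + 1, sevens)

for n in range(6, 300):
    twos, sevens = postage(n)
    assert 2 * twos + 7 * sevens == n
```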

The Language of Mathematics

Sets

A set is a collection of objects, known as elements. It is described by listing the elements in braces, e.g.

A = { 1, 2, 3, 4 }.

If x is an element of A, we write x ∈ A, e.g. 4 ∈ A. If not, we write x ∉ A.

Order does not matter, e.g. { 1, 2, 3, 4 } = { 2, 1, 3, 4 }. Elements are assumed different, e.g. { 1, 2, 2, 3, 4 } = { 1, 2, 3, 4 }.

A large or infinite set can be defined by a property, e.g.

B = { x | x is a positive odd integer less than 20 },

or B = { 1, 3, 5, 7, 9, 11, 13, 15, 17, 19 }. Read the symbol | as “such that”. Similarly,

C = { x | x is a positive integer divisible by 3 }.

Here 4 ∉ B and 4 ∉ C.

If X is a finite set, we let |X| = the number of elements of X, e.g. |A| = 4, |B| = 10.

Two sets X and Y are equal if they have the same elements, i.e. X = Y if for every x, if x ∈ X then x ∈ Y, and for every x, if x ∈ Y then x ∈ X. In symbols,

X = Y if ∀x ((x ∈ X → x ∈ Y) ∧ (x ∈ Y → x ∈ X)).

The empty set is the set with no elements, denoted ∅. Hence ∅ = { } and |∅| = 0, e.g.

{ x | x ∈ R and x² + 1 = 0 } = ∅.

What is |{ ∅ }|?

Definition. X is a subset of Y if every element of X is an element of Y. We write X ⊆ Y. In symbols, X is a subset of Y if ∀x (x ∈ X → x ∈ Y).

Example 1.

X = { x | x² + x − 6 = 0 },   Y = { 2, −3 }.

If x ∈ X, then

x² + x − 6 = 0
(x − 2)(x + 3) = 0
x = 2 or x = −3,

so x ∈ Y. If x ∈ Y, then if x = 2, x² + x − 6 = 0, and if x = −3, x² + x − 6 = 0, so x ∈ X. Hence X = Y.

We have:

X ⊆ X.
∅ ⊆ X, i.e. ∀x (x ∈ ∅ → x ∈ X); x ∈ ∅ is false, hence x ∈ ∅ → x ∈ X is true.
If X ⊆ Y and Y ⊆ X, then X = Y.
If X ⊆ Y and Y is finite, then |X| ≤ |Y|.
If X ⊆ Y, Y is finite, and |X| = |Y|, then X = Y.

If X ⊆ Y and X ≠ Y, then X is a proper subset of Y, denoted X ⊂ Y, e.g. { 1, 2 } ⊂ { 1, 2, 3, 4 }.

Discrete Mathematics: Week 4

Sets (Continued)

The power set of the set X, denoted P(X), is the set of all subsets of X.

Example 2.

A = { a, b, c }
P(A) = { ∅, { a }, { b }, { c }, { a, b }, { a, c }, { b, c }, { a, b, c } },   |P(A)| = 8.

B = { a, b }
P(B) = { ∅, { a }, { b }, { a, b } },   |P(B)| = 4.

Theorem. If |X| = n, then |P(X)| = 2^n.

Proof: Johnsonbaugh uses a proof by Mathematical Induction.

Basis Step: If n = 0, we have the empty set, which has only one subset, itself. Then |∅| = 0 and 2⁰ = 1. True.

Inductive Step: Assume that a set with n elements has a power set of size 2^n. Let X be a set with n + 1 elements. Remove one element, x, from X to form a set Y. Then Y has n elements, and by the assumption |P(Y)| = 2^n.

The idea is that exactly half of the subsets of X contain a particular element of X. This can be seen by pairing the subsets, e.g. for the set A in Example 2, pairing on the element a:

∅          { a }
{ b }      { a, b }
{ c }      { a, c }
{ b, c }   { a, b, c }

Since by the pairing argument |P(X)| = 2 |P(Y)|, we have |P(X)| = 2^(n+1).

Since the Basis Step and Inductive Step have been verified, by the Principle of Mathematical Induction, |P(X)| = 2^n is true for all n ≥ 0.

An alternative proof is to suppose that a 1 represents the presence of an element in a subset, and a 0 represents its absence. Then all subsets of X can be represented by a binary string of length |X| = n, and there are 2^n such strings.
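The binary-string argument at the end of the proof translates directly into code: each integer from 0 to 2^n − 1 is an n-bit string selecting a subset. A sketch (names are ours):

```python
# Generate the power set of a list via the bit-string correspondence:
# subsets of an n-element set correspond to length-n binary strings.

def power_set(xs):
    n = len(xs)
    subsets = []
    for bits in range(2 ** n):        # each integer 0 .. 2^n - 1 is a bit string
        subsets.append({xs[i] for i in range(n) if bits & (1 << i)})
    return subsets

A = ['a', 'b', 'c']
P = power_set(A)
assert len(P) == 2 ** len(A)          # |P(X)| = 2^n
assert set() in P and set(A) in P
```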

Set Operations

Given two sets X and Y:

Their union is X ∪ Y = { x | x ∈ X or x ∈ Y }.
Their intersection is X ∩ Y = { x | x ∈ X and x ∈ Y }.
Their difference is X − Y = { x | x ∈ X and x ∉ Y }.

Example 3. A = { a, b, c, d, e } and B = { a, b, 1, 2, 3 }. Then

A ∪ B = { a, b, c, d, e, 1, 2, 3 }
A ∩ B = { a, b }
A − B = { c, d, e }
B − A = { 1, 2, 3 }

Sets X and Y are disjoint if X ∩ Y = ∅. For example, X = { 1, 4, 9 } and Y = { 2, 3 } are disjoint. A collection of sets S is pairwise disjoint if X and Y are disjoint for distinct X, Y in S. For example, S = { { a, b }, { c, d }, { e, f } } is pairwise disjoint.

If we deal with sets which are subsets of a set U, then U is called the universal set. The set U − X is the complement of X, denoted X̄.

Example 4. U = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }, A = { 1, 2, 4, 5 }, B = { 2, 4, 6, 8 }, C = { 3, 6, 9 }. Then

Ā = { 0, 3, 6, 7, 8, 9 }
A ∩ B = { 2, 4 }
(A ∪ B)‾ = { 0, 3, 7, 9 }
A ∪ B ∪ C = { 1, 2, 3, 4, 5, 6, 8, 9 }
(A ∪ B ∪ C)‾ = { 0, 7 }
C ∩ (A ∪ B) = { 6 }

Venn diagrams provide a pictorial view of sets. The universal set, U, is depicted as a rectangle. Sets A, B, C, contained in U, are drawn as circles.

[Venn diagrams: A ∪ B, A ∩ B and A − B shaded]
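Python's built-in set type implements these operations directly (| for union, & for intersection, - for difference), with the complement taken relative to a universal set. A quick check, using sets like those of Examples 3 and 4:

```python
# Set operations with Python's set type.

A = {'a', 'b', 'c', 'd', 'e'}
B = {'a', 'b', 1, 2, 3}

assert A | B == {'a', 'b', 'c', 'd', 'e', 1, 2, 3}   # union
assert A & B == {'a', 'b'}                           # intersection
assert A - B == {'c', 'd', 'e'}                      # difference
assert B - A == {1, 2, 3}

U = set(range(10))
X = {1, 2, 4, 5}
assert U - X == {0, 3, 6, 7, 8, 9}                   # complement of X in U
```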

In Example 4, we have

[Venn diagram: U = { 0, . . . , 9 } with circles A, B, C; 1 and 5 in A only, 2 and 4 in A ∩ B, 8 in B only, 6 in B ∩ C, 3 and 9 in C only, 0 and 7 outside all three]

[Venn diagram: C ∩ (A ∪ B) shaded]

Theorem 2.1.12. Let U be a universal set and let A, B, and C be subsets of U. The following properties hold.

(a) Associative laws:
(A ∪ B) ∪ C = A ∪ (B ∪ C),   (A ∩ B) ∩ C = A ∩ (B ∩ C)

(b) Commutative laws:
A ∪ B = B ∪ A,   A ∩ B = B ∩ A

(c) Distributive laws:
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C),   A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)

(d) Identity laws:
A ∪ ∅ = A,   A ∩ U = A

(e) Complement laws:
A ∪ Ā = U,   A ∩ Ā = ∅

(f) Idempotent laws:
A ∪ A = A,   A ∩ A = A

(g) Bound laws:
A ∪ U = U,   A ∩ ∅ = ∅

(h) Absorption laws:
A ∪ (A ∩ B) = A,   A ∩ (A ∪ B) = A

(i) Involution law:
(Ā)‾ = A

(j) 0/1 laws:
∅̄ = U,   Ū = ∅

(k) De Morgan’s laws for sets:
(A ∪ B)‾ = Ā ∩ B̄,   (A ∩ B)‾ = Ā ∪ B̄

Proof: Of the first distributive law, A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C). We have the Venn diagrams:

[Venn diagrams: B ∪ C shaded; A ∩ (B ∪ C) shaded; A ∩ B shaded]

Mathematically, the proof is as follows. Let x ∈ A ∩ (B ∪ C). Then

x ∈ A   and   x ∈ B ∪ C
x ∈ A   and   (x ∈ B or x ∈ C)
x ∈ A ∩ B   or   x ∈ A ∩ C,

so that x ∈ (A ∩ B) ∪ (A ∩ C).

[Venn diagrams: A ∩ C shaded; (A ∩ B) ∪ (A ∩ C) shaded]

This only proves that A ∩ (B ∪ C) ⊆ (A ∩ B) ∪ (A ∩ C). For the reverse inclusion, let x ∈ (A ∩ B) ∪ (A ∩ C). Then

x ∈ A ∩ B   or   x ∈ A ∩ C
(x ∈ A and x ∈ B)   or   (x ∈ A and x ∈ C)
x ∈ A   and   (x ∈ B or x ∈ C)
x ∈ A   and   x ∈ B ∪ C,

so that x ∈ A ∩ (B ∪ C).

Proof of the first De Morgan law, (A ∪ B)‾ = Ā ∩ B̄. We have the Venn diagrams:

[Venn diagrams: A ∪ B shaded; (A ∪ B)‾ shaded; Ā shaded; B̄ shaded; Ā ∩ B̄ shaded]

Mathematically, the proof is as follows. Let x ∈ (A ∪ B)‾. Then

x ∉ A ∪ B
x ∉ A   and   x ∉ B
x ∈ Ā   and   x ∈ B̄,

so that x ∈ Ā ∩ B̄. This only proves that (A ∪ B)‾ ⊆ Ā ∩ B̄.

For the reverse inclusion, let x ∈ Ā ∩ B̄. Then

x ∈ Ā   and   x ∈ B̄
x ∉ A   and   x ∉ B
x ∉ A ∪ B,

so that x ∈ (A ∪ B)‾.

If S = { A1, A2, . . . , An } is a collection of sets, then the union of the sets is

∪S = { x | x ∈ Ai for some Ai ∈ S },

and the intersection of the sets is

∩S = { x | x ∈ Ai for all Ai ∈ S }.

We write

∪S = ⋃ from i = 1 to n of Ai   and   ∩S = ⋂ from i = 1 to n of Ai.

For infinitely many sets S = { A1, A2, A3, . . . } these become

∪S = ⋃ from i = 1 to ∞ of Ai   and   ∩S = ⋂ from i = 1 to ∞ of Ai.

Example 5. Let S = { A1, A2, A3, . . . }, where An = { n, n + 1, n + 2, . . . }. That is,

A1 = { 1, 2, 3, . . . }
A2 = { 2, 3, 4, . . . }
A3 = { 3, 4, 5, . . . },

etc. Obviously ∪S = { 1, 2, 3, . . . } = A1, and ∩S = ∅, as 1 ∉ ∩S (since 1 ∉ A2), 2 ∉ ∩S (since 2 ∉ A3), etc.
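For a small universal set, laws like (c) and (k) can be verified exhaustively over every choice of subsets, which is a good way to convince yourself before working through the element-wise proofs. A sketch (names are ours):

```python
# Exhaustive check of the first distributive and De Morgan laws over all
# subsets of a small universal set (complement taken relative to U).
from itertools import combinations

U = {0, 1, 2, 3}

def all_subsets(u):
    u = sorted(u)
    for r in range(len(u) + 1):
        for comb in combinations(u, r):
            yield set(comb)

for A in all_subsets(U):
    for B in all_subsets(U):
        assert (U - (A | B)) == (U - A) & (U - B)        # first De Morgan law
        for C in all_subsets(U):
            assert A & (B | C) == (A & B) | (A & C)      # first distributive law
```

This checks 16 × 16 = 256 pairs and 4096 triples; it is evidence for this U only, while the proofs above work for arbitrary sets.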

A collection S of non-empty subsets of X is a partition of X if each element of X belongs to exactly one member of S. That is, S is pairwise disjoint and ∪S = X.

Example 6.

(a) X = { 1, 2, 3, 4, 5, 6, 7, 8 }. Then S = { { 2, 4, 7 }, { 1, 3, 5, 8 }, { 6 } } is a partition of X.

(b) X = { x | x ∈ R }. S = { { x | x is rational }, { x | x is irrational } } is a partition of X, but T = { { x | x is rational }, { x | x is real } } is not a partition of X.

Sets are unordered collections of elements, but sometimes order is important. An ordered pair of elements, written (a, b), is different from the ordered pair (b, a) (unless a = b). (a, b) = (c, d) if and only if a = c and b = d.

If X and Y are sets, we let X × Y denote the set of all ordered pairs (x, y) where x ∈ X and y ∈ Y. We call X × Y the Cartesian product of X and Y.

Example 7. The Last Duck Vietnamese Restaurant and Takeaway sells

Entrees (set E)
a: Chicken Cold Roll
b: Vietnamese Spring Roll
c: Steamed Pork Dumplings

Mains (set M)
1: Twice Cooked Duck Leg Curry
2: Char-grilled Lemongrass Pork
3: Steamed Ginger Infused Chicken
4: Whole Prawns with Fanta Fish Sauce

Then E = { a, b, c } and M = { 1, 2, 3, 4 }. The Cartesian product lists the 12 possible dinners consisting of one entree and one main course:

E × M = { (a, 1), (a, 2), (a, 3), (a, 4), (b, 1), (b, 2), (b, 3), (b, 4), (c, 1), (c, 2), (c, 3), (c, 4) }.

For the general case, |X × Y| = |X| · |Y|.

This is actually a cut-down version of the menu. In fact, there are 5 entrees and 8 main courses. How many dinners are possible?
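The dinner count can be reproduced with itertools.product, which generates Cartesian products directly:

```python
# The dinner count of Example 7 via itertools.product: |E x M| = |E| * |M|.
from itertools import product

E = ['a', 'b', 'c']          # entrees
M = [1, 2, 3, 4]             # mains
dinners = list(product(E, M))
assert len(dinners) == len(E) * len(M)   # 12 possible dinners
assert ('a', 1) in dinners

# The full menu: 5 entrees and 8 main courses give 40 dinners.
assert len(list(product(range(5), range(8)))) == 40
```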

Relations

Johnsonbaugh 3.1

We can consider a relation as a table linking the elements of two sets, e.g. product vs price.

Definition. A (binary) relation R from a set X to a set Y is a subset of the Cartesian product X × Y. If (x, y) ∈ R, we write x R y and say that x is related to y. If X = Y, we call R a (binary) relation on X. In simpler terms, a (binary) relation R connects elements in X to elements in Y.

For example, if X = { 1, 2, 3, 4 }, Y = { a, b, c } and R = { (1, a), (2, b), (3, b) }, we have pictorially

[Arrow diagram: 1 → a, 2 → b, 3 → b, with 4 and c unconnected]

This is called an arrow diagram.

The set { x ∈ X | (x, y) ∈ R for some y ∈ Y } is called the domain of R. The set { y ∈ Y | (x, y) ∈ R for some x ∈ X } is called the range of R. Here the domain is { 1, 2, 3 } and the range is { a, b }.

Example 1. (Johnsonbaugh 3.1.3) X = { 2, 3, 4 } and Y = { 3, 4, 5, 6, 7 }. Define a relation from X to Y by (x, y) ∈ R if x divides y (with 0 remainder). Therefore

R = { (2, 4), (2, 6), (3, 3), (3, 6), (4, 4) }.

We could write R as a table:

X   Y
2   4
2   6
3   3
3   6
4   4

The domain of R is all elements of X that occur in R, i.e. { 2, 3, 4 }, and the range of R is all elements of Y that occur in R, i.e. { 3, 4, 6 }.

A relation on a set can be described by its digraph. Draw dots or vertices as elements of X. If (x, y) ∈ R, draw an arrow from x to y, a directed edge. An element (x, x) is called a loop.

Example 2. Let R be a relation on X = { 1, 2, 3, 4 } defined by (x, y) ∈ R if x ≤ y, x, y ∈ X. Then

R = { (1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 3), (2, 4), (3, 3), (3, 4), (4, 4) }.

The domain and range of R are both X. The digraph of Example 2 is:

[Digraph: vertices 1, 2, 3, 4, a loop on each vertex, and a directed edge from x to y whenever x < y]

Example 3. The relation R on X = { a, b, c, d } given by

R = { (a, a), (b, c), (c, b), (d, d) }

has the digraph:

[Digraph: loops on a and d, and directed edges b → c and c → b]

Definition. A relation R on a set X is called reflexive if (x, x) ∈ R for every x ∈ X.

Example 2 is reflexive: there is a loop on every vertex. Example 3 is not reflexive.

Definition. A relation R on a set X is called symmetric if for all x, y ∈ X, if (x, y) ∈ R, then (y, x) ∈ R.

Example 3 is symmetric: directed edges go both ways between vertices.

Definition. A relation R on a set X is called antisymmetric if for all x, y ∈ X, if (x, y) ∈ R and x ≠ y, then (y, x) ∉ R.

Example 2 is antisymmetric: between any two distinct vertices there is at most one directed edge. Can a relation R be both symmetric and antisymmetric?

Definition. A relation R on a set X is called transitive if for all x, y, z ∈ X, if (x, y) ∈ R and (y, z) ∈ R, then (x, z) ∈ R.

Example 2 is transitive. The digraph of a transitive relation has the property that whenever there are directed edges from x to y and from y to z, there is a directed edge from x to z. We need to list all pairs to verify:

(x, y)   (y, z)   (x, z)
(1, 1)   (1, 2)   (1, 2)
(1, 1)   (1, 3)   (1, 3)
(1, 2)   (2, 3)   (1, 3)
(1, 2)   (2, 4)   (1, 4)
(1, 3)   (3, 4)   (1, 4)
(2, 3)   (3, 4)   (2, 4)
etc.

Do we really need to list all pairs? If x = y, then (x, z) ∈ R is automatically true whenever (y, z) ∈ R, and similarly when y = z, so the table need only contain pairs with x < y < z.

Example 3 is not transitive, e.g. (b, c), (c, b) ∈ R, but (b, b) ∉ R.
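The four definitions translate directly into predicates on a set of pairs, and a program can do the pair-listing for us. A sketch (function names are ours), applied to Examples 2 and 3:

```python
# Property tests matching the definitions, applied to Examples 2 and 3.

def is_reflexive(R, X):
    return all((x, x) in R for x in X)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    return all(x == y or (y, x) not in R for (x, y) in R)

def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

X2 = {1, 2, 3, 4}
R2 = {(x, y) for x in X2 for y in X2 if x <= y}          # Example 2
R3 = {('a', 'a'), ('b', 'c'), ('c', 'b'), ('d', 'd')}    # Example 3

assert is_reflexive(R2, X2) and is_antisymmetric(R2) and is_transitive(R2)
assert not is_reflexive(R3, {'a', 'b', 'c', 'd'})
assert is_symmetric(R3) and not is_transitive(R3)
```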

Discrete Mathematics: Week 5

Relations (Continued)

Relations can be used to order elements of a set. For example, the relation R on the positive integers defined by (x, y) ∈ R if x ≤ y orders the integers, and is reflexive, antisymmetric and transitive, i.e.

reflexive:       x ≤ x
antisymmetric:   if x ≤ y and x ≠ y, then y ≤ x is false
transitive:      if x ≤ y and y ≤ z, then x ≤ z

Definition. A relation R on a set X is called a partial order if R is reflexive, antisymmetric and transitive.

Example 4. X = { 2, 3, 4, 5, 6 }, and R is the relation defined by (x, y) ∈ R if y is larger than x by an even number or zero. Hence

R = { (2, 2), (2, 4), (2, 6), (3, 3), (3, 5), (4, 4), (4, 6), (5, 5), (6, 6) }.

The digraph is

[Digraph: vertices 2, 4, 6 and 3, 5, loops on all vertices, edges 2 → 4, 4 → 6, 2 → 6 and 3 → 5]

This is

reflexive:       loops on all vertices
antisymmetric:   at most one directed arc between each pair of vertices
transitive:      need only check that (2, 4), (4, 6) and (2, 6) ∈ R

If R is a partial order on X, we often write x ⪯ y when (x, y) ∈ R. If x, y ∈ X and either x ⪯ y or y ⪯ x, then we say x and y are comparable. If x, y ∈ X and neither x ⪯ y nor y ⪯ x, then we say x and y are incomparable.

If every pair of elements in X is comparable, we call R a total order. Example 2 is a total order, since either x ≤ y or y ≤ x (or both if x = y). More generally, the less than or equals relation on the positive integers is a total order. Example 4 is not a total order. Why?

Partial orders can be used in task scheduling. Some tasks must be done before others; some can be done in either order.

Example 5. (Johnsonbaugh 3.1.21) The set T of tasks in taking an indoor flash photograph is as follows.

1. Remove lens cap.
2. Focus camera.
3. Turn off safety lock.
4. Turn on flash unit.
5. Push photo button.

Define the relation R on T by

i R j   if i = j or task i must be done before task j.

Then

R = { (1, 1), (1, 2), (1, 5), (2, 2), (2, 5), (3, 3), (3, 5), (4, 4), (4, 5), (5, 5) }.

R is reflexive, antisymmetric and transitive, and is a partial order. Why is R not a total order? Possible solutions are 1, 2, 3, 4, 5 and 3, 4, 1, 2, 5.

Definition. Let R be a relation from X to Y. The inverse of R, denoted R⁻¹, is the relation from Y to X defined by

R⁻¹ = { (y, x) | (x, y) ∈ R }.

Example 1. (Continued) X = { 2, 3, 4 } and Y = { 3, 4, 5, 6, 7 }, with (x, y) ∈ R if x divides y, so R = { (2, 4), (2, 6), (3, 3), (3, 6), (4, 4) }. Then

R⁻¹ = { (4, 2), (6, 2), (3, 3), (6, 3), (4, 4) },

and might be described as (y, x) ∈ R⁻¹ if y is divisible by x.

Definition. Let R1 be a relation from X to Y and R2 be a relation from Y to Z. The composition of R1 and R2, denoted R2 ◦ R1, is the relation from X to Z defined by

R2 ◦ R1 = { (x, z) | (x, y) ∈ R1 and (y, z) ∈ R2 for some y ∈ Y }.

Example 6.

A = { 1, 2, 3, 4 }   B = { a, b, c, d }   C = { x, y, z }

R = { (1, a), (2, d), (3, a), (3, b), (3, d) }
S = { (b, x), (b, z), (c, y), (d, z) }

The arrow diagram represents this as:

[Arrow diagram: A related to B by R, and B related to C by S]

The composition of the relations is

S ◦ R = { (2, z), (3, x), (3, z) }.
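Both the inverse and the composition are one-line set comprehensions. A sketch (function names are ours), checked against the data of Examples 1 and 6:

```python
# Composition and inverse of relations as set comprehensions.

def compose(R1, R2):
    # R2 o R1 = { (x, z) : (x, y) in R1 and (y, z) in R2 for some y }
    return {(x, z) for (x, y1) in R1 for (y2, z) in R2 if y1 == y2}

def inverse(R):
    return {(y, x) for (x, y) in R}

R = {(1, 'a'), (2, 'd'), (3, 'a'), (3, 'b'), (3, 'd')}   # Example 6
S = {('b', 'x'), ('b', 'z'), ('c', 'y'), ('d', 'z')}
assert compose(R, S) == {(2, 'z'), (3, 'x'), (3, 'z')}

divides = {(2, 4), (2, 6), (3, 3), (3, 6), (4, 4)}       # Example 1
assert inverse(divides) == {(4, 2), (6, 2), (3, 3), (6, 3), (4, 4)}
```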

Matrices of Relations

For a quick revision on matrices, read Johnsonbaugh Appendix A.

A matrix is a convenient way to represent a relation R from X to Y. Label the rows with the elements of X, label the columns with the elements of Y, and make the entry in row x column y a 1 if x R y and a 0 otherwise. This is the matrix of the relation R. The matrices depend on the orderings of the elements in the sets X and Y.

Example 2. (from §3.1, revisited) X = { 1, 2, 3, 4 } and (x, y) ∈ R if x ≤ y, so

R = { (1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 3), (2, 4), (3, 3), (3, 4), (4, 4) }.

Then the matrix is

    1 2 3 4
1   1 1 1 1
2   0 1 1 1
3   0 0 1 1
4   0 0 0 1

A relation R on X has a square matrix. The relation R on a set X which has a matrix A is

• reflexive if and only if (iff) A has 1’s on the main diagonal. (Recall (x, x) ∈ R for all x ∈ X.)
• symmetric if and only if A is symmetric, i.e. if the (i, j) th element of A is 1, so is the (j, i) th element. (Recall if (x, y) ∈ R then (y, x) ∈ R.)
• antisymmetric if and only if any 1 in the (i, j) th entry of A is matched by a 0 in the (j, i) th entry, in any position off the main diagonal. (Recall if (x, y) ∈ R then (y, x) ∉ R for x ≠ y.)

Example 6. (from §3.1, revisited)

A = { 1, 2, 3, 4 }   B = { a, b, c, d }   C = { x, y, z }

R = { (1, a), (2, d), (3, a), (3, b), (3, d) }
S = { (b, x), (b, z), (c, y), (d, z) }

Theorem 3.3.6 (Johnsonbaugh). Let R1 be a relation from X to Y and let R2 be a relation from Y to Z. Choose orderings of X, Y, and Z. Let A1 be the matrix of R1 and let A2 be the matrix of R2 with respect to the orderings selected. The matrix of the relation R2 ◦ R1 with respect to the orderings selected is obtained by replacing each nonzero term in the matrix product A1A2 by 1.

Note that the matrix sizes are automatically correct for matrix multiplication, since the number of columns of A1 and the number of rows in A2 are equal to the number of elements of Y.

For Example 6:

A1 = matrix of R =
    a b c d
1   1 0 0 0
2   0 0 0 1
3   1 1 0 1
4   0 0 0 0

A2 = matrix of S =
    x y z
a   0 0 0
b   1 0 1
c   0 1 0
d   0 0 1

A1A2 =
    x y z
1   0 0 0
2   0 0 1
3   1 0 2
4   0 0 0

What does the 2 in (A1A2)(3, 3) mean? The composition of the relations is S ◦ R = { (2, z), (3, x), (3, z) }. So if we convert the ‘2’ to a ‘1’, we have the matrix of S ◦ R.

Discussion: Suppose that the (i, j) th entry in A1A2 is nonzero. We obtain this entry by multiplying the i th row of A1 by the j th column of A2. Then there is at least one element (i, k) in the i th row of A1 and at least one element (k, j) in the j th column of A2 which are both 1. Then (i, k) ∈ R1 and (k, j) ∈ R2, so (i, j) ∈ R2 ◦ R1.

Matrix Test for Transitivity

The theorem gives a test for a relation R on a set X being transitive. Choose an ordering of X and let A be the matrix of R. Compute A² and compare A and A². The relation R is transitive if and only if whenever entry (i, j) in A² is nonzero, entry (i, j) in A is also nonzero.

Why? Suppose that the (i, j) th entry in A² is nonzero. Then there is at least one element (i, k) in the i th row of A and at least one element (k, j) in the j th column of A which are both 1, so (i, k) ∈ R and (k, j) ∈ R. Transitivity then requires (i, j) ∈ R; if the (i, j) th entry of A is zero, then (i, j) ∉ R and R is not transitive.

Example 2. (from §3.1, revisited) R is a relation on X = { 1, 2, 3, 4 } defined by (x, y) ∈ R if x ≤ y, for x, y ∈ X.

A =
    1 2 3 4
1   1 1 1 1
2   0 1 1 1
3   0 0 1 1
4   0 0 0 1

A² =
    1 2 3 4
1   1 2 3 4
2   0 1 2 3
3   0 0 1 2
4   0 0 0 1

Then A² has nonzero entries only in positions where A has 1’s, and the relation is transitive.

Example 3. (from §3.1, revisited) R = { (a, a), (b, c), (c, b), (d, d) }.

A =
    a b c d
a   1 0 0 0
b   0 0 1 0
c   0 1 0 0
d   0 0 0 1

A² =
    a b c d
a   1 0 0 0
b   0 1 0 0
c   0 0 1 0
d   0 0 0 1

Then A² has 1’s in the (b, b) and (c, c) positions, whereas A does not. The relation is not transitive, i.e. (b, c) ∈ R and (c, b) ∈ R, but (b, b) ∉ R.
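The matrix test is easy to automate. A sketch (function names are ours) that squares the matrix and checks every nonzero entry of A² against A, reproducing the two examples:

```python
# The matrix test for transitivity: compute A^2 and check that every
# nonzero entry of A^2 sits over a 1 in A.

def mat_mult(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transitive_by_matrix(A):
    A2 = mat_mult(A, A)
    n = len(A)
    return all(A[i][j] != 0 for i in range(n) for j in range(n) if A2[i][j] != 0)

# Example 2: x <= y on {1, 2, 3, 4} is transitive.
A_leq = [[1, 1, 1, 1],
         [0, 1, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 0, 1]]
assert transitive_by_matrix(A_leq)

# Example 3: {(a,a), (b,c), (c,b), (d,d)} is not transitive.
A_ex3 = [[1, 0, 0, 0],
         [0, 0, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 1]]
assert not transitive_by_matrix(A_ex3)
```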

Functions

Johnsonbaugh 2.2

Definition. Let X and Y be sets. A function f from X to Y is a subset of the Cartesian product X × Y having the property that for each x ∈ X, there is exactly one y ∈ Y with (x, y) ∈ f. (We can also write y = f(x).) We sometimes denote a function f from X to Y as f : X → Y. The set X is called the domain of f. The set { y | (x, y) ∈ f } (which is a subset of Y) is called the range of f.

Example 1. The relation f = { (1, a), (2, b), (3, c) } from X = { 1, 2, 3 } to Y = { a, b, c } is a function.

The relation R = { (1, a), (2, b), (3, c) } from X = { 1, 2, 3, 4 } to Y = { a, b, c } is not a function. Why?

The relation R = { (1, a), (1, b), (2, b), (3, c) } from X = { 1, 2, 3 } to Y = { a, b, c } is not a function. Why?

We can depict the situations using arrow diagrams. For the three cases we have:

[Arrow diagrams for the three relations above]

There must be exactly one arrow from every element in the domain. There cannot be no arrow, or more than one arrow.

Some Useful Functions

Definition. If x is an integer and y is a positive integer, we define x mod y to be the remainder when x is divided by y. Some simple examples:

6 mod 3 = ?    9 mod 10 = ?    14 mod 3 = ?    365 mod 7 = ?

This last result tells us that 29th March next year will be a Thursday.

Definition. The floor of x, denoted ⌊ x ⌋, is the greatest integer less than or equal to x. The ceiling of x, denoted ⌈ x ⌉, is the least integer greater than or equal to x. Some simple examples:

⌊ 1.4 ⌋ = ?    ⌊ 6 ⌋ = ?    ⌊ −5.7 ⌋ = ?
⌈ 1.4 ⌉ = ?    ⌈ 6 ⌉ = ?    ⌈ −5.7 ⌉ = ?

The graphs of the floor and ceiling functions are shown below.

(Figure: graph of the floor function, a step function with a jump at each integer.)
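The answers can be spot-checked in any language; a quick sketch in Python, whose math.floor and math.ceil match the definitions above, and whose % operator on a nonnegative integer and positive integer is exactly mod:

```python
import math

# mod examples: x mod y is the remainder when x is divided by y
print(6 % 3, 9 % 10, 14 % 3, 365 % 7)   # 0 9 2 1

# floor: greatest integer <= x; ceiling: least integer >= x
print(math.floor(1.4), math.floor(6), math.floor(-5.7))   # 1 6 -6
print(math.ceil(1.4), math.ceil(6), math.ceil(-5.7))      # 2 6 -5
```

Note that 365 mod 7 = 1 is why the day of the week for a fixed date shifts by one in a non-leap year.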

(Figure: graph of the ceiling function.)

Example 2.2.14 (Johnsonbaugh): Hash Functions. We wish to store nonnegative integers in computer memory cells. A hash function computes the location from the data item. For example, if the cells are labelled 0, 1, 2, ..., 10, we might use h(n) = n mod 11. Store the numbers 15, 558, 32, 132, 102, 5 in the eleven cells.

15  = 1 × 11 + 4     so 15 mod 11  = 4
558 = 50 × 11 + 8    so 558 mod 11 = 8
32  = 2 × 11 + 10    so 32 mod 11  = 10
132 = 12 × 11 + 0    so 132 mod 11 = 0
102 = 9 × 11 + 3     so 102 mod 11 = 3
5   = 0 × 11 + 5     so 5 mod 11   = 5

These all store in the appropriate cells as shown in the diagram below.

cell:      0    1    2    3    4    5    6    7    8    9    10
contents:  132            102  15   5              558       32

Now store 257 = 23 × 11 + 4. But location 4 is occupied. A collision has occurred. The collision resolution policy is to use the next highest unoccupied cell. But if all higher numbered cells are occupied, start looking at the lowest numbered cell (10 → 0). To find the data item n, compute m = h(n) and look at location m. If n is not there, look at higher numbered cells.
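A minimal sketch of this hashing scheme with the wrap-around collision policy (the function name is mine, not from the text; it assumes the table never completely fills):

```python
def hash_store(items, size=11):
    """Store items in `size` cells using h(n) = n mod size; on a
    collision, probe the next highest cell, wrapping 10 -> 0."""
    cells = [None] * size
    for n in items:
        m = n % size
        while cells[m] is not None:   # collision: try the next cell
            m = (m + 1) % size
        cells[m] = n
    return cells

cells = hash_store([15, 558, 32, 132, 102, 5])
# 132 lands in cell 0, 102 in 3, 15 in 4, 5 in 5, 558 in 8, 32 in 10
```

Storing 257 afterwards probes cells 4 and 5 (occupied) and lands in cell 6, as in the example.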

Definition. A function f from X to Y is said to be one-to-one (or injective) if for each y ∈ Y, there is at most one x ∈ X with f(x) = y.

Example 3. The function f = { (1, b), (2, c), (3, d) } from X = { 1, 2, 3 } to Y = { a, b, c, d } is a one-to-one function. The arrow diagram is: 1 → b, 2 → c, 3 → d.

Definition. If f is a function from X to Y and the range of f is Y, f is said to be onto Y (or an onto function or a surjective function).

In the preceding example, f is not onto: no element of X is mapped to a.

Example 4. The function f = { (1, a), (2, b), (3, a) } from X = { 1, 2, 3 } to Y = { a, b } is onto, but not one-to-one. The arrow diagrams illustrate this.

Definition. A function that is both one-to-one and onto is called a bijection. For example, f = { (1, a), (2, b), (3, c) } from { 1, 2, 3 } to { a, b, c } is a bijection.

Inverse Function

Suppose that f is a one-to-one, onto function from X to Y. The inverse relation { (y, x) | (x, y) ∈ f } is a one-to-one, onto function from Y to X, the inverse function f −1.

Since f is onto, the range of f is Y, and hence the domain of f −1 is Y. Since f is one-to-one, for each y ∈ Y there is only one x ∈ X for which (y, x) ∈ f −1, so f −1 is a function. Since the domain of f is X, the range of f −1 is X, and hence f −1 is onto. Since f is a function, f −1 is one-to-one. This is evident also from the arrow diagram: we can obtain the arrow diagram for f −1 by reversing the directions of the arrows for f.

Example 5.

R1 = { (1, a), (2, b), (3, c) } from X = { 1, 2, 3 } to Y = { a, b, c, d }. Or we could say R1 = { (1, a), (2, b), (3, c) } ⊆ { 1, 2, 3 } × { a, b, c, d }. R1 is a function, is one-to-one, but is not onto.

R1 −1 = { (a, 1), (b, 2), (c, 3) } ⊆ { a, b, c, d } × { 1, 2, 3 } is not a function. Why? (There is no pair with first element d.)

R2 = { (1, a), (2, b), (3, c), (4, d) } ⊆ { 1, 2, 3, 4 } × { a, b, c, d }. The arrow diagram is: 1 → a, 2 → b, 3 → c, 4 → d. R2 is a function, is one-to-one, and is onto.

R2 −1 = { (a, 1), (b, 2), (c, 3), (d, 4) } ⊆ { a, b, c, d } × { 1, 2, 3, 4 }. R2 −1 is a function, is one-to-one, and is onto.

Discrete Mathematics: Week 6

Algorithms

Introduction

An algorithm is a step by step method of solving a problem, i.e. a recipe. Algorithms typically have the following characteristics.

• Input  The algorithm receives input.
• Output  The algorithm produces output.
• Precision  The steps are precisely stated.
• Determinism  The intermediate results of each step of execution are unique and are determined only by the inputs and the results of the preceding steps.
• Finiteness  The algorithm terminates; that is, it stops after finitely many instructions have been executed.
• Correctness  The output produced by the algorithm is correct; that is, the algorithm correctly solves the problem.
• Generality  The algorithm applies to a set of inputs.

Algorithms are written in pseudocode, which resembles real computer code. Johnsonbaugh in the 6th edition has rewritten algorithms to be more like Java.

Algorithm 4.1.1  Finding the Maximum of Three Numbers
This algorithm finds the largest of the numbers a, b, and c.
Input: a, b, c
Output: large (the largest of a, b, and c)

1. max3 (a, b, c) {
2.     large = a
3.     if (b > large)
4.         large = b      // if b is larger than large, update large
5.     if (c > large)
6.         large = c      // if c is larger than large, update large
7.     return large
8. }
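The pseudocode maps almost line for line into a real language; a sketch in Python:

```python
def max3(a, b, c):
    """Return the largest of a, b, and c (Algorithm 4.1.1)."""
    large = a
    if b > large:        # if b is larger than large, update large
        large = b
    if c > large:        # if c is larger than large, update large
        large = c
    return large
```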

Algorithms consist of a title, a brief description of the algorithm, the input to and output from the algorithm, and functions containing the instructions of the algorithm. This algorithm has one function. Sometimes lines are numbered to make it convenient to refer to them.

line 1  max3 is the name of the function, and a, b, c are input parameters or variables.

line 2  = is the assignment operator: large is assigned the value of a. Testing equality uses ==.

line 3  This introduces the if statement. The structure is

    if (condition)
        action

If condition is true, action is executed and control passes to the statement following action. If condition is false, action is not executed and control passes immediately to the statement following action. For example,

    if (x == 0)
        y = 0
    z = x + y

If action consists of multiple statements, enclose them in braces:

    if (x ≥ 0) {
        y = 0
        z = x + 3
    }

We can use arithmetic operators +, −, ∗, /, ( , ), relational operators ==, ¬=, <, >, ≤, ≥, and logical operators ∧, ∨, ¬.

The Matlab equivalent command is

    if expression
        commands
    end

There is also the if else statement, which has the structure

    if (condition)
        action 1
    else
        action 2

The Matlab equivalent command is

    if expression
        commands if true
    else
        commands if false
    end

Matlab can extend the if structure further with elseif.

The notation // signals that the rest of the line is a comment. A more common notation is to use %, which is used, for example, by Matlab and postscript.

line 4  This assigns large the value of b if b > large.

line 6  This assigns large the value of c if c > large.

line 7  The return large statement terminates the function and returns the value of large. If there is no return statement, the closing brace terminates the function.

We can assign values to the input variables and use a simulation called a trace to evaluate the operation of the algorithm. For example, we set a = 6, b = 1, c = 9. Then:

line 2  Set large to a, namely 6.
line 3  The if statement b > large is false, so line 4 is not executed.
line 5  The if statement c > large is true, so at line 6 large is set to the value of c, namely 9.
line 7  The return statement terminates the function and returns the value of large, namely 9.

Algorithm 4.1.2  Finding the Maximum Value in a Sequence
This algorithm finds the largest of the numbers s1, ..., sn.
Input: s, n
Output: large (the largest value in the sequence s)

max (s, n) {
    large = s1
    for i = 2 to n
        if (si > large)
            large = si
    return large
}
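Algorithm 4.1.2 in Python; the sketch uses 0-based indexing (an adjustment of mine, since the pseudocode indexes the sequence from 1):

```python
def max_seq(s):
    """Return the largest value in the nonempty sequence s (Algorithm 4.1.2)."""
    large = s[0]
    for i in range(1, len(s)):   # corresponds to i = 2 to n
        if s[i] > large:
            large = s[i]
    return large
```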

The structure of the for loop is

    for var = init to limit
        action

If action consists of multiple statements, enclose them in braces. The for loop specifies the initial and final integer values between which the operations are processed.

The Matlab equivalent command is

    for i = m:n
        commands
    end

Alternatively, we can use a while loop:

    while (condition)
        action

action is repeated as long as condition is true. Using a while loop, Algorithm 4.1.2 would become

max (s, n) {
    large = s1
    i = 2
    while i ≤ n {
        if (si > large)
            large = si
        i = i + 1
    }
    return large
}

The Matlab equivalent command is

    while expression
        commands
    end

Examples of Algorithms

Algorithm 4.2.1  Text Search
This algorithm searches for an occurrence of the pattern p in text t. It returns the smallest index i such that p occurs in t starting at index i. If p does not occur in t, it returns 0.
Input: p (indexed from 1 to m), m, t (indexed from 1 to n), n
Output: i

text search(p, m, t, n) {
    for i = 1 to n − m + 1 {
        j = 1
        // i is the index in t of the first character of the substring
        // to compare with p, and j is the index in p
        // the while loop compares ti · · · ti+m−1 and p1 · · · pm
        while (ti+j−1 == pj) {
            j = j + 1
            if (j > m)
                return i
        }
    }
    return 0
}

The algorithm indexes the characters in the text with i and those in the pattern with j. The search starts with the first character in the text, and finishes with character n − m + 1 in the text. If there is a match for the first indices in the pattern and text, then the pattern index j is incremented by 1, the next characters are compared, and the match process repeats. This continues until either the pattern is found, when the last characters in the pattern and text coincide, or there is not a match. In the latter case, the text index i is incremented by 1, the pattern index j is reset to 1, and the comparison restarts.
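A direct Python transliteration, keeping the 1-based indexing of the pseudocode so the returned index agrees with the algorithm (a sketch):

```python
def text_search(p, t):
    """Return the smallest 1-based index i such that pattern p occurs in
    text t starting at index i, or 0 if p does not occur in t."""
    m, n = len(p), len(t)
    for i in range(1, n - m + 2):        # i = 1 to n - m + 1
        j = 1
        while t[i + j - 2] == p[j - 1]:  # compare t_{i+j-1} with p_j
            j += 1
            if j > m:
                return i
    return 0
```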

Example 1. This shows the operation of the text search algorithm in a search for the string "001" in the string "010001". (The figure shows the pattern sliding along the text, with i marking the text position, j the pattern position, Y a match and N a mismatch.)

1. The text index i and the pattern index j are initially set to 1, and the first characters compared. There is a match.
2. The pattern index j is incremented to 2, and the pattern and text characters j and i + j − 1 compared. There is not a match.
3. The text index i is incremented to 2 and the pattern index j reset to 1, and the pattern and text characters compared. There is not a match.
4. The text index i is incremented to 3 and the pattern index j left at 1, and the pattern and text characters compared. There is a match.
5. The pattern index j is incremented to 2, and the pattern and text characters j and i + j − 1 compared. There is a match.
6. The pattern index j is incremented to 3, and the pattern and text characters j and i + j − 1 compared. There is not a match.
7. The text index i is incremented to 4 and the pattern index j reset to 1, and the pattern and text characters compared. There is a match.
8. The pattern index j is incremented to 2, and the pattern and text characters j and i + j − 1 compared. There is a match.
9. The pattern index j is incremented to 3, and the pattern and text characters j and i + j − 1 compared. There is a match, and the pattern is found.

Example 2.
Algorithm: Testing Whether a Positive Integer is Prime
This algorithm finds whether a positive integer is prime or composite.
Input: m, a positive integer
Output: true, if m is prime; false, if m is not prime

is prime(m) {
    for i = 2 to ⌊ √m ⌋
        if (m mod i == 0)
            return false
    return true
}

If m mod i is 0, then i divides m: false is returned and the function is terminated. If the for loop runs to its conclusion, control passes to the final line and true is returned. Note that the trial divisors need only go up to √m, since if m is not prime, one factor will be less than or equal to √m and one factor will be greater than or equal to √m.

Example 3.
Algorithm: Finding a Prime Larger Than a Given Integer
This algorithm finds the first prime larger than a given positive integer.
Input: n, a positive integer
Output: m, the smallest prime greater than n

large prime(n) {
    m = n + 1
    while (¬ is prime(m))
        m = m + 1
    return m
}

The algorithm starts with m = n + 1, and continually increments m in a while loop as long as m is not prime, calling the function is prime to test whether m is prime. If is prime returns false, then ¬ false is true, and the while loop continues. If is prime returns true, then ¬ true is false, the while loop terminates, and m is returned.
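Both algorithms in Python (a sketch; as in the pseudocode, is_prime assumes m ≥ 2, and math.isqrt gives ⌊√m⌋ exactly):

```python
import math

def is_prime(m):
    for i in range(2, math.isqrt(m) + 1):   # i = 2 to floor(sqrt(m))
        if m % i == 0:
            return False
    return True

def large_prime(n):
    m = n + 1
    while not is_prime(m):   # while "not prime", keep incrementing m
        m = m + 1
    return m
```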

Analysis of Algorithms

Analysis of an algorithm involves deriving estimates of the time and storage space needed to execute the algorithm. Real-life algorithms are checked for the amount of time to execute; time is the more crucial. This has real implications. For example, it is pointless to have an air traffic control algorithm that takes two hours to run if an update is required every 15 minutes. Cooley and Tukey derived the Fast Fourier Transform algorithm in the 1960's, which reduced the time for a numerical discrete Fourier transform from behaving as n² to behaving as n lg n. Note that lg n means the logarithm to the base 2 of n.

Suppose that we measure the size of the "input" as n.

Worst-case time  The maximum time needed to execute the algorithm among all inputs of size n.
Best-case time  The minimum time needed to execute the algorithm among all inputs of size n.
Average-case time  The average time needed to execute the algorithm among all inputs of size n.

Example 1.

(a) The algorithm is to find the largest element in a sequence. The number of iterations of a while loop (or comparisons) is n − 1 for input of size n in all three cases.

(b) Search for a key word in a list of size n, where the key word is in position i, i = 1, 2, ..., n, in the list or not there at all. The best-case time is 1 if the key word is first in the list. The worst-case time is n if the key word is last in the list or not there at all. The average-case time is the sum over all n + 1 cases divided by n + 1:

(1 + 2 + 3 + · · · + n + n)/(n + 1) = (n(n + 1)/2 + n)/(n + 1) = (n² + 3n)/(2(n + 1)).

(c) A set X contains red items and black items. Count all subsets that contain at least one red item. Since there are 2^n subsets, the time taken behaves as 2^n.

(d) In the travelling salesperson problem, the salesperson visits n towns in some order. There are (n − 1)!/2 possible tours of n towns.

Question: Which grows the faster, 2^n or (n − 1)!/2?

The following table shows how time varies in the execution of an algorithm with the size of the input and the complexity of the algorithm. We can see from the table, for example, that with an input of size n = 10⁶, the time to execute is 20 seconds if the algorithm behaves as n lg n, and twelve days if the algorithm behaves as n².

TABLE 4.3.1  Time to execute an algorithm if one step takes 1 microsecond to execute.

Number of steps    n = 3          n = 6          n = 9          n = 12
1                  10⁻⁶ sec       10⁻⁶ sec       10⁻⁶ sec       10⁻⁶ sec
lg lg n            10⁻⁶ sec       10⁻⁶ sec       2 × 10⁻⁶ sec   2 × 10⁻⁶ sec
lg n               2 × 10⁻⁶ sec   3 × 10⁻⁶ sec   3 × 10⁻⁶ sec   4 × 10⁻⁶ sec
n                  3 × 10⁻⁶ sec   6 × 10⁻⁶ sec   9 × 10⁻⁶ sec   10⁻⁵ sec
n lg n             5 × 10⁻⁶ sec   2 × 10⁻⁵ sec   3 × 10⁻⁵ sec   4 × 10⁻⁵ sec
n²                 9 × 10⁻⁶ sec   4 × 10⁻⁵ sec   8 × 10⁻⁵ sec   10⁻⁴ sec
n³                 3 × 10⁻⁵ sec   2 × 10⁻⁴ sec   7 × 10⁻⁴ sec   2 × 10⁻³ sec
2^n                8 × 10⁻⁶ sec   6 × 10⁻⁵ sec   5 × 10⁻⁴ sec   4 × 10⁻³ sec

Number of steps    n = 50         n = 100        n = 1000
1                  10⁻⁶ sec       10⁻⁶ sec       10⁻⁶ sec
lg lg n            2 × 10⁻⁶ sec   3 × 10⁻⁶ sec   3 × 10⁻⁶ sec
lg n               6 × 10⁻⁶ sec   7 × 10⁻⁶ sec   10⁻⁵ sec
n                  5 × 10⁻⁵ sec   10⁻⁴ sec       10⁻³ sec
n lg n             3 × 10⁻⁴ sec   7 × 10⁻⁴ sec   10⁻² sec
n²                 3 × 10⁻³ sec   0.01 sec       1 sec
n³                 0.13 sec       1 sec          16.7 min
2^n                36 yr          4 × 10¹⁶ yr    3 × 10²⁸⁷ yr

Number of steps    n = 10⁵          n = 10⁶
1                  10⁻⁶ sec         10⁻⁶ sec
lg lg n            4 × 10⁻⁶ sec     4 × 10⁻⁶ sec
lg n               2 × 10⁻⁵ sec     2 × 10⁻⁵ sec
n                  0.1 sec          1 sec
n lg n             2 sec            20 sec
n²                 3 hr             12 days
n³                 32 yr            31,710 yr
2^n                3 × 10³⁰⁰⁸⁹ yr   3 × 10³⁰¹⁰¹⁶ yr

Often we are less interested in the best- or worst-case times for an algorithm to execute than in how the times grow as n increases.

Example 2. Suppose the worst-case time is

t(n) = n³/3 + n² − n/3.

Then for n = 10, 100, 1,000, 10,000 we have the table

n         n³/3 + n² − n/3    n³/3
10        430                333
100       343,300            333,333
1,000     334,333,000        333,333,333
10,000    3.334 × 10¹¹       3.333 × 10¹¹

For large n, t(n) ≈ n³/3. Hence t(n) is of order n³, ignoring the constant 1/3.

Definition. Let f and g be functions with domain { 1, 2, 3, . . . }.

We write f(n) = O(g(n)) and say that f(n) is of order at most g(n) or f(n) is big oh of g(n) if there exists a positive constant C1 such that |f(n)| ≤ C1 |g(n)| for all but finitely many positive integers n.

We write f(n) = Ω(g(n)) and say that f(n) is of order at least g(n) or f(n) is omega of g(n) if there exists a positive constant C2 such that |f(n)| ≥ C2 |g(n)| for all but finitely many positive integers n.

We write f(n) = Θ(g(n)) and say that f(n) is of order g(n) or f(n) is theta of g(n) if f(n) = O(g(n)) and f(n) = Ω(g(n)).
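The table entries for t(n) can be reproduced directly; a sketch using exact integer arithmetic (n³ − n is always divisible by 3):

```python
def t(n):
    # worst-case time t(n) = n^3/3 + n^2 - n/3, computed exactly
    return (n**3 - n) // 3 + n**2

for n in [10, 100, 1000]:
    print(n, t(n), n**3 // 3)
```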

Example 2. (Continued)

t(n) = n³/3 + n² − n/3.

t(n) ≤ n³/3 + n²
     ≤ n³/3 + n³
     = (4/3) n³,

so t(n) = O(n³). Also

t(n) ≥ n³/3 + n² − n²/3
     = n³/3 + (2/3) n²
     ≥ n³/3,

so t(n) = Ω(n³). Therefore t(n) = Θ(n³).

Example 3. f(n) = 2n² + 3n + 1. Then f(n) ≤ 2n² + 3n² + n² = 6n², so f(n) = O(n²). Also f(n) ≥ 2n², so f(n) = Ω(n²). Therefore f(n) = Θ(n²).

Example 4. f(n) = 4n + 3. Then f(n) ≤ 4n + 3n = 7n, so f(n) = O(n). Also f(n) ≥ 4n, so f(n) = Ω(n). Therefore f(n) = Θ(n).

Exercise: Try f(n) = 4n³ − n² + 2n.

Theorem 4.3.4 (Johnsonbaugh). Let p(n) = ak n^k + ak−1 n^(k−1) + · · · + a1 n + a0 be a polynomial in n of degree k, where each ai is nonnegative and ak ≠ 0. Then p(n) = Θ(n^k).

Proof:
p(n) = ak n^k + ak−1 n^(k−1) + · · · + a1 n + a0
     ≤ ak n^k + ak−1 n^k + · · · + a1 n^k + a0 n^k
     = (ak + ak−1 + · · · + a1 + a0) n^k
     = C1 n^k,

so p(n) = O(n^k). Also p(n) ≥ ak n^k, so p(n) = Ω(n^k). Therefore p(n) = Θ(n^k).

(Figure 4.3.1  Growth of some common functions: y = 1, y = lg n, y = n, y = n lg n, y = n², y = 2^n.)

Example 5. f(n) = 2n + 3 lg n.

Remember that lg n represents log2 n. Does your calculator have an lg button? If not, note that y = lg n means n = 2^y, so ln n = ln (2^y) = y ln 2, and hence

lg n = y = ln n / ln 2.

Now lg n < n for all n ≥ 1. (See the preceding graph.) Therefore

f(n) = 2n + 3 lg n < 2n + 3n = 5n,

so f(n) = O(n). Also f(n) ≥ 2n, so f(n) = Ω(n). Therefore f(n) = Θ(n).

Example 6. f(n) = 1 + 2 + 3 + · · · + (n − 1) + n. This example assumes that we don't know that the sum is n(n + 1)/2.

1 + 2 + 3 + · · · + (n − 1) + n ≤ n + n + n + · · · + n + n = n²,

so f(n) = O(n²). Also

1 + 2 + 3 + · · · + (n − 1) + n ≥ 1 + 1 + 1 + · · · + 1 + 1 = n,

so f(n) = Ω(n). This seems to be too low an estimate. We can do better. Let's be trickier, and throw away approximately the first half of the series:

1 + 2 + 3 + · · · + (n − 1) + n ≥ ⌈n/2⌉ + · · · + (n − 1) + n ≥ n/2 + n/2 + · · · + n/2.

How many terms are there?

If n = 8:  4 + 5 + 6 + 7 + 8, i.e. 5 terms.
If n = 9:  5 + 6 + 7 + 8 + 9, i.e. 5 terms.
If n = 2k:  k + (k + 1) + · · · + 2k, i.e. 2k − (k − 1) = k + 1 terms.
If n = 2k + 1:  (k + 1) + (k + 2) + · · · + (2k + 1), i.e. (2k + 1) − k = k + 1 terms.

Hence there are at least (n + 1)/2 terms, each at least n/2. Therefore

f(n) ≥ ((n + 1)/2)(n/2) ≥ (n/2)(n/2) = n²/4,

so f(n) = Ω(n²). Therefore f(n) = Θ(n²).

If we know that f(n) = n(n + 1)/2, then

f(n) = n²/2 + n/2 ≤ n²/2 + n²/2 = n²,

so f(n) = O(n²). Also f(n) ≥ n²/2, so f(n) = Ω(n²). Therefore f(n) = Θ(n²).
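The two-sided bound n²/4 ≤ f(n) ≤ n² can be spot-checked numerically (a sketch):

```python
def f(n):
    return sum(range(1, n + 1))   # 1 + 2 + ... + n

# n^2/4 <= f(n) <= n^2 for every n >= 1, as derived above,
# and f(n) equals the closed form n(n + 1)/2
for n in [1, 8, 9, 100]:
    assert n * n / 4 <= f(n) <= n * n
    assert f(n) == n * (n + 1) // 2
```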

Discrete Mathematics: Week 7

Analysis of Algorithms (Continued)

Example 7. f(n) = 1^k + 2^k + 3^k + · · · + (n − 1)^k + n^k. We can generalize Example 6 to kth powers. Now

f(n) ≤ n^k + n^k + n^k + · · · + n^k + n^k = n × n^k = n^(k+1),

so f(n) = O(n^(k+1)). Also, throwing away approximately the first half of the series as before,

f(n) ≥ (n/2)^k + (n/2 + 1)^k + · · · + (n − 1)^k + n^k
     ≥ (n/2)^k + (n/2)^k + · · · + (n/2)^k
     ≥ ((n + 1)/2)(n/2)^k
     = (1/2^(k+1))(n + 1) n^k
     = (1/2^(k+1)) n^(k+1) + (1/2^(k+1)) n^k
     ≥ (1/2^(k+1)) n^(k+1),

so f(n) = Ω(n^(k+1)). Therefore f(n) = Θ(n^(k+1)).

Example 8. What is the order of n! ? What is n! ? The basic definition is

n! = n × (n − 1) × (n − 2) × · · · × 3 × 2 × 1.

Hence

1! = 1,  2! = 2,  3! = 6,  4! = 24,  5! = 120,  6! = 720,  7! = 5040,

etc. Then 0! is defined to be 1. (What is (1 1/2)! ?) My calculator runs out at 69!, as 70! > 10¹⁰⁰. What is the limit on your calculator?

Stirling's formula gives an approximation for n! :

n! ≈ √(2πn) (n/e)^n.

Hence ln n! ≈ (1/2) ln 2π + (1/2) ln n + n ln n − n. So we suspect that lg n! = Θ(n lg n).

Taking lg of n! :

lg n! = lg n + lg (n − 1) + · · · + lg 2 + lg 1 ≤ lg n + lg n + · · · + lg n + lg n = n lg n.

So lg n! = O(n lg n). Finding a lower limit, throw away approximately the first half of the terms:

lg n! ≥ lg n + lg (n − 1) + · · · + lg ⌈n/2⌉
      ≥ lg (n/2) + lg (n/2) + · · · + lg (n/2)
      ≥ (n/2) lg (n/2)
      = (n/2)(lg n − lg 2)
      = (n lg n)/2 − n/2
      ≥ (n lg n)/4,   as for n ≥ 4, lg n ≥ 2.

So lg n! = Ω(n lg n). Therefore lg n! = Θ(n lg n).

Definition. If an algorithm requires t(n) units of time to terminate in the best case for an input of size n, and t(n) = O(g(n)), we say that the best-case time required by the algorithm is of order at most g(n), or that the best-case time required by the algorithm is O(g(n)).

If an algorithm requires t(n) units of time to terminate in the worst case for an input of size n, and t(n) = O(g(n)),
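The bound (n lg n)/4 ≤ lg n! ≤ n lg n can be checked numerically; math.lgamma(n + 1) gives ln n! without overflowing even for large n (a sketch):

```python
import math

def lg_factorial(n):
    # lg n! = ln(n!) / ln 2, using lgamma(n + 1) = ln(n!)
    return math.lgamma(n + 1) / math.log(2)

# the sandwich derived above, checked for a few n >= 4
for n in [4, 10, 69, 1000]:
    lg = lg_factorial(n)
    assert n * math.log2(n) / 4 <= lg <= n * math.log2(n)
```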

we say that the worst-case time required by the algorithm is of order at most g(n), or that the worst-case time required by the algorithm is O(g(n)).

If an algorithm requires t(n) units of time to terminate in the average case for an input of size n, and t(n) = O(g(n)), we say that the average-case time required by the algorithm is of order at most g(n), or that the average-case time required by the algorithm is O(g(n)).

Replace O by Ω and "at most" by "at least" to obtain the definition of what it means for the best-case, worst-case or average-case time of an algorithm to be of order at least g(n).

If the best-case time is O(g(n)) and Ω(g(n)), then the best-case time required by the algorithm is Θ(g(n)). Similar definitions apply for the worst-case and average-case times.

Example 1(b). (Continued) Searching for a key in an unordered sequence (Johnsonbaugh §4.3).
The best-case time is 1, i.e. Θ(1).
The worst-case time is n, i.e. Θ(n).
The average-case time is

((1 + 2 + 3 + · · · + n) + n)/(n + 1) = (n² + 3n)/(2(n + 1)),

i.e. Θ(n).

Example 9. Consider the pseudocode

for i = 1 to n
    for j = 1 to i
        x = x + 1

Find a theta notation for the number of times the statement x = x + 1 is executed. The number of times is

1 + 2 + 3 + · · · + n = n²/2 + n/2,

which is Θ(n²).

Example 10. Consider the pseudocode

i = n
while (i ≥ 1) {
    x = x + 1
    i = ⌊ i/2 ⌋
}

Find a theta notation for the number of times the statement x = x + 1 is executed.

Suppose that n = 8. Then for

i = 8    x = x + 1 is executed
i = 4    x = x + 1 is executed
i = 2    x = x + 1 is executed
i = 1    x = x + 1 is executed
i = 0    x = x + 1 is not executed.

So the statement is executed 4 times.

Suppose n = 2^k. Then the statement is executed for i = 2^k, 2^(k−1), 2^(k−2), . . . , 2, 2^0 = 1, i.e. k + 1 = 1 + lg n times. If 2^k ≤ n < 2^(k+1), then after k iterations i < 2, so x = x + 1 is executed k + 1 = 1 + ⌊ lg n ⌋ times, which is Θ(lg n).

Example 11. Find a theta notation for the number of times the statement x = x + 1 is executed in the following pseudocode.

j = n
while (j ≥ 1) {
    for i = 1 to j
        x = x + 1
    j = ⌊ j/2 ⌋
}

Let t(n) denote the number of times the statement x = x + 1 is executed. The first time in the while loop, the statement is executed n times. The second time in the while loop, the statement is executed j = ⌊ n/2 ⌋ times, and so on. Hence

t(n) ≤ n + n/2 + n/4 + n/8 + · · · + n/2^k,   where n/2^k < 1.

Summing the geometric series,

t(n) ≤ n (1 − (1/2)^(k+1)) / (1 − 1/2) = 2n (1 − 1/2^(k+1)) ≤ 2n.

So for all n, t(n) = O(n). Also t(n) ≥ n, so t(n) = Ω(n). Therefore t(n) = Θ(n).
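Counting the executions directly confirms both theta estimates (a sketch; the function names are mine):

```python
def count_halving(n):
    # Example 10: x = x + 1 inside while (i >= 1), then i = floor(i/2)
    count, i = 0, n
    while i >= 1:
        count += 1
        i = i // 2
    return count        # equals 1 + floor(lg n)

def count_nested(n):
    # Example 11: the inner for runs j times, then j = floor(j/2)
    count, j = 0, n
    while j >= 1:
        count += j
        j = j // 2
    return count        # between n and 2n
```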

A "good" algorithm has a worst-case polynomial time, and such problems are called feasible or tractable problems.

Exponential or factorial worst-case time problems are called intractable, and require a long time to execute even for relatively small n. Such a problem is the travelling salesperson problem.

There are problems for which there is no algorithm. These are called unsolvable. Such a problem is the halting problem: given an arbitrary program and a set of inputs, will the program ever halt?

A large number of solvable problems have an undetermined status. They are thought to be intractable, but none have ever been proved to be intractable.

Recursive Algorithms

A recursive function invokes itself. A recursive algorithm is an algorithm that contains a recursive function.

Example 1 (Johnsonbaugh Example 4.4.1). Factorials. We know that n! = n(n − 1)!, and that 0! is defined as 1. We can resolve the problem of computing n! into the simpler problem of computing (n − 1)!, then into computing (n − 2)!, and so on, until we get to 0!, which is known, i.e.

Problem    Simplified Problem
5!         5 · 4!
4!         4 · 3!
3!         3 · 2!
2!         2 · 1!
1!         1 · 0!
0!         None

Table 1: Decomposing the factorial problem.

Problem    Solution
0!         1
1!         1 · 0! = 1
2!         2 · 1! = 2
3!         3 · 2! = 6
4!         4 · 3! = 24
5!         5 · 4! = 120

Table 2: Combining subproblems of the factorial problem.

Algorithm 4.4.2  Computing n Factorial
This recursive algorithm computes n!.
Input: n, an integer greater than or equal to 0
Output: n!

1. factorial (n) {
2.     if (n == 0)
3.         return 1
4.     return n ∗ factorial (n − 1)
5. }
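Algorithm 4.4.2 in Python (a sketch; the structure is identical to the pseudocode):

```python
def factorial(n):
    if n == 0:                        # basis step: 0! = 1
        return 1
    return n * factorial(n - 1)       # n! = n * (n - 1)!
```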

We can see how the algorithm computes n!. If n = 1 is input, then since n > 0 we proceed to line 4, and compute n · (n − 1)! = 1 · 0! = 1 · 1 = 1. If n = 2, then since n > 0 we proceed to line 4, and compute n · (n − 1)! = 2 · 1! = 2 · 1 = 2. And so on.

Recursive algorithms and their proofs go hand-in-hand with mathematical induction.

Theorem: Algorithm 4.4.2 returns the value of n!, n ≥ 0.
Proof:
Basis Step: For n = 0, the algorithm correctly returns 1.
Inductive Step: Assume that the algorithm correctly returns the value of (n − 1)!, n > 0. Suppose n is input to the algorithm. Since n > 0, we proceed to line 4. By the assumption, the algorithm correctly computes (n − 1)!. Hence the algorithm correctly computes n! = n · (n − 1)!.

Example 2.
Algorithm: Recursively Finding the Minimum Value in a Sequence
This algorithm recursively finds the smallest of a finite sequence of numbers s1, s2, . . . , sn.
Input: s, n
Output: small (the smallest value in the sequence s)

min (s, n) {
    if (n == 1) {
        small = s1
        return small
    }
    else {
        small = min (s, n − 1)
        if (small ≤ sn)
            return small
        else {
            small = sn
            return small
        }
    }
}

If n = 1, the algorithm returns the only number in the sequence. If n = 2, then min(s, 1) is recursively called, and this returns s1. This is compared with s2, and the smaller returned. If n = 3, then min(s, 2) returns the smaller of s1 and s2, and this is compared with s3 and the smaller returned. And so on.

Theorem: This algorithm correctly returns the value of the smallest of a finite sequence of numbers.
Proof:
Basis Step: For n = 1, there is only one number in the sequence, and it is returned.
Inductive Step: Assume that the algorithm correctly returns the smallest value in a sequence of length n − 1, n > 1. Then for a sequence of length n, the algorithm correctly computes the smallest of s1, . . . , sn−1, compares this with sn, and returns the smaller. Therefore the algorithm correctly returns the smallest number in a sequence of length n.

Example 3 (Johnsonbaugh Example 4.4.5). Robot walk. A robot can take steps of 1 metre or 2 metres. In how many ways can the robot walk n metres?

Distance    Sequence of Steps                                        Number of Ways to Walk
1           1                                                        1
2           1, 1 or 2                                                2
3           1, 1, 1 or 1, 2 or 2, 1                                  3
4           1, 1, 1, 1 or 1, 1, 2 or 1, 2, 1 or 2, 1, 1 or 2, 2      5

Let walk (n) denote the number of ways. Then walk (1) = 1 and walk (2) = 2. Suppose n > 2. The robot's first step is either 1 metre, leaving n − 1 metres to walk, or 2 metres, leaving n − 2 metres. Then

walk (n) = walk (n − 1) + walk (n − 2).

Algorithm 4.4.6  Robot Walking
This algorithm computes the function defined by

walk (n) = 1,                              n = 1
           2,                              n = 2
           walk (n − 1) + walk (n − 2),    n > 2.

Input: n
Output: walk (n)

walk (n) {
    if (n == 1 ∨ n == 2)

        return n
    return walk (n − 1) + walk (n − 2)
}

We can see that the sequence generated is 1, 2, 3, 5, 8, 13, 21, 34, 55, . . . . This is the Fibonacci sequence {fn}, defined by the equations

f1 = 1,  f2 = 1,  fn = fn−1 + fn−2 for all n ≥ 3.

The Fibonacci sequence arises in many natural situations, as well as mathematical ones. For example, a pine cone can have 13 clockwise spirals and 8 counterclockwise spirals.

A classical formula for the Fibonacci numbers is

fn = (1/√5) [ ((1 + √5)/2)^n − ((1 − √5)/2)^n ].

All those √5 occurrences, and the answer is an integer!

The ratio of successive terms in the Fibonacci sequence has a limit of the golden ratio:

lim (n → ∞) fn / fn−1 = φ = (1 + √5)/2 = 1.6180339887498 . . . .

For example, 55/34 = 1.6176 . . . .
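Algorithm 4.4.6 in Python, together with a numerical check of the golden-ratio limit (a sketch; note the naive recursion itself takes Θ(walk(n)) time):

```python
def walk(n):
    if n == 1 or n == 2:
        return n
    return walk(n - 1) + walk(n - 2)

# successive ratios approach phi = (1 + sqrt(5)) / 2 = 1.618...
ratio = walk(20) / walk(19)
```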

arriving at vn . and so on.1. w) or 1 . Graph theory involves a lot of terminology. travels along an edge to vertex v2 . as a means of solving the K¨nigsberg bridge problem. For a path to traverse every edge exactly once. Example 1. a path can traverse each edge exactly once. The important information in the graph is the connections. as will be seen. we write e = (v. v1 v2 v4 Deﬁnition 8. v1 v1 e1 v2 e2 e6 e5 e4 e3 v5 v4 v2 v3 e1 e2 v4 e4 e5 v5 v3 e3 e6 A path starts at one vertex v1 .Discrete Mathematics: Week 8 Graph Theory Introduction to Graphs Graph theory was introduced by Leonhard Euler in 1736. There was a revival of graph theory in the 1920’s. A graph is drawn with dots and lines. for example. and return to the original vertex. The following two graphs contain the same information. the graph in Example 1. but will not return to the original vertex. an even number of edges must touch each vertex. with the o ﬁrst text produced in 1936. In the graph shown below.1 v3 A graph (or undirected graph) G consists of a set V of vertices (or nodes) and a set E of edges (or arcs) such that each edge e ∈ E is associated with an unordered pair of vertices. the dots being vertices and the lines are edges. If there is a unique edge e associated with the vertices v and w. not the positions of the vertices and edges.

Example 1. (Continued) In the graph above,

V = { v1, v2, v3, v4, v5 }    E = { e1, e2, e3, e4, e5, e6 }

and, for example, e2 = (v2, v3) = (v3, v2).

An edge e in a graph (undirected or directed) that is associated with the pair of vertices v and w is said to be incident on v and w, and v and w are said to be incident on e and to be adjacent vertices. Edge e4 is incident on vertices v2 and v4, and vertices v2 and v4 are adjacent.

If G is a graph (undirected or directed) with vertices V and edges E, we write G = (V, E). Unless specified otherwise, the sets E and V are assumed to be finite and V is assumed to be nonempty.

Definition. A directed graph (or digraph) G consists of a set V of vertices (or nodes) and a set E of edges (or arcs) such that each edge e ∈ E is associated with an ordered pair of vertices. If there is a unique edge e associated with the ordered pair (v, w) of vertices, we write e = (v, w), which denotes an edge from v to w.

Example 2. The graph shown below is a directed graph. Directed edges are indicated by arrows. Edge e1 is associated with the ordered pair (v2, v1) of vertices.

(Figure: a directed graph with vertices v1, . . . , v7 and edges e1, . . . , e7.)

Distinct edges can be associated with the same pair of vertices, e.g. e3 and e4. These are parallel edges, and can also occur in undirected graphs.

An edge incident on a single vertex is called a loop. A vertex not incident on any edge is called isolated, e.g. v7. A graph with neither loops nor parallel edges is called a simple graph.

Example 3 (Johnsonbaugh 8.1.5). Holes are being bored in a sheet of metal by a drill press, and these can be considered as the vertices of a graph. There is a travel time associated with every edge between vertices, so that we have a weighted graph. If an edge e is labelled k, we say that the weight of edge e is k. This is shown in the diagram below.

(Figure: two drawings of a weighted graph on vertices a, b, c, d, e.)

The length of a path is the sum of the weights of the edges in the path. A path of minimum length that visits each vertex exactly once represents the optimum path for the drill press. For example, for a path starting at b and finishing at e, the six possible orderings of the intermediate vertices a, c, d give path lengths 16, 17, 19, 20, 23 and 23.

Additionally, all starting and finishing vertices need to be checked: (a, b), (a, c), (a, d), (a, e), (b, c), (b, d), (b, e), (c, d), (c, e), (d, e). It is expected that the reverse path will be of the same length.

Example 4. Johnsonbaugh Example 8.1.7: Similarity Graphs. A particular algorithm in C++ is implemented by a number of persons. We wish to group "like" programs into classes based on program properties. The properties are:

1. The number of lines in the program
2. The number of return statements in the program
3. The number of function calls in the program.

Program   Number of program lines   Number of return statements   Number of function calls
1         66                        20                            1
2         41                        10                            2
3         68                        5                             8
4         90                        34                            5
5         75                        12                            14

A similarity graph is constructed with vertices corresponding to the programs. A vertex is denoted (p1, p2, p3), where pi is the value of property i. A dissimilarity function s is defined as follows. For each pair of vertices v = (p1, p2, p3) and w = (q1, q2, q3), set

s(v, w) = |p1 − q1| + |p2 − q2| + |p3 − q3|.

For a fixed number S, insert an edge between vertices v and w if s(v, w) < S. We say that v and w are in the same class if v = w or if there is a path from v to w. In this example, we have

s(v1, v2) = 36   s(v1, v3) = 24   s(v1, v4) = 42   s(v1, v5) = 30   s(v2, v3) = 38
s(v2, v4) = 76   s(v2, v5) = 48   s(v3, v4) = 54   s(v3, v5) = 20   s(v4, v5) = 46

For example, if S = 25 we have the graph:

[Figure: vertices v1–v5 with edges (v1, v3) and (v3, v5)]

Here the classes are { 1, 3, 5 }, { 2 } and { 4 }. What are the classes if S = 40?
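Since the dissimilarity function and the property table are both given, the graph and its classes can be computed directly. A Python sketch:

```python
from itertools import combinations

# Property vectors (lines, return statements, function calls) from the table.
programs = {1: (66, 20, 1), 2: (41, 10, 2), 3: (68, 5, 8),
            4: (90, 34, 5), 5: (75, 12, 14)}

def s(v, w):
    # dissimilarity: sum of absolute differences of the three properties
    return sum(abs(p - q) for p, q in zip(programs[v], programs[w]))

def classes(S):
    # Insert an edge when s(v, w) < S; vertices joined by a path of such
    # edges belong to the same class (merge the two sides of each edge).
    cls = {v: {v} for v in programs}
    for v, w in combinations(programs, 2):
        if s(v, w) < S:
            merged = cls[v] | cls[w]
            for u in merged:
                cls[u] = merged
    return {frozenset(c) for c in cls.values()}
```

For S = 25 only the edges (v1, v3) and (v3, v5) are inserted, reproducing the classes stated above.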

Definition 8.1.9 The complete graph on n vertices, denoted Kn, is the simple graph with n vertices in which there is an edge between each pair of distinct vertices.

Example 5. The complete graph K4 on 4 vertices is:

[Figure: K4]

Definition 8.1.11 A graph G = (V, E) is bipartite if there exist subsets V1 and V2 (either possibly empty) of V such that V1 ∩ V2 = ∅, V1 ∪ V2 = V, and each edge in E is incident on one vertex in V1 and one vertex in V2.

Example 6. The graph in the figure below is bipartite for V1 = { v1, v2, v3 } and V2 = { v4, v5 }.

[Figure: bipartite graph on v1, v2, v3 and v4, v5]

Note that it is not required that there is an edge between every vertex in V1 and every vertex in V2.

Example 7. The graph in the following figure is not bipartite.

[Figure: graph on vertices v1–v9]
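A standard mechanical test for the bipartite property is to try to 2-colour the graph, which is what the definition amounts to. A breadth-first sketch (the particular edge sets below are illustrative assumptions, not the figures from the text):

```python
from collections import deque

def is_bipartite(adj):
    """Try to 2-colour the graph by BFS; succeed iff no edge joins two
    vertices of the same colour."""
    colour = {}
    for start in adj:                 # handle every component
        if start in colour:
            continue
        colour[start] = 0
        q = deque([start])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in colour:
                    colour[w] = 1 - colour[v]
                    q.append(w)
                elif colour[w] == colour[v]:
                    return False      # edge inside V1 or inside V2
    return True

# A graph with every edge between {v1, v2, v3} and {v4, v5} (assumed edges).
bip = {"v1": ["v4"], "v2": ["v4", "v5"], "v3": ["v5"],
       "v4": ["v1", "v2"], "v5": ["v2", "v3"]}
# A triangle (an odd cycle) can never be 2-coloured.
tri = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
```

The colouring found is exactly a partition into V1 and V2.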

It is often easier to prove that a graph is not bipartite by arguing a contradiction. Suppose the graph is bipartite. Then we can partition the set of vertices into two subsets V1 and V2. Consider v4. Since v4 and v5 are adjacent, v4 is in V1 (say) and v5 is in V2. Since v5 and v6 are adjacent, v6 is in V1. But v4 and v6 are adjacent and both are in V1. Contradiction. Hence the graph is not bipartite.

Definition 8.1.15 The complete bipartite graph on m and n vertices, denoted Km,n, is the simple graph whose vertex set is partitioned into sets V1 with m vertices and V2 with n vertices, and in which the edge set consists of all edges of the form (v1, v2) with v1 ∈ V1 and v2 ∈ V2.

Example 8. The complete bipartite graph on two and four vertices, K2,4, is:

[Figure: K2,4]

Paths and Cycles

Definition 8.2.1 Let v0 and vn be vertices in a graph. A path from v0 to vn of length n is an alternating sequence of n + 1 vertices and n edges beginning with vertex v0 and ending with vertex vn,

(v0, e1, v1, e2, v2, . . . , vn−1, en, vn),

in which edge ei is incident on vertices vi−1 and vi for i = 1, . . . , n.

The formalism in Definition 8.2.1 means: start at vertex v0, go along edge e1 to v1, go along edge e2 to v2, and so on.

Example 1. Consider the following graph.

[Figure: graph on vertices v1–v9 with edges e1–e12]

The graph is connected. Paths joining v1 and v2 include (v1, . . . , v2) of length 2 and (v1, . . . , v2) of length 5.

Definition 8.2.4 A graph G is connected if given any vertices v and w in G, there is a path from v to w.

The graph

[Figure: vertices v1–v5 with edges e1–e3]

is not connected. A connected graph consists of one "piece", and a graph that is not connected consists of two or more "pieces". These "pieces" are subgraphs of the original graph called components.
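Connectivity can be tested mechanically by collecting everything reachable from one vertex along paths. A breadth-first sketch (the example graph is hypothetical, chosen to have two "pieces"):

```python
from collections import deque

def reachable(adj, v):
    """The set of vertices joined to v by some path (breadth-first search)."""
    seen, q = {v}, deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return seen

def is_connected(adj):
    # connected iff every vertex is reachable from any single vertex
    verts = list(adj)
    return reachable(adj, verts[0]) == set(verts)

# A hypothetical disconnected graph: components {v1, v2, v3} and {v4, v5}.
g = {"v1": ["v2"], "v2": ["v1", "v3"], "v3": ["v2"],
     "v4": ["v5"], "v5": ["v4"]}
```

The set returned by `reachable` is exactly the vertex set of the component containing v.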

Definition 8.2.8 Let G = (V, E) be a graph. We call (V′, E′) a subgraph of G if
(a) V′ ⊆ V and E′ ⊆ E;
(b) for every edge e′ ∈ E′, if e′ is incident on v′ and w′, then v′, w′ ∈ V′.

Example 1. (Continued) A subgraph is

[Figure: subgraph on vertices v1, v2, v3, v4, v5, v7]

Definition 8.2.11 Let G be a graph and let v be a vertex in G. The subgraph G′ of G consisting of all edges and vertices in G that are contained in some path beginning at v is called the component of G containing v.

A connected graph, such as Example 1, has only one component, itself. A graph such as

[Figure: graph G on vertices v1–v5]

has two components,

[Figure: components G1 and G2]

We describe G2 as G2 = (V2, E2) with V2 = { v2, v3, v5 } and E2 = { e2, e3 }.

Definition 8.2.14 Let v and w be vertices in a graph G. A simple path from v to w is a path from v to w with no repeated vertices. A cycle (or circuit) is a path of nonzero length from v to v with no repeated edges.

A simple cycle is a cycle from v to v in which, except for the beginning and ending vertices that are both equal to v, there are no repeated vertices.

Example 1. (Continued) For each path in the table below, decide: is it a simple path, a cycle, a simple cycle?

Path               Simple Path?   Cycle?   Simple Cycle?
(v1, . . . , v2)
(v4, . . . , v4)
(v4)

The Königsberg Bridge Problem

Two islands in the Pregel River in Königsberg (now Kaliningrad in Russia) were connected to each other and the river banks by seven bridges, as shown in the diagram below.

[Figure: the Pregel River, with the four land areas labelled A, B, C and D]

The problem is to start at any location, walk over each bridge exactly once, and finish at the starting location. Leonhard Euler solved the problem in 1736. The problem can be represented as the following graph, where the vertices represent the locations and the edges represent the bridges.

[Figure: graph on vertices A, B, C, D with seven edges]

Euler showed that there is no solution, as all vertices have an odd number of incident edges. A cycle in graph G that includes all of the edges and all of the vertices of G is called an Euler cycle, in honour of Euler. We introduce the idea of the degree of a vertex v, δ(v): this is the number of edges incident on v.

Theorem 8.2.17 If a graph G has an Euler cycle, then G is connected and every vertex has even degree.

Proof: Suppose that G has an Euler cycle. During the cycle, the path leaves each vertex once for every time that it is entered, and so all vertices must be of even degree. If v and w are vertices of G, then the portion of the Euler cycle between v and w is a path from v to w, and so G is connected.

Theorem 8.2.18 If G is a connected graph and every vertex has even degree, then G has an Euler cycle.

Proof: Johnsonbaugh gives a mathematical induction proof. See the text.

Example 2. For the Königsberg bridge problem,

δ(A) = 3,  δ(B) = 5,  δ(C) = 3,  δ(D) = 3,

so there is not an Euler cycle.

If the graph G is as shown below, then G is connected and every vertex has even degree:

δ(v1) = δ(v2) = δ(v3) = δ(v5) = 4,  δ(v4) = 6,  δ(v6) = δ(v7) = 2.

[Figure: graph on vertices v1–v7]
Hence an Euler cycle exists. By inspection, one is

(v6, v4, v7, v5, v1, v3, v4, v1, v2, v5, v4, v2, v3, v6).
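The degree conditions of Theorems 8.2.17 and 8.2.18 can be checked mechanically. The sketch below checks the Königsberg bridges (with B taken as the island carrying five bridges, matching the stated degrees) and the Euler cycle found above, read off as its 13 edges; note that `has_euler_cycle` assumes the graph is already known to be connected:

```python
from collections import Counter

def degrees(edge_list):
    """δ(v): the number of edge endpoints equal to v."""
    return Counter(v for e in edge_list for v in e)

def has_euler_cycle(edge_list):
    # For a connected graph this is necessary and sufficient
    # (Theorems 8.2.17 and 8.2.18): every vertex has even degree.
    return all(d % 2 == 0 for d in degrees(edge_list).values())

# The seven Königsberg bridges.
bridges = [("A", "B"), ("A", "B"), ("B", "C"), ("B", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

# The Euler cycle found by inspection, as its 13 consecutive edges.
cycle = ["v6", "v4", "v7", "v5", "v1", "v3", "v4",
         "v1", "v2", "v5", "v4", "v2", "v3", "v6"]
cycle_edges = list(zip(cycle, cycle[1:]))
```

Counting endpoints along the cycle recovers exactly the degrees listed in the example.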

Theorem 8.2.21 If G is a graph with m edges and vertices { v1, v2, . . . , vn }, then

δ(v1) + δ(v2) + · · · + δ(vn) = 2m.
In particular, the sum of the degrees of all of the vertices of a graph is even.

Proof: Each edge is counted twice, once from vi to vj, and once from vj to vi.

Corollary 8.2.22 In any graph, there are an even number of vertices of odd degree.

Theorem 8.2.23 A graph has a path with no repeated edges from v to w (v ≠ w) containing all the edges and vertices if and only if it is connected and v and w are the only vertices of odd degree.

For example, the graph below.

[Figure: graph on vertices v1–v5]
Proof: Add an edge from v to w. Now there is an Euler cycle, as all vertices are of even degree; removing the added edge from this cycle leaves the required path from v to w.

Theorem 8.2.24 If a graph G contains a cycle from v to v, G contains a simple cycle from v to v.

Proof: If C is a cycle from v0 to vn (v0 = vn), and C is not a simple cycle, then there must be repeated vertices vi = vj in the path with i < j < n. Remove the portion of the path between vi and vj, and repeat the procedure if necessary. The cycle is eventually reduced to a simple cycle.
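Theorem 8.2.21 and Corollary 8.2.22 can be verified on any edge list, since summing the degrees counts each edge twice. A sketch on a hypothetical graph:

```python
from collections import Counter

# A hypothetical graph given as an edge list (m = 5 edges).
edges = [("v1", "v2"), ("v1", "v3"), ("v2", "v3"), ("v3", "v4"), ("v4", "v5")]

# Each edge contributes two endpoints, so summing degrees counts it twice.
deg = Counter(v for e in edges for v in e)
degree_sum = sum(deg.values())                       # equals 2m
odd_vertices = [v for v, d in deg.items() if d % 2 == 1]
```

Here δ(v3) = 3 and δ(v5) = 1, so the odd-degree vertices come in a pair, as the corollary requires.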


Discrete Mathematics: Week 9

Trees

Introduction

Example 1. Johnsonbaugh example on the draw for the semifinals and finals at Wimbledon (many years ago).

[Bracket: Semifinals — Seles v Navratilova, won by Seles; Graf v Sabatini, won by Graf. Final — Seles v Graf, won by Graf, the Wimbledon champion.]

The draw can be represented as a graph called a tree. It is rotated clockwise through 90° so as to appear as is shown on the right. If it is rotated through 90° the other way then it would appear as a natural tree.

Definition 9.1.1 A (free) tree T is a simple graph satisfying the following: if v and w are vertices in T, then there is a unique simple path from v to w.

A rooted tree is a tree in which a particular vertex is designated the root. The level of a vertex v is the length of the simple path from the root to v. The height of a rooted tree is the maximum level number that occurs.

[Figure: a rooted tree on vertices v1–v7, with v1 at level 0, v2 and v3 at level 1, and v4–v7 at level 2; height = 2]

Example 2. Johnsonbaugh 9.1.4 The tree T shown below will become the rooted tree T′ if we designate vertex e as the root.

[Figure: trees T and T′ on vertices a–j]

Huffman Codes

Character representation is often by fixed-length bit strings, e.g. ASCII, where eight-bit strings are used for the 256-symbol extended character set. Huffman codes represent characters by variable-length bit strings; short bit strings are used to represent the most frequently used characters. A Huffman code is most easily defined by a rooted tree. Given a tree that defines a Huffman code, any bit string can be uniquely decoded even though the characters are represented by variable-length bit strings.

Example 3. Decode the string 010101110110 from the following tree.

[Figure: rooted tree with edges labelled 0 and 1 and terminal vertices A, O, R, S, T]

To decode a bit string, begin at the root and move down the tree until a character is encountered. The bit, 0 or 1, determines which child to move to. When a character is encountered, begin again at the root. The string decodes as

010 | 1 | 0111 | 0110  =  R A T S.
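Decoding can be sketched directly from the code table read off the worked example (A = 1, R = 010, S = 0110, T = 0111; O's codeword is not needed for this string):

```python
# Codewords read off the decoding 010 | 1 | 0111 | 0110 = R A T S.
code = {"A": "1", "R": "010", "S": "0110", "T": "0111"}

def decode(bits, code):
    """Scan the bit string, matching one codeword at a time.  The prefix
    property of a Huffman code (characters sit only at terminal vertices)
    guarantees each match is unique."""
    rev = {v: k for k, v in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in rev:        # reached a terminal vertex: emit, restart at root
            out.append(rev[cur])
            cur = ""
    return "".join(out)
```

Accumulating bits until a codeword matches is exactly the walk from the root to a terminal vertex.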

Algorithm 9.1.9 Constructing an Optimal Huffman Code

This algorithm constructs an optimal Huffman code from a table giving the frequency of occurrence of the characters to be represented. The output is a rooted tree with the vertices at the lowest levels labelled with the frequencies, and with the edges labelled with bits. The coding tree is obtained by replacing each frequency by a character having that frequency.

Input:  A sequence of n frequencies, n ≥ 2
Output: A rooted tree that defines an optimal Huffman code

huffman(f, n) {
    if (n == 2) {
        let f1 and f2 denote the frequencies
        let T be as in the figure (on the left)
        return T
    }
    let fi and fj denote the smallest frequencies
    replace fi and fj in the list by fi + fj
    T′ = huffman(f, n − 1)
    replace a vertex in T′ labelled fi + fj by the tree shown in the figure (on the right) to obtain the tree T
    return T
}

[Figures: on the left, a root with edges labelled 1 and 0 to f1 and f2; on the right, a vertex with edges labelled 1 and 0 to fi and fj]

Example 4. We have the following table of frequencies.

Letter     A   B   C   D   E
Frequency  10  12  17  8   22

The algorithm begins by repeatedly replacing the two smallest frequencies with their sum until a two-element sequence is obtained:

A, B, C, D, E             10, 12, 17, 8, 22
A + D, B, C, E       −→   18, 12, 17, 22
A + D, B + C, E      −→   18, 29, 22
A + D + E, B + C     −→   40, 29

The algorithm then constructs trees working backward, beginning with the two-element sequence 40, 29.

[Figures: the sequence of trees, ending with the tree whose terminal vertices are labelled 10, 12, 17, 8, 22]

Now replace each frequency by a character having that frequency.

[Figure: the coding tree, giving A = 010, B = 11, C = 10, D = 011, E = 00]

Then BED is coded as 11 | 00 | 011.
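The merge-the-two-smallest step is exactly what a priority queue provides. The sketch below computes only the codeword lengths, which are canonical (the actual 0/1 labels depend on how bits are assigned at each merge, so different runs can give different but equally optimal codewords):

```python
import heapq

def huffman_lengths(freqs):
    """Merge the two smallest frequencies until one tree remains; return
    the codeword length (depth in the tree) of each character."""
    heap = [(f, [ch]) for ch, f in freqs.items()]
    heapq.heapify(heap)
    depth = {ch: 0 for ch in freqs}
    while len(heap) > 1:
        f1, chars1 = heapq.heappop(heap)
        f2, chars2 = heapq.heappop(heap)
        for ch in chars1 + chars2:      # every merged character moves one level down
            depth[ch] += 1
        heapq.heappush(heap, (f1 + f2, chars1 + chars2))
    return depth

freqs = {"A": 10, "B": 12, "C": 17, "D": 8, "E": 22}
lengths = huffman_lengths(freqs)
```

The lengths match the codes above (A and D get 3 bits, B, C and E get 2), for a total encoded length of 156 bits over all character occurrences.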

Terminology and Characterizations of Trees

Definition 9.2.1 Let T be a tree with root v0. Suppose that x, y and z are vertices in T and that (v0, v1, . . . , vn) is a simple path in T. Then
(a) vn−1 is the parent of vn.
(b) v0, . . . , vn−1 are ancestors of vn.
(c) vn is a child of vn−1.
(d) If x is an ancestor of y, y is a descendant of x.
(e) If x and y are children of z, x and y are siblings.
(f) If x has no children, x is a terminal vertex (or a leaf).
(g) If x is not a terminal vertex, x is an internal (or branch) vertex.
(h) The subtree of T rooted at x is the graph with vertex set V and edge set E, where V is x together with the descendants of x and E = { e | e is an edge on a simple path from x to some vertex in V }.

Example 1. Consider the following tree.

[Figure: a rooted tree on vertices a–j]

The parent of b is:
The ancestors of a are:
The children of b are:
The siblings of d are:
The descendants of e are:
The terminal vertices are:
The internal vertices are:

The subtree rooted at e is:

[Figure: subtree on vertices e, b, d, f, a, c]

Example 2. Johnsonbaugh Example 9.2.2 Greek gods.

[Figure: a family tree rooted at Uranus, containing Aphrodite, Kronos, Atlas, Prometheus, Eros, Zeus, Poseidon, Hades, Ares, Apollo, Athena, Hermes and Heracles]

The parent of Eros is:
The ancestors of Hermes are:
The children of Zeus are:
The descendants of Kronos are:
The siblings of Atlas are:
The terminal vertices are:
The internal vertices are:
The subtree rooted at Kronos is:

Theorem 9.2.3 Let T be a graph with n vertices. The following are equivalent.
(a) T is a tree.
(b) T is connected and acyclic.
(c) T is connected and has n − 1 edges.
(d) T is acyclic and has n − 1 edges.

Partial proof: The proof is as follows. We will show: if (a), then (b); if (b), then (c); if (c), then (d); if (d), then (a); and so all must be equivalent.

We show if (a), then (b). Let T be a tree. Then T is connected, as there is a simple path from any vertex to any other vertex. Suppose T contains a cycle. Then by a previous theorem, Theorem 8.2.24, T contains a simple cycle, i.e. C = (v0, v1, . . . , vn) with v0 = vn. C cannot be a loop since T is a simple graph (no loops or parallel edges). Hence C contains at least two distinct vertices vi and vj with i < j. The paths (vi, vi+1, . . . , vj) and (vi, vi−1, . . . , v0 = vn, vn−1, . . . , vj) are distinct simple paths from vi to vj, contradicting the definition of a tree. Therefore, a tree cannot contain a cycle.

Spanning Trees

Johnsonbaugh 9.3

Definition 9.3.1 A tree T is a spanning tree of a graph G if T is a subgraph of G that contains all of the vertices of G.

Example 1. Johnsonbaugh Example 9.3.2 The spanning tree of the graph below is shown in black. Other spanning trees are possible.

[Figure: graph on vertices a–h]

Theorem 9.3.4 A graph G has a spanning tree if and only if G is connected.

Proof: If G has a spanning tree, G is connected. If G is connected, progressively remove edges from cycles until G is acyclic. (This procedure is inefficient in practice.)

Algorithm 9.3.6 Breadth-First Search for a Spanning Tree

This algorithm is formally given in Johnsonbaugh, but a more informal description of the procedure is as follows. Select an ordering, e.g. abcdefgh, of the vertices of G. Select the first vertex as the root, e.g. a. The tree T initially consists of the single vertex a and no edges. Add to T all edges (a, x), x = b to h, and the vertices on which they are incident, that do not produce a cycle when added to T. This gives all level 1 vertices. Repeat with the vertices on level 1, then level 2, and so on, until no further edges can be added.

Example 1. (Continued) Select the ordering abcdefgh, and select a as the root. Add the edges (a, x) that do not produce a cycle; this gives the level 1 vertices. Then, for each level 1 vertex in order, add the edges to new vertices on level 2 (recording "none" when no edge can be added), and similarly for levels 3 and 4, until every vertex of G is in the tree. The spanning tree is shown in the previous diagram.

Algorithm 9.3.7 Depth-First Search for a Spanning Tree

This algorithm is formally given in Johnsonbaugh, but a more informal description of the procedure is as follows. Select an ordering, e.g. abcdefgh, of the vertices of G. Select the first vertex as the root, e.g. a, and make it the current vertex. At each step, add to the tree an edge incident on the current vertex that doesn't create a cycle, using the vertex order to prioritize, and make the new vertex the current vertex. If an edge can't be added, backtrack by making the parent the current vertex. Continue until the current vertex is again the root.

Example 1. (Continued) Select the ordering abcdefgh, and select a as the root. Following the vertex ordering, edges are added as deep as possible, backtracking whenever no new edge can be added, until all vertices are in the tree. The depth-first spanning tree is shown below.

[Figure: depth-first spanning tree on vertices a–h]

Minimal Spanning Trees

Johnsonbaugh 9.4

Definition 9.4.1 Let G be a weighted graph. A minimal spanning tree of G is a spanning tree of G with minimum weight.

Example 1. There are six cities 1–6, and the costs of building roads between certain pairs of them are shown on the following graph.

[Figure: weighted graph on vertices 1–6]

Breadth-First Search Select the order 123456, and select 1 as the root. Add edges (1, 2), (1, 3), (1, 5); then, for the vertices on level 1, add (2, 4) and (3, 6). The weight is 17, and the spanning tree is as shown below.

[Figure: breadth-first spanning tree of weight 17]

Prim's Algorithm

Johnsonbaugh Algorithm 9.4.3 This algorithm is formally given in Johnsonbaugh, but a more informal description of the procedure is as follows. The algorithm begins with a single vertex. At each iteration, add to the current tree a minimum-weight edge that has one vertex in the tree and one vertex not in the tree (and so does not complete a cycle).

Example 1. (Continued) Begin with vertex 1. Edges with one vertex in the tree and one vertex not in the tree are:

Edge    (1, 2)  (1, 3)  (1, 5)
Weight  4       2       3

Select edge (1, 3) with minimum weight. Edges with one vertex in the tree and one vertex not in the tree are:

Edge    (1, 2)  (1, 5)  (3, 4)  (3, 5)  (3, 6)
Weight  4       3       1       6       3

Select edge (3, 4) with minimum weight. Edges with one vertex in the tree and one vertex not in the tree are:

Edge    (1, 2)  (1, 5)  (3, 5)  (3, 6)  (4, 2)  (4, 6)
Weight  4       3       6       3       5       6

A minimal spanning tree will be constructed when either (1, 5) or (3, 6) is selected. Select edge (1, 5) with minimum weight.

Edges with one vertex in the tree and one vertex not in the tree are:

Edge    (1, 2)  (3, 6)  (4, 2)  (4, 6)  (5, 6)
Weight  4       3       5       6       2

Select edge (5, 6) with minimum weight. Edges with one vertex in the tree and one vertex not in the tree are:

Edge    (1, 2)  (4, 2)
Weight  4       5

Select edge (1, 2) with minimum weight. The minimal spanning tree, shown below, has weight 12.

[Figure: minimal spanning tree with edges (1, 3), (3, 4), (1, 5), (5, 6), (1, 2)]
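Prim's algorithm can be sketched with a priority queue holding the frontier edges (one endpoint in the tree, one outside). The edge weights below are read off the worked example above:

```python
import heapq

# Road costs from the six-city example: (u, v) -> weight.
w = {(1, 2): 4, (1, 3): 2, (1, 5): 3, (2, 4): 5, (3, 4): 1,
     (3, 5): 6, (3, 6): 3, (4, 6): 6, (5, 6): 2}

def prim(weights, start):
    """Grow a tree from `start`, always taking the cheapest frontier edge."""
    adj = {}
    for (u, v), c in weights.items():
        adj.setdefault(u, []).append((c, v))
        adj.setdefault(v, []).append((c, u))
    in_tree, tree, total = {start}, [], 0
    frontier = [(c, start, v) for c, v in adj[start]]
    heapq.heapify(frontier)
    while frontier:
        c, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue                     # both endpoints in the tree: would complete a cycle
        in_tree.add(v)
        tree.append((u, v))
        total += c
        for c2, x in adj[v]:
            if x not in in_tree:
                heapq.heappush(frontier, (c2, v, x))
    return tree, total
```

Run from vertex 1, this selects the same five edges as the hand computation, with total weight 12.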

Discrete Mathematics: Week 10

Binary Trees

Definition 9.5.1 A binary tree is a rooted tree in which each vertex has either no children, one child, or two children. If a vertex has one child, that child is designated as either a left child or a right child (but not both). If a vertex has two children, one child is designated a left child and the other child is designated a right child.

Example 1. Johnsonbaugh Example 9.5.2 In the binary tree below, b is the left child of vertex a and c is the right child of vertex a. Vertex d is the right child of vertex b; vertex b has no left child. Vertex e is the left child of vertex c; vertex c has no right child.

[Figure: binary tree on vertices a–g]

A full binary tree is a binary tree in which each vertex has either two children or zero children.

Theorem 9.5.4 If T is a full binary tree with i internal vertices, then T has i + 1 terminal vertices and 2i + 1 total vertices.

Proof: The i internal vertices each have two children, so there are 2i children. Only the root is a nonchild. Therefore the total number of vertices is 2i + 1, and the number of terminal vertices is (2i + 1) − i = i + 1.

Example 2. In a single elimination tournament, each contestant is eliminated after one loss. Winners progress to the right, and there is eventually a single winner at the root. Some contestants receive byes if there are not initially 2^n contestants. The tree structure is as shown below.

[Figure: single elimination tournament bracket for Contestants 1–7, with the Winner at the root]

Theorem 9.5.6 If a binary tree of height h has t terminal vertices, then lg t ≤ h. (Or equivalently, t ≤ 2^h.)

Proof: We will prove t ≤ 2^h by mathematical induction.

Basis Step: If h = 0, the binary tree has a single vertex, and t = 1. Then t = 1 = 2^0 = 2^h.

Inductive Step: Assume the result is true for every binary tree with height less than h. Let T be a binary tree of height h > 0 with t terminal vertices.

Suppose that the root of T has one child. Eliminate the root and the edge incident on the root. The remaining tree has height h − 1 and t terminal vertices. By the assumption, t ≤ 2^(h−1) < 2^h, and case h is true.

Suppose the root of T has children v1 and v2, and the subtrees rooted at v1 and v2 have heights h1 and h2 and numbers of terminal vertices t1 and t2 respectively. Then h1 ≤ h − 1 and h2 ≤ h − 1, and

t = t1 + t2 ≤ 2^h1 + 2^h2 ≤ 2^(h−1) + 2^(h−1) = 2^h.

Hence the inductive step is verified. Since the Basis Step and the Inductive Step have been verified, the Principle of Mathematical Induction tells us that the theorem is true.

Definition 9.5.8 A binary search tree is a binary tree T in which data are associated with the vertices. The data are arranged so that, for each vertex v in T, each data item in the left subtree of v is less than the data item in v, and each data item in the right subtree of v is greater than the data item in v.

Algorithm 9.5.10 in Johnsonbaugh gives a formal approach for constructing a binary search tree. A less formal approach is as follows.

• Start with an empty tree (no vertices or edges).
• Inspect the items in order. Place the first item in the root.
• Compare each following item in turn with the current vertex, beginning with the root.
• If item < vertex, move to the left child.
• If item > vertex, move to the right child.
• Continue to compare with each new vertex. If there is no child in that position, place the item there.
• Move to the next item and start over.

Example 3. Build a binary search tree for the items in order o, n, d, j, l, m, t, p, u, using lexicographic order.

[Figure: the resulting binary search tree with root o]
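The informal insertion procedure above can be sketched directly; an inorder traversal of the finished tree then lists the items in sorted order, confirming the search-tree property:

```python
# Insert each item by walking left/right from the root and placing it at
# the first missing child, as in the informal procedure.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)          # no child in that position: place it here
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)

root = None
for item in ["o", "n", "d", "j", "l", "m", "t", "p", "u"]:
    root = insert(root, item)
```

The first item, o, becomes the root; n goes left, t goes right, and so on.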

Example 4. Build a binary search tree for the words "by nineteen ninety no Australian child will be living in poverty".

[Figure: the resulting binary search tree with root "by"]

Searching a Binary Search Tree

The algorithm is as follows.

• Given a data item D, begin at the root as the current vertex.
• If D = vertex, the data item is found.
• If D < vertex, go to the left child.
• If D > vertex, go to the right child.
• If the child to move to is missing, D is not in the tree.

Worst-case Search Time

The worst-case search is to search the longest path from the root when the item is not present. So if the item is not in the tree, we need to check down to the appropriate terminal vertex; the terminal vertices correspond to missing children. Suppose that there are n internal vertices, or data items. For a full binary tree with n internal vertices, the number of terminal vertices is t = n + 1. We know that lg t ≤ h, where h is the height of the tree. Hence the worst-case search time is ⌈ lg t ⌉ = ⌈ lg (n + 1) ⌉.

For example, if a tree contains a million items, then ⌈ lg 1,000,000 ⌉ = 20, so a search will find an item, or determine that it is not present, in at most 20 steps. There are algorithms to minimize the height of a binary search tree.

Tree Traversals

Johnsonbaugh 9.6 Breadth-first search and depth-first search are ways to traverse a tree in a systematic way such that each vertex is visited exactly once. This section considers three other tree-traversal methods, which are defined recursively. Johnsonbaugh gives formal algorithms for each of these; we will consider simpler formulations.

Preorder Traversal (recursive Algorithm 9.6.1). Preorder the root: process the root, preorder the left child, preorder the right child.

Inorder Traversal (recursive Algorithm 9.6.3). Inorder the root: inorder the left child, process the root, inorder the right child.

Postorder Traversal (recursive Algorithm 9.6.5). Postorder the root: postorder the left child, postorder the right child, process the root.

Example 5. Consider the following tree.

[Figure: binary tree with root A and vertices A–J]

Preorder: A B C D E F G H I J

Inorder: C B D E A F I H J G

Postorder: C E D B I J H G F A

Arithmetic Expressions

The operators +, −, × and ÷ (or +, −, *, /) operate on pairs of operands or expressions. An example is (A + B) × C − D ÷ E. An arithmetic expression can be represented as a binary tree: the terminal vertices correspond to the operands, and the internal vertices correspond to the operators. An operator operates on its left and right subtrees.

Example 6. The expression (A + B) × C − D ÷ E can be represented as the tree in the diagram below.

[Figure: expression tree with − at the root, × and ÷ as its children, + and C below ×, A and B below +, and D and E below ÷]

For inorder traversal, where the operator appears between its operands, parentheses are put around each operation. This is the infix form of an expression. The parentheses dictate the order of the operations, and the hierarchy of the operators need not be specified. Some pairs of parentheses may not be necessary.

Inorder: (((A + B) × C) − (D ÷ E))

The preorder traversal is as follows. This is known as the prefix form of the expression, or Polish notation. No parentheses are required for unambiguous evaluation.

Preorder: − × + A B C ÷ D E

The postorder traversal is as follows. This is known as the postfix form of the expression, or reverse Polish notation. Again, no parentheses are required for unambiguous evaluation.

Postorder: A B + C × D E ÷ −
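The three traversals of the expression tree can be sketched with the tree stored as nested tuples (operator, left subtree, right subtree), with operands as bare strings:

```python
# The expression tree of Example 6: (A + B) × C − D ÷ E.
tree = ("-", ("×", ("+", "A", "B"), "C"), ("÷", "D", "E"))

def preorder(t):
    if isinstance(t, str):          # a terminal vertex: an operand
        return [t]
    op, left, right = t
    return [op] + preorder(left) + preorder(right)

def inorder(t):
    if isinstance(t, str):
        return [t]
    op, left, right = t
    # parentheses are put around each operation in the infix form
    return ["("] + inorder(left) + [op] + inorder(right) + [")"]

def postorder(t):
    if isinstance(t, str):
        return [t]
    op, left, right = t
    return postorder(left) + postorder(right) + [op]
```

The outputs are the prefix, fully parenthesized infix, and postfix forms of the expression.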
