
Field of events

Random variables
The cumulative distribution function
The probability density function

Decision and estimation in information processing: course no. 2

March 1, 2021



Field of events

Events:
subsets A ⊂ Ω;
the sole entities to which a probability P(A) is assigned.
Sometimes Ω may contain very many elements ωi (it may even be a subinterval of R).
The number of events A ⊂ Ω may be huge.
It is not practical to assign a probability to all events ⇒ we work with a reduced set of events to which we allocate probabilities.


Field of events

The set of events to which we allocate probabilities must be closed under the usual set operations (union, complementation, intersection).
Why is that? It is not logical to know P(A) and P(B) and yet not know P(A ∪ B) or P(A ∩ B).
A set of events that is closed under the set operations is called a field of events.


Field of events: definition

Let P(Ω) be the power set of Ω:

A ⊂ Ω ⇒ A ∈ P(Ω)

K ⊂ P(Ω) is called a field of events if the following hold true:

1. ∀A ∈ K ⇒ ∁(A) ∈ K;
2. ∀A, B ∈ K ⇒ A ∪ B ∈ K,

where ∁(A) = Ω \ A denotes the complement of A.
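For a finite Ω, the two defining conditions can be checked mechanically. A minimal Python sketch (the encoding of events as frozensets is my own choice, not part of the course material):

```python
def is_field(omega, K):
    """Check the two field-of-events axioms on a finite sample space."""
    omega = frozenset(omega)
    K = {frozenset(a) for a in K}
    closed_complement = all(omega - a in K for a in K)    # axiom 1: complements stay in K
    closed_union = all(a | b in K for a in K for b in K)  # axiom 2: unions stay in K
    return closed_complement and closed_union

print(is_field({1, 2, 3}, [set(), {1}, {2, 3}, {1, 2, 3}]))  # True
print(is_field({1, 2, 3}, [set(), {1}, {1, 2, 3}]))          # False: {2, 3} is missing
```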


Field of events: properties

Properties of a field of events:

3. ∀A, B ∈ K ⇒ A ∩ B ∈ K.
Proof:
A, B ∈ K ⇒ ∁(A), ∁(B) ∈ K ⇒ ∁(A) ∪ ∁(B) ∈ K ⇒ ∁(∁(A) ∪ ∁(B)) = A ∩ B ∈ K (De Morgan's law).
4. Ω ∈ K.
Proof:
∀A ∈ K ⇒ ∁(A) ∈ K ⇒ A ∪ ∁(A) = Ω ∈ K.
5. ∅ ∈ K.
Proof:
∀A ∈ K ⇒ ∁(A) ∈ K ⇒ A ∩ ∁(A) = ∅ ∈ K (using property 3).
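The proof of property 3 is De Morgan's law in disguise; it can be spot-checked numerically (the sets below are arbitrary choices for illustration):

```python
omega = frozenset(range(10))
A, B = frozenset({1, 2, 3}), frozenset({3, 4})

def c(s):
    """Complement within omega."""
    return omega - s

# De Morgan: the complement of (complement(A) union complement(B)) equals A intersect B
assert c(c(A) | c(B)) == A & B
print(sorted(A & B))  # [3]
```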


Field of events: example

Example: the rolling of a die, Ω = {f1 , f2 , . . . , f6 }.

We want the minimal field of events K1 that contains the events A = {f2 , f4 , f6 } and B = {f1 , f3 }:

K1 = { A = {f2 , f4 , f6 }, B = {f1 , f3 }, Ω = {f1 , f2 , . . . , f6 }, ∅,
∁(A) = {f1 , f3 , f5 }, ∁(B) = {f2 , f4 , f5 , f6 },
A ∪ B = {f1 , f2 , f3 , f4 , f6 }, ∁(A ∪ B) = {f5 } }

Conclusion: an experiment is completely determined probabilistically by specifying the triple (Ω, K, P).
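The minimal field K1 can also be generated programmatically, by closing the seed events under complement and union until nothing new appears; a sketch (encoding face fᵢ as the integer i is my own choice):

```python
from itertools import combinations

def generate_field(omega, seeds):
    """Close a collection of seed events under complement and union."""
    omega = frozenset(omega)
    field = {frozenset(s) for s in seeds}
    changed = True
    while changed:
        changed = False
        for a in list(field):
            if omega - a not in field:        # add missing complements
                field.add(omega - a)
                changed = True
        for a, b in combinations(list(field), 2):
            if a | b not in field:            # add missing unions
                field.add(a | b)
                changed = True
    return field

omega = {1, 2, 3, 4, 5, 6}                    # faces f1..f6 encoded as 1..6
A, B = {2, 4, 6}, {1, 3}
K1 = generate_field(omega, [A, B])
print(len(K1))                                # 8 events, as in the die example
```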


Random variables: definition

Notation for random variables (RVs): Greek letters ξ(= x), η(= y), ζ(= z), . . .
An RV is a function that associates a real number to each ωi ∈ Ω:

ξ : Ω → R.

If the experimental outcome is ωk , then ξ(ωk ), denoted ξ (k) , is called an instance of ξ.


Random variables: examples

For Ω = {f1 , . . . , f6 } (the die) we can define the following RVs:

ξ(fi ) = i, ∀i ∈ {1, . . . , 6}

η(fi ) = 0, if i = 2k (even face);
η(fi ) = 1, if i = 2k + 1 (odd face).

Ω = {the set of all students si from the IIIF+G series}:

ξ(si ) = CM(si )

Ω = {the set of resistors ri with the same nominal parameters}:

ξ(ri ) = R(ri ) [Ω]

η(ri ) = C (ri ) [µF]
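The die RVs ξ and η can be written down directly; a sketch (face fᵢ encoded as the integer i, an assumption of this illustration):

```python
import random

def xi(i):
    return i          # xi(f_i) = i

def eta(i):
    return i % 2      # 0 on even faces (i = 2k), 1 on odd faces (i = 2k + 1)

omega_k = random.choice([1, 2, 3, 4, 5, 6])   # one experimental outcome
print(xi(omega_k), eta(omega_k))              # one instance of each RV
```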



Events generated by RVs

In practice, questions of the type P(a < ξ < b) = ? arise.

Ex.: for ξ(si ) = CM(si ), P(ξ ≥ 8) = ?
Ex.: for ξ(ri ) = R(ri ), P(Rn − tol ≤ ξ ≤ Rn + tol) = ?

Problem: we stated that probability is associated solely with events A ⊂ Ω!
Is {a < ξ < b} an event, so that we can talk about its probability?
Answer: yes, it is an event!

{a < ξ < b} = { ωi ∈ Ω | a < ξ(ωi ) < b }
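Since {a < ξ < b} is just the preimage set above, its probability can be computed by counting favorable outcomes when Ω is finite and equiprobable; a sketch for the fair die (fairness is an assumption, faces fᵢ encoded as integers):

```python
from fractions import Fraction

omega = [1, 2, 3, 4, 5, 6]   # fair die: each face has probability 1/6

def prob(predicate, xi=lambda i: i):
    """P of the event {omega_i in Omega | predicate(xi(omega_i))}."""
    favorable = [w for w in omega if predicate(xi(w))]
    return Fraction(len(favorable), len(omega))

print(prob(lambda x: 2 <= x < 5))   # 1/2 (faces f2, f3, f4)
```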


Events generated by RVs: example

For ξ(si ) = CM(si ):

{ξ ≥ 8} = {the set of students in IIIF+G series with CM ≥ 8}

For ξ(fi ) = i:
{ξ < 2.5} = {f1 , f2 }

{2 ≤ ξ < 5} = {f2 , f3 , f4 }

{ξ < 1} = ∅
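The example above can be sketched in code. This is a hypothetical model (not from the course): it assumes six outcomes f1..f6 with ξ(fi) = i, and builds each event as the preimage {ω ∈ Ω : condition on ξ(ω)}.

```python
# Hypothetical model of the slide's example: outcomes f1..f6,
# and the random variable xi(f_i) = i.
omega = [f"f{i}" for i in range(1, 7)]   # sample space {f1, ..., f6}

def xi(f):
    """The random variable xi(f_i) = i."""
    return int(f[1:])

def event(pred):
    """The subset of Omega on which the predicate on xi holds (preimage)."""
    return {f for f in omega if pred(xi(f))}

e1 = event(lambda v: v < 2.5)        # {f1, f2}
e2 = event(lambda v: 2 <= v < 5)     # {f2, f3, f4}
e3 = event(lambda v: v < 1)          # the impossible event, empty
```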

Events generated by RVs

We can thus talk of P(a < ξ < b), ∀a, b ∈ R!


By extension, we can talk of P((a < ξ < b) ∪ (c ≤ ξ < d)).
Remark: there are subsets A ⊂ R way “nastier” than intervals
or unions of intervals!
E.g., A = R \ Q.
Such subsets are not of interest from a practical point of view.
We content ourselves with talking of P(ξ ∈ A), with A ∈ I,
where:
I = {∪i (ai , bi ] | ai , bi ∈ R, i ∈ N}

Obviously, I is a field of events!

The cumulative distribution function

The probabilistic description of a RV ξ via
P(a < ξ < b), ∀a, b ∈ R, seems tedious.
Yet, all of the probabilities P(a < ξ < b) may be computed
starting from a single function: the cumulative distribution
function (CDF)!
Definition: Fξ : R −→ [0, 1],

Fξ(x) = P(ξ ≤ x)
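A minimal sketch of what the definition means in practice (an illustration, not part of the course): the CDF value Fξ(x) = P(ξ ≤ x) estimated as the relative frequency of the event {ξ ≤ x} over many independent draws. The Uniform(0, 1) distribution is assumed here because its exact CDF is simply F(x) = x on [0, 1].

```python
import random

# Monte Carlo estimate of the CDF of xi ~ Uniform(0, 1),
# whose exact CDF is F(x) = x for x in [0, 1].
random.seed(0)
samples = [random.random() for _ in range(200_000)]

def cdf_estimate(x):
    """Relative frequency of the event {xi <= x} among the draws."""
    return sum(s <= x for s in samples) / len(samples)

approx = cdf_estimate(0.3)   # should be close to F(0.3) = 0.3
```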

The cumulative distribution function: properties


1 Fξ(−∞) = P(ξ ≤ −∞) = 0.
2 Fξ(∞) = P(ξ ≤ ∞) = 1.
3 Fξ is an increasing (non-decreasing) function:
x1 < x2 ⇒ Fξ(x1) ≤ Fξ(x2), ∀ x1, x2 ∈ R.
Proof:
Fξ(x2) = P(ξ ≤ x2) = P((ξ ≤ x1) ∪ (x1 < ξ ≤ x2))
As {ξ ≤ x1} and {x1 < ξ ≤ x2} are mutually exclusive, by
applying the third axiom:
Fξ(x2) = P(ξ ≤ x1) + P(x1 < ξ ≤ x2) = Fξ(x1) + P(x1 < ξ ≤ x2),
where P(x1 < ξ ≤ x2) ≥ 0, hence Fξ(x2) ≥ Fξ(x1).

The cumulative distribution function: properties

4 P(x1 < ξ ≤ x2) = Fξ(x2) − Fξ(x1).

Proof:
It suffices to rewrite the proof of property no. 3.
Property no. 4 shows that we can compute P(a < ξ < b) for all
a, b starting from the CDF!
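Property no. 4 can be checked by simulation. This sketch assumes an exponential RV with rate 1 (not from the course), whose CDF is F(x) = 1 − exp(−x) for x ≥ 0: the relative frequency of {x1 < ξ ≤ x2} should match F(x2) − F(x1).

```python
import math
import random

# Check P(x1 < xi <= x2) = F(x2) - F(x1) for an exponential RV,
# rate 1, with CDF F(x) = 1 - exp(-x) for x >= 0.
random.seed(1)
samples = [random.expovariate(1.0) for _ in range(200_000)]
x1, x2 = 0.5, 1.5

freq = sum(x1 < s <= x2 for s in samples) / len(samples)
exact = (1 - math.exp(-x2)) - (1 - math.exp(-x1))   # F(x2) - F(x1)
```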

The probability density function: definition

The probability density function (PDF): the most useful
function in describing the statistical behaviour of RVs.
wξ : R −→ [0, ∞),

wξ(x) ≜ dFξ(x)/dx.

Legitimate questions:
Why do we name the derivative of the CDF “probability
density function”?
Why do we need a probability “density” function, instead of a
“probability” function?
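The definition can be illustrated numerically (an assumed exponential example, not from the course): differentiating the CDF F(x) = 1 − exp(−x) by a finite difference recovers the exact density w(x) = exp(−x).

```python
import math

# The PDF as the derivative of the CDF, on the exponential
# distribution: F(x) = 1 - exp(-x), exact density w(x) = exp(-x).
def F(x):
    return 1 - math.exp(-x)

def pdf_numeric(x, h=1e-6):
    """Forward-difference approximation of dF/dx at x."""
    return (F(x + h) - F(x)) / h

w_at_1 = pdf_numeric(1.0)   # close to exp(-1)
```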

The probability density function: explanations

If we detail the derivative of the CDF:

wξ(x) = lim_{∆x→0} [Fξ(x + ∆x) − Fξ(x)]/∆x = lim_{∆x→0} P(x < ξ ≤ x + ∆x)/∆x

Hence, for small ∆x:
wξ(x)∆x ≈ P(x < ξ ≤ x + ∆x)

By using wξ(x) we can compute the probability that ξ lies in an
infinitesimal interval around x.
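The quality of the approximation above can be measured directly. This sketch assumes the exponential distribution (w(x) = exp(−x), F(x) = 1 − exp(−x)) and compares w(x)∆x with the exact probability F(x + ∆x) − F(x) for shrinking ∆x.

```python
import math

# The mismatch between w(x)*dx and P(x < xi <= x+dx) vanishes
# as dx -> 0, roughly like dx**2 / 2 * |w'(x)|.
w = lambda x: math.exp(-x)          # exact PDF
F = lambda x: 1 - math.exp(-x)      # exact CDF

x = 1.0
errors = []
for dx in (0.1, 0.01, 0.001):
    exact = F(x + dx) - F(x)        # exact interval probability
    errors.append(abs(w(x) * dx - exact))
```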

The probability density function: explanations

[Figure: graph of the PDF wξ(x); the shaded strip over the interval (x, x + ∆x) has area wξ(x)∆x ≈ P(x < ξ ≤ x + ∆x).]

The probability density function: explanations

Legitimate question: why don’t we talk about P(ξ = x)
instead of complicating things with P(x < ξ ≤ x + ∆x)?
Answer: because most RVs are continuous.
That means that they can take an infinite (even uncountable)
number of values.
That leads directly to P(ξ = x) = 0 in general!
Example: a die with an increasing number of faces N (which
we cannot construct geometrically, but forget about it):
N = 6 ⇒ the probability of a face is 1/6.
N = 20 ⇒ the probability of a face is 1/20.
N = ∞ (a sphere) ⇒ the probability of a face is 1/∞ = 0!
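The same phenomenon can be seen on a continuous RV. This sketch assumes a Uniform(0, 1) variable (not from the course): no single value carries probability mass, while an interval does.

```python
import random

# Among many Uniform(0, 1) draws, virtually no draw equals 0.5
# exactly, while about 10% of them fall in the interval (0.45, 0.55].
random.seed(2)
samples = [random.random() for _ in range(100_000)]

p_point = sum(s == 0.5 for s in samples) / len(samples)
p_interval = sum(0.45 < s <= 0.55 for s in samples) / len(samples)
```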

The probability density function: conclusions


The value of the PDF wξ(x) at a point x ∈ R does not
represent a probability.
The measure of the probability is given by the area
under wξ, and for an area we need an interval, however
small, (x, x + ∆x).
Even if P(ξ = 2.1) = P(ξ = 6.2) = 0, it is obvious from the
shape of the PDF that “2.1 is more likely than 6.2”.

The probability density function: properties


1 wξ(x) ≥ 0.
Proof: it is the derivative of the increasing function Fξ.
2 ∫_{x1}^{x2} wξ(x)dx = P(x1 < ξ ≤ x2)
Proof:
∫_{x1}^{x2} wξ(x)dx = Fξ(x2) − Fξ(x1) = P(x1 < ξ ≤ x2)
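Property no. 2 can be checked numerically. This sketch again assumes the exponential distribution (w(x) = exp(−x), F(x) = 1 − exp(−x)): the area under the PDF over (x1, x2] is compared with Fξ(x2) − Fξ(x1).

```python
import math

# Area under the exponential PDF over (x1, x2], against F(x2) - F(x1).
w = lambda x: math.exp(-x)
F = lambda x: 1 - math.exp(-x)

def integrate(f, a, b, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

x1, x2 = 0.5, 1.5
area = integrate(w, x1, x2)        # area under the PDF
exact = F(x2) - F(x1)              # probability from the CDF
```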

The probability density function: properties

3 The normalisation condition: ∫_{−∞}^{∞} wξ(x)dx = 1
Proof:
We write property no. 2 for x1 = −∞ and x2 = ∞.
4 The CDF: Fξ(y) = ∫_{−∞}^{y} wξ(x)dx.
Proof:
We write property no. 2 for x1 = −∞ and x2 = y.
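The normalisation condition can also be verified numerically. This sketch assumes the exponential PDF w(x) = exp(−x) on [0, ∞); the improper integral is truncated at 50, where the neglected tail exp(−50) is negligible.

```python
import math

# Total area under the exponential PDF, truncated at a large limit.
w = lambda x: math.exp(-x)

def integrate(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

total = integrate(w, 0.0, 50.0)   # should be very close to 1
```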

The probability density function: practical estimation

Legitimate question: how can we approximate the (unknown)
PDF of a RV based on a large set of instances of that RV?
Example: ξ = {2.68, 3.5102, −3.0845, −1.8180, 3.4648, . . .}.
Answer: by computing the histogram of the values:
we determine the range where the RV takes values: [ξmin , ξmax ].
we divide this range into N sub-intervals of equal width (bins).
for each bin, we count how many values of the RV fall into that bin.
we normalize the values of the histogram so that its area becomes 1.
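The steps above can be sketched with NumPy. The sample, the bin count N and the seed below are assumed values for illustration; the RV is taken to be Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.standard_normal(100_000)   # instances of the RV (assumed Gaussian here)

# Step 1: the range [ximin, ximax]; step 2: N equal-width bins over it.
N = 100
edges = np.linspace(xi.min(), xi.max(), N + 1)

# Step 3: count how many values fall into each bin.
counts, _ = np.histogram(xi, bins=edges)

# Step 4: normalize so that the area of the histogram becomes 1.
width = edges[1] - edges[0]
w_hat = counts / (counts.sum() * width)   # estimate of the PDF w_xi(x)

# Sanity check: the area under the estimate is 1.
assert abs(np.sum(w_hat * width) - 1.0) < 1e-9
```

The same normalization is what `np.histogram(xi, bins=edges, density=True)` performs internally.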

[Figures: wξ(x) overlaid with its histogram estimate from 100000 values of ξ, computed with 10, 100, 1000 and 10000 bins.]


The Gaussian (normal) PDF

wξ(x) = (1 / (σ√(2π))) · exp(−(x − m)² / (2σ²)),   m ∈ R, σ > 0


The uniform PDF

wξ(x) = { 1/(b − a)   if a ≤ x ≤ b
        { 0           otherwise.


The Rayleigh PDF

wξ(x) = { (x/σ²) · exp(−x²/(2σ²))   if x ≥ 0
        { 0                         otherwise.
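The three PDFs above translate directly into code. The sketch below checks the normalisation condition (property 3) for each; the parameter values m, σ, a, b are arbitrary choices for the test, not values from the course.

```python
import math

def gaussian_pdf(x, m=1.0, sigma=2.0):
    return math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def uniform_pdf(x, a=-1.0, b=3.0):
    return 1.0 / (b - a) if a <= x <= b else 0.0

def rayleigh_pdf(x, sigma=2.0):
    return (x / sigma ** 2) * math.exp(-x ** 2 / (2 * sigma ** 2)) if x >= 0 else 0.0

def area(f, lo, hi, n=100_000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

# Each PDF integrates to 1 over a range wide enough to hold its mass:
for f, lo, hi in [(gaussian_pdf, -30.0, 30.0),
                  (uniform_pdf, -1.0, 3.0),
                  (rayleigh_pdf, 0.0, 40.0)]:
    assert abs(area(f, lo, hi) - 1.0) < 1e-6
```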
