LECTURE 3
Chapter 3: Enhancement in the Spatial Domain

3.3 Histogram Processing

Histogram: h(r_k) = n_k
• r_k: kth gray level
• n_k: number of pixels with gray level r_k

Normalization ⇒ discrete PDF: p(r_k) = n_k / n
• n: total number of pixels

\sum_{k=0}^{L-1} p(r_k) = 1
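As an illustration (not part of the original notes), a minimal NumPy sketch of the normalized histogram, assuming an 8-bit grayscale image stored as an integer array; the names normalized_histogram and L are mine:

    import numpy as np

    def normalized_histogram(image, L=256):
        # h(r_k) = n_k: number of pixels at each gray level r_k
        counts = np.bincount(image.ravel(), minlength=L)
        # p(r_k) = n_k / n: discrete PDF; the entries sum to 1
        return counts / image.size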

Histogram equalization: 3.3.1
Histogram specification: 3.3.2

3.3 Histogram Processing Examples


3.3.1 Histogram Equalization


First consider continuous gray levels and transformations of the form s = T(r), r ∈ [0, 1], and assume that
(a) T(r) is single-valued and monotonically increasing for r ∈ [0, 1]
(b) T(r) ∈ [0, 1] for r ∈ [0, 1]
The inverse transformation is r = T^{-1}(s), s ∈ [0, 1]. Why these conditions?

View gray levels as random variables:
• p_r(r): continuous PDF of r
• p_s(s): continuous PDF of s
If T^{-1}(s) satisfies condition (a), then


p_s(s) = p_r(r) \left| \frac{dr}{ds} \right|

Consider the transformation function

s = T(r) = \int_0^r p_r(w)\, dw

The RHS is the cumulative distribution function (CDF) of r, and it satisfies conditions (a) and (b). From Leibniz's rule,

\frac{ds}{dr} = \frac{dT(r)}{dr} = \frac{d}{dr} \int_0^r p_r(w)\, dw = p_r(r)

so that

p_s(s) = p_r(r) \left| \frac{dr}{ds} \right| = p_r(r) \left| \frac{1}{p_r(r)} \right| = 1, \quad s ∈ [0, 1]

Thus, for T(r) = \int_0^r p_r(w)\, dw, the resulting p_s(s) is always uniform.
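This is the probability integral transform, and it can be checked numerically. The sketch below (an illustration, not from the notes) draws samples from an exponential PDF, applies its CDF as T, and verifies that the transformed values are approximately uniform on [0, 1]; the exponential is supported on [0, ∞) rather than [0, 1], but the uniformity of s holds for any continuous PDF with a strictly increasing CDF:

    import numpy as np

    rng = np.random.default_rng(0)
    r = rng.exponential(scale=1.0, size=100_000)   # samples with PDF p_r(r) = e^{-r}, r >= 0
    s = 1.0 - np.exp(-r)                           # s = T(r), the CDF of r
    hist, _ = np.histogram(s, bins=10, range=(0.0, 1.0), density=True)
    print(hist)                                    # every bin is close to 1.0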

Now consider discrete values. Recall

p(r_k) = \frac{n_k}{n}, \quad k = 0, 1, 2, \ldots, L-1

The discrete version of s = T(r) = \int_0^r p_r(w)\, dw is

s_k = T(r_k) = \sum_{j=0}^{k} p_r(r_j) = \sum_{j=0}^{k} \frac{n_j}{n}, \quad k = 0, 1, 2, \ldots, L-1

and is called histogram equalization.
NB: This will not produce a uniform histogram, but it will tend to spread the histogram of the input image.
Advantages:
• Gray-level values cover the entire scale (contrast enhancement)
• Fully automatic
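A compact sketch of discrete histogram equalization (illustrative, assuming an 8-bit NumPy image; rescaling s_k to integer levels in [0, L−1] is the usual convention rather than something stated on this slide):

    import numpy as np

    def equalize(image, L=256):
        p = np.bincount(image.ravel(), minlength=L) / image.size   # p(r_k) = n_k / n
        s = np.cumsum(p)                                           # s_k = sum over j <= k of p(r_j)
        lut = np.round((L - 1) * s).astype(np.uint8)               # scale s_k back to gray levels
        return lut[image]                                          # map every pixel through T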


Example 3.3: Histogram equalization


Example 3.3: Histogram equalization... Transformation functions


3.3.2 Histogram Matching (Specification)
• In some applications histogram equalization is not the best approach
• Instead, generate a processed image with a specified histogram

Development of the method
Again consider continuous gray levels r and z:
• p_r(r): continuous PDF of the input image
• p_z(z): specified (desired) continuous PDF of the output image
Let s be a random variable where

s = T(r) = \int_0^r p_r(w)\, dw

and define z as a random variable where

G(z) = \int_0^z p_z(t)\, dt = s

Thus G(z) = T(r) and

z = G^{-1}(s) = G^{-1}[T(r)]
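A worked continuous example may help (the densities are my own choice, not from the notes): take p_r(r) = 2r and the desired p_z(z) = 2(1 − z), both on [0, 1]. Then

s = T(r) = \int_0^r 2w\, dw = r^2, \qquad G(z) = \int_0^z 2(1 - t)\, dt = 1 - (1 - z)^2

and setting G(z) = s gives

z = G^{-1}(s) = 1 - \sqrt{1 - s} = 1 - \sqrt{1 - r^2}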


Procedure for the continuous formulation
(1) Obtain T(r)
(2) Obtain G(z)
(3) Obtain G^{-1}
(4) Pixels of input image → T → G^{-1} → pixels of output image
• It is seldom possible to obtain analytical expressions for T(r) and G^{-1}
• The discrete case is easier, but yields only an approximation to the desired histogram

Discrete formulation
Recall histogram equalization:

s_k = T(r_k) = \sum_{j=0}^{k} p_r(r_j) = \sum_{j=0}^{k} \frac{n_j}{n}, \quad k = 0, 1, 2, \ldots, L-1

For histogram specification, we have

v_k = G(z_k) = \sum_{i=0}^{k} p_z(z_i) = s_k, \quad k = 0, 1, 2, \ldots, L-1


Discrete formulation for histogram specification

s_k = T(r_k) = \sum_{j=0}^{k} p_r(r_j) = \sum_{j=0}^{k} \frac{n_j}{n}, \quad k = 0, 1, 2, \ldots, L-1

v_k = G(z_k) = \sum_{i=0}^{k} p_z(z_i) = s_k, \quad k = 0, 1, 2, \ldots, L-1

z_k = G^{-1}[T(r_k)] = G^{-1}[s_k], \quad k = 0, 1, 2, \ldots, L-1

r_0     → T → s_0           z_0     → G → s_0 = v_0
r_1     → T → s_1           z_1     → G → s_1 = v_1
 .            .              .             .
r_{L-1} → T → s_{L-1}       z_{L-1} → G → s_{L-1} = v_{L-1}


Procedure (discrete formulation)
(1) Obtain the histogram of the given image
(2) Precompute the mapping r_k → s_k
(3) Obtain G from the given p_z(z)
(4) Precompute the mapping s_k → z_k using an iterative scheme: let z_k = ẑ, where ẑ is the smallest integer in [0, L−1] such that G(ẑ) − s_k ≥ 0, for k = 0, 1, 2, . . . , L−1. For k = k + 1, start with ẑ = z_k and increase in integer steps
(5) For each pixel, use (2) and (4): r_k → s_k → z_k
• NB: None of the values of p_z(z_i) can be zero. Why?
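A sketch of the discrete specification procedure (illustrative; names such as match_histogram and p_z are mine, and np.searchsorted performs exactly the "smallest ẑ with G(ẑ) − s_k ≥ 0" search of step (4)):

    import numpy as np

    def match_histogram(image, p_z, L=256):
        # (1)-(2): equalization transform s_k = T(r_k) of the input image
        p_r = np.bincount(image.ravel(), minlength=L) / image.size
        s = np.cumsum(p_r)
        # (3): G(z_k), the CDF of the specified histogram p_z
        G = np.cumsum(p_z)
        # (4): for each s_k, the smallest z_hat with G(z_hat) >= s_k
        z = np.searchsorted(G, s)
        lut = np.clip(z, 0, L - 1).astype(np.uint8)
        # (5): map r_k -> s_k -> z_k for every pixel
        return lut[image]

Here p_z is assumed to be a length-L array of the desired p_z(z_i) values, with no zero entries, matching the NB above.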



• Histogram specification: trial-and-error process

3.3.3 Local Enhancement

• The previous methods (3.3.1 and 3.3.2) were global
• Define a square or rectangular neighbourhood (mask) and move its centre from pixel to pixel
• For each neighbourhood:
  • Calculate the histogram of the points in the neighbourhood
  • Obtain the histogram equalization/specification function
  • Map the gray level of the pixel centred in the neighbourhood
• The new pixel values and the previous histogram can be used to calculate the next histogram

Example 3.5: Enhancement using local histograms
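A brute-force sketch of the sliding-window procedure described above (illustrative; it recomputes the local histogram at every position rather than using the faster update mentioned in the last bullet):

    import numpy as np

    def local_equalize(image, size=3, L=256):
        pad = size // 2
        padded = np.pad(image, pad, mode='reflect')
        out = np.empty_like(image)
        rows, cols = image.shape
        for x in range(rows):
            for y in range(cols):
                block = padded[x:x + size, y:y + size]       # neighbourhood centred at (x, y)
                p = np.bincount(block.ravel(), minlength=L) / block.size
                s = np.cumsum(p)                              # local equalization function
                out[x, y] = np.round((L - 1) * s[image[x, y]])
        return out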


3.3.4 Use of Histogram Statistics for Image Enhancement
The nth moment of r (discrete) about its mean is defined as

\mu_n(r) = \sum_{i=0}^{L-1} (r_i - m)^n p(r_i)

where m is the mean value of r:

m = \sum_{i=0}^{L-1} r_i\, p(r_i)

Note that \mu_0 = 1 and \mu_1 = 0, and that \mu_2 is the variance \sigma^2(r):

\mu_2(r) = \sum_{i=0}^{L-1} (r_i - m)^2 p(r_i)

Mean: a measure of average gray level
Variance: a measure of average contrast

Let (x, y) be the coordinates of a pixel in an image, and let S_{xy} denote a subimage centred at (x, y):

m_{S_{xy}} = \sum_{(s,t) \in S_{xy}} r_{s,t}\, p(r_{s,t})

\sigma^2_{S_{xy}} = \sum_{(s,t) \in S_{xy}} \left( r_{s,t} - m_{S_{xy}} \right)^2 p(r_{s,t})
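A small sketch of these local statistics for a single neighbourhood S_xy (illustrative; it assumes (x, y) lies at least size // 2 pixels from the image border):

    import numpy as np

    def local_stats(image, x, y, size=3, L=256):
        half = size // 2
        block = image[x - half:x + half + 1, y - half:y + half + 1]   # subimage S_xy
        p = np.bincount(block.ravel(), minlength=L) / block.size      # local p(r_{s,t})
        levels = np.arange(L)
        m = np.sum(levels * p)                    # m_Sxy: local mean
        var = np.sum((levels - m) ** 2 * p)       # sigma^2_Sxy: local variance
        return m, var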


Example 3.6: Enhancement based on local statistics

g(x, y) = \begin{cases} E \cdot f(x, y) & \text{if } m_{S_{xy}} \in [0,\, k_0 M_G] \text{ AND } \sigma_{S_{xy}} \in [k_1 D_G,\, k_2 D_G] \\ f(x, y) & \text{otherwise} \end{cases}

M_G: global mean
D_G: global standard deviation


Example 3.6 (continued): E = 4.0, k_0 = 0.4, k_1 = 0.02, k_2 = 0.4, 3×3 local region
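Putting the pieces together, a sketch of the local-statistics enhancement with these parameters (illustrative; the block mean and standard deviation computed directly are equivalent to the histogram-based m_Sxy and sigma_Sxy above, and clipping to 255 assumes an 8-bit image):

    import numpy as np

    def enhance_local(f, E=4.0, k0=0.4, k1=0.02, k2=0.4, size=3):
        MG, DG = f.mean(), f.std()                # global mean M_G and std dev D_G
        pad = size // 2
        padded = np.pad(f.astype(float), pad, mode='reflect')
        g = f.astype(float)
        rows, cols = f.shape
        for x in range(rows):
            for y in range(cols):
                block = padded[x:x + size, y:y + size]
                m, sd = block.mean(), block.std()
                # amplify only dark, low-contrast neighbourhoods
                if m <= k0 * MG and k1 * DG <= sd <= k2 * DG:
                    g[x, y] = E * f[x, y]
        return np.clip(g, 0, 255).astype(f.dtype)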
