
EC-364 Computer Vision

Linear Filters

Dr. Kamal M. Captain

SVNIT, Surat, India.

February 23, 2023

February 23, 2023 1 / 121


Image as a Function

f (x, y ) is the image intensity at position (x, y ).


February 23, 2023 2 / 121
Pixel (Point) Processing
Transformation T of the intensity f at each pixel to g:
g(x, y) = T(f(x, y))
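As a quick illustration (a minimal NumPy sketch, not from the slides; the toy image and the two operations are made-up examples), a point operation maps each input intensity to an output intensity independently of neighbouring pixels:

```python
import numpy as np

def point_process(f, T):
    """Apply an intensity transformation T independently to every pixel."""
    return T(f)

f = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)         # toy 8-bit image
negative = point_process(f, lambda v: 255 - v)                      # image negative
gamma = point_process(f, lambda v: (255 * (v / 255.0) ** 0.5).astype(np.uint8))
```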

February 23, 2023 3 / 121


Pixel (Point) Processing

February 23, 2023 4 / 121


Examples

February 23, 2023 5 / 121


Examples

February 23, 2023 6 / 121


Examples

February 23, 2023 7 / 121


Examples

February 23, 2023 8 / 121


Examples

February 23, 2023 9 / 121


Examples

February 23, 2023 10 / 121


Examples

February 23, 2023 11 / 121


Smoothing

February 23, 2023 12 / 121


Smoothing

February 23, 2023 13 / 121


Gaussian Kernel

February 23, 2023 14 / 121


Gaussian Kernel

February 23, 2023 15 / 121


Gaussian Smoothing

February 23, 2023 16 / 121


Gaussian Smoothing

February 23, 2023 17 / 121


Gaussian Smoothing

February 23, 2023 18 / 121


Gaussian Smoothing is Separable

February 23, 2023 19 / 121


Gaussian Smoothing is Separable

February 23, 2023 20 / 121


Gaussian Smoothing is Separable

February 23, 2023 21 / 121


Gaussian Smoothing is Separable

February 23, 2023 22 / 121


Correlation

Figure: Template Matching

How do you locate the template in the image?

February 23, 2023 23 / 121


Correlation

One way to do this is to slide the template over the image and, at each
point, compute the difference between the template and the image in the
overlapping region.
Wherever the difference is very small, we can say that we have found the
template.
Minimize:
E[i, j] = \sum_{m} \sum_{n} \left( f[m, n] - t[m - i, n - j] \right)^2 .

This is called the sum of squared differences (SSD).


When this is 0, we say that we have found the part of the image that exactly
matches the template.
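A brute-force sketch of this search (assuming NumPy; the function and variable names are illustrative, not from the slides):

```python
import numpy as np

def ssd_map(f, t):
    """Sum of squared differences E[i, j] for every offset of template t
    over image f (only offsets where t lies fully inside f)."""
    H, W = f.shape
    h, w = t.shape
    E = np.empty((H - h + 1, W - w + 1))
    for i in range(E.shape[0]):
        for j in range(E.shape[1]):
            patch = f[i:i + h, j:j + w]
            E[i, j] = np.sum((patch - t) ** 2)
    return E

# The best match is the offset with the smallest SSD, e.g.
# E = ssd_map(image, template); i, j = np.unravel_index(np.argmin(E), E.shape)
```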

February 23, 2023 24 / 121


Correlation

Let us expand the previous expression


E[i, j] = \sum_{m} \sum_{n} \left( f^2[m, n] + t^2[m - i, n - j] - 2 f[m, n]\, t[m - i, n - j] \right) .

Minimizing E[i, j] is equivalent to maximizing


2 f[m, n]\, t[m - i, n - j].

This term is called the cross-correlation


R_{tf}[i, j] = \sum_{m} \sum_{n} f[m, n]\, t[m - i, n - j] = t \otimes f .

February 23, 2023 25 / 121


Convolution vs. Correlation

Convolution:
g[i, j] = \sum_{m} \sum_{n} f[m, n]\, t[i - m, j - n] = t * f .

Correlation:

R_{tf}[i, j] = \sum_{m} \sum_{n} f[m, n]\, t[m - i, n - j] = t \otimes f .

No Flipping in Correlation
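A quick numerical check of this relationship (a sketch assuming SciPy is available; the toy arrays are made up): correlation with t equals convolution with the flipped kernel.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

f = np.arange(25, dtype=float).reshape(5, 5)       # toy image
t = np.array([[1., 2., 1.],
              [0., 0., 0.],
              [-1., -2., -1.]])                    # toy template

corr = correlate2d(f, t, mode='same')              # no flipping
conv = convolve2d(f, t[::-1, ::-1], mode='same')   # convolve with flipped kernel
assert np.allclose(corr, conv)                     # identical results
```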

February 23, 2023 26 / 121


Problem with Cross-correlation

Let us take the example of a 1D signal.

Where will the cross-correlation be maximum?

February 23, 2023 27 / 121


Problem with Cross-correlation

Let us take the example of a 1D signal.

Rtf (C ) > Rtf (B) > Rtf (A).


We need Rtf (A) to be maximum.
Why has this happened?

February 23, 2023 28 / 121


Normalized Cross-Correlation

Account for energy differences


N_{tf}[i, j] = \frac{\sum_{m} \sum_{n} f[m, n]\, t[m - i, n - j]}{\sqrt{\sum_{m} \sum_{n} f^2[m, n]}\; \sqrt{\sum_{m} \sum_{n} t^2[m - i, n - j]}}

Ntf (A) > Ntf (B) > Ntf (C ).

February 23, 2023 29 / 121


Normalized Cross-Correlation

Account for energy differences


N_{tf}[i, j] = \frac{\sum_{m} \sum_{n} f[m, n]\, t[m - i, n - j]}{\sqrt{\sum_{m} \sum_{n} f^2[m, n]}\; \sqrt{\sum_{m} \sum_{n} t^2[m - i, n - j]}}
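A direct sketch of this formula (assuming NumPy; the image energy in the denominator is taken over the patch under the window, and the names are illustrative):

```python
import numpy as np

def ncc_map(f, t):
    """Normalized cross-correlation N_tf[i, j] of template t against image f."""
    H, W = f.shape
    h, w = t.shape
    t_energy = np.sqrt(np.sum(t ** 2))
    N = np.zeros((H - h + 1, W - w + 1))
    for i in range(N.shape[0]):
        for j in range(N.shape[1]):
            patch = f[i:i + h, j:j + w]
            denom = np.sqrt(np.sum(patch ** 2)) * t_energy
            if denom > 0:                        # guard against all-zero patches
                N[i, j] = np.sum(patch * t) / denom
    return N
```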

February 23, 2023 30 / 121


Fourier Transform
Jean Baptiste Joseph Fourier had the idea in 1807.
Any Periodic Function can be rewritten as a Weighted Sum of Infinite
Sinusoids of Different Frequencies.
Many scientists including Lagrange, Laplace and Poisson did not
believe in the idea.
It took about eight years to publish his results, and they were not
translated into English until 1878.

Figure: Joseph Fourier (1768-1830)

February 23, 2023 31 / 121


Sinusoid
A sinusoidal signal f(x) is expressed as
f (x) = A sin (2πux + ϕ)

where,
A: Amplitude
T: Period
ϕ: Phase
u: Frequency (1/T)
February 23, 2023 32 / 121
Fourier Series

Let us see a demo.


February 23, 2023 33 / 121
Fourier Series

February 23, 2023 34 / 121


Fourier Series

February 23, 2023 35 / 121


Fourier Series

February 23, 2023 36 / 121


Fourier Series

February 23, 2023 37 / 121


Frequency Representation of a Signal

February 23, 2023 38 / 121


Fourier Transform

February 23, 2023 39 / 121


Fourier Transform

February 23, 2023 40 / 121


Finding FT and IFT

Fourier Transform:
F(u) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi u x}\, dx

where x: space, u: frequency



e^{i\theta} = \cos(\theta) + i \sin(\theta), \quad \text{where } i = \sqrt{-1}

Inverse Fourier Transform:


f(x) = \int_{-\infty}^{\infty} F(u)\, e^{i 2\pi u x}\, du

February 23, 2023 41 / 121


Fourier Transform is Complex

F (u) holds the amplitude and the phase of the sinusoid of frequency
u.
F(u) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi u x}\, dx

F (u) = Re {F (u)} + iIm {F (u)}

Amplitude: A(u) = \sqrt{\mathrm{Re}\{F(u)\}^2 + \mathrm{Im}\{F(u)\}^2}

Phase: \varphi(u) = \mathrm{atan2}\left(\mathrm{Im}\{F(u)\},\, \mathrm{Re}\{F(u)\}\right)
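A small discrete illustration (a sketch using NumPy's FFT, which approximates the continuous transform above; the sampling rate and test signal are made up):

```python
import numpy as np

fs = 64                                   # samples per second (assumed)
x = np.arange(fs) / fs                    # one second of samples
f = np.sin(2 * np.pi * 4 * x + 0.5)       # sinusoid: u = 4 Hz, phase 0.5 rad

F = np.fft.fft(f)                         # complex spectrum
amplitude = np.abs(F)                     # A(u) = sqrt(Re^2 + Im^2)
phase = np.arctan2(F.imag, F.real)        # phi(u) = atan2(Im, Re)

u = np.fft.fftfreq(fs, d=1 / fs)          # frequency axis
print(u[np.argmax(amplitude)])            # 4.0 (the mirrored -4 bin is equal)
```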

February 23, 2023 42 / 121


Gaussian Smoothing in Fourier Domain

Convolve the noisy signal with a Gaussian kernel to reduce the noise.
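A minimal 1-D sketch of this (assuming NumPy; the signal, kernel size and sigma are illustrative choices):

```python
import numpy as np

x = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 3 * x) + 0.3 * np.random.randn(x.size)  # noisy signal

sigma = 3.0                                     # kernel width in samples
k = np.arange(-10, 11)
g = np.exp(-k ** 2 / (2 * sigma ** 2))
g /= g.sum()                                    # normalized Gaussian kernel

smoothed = np.convolve(signal, g, mode='same')  # Gaussian smoothing

# Fourier-domain view: convolution becomes multiplication of spectra
# (this is circular convolution; the result is cyclically shifted by the
# kernel's centre index, here 10 samples, relative to 'smoothed').
smoothed_f = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(g, n=signal.size)))
```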

February 23, 2023 43 / 121


Gaussian Smoothing in Fourier Domain

February 23, 2023 44 / 121


Gaussian Smoothing in Fourier Domain

February 23, 2023 45 / 121


Gaussian Smoothing in Fourier Domain

February 23, 2023 46 / 121


2D Fourier Transform

Fourier Transform:
F(u, v) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\, e^{-i 2\pi (ux + vy)}\, dx\, dy

where u and v are frequencies along x and y , respectively.

Inverse Fourier Transform:

f(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(u, v)\, e^{i 2\pi (ux + vy)}\, du\, dv

where u and v are frequencies along x and y , respectively.


We know that the images are discrete, so we have to extend the
concept to discrete forms.

February 23, 2023 47 / 121


2D Fourier Transform: Discrete Images
Discrete Fourier Transform (DFT):
F[p, q] = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f[m, n]\, e^{-i 2\pi p m / M}\, e^{-i 2\pi q n / N},

where p = 0, 1, . . . , M − 1, q = 0, 1, . . . , N − 1.
p and q are frequencies along m and n, respectively.

Inverse Discrete Fourier Transform (IDFT):


f[m, n] = \frac{1}{MN} \sum_{p=0}^{M-1} \sum_{q=0}^{N-1} F[p, q]\, e^{i 2\pi p m / M}\, e^{i 2\pi q n / N},

for m = 0, 1, . . . , M − 1, n = 0, 1, . . . , N − 1.
With these expressions, we can now compute the Fourier transform of
discrete images.
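As a sanity check (a NumPy sketch; the direct double sum below is only practical for tiny images, while np.fft.fft2 computes the same DFT efficiently):

```python
import numpy as np

def dft2_direct(f):
    """2-D DFT from the double-sum definition above (slow, small arrays only)."""
    M, N = f.shape
    m = np.arange(M).reshape(M, 1)
    n = np.arange(N).reshape(1, N)
    F = np.empty((M, N), dtype=complex)
    for p in range(M):
        for q in range(N):
            F[p, q] = np.sum(f * np.exp(-2j * np.pi * (p * m / M + q * n / N)))
    return F

f = np.random.rand(8, 8)                        # toy discrete image
assert np.allclose(dft2_direct(f), np.fft.fft2(f))
```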
February 23, 2023 48 / 121
2D Fourier Transform: Example 1

We have variation only in the x direction; there is no variation along the y
direction.
There are only horizontal frequencies and no vertical frequencies.
The central dot is called the DC component or the average intensity
of the original spatial domain image.

February 23, 2023 49 / 121


2D Fourier Transform: Example 1

February 23, 2023 50 / 121


2D Fourier Transform: Example 2

February 23, 2023 51 / 121


2D Fourier Transform: Example 3

February 23, 2023 52 / 121


2D Fourier Transform: Example 4

February 23, 2023 53 / 121


Low Pass Filtering

February 23, 2023 54 / 121


Low Pass Filtering

February 23, 2023 55 / 121


Low Pass Filtering

February 23, 2023 56 / 121


High Pass Filtering

February 23, 2023 57 / 121


High Pass Filtering

February 23, 2023 58 / 121


High Pass Filtering

February 23, 2023 59 / 121


Gaussian Smoothing

February 23, 2023 60 / 121


Gaussian Smoothing

February 23, 2023 61 / 121


Gaussian Smoothing

February 23, 2023 62 / 121


Importance of Phase

February 23, 2023 63 / 121


Hybrid Image

February 23, 2023 64 / 121


Deconvolution

February 23, 2023 65 / 121


Motion Blur

f (x, y ) ∗ h (x, y ) = g (x, y )

February 23, 2023 66 / 121


Motion Blur

f (x, y ) ∗ h (x, y ) = g (x, y )

Given the captured image g(x, y) and the PSF h(x, y), can we estimate the
actual scene f(x, y)?

February 23, 2023 67 / 121


Motion Deblur: Deconvolution

f (x, y ) ∗ h (x, y ) = g (x, y )

Let f ′ be the recovered image.


f ′ (x, y ) ∗ h (x, y ) = g (x, y )

F ′ (u, v ) H (u, v ) = G (u, v )


F'(u, v) = \frac{G(u, v)}{H(u, v)}
We can then take IFT of F ′ (u, v ) to get f ′ (x, y ).
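A minimal sketch of this inverse filter (assuming NumPy; the function name and the small guard against H ≈ 0 are my additions, anticipating the issues discussed later):

```python
import numpy as np

def inverse_filter(g, h, eps=1e-3):
    """Naive deconvolution F' = G / H in the Fourier domain.

    g: blurred image, h: PSF (zero-padded to g's shape).
    eps guards against division by near-zero values of H.
    """
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    H = np.where(np.abs(H) < eps, eps, H)       # avoid blow-up where H ~ 0
    return np.real(np.fft.ifft2(G / H))
```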
February 23, 2023 68 / 121
Motion Deblur: Deconvolution

F'(u, v) = \frac{G(u, v)}{H(u, v)} \;\rightarrow\; \text{IFT} \;\rightarrow\; f'(x, y)

February 23, 2023 69 / 121


Motion Deblur: Deconvolution

F'(u, v) = \frac{G(u, v)}{H(u, v)} \;\rightarrow\; \text{IFT} \;\rightarrow\; f'(x, y)

Step 1: Recover F ′ (u, v ) in Fourier Domain.

February 23, 2023 70 / 121


Motion Deblur: Deconvolution

F'(u, v) = \frac{G(u, v)}{H(u, v)} \;\rightarrow\; \text{IFT} \;\rightarrow\; f'(x, y)

Step 1: Recover F ′ (u, v ) in Fourier Domain.

February 23, 2023 71 / 121


Motion Deblur: Deconvolution

F'(u, v) = \frac{G(u, v)}{H(u, v)} \;\rightarrow\; \text{IFT} \;\rightarrow\; f'(x, y)

Step 2: Compute IFT of F ′ (u, v ) to recover f ′ (x, y ).

February 23, 2023 72 / 121


Noise Issue in Deconvolution

f (x, y ) ∗ h (x, y ) + η (x, y ) = g (x, y )

February 23, 2023 73 / 121


Motion Deblur: Deconvolution

If we ignore the noise η (x, y ):

F'(u, v) = \frac{G(u, v)}{H(u, v)} \;\rightarrow\; \text{IFT} \;\rightarrow\; f'(x, y)

February 23, 2023 74 / 121


Motion Deblur: Deconvolution
If we ignore the noise η (x, y ):

F'(u, v) = \frac{G(u, v)}{H(u, v)} \;\rightarrow\; \text{IFT} \;\rightarrow\; f'(x, y)

Higher frequencies in F (u, v ) are amplified.


February 23, 2023 75 / 121
Motion Deblur: Deconvolution
If we ignore the noise η (x, y ):

F'(u, v) = \frac{G(u, v)}{H(u, v)} \;\rightarrow\; \text{IFT} \;\rightarrow\; f'(x, y)

Noise is significantly amplified.


February 23, 2023 76 / 121
Deconvolution Issues

F'(u, v) = \frac{G(u, v)}{H(u, v)} \;\rightarrow\; \text{IFT} \;\rightarrow\; f'(x, y)

1 Where H(u, v ) = 0, F ′ (u, v ) = ∞ → Not recoverable.

2 Motion blur filter H(u, v ) is a low pass filter.

For high frequencies (u, v ):


Noise N(u, v ) in G (u, v ) is high
Filter H(u, v ) ≈ 0.

Hence, noise in G (u, v ) is amplified.


We need some kind of noise suppression to solve this issue.

February 23, 2023 77 / 121


Noise Suppression: Wiener Deconvolution

 
F'(u, v) = \frac{G(u, v)}{H(u, v)} \left[ \frac{1}{1 + \frac{\mathrm{NSR}(u, v)}{|H(u, v)|^2}} \right]

where:

Wiener Filter: W(u, v) = \frac{1}{H(u, v)} \left[ \frac{1}{1 + \frac{\mathrm{NSR}(u, v)}{|H(u, v)|^2}} \right]

Noise-to-Signal Ratio, NSR(u, v):

\mathrm{NSR}(u, v) = \frac{\text{power of noise at } (u, v)}{\text{power of signal (scene) at } (u, v)} = \frac{|N(u, v)|^2}{|F(u, v)|^2}

February 23, 2023 78 / 121


Noise Suppression: Wiener Deconvolution

 
F'(u, v) = \frac{G(u, v)}{H(u, v)} \left[ \frac{1}{1 + \frac{\mathrm{NSR}(u, v)}{|H(u, v)|^2}} \right].

Determining NSR requires us to have prior knowledge of the noise
"pattern" and of the scene (or of a similar scene).

\mathrm{NSR}(u, v) = \frac{|N(u, v)|^2}{|F(u, v)|^2}.

Often NSR is set to a single suitable constant λ.

NSR (u, v ) = λ.
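A compact sketch of Wiener deconvolution with a constant NSR (assuming NumPy; the algebraically equivalent form conj(H) / (|H|^2 + lambda) avoids dividing by H directly):

```python
import numpy as np

def wiener_deconvolve(g, h, nsr=0.002):
    """Wiener deconvolution of blurred, noisy image g with PSF h,
    using a constant noise-to-signal ratio nsr (lambda)."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # = (1/H) * 1 / (1 + nsr/|H|^2)
    return np.real(np.fft.ifft2(W * G))
```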

February 23, 2023 79 / 121


Noise Suppression: Weiner Deconvolution

NSR(u, v) = λ = 0.002 was used to recover the image.

February 23, 2023 80 / 121


Sampling Theory and Aliasing

February 23, 2023 81 / 121


Sampling Theory

We know that the lens of the camera creates a continuous optical


image on the image plane.

At the image plane there is an image sensor which converts this
continuous image into a discrete digital image.

Now the question is: how densely should we sample the continuous
image so that we don't lose important information in it,
and at the same time do not introduce any artifacts or undesired effects?

This brings us to Sampling Theory.

February 23, 2023 82 / 121


From Continuous to Digital Image

February 23, 2023 83 / 121


Sampling Problem

February 23, 2023 84 / 121


Sampling Problem

February 23, 2023 85 / 121


Sampling Theory

February 23, 2023 86 / 121


Shah Function (Impulse Train)

February 23, 2023 87 / 121


Fourier Analysis of Sampled Signal

February 23, 2023 88 / 121


Fourier Analysis of Sampled Signal

February 23, 2023 89 / 121


Aliasing

February 23, 2023 90 / 121


Nyquist Theorem

February 23, 2023 91 / 121


Aliasing in Digital Imaging

How do sensors combat aliasing?

February 23, 2023 92 / 121


Minimizing the Effects of Aliasing

February 23, 2023 93 / 121


Histogram Processing

February 23, 2023 94 / 121


Histogram

Let rk , for k = 0, 1, 2, . . . , L − 1, denote the intensities of an L−level


digital image, f (x, y ).
The unnormalized histogram of f is defined as

h (rk ) = nk for k = 0, 1, 2, . . . , L − 1,

where nk is the number of pixels in f with intensity rk , and the


subdivisions of the intensity scale are called histogram bins.
Similarly, the normalized histogram of f is defined as

p(r_k) = \frac{h(r_k)}{MN} = \frac{n_k}{MN}
where M and N are the number of image rows and columns,
respectively.
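A small NumPy sketch of these definitions (the image here is random, purely for illustration):

```python
import numpy as np

def histograms(f, L=256):
    """Unnormalized and normalized histograms of an integer image f in [0, L-1]."""
    h = np.bincount(f.ravel(), minlength=L)    # h(r_k) = n_k
    p = h / f.size                             # p(r_k) = n_k / (M*N)
    return h, p

f = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
h, p = histograms(f)
assert np.isclose(p.sum(), 1.0)                # probabilities sum to 1
```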

February 23, 2023 95 / 121


Histogram

Mostly, the normalized histogram is used; it is referred to simply as the
histogram or image histogram.

The sum of p(rk ) for all values of k is always 1.

The components of p(rk ) are estimates of the probabilities of


intensity levels occurring in an image.

Histogram manipulation is a fundamental tool in image processing.

Histograms are simple to compute and are also suitable for fast
hardware implementations, thus making histogram-based techniques
a popular tool for real-time image processing.

February 23, 2023 96 / 121


Histogram

February 23, 2023 97 / 121


Some Observations

The components of the histogram of the high-contrast image cover a


wide range of the intensity scale, and the distribution of pixels is not
too far from uniform, with few bins being much higher than the
others.

It is reasonable to conclude that an image whose pixels tend to occupy


the entire range of possible intensity levels and, in addition, tend to
be distributed uniformly, will have an appearance of high contrast.

It is possible to develop a transformation function that can achieve


this effect automatically, using only the histogram of an input image.

February 23, 2023 98 / 121


Histogram Equalization

Assuming initially continuous intensity values, let the variable r


denote the intensities of an image to be processed.
We assume that r is in the range [0, L − 1] with r = 0 representing
black and r = L − 1 representing white.
For r satisfying these conditions, we focus attention on
transformations (intensity mappings) of the form

s = T (r ) , 0 ≤ r ≤ L − 1

that produce an output intensity value, s, for a given intensity value r


in the input image.
We take following assumptions:
1 T (r ) is a monotonic increasing function in the interval 0 ≤ r ≤ L − 1.
2 0 ≤ T (r ) ≤ L − 1 for 0 ≤ r ≤ L − 1.

February 23, 2023 99 / 121


Histogram Equalization

In some formulations, we use inverse transformation

r = T −1 (s), 0 ≤ s ≤ L − 1

in which case the condition 1 changes to:


1′ T (r ) is strictly monotonic increasing function in the interval
0 ≤ r ≤ L − 1.
Condition 1 guarantees that output intensity values will never be less
than corresponding input values, thus preventing artifacts created by
reversals of intensity.
Condition 2 guarantees that the range of output intensities is the
same as the input.
Condition 1’ guarantees that the mappings from s back to r will be
one-to-one, thus preventing ambiguities.

February 23, 2023 100 / 121


Histogram Equalization

February 23, 2023 101 / 121


Observations

LHS Figure:
We see that it is possible for multiple input values to map to a single
output value and still satisfy these two conditions.
This function performs a one-to-one or many-to-one mapping. This is
perfectly fine when mapping from r to s.
However, suppose that we want to recover the values of r uniquely from the
mapped values.
This would be possible for the inverse mapping of sk , but the inverse
mapping of sq is a range of values, which prevents us in general from
recovering the original value of r that resulted in sq .
RHS Figure:
Requiring T(r) to be strictly monotonic guarantees that the inverse mappings
will be single valued (i.e., the mapping is one-to-one in both directions).

February 23, 2023 102 / 121


Histogram Equalization

The intensity of an image may be viewed as a random variable in the


interval [0, L − 1].

Let pr (r ) and ps (s) denote the PDFs of intensity values r and s in


two different images.

If pr (r ) and T (r ) are known, and T (r ) is continuous and


differentiable over the range of values of interest, then the PDF of the
transformed (mapped) variable s can be obtained as

p_s(s) = p_r(r) \left| \frac{dr}{ds} \right|

We see that the PDF of the output intensity variable, s, is determined


by the PDF of the input intensities and the transformation function
used [recall that r and s are related by T (r )].

February 23, 2023 103 / 121


Histogram Equalization
A transformation function of particular importance in image
processing is

s = T(r) = (L - 1) \int_0^r p_r(w)\, dw
where w is a dummy variable of integration.
The integral on the right side is the cumulative distribution function
(CDF) of random variable r .
Because PDFs are always positive, and the integral of a function
is the area under the function, it follows that this transformation
function satisfies condition 1.
This is because the area under the function cannot decrease as r
increases.
When the upper limit in this equation is r = L − 1 the integral
evaluates to 1, as it must for a PDF. Thus, the maximum value of s
is L − 1, and condition 2 is also satisfied.
February 23, 2023 104 / 121
Histogram Equalization
We use the expression relating p_s(s) to p_r(r) to find the p_s(s)
corresponding to this transformation function.
We know from Leibniz’s rule in calculus that the derivative of a
definite integral with respect to its upper limit is the integrand
evaluated at the limit. Using this, we get
\frac{ds}{dr} = \frac{dT(r)}{dr} = (L - 1) \frac{d}{dr} \left[ \int_0^r p_r(w)\, dw \right] = (L - 1)\, p_r(r)
Substituting this value into the expression for p_s(s), we get

p_s(s) = p_r(r) \left| \frac{dr}{ds} \right| = p_r(r)\, \frac{1}{(L - 1)\, p_r(r)} = \frac{1}{L - 1}, \qquad 0 \le s \le L - 1
We note that this ps (s) is the uniform distribution.
February 23, 2023 105 / 121
Histogram Equalization

February 23, 2023 106 / 121


Histogram Discrete Images
For discrete values, we work with probabilities and summations
instead of probability density functions and integrals (but the
requirement of monotonicity stated earlier still applies).
The probability of occurrence of intensity level rk in a digital image is
approximated by
p_r(r_k) = \frac{n_k}{MN}
where MN is the total number of pixels in the image, and nk denotes
the number of pixels that have intensity rk .
The discrete form of the transformation is given as follows
s_k = T(r_k) = (L - 1) \sum_{j=0}^{k} p_r(r_j), \qquad k = 0, 1, \ldots, L - 1

where L is the number of possible intensity levels in the image (e.g.,


256 for an 8-bit image).
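A direct sketch of this discrete transformation (assuming NumPy; the rounding of s_k to integers is discussed in the example that follows):

```python
import numpy as np

def equalize(f, L=256):
    """Histogram equalization: s_k = (L-1) * sum_{j<=k} p_r(r_j), applied per pixel."""
    p = np.bincount(f.ravel(), minlength=L) / f.size    # p_r(r_k)
    s = np.round((L - 1) * np.cumsum(p)).astype(np.uint8)
    return s[f]                                         # map every pixel through s_k

f = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
g = equalize(f)
```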
February 23, 2023 107 / 121
Histogram Equalization: Example
1 Suppose that a 3-bit image (L = 8) of size 64 × 64 pixels
(MN = 4096) has the intensity distribution shown in the table, where the
intensity levels are integers in the range [0, L − 1] = [0, 7]. Obtain the
values of the histogram equalization transformation function.

February 23, 2023 108 / 121


Histogram Equalization Example: Solution

The transformation for intensity value 0 can be obtained as

s_0 = T(r_0) = 7 \sum_{j=0}^{0} p_r(r_j) = 7\, p_r(r_0) = 7 \times 0.19 = 1.33

The transformation for intensity value 1 can be obtained as

s_1 = T(r_1) = 7 \sum_{j=0}^{1} p_r(r_j) = 7\,(p_r(r_0) + p_r(r_1)) = 7 \times (0.19 + 0.25) = 3.08

Similarly, we can get s2 = 4.55, s3 = 5.67, s4 = 6.23, s5 = 6.65,
s6 = 6.86 and s7 = 7.

February 23, 2023 109 / 121


Histogram Equalization Example: Solution
At this point, the s values are fractional because they were generated
by summing probability values, so we round them to their nearest
integer values in the range [0, 7]:
s0 = 1.33 → 1 s1 = 3.08 → 3 s2 = 4.55 → 5 s3 = 5.67 → 6
s4 = 6.23 → 6 s5 = 6.65 → 7 s6 = 6.86 → 7 s7 = 7 → 7
These are the values of the equalized histogram.
Observe that the transformation yielded only five distinct intensity
levels. Because r0 = 0 was mapped to s0 = 1, there are 790 pixels in
the histogram equalized image with this value.
There are 1023 pixels with a value of s1 = 3 and 850 pixels with a
value of s2 = 5.
Both r3 and r4 were mapped to the same value, 6, so there are
(656 + 329 = 985) pixels in the equalized image with this value.
Similarly, there are (245 + 122 + 81 = 441) pixels with a value of 7 in
the histogram equalized image.
Dividing these numbers by MN = 4096 yielded the equalized histogram.
February 23, 2023 110 / 121
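The example can be checked numerically (a sketch using the pixel counts quoted above; the exact values 1.35 and 3.10 differ slightly from the slide's 1.33 and 3.08 because the slide uses the rounded probabilities 0.19 and 0.25):

```python
import numpy as np

n = np.array([790, 1023, 850, 656, 329, 245, 122, 81])   # n_k from the example
L, MN = 8, n.sum()                                        # MN = 4096

p = n / MN                                                # p_r(r_k)
s = (L - 1) * np.cumsum(p)                                # s_k before rounding
print(np.round(s, 2))            # [1.35 3.1  4.55 5.67 6.23 6.65 6.86 7.  ]
print(np.round(s).astype(int))   # [1 3 5 6 6 7 7 7]  -> five distinct levels
```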
Histogram Equalization Example: Solution

Because a histogram is an approximation to a PDF, and no new


allowed intensity levels are created in the process, perfectly flat
histograms are rare in practical applications of histogram equalization
using this method.
Unlike its continuous counterpart, it cannot be proved in general that
discrete histogram equalization results in a uniform histogram.

February 23, 2023 111 / 121


Histogram Equalization

February 23, 2023 112 / 121


Histogram Equalization

February 23, 2023 113 / 121


Histogram Matching (Specification)

Histogram equalization produces a transformation function that seeks


to generate an output image with a uniform histogram.

When automatic enhancement is desired, this is a good approach to


consider because the results from this technique are predictable and
the method is simple to implement.

However, there are applications in which histogram equalization is not


suitable.

In particular, it is useful sometimes to be able to specify the shape of


the histogram that we wish the processed image to have.

The method used to generate images that have a specified histogram


is called histogram matching or histogram specification.

February 23, 2023 114 / 121


Histogram Matching (Specification)
Consider continuous intensities r and z which we treat as random
variables with PDFs pr (r ) and pz (z), respectively.
Here, r and z denote the intensity levels of the input and output
(processed) images, respectively.
We can estimate pr (r ) from the given input image, and pz (z) is the
specified PDF that we wish the output image to have.
Let s be a random variable with the property
s = T(r) = (L - 1) \int_0^r p_r(w)\, dw \qquad (A)

where w is a dummy variable of integration.


Define a function G on variable z with the property

G(z) = (L - 1) \int_0^z p_z(v)\, dv = s \qquad (B)

where v is a dummy variable of integration.


February 23, 2023 115 / 121
Histogram Matching (Specification)

It follows from equations (A) and (B) that G (z) = s = T (r ) and,


therefore, that z must satisfy the condition

z = G^{-1}(s) = G^{-1}[T(r)] \qquad (C)

The transformation function T (r ) can be obtained using Eq. (A)


after pr (r ) has been estimated using the input image.

Similarly, function G (z) can be obtained from Eq. (B) because pz (z)
is given.

February 23, 2023 116 / 121


Histogram Matching (Specification) Procedure

Equations (A) through (C) imply that an image whose intensity levels
have a specified PDF can be obtained using the following procedure:
1 Obtain pr (r ) from the input image to use in Eq. (A).
2 Use the specified PDF, pz (z), in Eq. (B) to obtain the function G (z).
3 Compute the inverse transformation z = G −1 (s); this is a mapping
from s to z, the latter being the values that have the specified PDF.
4 Obtain the output image by first equalizing the input image using Eq.
(A); the pixel values in this image are the s values. For each pixel with
value s in the equalized image, perform the inverse mapping
z = G −1 (s) to obtain the corresponding pixel in the output image.
When all pixels have been processed with this transformation, the PDF
of the output image, pz (z), will be equal to the specified PDF.

February 23, 2023 117 / 121


Histogram Matching (Specification) Procedure

Because s is related to r by T (r ), it is possible for the mapping that


yields z from s to be expressed directly in terms of r .

However, finding analytical expressions for G^{-1} is not a trivial task.

This is not a problem when working with discrete quantities.

We can convert the continuous result into a discrete form. This
means that we can work with histograms instead of PDFs.

As in histogram equalization, in the conversion we lose the ability to
guarantee a result that will have the exact specified histogram.

February 23, 2023 118 / 121


Histogram Matching (Specification) Procedure
The discrete formulation of Eq. (A) is the histogram equalization
transformation, that is,
s_k = T(r_k) = (L - 1) \sum_{j=0}^{k} p_r(r_j), \qquad k = 0, 1, \ldots, L - 1

Similarly, given a specific value of sk , the discrete formulation of Eq.


(B) involves computing the transformation function
G(z_q) = (L - 1) \sum_{i=0}^{q} p_z(z_i) \qquad (D)

for a value of q so that

G(z_q) = s_k \qquad (E)
where pz (zi ) is the ith value of the specified histogram.
Finally, we obtain the desired value zq from the inverse
transformation:
z_q = G^{-1}(s_k) \qquad (F)
February 23, 2023 119 / 121
Histogram Matching (Specification) Procedure

In practice, there is no need to compute the inverse of G. Because we
deal with intensity levels that are integers, it is a simple matter to
compute all the possible values of G using Eq. (D) for
q = 0, 1, . . . , L − 1.
These values are rounded to the nearest integer values spanning the range
[0, L − 1] and stored in a lookup table.
Then, given a particular value of sk, we look for the closest match in the
table.
For example, if the 27th entry in the table is the closest to sk,
then q = 26 (recall that we start counting intensities at 0) and z26 is
the best solution to Eq. (F). Thus, the given sk value would map to z26.
We repeat this procedure to find the mapping from each value sk to
the value zq that is the closest match in the table. These mappings are
the solution to the histogram specification problem.
February 23, 2023 120 / 121
Histogram Matching (Specification) Procedure
Given an input image and a specified histogram pz (zi ), for
i = 0, 1, . . . , L − 1, we may summarize the procedure for discrete
histogram specification as follows:
1 Compute the histogram, pr (r), of the input image, and use it in the
discrete form of Eq. (A) to map the intensities in the input image to the
intensities in the histogram equalized image. Round the resulting values,
sk, to the integer range [0, L-1].
2 Compute all values of the function G (zq) using Eq. (D) for
q = 0, 1, . . . , L − 1, where pz (zi) are the values of the specified
histogram. Round the values of G to integers in the range [0, L-1].
Store the rounded values of G in a lookup table.
3 For every value of sk , k = 0, 1, . . . , L − 1, use the stored values of G
from Step 2 to find the corresponding value of zq so that G (zq ) is
closest to sk . Store these mappings from s to z. When more than one
value of zq gives the same match (i.e., the mapping is not unique),
choose the smallest value by convention.
4 Form the histogram specified image by mapping every equalized pixel
with value sk to the corresponding pixel with value zq in the histogram
specified image, using the mappings found in Step 3 (see the sketch after
this list).
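A compact sketch of Steps 1-4 (assuming NumPy; the function and variable names are my own, not from the slides):

```python
import numpy as np

def match_histogram(f, p_z, L=256):
    """Discrete histogram specification of image f toward specified histogram p_z."""
    # Step 1: histogram-equalize the input image.
    p_r = np.bincount(f.ravel(), minlength=L) / f.size
    s = np.round((L - 1) * np.cumsum(p_r)).astype(int)        # rounded s_k

    # Step 2: lookup table of rounded G(z_q) values from the specified histogram.
    G = np.round((L - 1) * np.cumsum(p_z)).astype(int)

    # Step 3: for each s_k find the z_q whose G(z_q) is closest
    # (np.argmin returns the smallest q on ties, matching the convention).
    z = np.array([np.argmin(np.abs(G - sk)) for sk in s])

    # Step 4: map every pixel r_k -> s_k -> z_q.
    return z[f]
```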
February 23, 2023 121 / 121
