
Enhancement Using Local Histogram

• Used to enhance details over small portions of the image.


• Define a square or rectangular neighborhood, whose center
moves from pixel to pixel.
• Compute the local histogram over the chosen neighborhood at each
point and apply a histogram equalization or histogram specification
transformation to the center pixel (see the sketch after this list).
• Non-overlapping neighborhoods can also be used to reduce
computation, but this usually results in some artifacts
(a checkerboard-like pattern).
• Read example in Section 3.3.3 of text.
• Another use of histogram information in image enhancement
is through the statistical moments associated with the histogram (recall
that the histogram can be thought of as a probability density
function).
• For example, we can use the local mean and variance to
determine the local brightness/contrast of a pixel. This
information can then be used to decide what transformation,
if any, to apply to that pixel.
• Note that local histogram based operations are non-uniform in
the sense that a different transformation is applied to each
pixel.
• Read example in Section 3.3.4 of text.
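
• A minimal MATLAB sketch of sliding-window local histogram equalization
(not from the text; the 7 x 7 window, the cameraman.tif test image and the
Image Processing Toolbox are assumptions):

    % Local histogram equalization: evaluate the local CDF at the center pixel.
    f = im2double(imread('cameraman.tif'));   % grayscale image in [0,1]
    w = 7;  r = (w-1)/2;                      % odd window size, half-width r
    fp = padarray(f, [r r], 'symmetric');     % extend the image at the borders
    g = zeros(size(f));
    for m = 1:size(f,1)
        for n = 1:size(f,2)
            nbhd = fp(m:m+w-1, n:n+w-1);      % local neighborhood of pixel (m,n)
            g(m,n) = mean(nbhd(:) <= f(m,n)); % empirical local CDF at f(m,n)
        end
    end

  The toolbox function adapthisteq implements a related tile-based
  (contrast-limited) form of local histogram equalization.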
Image Enhancement: Spatial Filtering
• Image enhancement in the spatial domain can be represented as:

      g(m, n) = T(f)(m, n)

  where f is the given image, g is the enhanced image, and T is the
  transformation.

The transformation T may be linear or nonlinear. We will mainly
study linear operators T, but will also see one important nonlinear
operation.

How to specify T

• If the operator T is linear and shift invariant (LSI),
characterized by the point-spread sequence (PSS) h(m, n),
then (recall convolution)

      g(m, n) = h(m, n) * f(m, n)
              = Σ_{l=−∞}^{∞} Σ_{k=−∞}^{∞} h(m − k, n − l) f(k, l)
              = Σ_{l=−∞}^{∞} Σ_{k=−∞}^{∞} f(m − k, n − l) h(k, l)
• In practice, to reduce computations, h(m, n) is of “finite extent”:

      h(k, l) = 0   for (k, l) ∉ ∆

  where ∆ is a small set (called the neighborhood). ∆ is also called
  the support of h.
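
• A minimal MATLAB sketch of the convolution sum for a 3 x 3 mask, i.e.
support ∆ = {−1, 0, 1} × {−1, 0, 1} (the mask, the test image and the
border handling are arbitrary choices; border pixels are simply skipped):

    f = im2double(imread('cameraman.tif'));
    h = ones(3)/9;                      % any 3x3 mask h(k,l), k,l = -1,0,1
    g = zeros(size(f));
    [M, N] = size(f);
    for m = 2:M-1
        for n = 2:N-1
            s = 0;
            for k = -1:1
                for l = -1:1
                    % h(k,l) is stored at h(k+2,l+2) since MATLAB indexing starts at 1
                    s = s + h(k+2, l+2) * f(m-k, n-l);
                end
            end
            g(m,n) = s;
        end
    end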

• In the frequency domain, this can be represented as:

      G(u, v) = He(u, v) Fe(u, v)

  where He(u, v) and Fe(u, v) are obtained after appropriate zero-padding.

• Many LSI operations can be interpreted in the frequency domain as a
“filtering operation”: they have the effect of filtering frequency
components (passing certain frequency components and stopping others).

• The term filtering is generally associated with such operations.
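
• A minimal MATLAB check of the frequency-domain relation above: linear
convolution of f with h equals pointwise multiplication of their
zero-padded 2-D DFTs (the test data are arbitrary):

    f = rand(64);                               % arbitrary 64x64 test image
    h = ones(3)/9;                              % 3x3 averaging mask
    gSpatial = conv2(f, h);                     % full linear convolution (66x66)
    P = size(f) + size(h) - 1;                  % padded size avoids wrap-around
    gFreq = real(ifft2(fft2(f, P(1), P(2)) .* fft2(h, P(1), P(2))));
    max(abs(gSpatial(:) - gFreq(:)))            % ~1e-15, i.e. numerically equal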
• Examples of some common filters (1-D case):

[Figure: frequency responses H(u) and impulse responses h(x) of a 1-D
lowpass filter and a 1-D highpass filter.]

• If h(m, n) is a 3 by 3 mask given by

w1 w2 w3
h = w4 w5 w6
w7 w8 w9
then

g (m, n) = w1 f (m − 1, n − 1) + w2 f (m − 1, n) + w3 f (m − 1, n + 1)
+ w4 f (m, n − 1) + w5 f (m, n) + w6 f (m, n + 1)
+ w7 f (m + 1, n − 1) + w8 f (m + 1, n) + w9 f (m + 1, n + 1)
• The output g(m, n) is computed by sliding the mask over each
pixel of the image f(m, n). This filtering procedure is
sometimes referred to as a moving-average filter.

• Special care is required for the pixels at the border of the image
f(m, n). This depends on the so-called boundary condition.
Common choices are:
  - The mask is truncated at the border (free boundary).
  - The image is extended by appending extra rows/columns at
    the boundaries. The extension is done by repeating the
    first/last row/column or by setting them to some constant
    (fixed boundary).
  - The boundaries “wrap around” (periodic boundary).

• In any case, the final output g(m, n) is restricted to the support
of the original image f(m, n).

• The mask operation can be implemented in MATLAB using the
filter2 command, which is based on the conv2 command
(a sketch follows below).
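
• A minimal sketch of the mask operation in MATLAB (the mask and the test
image are arbitrary choices; imfilter from the Image Processing Toolbox is
used only to illustrate the other boundary conditions):

    f = im2double(imread('cameraman.tif'));
    w = [0 1 0; 1 4 1; 0 1 0] / 8;          % a 3x3 averaging mask
    g1 = filter2(w, f, 'same');             % correlation; zero-padded (free) boundary
    g2 = conv2(f, rot90(w,2), 'same');      % identical: convolve with the flipped mask
    g3 = imfilter(f, w, 'replicate');       % fixed boundary (repeat first/last row/column)
    g4 = imfilter(f, w, 'circular');        % periodic ("wrap around") boundary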
Smoothing Filters
• Image smoothing refers to any image-to-image transformation
designed to “smooth” or flatten the image by reducing the
rapid pixel-to-pixel variation in gray values.

• Smoothing filters are used for:
  Blurring: usually a preprocessing step for removing small
  (unwanted) details before extracting the relevant (large)
  objects, or for bridging gaps in lines/curves.
  Noise reduction: mitigating the effect of noise by linear or
  nonlinear operations.

Image smoothing by averaging (lowpass spatial filtering)

• Smoothing is accomplished by applying an averaging mask.

• An averaging mask is a mask with positive weights, which
sum to 1. It computes a weighted average of the pixel values
in a neighborhood. This operation is sometimes called
neighborhood averaging.

• Some 3 x 3 averaging masks:

      1/5 × [ 0 1 0 ]    1/9 × [ 1 1 1 ]    1/32 × [ 1  3  1 ]    1/8 × [ 0 1 0 ]
            [ 1 1 1 ]          [ 1 1 1 ]           [ 3 16  3 ]          [ 1 4 1 ]
            [ 0 1 0 ]          [ 1 1 1 ]           [ 1  3  1 ]          [ 0 1 0 ]
• This operation is equivalent to lowpass filtering.
Example of Image Blurring

Averaging mask: N × N with all weights equal to 1/N².

[Figure: an original image and its blurred versions for
N = 3, 5, 7, 11, 15, 21.]
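
• A sketch reproducing this blurring example (the test image is an
assumption):

    f = im2double(imread('cameraman.tif'));
    for N = [3 5 7 11 15 21]
        g = filter2(ones(N)/N^2, f, 'same');    % N x N averaging mask, weights 1/N^2
        figure, imshow(g), title(sprintf('N = %d', N))
    end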
Example of noise reduction

Averaging mask: 5 × 5 with all weights equal to 1/25.

[Figure: a noise-free image and versions corrupted by zero-mean Gaussian
noise with variance 0.01 and variance 0.05, each smoothed with this mask.]
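
• A sketch reproducing this noise-reduction example (the test image is an
assumption; the noise parameters are those quoted above):

    f  = im2double(imread('cameraman.tif'));
    h  = ones(5)/25;                                    % 5x5 averaging mask
    g1 = filter2(h, imnoise(f, 'gaussian', 0, 0.01), 'same');
    g2 = filter2(h, imnoise(f, 'gaussian', 0, 0.05), 'same');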


Median Filtering
• The averaging filter is best suited for noise whose distribution
is Gaussian:

      pnoise(x) = (1 / (σ√(2π))) exp(−x² / (2σ²))

• The averaging filter typically blurs edges and sharp details.

• The median filter usually does a better job of preserving edges.

• The median filter is particularly well suited if the noise pattern
exhibits strong (positive and negative) spikes. Example: salt and
pepper noise.

• The median filter is a nonlinear filter that also uses a mask. Each
pixel is replaced by the median of the pixel values in a
neighborhood of the given pixel.

• Suppose A = {a1, a2, …, aK} are the pixel values in a
neighborhood of a given pixel, with a1 ≤ a2 ≤ … ≤ aK. Then

      median(A) = aK/2         for K even
                  a(K+1)/2     for K odd

Note: Median of a set of values is the “center value,” after sorting.

• For example: If A = {0,1,2,4,6,6,10,12,15} , then median(A) = 6.
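
• A minimal MATLAB sketch of 3 x 3 median filtering (zero padding at the
borders and the test image are arbitrary choices; medfilt2 from the Image
Processing Toolbox behaves the same way):

    f = im2double(imread('cameraman.tif'));
    [M, N] = size(f);
    fp = padarray(f, [1 1]);              % zero-pad by one pixel on each side
    g = zeros(M, N);
    for m = 1:M
        for n = 1:N
            nbhd = fp(m:m+2, n:n+2);      % 3x3 neighborhood of pixel (m,n)
            g(m,n) = median(nbhd(:));     % center value of the sorted neighborhood
        end
    end
    g2 = medfilt2(f, [3 3]);              % built-in equivalent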


Example of noise reduction

                                     Gaussian noise: σ = 0.2    Salt & Pepper noise: prob. = 0.2
  Noisy image                        MSE = 0.0337               MSE = 0.062
  Output of 3x3 averaging filter     MSE = 0.0075               MSE = 0.0125
  Output of 3x3 median filter        MSE = 0.0089               MSE = 0.0042

[Figure: the corresponding noisy and filtered images.]
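
• A sketch of this comparison (the test image is an assumption, so the
exact MSE values will differ; the noise density matches the figure):

    f  = im2double(imread('cameraman.tif'));
    fn = imnoise(f, 'salt & pepper', 0.2);     % salt & pepper noise, prob. = 0.2
    ga = filter2(ones(3)/9, fn, 'same');       % 3x3 averaging filter
    gm = medfilt2(fn, [3 3]);                  % 3x3 median filter
    mse = @(a, b) mean((a(:) - b(:)).^2);
    [mse(fn, f), mse(ga, f), mse(gm, f)]       % median filter should give the lowest MSE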


Image Sharpening
• This involves highlighting fine details or enhancing details
that have been blurred.
Basic highpass spatial filtering
• This can be accomplished by a linear shift-invariant operator,
implemented by means of a mask, with positive and negative
coefficients.

• This is called a sharpening mask, since it tends to enhance
abrupt graylevel changes in the image.

• The mask should have a positive coefficient at the center and
negative coefficients at the periphery. The coefficients should
sum to zero. Example:

      1/9 × [ −1 −1 −1 ]
            [ −1  8 −1 ]
            [ −1 −1 −1 ]
• This is equivalent to highpass filtering.

• A highpass filtered image g can be thought of as the difference
between the original image f and a lowpass filtered version of f:

      g(m, n) = f(m, n) − lowpass(f(m, n))
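
• A minimal sketch of both views of highpass filtering; with the 3 x 3
sharpening mask above and a 3 x 3 averaging lowpass, the two results are
identical (the test image is an assumption):

    f  = im2double(imread('cameraman.tif'));
    hs = [-1 -1 -1; -1 8 -1; -1 -1 -1] / 9;    % sharpening (highpass) mask
    g1 = filter2(hs, f, 'same');               % direct highpass filtering
    g2 = f - filter2(ones(3)/9, f, 'same');    % original minus lowpass version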

Example
High-boost filtering

• This is a filter whose output g is produced by subtracting a
lowpass (blurred) version of f from an amplified version of f:

      g(m, n) = A f(m, n) − lowpass(f(m, n))

  This is also referred to as unsharp masking.

• Observe that

      g(m, n) = A f(m, n) − lowpass(f(m, n))
              = (A − 1) f(m, n) + f(m, n) − lowpass(f(m, n))
              = (A − 1) f(m, n) + highpass(f(m, n))

• For A > 1, part of the original image is added back to the
highpass filtered version of f.

• The result is the original image with the edges enhanced
relative to the original image.
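
• A minimal sketch of high-boost filtering (the value A = 1.5, the 3 x 3
averaging lowpass and the test image are arbitrary choices):

    f  = im2double(imread('cameraman.tif'));
    A  = 1.5;
    lp = filter2(ones(3)/9, f, 'same');        % lowpass (blurred) version of f
    g  = A*f - lp;                             % = (A-1)*f + highpass(f)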

Example:

[Figure: original image, result of highpass filtering, and result of
high-boost filtering.]
Derivative filter
• Averaging tends to blur details in an image. Averaging
involves summation or integration.

• Naturally, differentiation or “differencing” would tend to
enhance abrupt changes, i.e., sharpen edges.

• The most common differentiation operator is the gradient:

      ∇f(x, y) = [ ∂f(x, y)/∂x ]
                 [ ∂f(x, y)/∂y ]

• The magnitude of the gradient is:

      |∇f(x, y)| = [ (∂f(x, y)/∂x)² + (∂f(x, y)/∂y)² ]^(1/2)

• Discrete approximations to the magnitude of the gradient are
normally used. Consider the following image region:

z1 z2 z3
z4 z5 z6
z7 z8 z9
• We may use the approximation

      |∇f(x, y)| ≈ [ (z5 − z8)² + (z5 − z6)² ]^(1/2)

• This can be implemented using the masks

      h1 = [  1 ]      and      h2 = [ 1  −1 ]
           [ −1 ]

  as follows:

      |∇f(x, y)| ≈ [ (f * h1)² + (f * h2)² ]^(1/2)

• Alternatively, we may use the approximation:

      |∇f(x, y)| ≈ [ (z5 − z9)² + (z6 − z8)² ]^(1/2)

• This can be implemented using the masks

      h1 = [ 1   0 ]      and      h2 = [  0   1 ]
           [ 0  −1 ]                    [ −1   0 ]

  as follows:

      |∇f(x, y)| ≈ [ (f * h1)² + (f * h2)² ]^(1/2)

• The resulting masks are called Roberts cross-gradient operators.
• The Roberts operators and the Prewitt/Sobel operators
(described later) are used for edge detection and are
sometimes called edge detectors.
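
• A minimal sketch of the Roberts cross-gradient operator (the test image
is an assumption):

    f  = im2double(imread('cameraman.tif'));
    h1 = [1 0; 0 -1];
    h2 = [0 1; -1 0];
    g  = sqrt(filter2(h1, f, 'same').^2 + filter2(h2, f, 'same').^2);   % gradient magnitude
    % edge(f, 'roberts') thresholds a similar gradient estimate to give a binary edge map.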

Example: Roberts cross-gradient operator

[Figure: results of the two mask pairs above: h1 = [1; −1], h2 = [1 −1],
and the Roberts masks h1 = [1 0; 0 −1], h2 = [0 1; −1 0].]
• Better approximations to the gradient can be obtained by:

      |∇f(x, y)| ≈ [ ((z7 + z8 + z9) − (z1 + z2 + z3))²
                     + ((z3 + z6 + z9) − (z1 + z4 + z7))² ]^(1/2)

• This can be implemented using the masks

      h1 = [ −1  −1  −1 ]      and      h2 = [ −1   0   1 ]
           [  0   0   0 ]                    [ −1   0   1 ]
           [  1   1   1 ]                    [ −1   0   1 ]

  as follows:

      |∇f(x, y)| ≈ [ (f * h1)² + (f * h2)² ]^(1/2)

• The resulting masks are called Prewitt operators.

• Another approximation is given by the masks:

      h1 = [ −1  −2  −1 ]      and      h2 = [ −1   0   1 ]
           [  0   0   0 ]                    [ −2   0   2 ]
           [  1   2   1 ]                    [ −1   0   1 ]
• The resulting masks are called Sobel operators.
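
• A minimal sketch of the Prewitt and Sobel gradient-magnitude images (the
test image is an assumption):

    f   = im2double(imread('cameraman.tif'));
    hp1 = [-1 -1 -1; 0 0 0; 1 1 1];   hp2 = [-1 0 1; -1 0 1; -1 0 1];   % Prewitt
    hs1 = [-1 -2 -1; 0 0 0; 1 2 1];   hs2 = [-1 0 1; -2 0 2; -1 0 1];   % Sobel
    gPrewitt = sqrt(filter2(hp1, f, 'same').^2 + filter2(hp2, f, 'same').^2);
    gSobel   = sqrt(filter2(hs1, f, 'same').^2 + filter2(hs2, f, 'same').^2);
    % edge(f, 'prewitt') and edge(f, 'sobel') threshold these estimates to binary edge maps.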

[Figure: results of the Prewitt and Sobel operators.]
