
Debark University Department of computer science Computer vision and image processing

Chapter three: Image Enhancement

Why perform enhancement?

Natural images can be degraded when they are acquired due to:

 Poor contrast due to poor illumination or the finite sensitivity of the imaging device.
 Electronic sensor noise or atmospheric disturbances leading to broadband noise.
 Aliasing effects due to inadequate sampling.
 Finite aperture effects or motion leading to spatial errors.
 Lighting conditions.
 Sensor resolution and quality.
 Limitations or noise of the optical system.

What are the Challenges in image enhancement?

The principal objective of image enhancement is to process a given image so that the result is
more suitable than the original image for a specific application. It sharpens image features such
as edges, boundaries, or contrast to make a graphic display more helpful for display and analysis.

The primary condition for image enhancement is that the information that you want to extract,
emphasize or restore must exist in the image. Fundamentally, ‘you cannot make something out of
nothing’ and the desired information must not be totally swamped by noise within the image.
Perhaps the most accurate and general statement we can make about the goal of image
enhancement is simply that the processed image should be more suitable than the original one for
the required task or purpose. This makes the evaluation of image enhancement, by its nature,
rather subjective and, hence, it is difficult to quantify its performance apart from its specific
domain of application. An image enhancement algorithm makes such degraded images better
perceived visually. There are various simple algorithms for image enhancement: contrast
enhancement can be based on lookup tables, while noise removal can use simple linear
filtering methods.

Gizachew M. Page 1

Image Enhancement Methods

• Image enhancement methods can be:

– Spatial Domain Methods: techniques based on direct manipulation of pixels in an image.

– Frequency Domain Methods: techniques based on modifying the Fourier transform of the image.

– Combination Methods: enhancement techniques based on various combinations of methods from the two categories.

Image enhancement techniques improve the quality of an image as perceived by a human.

These techniques are most useful because many satellite images, when examined on a color
display, give inadequate information for human interpretation. There exists a wide variety of
techniques for improving image quality, such as contrast stretching, density slicing, and edge
enhancement.

Contrast

Contrast generally refers to the difference in luminance or grey level values in an image and is an
important characteristic. It can be defined as the ratio of the maximum intensity to the minimum
intensity over an image.

Contrast ratio has a strong bearing on the resolving power and detectability of an image. The
larger the ratio, the easier it is to interpret the image.

C = Imax / Imin

Contrast enhancement can be effected by linear and non-linear transformations (reading assignment).
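As a small illustration, the contrast ratio C = Imax/Imin and one simple linear contrast stretch can be computed as follows. The sample pixel values and the stretch to the full 0–255 range are assumptions for this sketch, not taken from the text:

```python
import numpy as np

# a small sample image (hypothetical values for illustration)
img = np.array([[52, 55, 61],
                [59, 79, 61],
                [85, 76, 62]], dtype=float)

C = img.max() / img.min()  # contrast ratio C = Imax / Imin

# one simple linear transformation: stretch intensities to the full 0-255 range
stretched = (img - img.min()) * 255.0 / (img.max() - img.min())
```

After the stretch the darkest pixel maps to 0 and the brightest to 255, so the image uses the whole available intensity range.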

Enhancing an image provides better contrast and a more detailed image compared to the
non-enhanced image. Image enhancement has many useful applications: it is used to enhance
medical images, images captured in remote sensing, satellite images, etc.


Spatial domain processes will be denoted by the expression g(x,y) = T[f(x,y)],
where f(x,y) is the input image, g(x,y) is the processed image, and T is an operator on f, defined
over some neighborhood of (x,y). The principal approach in defining a neighborhood about a
point (x,y) is to use a square or rectangular subimage area centered at (x,y).

What is a Histogram?

In statistics, a histogram is a graphical representation showing a visual impression of the
distribution of data. An image histogram is a type of histogram that acts as a graphical
representation of the lightness/color distribution in a digital image, plotting the number of pixels
at each value. Histograms are the basis for numerous spatial domain processing techniques:
histogram manipulation can be used effectively for image enhancement and to provide useful
image statistics. Information derived from histograms is also quite useful in other image
processing applications, such as image compression and segmentation.
Histogram equalization
Equalization increases the global contrast of many images, especially when the usable data of the
image is represented by close contrast values. Through this adjustment, the intensities can be
better distributed on the histogram. This allows for areas of lower local contrast to gain higher
contrast. Histogram equalization accomplishes this by effectively spreading out the most
frequent intensity values. The method is useful in images with backgrounds and foregrounds
that are both bright or both dark. In particular, the method can lead to better views of bone
structure in x-ray images. In an image of low contrast, the grey levels are concentrated in
a narrow band. This shows up in the grey level histogram of the image, h(i), where
h(i) = number of pixels with grey level i.
Since the human eye is sensitive to contrast rather than absolute pixel intensities, we would
perceive less information from an image with poor intensity distributions than from the same
image with better intensity distributions. Images with skewed distributions can be helped with
histogram equalization. Histogram equalization is a point process that redistributes the image's
intensity distributions in order to obtain a uniform histogram for the image. Histogram
equalization can be done in three steps:

1. Compute the histogram of the image


2. Calculate the normalized sum of histogram


3. Transform the input image to an output image

Example: take a sample image with a skewed histogram (poor intensity distribution)

and then show the histogram-equalized image for the given skewed image.

4 4 4 4 4
3 4 5 4 3
3 5 5 5 3
3 4 5 4 3
4 4 4 4 4

Step 1: list the frequency of each gray level: (0,0), (1,0), (2,0), (3,6), (4,14), (5,5), (6,0), (7,0)

Step 2: find the number of bits n needed to represent the highest gray level, here 5. We need
2^n > 5, and 2^3 = 8 > 5, so n = 3 bits and the gray levels range over 0–7.

Step3: Calculate the normalized sum of histogram

First we have to calculate the PMF (probability mass function) of all the pixels in this image,
which gives the probability of each gray level in the data set — basically the count or frequency
of each level divided by the total number of pixels. Second we calculate the CDF (cumulative
distribution function), which calculates the cumulative sum of the values produced by the PMF;
each entry basically adds up all the previous ones.

Gray level | No. of pixels | PMF (pixels/sum) | CDF (sum) | CDF × max gray level | Histogram level
0          | 0             | 0                | 0         | 0                    | 0
1          | 0             | 0                | 0         | 0                    | 0
2          | 0             | 0                | 0         | 0                    | 0
3          | 6             | 6/25 = 0.24      | 0.24      | 1.68                 | 2
4          | 14            | 14/25 = 0.56     | 0.8       | 5.6                  | 6
5          | 5             | 5/25 = 0.2       | 1.0       | 7                    | 7
6          | 0             | 0                | 1.0       | 7                    | 7
7          | 0             | 0                | 1.0       | 7                    | 7

Step 4: Transform the input image to an output image

Input pixel value  | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7
Output pixel value | 0 | 0 | 0 | 2 | 6 | 7 | 7 | 7

Finally, after histogram equalization the skewed input image becomes:

Input image:        Output image after equalization:
4 4 4 4 4           6 6 6 6 6
3 4 5 4 3           2 6 7 6 2
3 5 5 5 3           2 7 7 7 2
3 4 5 4 3           2 6 7 6 2
4 4 4 4 4           6 6 6 6 6
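The three steps can be sketched in NumPy and checked against the worked example; the 3-bit image (gray levels 0–7) and the rounding of CDF × 7 follow the tables above:

```python
import numpy as np

img = np.array([[4, 4, 4, 4, 4],
                [3, 4, 5, 4, 3],
                [3, 5, 5, 5, 3],
                [3, 4, 5, 4, 3],
                [4, 4, 4, 4, 4]])
L = 8  # 3-bit image: gray levels 0..7

# Step 1: compute the histogram of the image
hist = np.bincount(img.ravel(), minlength=L)

# Step 2: normalized sum of the histogram (PMF, then cumulative CDF)
cdf = np.cumsum(hist / img.size)

# Step 3: transform input to output via round(CDF * max gray level)
mapping = np.round(cdf * (L - 1)).astype(int)
equalized = mapping[img]
```

The resulting mapping is exactly the transformation table of step 4, and `equalized` reproduces the output image above.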

Grey level transformation


There are three basic gray level transformations.
 Linear
 Logarithmic
 Power – law

Linear transformation includes the simple identity and negative transformations.

 In the identity transformation, each value of the input image is directly mapped to the same
value of the output image, so the output image is identical to the input image.
 In the negative transformation, each value of the input image is subtracted from L-1 and
mapped onto the output image.


For instance, the following transformation is applied: s = (L – 1) – r. Since the input image of
Einstein is an 8 bpp image, the number of levels in this image is 256. Putting L = 256 in the
equation, we get:

 s = 255 – r
So each value is subtracted from 255, and in the resulting image the lighter pixels become dark
and the darker pixels become light. The result is the image negative.
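A minimal sketch of the negative transformation for an 8 bpp image; the sample pixel values are illustrative:

```python
import numpy as np

# negative transformation s = (L - 1) - r with L = 256
r = np.array([[0, 64, 128],
              [192, 255, 10]])
s = 255 - r  # dark pixels become light and vice versa
```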

Log Transformations

The general form of the log transformation is: s = c * log(1 + r)
where s and r are the pixel values of the output and input images and c is a constant. The
value 1 is added to each pixel value of the input image because if there is a pixel
intensity of 0 in the image, then log(0) is undefined. So 1 is added, to make the
argument of the logarithm at least 1.
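A sketch of the log transformation; choosing c so that the largest input maps to 255 is a common convention, assumed here rather than fixed by the text:

```python
import numpy as np

r = np.array([[0, 10, 100],
              [1000, 20000, 65535]], dtype=float)  # illustrative values

# s = c * log(1 + r); pick c so that the maximum input maps to 255
c = 255.0 / np.log(1.0 + r.max())
s = c * np.log(1.0 + r)
```

Note how the wide dynamic range of the input is compressed: small values are spread out while very large values are squeezed together.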
Power-Law Transformations

There are two further transformations among the power law transformations: the nth power and nth root
transformations. These transformations can be given by the expression:

s = c·r^γ

The symbol γ is called gamma, due to which this transformation is also known as the gamma
transformation. Varying the value of γ varies the enhancement of the images. Different
display devices/monitors have their own gamma correction, which is why they display their
images at different intensities. This type of transformation is used for enhancing images for
different types of display devices. The gamma of different display devices is different; for
example, the gamma of a CRT lies between 1.8 and 2.5, which means the image displayed on a CRT is
dark. The larger the value of gamma, the darker the image becomes.
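The gamma transformation on intensities normalized to [0, 1] can be sketched as follows; the sample gamma of 2.2 is illustrative of a CRT-like response:

```python
import numpy as np

r = np.linspace(0.0, 1.0, 5)  # normalized pixel values
gamma = 2.2                   # illustrative CRT-like gamma
c = 1.0
s = c * r ** gamma            # gamma > 1 darkens the mid-tones
```

With γ > 1 the mid-tones move down (the image darkens); with γ < 1 (an nth-root transformation) they would move up instead.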

Image Filtering

Simple image operators can be classified as 'pointwise' or 'neighborhood' (filtering) operators.

Histogram equalization is a pointwise operation. More general filtering operations use
neighborhoods of pixels. Filtering is a technique for modifying or enhancing an image. In spatial
domain operation or filtering, the processed value for the current pixel depends on both the pixel
itself and the surrounding pixels. Hence filtering is a neighborhood
operation, in which the value of any given pixel in the output image is determined by applying
some algorithm to the values of the pixels in the neighborhood of the corresponding input pixel.
A pixel's neighborhood is some set of pixels, defined by their locations relative to that pixel.

Image filtering is used to:

 Remove noise
 Sharpen contrast
 Highlight contours
 Detect edges 

Spatial domain filtering


Some neighborhood operations work with the values of the image pixels in the neighborhood,
and the corresponding values of a subimage that has the same dimensions as the neighborhood.
The subimage is called a filter (or mask, kernel, template, window). The values in a filter
subimage are referred to as coefficients, rather than pixels. The operation on the image modifies
the pixels in an image based on some function of the pixels in their neighborhood.
Note that the size of the mask must be odd (3x3, 5x5, etc.) to ensure it has a center; the minimum
mask size is 3x3.


Figure 2.1: the mechanism of a spatial filter (linear or non-linear)


Simplest operation:
Linear filtering (replace each pixel by a linear combination of its neighbors). Linear spatial
filtering is often referred to as "convolving an image with a filter". At each pixel (x,y), the
response is given by the sum of the products of the filter coefficients and the corresponding
image pixels in the area spanned by the filter mask. For the 3x3 mask shown in the previous
figure, the result (or response), R, of linear filtering is

R = w(-1,-1)f(x-1,y-1) + w(-1,0)f(x-1,y) + … + w(0,0)f(x,y) + … + w(1,0)f(x+1,y) + w(1,1)f(x+1,y+1)

Linear filtering and convolution involve 'overlap – multiply – add' with a 'convolution mask'.
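The 'overlap – multiply – add' response R for a 3x3 mask can be sketched as a direct loop. Leaving border pixels unchanged is one of several possible border policies, chosen here for simplicity:

```python
import numpy as np

def linear_filter(img, w):
    """Apply a 3x3 mask w by sum-of-products at each interior pixel."""
    out = img.astype(float).copy()  # border values kept unchanged
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            # overlap - multiply - add over the 3x3 neighborhood
            out[y, x] = np.sum(w * img[y - 1:y + 2, x - 1:x + 2])
    return out
```

As a sanity check, the identity mask (all zeros with a 1 at the center) returns the image unchanged.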


Image Filtering

Spatial domain filtering is classified into two:

1. Smoothing spatial filter (low pass filter)
   a. Averaging linear filter
   b. Order-statistics non-linear filter (Median, Max, Min)
2. Sharpening spatial filter

Smoothing spatial filter

Smoothing spatial filters are used for blurring and for noise reduction. Blurring is used as a
preprocessing step to remove small details from an image prior to (large) object extraction and
to bridge small gaps in lines or curves. Noise reduction can be accomplished by blurring with a
linear filter or a non-linear filter.

Averaging linear filter


The response of an averaging filter is simply the average of the pixels contained in the
neighborhood of the filter mask. The output of an averaging filter is a smoothed image with
reduced "sharp" transitions in gray level. Noise and edges consist of sharp transitions in gray
level; thus smoothing filters are used for noise reduction, although they have the undesirable
side effect of blurring edges. The average filter works by moving through the image pixel by
pixel, replacing each value with the average value of neighboring pixels, including itself.

Two common 3x3 averaging linear filters and a 5x5 averaging filter are:

Standard average (1/9):   Weighted average (1/16):   5x5 average (1/25):
1 1 1                     1 2 1                      1 1 1 1 1
1 1 1                     2 4 2                      1 1 1 1 1
1 1 1                     1 2 1                      1 1 1 1 1
                                                     1 1 1 1 1
                                                     1 1 1 1 1

1. 2D average filtering example using a 3x3 sampling window, for the shaded pixel values,
keeping border values unchanged:

Input:          Output:
1 4 0 1 3 1     1 4 0 1 3 1
2 2 4 2 2 3     2 2 2 2 1 3
1 0 1 0 1 0     1 2 1 1 1 0
1 2 1 0 2 2     1 2 1 1 1 2
2 5 3 1 2 5     2 2 2 2 2 5
1 1 4 2 3 0     1 1 4 2 3 0

The convolution mask follows the rule 'overlap – multiply – add' with the 'convolution mask'.

For pixel (1,1), overlap the 1/9 mask

1 1 1
1 1 1
1 1 1

with the input neighborhood

1 4 0
2 2 4
1 0 1

(1,1) = 1/9 [(1x1)+(1x4)+(1x0)+(1x2)+(1x2)+(1x4)+(1x1)+(1x0)+(1x1)] = 1/9 [1+4+2+2+4+1+1] = 15/9 = 1.66 ≈ 2

Gizachew M. Page 10
Debark University Department of computer science Computer vision and image processing

For pixel (1,2), overlap the 1/9 mask with the input neighborhood

4 0 1
2 4 2
0 1 0

(1,2) = 1/9 [(1x4)+(1x0)+(1x1)+(1x2)+(1x4)+(1x2)+(1x0)+(1x1)+(1x0)] = 1/9 [4+1+2+4+2+1] = 14/9 = 1.56 ≈ 2
Calculate the rest of the pixel values in the same manner to obtain the output shown above.
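The whole worked example can be reproduced with a short loop, rounding each 3x3 average to the nearest integer and keeping the border values unchanged as stated above:

```python
import numpy as np

img = np.array([[1, 4, 0, 1, 3, 1],
                [2, 2, 4, 2, 2, 3],
                [1, 0, 1, 0, 1, 0],
                [1, 2, 1, 0, 2, 2],
                [2, 5, 3, 1, 2, 5],
                [1, 1, 4, 2, 3, 0]], dtype=float)

out = img.copy()  # border values are kept unchanged
for y in range(1, img.shape[0] - 1):
    for x in range(1, img.shape[1] - 1):
        # average of the 3x3 neighborhood, rounded to the nearest gray level
        out[y, x] = round(img[y - 1:y + 2, x - 1:x + 2].mean())
```

Running this produces exactly the output matrix of the example, including the hand-computed values (1,1) = 2 and (1,2) = 2.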
2. 2D average filtering example using a 3x3 sampling window, for the shaded pixel values:
extending border values outside with the values at the boundary.

3. 2D median filtering example using a 3x3 sampling window: extending border values
outside with 0s (zero-padding).


Order-statistics filters (median filter)


Order-statistics filters are nonlinear spatial filters whose response is based on ordering (ranking)
the pixels contained in the image area encompassed by the filter, and then replacing the value of
the center pixel with the value determined by the ranking result. The best-known example is the
median filter. Median filtering is a nonlinear method used to remove noise from images. It is
widely used as it is very effective at removing noise while preserving edges; it is particularly
effective at removing 'salt and pepper' type noise. The median filter works by moving through the
image pixel by pixel, replacing each value with the median value of the neighboring pixels.
Process of the median filter:
The median is calculated by cropping the region of the neighborhood, first sorting all the pixel
values from the window into numerical order, and then replacing the pixel being considered with
the middle (median) pixel value.
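The process can be sketched as follows, using zero-padding at the borders (one of the three border policies shown in the examples below):

```python
import numpy as np

def median_filter(img):
    """3x3 median filter with zero-padding at the borders."""
    padded = np.pad(img, 1, mode='constant', constant_values=0)
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            # sort the 9 window values and take the middle one
            window = np.sort(padded[y:y + 3, x:x + 3].ravel())
            out[y, x] = window[4]
    return out
```

A single 'salt' pixel of 255 in a flat region of 10s is replaced by 10, which is exactly the salt-and-pepper behavior described above.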

1. 2D Median filtering example using a 3 x 3 sampling window: Keeping border values


unchanged


2. 2D Median filtering example using a 3 x 3 sampling window: Extending border values


outside with values at boundary

3. 2D Median filtering example using a 3 x 3 sampling window: Extending border values


outside with 0s

Sharpening spatial filter


The principal objective of sharpening is to highlight fine detail in an image or to enhance detail
that has been blurred, either in error or as a natural effect of a particular method of image
acquisition. Uses of image sharpening range from electronic printing and medical imaging to
industrial inspection and autonomous guidance in military systems.
High spatial frequency components contain detailed information in the form of edges and
boundaries, and this information should be extracted. Image sharpening algorithms are used to
separate object outlines; therefore image sharpening filters are also called edge enhancement or
edge crispening algorithms. Image blurring is accomplished in the spatial domain by pixel
averaging in a neighborhood; it is a process of integration. Sharpening can be accomplished by
spatial differentiation (finding the difference within a neighborhood). Thus, image
differentiation enhances edges and other discontinuities (such as noise) and de-emphasizes areas
with slowly varying intensities, i.e. it emphasizes transitions in image intensity. Smoothing is
often referred to as low-pass filtering; in a similar manner, sharpening is often referred to as
high-pass filtering. In this case, high frequencies (which are responsible for fine details) are
passed, while low frequencies are attenuated or rejected.
Derivative operator: this operator calculates the gradient of the image intensity at each point,
giving the direction of the largest possible increase from light to dark and the rate of change in
that direction.
Gradient Filter
Edges can be extracted by taking the gradient of the image. The gradient refers to the difference
between the pixels of an image.
 If neighbouring pixels have the same intensity, the difference is zero and hence there is no
edge.
 Edges exist where there is a significant local intensity variation.
There are two ways to apply sharpening filters that are based on first- and second-order
derivatives.
Derivatives of a digital function are defined in terms of differences. There are various ways to
define these differences. However, we require that any definition we use for a first derivative:
1. Must be zero in areas of constant intensity.
2. Must be nonzero at the onset of an intensity step or ramp.
3. Must be nonzero along intensity ramps.


Similarly, any definition of a second derivative:

1. Must be zero in areas of constant intensity.
2. Must be nonzero at the onset and end of an intensity step or ramp.
3. Must be zero along intensity ramps of constant slope.
Reading assignment: first order and second order derivatives.
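These properties can be checked quickly on a 1-D signal, using the usual difference definitions f(x+1) − f(x) for the first derivative and f(x+1) − 2f(x) + f(x−1) for the second. These standard definitions are assumed here, since the details are left as a reading assignment:

```python
import numpy as np

# a flat segment, a ramp of constant slope, then another flat segment
f = np.array([5, 5, 5, 6, 7, 8, 8, 8], dtype=float)

first = f[1:] - f[:-1]                 # f(x+1) - f(x)
second = f[2:] - 2 * f[1:-1] + f[:-2]  # f(x+1) - 2 f(x) + f(x-1)
```

The first difference is zero in the flat areas and nonzero along the whole ramp; the second difference is nonzero only at the onset and end of the ramp and zero along its constant slope, matching the three conditions for each derivative.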
Common types of noise
Noise is typically defined as a random variation in brightness or colour information; it is
frequently produced by technical limits of the image collection sensor or by improper
environmental circumstances. Noise may be introduced into the image during image acquisition
and transmission, and its introduction can be caused by several factors. The quantity of noise is
determined by the number of corrupted pixels in the image. The following are the primary
sources of noise in digital images:
 Environmental factors may have an impact on the imaging sensor.
 Low light and sensor temperature may cause image noise.
 Dust particles in the scanner can cause noise in the digital image.
 Transmission channel interference.
Common noise
 Salt-and-pepper noise: contains random occurrences of black and white pixels;
commonly seen in photographs.
 Impulse noise: contains random occurrences of white pixels.
Impulse noises are short-duration noises which degrade an image. They may occur during image
acquisition, due to switching or sensor temperature. They may also occur due to interference in
the channel and due to atmospheric disturbances during image transmission.
 Gaussian noise: variations in intensity drawn from a Gaussian (normal) distribution.
Enhancement in Frequency Domain

•Fourier Series: Any function that periodically repeats itself can be expressed as the sum of
sines/cosines of different frequencies, each multiplied with a different coefficient.

In the frequency domain approach, a digital image is converted from the spatial domain to the
frequency domain, where image filtering is used for image enhancement for a specific
application. The Fast Fourier Transform is the tool used to convert from the spatial domain to the
frequency domain. For smoothing an image, a low pass filter is implemented, and for sharpening
an image, a high pass filter is implemented. Both kinds of filter can be analyzed as an ideal
filter, a Butterworth filter, or a Gaussian filter. The frequency domain is a space which is
defined by the Fourier transform.

The Fourier transform is a tool for image processing used to decompose an image into sine and
cosine components. The input image is in the spatial domain and the output is represented in the
Fourier or frequency domain. The Fourier transform is used in a wide range of applications such
as image filtering, image compression, image analysis, image reconstruction, etc.

 Think of each color plane as a sinusoidal function of changing intensity values.
 Refers to organizing pixels according to their changing intensity (frequency).
 Frequency domain filtering is used when one cannot find a straightforward kernel in
spatial domain filtering.

Image enhancement in the frequency domain is straightforward.

 Steps:
1. Compute the Fourier transform of the image to be enhanced,
2. Multiply the result by a filter, and
3. Take the inverse transform to produce the enhanced image.


Transformation

A signal can be converted from the time domain into the frequency domain using mathematical
operators called transforms. There are many kinds of transformations that do this. Some of them
are given below.

 Fourier Series

 Fourier transformation

 Laplace transform

 Z transform

Frequency components

Any image in the spatial domain can be represented in the frequency domain. But what do these
frequencies actually mean? We divide frequency components into two major components.

High frequency components: high frequency components correspond to edges in an image.

Low frequency components: low frequency components in an image correspond to smooth
regions.
How can we analyze what a given filter does to high, medium, and low frequencies?
 The answer is to simply pass a sinusoid of known frequency through the filter and to observe
by how much it is attenuated.
 A sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation.


It occurs often in pure and applied mathematics, as well as physics, engineering, signal
processing and many other fields.

Filtering in the Frequency Domain

Figure: basic steps for filtering in the frequency domain

Some basic properties of the frequency domain: frequency is directly related to the rate of
change. The slowest varying component (u = v = 0) corresponds to the average intensity level of
the image, and to the origin of the Fourier spectrum. Higher frequencies correspond to the faster
varying intensity level changes in the image; the edges of objects and other components
characterized by abrupt changes in intensity level correspond to these higher frequencies.

Basic Steps for Filtering in the Frequency Domain:


1. Multiply the input image by (-1)^(x+y) to center the transform.
2. Compute F(u,v), the DFT (discrete Fourier transform) of the image from (1).
3. Multiply F(u,v) by a filter function H(u,v).
4. Compute the inverse DFT of the result in (3).
5. Obtain the real part of the result in (4).
6. Multiply the result in (5) by (-1)^(x+y).


Given the filter H(u,v) (filter transfer function) in the frequency domain, the Fourier transform of
the output image (filtered image) is given by:

G(u,v) = H(u,v) F(u,v)        — step (3)

The filtered image g(x,y) is simply the inverse Fourier transform of G(u,v):

g(x,y) = F⁻¹[G(u,v)]          — step (4)

Frequency Domain Filters types

1. Low pass filter:


A low pass filter removes the high frequency components, which means it keeps the low
frequency components. It is used to smooth the image by attenuating high frequency
components and preserving low frequency components.
The mechanism of low pass filtering in the frequency domain is given by:
G(u, v) = H(u, v) . F(u, v)

where F(u, v) is the Fourier transform of the original image

and H(u, v) is the Fourier transform of the filtering mask

2. High pass filter:


A high pass filter removes the low frequency components, which means it keeps the high
frequency components. It is used to sharpen the image by attenuating low frequency components
and preserving high frequency components.
A high pass transfer function can be obtained from a low pass one by:
H(u, v) = 1 - H'(u, v)

where H(u, v) is the high pass filter transfer function

and H'(u, v) is the low pass filter transfer function
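A sketch of building a high pass transfer function by inverting a low pass one, per H(u,v) = 1 − H'(u,v); the ideal filter shape and the cutoff value are illustrative choices:

```python
import numpy as np

def ideal_highpass(shape, cutoff):
    """High-pass transfer function as 1 minus an ideal low-pass one."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from the center
    H_lp = (D <= cutoff).astype(float)              # ideal low-pass H'(u,v)
    return 1.0 - H_lp                               # H(u,v) = 1 - H'(u,v)
```

Note that the centered DC component, which carries the average intensity, is rejected (H = 0 at the center), while frequencies beyond the cutoff pass unchanged.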


3. Band pass filter:


A band pass filter removes the very low frequency and very high frequency components, which
means it keeps a moderate band of frequencies. Band pass filtering is used to enhance edges
while reducing the noise at the same time.
