
28-09-2020

Unit- II

18CSE353T - Digital Image Processing


Dr. R. Rajkamal


Image Transformations
In General

Reflections: These are like mirror images as seen across a line or a point.

Translations (or slides): This moves the figure to a new location with no change to its appearance.

Rotations: This turns the figure clockwise or counterclockwise but doesn't change its shape or size.

Dilations: This reduces or enlarges the figure to a similar figure.


Reflections

You can reflect a figure using a line or a point. All measures (lines and angles) are
preserved but in a mirror image.

Example: The figure is reflected across line l. If you fold the picture along line l, the left figure would coincide with the corresponding parts of the right figure.


Reflections
Reflection across the x-axis: the x values stay the same and the y values change sign.
(x, y) → (x, -y)

Reflection across the y-axis: the y values stay the same and the x values change sign.
(x, y) → (-x, y)

Example: In this figure, line l:

• reflects across the y-axis to line n:
(2, 1) → (-2, 1) & (5, 4) → (-5, 4)

• reflects across the x-axis to line m:
(2, 1) → (2, -1) & (5, 4) → (5, -4)
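These coordinate rules can be sketched in a few lines of Python; the function names below are my own illustration, not part of the course material:

```python
def reflect_x(point):
    """Reflect across the x-axis: (x, y) -> (x, -y)."""
    x, y = point
    return (x, -y)

def reflect_y(point):
    """Reflect across the y-axis: (x, y) -> (-x, y)."""
    x, y = point
    return (-x, y)

# The slide's example: the points (2, 1) and (5, 4) on line l
print(reflect_y((2, 1)), reflect_y((5, 4)))  # (-2, 1) (-5, 4)
print(reflect_x((2, 1)), reflect_x((5, 4)))  # (2, -1) (5, -4)
```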

Reflections across specific lines


To reflect a figure across the line y = a or x = a, mark the corresponding image points equidistant from the line: if a point is 2 units above the line, its image point must be 2 units below the line.

Example:
Reflect the figure across the line y = 1.
(2, 3) → (2, -1)
(-3, 6) → (-3, -4)
(-6, 2) → (-6, 0)

Lines of Symmetry

• If a line can be drawn through a figure so that one side of the figure is a reflection of the other side, the line is called a "line of symmetry."
• Some figures have 1 or more lines of symmetry.
• Some have no lines of symmetry.

[Figures: one line of symmetry; two lines of symmetry; four lines of symmetry; infinite lines of symmetry; no lines of symmetry]


Translations

• If a figure is simply moved to another location without change to its shape or direction, it
is called a translation (or slide).
• If a point is moved “a” units to the right and “b” units up, then the translated point will
be at (x + a, y + b).
• If a point is moved “a” units to the left and “b” units down, then the translated point will
be at (x - a, y - b).
Example:
Image A translates to image B by moving 3 units to the right and 8 units down.
A (2, 5) → B (2 + 3, 5 - 8) = B (5, -3)
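The translation rule (x, y) → (x + a, y + b) can be sketched as follows; this is my own illustration, not from the slides:

```python
def translate(point, a, b):
    """Slide a point a units right (negative = left) and b units up (negative = down)."""
    x, y = point
    return (x + a, y + b)

# The slide's example: A(2, 5) moved 3 units right and 8 units down
print(translate((2, 5), 3, -8))  # (5, -3)
```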

Composite Reflections

• If an image is reflected over a line and then that image is reflected over a parallel line
(called a composite reflection), it results in a translation.

Example: Image A reflects to image B, which then reflects to image C. Image C is a translation of image A.


Rotations

• An image can be rotated about a fixed point.


• The blades of a fan rotate about a fixed point.
• An image can be rotated over two intersecting lines by using composite
reflections.
Image A reflects over line m to B; image B reflects over line n to C. Image C is a rotation of image A.

[Figure: images A, B, C and intersecting lines m and n]


Rotations

It is a type of transformation where the object is rotated around a fixed point called the point of
rotation.

When a figure is rotated 90° counterclockwise about the origin, swap the two coordinates and multiply the new first coordinate by -1:
(x, y) → (-y, x)
Ex: (1, 2) → (-2, 1) & (6, 2) → (-2, 6)

When a figure is rotated 180° about the origin, multiply both coordinates by -1:
(x, y) → (-x, -y)
Ex: (1, 2) → (-1, -2) & (6, 2) → (-6, -2)
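Both rotation rules can be sketched directly; the function names are my own illustration:

```python
def rotate90_ccw(point):
    """90° counterclockwise about the origin: (x, y) -> (-y, x)."""
    x, y = point
    return (-y, x)

def rotate180(point):
    """180° about the origin: (x, y) -> (-x, -y)."""
    x, y = point
    return (-x, -y)

# The slide's examples
print(rotate90_ccw((1, 2)), rotate90_ccw((6, 2)))  # (-2, 1) (-2, 6)
print(rotate180((1, 2)), rotate180((6, 2)))        # (-1, -2) (-6, -2)
```

Note that applying `rotate90_ccw` twice gives the same result as `rotate180`, which is why two reflections over perpendicular lines produce a half-turn.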


Angles of rotation

• In a given rotation, where A is the figure and B is the resulting figure after rotation,
and X is the center of the rotation, the measure of the angle of rotation AXB is twice
the measure of the angle formed by the intersecting lines of reflection.

Example: Segment AB is rotated over lines l and m, which intersect to form a 35° angle. Find the rotation image, segment KR.


Angles of Rotation

• Since the angle formed by the lines is 35°, the angle of rotation is 70°.
• 1. Draw AXK so that its measure is 70° and AX = XK.
• 2. Draw BXR to measure 70° and BX = XR.
• 3. Connect K to R to form the rotation image of segment AB.

[Figure: segment AB rotated about center X to segment KR; the lines of reflection intersect at 35°]


Dilations

• A dilation is a transformation which changes the size of a figure but not its shape. This
is called a similarity transformation.
• Since a dilation changes figures proportionately, it has a scale factor k.
– If the absolute value of k is greater than 1, the dilation is an enlargement.
– If the absolute value of k is between 0 and 1, the dilation is a reduction.
– If the absolute value of k is equal to 1, the dilation is a congruence transformation. (No size change occurs.)
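A dilation with center C and scale factor k simply scales the vector from C to each point by k. A minimal sketch (my own illustration, not from the slides):

```python
def dilate(point, center, k):
    """Scale the vector from center to point by the factor k."""
    cx, cy = center
    x, y = point
    return (cx + k * (x - cx), cy + k * (y - cy))

print(dilate((2, 4), (0, 0), 3))    # enlargement (|k| > 1)
print(dilate((2, 4), (0, 0), 0.5))  # reduction (0 < |k| < 1)
print(dilate((2, 4), (0, 0), 1))    # congruence (|k| = 1): point unchanged
```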


Dilations
• In the first figure, the center is C. The distance from C to E is three times the distance from C to A, and the distance from C to F is three times the distance from C to B. This is a transformation of segment AB with center C and a scale factor of 3 to the enlarged segment EF.

• In the second figure, the distance from C to R is ½ the distance from C to A, and the distance from C to W is ½ the distance from C to B. This is a transformation of segment AB with center C and a scale factor of ½ to the reduced segment RW.


Intensity Transformations
g(x, y) = T[f(x, y)]

• T is an operator on f defined over a neighborhood of (x, y); for example, T may compute the average intensity of the neighborhood.
• The neighborhood is rectangular, centered on (x, y), and much smaller in size than the image.
• The neighborhood is moved from pixel to pixel in a horizontal scan across the image.

The smallest possible neighborhood is of size 1 × 1. Then g depends only on the value of f at the single point (x, y), and T in g(x, y) = T[f(x, y)] becomes an intensity (also called gray-level or mapping) transformation function of the form

s = T(r)

where s and r are variables denoting, respectively, the intensity of g and f at any point (x, y).

Intensity Transformations
• In this technique, sometimes called contrast stretching, values of r lower than k are compressed by the transformation function into a narrow range of s, toward black, and values above k are stretched toward white.
• In the limiting case, T(r) produces a two-level (binary) image. A mapping of this form is called a thresholding function.

The effect of applying such a transformation to every pixel of f to generate the corresponding pixels in g is an image of higher contrast than the original, produced by darkening the intensity levels below k and brightening the levels above k.
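The limiting thresholding case can be sketched with NumPy; this is an illustrative sketch, not the course's code:

```python
import numpy as np

def threshold(f, k, L=256):
    """Two-level (binary) mapping: intensities below k go to 0 (black),
    intensities at or above k go to L-1 (white)."""
    return np.where(f < k, 0, L - 1).astype(np.uint8)

f = np.array([[ 10, 200],
              [128,  50]], dtype=np.uint8)
print(threshold(f, 128))  # values below 128 darkened to 0, the rest brightened to 255
```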


Intensity Transformations

Three basic types of functions used frequently for image enhancement:

• linear (negative and identity transformations)
• logarithmic (log and inverse-log transformations)
• power-law (nth power and nth root transformations)


Image Negative (Digital Mammogram)

• The negative of an image with intensity levels in the range [0, L-1] is obtained by using the negative transformation

s = L - 1 - r

• This is the equivalent of a photographic negative.
• Negative images are useful for enhancing white or gray detail embedded in dark regions of an image.

[Figure: original digital mammogram and its negative image]
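The negative transformation s = L - 1 - r applied per pixel can be sketched as follows (an illustrative NumPy sketch, not the course's code):

```python
import numpy as np

def negative(f, L=256):
    """Apply s = L - 1 - r to every pixel of an 8-bit image."""
    # Widen to int32 first so the subtraction cannot wrap around in uint8
    return (L - 1 - f.astype(np.int32)).astype(np.uint8)

f = np.array([[0, 100, 255]], dtype=np.uint8)
print(negative(f))  # [[255 155   0]]
```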



Log transform
• The log transformation is defined by the formula

s = c log(1 + r), where c is a constant

• The value 1 is added to each pixel value of the input image because a pixel intensity of 0 would give log(0), which is undefined; adding 1 makes the argument of the log at least 1.

[Figure: original image and log-transformed image]

During log transformation, the dark pixel values in an image are expanded relative to the higher pixel values, while the higher pixel values are compressed.
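A sketch of the log transformation, with c chosen so the brightest input maps to L - 1 (this choice of c is a common convention, assumed here rather than taken from the slides):

```python
import numpy as np

def log_transform(f, L=256):
    """s = c * log(1 + r), with c scaling the maximum input to L-1."""
    r = f.astype(np.float64)
    c = (L - 1) / np.log(1.0 + r.max())   # assumes the image is not all zeros
    return np.round(c * np.log(1.0 + r)).astype(np.uint8)

f = np.array([[0, 50, 255]], dtype=np.uint8)
print(log_transform(f))  # dark values expanded toward the bright end
```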


Power-law transform

• nth power and nth root transformations: s = c r^γ
• Also called the gamma transformation.
• Varying the value of γ varies the enhancement of the image.
• This type of transformation is used for enhancing images for different types of display devices, since the gamma of different display devices differs.
• For example, the gamma of a CRT lies between 1.8 and 2.5, which means the image displayed on a CRT appears dark.
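A sketch of the gamma transformation, computed on intensities normalized to [0, 1] (a common convention, assumed here): γ < 1 brightens an image (e.g., to pre-correct for a dark CRT), γ > 1 darkens it.

```python
import numpy as np

def gamma_transform(f, gamma, c=1.0, L=256):
    """s = c * r^gamma, with r normalized to [0, 1] before exponentiation."""
    r = f.astype(np.float64) / (L - 1)
    s = c * np.power(r, gamma)
    return np.round((L - 1) * s).astype(np.uint8)

f = np.array([[0, 64, 255]], dtype=np.uint8)
print(gamma_transform(f, 1 / 2.2))  # gamma < 1: midtones brightened
print(gamma_transform(f, 2.2))      # gamma > 1: midtones darkened
```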



Histogram
The histogram of a digital image with intensity levels in the range [0, L-1] is the discrete function

h(rk) = nk

where rk is the kth intensity value and nk is the number of pixels in the image with intensity rk.


Histogram

• Histograms are the basis for numerous spatial domain processing techniques.



Histogram
[Figure: dark, light, low-contrast, and high-contrast images and their corresponding histograms; x-axis: intensity values, y-axis: their frequencies]


Histogram
• Suppose that a 3-bit (L = 8) image of size MN = 4096 pixels has the intensity distribution shown in the table, where the intensity levels are integers in the range [0, L-1] = [0, 7].
• Values of the histogram-equalization transformation function are obtained using the cumulative sum of the normalized histogram:

sk = T(rk) = (L - 1) Σ (j = 0 to k) nj / MN

[Table: histogram of the hypothetical image]

s0 = 1.33, s1 = 3.08, s2 = 4.55, s3 = 5.67, s4 = 6.23, s5 = 6.65, s6 = 6.86, s7 = 7.00.



Histogram-Equalized Histograms

• Round off the transformed values to the nearest integer.
• Only 5 distinct intensity levels remain.

r0 = 0 was mapped to s0 = 1; there are 790 pixels in the histogram-equalized image with this value. Dividing these counts by MN = 4096 yields the equalized histogram.
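The equalization mapping (cumulative sum of the normalized histogram, scaled by L - 1 and rounded) can be sketched as follows. The tiny 3-bit "image" below is my own test data, not the table from the slides:

```python
import numpy as np

def equalize_hist(f, L):
    """Map each level r_k to s_k = round((L-1) * sum_{j<=k} n_j / MN)."""
    hist = np.bincount(f.ravel(), minlength=L)   # n_k
    p = hist / f.size                            # p_r(r_k) = n_k / MN
    s = np.round((L - 1) * np.cumsum(p)).astype(f.dtype)
    return s[f]                                  # apply the mapping per pixel

f = np.array([0, 0, 0, 0, 1, 2, 3, 7])  # a tiny 3-bit "image"
print(equalize_hist(f, L=8))
```

As on the slide, distinct input levels can round to the same output level, so the equalized image usually has fewer distinct intensities than L.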


Basics of Spatial Filtering


A spatial filter consists of:
(1) a neighborhood (typically a small rectangle), and
(2) a predefined operation that is performed on the image pixels encompassed by the neighborhood.



Smoothing Spatial Filters

• Smoothing filters are used for blurring and for noise reduction.
• Blurring is used in preprocessing tasks, such as removal of small details from an image prior to (large) object extraction, and bridging of small gaps in lines or curves.
• Noise reduction can be accomplished by blurring with a linear filter and also by nonlinear filtering.
• The output (response) of a smoothing, linear spatial filter is simply the average of the pixels contained in the neighborhood of the filter mask.
• These filters are sometimes called averaging filters or lowpass filters.


Smoothing Spatial Filters


• Replacing the value of every pixel in an image by the average of the intensity levels in the neighborhood defined by the filter mask produces an image with reduced "sharp" transitions in intensities.
• Random noise typically consists of sharp transitions in intensity levels, so the most obvious application of smoothing is noise reduction.

However, edges (which almost always are desirable features of an image) are also characterized by sharp intensity transitions, so averaging filters have the undesirable side effect of blurring edges.



Smoothing Spatial Filters


3 × 3 smoothing filters

• Use of the first filter yields the standard average of the pixels under the mask: the average of the intensity levels of the pixels in the neighborhood defined by the mask.
• The idea is that it is computationally more efficient to have coefficients valued 1; at the end of the filtering process the entire image is divided by 9.
• An m × n mask would have a normalizing constant equal to 1/mn.

A spatial averaging filter in which all coefficients are equal is sometimes called a box filter.
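A direct sketch of the 3 × 3 box filter (illustrative only; border pixels are simply dropped here rather than padded):

```python
import numpy as np

def box_filter_3x3(f):
    """Replace each pixel by the average of its 3x3 neighborhood.
    The border is cropped for simplicity (no padding)."""
    f = f.astype(np.float64)
    rows, cols = f.shape[0] - 2, f.shape[1] - 2
    g = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            # Sum all 9 coefficients-valued-1 products, divide by 9 at the end
            g[i, j] = f[i:i + 3, j:j + 3].sum() / 9.0
    return g

f = np.zeros((3, 3))
f[1, 1] = 9.0             # a single bright "noise spike"
print(box_filter_3x3(f))  # the spike is spread out and reduced
```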

Smoothing Spatial Filters


• The pixel at the center of the mask is multiplied by a higher value than any other, giving this pixel more importance in the calculation of the average.
• The diagonal terms are farther from the center than the orthogonal neighbors (by a factor of √2) and thus are weighted less than the immediate neighbors of the center pixel.
• The basic strategy of weighting the center point the highest and then reducing the value of the coefficients as a function of increasing distance from the origin is an attempt to reduce blurring in the smoothing process.

"Weighted average" is terminology used to indicate that pixels are multiplied by different coefficients, thus giving more importance (weight) to some pixels at the expense of others.


Sharpening Spatial Filters

 The principal objective of sharpening is to highlight transitions in intensity.

 Uses of image sharpening vary and include applications ranging from electronic printing
and medical imaging to industrial inspection and autonomous guidance in military systems.

Image blurring can be accomplished in the spatial domain by pixel averaging in a neighborhood.


Sharpening Spatial Filters


The derivatives of a digital function are defined in terms of differences:

First derivative: ∂f/∂x = f(x + 1) - f(x)
Second derivative: ∂²f/∂x² = f(x + 1) + f(x - 1) - 2f(x)

A first derivative:
(1) must be zero in areas of constant intensity;
(2) must be nonzero at the onset of an intensity step or ramp; and
(3) must be nonzero along ramps.

A second derivative:
(1) must be zero in constant areas;
(2) must be nonzero at the onset and end of an intensity step or ramp; and
(3) must be zero along ramps of constant slope.

Sharpening filters are based on first- and second-order derivatives.




Sharpening Spatial Filters


[Figure: first and second derivatives of a 1-D digital function representing a section of a horizontal intensity profile from an image]


Sharpening Spatial Filters


First derivatives in image processing are implemented using the magnitude of the gradient.

For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional column vector

∇f = [gx, gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

The magnitude (length) of this vector, denoted M(x, y), is

M(x, y) = √(gx² + gy²)


Sharpening Spatial Filters


The Laplacian of two variables:

∇²f = ∂²f/∂x² + ∂²f/∂y²

In discrete form:

∇²f(x, y) = f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4f(x, y)

After including the diagonal neighbors, the sum runs over all eight neighbors and the center coefficient becomes -8:

∇²f(x, y) = [sum of the 8 neighbors of (x, y)] - 8f(x, y)
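The 4-neighbor discrete Laplacian above can be sketched with array slicing (an illustrative sketch; the border is simply left at zero):

```python
import numpy as np

def laplacian(f):
    """4-neighbor discrete Laplacian: sum of the orthogonal neighbors
    minus 4 times the center pixel; border left at zero."""
    f = f.astype(np.float64)
    g = np.zeros_like(f)
    g[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1] +
                     f[1:-1, :-2] + f[1:-1, 2:] -
                     4.0 * f[1:-1, 1:-1])
    return g

f = np.zeros((3, 3))
f[1, 1] = 1.0
print(laplacian(f))   # the isolated spike responds with -4; flat areas give 0
```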


Sharpening Spatial Filters


A process that has been used for many years by the printing and publishing
industry to sharpen images consists of subtracting an unsharp (smoothed)
version of an image from the original image. This process, called unsharp
masking, consists of the following steps:
1. Blur the original image.
2. Subtract the blurred image from the original (the resulting difference is
called the mask.)
3. Add the mask to the original.
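The three steps above can be sketched as follows, using a 3 × 3 box average for the blur (one plausible choice of smoothing filter; the edge-replicated padding is my own convenience):

```python
import numpy as np

def blur_3x3(f):
    """Step 1: blur with a 3x3 box average (edge-replicated padding)."""
    p = np.pad(f, 1, mode='edge')
    rows, cols = f.shape
    return sum(p[i:i + rows, j:j + cols]
               for i in range(3) for j in range(3)) / 9.0

def unsharp_mask(f, k=1.0):
    f = f.astype(np.float64)
    mask = f - blur_3x3(f)   # step 2: subtract the blurred image (the mask)
    return f + k * mask      # step 3: add the mask back to the original

f = np.array([[0.0, 0.0, 100.0, 100.0]] * 4)
print(unsharp_mask(f))       # the step edge gets over/undershoot, i.e. is sharpened
```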

Since averaging is analogous to integration, it is logical to conclude that sharpening can be accomplished by spatial differentiation.


Frequency Domain Filtering


• The spatial domain operates on a pixel basis.
• In some cases, image processing tasks are best formulated by transforming the input image, carrying out the specified task in a transform domain, and applying the inverse transform to return to the spatial domain.
• 2-D linear transforms are expressed in the general form

T(u, v) = Σx Σy f(x, y) r(x, y, u, v)
f(x, y) = Σu Σv T(u, v) s(x, y, u, v)

where r(x, y, u, v) is the forward transformation kernel and s(x, y, u, v) is the inverse transformation kernel.

Filtering techniques in the frequency domain are based on modifying the Fourier transform to achieve a specific objective and then computing the inverse DFT to get back to the image domain.

Frequency Domain Filtering


• Filtering in the frequency domain consists of modifying the Fourier transform of an image and then computing the inverse transform to obtain the processed result.
• Given a digital image f(x, y) of size M × N, the basic filtering equation in which we are interested has the form

g(x, y) = F^-1[H(u, v) F(u, v)]

where
• F^-1 is the IDFT,
• F(u, v) is the DFT of the input image f(x, y),
• H(u, v) is a filter function (also called simply the filter, or the filter transfer function), and
• g(x, y) is the filtered (output) image.
• The functions F, H, and g are arrays of size M × N, the same as the input image.
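The basic filtering equation maps directly onto NumPy's FFT routines. A minimal sketch (illustrative; H is assumed to be laid out like the uncentered `np.fft.fft2` output):

```python
import numpy as np

def freq_filter(f, H):
    """g(x, y) = IDFT[ H(u, v) * F(u, v) ] -- the basic filtering equation.
    H must be arranged like np.fft.fft2 output (dc term at index [0, 0])."""
    F = np.fft.fft2(f)
    g = np.fft.ifft2(H * F)
    return np.real(g)   # residual imaginary parts are numerical round-off

f = np.arange(16.0).reshape(4, 4)
H = np.ones((4, 4))
H[0, 0] = 0.0            # zero out the dc term only
print(freq_filter(f, H).mean())  # average intensity driven to (near) zero
```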



Frequency Domain Filtering


• One of the simplest filters we can construct is a filter H(u, v) that is 0 at the center of the (centered) transform and 1 elsewhere. This filter rejects the dc term and "passes" (i.e., leaves unchanged) all other terms of F(u, v) when we form the product H(u, v) F(u, v).
• The dc term is responsible for the average intensity of an image, so setting it to zero reduces the average intensity of the output image to zero; the image becomes much darker.
• Low frequencies in the transform are related to slowly varying intensity components in an image, such as the walls of a room or a cloudless sky in an outdoor scene.
• High frequencies, on the other hand, are caused by sharp transitions in intensity, such as edges and noise. A filter H(u, v) that attenuates high frequencies while passing low frequencies (appropriately called a lowpass filter) blurs an image, while a filter with the opposite property (a highpass filter) enhances sharp detail but causes a reduction in contrast.


Smoothing Using Freq Domain Filters

• An ideal lowpass filter (ILPF) passes without attenuation all frequencies on or inside a circle of radius D0 and completely attenuates (filters out) all frequencies outside the circle.
• A Butterworth filter of order 20 exhibits characteristics similar to those of the ILPF (in the limit, both filters are identical).
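The ILPF transfer function can be sketched as a binary disk of radius D0; this version is centered in the array, so it assumes the spectrum has been shifted with `np.fft.fftshift` before multiplication (an assumption of this sketch):

```python
import numpy as np

def ideal_lowpass(shape, D0):
    """H(u, v) = 1 if D(u, v) <= D0 else 0, centered at the middle of the
    array (for use with fftshift-ed spectra)."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from the center
    return (D <= D0).astype(np.float64)

H = ideal_lowpass((5, 5), D0=2)
print(H)   # a disk of 1s of radius 2 surrounded by 0s
```

A Butterworth lowpass filter would replace the hard cutoff with H = 1 / (1 + (D/D0)^(2n)), which approaches this disk as the order n grows.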



Sharpening Using Frequency Domain Filters


• An image can be smoothed by attenuating the high-frequency components of its Fourier transform.
• Because edges and other abrupt changes in intensity are associated with high-frequency components, image sharpening can be achieved in the frequency domain by highpass filtering, which attenuates the low-frequency components without disturbing the high-frequency information in the Fourier transform.

