
Digital Image

&
Video Processing
Lecture # 2-3

Based on chapter 2 from DIP Gonzalez & Woods


Image Sampling & Quantization
 Output of sensors is continuous voltage waveforms
whose amplitude & spatial behavior are related to
physical phenomenon being sensed
 Need to convert continuous sensed data into
digital form for digital images
 Involves two processes
 Sampling
 Quantization
Generating a Digital Image
 Figure (top) shows a continuous
image f to be converted into digital
form
 An image may be continuous with
respect to x- and y-coordinates &
also in amplitude
 To digitize, need to sample the
function in both coordinates & also
in amplitude
 1-D function in Figure (bottom) is
plot of amplitude (intensity level)
values of continuous image along the
line segment AB of Figure (top)
 Random variation in it is due to
noise
Generating a Digital Image
 To sample, take equally spaced
samples along line AB as shown in Fig.
(top)
 Samples are shown as small dark
squares in Fig. (top) & their (discrete)
spatial locations indicated by tick
marks in bottom of top Fig.
 To form a digital function, intensity values of the samples must
be converted (quantized) into discrete quantities
 Vertical gray bar (Fig. top) depicts
intensity scale divided into 8 discrete
intervals ranging from black to white
 Fig. (bottom) shows digital samples
resulting from both sampling and
quantization
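The two steps above can be sketched in a few lines of Python; the profile function f below is a hypothetical stand-in for the continuous intensity along line AB (the small high-frequency term plays the role of the noise mentioned earlier):

```python
import math

# Hypothetical stand-in for the continuous intensity profile along
# line AB; the small high-frequency term plays the role of noise.
def f(t):
    return 0.5 + 0.3 * math.sin(2 * math.pi * t) + 0.05 * math.sin(40 * math.pi * t)

num_samples = 16  # equally spaced samples along AB
num_levels = 8    # 8 discrete intensity intervals, as in the figure

# Sampling: evaluate f at equally spaced spatial locations.
samples = [f(i / (num_samples - 1)) for i in range(num_samples)]

# Quantization: map each amplitude in [0, 1) to an integer level 0..7.
quantized = [min(int(s * num_levels), num_levels - 1) for s in samples]
print(quantized)
```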
Complete Figure –
Generating a Digital Image
Use of Sensing Array for Image Acquisition
 Number of sensors in array establishes limits of sampling in both
directions
 Fig. (left) shows a continuous image projected onto the plane of a
2-D sensor array
 Fig. (right) shows the result after sampling & quantization
 Quality of a digital image is determined primarily by the number of
samples & discrete intensity levels used in sampling & quantization
 However, image content also plays a role in choice of these
parameters (discussed later)
Representing Digital Images
 Let f(s,t) represent a continuous image function of
two continuous variables s and t
 This function can be converted into a digital image by
sampling and quantization
 Suppose that continuous image is sampled into a
digital image, containing M rows and N columns,
where (x,y) are discrete coordinates
 For notational clarity and convenience, use integer values for the
discrete coordinates: x = 0, 1, 2, ..., M-1 & y = 0, 1, 2, ..., N-1
 For example
 f(0,0) is value of digital image at origin, f(0,1) is
next value along the first row
 Section of real plane spanned by coordinates of an
image is called the spatial domain with x and y being
referred to as spatial coordinates or spatial variables
Three Ways of Image Representation
 Fig. (top) is a plot of the function,
with two axes determining spatial
location and the third axis being the
values of f as a function of x and y
 Fig. (bottom left) as it would appear
on a computer display or photograph.
Here, the intensity of each point in
the display is proportional to the
value of f at that point.
 Fig. (bottom right) shows the image as
an array (matrix) composed of the
numerical values of f(x,y). This form is
used for computer processing
Representing Digital Images
 In equation form, we write the representation of
an M x N numerical array as

   f(x,y) = [ f(0,0)      f(0,1)      ...  f(0,N-1)
              f(1,0)      f(1,1)      ...  f(1,N-1)
              ...
              f(M-1,0)    f(M-1,1)    ...  f(M-1,N-1) ]

 The right side of this equation is a digital image
represented as an array of real numbers
 Each element of this array is called an image
element, picture element, pixel, or pel
Representing Digital Images
 Figure shows a graphical representation of an image array, where
the x- and y-axes are used to denote the rows and columns of the
array
 The coordinates of the image center are:
 (xc, yc) = (floor(M/2), floor(N/2))
 Specific pixels are values of the array at a fixed pair of
coordinates
[Figure: array with origin at the top left, x running down the rows
0 to M-1, y running across the columns 0 to N-1, with pixel f(i, j)
and the image center (xc, yc) marked]
Another View – Representing Digital Images
 Coordinates convention to represent digital images
Matrix Representation of Images
 A digital image can be written as a matrix, e.g.,

    [ 35  45  20 ]
    [ 43  64  52 ]
    [ 10  29  39 ]
Digitization Process – Requirements
 Image digitization requires that decisions be made
regarding the
 Values for M, N and for discrete intensity levels L
 No restrictions are placed on M x N except that M & N must be
positive integers
 However, digital storage and quantizing hardware considerations
require the number of intensity levels, L, to be an integer power
of two, i.e., L = 2^k where k is an integer
 It is assumed that the discrete (intensity) levels are equally
spaced and are integers in the range [0, L-1]
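As a small illustration of quantization into L = 2^k equally spaced integer levels (the quantize helper below is an illustrative sketch, not from the text):

```python
def quantize(value, k, max_in=255):
    """Map an intensity in [0, max_in] onto one of L = 2**k equally
    spaced integer levels in the range [0, L-1]."""
    L = 2 ** k
    level = int(value * L / (max_in + 1))
    return min(level, L - 1)

print(quantize(0, 3))    # darkest input   -> level 0
print(quantize(255, 3))  # brightest input -> level 7
```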
Dynamic Range of Imaging System
 Dynamic range of an imaging system
 Ratio of the maximum measurable intensity to
the minimum detectable intensity level in the
system.
 Upper limit is determined by saturation and the
lower limit by noise (See Figure on next slide)
 Dynamic range establishes the
 Lowest and highest intensity levels that a
system can represent and, consequently, that
an image can have
Note: sometimes, the range of values spanned by the gray scale is
referred to informally as the dynamic range.
Example – Saturation and Noise
More on Dynamic Range – Contrast
 Contrast
 Difference in intensity between the highest
and lowest intensity levels in an image
 When an appreciable number of pixels in an image span a high
dynamic range
 The image is said to have high contrast
 Conversely, an image with low dynamic range typically has
 A dull, washed-out gray look
Image and Storage Bits
 The number, b, of bits required to store a
digitized image is
b=M*N*k
 When M = N this equation becomes
b = N^2 * k
 An image having 2^k intensity levels is
referred to as a "k-bit image"
 For example, an image with 256 possible
discrete intensity values is called an 8-bit
image
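The storage formula b = M * N * k is easy to check numerically; the helper below is a minimal sketch:

```python
def storage_bits(M, N, k):
    # b = M * N * k; for square images (M == N) this is N**2 * k
    return M * N * k

# A 1024 x 1024 image with 256 levels (k = 8):
b = storage_bits(1024, 1024, 8)
print(b)                  # 8388608 bits
print(b / 8 / 1024 ** 2)  # exactly 1.0 megabyte
```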
Number of Storage Bits for Various
Values of N and k
 The number of bits required to store square images with various
values of N and k
 The number of intensity levels corresponding to each value of k is
shown in parentheses
Megabytes required to Store Images
for Various Values of N & k

 When an image can have 2^k possible intensity levels, it is
common practice to refer to it as a "k-bit image"
 e.g., a 256-level image is called an 8-bit image
Practice Question 1
 A common measure of transmission for digital data is
baud rate, defined as the number of bits transmitted
per second. Generally, transmission is accomplished in
packets consisting of a start bit, a byte of information
& a stop bit. Using these facts, answer the following:
 How many seconds would it take to transmit a
sequence of 500 images of size 1024 x 1024 pixels
with 256 intensity levels using a 3 Mbaud (10^6
bits/sec) modem (this is representative of medium
speed for a DSL residential line)?
 What would the time be using a 30 Gbaud (10^9
bits/sec) modem (this is representative of medium
speed for a commercial line)?
Solution – Practice Question 1
 Data per image = 1024 x 1024 bytes (one byte per pixel for 256
intensity levels)
 Data per image including start and stop bits = 1024 x 1024 x (8+2)
bits
(a) Total time required to transmit 500 images over a 3 Mbaud modem:
Trans time = 500 x (1024 x 1024 x 10) / (3 x 10^6) sec ≈ 1747.6 sec
(b) Work it out yourself for a 30 Gbaud modem
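A quick numerical check of part (a), and of part (b) under the assumption that "30 Gbaud" means 30 x 10^9 bits/sec:

```python
bits_per_image = 1024 * 1024 * (8 + 2)  # 8 data bits + start and stop bits per pixel byte
total_bits = 500 * bits_per_image       # whole 500-image sequence

t_a = total_bits / (3 * 10 ** 6)    # (a) 3 Mbaud line
t_b = total_bits / (30 * 10 ** 9)   # (b) assuming 30 Gbaud = 30e9 bits/sec

print(round(t_a, 1))  # about 1747.6 seconds, i.e. roughly 29 minutes
print(round(t_b, 3))  # well under a second
```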
Practice Question 2
 High-definition TV (HDTV) generates images with
a resolution of 1125 horizontal TV lines interlaced
(where every other line is painted on the tube
face in each of two fields, each field being 1/60th
of a second in duration). The width-to-height
aspect ratio of the images is 16:9. The fact that
the horizontal lines are distinct fixes the vertical
resolution of images. A company has designed an
image capture system that generates digital
images from HDTV images. The resolution of each
TV (horizontal) line in their system is in proportion
to vertical resolution, with proportion being the
width-to-height ratio of the images. Each pixel in
the color image has 24 bits of intensity resolution,
8 bits each for a red, a green, and a blue image.
These three “primary” images form a color image.
How many bits would it take to store a 2-hour
HDTV program?
Solution – Practice Question 2
 Width-to-height ratio = 16/9
 Vertical resolution = 1125 lines
 i.e., 1125 rows of pixels in the vertical direction
 Given: the width-to-height ratio is in the 16/9
proportion, therefore, the resolution in the horizontal
direction is
 1125 x (16/9) = 2000 pixels per line
 System constructs a full 1125 x 2000, 8-bit image
every 1/30 sec for each of the red, green and blue
component images
 As there are 7200 sec in 2 hours, the total
digital data generated in 2 hours is
 1125 x 2000 x 8 x 30 x 3 x 7200 = 1.166 x 10^13 bits, or
1.458 x 10^12 bytes, equivalent to about 1.5 terabytes
 Conclusion: the result shows the need for data
compression
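The arithmetic in this solution can be verified directly:

```python
lines = 1125                       # vertical resolution (distinct TV lines)
pixels_per_line = lines * 16 // 9  # 2000, from the 16:9 aspect ratio
frames_per_sec = 30
components = 3                     # red, green, blue; 8 bits each
seconds = 2 * 60 * 60              # 7200 sec in 2 hours

total_bits = lines * pixels_per_line * 8 * frames_per_sec * components * seconds
print(total_bits)                  # 11664000000000, i.e. about 1.166e13 bits
print(total_bits / 8 / 10 ** 12)   # about 1.458 terabytes
```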
Practice Question 3
 You are preparing a report and have to
insert in it an image of size 2048 x 2048
pixels (a) Assuming no limitations on the
printer, what would the resolution in line
pairs/mm have to be for the image to fit in
a space of size 5 x 5 cm ? (b) what would
the resolution have to be in dpi for the
image to fit 2 x 2 inches?
Solution – Practice Question 3
(a) The vertical (or horizontal) dimension in
which the image has to fit is 5 cm or 50 mm.
So we have to fit 2048 lines in 50 mm, or
approximately 41 lines per mm. Line pairs (lp)
is half of that, or approximately 20 lp per mm
(b) 2048 pixels / 2 inches = 1024 pixels per inch
= 1024 dpi in both directions
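The two conversions can be checked numerically:

```python
pixels = 2048

# (a) fit 2048 pixels into 50 mm
lines_per_mm = pixels / 50    # 40.96 lines per mm
lp_per_mm = lines_per_mm / 2  # 20.48 line pairs per mm
print(lines_per_mm, lp_per_mm)

# (b) fit 2048 pixels into 2 inches
dpi = pixels / 2
print(dpi)  # 1024.0 dots per inch
```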
Spatial and Intensity Resolution
 Spatial resolution is the measure of smallest
discernible detail in an image.
 Quantitatively, spatial resolution can be stated in
several ways like
 Line pairs per unit distance
 Dots (pixels) per unit distance e.g., dpi
 If pixel size is kept constant, the size of an image will
affect spatial resolution
 Example regarding “effects of reducing spatial
resolution in an image” on Next Slide
Example – Effect of Reducing
Spatial Resolution
 Figure 2.23 (on next slide) shows the effects of reducing the
spatial resolution of an image. The images in Figs 2.23 (a)
through (d) have resolutions of 930, 300, 150, and 72 dpi,
respectively. Naturally, the lower resolution images are smaller
than the original image in (a)
 For example, the original image is of size 2136 x 2140 pixels, but
the 72 dpi image is an array of only 165 X 166 pixels. In order to
facilitate comparisons, all the smaller images were zoomed back
to the original size (the method used for zooming will be
discussed later in this section). This is somewhat equivalent to
"getting closer" to the smaller images so that we can make
comparable statements about visible details.
 By observing Figure on next slide, describe the impact of
reducing spatial resolution
Example – Effect of Reducing Spatial Resolution
Discussion on Example – Effect of
Reducing Spatial Resolution
 There are some small visual differences between Figs. 2.23(a) and
(b), the most notable being a slight distortion in the seconds marker
pointing to 60 on the right side of the chronometer. For the most
part, however, Fig. 2.23(b) is quite acceptable. In fact, 300 dpi is
the typical minimum image spatial resolution used for book
publishing, so one would not expect to see much difference between
these two images.
 Figure 2.23(c) begins to show visible degradation (see, for example,
the outer edges of the chronometer case and compare the seconds
marker with the previous two images). The numbers also show visible
degradation.
 Figure 2.23(d) shows degradation that is visible in most features of
the image. When printing at such low resolutions, the printing and
publishing industry uses a number of techniques (such as locally
varying the pixel size) to produce much better results than those in
Fig. 2.23(d).
Another Example - Effects of
Reducing Spatial Resolution
Original / Reduced by 2 in each direction
Reduced by 8 in each direction / Reduced by 32 in each direction
Checkerboard Effect
Intensity (Gray-Level) Resolution
 Intensity (gray-level) resolution refers to the
smallest discernable change in intensity level (gray-
level) and it depends upon
 Number of bits per pixel
 Color image has 3 image planes to yield 8 x 3 = 24
bits/pixel
 There is considerable discretion regarding the number of samples
used to generate a digital image, but
 This is not true for the number of gray levels: due to hardware
considerations, the number of gray levels is typically an integer
power of 2
 Most common is 8 bits, with 16 bits being used in some
applications
 Note that measuring discernable changes in intensity level
(gray-level) is a highly subjective process
Effects of Reducing Intensity
(Gray-Level) Resolution
When the number of gray-level values is reduced,
very fine ridge-like structures develop in smooth
areas of the image.
This effect is known as false contouring and is
caused by an insufficient number of gray levels in
smooth areas of the image.
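False contouring can be demonstrated on a one-dimensional gradient; the sketch below requantizes a smooth 0-255 ramp down to 16 levels and shows that every surviving transition is an abrupt jump:

```python
def reduce_levels(row, k):
    """Requantize 0..255 intensities to L = 2**k levels, rescaled back
    to the 0..255 range so the step size is visible."""
    step = 256 // (2 ** k)
    return [(p // step) * step for p in row]

ramp = list(range(256))          # a perfectly smooth gradient
coarse = reduce_levels(ramp, 4)  # only 16 levels remain

# Every surviving transition is now an abrupt jump of 16 intensities --
# the 1-D analogue of the ridge-like false contours seen in 2-D images.
jumps = {b - a for a, b in zip(coarse, coarse[1:]) if b != a}
print(sorted(jumps))  # [16]
```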
Example – Effect of Varying
Intensity Levels in an Image
 Figure 2.24(a) (next slide) is a 774 x 640 CT
projection image, displayed using 256 intensity
levels
 The objective of this example is to reduce the
number of intensities of the image from 256 to 2
in integer powers of 2, while keeping the spatial
resolution constant
 Figures 2.24(b) through (d) were obtained by
reducing the number of intensity levels to 128, 64,
and 32, respectively
 By observing Figure on next two slides, describe
the impact of varying intensity levels
Discussion on Example – Effect of
Varying Intensity Levels in an Image
 The 128- and 64-level images are visually identical for
all practical purposes. However
 32-level image in Fig. 2.24(d) has a set of almost
imperceptible, very fine ridge-like structures in areas of
constant intensity
 These structures are clearly visible in the 16-level
image in Fig. 2.24(e)
 This effect, caused by using an insufficient number of
intensity levels in smooth areas of a digital image, is
called false contouring, so named because the ridges
resemble topographic contours in a map
 False contouring generally is quite objectionable in
images displayed using 16 or fewer uniformly spaced
intensity levels as the images in Figs. 2.24(e)-(h) show
Another Example – Varying Intensity
Level ( Gray Level) Resolution
Original (256 levels) / 64 levels
4 levels / 2 levels
Practice Question 4
 Suppose that a flat area with center at (x0, y0) is
illuminated by a light source with intensity
distribution

   i(x,y) = K exp( -[ (x - x0)^2 + (y - y0)^2 ] )

Assume for simplicity that the reflectance of the area
is constant and equal to 1.0, and let K = 255. If the
intensity of the resulting image is quantized using k
bits, and the eye can detect an abrupt change of
eight intensity levels between adjacent pixels,
what is the highest value of k that will cause
visible false contouring?
Solution Hints – Practice Question 4
 The image in question is given by
 f(x,y) = i(x,y) r(x,y)
 If the intensity is quantized using k bits, then the
size of each quantization interval is
 (255 + 1) / 2^k
 Find the highest k for which this interval is at least
the eight levels the eye can detect
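Following these hints, a short loop finds the answer (the eight-level visibility threshold is taken from the question):

```python
# Interval size when [0, 255] is quantized with k bits: (255 + 1) / 2**k.
# Contouring stays visible while the interval is at least the eye's
# eight-level detection threshold given in the question.
highest_visible_k = 0
for k in range(1, 9):
    if (255 + 1) / 2 ** k >= 8:
        highest_visible_k = k

print(highest_visible_k)  # 5, since 256 / 2**5 = 8 but 256 / 2**6 = 4
```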
Effects on Image Quality by Interaction
of Spatial & Intensity Resolution
 The results in the previous two examples (using Fig.
2.23 and Fig. 2.24) illustrate the effects produced on
image quality by varying spatial and intensity
resolution independently
 However, these results did not consider any relationships
that might exist between these two parameters
 An early study by Huang [1965] attempted to
quantify experimentally the
 Effects on image quality produced by the interaction of
these two variables.
 The experiment consisted of a set of subjective tests
Effects on Image Quality by Interaction
of Spatial & Intensity Resolution
 Images similar to those shown in Fig. 2.25 (next
slide) were used to quantify the effects on image
quality produced
 The woman's face represents an image with relatively
little detail
 The picture of the cameraman contains an intermediate
amount of detail and
 The crowd picture contains, by comparison, a large
amount of detail
 Sets of these three types of images of various
sizes and intensity resolution were generated by
varying N and k [see Eq. (2-13) in book]
 Observers were then asked to rank according to
their subjective quality
Size, Quantization Levels and
Details
(a) Image with a low level of detail (b) Image with a medium level
of detail (c) Image with a relatively large amount of detail
 Sets of these three types of images were generated by varying N
and k, and observers were then asked to rank them according to
their subjective quality
 Results were summarized in the form of so-called isopreference
curves in the Nk-plane (see next slide)
Isopreference curves for three
types of images (ref: previous slide)
 k = any integer value, representing the number of bits
 L = 2^k gray levels
 N = number of samples
Discuss the impact of an increase or decrease in k
Image Interpolation
 Interpolation Definition
 Process of using known data to estimate the
values at unknown locations
 Interpolation used in tasks such as
 Zooming, Shrinking, rotating and geometrically
correcting digital images
 Here we will apply it to image resizing (zooming
and shrinking)
 Zooming can be done by 3 methods
 Nearest neighbor interpolation
 Bilinear interpolation
 Bicubic interpolation
Zooming – Nearest Neighbor Interpolation
 Requires 2 steps: (1) Creation of new pixel locations (2) Assignment
of gray levels to those new locations
 Nearest Neighbor Interpolation Example:
 Suppose an image of size 500 x 500 pixels
 Want to enlarge it 1.5 times, i.e., to 750 x 750 pixels
 Conceptually, it is equivalent to laying an imaginary 750 x 750
grid over the original image
 The spacing in the grid would be less than one pixel because the
grid is being fitted over a smaller image
 In order to perform gray-level assignment for any new point in
the overlay
• Look for the closest pixel in the original image & assign its gray
level to the new pixel in the grid
 When done with all points in the overlay grid, simply expand it to
the specified size to obtain the zoomed image
 This method of gray-level assignment is known as nearest
neighbor interpolation
 Drawback: checkerboard effect
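A minimal pure-Python sketch of nearest neighbor zooming (the rounding rule used to pick the closest source pixel is one common choice):

```python
def nearest_neighbor_zoom(img, new_h, new_w):
    """Zoom a grayscale image (list of row lists) by copying, for each
    point of the new grid, the gray level of the closest original pixel."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        src_y = min(int(y * h / new_h + 0.5), h - 1)  # closest source row
        row = []
        for x in range(new_w):
            src_x = min(int(x * w / new_w + 0.5), w - 1)  # closest source column
            row.append(img[src_y][src_x])
        out.append(row)
    return out

small = [[10, 20], [30, 40]]
print(nearest_neighbor_zoom(small, 3, 3))  # [[10, 20, 20], [30, 40, 40], [30, 40, 40]]
```

Repeated copies of the same source pixel are what produce the blocky checkerboard effect at large zoom factors.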
Zooming - Bilinear Interpolation
 Bilinear Interpolation
 More sophisticated way of accomplishing gray level
assignments using the four nearest neighbors of
the point
 Method:
 Let (x, y) denote the coordinates of the point in
the zoomed image
 Further let v(x, y) denote the gray level assigned
to it
 For bilinear interpolation, the assigned gray level is
given by
v(x,y) = ax + by + cxy + d
where the four coefficients are determined from
the four equations in four unknowns that can be
written using the four nearest neighbors of point
(x, y)
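For the special case where the four nearest neighbors sit at the corners of a unit square, the four-equations-in-four-unknowns system has a simple closed form; the sketch below assumes that normalized geometry:

```python
def bilinear(v00, v10, v01, v11, x, y):
    """v(x,y) = a*x + b*y + c*x*y + d, with the coefficients solved from
    the corner values v00 = v(0,0), v10 = v(1,0), v01 = v(0,1), v11 = v(1,1)."""
    d = v00
    a = v10 - v00
    b = v01 - v00
    c = v11 - v10 - v01 + v00
    return a * x + b * y + c * x * y + d

# The center of the square is the average of the four corner values:
print(bilinear(0, 100, 50, 150, 0.5, 0.5))  # 75.0
```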
Example: Digital Image Zooming

Top row: Images zoomed from 128 x 128, 64 x 64, & 32 x 32 pixels to
1024 x 1024 using nearest neighbor gray level interpolation
Bottom row: same sequence as above using bilinear interpolation
Zooming – Bicubic Interpolation
 Bicubic interpolation involves
 The sixteen nearest neighbors of a point
 The intensity value assigned to the point is obtained using the
equation

   v(x,y) = sum over i = 0..3, j = 0..3 of a_ij * x^i * y^j

 where the sixteen coefficients a_ij are determined from the
sixteen equations in sixteen unknowns that can be written using
the sixteen nearest neighbors of the point
 Bicubic interpolation does a better job of preserving fine detail
than its bilinear counterpart
 It is the standard used in commercial image editing programs,
such as Adobe Photoshop and Corel PhotoPaint
Example - Zooming
Shrinking Digital Images
 Image shrinking is done in a manner similar to that
described for zooming
 Zooming is pixel replication, whereas shrinking is
pixel deletion
 Method: row-column deletion of pixels
For example, to shrink an image by one-half
 Delete every other row and column
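Row-column deletion for shrinking by one-half can be written in two lines of Python:

```python
def shrink_by_half(img):
    # keep every other row and every other column (i.e., delete the rest)
    return [row[::2] for row in img[::2]]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(shrink_by_half(img))  # [[1, 3], [9, 11]]
```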
Example – Shrinking
 Fig. (d)-(f): shrinking down to 150 dpi instead of 72 dpi
[Note: Fig. (d) is the same as Fig. (c) in Fig. 2.27]
 Compare Fig. (e) and (f), especially the latter, with the original
image
THE END
