
Dr. Mohannad K. Sabir
Fifth Class / Biomedical Engineering
k = number of bits per pixel
N = number of pixels along each dimension of the image
k = 4, 5, 8, 16 bits (the number of gray levels is 2^k)
N = 32, 64, 128, 256
which means 32×32 images, 64×64 images, etc.
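To make the relationship concrete, a minimal sketch in plain Python (the loop values simply mirror the k and N values listed above) that computes the number of gray levels and the storage needed for an N×N image with k bits per pixel:

```python
# Gray levels L = 2**k; storage for an N x N image is b = N * N * k bits.
for N in (32, 64, 128, 256):
    for k in (4, 5, 8, 16):
        levels = 2 ** k              # number of gray levels
        nbytes = N * N * k // 8      # total storage in bytes
        print(f"N={N:4d}, k={k:2d}: {levels:6d} levels, {nbytes:8d} bytes")
```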
According to this graph, the first image (the face) was subject to contouring earlier than the other two images. The second image (the cameraman) was subject to contouring a bit after the first image as its gray levels were reduced, because it has more detail than the first image. The third image was subject to contouring well after the first two, i.e., after 4 bpp, because it has the most detail.

• Sets of these three types of images were generated by varying N and k, and observers were then asked to rank them according to their subjective quality. Results were summarized in the form of so-called isopreference curves in the Nk-plane.
• Spatial resolution is a measure of the smallest discernible detail in an image.
• Spatial resolution can be stated in several ways, with:
1- line pairs per unit distance, and
2- dots (pixels) per unit distance
being common measures.
• A widely used definition of image resolution is the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm).
• Dots per unit distance is a measure of image resolution used in the printing and publishing industry.
• In the U.S., this measure usually is expressed as dots per inch (dpi).
• To be meaningful, measures of spatial resolution must be stated with respect to spatial units. Image size by itself does not tell the complete story without stating the spatial dimensions encompassed by the image.
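As a quick illustration of why the spatial units matter, a small sketch (hypothetical pixel counts and dpi values) showing that the same pixel dimensions cover very different physical areas at different dpi:

```python
# The same 2048 x 2048 image corresponds to different physical sizes
# depending on the dots-per-inch at which it is printed or scanned.
width_px = height_px = 2048
for dpi in (72, 300, 1200):
    print(f"{dpi:5d} dpi -> {width_px / dpi:6.2f} x {height_px / dpi:6.2f} inches")
```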
• Intensity resolution similarly refers to the smallest discernible change in intensity level.
• Based on hardware considerations, the number of intensity levels usually is an integer power of two. The most common number is 8 bits, with 16 bits being used in some applications in which enhancement of specific intensity ranges is necessary. Intensity quantization using 32 bits is rare.
• Unlike spatial resolution, which must be stated on a per-unit-of-distance basis to be meaningful, it is common practice to refer to the number of bits used to quantize intensity as the “intensity resolution.” For example, it is common to say that an image whose intensity is quantized into 256 levels has 8 bits of intensity resolution.
• Keep in mind that discernible changes in intensity are influenced also by noise and saturation values, and by the capabilities of human perception to analyze and interpret details in the context of an entire scene.
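To see intensity resolution (and the false contouring discussed earlier) in action, a minimal NumPy sketch that requantizes an 8-bit image to 2^k levels; the uniform-binning scheme used here is one simple choice, not the only one:

```python
import numpy as np

def requantize(img8, k):
    """Reduce an 8-bit image to 2**k gray levels by uniform binning."""
    step = 256 // (2 ** k)           # width of each quantization bin
    return ((img8 // step) * step).astype(np.uint8)

ramp = np.tile(np.arange(256, dtype=np.uint8), (16, 1))  # synthetic gray ramp
for k in (8, 5, 4, 1):
    print(k, "bits ->", len(np.unique(requantize(ramp, k))), "distinct levels")
```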
Image Interpolation
• Interpolation is used in tasks such as zooming, shrinking, rotating, and geometrically correcting digital images.
• Interpolation is the process of using known data to estimate values at unknown locations.
1- Nearest neighbor interpolation: assigns to each new location the intensity of its nearest neighbor in the original image.
• This approach is simple, but it has the tendency to produce undesirable artifacts, such as severe distortion of straight edges.
2- Bilinear interpolation: uses the four nearest neighbors to estimate the intensity at a given location:
v(x,y) = ax + by + cxy + d
• where the four coefficients are determined from the four equations in four unknowns that can be written using the four nearest neighbors of point (x, y).
3- Bicubic interpolation: involves the sixteen nearest neighbors of a point. The intensity value assigned to point (x, y) is obtained using the equation
v(x,y) = Σ_{i=0}^{3} Σ_{j=0}^{3} a_ij x^i y^j
• The sixteen coefficients are determined from the sixteen equations with sixteen unknowns that can be written using the sixteen nearest neighbors of point (x, y).
https://youtu.be/f855MUyG9g8
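For concreteness, a minimal NumPy sketch of bilinear interpolation at a single non-integer location (border handling is simplified; production code would vectorize this over all output pixels):

```python
import numpy as np

def bilinear(img, x, y):
    """Estimate v(x, y) from the four nearest neighbors; equivalent to
    fitting v(x, y) = ax + by + cxy + d locally."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[0] - 1), min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[x0, y0] + dx * (1 - dy) * img[x1, y0]
            + (1 - dx) * dy * img[x0, y1] + dx * dy * img[x1, y1])

img = np.array([[10.0, 20.0], [30.0, 40.0]])
print(bilinear(img, 0.5, 0.5))   # 25.0, the average of the four neighbors
```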

Neighbors of a Pixel
• A pixel p at coordinates (x, y) has two horizontal and two vertical neighbors with coordinates
(x+1,y), (x-1,y), (x,y+1), (x,y-1)
• This set of pixels, called the 4-neighbors of p, is denoted N4(p).
• The four diagonal neighbors of p have coordinates
(x + 1, y + 1), (x + 1, y − 1), (x − 1, y + 1), (x − 1, y − 1)
• and are denoted ND(p). These neighbors, together with the 4-neighbors, are called the 8-neighbors of p, denoted N8(p). The set of image locations of the neighbors of a point p is called the neighborhood of p. The neighborhood is said to be closed if it contains p; otherwise, it is said to be open.
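A small sketch of these neighbor sets (border checks are omitted for brevity; pixels at the image edge have fewer neighbors inside the image):

```python
def n4(x, y):
    """4-neighbors of p at (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """Diagonal neighbors of p."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    """8-neighbors: the union of N4(p) and ND(p)."""
    return n4(x, y) + nd(x, y)

print(n8(2, 2))   # the eight coordinate pairs surrounding (2, 2)
```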
Adjacency, Connectivity, Regions, and Boundaries
• Let V be the set of intensity values used to define adjacency. In a binary image, V = {1} if we are referring to adjacency of pixels with value 1.
• In a grayscale image, the idea is the same, but set V typically contains more elements.
• We consider three types of adjacency: 4-adjacency, 8-adjacency, and m-adjacency (mixed adjacency).
• Mixed adjacency is a modification of 8-adjacency, introduced to eliminate the ambiguities that may result from using 8-adjacency.
https://youtu.be/3Z6QGSvYKCY min 25

• A digital path (or curve) from pixel p with coordinates (x0,y0) to pixel q with coordinates (xn,yn) is a sequence of distinct pixels with coordinates
(x0,y0), (x1,y1), ..., (xn,yn)
• where points (xi,yi) and (xi-1,yi-1) are adjacent for 1 <= i <= n. In this case, n is the length of the path. If (x0,y0) = (xn,yn), the path is a closed path.
• We can define 4-, 8-, or m-paths, depending on the type of adjacency specified.
• For example, the paths in Fig. 2.28(b) between the top right and bottom right points are 8-paths, and the path in Fig. 2.28(c) is an m-path.
• Let R represent a subset of pixels in an image. We call R a region of the image if R is a connected set.
• Two regions, Ri and Rj, are said to be adjacent if their union forms a connected set. We consider 4- and 8-adjacency when referring to regions.
• Regions that are not adjacent are said to be disjoint.
• Suppose an image contains K disjoint regions Rk, k = 1, 2, ..., K, none of which touches the image border.
• Let Ru denote the union of all K regions, and let (Ru)^c denote its complement (recall that the complement of a set A is the set of points that are not in A).
• We call all the points in Ru the foreground, and all the points in (Ru)^c the background of the image.
• The boundary (also called the border or contour) of a region R is the set of pixels in R that are adjacent to pixels in the complement of R. Stated another way, the border of a region is the set of pixels in the region that have at least one background neighbor.
• As a rule, adjacency between points in a region and its background is defined using 8-connectivity to handle situations such as this.
• The preceding definition sometimes is referred to as the inner border of the region, to distinguish it from its outer border, which is the corresponding border in the background.
• This distinction is important in the development of border-following algorithms. Such algorithms usually are formulated to follow the outer boundary in order to guarantee that the result will form a closed path.
• If R happens to be an entire image, then its boundary (or border) is defined as the set of pixels in the first and last rows and columns of the image.
• Normally, when we refer to a region, we are referring to a subset of an image, and any pixels in the boundary of the region that happen to coincide with the border of the image are included implicitly as part of the region boundary.
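As an illustration of the inner-border definition, a minimal NumPy sketch that marks region pixels having at least one background pixel among their 8-neighbors (assuming, as above, that the region does not touch the image border):

```python
import numpy as np

def inner_border(region):
    """Pixels of a binary region with at least one background 8-neighbor."""
    border = np.zeros_like(region)
    for x in range(1, region.shape[0] - 1):
        for y in range(1, region.shape[1] - 1):
            # A region pixel is on the inner border unless its whole
            # 3x3 neighborhood lies inside the region.
            if region[x, y] and not region[x-1:x+2, y-1:y+2].all():
                border[x, y] = True
    return border

R = np.zeros((7, 7), dtype=bool)
R[2:5, 2:5] = True                # a 3x3 square region
print(inner_border(R).sum())      # 8: every region pixel except the center
```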
What is the difference between an edge and a boundary?

• The concept of an edge is found frequently in discussions dealing with regions and boundaries. However, there is a key difference between these two concepts.
• The boundary of a finite region forms a closed path and is thus a “global” concept.
• Edges are formed from pixels with derivative values that exceed a preset threshold.
• Thus, an edge is a “local” concept that is based on a measure of intensity-level discontinuity at a point. It is possible to link edge points into edge segments, and sometimes these segments are linked in such a way that they correspond to boundaries, but this is not always the case.
Distance Measures
• For pixels p, q, and s, with coordinates (x, y), (u, v), and (w, z), respectively, D is a distance function or metric if:
(a) D(p, q) >= 0 (D(p, q) = 0 iff p = q),
(b) D(p, q) = D(q, p), and
(c) D(p, s) <= D(p, q) + D(q, s).
• The Euclidean distance between p and q is defined as
De(p, q) = [(x − u)^2 + (y − v)^2]^(1/2)
• For this distance measure, the pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).
• The D4 distance (called the city-block distance) between p and q is defined as
D4(p, q) = |x − u| + |y − v|
• In this case, pixels having a D4 distance from (x, y) that is less than or equal to some value d form a diamond centered at (x, y). For example, the pixels with D4 distance <= 2 from (x, y) (the center point) form the following contours of constant distance:
    2
  2 1 2
2 1 0 1 2
  2 1 2
    2
• The pixels with D4 = 1 are the 4-neighbors of (x, y).
• The D8 distance (called the chessboard distance) between p and q is defined as
D8(p, q) = max(|x − u|, |y − v|)
• In this case, the pixels with D8 distance from (x, y) less than or equal to some value d form a square centered at (x, y). For example, the pixels with D8 distance <= 2 from (x, y) form the following contours of constant distance:
2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2
• The pixels with D8 = 1 are the 8-neighbors of the pixel at (x, y).
• In the case of m-adjacency, however, the Dm distance between two points is defined as the shortest m-path between the points.
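The three distance measures side by side, as a small self-contained sketch:

```python
def d_euclidean(p, q):
    """Euclidean distance between pixels p = (x, y) and q = (u, v)."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d4(p, q):
    """City-block distance: |x - u| + |y - v|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance: max(|x - u|, |y - v|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 5.0 7 4
```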
Linear Versus Nonlinear Operations
One o"f the most imponant classifica. tions of an image processing metJhod is "vhetlher it is linear o..- nonl inear_ Consider a general
operator. ::re, that pr-oduces an output image. g(x, _y). 'from a given input image. -r(x. _y):

:7-C[f( :x:, y)] = .9(:x:, y) (2-22)

Given two arbit.-a. ..-y constants, a and b, and two ar-bitra..-y images f 1 (:x.y) anc:11 f 2 (:x:. y). :;,-c is said t,o be a li
. near operator if

= :7-C[f :L (.x-, y)] + b :7-C[f"2 (:x:, y)] (2-23)


=.-g .It (:x:. y) +- b.9 = (=, y)

This equation indicates that 1:!he outpu t o-r a lin ear operation applied ·to the sum of two inputs is the s ame as pertorming the oper atio n
individually on the inputs and then summing the ..-esults_ In addition. the ou tput of a linear ope.-at . ion on a constant multiplied by an input
is the same as the output of the ope..-ation due to the original input multiplied by tih at constant_ The first property is called the property of"
additivit:y, and the second is called the property of hornogerl'eity_ By definition. an operator that fails to sati sfy Eq_ (2-23) is said to be
nonlinear_

As an example, suppose that H is the sum operator, Σ; the function performed by this operator is simply to sum its inputs. To test for linearity, we start with the left side of Eq. (2-23) and attempt to prove that it is equal to the right side (these are image summations, not the sums of all the elements of an image):

Σ[a f1(x, y) + b f2(x, y)] = Σ a f1(x, y) + Σ b f2(x, y)
                           = a Σ f1(x, y) + b Σ f2(x, y)
                           = a g1(x, y) + b g2(x, y)

where the first step follows from the fact that summation is distributive. So, an expansion of the left side is equal to the right side of Eq. (2-23), and we conclude that the sum operator is linear.

On the other hand, suppose that we are working with the max operation, whose function is to find the maximum value of the pixels in an image. For our purposes here, the simplest way to prove that this operator is nonlinear is to find an example that fails the test in Eq. (2-23). Consider the following two images

f1 = [0 2; 2 3]    f2 = [6 4; 5 7]

and suppose that we let a = 1 and b = −1. To test for linearity, we again start with the left side of Eq. (2-23):

max{(1) f1(x, y) + (−1) f2(x, y)} = max{[−6 −2; −3 −4]} = −2

Working next with the right side, we obtain

(1) max{f1(x, y)} + (−1) max{f2(x, y)} = 3 + (−1)7 = −4

The left and right sides of Eq. (2-23) are not equal in this case, so we conclude that the max operator is nonlinear.
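Both tests are easy to replicate numerically; a short NumPy check (using the example images reconstructed above) confirms that the sum operator satisfies Eq. (2-23) while the max operator fails it:

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 4], [5, 7]])
a, b = 1, -1

# Sum operator: both sides of Eq. (2-23) agree -> linear.
print(np.sum(a * f1 + b * f2), a * np.sum(f1) + b * np.sum(f2))   # -15 -15

# Max operator: the two sides differ (-2 vs -4) -> nonlinear.
print(np.max(a * f1 + b * f2), a * np.max(f1) + b * np.max(f2))
```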
Suppose that g(x, y) is a corrupted image formed by the addition of noise, η(x, y), to a noiseless image f(x, y); that is,

g(x, y) = f(x, y) + η(x, y)    (2-25)

where the assumption is that at every pair of coordinates (x, y) the noise is uncorrelated† and has zero average value. We assume also that the noise and image values are uncorrelated (this is a typical assumption for additive noise). The objective of the following procedure is to reduce the noise content of the output image by adding a set of noisy input images, {gi(x, y)}. This is a technique used frequently for image enhancement.

† As we discuss later in this chapter, the variance of a random variable z with mean z̄ is defined as E{(z − z̄)²}, where E{·} is the expected value of the argument. The covariance of two random variables zi and zj is defined as E{(zi − z̄i)(zj − z̄j)}. If the variables are uncorrelated, their covariance is 0, and vice versa. (Do not confuse correlation and statistical independence. If two random variables are statistically independent, their correlation is zero. However, the converse is not true in general. We discuss this later in this chapter.)

If the noise satisfies the constraints just stated, it can be shown (Problem 2.28) that if an image ḡ(x, y) is formed by averaging K different noisy images,

ḡ(x, y) = (1/K) Σ_{i=1}^{K} gi(x, y)    (2-26)

then it follows that

E{ḡ(x, y)} = f(x, y)    (2-27)

and

σ²_ḡ(x, y) = (1/K) σ²_η(x, y)    (2-28)

where E{ḡ(x, y)} is the expected value of ḡ(x, y), and σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ(x, y) and η(x, y), respectively, all at coordinates (x, y). These variances are arrays of the same size as the input image, and there is a scalar variance value for each pixel location.

The standard deviation (square root of the variance) at any point (x, y) in the average image is

σ_ḡ(x, y) = (1/√K) σ_η(x, y)    (2-29)

As K increases, Eqs. (2-28) and (2-29) indicate that the variability (as measured by the variance or the standard deviation) of the pixel values at each location (x, y) decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches the noiseless image f(x, y) as the number of noisy images used in the averaging process increases. In order to avoid blurring and other artifacts in the output (average) image, it is necessary that the images gi(x, y) be registered (i.e., spatially aligned).
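A quick numerical check of Eq. (2-29), using synthetic zero-mean Gaussian noise with standard deviation 20 (arbitrary illustrative values) added to a constant image:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)      # noiseless image (constant, for clarity)
sigma = 20.0                      # noise standard deviation

for K in (1, 10, 100):
    noisy = [f + rng.normal(0.0, sigma, f.shape) for _ in range(K)]
    g_bar = np.mean(noisy, axis=0)           # average of K noisy images
    print(f"K={K:3d}: measured std {g_bar.std():6.2f}, "
          f"predicted {sigma / np.sqrt(K):6.2f}")     # Eq. (2-29)
```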
• In practice, most images are displayed using 8 bits (even 24-bit color images consist of three separate 8-bit channels). Thus, we expect image values to be in the range from 0 to 255. When images are saved in a standard image format, such as TIFF or JPEG, conversion to this range is automatic. When image values exceed the allowed range, clipping or scaling becomes necessary. For example, the values in the difference of two 8-bit images can range from a minimum of −255 to a maximum of 255, and the values of the sum of two such images can range from 0 to 510. When converting images to eight bits, many software applications simply set all negative values to 0 and set to 255 all values that exceed this limit. Given a digital image g resulting from one or more arithmetic (or other) operations, an approach guaranteeing that the full range of values is “captured” into a fixed number of bits is as follows. (Q: what to do if a subtraction leads to negative values or a sum exceeds the 255 limit?) First, we perform the operation
gm = g − min(g)
• which creates an image whose minimum value is 0. Then, we perform the operation
gs = K[gm / max(gm)]
• which creates a scaled image, gs, whose values are in the range [0, K]. When working with 8-bit images, setting K = 255 gives us a scaled image whose intensities span the full 8-bit scale from 0 to 255. Similar comments apply to 16-bit images or higher. This approach can be used for all arithmetic operations. When performing division, we have the extra requirement that a small number should be added to the pixels of the divisor image to avoid division by 0.
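The two-step shift-and-scale procedure translates directly into code; a minimal sketch:

```python
import numpy as np

def scale_to_range(g, K=255):
    """Map an arbitrary-valued array into [0, K] via gm = g - min(g),
    gs = K * gm / max(gm)."""
    gm = g - g.min()                  # minimum becomes 0
    gs = K * (gm / gm.max())          # maximum becomes K
    return gs.astype(np.uint8)

diff = np.array([[-255.0, 0.0], [100.0, 255.0]])   # e.g., a difference image
print(scale_to_range(diff))                        # values now span 0..255
```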
Geometric Transformations
• We use geometric transformations to modify the spatial arrangement of pixels in an image. These transformations are called rubber-sheet transformations because they may be viewed as analogous to “printing” an image on a rubber sheet, then stretching or shrinking the sheet according to a predefined set of rules.
• Geometric transformations of digital images consist of two basic operations:
1. Spatial transformation of coordinates.
2. Intensity interpolation that assigns intensity values to the spatially transformed pixels.
• The transformation of coordinates may be expressed as
(x', y') = T{(x, y)}
• where (x, y) are pixel coordinates in the original image and (x', y') are the corresponding pixel coordinates of the transformed image.
• For example, if (x', y') = (x/2, y/2), the transformation shrinks the original image to half its size in both spatial directions.
• Our interest is in so-called affine transformations, which include scaling, translation, rotation, and shearing. The key characteristic of an affine transformation in 2-D is that it preserves points, straight lines, and planes. The equation above can be used to express the transformations just mentioned, except translation, which would require that a constant 2-D vector be added to the right side of the equation. However, it is possible to use homogeneous coordinates to express all four affine transformations using a single 3×3 matrix in the following general form:
[x']   [a11 a12 a13] [x]
[y'] = [a21 a22 a23] [y]
[1 ]   [ 0   0   1 ] [1]
• This transformation can scale, rotate, translate, or shear an image, depending on the values chosen for the elements of matrix A.
• The table below shows the matrix values used to implement these transformations.
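To make the homogeneous-coordinate form concrete, a sketch that builds a few of these 3×3 matrices and composes them (sign and axis conventions vary between texts, so treat the exact matrix entries as one common convention rather than the only one):

```python
import numpy as np

def scale(cx, cy):
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1]], float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)

# Compose by matrix multiplication: rotate 45 degrees, then translate.
A = translate(10, 5) @ rotate(np.pi / 4)
xp, yp, _ = A @ np.array([1.0, 0.0, 1.0])   # point (1, 0) in homogeneous form
print(round(xp, 3), round(yp, 3))           # (10.707, 5.707)
```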
Image Registration
• Image registration is an important application of digital image processing used to align two or more images of the same scene. In image registration, we have available an input image and a reference image. The objective is to transform the input image geometrically to produce an output image that is aligned (registered) with the reference image.
• Examples of image registration include aligning two or more images taken at approximately the same time, but using different imaging systems, such as an MRI (magnetic resonance imaging) scanner and a PET (positron emission tomography) scanner. Or, perhaps the images were taken at different times using the same instruments, such as satellite images of a given location taken several days, months, or even years apart.
• One of the principal approaches for solving the problem just discussed is to use tie points (also called control points). These are corresponding points whose locations are known precisely in the input and reference images. Approaches for selecting tie points range from selecting them interactively to using algorithms that detect these points automatically. Some imaging systems have physical artifacts (such as small metallic objects) embedded in the imaging sensors. These produce a set of known points (called reseau marks or fiducial marks) directly on all images captured by the system. These known points can then be used as guides for establishing tie points.
• The problem of estimating the transformation function is one of modeling. For example, suppose that we have a set of four tie points each in an input and a reference image. A simple model based on a bilinear approximation is given by
x = c1 v + c2 w + c3 vw + c4        (1)
• and
y = c5 v + c6 w + c7 vw + c8        (2)

• During the estimation phase, (v, w) and (x, y) are the coordinates of tie points in the input and reference images, respectively. If we have four pairs of corresponding tie points in both images, we can write eight equations using Eqs. (1) and (2) and use them to solve for the eight unknown coefficients.
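A small sketch of that estimation step with NumPy: four hypothetical tie-point pairs give a 4×4 linear system per coordinate, solved for c1..c4 and c5..c8:

```python
import numpy as np

# Hypothetical tie points: (v, w) in the input image, (x, y) in the reference.
vw = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
xy = np.array([[10, 20], [12, 21], [11, 24], [13, 26]], float)

# Bilinear model x = c1*v + c2*w + c3*v*w + c4 (Eq. 1); same form for y (Eq. 2).
M = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c_x = np.linalg.solve(M, xy[:, 0])   # c1..c4
c_y = np.linalg.solve(M, xy[:, 1])   # c5..c8
print(c_x, c_y)
```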
• The variances of the pixel intensities in the three images are 1154, 2527, and 5469, respectively, and the corresponding standard deviations are 34, 50, and 74. All values were rounded to the nearest integer to make them compatible with pixel intensities, which are integers. As you can see, the higher the value of the standard deviation (or variance), the higher the contrast. Although both carry the same basic information, they differ in that variance is proportional to intensity squared, while standard deviation is proportional to intensity. We are much more familiar with the latter. In addition, standard deviation values are in a range comparable to the intensity values themselves, making the standard deviation much easier to interpret in terms of image contrast. As expected, the shapes of the intensity histograms are related also to image contrast, with the “spread” of the histograms increasing as a function of increasing contrast.
