seminar ppt1
Published by jenish2
https://www.scribd.com/doc/50713023/seminar-ppt1 (07/26/2012)

What is a Digital Image?

An image may be defined as a two-dimensional function f(x,y), where x and y are spatial coordinates. The amplitude of f at any pair of coordinates (x,y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. A digital image is composed of a finite number of elements, referred to as picture elements, image elements, pels, or pixels; pixel is the most widely used term. There are two broad techniques for image enhancement: spatial-domain methods, which refer to the image plane itself and operate by direct manipulation of the pixels in the image, and frequency-domain methods, which are based on modifying the Fourier transform of the image.


1) Spatial-domain techniques: The spatial domain is the aggregate of pixels composing an image, and spatial-domain methods are procedures that operate directly on these pixels. They are denoted by the formula g(x,y) = T[f(x,y)], where f(x,y) is the input image, g(x,y) is the processed image, and T is an operator on f defined over some neighborhood of (x,y). When enhancement at any point in the image depends only on the gray level at that point, the technique is known as point processing.
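
As a hedged illustration of point processing in pure Python (the function names and the choice of thresholding as the operator T are mine, not from the original notes):

```python
# Point processing: each output pixel depends only on the corresponding
# input pixel, g(x,y) = T[f(x,y)]. Images are plain lists of lists.
def point_process(image, T):
    """Apply a point operation T to every pixel of a 2-D image."""
    return [[T(pixel) for pixel in row] for row in image]

# Example point operation: binary thresholding at gray level 128.
threshold = lambda r: 255 if r >= 128 else 0

image = [[10, 200], [150, 90]]
print(point_process(image, threshold))  # [[0, 255], [255, 0]]
```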

Some basic gray-level transformations:

Image negatives: The negative of an image with gray levels in the range [0, L-1] is obtained by the negative transformation s = L - 1 - r.

Log transformation: The general expression is s = c log(1 + r), where c is a constant. It is used to expand the values of dark pixels in an image while compressing the higher-level values.

Power-law transformation: This has the basic form s = c r^γ, where c and γ are positive constants. Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output levels, with the opposite being true for higher values of input levels. The process used to correct this power-law response phenomenon is called gamma correction.
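
A minimal sketch of the three transformations above for a single gray level r (assuming an 8-bit scale, L = 256; function names are illustrative):

```python
import math

L = 256  # number of gray levels (assumed 8-bit image)

def negative(r):
    """Image negative: s = L - 1 - r."""
    return L - 1 - r

def log_transform(r, c=1.0):
    """Log transformation s = c*log(1 + r); expands dark values."""
    return c * math.log(1 + r)

def power_law(r, c=1.0, gamma=0.5):
    """Power-law (gamma) transformation s = c * r**gamma."""
    return c * r ** gamma

print(negative(0), negative(255))  # 255 0
```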

Histogram processing: The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function h(rk) = nk, where rk is the k-th gray level and nk is the number of pixels in the image having gray level rk. In a dark image, the components of the histogram are concentrated on the low side of the gray scale; similarly, the components of the histogram of a bright image are biased toward the high side of the gray scale. Low-contrast images can result from poor illumination.

Contrast stretching: A simple piecewise-linear function gives the contrast-stretching transformation. The idea behind contrast stretching is to increase the dynamic range of the gray levels in the image being processed.

Gray-level slicing: One approach is to display a high value for all gray levels in the range of interest and a low value for all other gray levels.
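
The histogram h(rk) = nk can be computed directly by counting pixels (a pure-Python sketch; the tiny 4-level image is only for illustration):

```python
def histogram(image, L=256):
    """Compute h(r_k) = n_k: count of pixels at each gray level r_k."""
    h = [0] * L
    for row in image:
        for pixel in row:
            h[pixel] += 1
    return h

image = [[0, 0, 1], [2, 1, 0]]
print(histogram(image, L=4))  # [3, 2, 1, 0]
```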

Histogram equalization: Let the variable r represent the gray levels of the image to be enhanced, normalized so that 0 <= r <= 1, with r = 0 representing black and r = 1 representing white. We consider transformations s = T(r) that produce a level s for every pixel value r in the original image, where T(r) satisfies two conditions: 1) T(r) is single-valued and monotonically increasing in the interval [0,1]; and 2) 0 <= T(r) <= 1 for 0 <= r <= 1.

The method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification.

Local enhancement: The histogram methods above are global, in the sense that pixels are modified by a transformation function based on the gray-level content of the entire image. The local procedure is to define a square or rectangular neighborhood and move the center of this area from pixel to pixel. At each location, the histogram of the points in the neighborhood is computed, and either a histogram-equalization or a histogram-specification transformation function is obtained.
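
In discrete form, the equalization transformation is s_k = (L-1) · CDF(r_k), where the cumulative distribution of gray levels plays the role of T(r). A minimal sketch (the rounding policy and integer gray levels are assumptions for illustration):

```python
def equalize(image, L=256):
    """Histogram equalization: map each pixel through the cumulative
    distribution of gray levels, s_k = round((L-1) * CDF(r_k))."""
    h = [0] * L
    for row in image:
        for p in row:
            h[p] += 1
    n = sum(h)                  # total number of pixels
    cdf, running = [], 0
    for count in h:
        running += count
        cdf.append(running / n)  # monotonically increasing, in [0, 1]
    return [[round((L - 1) * cdf[p]) for p in row] for row in image]
```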

Enhancement using arithmetic/logic operations: Arithmetic/logic operations involving images are performed on a pixel-by-pixel basis between two or more images. Logic operations similarly operate on a pixel-by-pixel basis; light represents a binary 1 and dark represents a binary 0, so performing the NOT operation on a black 8-bit pixel, changing all 0s to 1s, produces a white pixel, and intermediate values are processed in the same way. The AND and OR operations are used for masking, sometimes referred to as region-of-interest processing; in terms of enhancement, masking is used primarily to isolate an area for processing. In AND and OR image masks, the higher-order bit planes of an image carry a significant amount of the visually relevant detail.

Image subtraction: The difference between two images f(x,y) and h(x,y) is expressed as g(x,y) = f(x,y) - h(x,y), obtained by computing the difference between all pairs of corresponding pixels from f and h. The key usefulness of subtraction is the enhancement of differences between images.
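
A hedged sketch of pixel-by-pixel subtraction and AND masking (the helper names are mine; 255 plays the role of binary 1 in the mask):

```python
def subtract(f, h):
    """Pixel-by-pixel difference g(x,y) = f(x,y) - h(x,y)."""
    return [[fp - hp for fp, hp in zip(frow, hrow)]
            for frow, hrow in zip(f, h)]

def mask_and(f, mask):
    """AND masking: keep f where the mask is 255, zero it elsewhere."""
    return [[fp & mp for fp, mp in zip(frow, mrow)]
            for frow, mrow in zip(f, mask)]

f = [[100, 50], [200, 25]]
h = [[90, 50], [100, 5]]
print(subtract(f, h))  # [[10, 0], [100, 20]]
```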

Image averaging: Consider a noisy image g(x,y) = f(x,y) + n(x,y), where f(x,y) is the original image and n(x,y) is noise. The objective of the following procedure is to reduce the noise content by adding a set of noisy images {gi(x,y)}. If an image g^(x,y) is formed by averaging K different noisy images,

g^(x,y) = (1/K) Σ (i = 1 to K) gi(x,y),

then E{g^(x,y)} = f(x,y). In some implementations of image averaging it is possible to have negative values when noise is added to an image.

Basics of spatial filtering: The concept of filtering has its roots in the use of the Fourier transform in the so-called frequency domain. The values in a filter subimage are referred to as coefficients, rather than pixels. At each point (x,y), the response of the filter is calculated using a predefined relationship. For linear spatial filtering, the response is given by a sum of products of the filter coefficients and the corresponding image pixels. For a 3×3 mask, the result R of linear filtering with the filter mask at a point (x,y) in the image is

R = w(-1,-1) f(x-1,y-1) + w(-1,0) f(x-1,y) + ... + w(0,0) f(x,y) + ... + w(1,1) f(x+1,y+1).
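
Image averaging of K noisy images can be sketched as (pure Python; the tiny 1×1 "images" are only for demonstration):

```python
def average_images(images):
    """Average K noisy images pixel-by-pixel: g_hat = (1/K) * sum(g_i)."""
    K = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / K for c in range(cols)]
            for r in range(rows)]

noisy = [[[10]], [[14]], [[12]]]   # three 1x1 noisy observations
print(average_images(noisy))       # [[12.0]]
```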

The response R of an m×n mask at any point (x,y) is given by

R = w1 z1 + w2 z2 + ... + w_mn z_mn = Σ (i = 1 to mn) wi zi,

where the w's are the mask coefficients, the z's are the values of the image gray levels corresponding to those coefficients, and mn is the total number of coefficients.

Smoothing spatial filters: These filters are used for blurring and for noise reduction. Blurring is used in preprocessing steps such as the removal of small details from an image prior to object extraction, and the bridging of small gaps in lines or curves. The most obvious application of smoothing is noise reduction.

Smoothing linear filters: The output of a smoothing linear filter is simply the average of the pixels contained in the neighborhood of the filter mask; these filters are also called averaging filters.
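
A minimal averaging-filter sketch, using a 3×3 mask with all coefficients equal to 1/9 (the copy-the-border policy is an assumption for simplicity, not part of the original text):

```python
def box_filter_3x3(image):
    """3x3 averaging (box) filter applied to interior pixels only;
    border pixels are copied unchanged (simplifying assumption)."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            out[x][y] = sum(image[x + i][y + j]
                            for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9
    return out

img = [[9, 9, 9], [9, 0, 9], [9, 9, 9]]
print(box_filter_3x3(img)[1][1])  # 8.0 (the isolated dark pixel is smoothed)
```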

Sharpening spatial filters: The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that has been blurred. Uses of image sharpening vary and include applications ranging from 1) electronic printing, 2) medical imaging, and 3) industrial inspection, to 4) autonomous guidance in military systems. Sharpening filters are based on first- and second-order derivatives. The derivatives of a digital function are defined in terms of differences. We require that a first derivative 1) must be zero in flat segments, 2) must be nonzero at the onset of a gray-level step or ramp, and 3) must be nonzero along ramps; while a second derivative 1) must be zero in flat areas, 2) must be nonzero at the onset and end of a gray-level step or ramp, and 3) must be zero along ramps of constant slope.

Comparing the responses of first- and second-order derivatives, we arrive at the following conclusions: 1) first-order derivatives generally produce thicker edges in an image; 2) second-order derivatives have a stronger response to fine detail, such as thin lines and isolated points; 3) first-order derivatives have a stronger response to a gray-level step; 4) second-order derivatives produce a double response at step changes in gray level.

Isotropic filters are rotation invariant, in the sense that rotating the image and then applying the filter gives the same result as applying the filter to the image first and then rotating the result. The development of the method uses the discrete second derivatives

d²f/dx² = f(x+1,y) + f(x-1,y) - 2f(x,y)
d²f/dy² = f(x,y+1) + f(x,y-1) - 2f(x,y),

so that

∇²f = [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)] - 4f(x,y).

Because the Laplacian is a derivative operator, its use highlights gray-level discontinuities in an image. This tends to produce images that have grayish edge lines and other discontinuities on a dark, featureless background. Thus the basic way in which we use the Laplacian for image enhancement is:

g(x,y) = f(x,y) - ∇²f(x,y) if the center coefficient of the Laplacian mask is negative
g(x,y) = f(x,y) + ∇²f(x,y) if the center coefficient is positive.
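
The discrete Laplacian and the subtraction form of Laplacian sharpening can be sketched as follows (interior pixels only; border handling is an assumption for brevity):

```python
def laplacian(image, x, y):
    """Discrete Laplacian at interior point (x, y):
    f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4*f(x,y)."""
    return (image[x + 1][y] + image[x - 1][y]
            + image[x][y + 1] + image[x][y - 1] - 4 * image[x][y])

def sharpen(image):
    """Laplacian sharpening g = f - lap(f) for interior pixels
    (subtraction: this mask has a negative center coefficient)."""
    out = [row[:] for row in image]
    for x in range(1, len(image) - 1):
        for y in range(1, len(image[0]) - 1):
            out[x][y] = image[x][y] - laplacian(image, x, y)
    return out

img = [[5, 5, 5], [5, 9, 5], [5, 5, 5]]
print(laplacian(img, 1, 1))  # -16  (20 from the neighbors minus 4*9)
```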

2) Image enhancement in the frequency domain: The Fourier transform F(u) of a single-variable continuous function f(x) is defined by the equation

F(u) = ∫ f(x) e^(-j2πux) dx, where j² = -1.

Given F(u), f(x) is recovered by means of the inverse Fourier transform

f(x) = ∫ F(u) e^(j2πux) du.

These two equations comprise the Fourier transform pair, and they extend easily to two variables:

F(u,v) = ∫∫ f(x,y) e^(-j2π(ux+vy)) dx dy,

and similarly for the inverse transform,

f(x,y) = ∫∫ F(u,v) e^(j2π(ux+vy)) du dv.

Writing F(u) = |F(u)| e^(-jφ(u)), the quantity |F(u)| = [R²(u) + I²(u)]^(1/2) is called the magnitude or spectrum of the Fourier transform, and φ(u) = tan⁻¹[I(u)/R(u)] is the phase angle. The power spectrum is P(u) = |F(u)|² = R²(u) + I²(u); the term spectral density is also used to refer to the power spectrum.
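
The discrete analogue of the transform pair can be sketched with a naive 1-D DFT (O(N²), for illustration only; real code would use an FFT):

```python
import cmath

def dft(f):
    """Naive 1-D discrete Fourier transform:
    F(u) = sum over x of f(x) * exp(-j*2*pi*u*x / N)."""
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * u * x / N)
                for x in range(N))
            for u in range(N)]

def power_spectrum(F):
    """P(u) = |F(u)|^2 = R^2(u) + I^2(u)."""
    return [abs(Fu) ** 2 for Fu in F]

F = dft([1, 0, 0, 0])      # a unit impulse has a flat spectrum
print(power_spectrum(F))   # [1.0, 1.0, 1.0, 1.0]
```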

Basics of filtering in the frequency domain:
1) Multiply the input image by (-1)^(x+y) to center the transform.
2) Compute F(u,v), the DFT of the image from (1).
3) Multiply F(u,v) by a filter function H(u,v).
4) Compute the inverse DFT of the result in (3).
5) Obtain the real part of the result in (4).
6) Multiply the result in (5) by (-1)^(x+y).

H(u,v) is called a filter because it suppresses certain frequencies in the transform while leaving others unchanged: G(u,v) = H(u,v) F(u,v). The multiplication of H and F involves two-dimensional functions and is defined on an element-by-element basis.

Some basic filters and their properties: Suppose that we wish to force the average value of an image to zero. The average value of an image is given by F(0,0), which after centering is located at (u,v) = (M/2, N/2). If we set this term to zero in the frequency domain and take the inverse transform, the average value of the resulting image will be zero. We can do this by multiplying all values of F(u,v) by the filter function H(u,v) = 0 if (u,v) = (M/2, N/2), and 1 otherwise.
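
The zero-average filter mask just described can be sketched as a plain array (illustrative only; it assumes the centered-transform convention from step (1), not a full filtering pipeline):

```python
def notch_filter(M, N):
    """Mask with H(u,v) = 0 at the centered dc term (M/2, N/2) and
    1 elsewhere; multiplying F by it forces the image average to zero."""
    return [[0 if (u, v) == (M // 2, N // 2) else 1
             for v in range(N)]
            for u in range(M)]

H = notch_filter(4, 4)
print(H[2][2], H[0][0])  # 0 1
```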

The filter just described is called a notch filter, because it is a constant function with a hole (notch) at the origin.

Smoothing frequency-domain filters: Edges and other sharp transitions in the gray levels of an image contribute significantly to the high-frequency content of its Fourier transform. Low frequencies are responsible for the general gray-level appearance of an image over smooth areas, while high frequencies are responsible for edges and noise. Hence smoothing is achieved in the frequency domain by attenuating the high-frequency components in the transform of a given image. There are basically three types of lowpass filters: 1) ideal, 2) Butterworth, and 3) Gaussian. These three filters cover the range from very sharp to very smooth filter functions. The Butterworth filter has a parameter called the filter order: for high values of this parameter the Butterworth filter approaches the form of the ideal filter, while for lower order values it has a smooth shape similar to the Gaussian filter.

Ideal lowpass filters: The ideal lowpass filter (ILPF) has the transfer function

H(u,v) = 1 if D(u,v) <= D0
H(u,v) = 0 if D(u,v) > D0,

where D0 is a specified nonnegative quantity and D(u,v) is the distance from the point (u,v) to the origin of the frequency rectangle. For a centered transform, the distance from any point (u,v) to the center of the Fourier transform is given by D(u,v) = [(u - M/2)² + (v - N/2)²]^(1/2). For an ideal lowpass filter, the point of transition between H(u,v) = 1 and H(u,v) = 0 is called the cutoff frequency. The Fourier transforms of an original image f(x,y) and a blurred image g(x,y) are related in the frequency domain by G(u,v) = H(u,v) F(u,v), where H(u,v) is the filter function and G and F are the Fourier transforms of the two images. In the spatial domain, g(x,y) = h(x,y) * f(x,y), where h(x,y) is the inverse Fourier transform of the filter function.
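
An ILPF mask built from the distance formula above can be sketched as (plain Python, centered-transform convention assumed):

```python
import math

def ilpf(M, N, D0):
    """Ideal lowpass filter mask: H = 1 where D(u,v) <= D0, else 0.
    D(u,v) is measured from the center (M/2, N/2) of the rectangle."""
    H = []
    for u in range(M):
        row = []
        for v in range(N):
            D = math.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)
            row.append(1 if D <= D0 else 0)
        H.append(row)
    return H

H = ilpf(8, 8, 2)
print(H[4][4], H[0][0])  # 1 0  (center passes, corner is cut off)
```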

Butterworth lowpass filters: The transfer function of a Butterworth lowpass filter (BLPF) of order n, with cutoff frequency at a distance D0 from the origin, is defined as

H(u,v) = 1 / (1 + [D(u,v)/D0]^(2n)),

where D(u,v) = [(u - M/2)² + (v - N/2)²]^(1/2). Unlike the ILPF, the BLPF transfer function does not have a sharp discontinuity that establishes a clear cutoff between passed and filtered frequencies. A Butterworth filter of order 1 has neither ringing nor negative values. A filter of order 2 does show mild ringing and small negative values, but they are certainly less pronounced than in the ILPF; ringing can, however, become a significant factor in filters of higher order. In general, BLPFs of order 2 are a good compromise between effective lowpass filtering and acceptable ringing characteristics.
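
The BLPF transfer function as a function of distance D can be sketched directly (note H = 1/2 exactly at the cutoff D = D0, and higher orders approach the ideal filter):

```python
def blpf(D, D0, n):
    """Butterworth lowpass transfer function H = 1 / (1 + (D/D0)^(2n))."""
    return 1.0 / (1.0 + (D / D0) ** (2 * n))

print(blpf(0, 10, 2))   # 1.0  (dc passes fully)
print(blpf(10, 10, 2))  # 0.5  (half amplitude at the cutoff D = D0)
```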

Gaussian lowpass filters: The form of these filters is

H(u,v) = e^(-D²(u,v)/2σ²),

where D(u,v) is the distance from the origin of the (centered) Fourier transform and σ is a measure of the spread of the Gaussian curve. By letting σ = D0 we can write

H(u,v) = e^(-D²(u,v)/2D0²),

where D0 is the cutoff frequency. When D(u,v) = D0, the filter is down to 0.607 of its maximum value.

Additional examples of lowpass filtering: 1) from the field of machine perception, with application to character recognition; 2) from the printing and publishing industry, where lowpass filtering is a staple used for numerous preprocessing functions, including unsharp masking, and where cosmetic processing is another use; and 3) the processing of satellite and aerial images.
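
The GLPF transfer function, evaluated at a distance D, confirms the 0.607 figure at the cutoff (a one-function sketch):

```python
import math

def glpf(D, D0):
    """Gaussian lowpass transfer function H = exp(-D^2 / (2*D0^2))."""
    return math.exp(-D ** 2 / (2.0 * D0 ** 2))

print(glpf(0, 10))             # 1.0
print(round(glpf(10, 10), 3))  # 0.607  (value at the cutoff D = D0)
```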

Sharpening frequency-domain filters: An image can be blurred by attenuating the high-frequency components of its Fourier transform, because edges and other abrupt changes in gray level are associated with high-frequency components. Conversely, image sharpening can be achieved in the frequency domain by a highpass filtering process, which attenuates the low-frequency components without disturbing the high-frequency information. The transfer function of the highpass filter is Hhp(u,v) = 1 - Hlp(u,v), where Hlp(u,v) is the transfer function of the corresponding lowpass filter.

Ideal highpass filters: A 2-D ideal highpass filter is defined as

H(u,v) = 0 if D(u,v) <= D0
H(u,v) = 1 if D(u,v) > D0,

where D0 is the cutoff distance measured from the origin of the frequency rectangle.
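
The Hhp = 1 - Hlp relationship, applied to the ideal case, can be sketched in two lines:

```python
def ihpf(D, D0):
    """Ideal highpass via H_hp(u,v) = 1 - H_lp(u,v)."""
    hlp = 1 if D <= D0 else 0  # ideal lowpass value at distance D
    return 1 - hlp

print(ihpf(5, 10), ihpf(15, 10))  # 0 1
```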

Butterworth highpass filters: The transfer function of a Butterworth highpass filter (BHPF) of order n, with cutoff-frequency locus at a distance D0 from the origin, is given by

H(u,v) = 1 / (1 + [D0/D(u,v)]^(2n)).

As in the case of lowpass filters, we can expect Butterworth highpass filters to behave more smoothly than IHPFs. Comparing the performance of a BHPF of order 2 with an IHPF using the same values of D0 shows that the transition into higher values of the cutoff frequency is much smoother with the BHPF.

Gaussian highpass filters: The transfer function of the Gaussian highpass filter with cutoff-frequency locus at a distance D0 from the origin is given by

H(u,v) = 1 - e^(-D²(u,v)/2D0²).

The results obtained in this way are smoother than with the previous two filters; even the filtering of smaller objects and thin bars is cleaner with the Gaussian filter.

Homomorphic filtering: This is used to develop a frequency-domain procedure for improving the appearance of an image by simultaneous gray-level range compression and contrast enhancement. An image f(x,y) can be expressed as the product of illumination and reflectance components, f(x,y) = i(x,y) r(x,y). The Fourier transform of a product of two functions is not separable: F[f(x,y)] ≠ F[i(x,y)] F[r(x,y)]. Suppose instead we define z(x,y) = ln f(x,y) = ln i(x,y) + ln r(x,y). Then

F{z(x,y)} = F{ln f(x,y)} = F{ln i(x,y)} + F{ln r(x,y)},

or Z(u,v) = Fi(u,v) + Fr(u,v), where Fi(u,v) and Fr(u,v) are the Fourier transforms of ln i(x,y) and ln r(x,y), respectively. Filtering Z(u,v) with a filter function H(u,v) gives

S(u,v) = H(u,v) Z(u,v) = H(u,v) Fi(u,v) + H(u,v) Fr(u,v),

and in the spatial domain s(x,y) = F⁻¹[S(u,v)].
