UNIT-1: DIGITAL IMAGE FUNDAMENTALS AND IMAGE TRANSFORMS

DIGITAL IMAGE FUNDAMENTALS & IMAGE TRANSFORMS: Digital Image Fundamentals, Sampling and Quantization, Relationship between Pixels. IMAGE TRANSFORMS: 2-D FFT, Properties, Walsh Transform, Hadamard Transform, Discrete Cosine Transform, Haar Transform, Slant Transform, Hotelling Transform.

LEARNING OBJECTIVES
1. Digital image fundamentals
2. Sampling and quantization
3. Relationship between pixels
4. 2-D FFT and its properties
5. Image transforms such as Walsh, Hadamard, Discrete cosine, Haar, Slant and Hotelling (KL).

INTRODUCTION
An image can be considered as a two-dimensional plane in which the variation of intensity or brightness at each point depends upon the scene. An image of any scene can be stored, processed, displayed and recorded by an image processing system.

Images are categorized into two types: analog images and digital images. An analog image is composed of a continuous range of values indicating position and intensity, whereas a digital image is composed of a finite number of image elements called picture elements, or pixels. The processing of a digital image by means of a digital computer is known as digital image processing.

SHORT QUESTIONS WITH SOLUTIONS

Q1. Write in brief about an image.
Ans:
An image can be considered as a two-dimensional plane in which the variation of intensity or brightness at each point depends upon the scene. An image of any scene can be stored, processed, displayed and recorded by an image processing system.

Images are categorized into two types: analog images and digital images. An analog image is composed of a continuous range of values indicating position and intensity. A digital image is composed of a finite number of image elements called picture elements, or pixels.

Q2. Compare and contrast digital image and binary image.
Ans: Model Paper
The differences between a digital image and a binary image are mentioned below.
Digital Image
1. An image is said to be a digital image if the amplitude of the image and the spatial coordinates are discrete quantities.
2. A digital image is composed of a finite number of elements called image elements, picture elements or pixels.
3. Sampling and quantization are performed on digital images.

Binary Image
1. An image is said to be a binary image if the amplitude of the image varies only between 1 and 0.
2. A binary image can be represented by a set of triples, i.e., by a set of maximal blocks of 1-pixels only.
3. In binary images, sampling and quantization are also possible.

Q3. What is digital image processing? List out the fundamental steps involved in digital image processing. (Model Paper, Q1)
OR
What is digital image processing? (Nov./Dec.-18, (R15)) (Refer only Digital Image Processing)
Ans:
The processing of a digital image by means of a digital computer is known as digital image processing.

Fundamental Steps Involved in Digital Image Processing
1. Image acquisition
2. Image enhancement
3. Image restoration
4. Color image processing
5. Wavelets and multiresolution processing
6. Compression
7. Morphological processing
8. Segmentation
9. Representation and description.

Q4. Mention the components of an image processing system. (Nov./Dec., (R13))
Ans:
The main components of an image processing system are,
1. Image sensor
2. Specialized hardware
3. Computer
4. Software
5. Mass storage
6. Display
7. Hardcopy devices
8. Network.

Q7. Define Weber ratio.
Ans:
The Weber ratio is defined as the ratio of the increment of illumination (ΔI_c) that is discriminable 50% of the time to the background illumination I:

Weber ratio = ΔI_c / I

If the ratio is small, it represents good brightness discrimination; if it is large, it represents poor brightness discrimination.

Q5. List the applications of digital image processing.
Ans:
The applications of digital image processing are,
1. In remote sensing, it is used for monitoring environmental conditions like temperature, humidity, etc.
2. In biomedicine, it is used for the analysis of X-ray images and to examine the presence or absence of various diseases.
3. In industrial automation, it is used for testing and monitoring automatic and non-destructive systems, and in VLSI for checking PCBs and their manufacturing.
4. In office automation, for recognizing optical characters and identifying the address area on an envelope.
5. In criminology, it is used for recognizing faces and identifying fingerprints.
6. In information technology, for video conferencing and data transmission of images.
7. In astronomy, for recognizing and analyzing objects.
8. In military applications, for detection of targets and navigation of aircraft.

Q6. Assume that a 10 m high structure is observed from a distance of 50 m. What is the size of the retinal image?
Ans:
Given that,
Height of the structure, h = 10 m
Distance from where the structure is observed, d = 50 m
Size of the retinal image, x = ?
Taking the distance between the lens centre and the retina as 17 mm, the size of the retinal image is given by,
h/d = x/17
10/50 = x/17
x = (10 × 17)/50 = 3.4 mm
Therefore, the size of the retinal image is 3.4 mm.

Q8. Define sampling and quantization. (Nov./Dec.-17, (R13), Q1)
OR
Define image sampling. (Dec.-19, (R18)) (Refer only Sampling)
Ans:
Sampling: The process of digitizing the coordinate values of an image is known as sampling.
Quantization: The process of digitizing the amplitude values of an image is known as quantization.

Q9. Define spatial and gray-level resolution.
Ans:
Spatial Resolution: Spatial resolution is a measure of the number of pixels per unit distance in the image.
Gray-level Resolution: Gray-level resolution refers to the smallest discernible change in gray level.

Q10. Mention the various distance measures used in image processing.
Ans:
The various distance measures used in image processing include,
1. Euclidean distance
2. City-block distance
3. Chessboard distance
4. D_m-distance.

Q11. What is city-block distance? (Nov./Dec.-16, (R13), Q1(b))
Ans:
The city-block distance between the pixels p (with coordinates (x, y)) and q (with coordinates (s, t)) is defined as,

D_4(p, q) = |x − s| + |y − t|

In this case, the pixels having a D_4 distance less than or equal to some fixed value r from (x, y) form a diamond centred at (x, y).
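As a small illustrative sketch (ours, not part of the text), the three main distance measures can be computed for a pair of pixel coordinates; the helper names are our own:

```python
import math

def euclidean(p, q):
    # D_e(p, q) = sqrt((x - s)^2 + (y - t)^2)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def city_block(p, q):
    # D_4(p, q) = |x - s| + |y - t|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):
    # D_8(p, q) = max(|x - s|, |y - t|)
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(euclidean(p, q))   # 5.0
print(city_block(p, q))  # 7
print(chessboard(p, q))  # 4
```

Note that the pixels with D_4 ≤ r from a point form a diamond about that point, while those with D_8 ≤ r form a square.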
Q12. What is gray-level interpolation? (Nov./Dec.-16, (R13), Q1(f))
Ans:
Gray-level interpolation is a technique in which the gray-level values of the nearest-neighbour pixels are used to assign a value to a pixel whose coordinates are not restricted to the available grid locations.

Q13. What are array and matrix operations?
Ans:
Array Operations: An array operation is performed on one or more images on a pixel-by-pixel basis.
Matrix Operations: Matrix operations are performed on one or more images that are represented in the form of matrices.

Q14. What is meant by image subtraction? (Model Paper, Q1(a) | Nov./Dec.-16, (R13), Q1(c))
OR
List the various areas of application of image subtraction. (May-19, (R16), Q1(c))
Ans:
Image subtraction is defined as the difference between two images f(x, y) and h(x, y), i.e.,

g(x, y) = f(x, y) − h(x, y)

Q15. What are the logic operations involving binary images? (Nov./Dec.)
Ans:
The essential logic operations involved in binary images are,
1. Logical OR
2. Logical AND
3. Logical NOT (complement).

Q16. What is an image transform? List the applications of transforms. (Dec.-18)
OR
Define image transform and mention its applications.
Ans:
Image Transform
An image transform is an important tool used in image processing. It is used comprehensively to characterize images mathematically, both for understanding and for designing image processing procedures.
Applications
The practical applications of image transforms are,
1. A transform converts spatial information into frequency-domain information, which allows many operations to be performed simply.
2. It is applicable in gaining valuable knowledge in various areas such as sampling.
3. Transforms are used for preserving signal energy: in the case of a unitary transform, the vector f is merely rotated in N-dimensional space.
4. Image transforms are used to design faster algorithms.
5. In transforms, a large amount of the average energy of the image is packed into a few components; this is known as the property of energy compaction.
6. The quality of the image is improved by using small frequency transforms.

A unitary transform is said to be orthogonal when its transform matrix contains only real numbers. A matrix is said to be Hermitian when it is equal to its own conjugate transpose.

Image transforms can be classified as follows:
1. Orthogonal sinusoidal basis functions: Fourier transform, Discrete cosine transform, Discrete sine transform.
2. Non-sinusoidal orthogonal basis functions: Haar transform, Walsh transform, Hadamard transform, Slant transform.
3. Basis functions depending on the statistics of the input signal: KL transform, Singular value decomposition.
Figure: Classification of Image Transforms

Q18. Define discrete Fourier and inverse discrete Fourier transforms.
Ans:
The discrete Fourier transform of a two-dimensional function f(x, y) of size M × N is given by,

F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux/M + vy/N)}

where u = 0, 1, 2, ..., M−1 and v = 0, 1, 2, ..., N−1.

The inverse discrete Fourier transform of the two-dimensional function F(u, v) of size M × N is given by,

f(x, y) = (1/MN) Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) e^{j2π(ux/M + vy/N)}

where x = 0, 1, 2, ..., M−1 and y = 0, 1, 2, ..., N−1.

The variables u and v represent transform or frequency variables, while x and y represent spatial coordinates.

Q19. Define Walsh transform and list its properties.
OR
Define Walsh transform. (Nov./Dec.-18, (R18), Q1(b)) (Refer only Definition)
OR
List the properties of Walsh transform. (Nov./Dec.-17, (R13), Q1) (Refer only Properties)
Ans:
Walsh Transform
The Walsh transform is an orthogonal transform that is frequently used in different spectral methods of image processing.
Properties of Walsh Transform
1. The Walsh transform is a series expansion of basis functions whose values are only +1 or −1 and which form symmetrical square waves.
2. These transforms can be implemented very efficiently in digital systems.
3. Except for a constant multiplicative factor of 1/N, the forward and inverse Walsh transform kernels of 1-D signals are identical.
4. In the case of 2-D signals, the forward and inverse Walsh transform kernels are identical in all cases. This is due to the symmetry and orthogonality of the Walsh transform matrix.
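As a quick numerical sketch (our own illustration, not part of the text), the 2-D DFT double sum of Q18 can be evaluated directly and checked against NumPy's fft2, and the symmetry/orthogonality of a Hadamard kernel (built here by the Sylvester construction) can be verified for a small order:

```python
import numpy as np

def dft2_direct(f):
    # F(u, v) = sum_x sum_y f(x, y) * exp(-j*2*pi*(u*x/M + v*y/N))
    M, N = f.shape
    x = np.arange(M).reshape(M, 1)
    y = np.arange(N).reshape(1, N)
    F = np.zeros((M, N), dtype=complex)
    for u in range(M):
        for v in range(N):
            F[u, v] = np.sum(f * np.exp(-2j * np.pi * (u * x / M + v * y / N)))
    return F

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 4))
# The direct double sum agrees with the library FFT.
assert np.allclose(dft2_direct(f), np.fft.fft2(f))

H = hadamard(4)
assert np.allclose(H, H.T)                  # symmetric kernel
assert np.allclose(H @ H.T, 4 * np.eye(4))  # H H^T = N I (orthogonal rows)
print("2-D DFT and Hadamard checks passed")
```

Since H is symmetric and H·H = N·I, the inverse kernel equals the forward kernel up to a 1/N factor, which is the property stated for the Walsh kernels.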
PART-B: ESSAY QUESTIONS WITH SOLUTIONS

1.1 DIGITAL IMAGE FUNDAMENTALS
1.1.1 Introduction

Q20. Explain the origin of digital image processing.
Ans:
Digital image processing was first introduced in the 1920s with the Bartlane cable picture transmission system, used for the transmission of pictures. It saved time by transmitting a picture in a few hours (less than three hours) rather than in more than a week. Digital image processing was employed in the newspaper industry for the transmission of pictures, which were first transmitted by means of a submarine cable between London and New York.

However, there were two major problems in enhancing the visual quality of these early digital pictures:
1. An appropriate selection of the printing procedure
2. The distribution of intensity levels.

To overcome these problems, photographic reproductions were made from tapes perforated at the telegraph receiving terminal, which improved the picture quality both in tonal quality and in resolution.

In 1929, the image coding capability of the Bartlane system was increased from 5 gray levels to 15 gray levels.

In spite of various developments in technology, these early pictures are still not considered digital image processing results. For complete digital processing of an image, digital computers were introduced, with the greater storage and computational power required for digital image processing. In addition, technologies that support data storage, display and transmission were also developed.

In the year 1940, John von Neumann introduced two key concepts:
1. A memory to hold a stored program and data
2. Conditional branching.
With von Neumann's key concepts as a foundation, a series of key inventions followed that made computers powerful enough to be used for digital image processing. The inventions and developments in technology, by year, are listed in the table below.

Year — Invention and Development
1948 — Transistor at Bell Laboratories
1950s and 1960s — High-level programming languages COBOL and FORTRAN
1958 — Integrated circuit at Texas Instruments
1960s — Operating systems
1970s — Microprocessor by Intel
1970s — Miniaturization of components, starting with large-scale integration
1980s — Very Large Scale Integration (VLSI)
1981 — IBM personal computer
Table

In addition to developments in these technologies, developments also took place in the areas of mass storage and display systems, which are basic requirements of digital image processing.

In the early 1960s, the first powerful computers were used for digital image processing, mostly in the space program. In 1964, pictures of the moon transmitted by Ranger 7 were processed by computer to correct the various types of image distortion inherent in the on-board television camera.

Q21. Explain the basic principle of imaging in different bands of the electromagnetic spectrum.
Ans:
Imaging is performed in different bands of the electromagnetic spectrum, as indicated in figure (1):
(a) Gamma rays (b) X-rays (c) Ultraviolet rays (d) Infrared rays (e) Microwaves (f) Radio waves
Figure (1)

Gamma Rays
Gamma rays are used mainly in nuclear medicine and in astronomical observations. In nuclear medicine, gamma rays are very helpful in detecting infections and tumors: the patient is injected with a radioactive isotope that emits gamma rays as it decays. Positron Emission Tomography (PET) is another type of nuclear imaging; in this case, no external source of X-ray energy is used.
Figure (2)
Figure (2a) shows the image of a complete bone scan obtained by gamma-ray imaging. Figure (2b) shows the application of gamma rays in astronomy: the Cygnus Loop imaged in gamma rays, the remnant of a star that exploded about 15,000 years ago.

X-rays
X-rays are used mainly in medical diagnosis and in industry, and are among the oldest sources used for imaging. X-rays are generated using a vacuum tube containing a cathode and an anode, known as an X-ray tube. Images of blood vessels can be obtained using X-rays; this process is known as angiography. Another method used in medical imaging with X-rays is Computerized Axial Tomography (CAT). X-rays also find application in industrial processes, where they are used to analyze circuit boards.

Ultraviolet Rays
Ultraviolet rays are used in imaging. Their applications include lithography, microscopy, astronomy, biological imaging and industrial inspection.

Imaging in the Visible and Infrared Bands
The process of converting invisible infrared images into visible images is called infrared imaging. Normal human vision can see only visible light, which is a small part of the electromagnetic spectrum. Any body whose temperature is above absolute zero, including warm-blooded animals, produces infrared radiation. The infrared radiation emitted by objects is detected and picked up by long-wave thermal infrared imagers. The thermal imager lens directs the infrared rays onto an infrared sensor array, which can contain several thousand sensors. These transform the infrared energy into electrical signals, which are then converted into an image. The infrared band finds applications with police, fire fighters and search-and-rescue teams.

Microwaves
Microwaves are used for detecting invisible objects, such as embedded dowels, and for examining structural elements in civil engineering. Microwaves find their main application in RADAR.
Imaging in the Radio Band
The applications of radio-band imaging are similar to those of gamma rays, i.e., medicine and astronomy. Radio waves are used in MRI (Magnetic Resonance Imaging).

Q22. Describe neatly all the components of a general-purpose image processing system. What are the fundamental steps in digital image processing? (Model Paper, Q2)
OR
List and explain the fundamental steps in digital image processing. (Nov./Dec.-17, (R13), Q3(a)) (Refer only Fundamental Steps in Image Processing)
OR
With a neat block diagram, explain the fundamental steps in digital image processing. (Nov./Dec.-16, (R13), Q2(b))
OR
Discuss the fundamental steps of digital image processing. (Nov.-15) (Refer only Fundamental Steps in Image Processing)
OR
Explain the fundamental steps in image processing. (Refer only Fundamental Steps in Image Processing)
Ans:
Components of an Image Processing System
The general block diagram of an image processing system and its elements is shown in figure (1): image sensor, specialized hardware, computer, software, mass storage, display, hardcopy and network.
Figure (1)

The first element is the image sensor, which is used to obtain the digital image. It consists of a sensing device, which converts the incident light energy into a proportional electrical signal, and a digitizer, which converts the analog electrical signal to a digital signal. For example, the image sensor of a digital camera consists of a sensing device that produces an electrical output proportional to the incident light and a digitizer that converts these outputs into digital data.

Specialized hardware is used to improve the speed of the image processing system and is used only for special applications. It consists of a digitizer and an ALU. The digitizer obtains digital data from the sensor output, and the ALU performs primitive operations such as addition and subtraction on the images.
This unit performs functions that require fast data throughput, which cannot be handled by the main computer, and is called the front-end subsystem.

The computer used can be anything from a PC to a supercomputer. For dedicated applications, specially designed computers are used to achieve the desired level of performance. Software consisting of specialized modules is required for the computer to perform specific tasks. The memory required to store images is large, and hence mass storage is needed. The digital storage used in an image processing system consists of,
1. Short-term storage, used during the processing of images. Computer memory (RAM) or a frame buffer can be used as short-term storage.
2. On-line storage, used when frequent recall of images is required. Magnetic disks or optical media can be used as on-line storage.
3. Archival storage, used to store images that are accessed infrequently. Magnetic tapes and optical disks housed in jukeboxes can be used for archival storage.

To present the image on a screen, a display is used. Colour monitors are commonly used nowadays, driven by the outputs of the image and graphics card, which is an integral part of the computer. In order to record images permanently, hardcopy devices such as laser printers, film cameras and inkjet units are used. For communication between several computers, a network is used to connect them. The bandwidth required to transmit images is large, and hence a good network architecture is required, especially for communication with remote sites through the internet.

Fundamental Steps in Image Processing
Digital image processing methods are mainly described in two ways:
1. Both inputs and outputs are images.
2. The input is an image and the outputs are attributes of that image.
In general, the fundamental steps in image processing can be described by the illustration shown in figure (2).
Figure (2): Fundamental Steps in Digital Image Processing (image acquisition, image enhancement, image restoration, colour image processing, compression, morphological processing, segmentation, representation and description, and object recognition, with a knowledge base interacting with all modules)

Image Acquisition
Image acquisition is the process of acquiring an image using image sensor equipment. The signals produced by sensors such as a line-scan camera or a TV camera are digitized by the image sensor equipment.

Image Enhancement
Image enhancement is a technique used to improve the quality of an image for better viewing.

Image Restoration
Image restoration is the process of recovering the original image from an unclear or blurred image.

Colour Image Processing
This step deals with the colour information and colour characteristics of an image.

Compression
This technique is used to reduce the storage required for an image file without lowering the quality of the image.

Morphological Processing
This is a process in which operations are performed on an image to extract the desired shape.

Segmentation
The process of dividing an image into subimages is called segmentation. It is mainly used to make analysis of the images easier.

Representation and Description
This step processes the raw data into a form suitable for further computer processing. Representation is mainly of two types:
(i) Boundary Representation: The representation of an image by its external features, such as its boundary, is called boundary representation.
(ii) Regional Representation: The representation of an image by its internal features, such as texture, is called regional representation.
Description: Description is also called feature selection. It describes the features of an object, such as its length and the size of its pixels.

Object Recognition
In order to identify an object, a label or tag is allocated to it on the basis of the features of the object.

Knowledge Base
The role of the knowledge base is to control and manage the interactions between the processing modules.

Q23. Explain how a digital image is formed.
Ans:
An image can be considered as a two-dimensional plane in which the variation of intensity or brightness at each point depends upon the scene. An image of any scene can be stored, processed, displayed and recorded by an image processing system. A typical example of displaying an image of a scene by an image processing system is shown in the figure.
Figure: Display of an Image by an Image Processing System

The image sensor converts the light received into a proportional electrical signal. When light from the sun falls on the scene, it is reflected towards the image processing system; the image sensor then converts the incident light energy into an electrical signal. This signal is in analog form, and for processing by a digital computer it is converted to digital form by a digitizer. The processing of the digital image is then done by the computer, and the resultant image can be displayed or recorded. In this example, the digital image is displayed. Thus, the formation of a digital image can be observed on the display.

Q24. Explain the basic understanding of human visual perception in detail.
OR
Explain the elements of visual perception.
Ans:
Digital image processing is built on mathematical and probabilistic formulations, but human visual perception is of equal importance. The following study describes human visual perception.

Structure of the Human Eye
The horizontal cross-section of the human eye is shown in figure (1). Its shape is nearly a sphere, with a diameter of about 20 mm.
Figure (1): Cross-section of a Human Eye

The eye is enclosed by an outer cover consisting of the cornea and sclera, together with the retina and choroid. The uppermost layer of the eye is the cornea, which is a tough transparent tissue. The sclera is an opaque membrane consisting of a fibrous coat, and it covers the remainder of the eye.
The next layer below the sclera is the choroid, which contains a network of blood vessels that provide nutrition to the eye. The choroid layer is divided into two sublayers:
1. The ciliary body
2. The iris diaphragm.

The lens of the eye consists of fibrous cells attached to the ciliary body and contains 60 to 70% water. The central opening of the iris diaphragm allows light to enter the eye, varying in diameter from about 2 to 8 mm. The front and back of the iris diaphragm contain the visible pigment and a black pigment, respectively.

The retina is the innermost layer of the eye, and it contains rods and cones. Cones provide sharp, effective vision, while rods provide less sharp (blurred) vision in dim light.

Image Formation in the Eye
Flexibility is the main difference between the lens of the eye and an ordinary optical lens. The radius of curvature of the posterior surface of the lens is less than the radius of its anterior surface, as shown in figure (2). The tension in the fibres of the ciliary body controls the shape of the lens, and the controlling muscles allow the lens to become flatter in order to focus on distant objects.
Figure (2)

Brightness Adaptation and Discrimination
The range of light intensities to which the human eye can adapt is enormous. Figure (3) shows the subjective brightness plotted against the log of intensity (in mL), including the scotopic threshold.
Figure (3)

Figure (4) shows the basic experimental setup used to characterize brightness discrimination, where ΔI represents the increment of illumination.
Figure (4)

Q25. Discuss optical illusions with examples. (Nov./Dec.-16, (R13), Q3)
Ans:
Optical Illusion
Optical illusions are wrong perceptions by the eye of the geometric properties of an object. Examples that illustrate optical illusion are as follows.

Example 1
Figure (1)
Although no lines actually trace out an exact square, the outline of a square is perceived.

Example 2
Figure (2)
This is another example of optical illusion, in which one perceives a circular shape from a few lines drawn outward.

Example 3
Figure (3)
From figure (3) it is clear that, even though the two horizontal line segments are of the same length, the first appears shorter than the second. This is an optical illusion in which the eye perceives wrongly.
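As a small numerical sketch (ours, not the text's) of the Weber ratio defined in Q7 and measured by the setup of figure (4); the 0.1 cut-off used for classification below is purely an illustrative assumption:

```python
def weber_ratio(delta_i, background_i):
    # Weber ratio = delta_I_c / I: the increment of illumination
    # discriminable 50% of the time against background illumination I.
    return delta_i / background_i

# A small ratio indicates good brightness discrimination, a large
# ratio indicates poor discrimination (0.1 cut-off is illustrative).
for delta_i, background_i in [(0.02, 1.0), (0.5, 1.0)]:
    r = weber_ratio(delta_i, background_i)
    print(r, "good" if r < 0.1 else "poor")
```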
Q26. Write short notes on light and the electromagnetic spectrum.
Ans:
In the year 1666, Sir Isaac Newton discovered that when a beam of sunlight passes through a glass prism, the emerging beam of light contains a continuous spectrum of colours ranging from violet to red.

Electromagnetic Spectrum
The electromagnetic spectrum can be described in terms of wavelength, frequency or energy. The relation between wavelength and frequency is given as,

λ = c/ν ... (1)

where,
c — speed of light (2.998 × 10^8 m/s)
λ — wavelength
ν — frequency.

The energy of the various components of the electromagnetic spectrum is expressed as,

E = hν ... (2)

where h is Planck's constant.

The electromagnetic spectrum runs from gamma rays through X-rays, ultraviolet, the visible band and infrared to microwaves and radio waves. The visible spectrum extends from about 0.4 × 10^−6 m (violet) through blue, green, yellow and orange to about 0.7 × 10^−6 m (red), as shown in the figure below.
Figure: Electromagnetic Spectrum

Electromagnetic waves can be visualized either as propagating sinusoidal waves of wavelength λ, or as a stream of massless particles travelling at the speed of light, each carrying a bundle of energy called a photon. From equation (2) it can be noticed that energy is directly proportional to frequency; thus gamma rays have higher energy than the other bands, and radio waves have the lowest.

Light
Light is the type of electromagnetic radiation that is sensed by the human eye. The speed of light is 2.998 × 10^8 m/s. The colour spectrum is mainly divided into six broad regions: violet, blue, green, yellow, orange and red.

There are two types of light:
1. Monochromatic Light: Monochromatic light is void of colour and is defined in terms of its intensity or gray level, which ranges from white through gray to black. Hence monochromatic images are called gray-scale images.
2. Chromatic Light: Chromatic light contains colours from violet to red and is described by its frequency.
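Equations (1) and (2) can be exercised numerically; the sketch below is our own, using SI units and the constants quoted in the text:

```python
C = 2.998e8      # speed of light, m/s
H = 6.626e-34    # Planck's constant, J*s

def frequency(wavelength_m):
    # nu = c / lambda, rearranging equation (1)
    return C / wavelength_m

def photon_energy(wavelength_m):
    # E = h * nu, from equation (2)
    return H * frequency(wavelength_m)

# Red edge of the visible band, lambda = 0.7e-6 m
nu = frequency(0.7e-6)      # ~4.28e14 Hz
E = photon_energy(0.7e-6)   # ~2.84e-19 J
print(nu, E)
```

Repeating the calculation for a shorter wavelength gives a higher frequency and photon energy, which is why gamma rays are the most energetic band.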
To describe the quality of chromatic light, three basic quantities are used:
(i) Radiance: Radiance is the total amount of energy that comes out of a light source. Its unit is the watt (W).
(ii) Luminance: Luminance is the amount of energy that an observer perceives from a light source. Its unit is the lumen (lm).
(iii) Brightness: Brightness cannot be measured; it is a subjective descriptor of light perception.

Q27. Explain the various methods of image sensing.
Ans:
An image is formed by combining source illumination with the reflection or absorption of energy by the elements of the scene being imaged; together, the source illumination and the scene generate the 3-D scene to be imaged. For instance, the source illumination may be generated by a source of electromagnetic energy such as a radar, infrared or X-ray system, and it can also come from less traditional sources such as ultrasound. Scene elements can be common objects, but can equally be buried rock formations.

The illumination energy is reflected from, or transmitted through, objects depending upon the nature of the source. For example,
1. Light is reflected from a planar surface.
2. A diagnostic X-ray image is obtained by passing X-rays through a patient's body.
3. In a few cases, a photoconverter converts the reflected or transmitted energy into visible light.

The incoming energy is converted into digital images using the sensing arrangements illustrated in the figures below.
Figure (a): Single Imaging Sensor
Figure (b): Line Sensor
Figure (c): Array Sensor

The most commonly used sensor is the single imaging sensor shown in figure (a). The general arrangement of image sensing elements used most frequently is the sensor strip, illustrated in figure (b); motion perpendicular to the strip provides imaging in the other direction. Figure (c) represents the arrangement of individual sensors as a 2-D array, which is commonly found in digital cameras.
Array sensors are used to obtain a complete image, and this sensor arrangement is simple to understand and implement. During this process, the incoming energy is detected by combining the input electrical power with a sensor material responsive to the particular type of energy being detected. The output voltage waveform of each sensor is then digitized to obtain a digital quantity from each sensor's response.

Q28. An object 15 cm high is imaged from a distance of 0.7 m with a camera whose CCD sensor measures 8.8 × 6.6 mm. What should be the focal length of the lens?
Ans:
Given that,
Size of the object, O_s = 15 cm = 150 mm
Size of the image (sensor), I_s = 8.8 × 6.6 mm
Distance from the lens to the object, u = 0.7 m = 700 mm
Focal length of the lens, f = ?

The required magnification is,
m = I_s/O_s = 8.8/150 ≈ 0.058

By the lens equation,
1/f = 1/u + 1/v ... (1)
where u is the distance from the lens to the object, v is the distance from the lens to the image, and f is the focal length.

Since v = m·u, equation (1) gives
1/f = 1/u + 1/(m·u) = (1 + m)/(m·u)
f = m·u/(1 + m) = (0.058 × 700)/1.058 ≈ 38.37 mm

Therefore, the required focal length is approximately 38.37 mm.
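The arithmetic of the focal-length calculation can be sketched as follows (the helper name is our own; the thin-lens relation 1/f = 1/u + 1/v with v = m·u is as used in Q28):

```python
def focal_length(object_mm, image_mm, distance_mm):
    # Magnification m = image size / object size, so v = m * u.
    # Thin lens: 1/f = 1/u + 1/v  ->  f = m * u / (1 + m)
    m = image_mm / object_mm
    return m * distance_mm / (1 + m)

f = focal_length(object_mm=150, image_mm=8.8, distance_mm=700)
# ~38.8 mm with the unrounded magnification; the text's 38.37 mm
# comes from rounding m = 8.8/150 to 0.058 before substituting.
print(round(f, 2))
```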
Q29. State and explain the various methods of image acquisition.
Ans:
The various methods of image acquisition are as follows.

1. Image Acquisition using a Single Sensor
Consider the components of the single sensor shown in figure (1). The most familiar single sensor is the photodiode, whose output voltage is directly proportional to the incident light. To improve selectivity, filters are used in front of the sensor.
Figure (1): Image Acquisition using a Single Sensor

The purpose of image acquisition is to generate a 2-D image. To generate a 2-D image using a single sensor, there must be relative displacement in both the x- and y-directions between the sensor and the area to be imaged. A negative film is mounted on a drum whose rotation provides displacement in one dimension, while a lead screw provides motion of the sensor in the perpendicular direction; one line of the image is read out per increment of rotation and full linear displacement of the sensor from left to right. This method is economical but slow, and it achieves high resolution at low cost. The arrangement can also be extended to strip sensors and array sensors.

2. Image Acquisition using Sensor Strips
Figure (2): Image Acquisition using Sensor Strips
One image line is obtained per increment of linear motion. The motion along the strip gives image elements in one direction, while motion perpendicular to the strip gives imaging in the other direction.

3. Image Acquisition using Sensor Arrays
The arrangement shown in figure (3) contains the sensors arranged in the form of a 2-D array.
Figure (3): Image Acquisition using Sensor Arrays
The main advantage of this method is that a complete image can be obtained by focusing the energy pattern onto the surface of the array.

Q30. Explain a simple image model.
Ans:
Consider a two-dimensional function f(x, y) which represents an image. The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity whose physical meaning is determined by the source of the image. A monochromatic image obtained from a physical process has values proportional to the energy radiated by that process (for example, electromagnetic waves); as a consequence,

0 < f(x, y) < ∞

Sampling and Quantization
Consider an image f(x, y) that is continuous in amplitude and in its x- and y-coordinates. If the coordinate values of the image are digitized, the process is referred to as sampling, and if the amplitude values are digitized, the process is referred to as quantization.

Figure (1): Continuous Image
Figure (1) shows the continuous image f(x, y). This continuous image can be converted into digital form by considering the gray-level values of the image along the line segment QR, as shown in figure (2).
Figure (2): Gray-level Values of the Continuous Image along QR
The random variations in figure (2) are due to noise in the image. The function is then sampled by taking equally spaced samples along the line segment QR, as shown in figure (3). The location of each sample is indicated by a vertical tick mark in the lower part of the figure.
portioning the x y plane into a grid, with the coordinates oft center of each grid being a pair of elements from the product Z x Z. The set of all ordered pairs of elements (4° with a and b being integers from Zis the above cartesian| Then, the function f(x, y) is said to be digital image if, 1. @,y) are integers from Z x Z 2), “Fis ‘a fiction that assigns a gray-tevel valve © ® distinct pair of coordinates (x,y). Lact, Anyone found {uilty is LIABLE to face LEGAL proceedings. oe aes ss UNIT-1 (Digital Image Fundam lentals and Image Transforms) Soles AY Thus, the above functional assignment describes a aly quantization Process. A digital image can be converted into a twodimensional function (.e., coordinates and amplitude Valles are integers) by taking gray levels also as integers, The digitization process requires two values, 1. Values of M,N Number of discrete gray scale levél chosen for every pixel. In digital image processing, these imteger values can be considered as, N=2, And G=2" Where, G—Number of gray levels. M 2) 3) Inthis section, we have to assume that: the discrete levels are equally spaced between “0” and ‘Z’ in the gray scale. Then, thie number of bits required to store a digitized image, b can be obtained by using equations (2) and (3) as, b=NxMxm (4) tM b=N'm= Mm 5) For example, the number of bits required to store a 128 x 128 image with 64 gray levels is 98,304 bits. As the equation (1)isan approximation to a continuous image, the resolution of an image depends strongly on the number of samples and gray levels required for a good approximation. The approximation of the digitized array approaches (reaches) the original image by increasing the above two parameters. (>) Non-uniform Samj g and Quantization ‘An adaptive scheme (in which the characteristics of the image are dependent on the sampling process) can be used in many cases to improve the appearance of an image for a fixed value of spatial resolution. 
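The storage requirement of equations (4) and (5) can be checked with a short sketch (the function name is illustrative, not from the text):

```python
# Sketch: bits needed to store an N x M image with G = 2**m gray levels.
def storage_bits(N, M, gray_levels):
    m = gray_levels.bit_length() - 1    # m such that 2**m == gray_levels
    assert 2 ** m == gray_levels, "gray levels must be a power of 2"
    return N * M * m                    # equation (4): b = N x M x m

# The example from the text: a 128 x 128 image with 64 gray levels.
print(storage_bits(128, 128, 64))  # 98304 bits
```

The same helper reproduces the bit counts used in the transmission problem later in this unit, e.g. a 512 × 512 image with 128 gray levels needs 512 × 512 × 7 = 1,835,008 bits.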
Fine sampling is generally required in the neighbourhood of sharp gray-level transitions, while coarse sampling can be used in relatively smooth regions. For example, consider a simple image consisting of a face superimposed on a uniform background. The background carries little detailed information, so coarse sampling is able to represent it; more detail, however, is present in the face. The overall result would tend to improve if the additional samples not needed in the background were used in this region of the image. Thus, in distributing the samples, a greater sample concentration should be used at gray-level transition boundaries (in this case, the boundary between the face and the background).

In the quantization process, the use of unequally spaced levels is desirable when only a small number of gray levels can be used. The distribution of gray levels in an image can be exploited using a method similar to the non-uniform sampling approach. The technique adopted for estimating shades near abrupt level changes is to use few gray levels in the neighborhood of boundaries, and to assign the remaining levels to regions where the gray-level variations are smooth. If these smooth regions are too coarsely quantized, false contours often appear in them, so distributing more levels there reduces the contours. This method is basically a trade-off between boundary detection and detail content. In an alternative technique, the frequency of occurrence of all allowed levels is calculated; depending on the range, gray levels may occur frequently or rarely. The quantization levels are finely spaced in the frequently occurring range and coarsely spaced outside of it. This method is known as tapered quantization.

Q33. How can a digital image be represented? Explain the effect of gray-level resolution on digital images.
Ans:

A two-dimensional (2-D) light intensity function f(x, y) has amplitude 'f' at each pair of coordinates (x, y), called the intensity (or) gray level of the image at that point. A digital image is one for which both the coordinates and the amplitude values of f are finite, discrete quantities. Hence, a specific location and value are given to each element of the digital image. These elements are called image elements, picture elements, or pixels.

The matrix of a digital image can be constructed with spatial coordinates and brightness values that are discrete in nature. The rows and columns of the matrix give the location of a point in the image, and the corresponding element of the matrix gives the gray level at that point.

There are many sensors available to acquire an image. Most of these sensors give a continuous voltage as output, continuous in both amplitude and coordinates. To convert such an image f(x, y) into digital form, the coordinate values are digitized (sampling) and the amplitude values are digitized (quantization). The output of sampling and quantization is a matrix of real numbers, so an image can be represented as shown in figure (1).

Figure (1)

Using figure (1), the function f(x, y) can be written as,

f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N−1)
            f(1, 0)      f(1, 1)      ...  f(1, N−1)
            ...
            f(M−1, 0)    f(M−1, 1)    ...  f(M−1, N−1) ]

And the above matrix notation can be written in conventional form as,

A = [ a(0, 0)      a(0, 1)      ...  a(0, N−1)
      a(1, 0)      a(1, 1)      ...  a(1, N−1)
      ...
      a(M−1, 0)    a(M−1, 1)    ...  a(M−1, N−1) ]

Where, a(i, j) = f(x = i, y = j) = f(i, j).

Effect of Gray-level Resolution on Digital Images

Consider the two-dimensional light intensity function f(x, y), where (x, y) denotes the spatial (plane) coordinates. The value of 'f' at any pair of coordinates (x, y) is called the intensity (or) gray level of the image. The image is said to be digital if its amplitude is a discrete quantity.

Gray-level Resolution

Gray-level resolution is the smallest recognizable change in the gray level.
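The gray-level quantization step discussed here can be sketched in a few lines. This is a minimal illustration, assuming sample amplitudes normalized to [0, 1); names are illustrative, not from the text:

```python
# Sketch: uniform quantization of sampled amplitudes to G = 2**m gray levels.
def quantize(samples, m):
    G = 2 ** m
    # map each amplitude in [0, 1) to one of G equally spaced levels 0..G-1
    return [min(int(s * G), G - 1) for s in samples]

samples = [0.0, 0.12, 0.5, 0.49, 0.51, 0.99]
print(quantize(samples, 3))  # 8 gray levels: [0, 0, 4, 3, 4, 7]
```

With m = 3 this reproduces the eight-level quantization of the scan-line example; increasing m gives a finer gray-level resolution.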
Amplitude digitization is called gray-level quantization, which is illustrated in figure (2).

Figure (2)

Consider a 1024 × 1024 pixel image with 256 gray levels, and keep the spatial resolution constant at 1024 × 1024 pixels while reducing the number of gray levels from 256 downward in integer powers of 2. The 256- and 128-level images are visually almost identical. However, as the number of gray levels is decreased further to 16, 8, 4 and 2 levels respectively, almost imperceptible ridge-like structures develop in areas of smooth gray levels and become increasingly pronounced. This effect, caused by the use of an insufficient number of gray levels in smooth areas of an image, is called false contouring.

Q34. Write about spatial and gray-level resolution. Discuss about iso-preference curves.

Ans:

Consider the two-dimensional light intensity function f(x, y) having amplitude 'f' at the pair of spatial coordinates (x, y). The function represents a digital image if its amplitude is a discrete quantity.

Spatial Resolution

It is the smallest recognizable detail in an image, as shown in figure (1).

Figure (1)

Consider a 1024 × 1024 pixel image with 256 gray levels. First, keep the number of gray levels fixed at 256 and change the spatial resolution by reducing the number of pixels from 1024 to 512 in each row and column of the image matrix. The image will look nearly the same, and it is very difficult to find any differences between the two. When the value of N (the number of pixels per dimension) is further decreased to 256, 128, 64 and 32 respectively, keeping the gray levels constant at 256 and the display area the same as for the 1024 × 1024 field, a very slight checkerboard pattern and a more pronounced graininess begin to appear in the 256 × 256 image, increase in the 128 × 128 image, and become clearly noticeable in the 64 × 64 and 32 × 32 images.

Iso-preference Curves

An image with N × N pixels and 2^b gray levels can be varied by changing the values of N and b.
Thus, different images can be obtained by varying the values of N and b; these are shown to observers, who rank them according to their subjective quality. The results obtained from these rankings are plotted as curves, called iso-preference curves. Consider the following three figures.

Figure (2)    Figure (3)    Figure (4)

Figure (2) contains little detail, figure (3) contains an intermediate amount of detail, and figure (4) contains a large amount of detail. The iso-preference curves obtained for these images are shown in figure (5).

Figure (5)

From figure (5), it is clear that the iso-preference curve for figure (4) is nearly vertical compared to the other two, so it is evident that an image with a large amount of detail requires fewer intensity levels to maintain the same perceived quality.

Q35. A medical image has a size of 8 × 8 inches. The spatial resolution is 5 cycles/mm. How many pixels are required? Will an image of size 256 × 256 be enough?

Ans:

Note: In the given question, 'spatial resolution' is misprinted as 'sampling resolution'.

Given that,
A medical image has a size of 8 × 8 inches.
Spatial resolution is 5 cycles/mm.
Number of pixels required for the image = ?

The spatial resolution gives the number of pixels present per millimeter of the image. Here, the spatial resolution is 5 cycles/mm, meaning that 1 mm of the image contains five pixels. Converting the image size from inches to mm,
8 × 8 inches = (8 × 25.4) × (8 × 25.4) mm    [since 1 inch = 25.4 mm]
= 203.2 × 203.2 mm
Since 1 mm contains 5 pixels, an image of size 203.2 × 203.2 mm contains (5 × 203.2) × (5 × 203.2) = 1016 × 1016 pixels. Therefore, an image of size 256 × 256 is not enough.

Q36. Transmission of an image is accomplished in packets consisting of a start bit, a byte of information and a stop bit. Using this approach, answer the following.
(a) How many minutes would it take to transmit a 512 × 512 image with 128 gray levels at 300 baud?
(b) What would the time be at 9600 baud?
(c) Repeat (a) and (b) for a 1024 × 1024 image with 128 gray levels.

Ans:

(a) Given that, for a 512 × 512 image,
Number of gray levels, L = 128
Baud rate, B = 300 baud
M = N = 512

The total number of bits required to store the image is M × N × b, where the number of bits per pixel b is obtained from the number of gray levels,
L = 128 = 2^b ⇒ b = 7
The total number of bits is,
M × N × b = 512 × 512 × 7 = 1835008 bits
The given image can therefore be transmitted in 1835008 bits, including the start and stop bits of each packet.
At a baud rate of 300 baud, the number of seconds required to transmit the image is,
Number of seconds = 1835008/300 = 6116.7 sec
Number of minutes = 6116.7/60 = 101.945 mins
Therefore, approximately 102 minutes are required to transmit the given image.

(b) Here the number of bits to be transmitted is the same, i.e., 1835008 bits, and the baud rate is 9600 baud.
Number of seconds = 1835008/9600 = 191.15 sec
Number of minutes = 191.15/60 = 3.19 mins
Therefore, approximately 3.19 minutes are required to transmit the given image.

(c) For a 1024 × 1024 image,
M = N = 1024, L = 128 ⇒ b = 7
The total number of bits required is,
Number of bits = M × N × b = 1024 × 1024 × 7 = 7340032 bits
Therefore, the given image can be transmitted in 7340032 bits, including start and stop bits.
At a baud rate of 300 baud,
Number of seconds = 7340032/300 = 24466.78 sec
Number of minutes = 24466.78/60 = 407.78 mins
Approximately 408 minutes are required to transmit the given image.

At a baud rate of 9600 baud, the number of bits to be transmitted is the same, i.e., 7340032 bits.
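The arithmetic of parts (a)–(c) can be checked numerically. A sketch, following the solution above in treating baud as bits per second and not adding the per-byte start/stop overhead (the function name is illustrative):

```python
# Sketch: transmission time in minutes for an N x N image with b bits/pixel.
def transmit_minutes(N, bits_per_pixel, baud):
    total_bits = N * N * bits_per_pixel
    return total_bits / baud / 60.0

print(round(transmit_minutes(512, 7, 300), 2))    # about 101.94 min
print(round(transmit_minutes(512, 7, 9600), 2))   # about 3.19 min
print(round(transmit_minutes(1024, 7, 300), 2))   # about 407.78 min
```

Quadrupling the pixel count (512 → 1024 per side) quadruples the time at any fixed baud rate.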
Baud rate = 9600 baud (given)
Therefore, the number of seconds required to transmit the given image is,
Number of seconds = 7340032/9600 = 764.59 sec
Number of minutes = 764.59/60 = 12.74 mins
Therefore, approximately 12.74 minutes are required to transmit the image.

1.1.3 Relationship Between Pixels

Q37. Define the following terms with respect to an image:
(i) Distance measure
(ii) Connectivity
(iii) Neighborhood.
(or)
Discuss briefly the following:
(i) Neighbours of pixels
(ii) Connectivity.
(or)
Explain the relationships between pixels, connectivity, distance measures.
(or)
Discuss the relationship between pixels in detail.
[Nov./Dec.-16 (R13), Q3; Dec.-19 (R16), Q3(a); Nov./Dec.-17 (R13), Q3(b)] (Refer Only Connectivity)

Ans:

Neighbors of a Pixel

They are defined as the pixels that are adjacent and diagonal to a particular pixel, i.e., the pixels around a particular pixel. Consider that 'p' is a pixel at coordinates (x, y). Then, the pixels at coordinates (x + 1, y), (x − 1, y), (x, y + 1), (x, y − 1) are called the 4-neighbours of 'p', denoted by N4(p), as shown in figure (1).

Figure (1)

The pixels at coordinates (x + 1, y + 1), (x + 1, y − 1), (x − 1, y + 1), (x − 1, y − 1) are called the four diagonal neighbours of 'p' and are denoted by ND(p). In figure (1), the pixels a, b, c, d form the set N4(p) and the pixels l, m, n, o form ND(p). These two sets, N4(p) and ND(p), are together referred to as the 8-neighbours of 'p', denoted by N8(p).

Connectivity

In order to define a connectivity relation between two pixels, they should be neighbors of one another. Assuming 's' is the set of gray-level values used to define adjacency, the three types of connectivity are defined as,
(i) 4-connectivity

Two pixels p and q with values from s are said to be 4-connected if 'q' is in the set N4(p).

(ii) 8-connectivity

Two pixels p and q with values from s are said to be 8-connected if 'q' is in the set N8(p).

(iii) m-connectivity

Two pixels p and q with values from s are said to be m-connected if,
(a) 'q' is in N4(p), or
(b) 'q' is in ND(p) and the set N4(p) ∩ N4(q) is empty (this is the set of pixels that are 4-neighbours of both p and q and whose values are from s).

Example: Let s = {0.25, 0.5, 0.75, 1}

Figure (2)

Here the pixel p has 4-connectivity with q1 and q2.

Figure (3)

Here the pixel p has 8-connectivity with the pixels q1, q2 and q3.

Figure (4)

Here, p has m-connectivity with q.

Distance Measures

The various distance measures used in image processing include,

(i) Euclidean Distance (De)

The Euclidean distance between two pixels p and q with coordinates p(x, y) and q(s, t) is given as,
De(p, q) = √((x − s)² + (y − t)²)
For this distance measure, the pixels having a distance less than or equal to a fixed value 'r' from (x, y) lie inside or on a circle of radius r centered at (x, y).

(ii) City-block Distance (D4)

The city-block distance between the pixels p and q is,
D4(p, q) = |x − s| + |y − t|
In this case, the pixels having a distance less than or equal to some fixed value 'r' from (x, y) form a diamond centered at (x, y).

Figure (5)

(iii) Chessboard Distance (D8)

The chessboard distance between the two pixels p and q is given as,
D8(p, q) = max(|x − s|, |y − t|)
Here, pixels with D8 distance less than or equal to some value 'r' form a square centered at (x, y).
Note: Pixels with D8 = 1 are the 8-neighbours of (x, y).

(iv) Dm Distance

It is defined as the length of the shortest m-path between the points. Here, the distance between two pixels depends on the values of the pixels along the path, as well as the values of their neighbours. Consider the pixels shown in figure (6).
Figure (6)

Suppose that the pixels along the adjacency path have the value 1 (with V = {1}). Then Dm(p, q) = 2.

Q38. Consider the image segment shown below, with p at coordinates (3, 0), written p(x, y), and q at coordinates (0, 3), written q(s, t). Let V = {0, 1} and compute the De, D4 and D8 distances between p and q.

[Image segment]

Ans:

From the image segment,

De = √((x − s)² + (y − t)²) = √((3 − 0)² + (0 − 3)²) = √(9 + 9) = √18 ≈ 4.24

D4 = |x − s| + |y − t| = |3 − 0| + |0 − 3| = 3 + 3 = 6

D8 = max(|x − s|, |y − t|) = max(3, 3) = 3

Note: The chessboard distance is the larger of the row and column offsets; for offsets of 4 and 5, for example, the chessboard distance is 5.

Q39. Let V = {0, 1}. Compute the D4 and D8 distances between the pixels p and q for the figure given below.

[Figure: image segment with p and q]

Ans:

Let the coordinates of p be (s, t) = (0, 0) and the coordinates of q be (x, y) = (4, 4).

City-block distance,
D4 = |x − s| + |y − t| = |4 − 0| + |4 − 0| = 4 + 4 = 8 units

Chessboard distance,
D8 = max(|x − s|, |y − t|) = max(|4 − 0|, |4 − 0|) = max(4, 4) = 4 units

When V = {0, 1}, the shortest m-path is obtained by connecting, from p to q, the pixels whose values are 0 and 1.

Q40. Show that the D4 distance between two points p and q is equal to the length of the shortest 4-path between these points. Is this path unique?

Ans:

An image is simply a combination of pixels providing information. An image can have any number of pixels arranged in rows and columns, and each pixel is generally identified by g(r, c), where r represents the row and c represents the column. Figure (1) represents an image with 'm' rows and 'n' columns, in which each block represents a pixel. The distance between any two pixels can be calculated using their row and column information. If p(x, y) and q(u, v) are two pixels, then the D4 distance between the two points is given by,

D4(p, q) = |x − u| + |y − v|    ... (1)
The length of the shortest 4-path between p and q in figure (1) can be calculated as follows. Consider figure (2), where the shortest 4-path between p(x, y) and q(u, v) is shown.

Figure (2)

The lengths of the two segments of the path are |x − u| and |y − v| respectively. The total path length is obtained by adding these two segments, i.e.,
|x − u| + |y − v|    ... (2)
This is the same as the D4 distance obtained in equation (1). Therefore, the D4 distance is equal to the length of the shortest 4-path. This occurs whenever we can get from p to q by following a path whose elements,
(i) are from V, and
(ii) are arranged in such a way that the path can be traversed from p to q by making turns in at most two directions, e.g., right and up.
From the above conditions it is clear that this path is not unique: any 4-path satisfying the conditions has length equal to the D4 distance between the given points.

Q41. Explain the following terms:
(i) Adjacency
(ii) Boundaries
(iii) Regions.
(or)
Explain the terms Adjacency, Boundaries, Regions and pixels.
[Nov./Dec.-18 (R15), Q2(a)]

Ans:

1. Adjacency

Let v be the set of gray-level values used to define adjacency.

(i) 4-adjacency
Two pixels p and q with values from v are 4-adjacent if q is in the set N4(p).

(ii) 8-adjacency
Two pixels p and q with values from v are 8-adjacent if q is in the set N8(p).

(iii) m-adjacency
Two pixels p and q with values from v are m-adjacent if,
(a) q is in N4(p), or
(b) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from v.
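The neighbourhood and distance definitions above can be sketched in a few lines (helper names are illustrative, not from the text):

```python
# Sketch: 4-/8-neighbourhoods and the D4/D8/Euclidean distance measures.
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):  # the four diagonal neighbours
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    return n4(p) | nd(p)

def d4(p, q):  # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):  # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def de(p, q):  # Euclidean distance
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# The worked example: p at (3, 0), q at (0, 3).
print(d4((3, 0), (0, 3)), d8((3, 0), (0, 3)))  # 6 3
```

Note that d8(p, q) == 1 exactly when q is in n8(p), matching the note under the chessboard distance.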
2. Connectivity

For the answer, refer to Unit-1, Q37, Topic: Connectivity.

3. Regions

If 'R' is a connected subset of pixels in an image, then 'R' is called a region of the image.

4. Boundaries

The set of pixels in a region that have one or more neighbors not in 'R' is called the boundary of the region.

When 'R' is taken to be the entire image, the boundary is defined as the set of pixels in the first and last rows and columns of the image. This special definition is necessary since the image has no neighbors beyond its border. Normally, when the boundary of a region coincides with the border of the image, the border pixels are implicitly included as part of the region boundary.

Q42. Develop an algorithm for converting a one-pixel-thick 8-connected path to a 4-connected path.

Ans:

The solution examines the 8-neighbour configurations along the path and replaces each diagonal (8-connected) step by a corresponding pair of 4-connected steps. The algorithm looks for a suitable intermediate pixel, chosen from the 4-neighbours common to the two diagonally adjacent pixels, whose value allows it to be inserted between them; repeating this for every diagonal transition converts the whole path into a 4-connected path.

Q43. Write a short note on the tools used in digital image processing.

Ans:

The various tools used in digital image processing are,

1. Array Vs Matrix Operations

Array Operations

An array operation is performed on one or more images on a pixel-by-pixel basis. For instance, consider two 2 × 2 images A and B given as,

A = [ a11  a12        B = [ b11  b12
      a21  a22 ]            b21  b22 ]

The array (element-wise) product of these two images A and B is given by,

A .× B = [ a11 b11   a12 b12
           a21 b21   a22 b22 ]

Matrix Operations

Matrix operations are performed on one or more images represented in the form of matrices. The matrix product of the two images A and B is given by,

A × B = [ a11 b11 + a12 b21    a11 b12 + a12 b22
          a21 b11 + a22 b21    a21 b12 + a22 b22 ]

2. Linear Vs Non-linear Operations

Linear Operations

An operator is said to be linear if its output satisfies both the additivity and homogeneity properties. Let a general operator H produce an output image g(x, y) from an input image f(x, y). Then H is linear if,
H[a1 f1(x, y) + a2 f2(x, y)] = a1 H[f1(x, y)] + a2 H[f2(x, y)]
                             = a1 g1(x, y) + a2 g2(x, y)    ... (1)
Where,
a1, a2 – Arbitrary constants
f1(x, y), f2(x, y) – Images of equal size.

Non-linear Operations

An operator is said to be non-linear if it does not satisfy the additivity and homogeneity properties. Consider the max operator, which returns the maximum pixel value of an image, applied to two 2 × 2 images f1 and f2 (each having maximum value 7 in the example used here). For the operator to be linear, we would need,
max[a1 f1 + a2 f2] = a1 max[f1] + a2 max[f2]
Consider the L.H.S. with a1 = 1 and a2 = −1,
max[a1 f1 + a2 f2] = max[f1 − f2] = 6 (for the example images)    ... (4)
Consider the R.H.S.,
a1 max[f1] + a2 max[f2] = (1)(7) + (−1)(7) = 0    ... (5)
From equations (4) and (5), the two sides are not equal, so the max operator is non-linear.

Q44. State and explain, with suitable examples, the arithmetic/logic operations among pixels.

Ans:

Arithmetic Operations

The arithmetic operations defined for matrices also apply to pixels. They are applied to corresponding pixels of two images and include addition, subtraction, multiplication and division. If f(x, y) and g(x, y) are two images, then,
a(x, y) = f(x, y) + g(x, y)
s(x, y) = f(x, y) − g(x, y)
p(x, y) = f(x, y) × g(x, y)
d(x, y) = f(x, y) ÷ g(x, y)
These operations are performed between corresponding pairs of pixels in f and g for x = 0, 1, ..., M − 1 and y = 0, 1, ..., N − 1, where M and N are the numbers of rows and columns of the image matrices. The result matrices a(x, y), s(x, y), p(x, y) and d(x, y) are of the same order.

Let us consider an example. Let g(x, y) be an image and h(x, y) a corrupted image, the result of adding a noise component n(x, y) to g(x, y). Hence h(x, y) can be written as,
h(x, y) = g(x, y) + n(x, y)
In order to reduce the noise component, an image-enhancement technique based on averaging is used. The main objective of this technique is to reduce the noise by averaging a set of noisy images {hi(x, y)}.
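As a quick numerical sketch of this noise-averaging idea, using synthetic zero-mean noise on a constant "image" (all names and values are illustrative, not from the text):

```python
# Sketch: averaging K noisy copies of a constant image reduces the noise spread.
import random

random.seed(0)
true_value = 100.0
K = 64

def noisy_image(n=256):  # one "image" as a flat list of noisy pixels
    return [true_value + random.uniform(-10, 10) for _ in range(n)]

stack = [noisy_image() for _ in range(K)]
averaged = [sum(pix) / K for pix in zip(*stack)]

err_single = max(abs(p - true_value) for p in stack[0])
err_avg = max(abs(p - true_value) for p in averaged)
print(err_avg < err_single)  # True: averaging pulls pixels toward the true image
```

With zero-mean noise the averaged image converges to the clean one as K grows, which is exactly the variance relationship derived next.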
If an image ḡ(x, y) is formed by averaging 'K' noisy images hi(x, y),
ḡ(x, y) = (1/K) Σ hi(x, y), for i = 1 to K
then,
E{ḡ(x, y)} = g(x, y)
and
σ²ḡ(x, y) = (1/K) σ²n(x, y)
Here E{ḡ(x, y)} is the expectation of ḡ, and σ²ḡ(x, y), σ²n(x, y) are the variances of ḡ and n respectively. The last equation indicates the inverse relationship between K and the variance of ḡ, i.e., as K increases, the variance of the pixel values at each location decreases and the averaged image approaches the noise-free image.

In digital image processing, arithmetic operations are used for image enhancement. The arithmetic operations implemented on images are,
(i) Addition
(ii) Subtraction
(iii) Multiplication
(iv) Division
(v) Averaging

These operations are executed on each and every pixel of the entire image, i.e., a pixel value in the resultant image is based on the values of the respective pixels in the input images. For this reason, the input images must be of equal dimensions.

(i) Addition

Addition of two images is carried out by adding their corresponding pixel values. For instance, the addition of two input images 'A' and 'B' of identical size, producing a sum 'C', is expressed as,
C(x, y) = A(x, y) + B(x, y)
i.e., the output image is the sum of the associated pixel values of the input images. The expression for the addition of one image and one scalar quantity (S) is,
C(x, y) = A(x, y) + S
i.e., S is added to each and every pixel in the image.

(ii) Subtraction

Subtraction between two input images is designated as,
C(x, y) = A(x, y) − B(x, y)
C(x, y) is obtained by subtracting the values of B from those of A pixel by pixel; the operation may also be taken as the absolute difference of the two images.
(iii) Multiplication

Multiplication of images is performed by multiplying their respective pixel values. This operation is expressed as,
C(x, y) = A(x, y) × B(x, y)
Multiplication of an image by a scalar (S) is,
C(x, y) = A(x, y) × S

(iv) Division

Division of the images A and B, producing the result 'C', is,
C(x, y) = A(x, y) ÷ B(x, y)
And division of an image by a constant c is,
C(x, y) = A(x, y) ÷ c

(v) Averaging

Consider f(x, y) as the input image and n(x, y) as the noise image; then the output image g(x, y) is,
g(x, y) = f(x, y) + n(x, y)
The image ḡ(x, y) is formed by averaging K different noisy images, i.e.,
ḡ(x, y) = (1/K) Σ gi(x, y), for i = 1 to K
The expected value of ḡ(x, y) is equal to the input image, i.e.,
E{ḡ(x, y)} = f(x, y)
The standard deviation of the averaged image is given by,
σḡ(x, y) = (1/√K) σn(x, y)
Where σḡ(x, y) and σn(x, y) are the standard deviations of the output image and the noisy image. In order to avoid blurring in the output image, the images gi(x, y) must be aligned (registered) before averaging.

Logical Operations

Logical operations are implemented on binary images, where the binary value '1' is used for foreground pixels and '0' for background pixels. Logic operations in digital image processing are performed to combine two images (especially binary images), and they too are executed on every pixel of the image. Logic operations find intense application in image morphology. The essential logic operations in image enhancement are,
(i) Logical OR
(ii) Logical AND
(iii) Logical NOT/Complement.

(i) Logical OR

The logical OR of two input images 'A' and 'B' consists of the regions belonging to either A or B or both. Figure (1) shows a pictorial representation of the logical OR operation.

Figure (1): OR Operation

(ii) Logical AND

The AND operation performed on two images produces a resultant image that includes only the regions common to both input images, as shown in figure (2).

Figure (2): AND Operation

(iii) Logical NOT

The logical NOT operation complements an image, i.e., the resultant image consists of the elements that are not present in the input image.
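On small binary masks, the three logical operations can be sketched directly (the arrays and names are illustrative):

```python
# Sketch: pixelwise OR / AND / NOT on binary images (1 = foreground).
A = [[1, 1, 0],
     [0, 1, 0]]
B = [[0, 1, 1],
     [0, 0, 1]]

OR  = [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
AND = [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
NOT = [[1 - a for a in ra] for ra in A]

print(OR)   # [[1, 1, 1], [0, 1, 1]]
print(AND)  # [[0, 1, 0], [0, 0, 0]]
print(NOT)  # [[0, 0, 1], [1, 0, 1]]
```

OR keeps everything that is foreground in either mask, AND keeps only the common region, and NOT swaps foreground and background.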
The NOT operation is as shown in figure (3).

Figure (3): NOT Operation

Q45. What are the set operations used in digital image processing? Explain in brief.

Ans:

Consider 'A' and 'B' as two sets of ordered pairs of real numbers (gray-scale images), and let 'U' be the universal set containing all the elements. The set operations and notations employed in digital image processing are,

1. If a is an element of A, then a ∈ A (i.e., 'a' belongs to 'A'); otherwise a ∉ A.
2. A set with no elements is called the null or empty set, denoted by ∅.
3. The union of two sets A and B is the set of elements belonging to A or B or both, i.e., C = A ∪ B.
4. The intersection of two sets A and B is the set of elements that are common to both A and B, i.e., D = A ∩ B.
5. A and B are disjoint or mutually exclusive if they have no common elements, i.e., A ∩ B = ∅.
6. The complement of a set A is the set of elements of U that are not present in A, i.e., Ac = {p | p ∉ A} = U − A.
7. The difference between the two sets A and B is defined as the set of elements belonging to A but not to B, i.e., A − B = {m | m ∈ A, m ∉ B} = A ∩ Bc.

The figure illustrates a diagrammatic representation of the set operations carried out in image processing.

Figure: Set Operations in Image Processing

Q46. Classify the spatial operations and explain in brief about each of them.

Ans:

In digital image processing, spatial operations are directly implemented on the image pixels. There are three types of spatial operations. They are,
1. Single-pixel operations
2. Neighborhood operations
3. Geometric spatial transformations.

1. Single-pixel Operations

In single-pixel operations, each pixel value of a digital image is modified depending on its intensity value. The transformation function used to perform single-pixel operations on an image is expressed as,
s = T(z)
Where,
z – Pixel intensity of the original image
s – Corresponding pixel intensity in the processed image.
For instance, the negative of an 8-bit image is obtained using the intensity transformation s = 255 − z, as shown in figure (1).

Figure (1): Negative of an 8-bit Image

2. Neighborhood Operations

Let 'Sxy' denote the set of coordinates in a neighborhood centered on an arbitrary point (x, y) in an image. A typical neighborhood operation computes, for example, the average pixel value of an m × n rectangular neighborhood centered on (x, y). The output processed image is produced by moving the center of the neighborhood from pixel to pixel over the input image, as shown in figure (2).

Figure (2): Local Averaging using Neighbourhood Processing

The equation used to perform the averaging neighborhood operation on the pixels is,
g(x, y) = (1/mn) Σ f(r, c), for (r, c) ∈ Sxy
Where,
r – Row coordinate of a pixel in Sxy
c – Column coordinate of a pixel in Sxy
f – Input image.

3. Geometric Spatial Transformations

A geometric spatial transformation modifies the spatial arrangement of the pixels,
(x, y) = T{(v, w)}
Where,
(v, w) – Coordinates of the pixel in the original image
(x, y) – Corresponding coordinates of the pixel in the transformed image.

Geometric spatial transformations are also referred to as rubber-sheet transformations, because they behave like printing an image on a rubber sheet and then stretching it as desired by the application.

Affine Transform Matrix

The affine transform matrix can be used in two basic ways. They are,
(i) Forward mapping
(ii) Inverse mapping.

(i) Forward Mapping

In forward mapping, the spatial location (x, y) of each output pixel is computed by applying the affine transformation matrix to each pixel location of the input image. A problem with this approach is that two or more input pixels can be transformed to the same output location, so they cannot all form a single output pixel, while other output locations may receive no pixel value at all.
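The single-pixel and neighbourhood operations described above can be sketched as follows (a minimal illustration; the helper names are not from the text, and the neighbourhood version handles only interior pixels):

```python
# Sketch: a single-pixel operation (8-bit negative) and a neighbourhood
# operation (mean of a 3 x 3 window centred on an interior pixel).
def negative(z):            # s = T(z) = 255 - z
    return 255 - z

def local_mean(img, x, y):  # 3 x 3 neighbourhood average at interior (x, y)
    s = sum(img[r][c] for r in (x - 1, x, x + 1) for c in (y - 1, y, y + 1))
    return s / 9.0

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(negative(0), negative(255))   # 255 0
print(local_mean(img, 1, 1))        # 50.0, the average of all nine values
```

The negative touches each pixel in isolation, while the local mean needs the whole neighbourhood Sxy around the pixel, which is the essential difference between the two operation classes.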
(ii) Inverse Mapping

In inverse mapping, each output pixel location (x, y) is visited in turn, and the corresponding location of the pixel in the input image is computed using the transformation,

(v, w) = T⁻¹(x, y)

The implementation of inverse mapping is much more effective than that of forward mapping.

Q47. Discuss basic transformation of pixels.

Ans:

Some of the basic geometric transformations used in imaging are,

1. Spatial transformation
2. Gray-level interpolation.

1. Spatial Transformation

A spatial transformation can be expressed as,

x̂ = R(x, y)
ŷ = S(x, y)

Where, R(x, y) and S(x, y) are the spatial transformations that create the geometrically distorted image G(x̂, ŷ), and (x, y) are the pixel coordinates of the original image f.

ALL-IN-ONE JOURNAL FOR ENGINEERING STUDENTS

f(x, y) can be recovered from the distorted image G(x̂, ŷ) by the reverse transformations if R(x, y) and S(x, y) are known. But, in practice, it is not always possible to recover f(x, y) from the distorted image G(x̂, ŷ).

One of the most commonly used spatial coordinate transformations is the affine transform, which has the general form,

                        | t₁₁  t₁₂  0 |
[x'  y'  1] = [v  w  1] | t₂₁  t₂₂  0 | = [v  w  1] T
                        | t₃₁  t₃₂  1 |

This transformation can scale, rotate, translate or shear a set of coordinate points, depending on the values chosen for the elements of the matrix T. The table given below illustrates the matrix values used to implement these transformations; for the identity transformation, T is the identity matrix, so that x = v and y = w.

2. Gray-level Interpolation

Let x' and y' be the non-integer coordinates obtained from the distortion-correction equations. Thus, we get a set of gray levels at non-integer positions in the image, from which the gray levels to be assigned to the integer pixel locations in the output image must be determined.

In gray-level interpolation, the gray values of f(x, y) are mapped to the nearest coordinates of f(x', y'). But the problem involved in this approach is that sometimes one gray level gets mapped to two pixels, and sometimes gray levels are left without being mapped to any pixel.
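Forward and inverse mapping with an affine matrix T can be sketched as follows. The row-vector convention [x y 1] = [v w 1] T matches the form above; the scaling matrix used here is an assumed example.

```python
import numpy as np

# An assumed affine matrix T that scales coordinates by (2, 3).
T = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

def forward_map(v, w):
    # Forward mapping: scan input locations (v, w), compute output (x, y).
    x, y, _ = np.array([v, w, 1.0]) @ T
    return x, y

def inverse_map(x, y):
    # Inverse mapping: scan output locations (x, y), compute the input
    # location (v, w) = [x y 1] @ T^-1, then interpolate a gray level there.
    v, w, _ = np.array([x, y, 1.0]) @ np.linalg.inv(T)
    return v, w
```

In a full implementation, `inverse_map` would be called once per output pixel, which is why inverse mapping avoids the holes and collisions of forward mapping.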
In this scheme, each pixel location in the output image is considered and, by using the inverse image transform, the corresponding input image location is computed. The locations so determined are generally non-integer, and the gray level at such a position is estimated from the nearest known integer pixel intensities.

Bilinear interpolation is the general method used for interpolation, and its expression is given by,

V(x', y') = c₁x' + c₂y' + c₃x'y' + c₄

Where, V(x', y') = Gray value at position (x', y').

This can be represented in matrix form as,

| V₁ |   | x₁  y₁  x₁y₁  1 |   | c₁ |
| V₂ | = | x₂  y₂  x₂y₂  1 | × | c₂ |
| V₃ |   | x₃  y₃  x₃y₃  1 |   | c₃ |
| V₄ |   | x₄  y₄  x₄y₄  1 |   | c₄ |

The above form is expressed as,

[V] = [A][C]  ⇒  [C] = [A]⁻¹[V]

These calculations are to be done for all the pixels, so the computational complexity increases. Hence, the value of the pixel which is closest to the location from where the pixel comes is often calculated instead, and this value is used as the output gray level.

Q48. Write short notes on image registration.

Ans:

Image registration in digital image processing is the process of aligning (registering) two (or more) images of the same scene spatially.

The first image used in image registration is known as the reference image, whereas the second image is known as the sensed image. The corresponding point in the sensed image is determined for each point of the reference image using the same coordinates.

The coordinates of the sensed image are transformed into the geometry and spatial coordinates of the reference image (which is maintained unaltered).
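Returning to gray-level interpolation, the bilinear fit [C] = [A]⁻¹[V] described above can be sketched as follows; the four corner gray values are assumed examples.

```python
import numpy as np

# Four known corner pixels (x, y, gray value) -- assumed values -- and the
# bilinear model V(x, y) = c1*x + c2*y + c3*x*y + c4 fitted through them.
corners = [(0, 0, 10.0), (0, 1, 20.0), (1, 0, 30.0), (1, 1, 60.0)]

A = np.array([[x, y, x * y, 1.0] for x, y, _ in corners])
V = np.array([v for _, _, v in corners])
c = np.linalg.solve(A, V)            # [C] = [A]^-1 [V]

def interp(x, y):
    # Gray level estimated at a (possibly non-integer) location.
    return c[0] * x + c[1] * y + c[2] * x * y + c[3]
```

The fitted surface reproduces the four known corners exactly and gives a smooth estimate in between, which is what makes bilinear interpolation preferable to nearest-neighbour assignment when quality matters.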
The steps involved to get a registered image using an image registration system are as shown in figure (1).

Figure (1): Steps in an Image Registration System (reference image and sensed image → point detectors and similarity measures → feature extraction with point descriptors → feature selection → robust parameter estimation)

The control points are chosen in each image along with the neighbourhood points of significant features. After determining the corresponding points, the non-linear parameters are calculated to transform the sensed image to that of the reference image.

When the images are considered as rigid bodies, they can be registered by translating and rotating the sensed image according to the reference image. But the outliers present in the image disable the image registration process.

This can be eliminated by matching multiple windows, with shared rotation parameters, in the images containing outliers.

The image registration process for a reference image is as shown in figure (2).

(a) Reference Image and Geometrically Distorted Image
(b) Difference between Reference and Registered Image

Figure (2): Image Registration

Q49. Explain vector and matrix operations.

Ans:

Vector and matrix operations find their extensive applications in the domain of multi-spectral image processing.

Vector Operations

Consider an RGB colour image of size M × N. The image is designated either by three component images or by MN 3-D vectors. The pixel vector of the image, z, is represented as,

z = [z₁  z₂  z₃]ᵀ

The graphical representation of the pixel vector in an RGB image is as shown in the figure below.

Figure: Pixel Vector of an RGB Image (z₁, z₂ and z₃ are taken from the red, green and blue component images)

The n component images in a multi-spectral image give n-dimensional pixel vectors, which employ vector-matrix theory for their operations.
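The pixel-vector view described above can be sketched by reshaping an M × N × 3 RGB array into MN 3-D vectors; the pixel values below are assumed.

```python
import numpy as np

# A 2x2 RGB image: each pixel carries a 3-D pixel vector z = [z1 z2 z3]^T
# holding its red, green and blue components (assumed values).
rgb = np.array([[[255,   0,   0], [  0, 255,   0]],
                [[  0,   0, 255], [128, 128, 128]]], dtype=np.uint8)

M, N, _ = rgb.shape
pixel_vectors = rgb.reshape(M * N, 3)   # MN vectors, one per pixel
```

Each row of `pixel_vectors` is one pixel vector z, so vector-matrix operations (distances, linear transforms) apply row-wise to the whole image at once.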
In n-dimensional space, the distance between a pixel vector z and an arbitrary point k is defined as the Euclidean distance D and is given by,

D(z, k) = [(z − k)ᵀ(z − k)]^(1/2) = [(z₁ − k₁)² + (z₂ − k₂)² + ... + (zₙ − kₙ)²]^(1/2)

D(z, k) is also known as a vector norm and is represented as,

D(z, k) = ||z − k||

Matrix Operations

The representation of linear transformations of pixel vectors is given by,

w = A(z − a)

Where,
z, a – n × 1 column vectors
A – Matrix of size m × n with elements a₁₁, a₁₂, ..., a_mn
w – Resulting m × 1 column vector.

An image with a wide range of linear processes applied to it is represented as,

g = Hf + n

Where,
f – Input image expressed as an MN × 1 vector
g – Processed image expressed as an MN × 1 vector
H – MN × MN matrix representing the linear process applied to the input image
n – M × N noise pattern expressed as an MN × 1 vector.

The MN × 1 vector used in matrix operations consists of the first N elements of the vector taken from the first row of the image, the second N elements from the second row of the image, and so on.

Q50. Discuss the probabilistic methods used in digital image processing.

Ans:

In image processing, probabilistic methods treat intensity values as random values. The probability p(z_k) of intensity level z_k, for k = 0, 1, 2, ..., L − 1 (where L is the number of possible intensity values), occurring in an M × N digital image is,

p(z_k) = n_k / MN

Where,
MN – Total number of pixels
n_k – Number of times of occurrence of intensity z_k in the image.

The Fourier transform is a mathematical tool which enables us to extract more relevant information from a signal. For instance, consider a glass prism as shown in figure (1): white light entering on one side of the prism emerges on the other side separated into its colour spectrum, from violet to red.

Figure (1): A Prism Separating White Light into its Colour Spectrum

F(u) = (1/2)[F_even(u) + F_odd(u) W_{2M}^u]   ... (9)

where W_M^{u+M} = W_M^u and W_{2M}^{u+M} = −W_{2M}^u.
Substituting these values into equations (7) to (9), we get,

F(u + M) = (1/2)[F_even(u) − F_odd(u) W_{2M}^u]   ... (10)

By analysing equations (7) to (10), some interesting properties are observed. An N-point transform can be calculated by splitting the original equation into two parts, as shown in equations (9) and (10), where the computation of two N/2-point transforms is required for calculating the first half of F(u). The results obtained for F_even(u) and F_odd(u) are substituted in equation (9) to get F(u) for u = 0, 1, ..., (N/2 − 1). The second half does not require additional transform evaluations; it follows directly from equation (10).

For analysing the computational implications of the FFT algorithm, let us consider m(n) and a(n), which denote the number of complex multiplications and additions required in the computation respectively,

i.e., m(n) = Number of complex multiplications
a(n) = Number of complex additions

and 2ⁿ represents the number of samples.

The recursive expressions for the number of multiplications and additions required for the implementation of the FFT are given as,

m(n) = 2m(n − 1) + 2^(n−1), for n ≥ 1
a(n) = 2a(n − 1) + 2ⁿ, for n ≥ 1

As a one-point transform does not require any additions or multiplications, the values of m(0) and a(0) are zero, i.e., m(0) = 0, a(0) = 0.

The implementation of equations (7) to (10) constitutes the successive-doubling FFT algorithm. The name successive doubling comes from the method of calculating a two-point transform from two one-point transforms; similarly, a four-point transform is evaluated from two two-point transforms, and so on, for any N which is an integer power of 2, i.e., N = 2ⁿ.

Q55. Explain how the Inverse Fast Fourier Transform (IFFT) is performed.

Ans:

The algorithm used for implementing the discrete forward fast Fourier transform can also be used to compute the inverse Fourier transform, with minor modifications to the input.
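The modification can be sketched with NumPy's forward FFT: conjugate the input, run the forward transform, conjugate the result. Note that np.fft.fft is unnormalized (no 1/N factor), so the scaling is applied explicitly here.

```python
import numpy as np

def inverse_via_forward(F):
    """Inverse DFT obtained using only a forward-transform routine:
    feed F*(u) to the forward algorithm, conjugate the result, scale by 1/N.
    (np.fft.fft is unnormalized, hence the explicit 1/N factor.)"""
    N = len(F)
    return np.conj(np.fft.fft(np.conj(F))) / N

f = np.array([1.0, 2.0, 4.0, 8.0])     # assumed sample values
F = np.fft.fft(f)
recovered = inverse_via_forward(F)
```

The same trick carries over unchanged to 2-D arrays with np.fft.fft2, which is what the derivation below establishes formally.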
To see this, consider the discrete Fourier transform pair of equations given by,

F(u) = (1/N) Σ_{x=0}^{N−1} f(x) exp[−j2πux/N]   ... (1)

for u = 0, 1, 2, ..., N − 1, and

f(x) = Σ_{u=0}^{N−1} F(u) exp[j2πux/N]   ... (2)

Taking the complex conjugate of equation (2) and dividing both sides by N yields,

(1/N) f*(x) = (1/N) Σ_{u=0}^{N−1} F*(u) exp[−j2πux/N]   ... (3)

Comparing this result with equation (1) shows that the right-hand side of equation (3) is in the form of the forward Fourier transform. Thus, using F*(u) as the input to the algorithm designed to compute the forward transform gives the quantity (1/N) f*(x). Taking the complex conjugate and multiplying by N yields the desired inverse f(x).

The discrete Fourier transform pair in 2-D for a square array is given by,

F(u, v) = (1/N) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) exp[−j2π(ux + vy)/N]   ... (4)

for u, v = 0, 1, 2, ..., N − 1, and

f(x, y) = (1/N) Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} F(u, v) exp[j2π(ux + vy)/N]   ... (5)

for x, y = 0, 1, 2, ..., N − 1.

Taking the complex conjugate of equation (5), we get,

f*(x, y) = (1/N) Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} F*(u, v) exp[−j2π(ux + vy)/N]   ... (6)

Equation (6) is in the form of the 2-D forward transform of equation (4). Therefore, using F*(u, v) as input, the algorithm designed for computing the forward transform gives f*(x, y). Taking the complex conjugate of this result yields f(x, y). When f(x) or f(x, y) is real, the complex conjugate operation is unnecessary, because f(x) = f*(x) and f(x, y) = f*(x, y) for real functions.

Q56. State and prove the properties of 2-D DFT.

(OR)

State and prove separability property of 2D-DFT.   May-19, (R16), Q4

Ans:

1. Separability

It states that the pair of discrete Fourier transforms f(x, y) and F(u, v) can be expressed in separable forms, i.e.,

F(u, v) = (1/N) Σ_{x=0}^{N−1} exp[−j2πux/N] Σ_{y=0}^{N−1} f(x, y) exp[−j2πvy/N]   ... (1)

for u, v = 0, 1, ..., N − 1, and

f(x, y) = (1/N) Σ_{u=0}^{N−1} exp[j2πux/N] Σ_{v=0}^{N−1} F(u, v) exp[j2πvy/N]   ... (2)

for x, y = 0, 1, ..., N − 1.
The main advantage of this property is that the transform pair can be obtained in two steps by successive applications of the 1-D Fourier transform or its inverse,

F(u, v) = (1/N) Σ_{x=0}^{N−1} F(x, v) exp[−j2πux/N]   ... (3)

where,

F(x, v) = N [ (1/N) Σ_{y=0}^{N−1} f(x, y) exp[−j2πvy/N] ]   ... (4)

For each value of x = 0, 1, 2, ..., N − 1, the expression inside the brackets in equation (4) is a 1-D transform with frequency values v = 0, 1, ..., N − 1. Therefore, the 2-D function F(x, v) is obtained by taking a transform along each row of f(x, y) and multiplying the result by N. The desired result F(u, v) is then obtained by taking the discrete Fourier transform along each column of F(x, v). The procedure is summarized in figure (1).

Figure (1): Computation of the 2-D Fourier Transform as a Series of 1-D Transforms (row transforms followed by column transforms)

The same result may be obtained by first taking transforms along the columns of f(x, y) and then along the rows of the result. This is easily verified by reversing the order of the summations in equation (1). Identical comments apply to the implementation of equation (2).

2. Translation

(a) Translation in the Spatial Domain

The translation property of the discrete Fourier transform is given by the expression,

f(x − x₀, y − y₀) ⇔ F(u, v) exp[−j2π(ux₀ + vy₀)/N]   ... (5)

Proof

From the definition of the Fourier transform,

F{f(x − x₀, y − y₀)} = (1/N) Σ_x Σ_y f(x − x₀, y − y₀) exp[−j2π(ux + vy)/N]

Substituting m = x − x₀ and n = y − y₀,

= (1/N) Σ_m Σ_n f(m, n) exp[−j2π(u(m + x₀) + v(n + y₀))/N]

= exp[−j2π(ux₀ + vy₀)/N] (1/N) Σ_m Σ_n f(m, n) exp[−j2π(um + vn)/N]

∴ F{f(x − x₀, y − y₀)} = F(u, v) exp[−j2π(ux₀ + vy₀)/N]

(b) Translation in the Frequency Domain

The translation properties of the Fourier transform pair are,

f(x, y) exp[j2π(u₀x + v₀y)/N] ⇔ F(u − u₀, v − v₀)   ... (6)

f(x − x₀, y − y₀) ⇔ F(u, v) exp[−j2π(ux₀ + vy₀)/N]   ... (7)

The double arrow indicates the correspondence between a function and its Fourier transform, and vice versa. Equation (6) shows that multiplying f(x, y) by the exponential term and taking the transform shifts the origin of the frequency plane to the point (u₀, v₀). Similarly, multiplying F(u, v) by the exponential term shown and taking the inverse transform moves the origin of the spatial plane to (x₀, y₀).

Consider equation (6) with u₀ = v₀ = N/2. Then,

exp[j2π(u₀x + v₀y)/N] = e^{jπ(x + y)} = (−1)^{x+y}

and

f(x, y)(−1)^{x+y} ⇔ F(u − N/2, v − N/2)   ... (8)
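Both the separability property and the (−1)^{x+y} centering result of equation (8) can be checked numerically. NumPy's unnormalized FFT convention does not affect either property; the test image below is an assumed random array with even N.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))

# Separability: 1-D transforms along every row, then along every column
# of the result, reproduce the full 2-D DFT.
F_2d = np.fft.fft(np.fft.fft(f, axis=1), axis=0)

# Translation/centering: multiplying f(x, y) by (-1)^(x+y) before the
# transform moves the origin of F(u, v) to (N/2, N/2) -- the same
# rearrangement np.fft.fftshift performs after the transform (N even).
x, y = np.indices(f.shape)
F_centered = np.fft.fft2(f * (-1.0) ** (x + y))
```

Reversing the order (columns first, then rows) in the first computation gives the identical result, as the text notes.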
Thus, the origin of the Fourier transform of f(x, y) can be moved to the centre of its corresponding N × N frequency square simply by multiplying f(x, y) by (−1)^{x+y}. In the one-variable case, this shift reduces to multiplication of f(x) by the term (−1)ˣ.

Note from equation (7) that a shift in f(x, y) does not affect the magnitude of its Fourier transform, as,

|F(u, v) exp[−j2π(ux₀ + vy₀)/N]| = |F(u, v)|   ... (9)

It is important to keep this in mind, because visual examination of the transform is usually limited to a display of its magnitude.

3. Periodicity

A 2-D Fourier transform function F(u, v) is periodic with period D when,

F(u, v) = F(u + pD, v + qD)

for integers p and q.

Proof

The 2-D discrete Fourier transform of f(x, y) is,

F(u, v) = Σ_x Σ_y f(x, y) e^{−j(2π/D)(ux + vy)}

F(u + pD, v + qD) = Σ_x Σ_y f(x, y) e^{−j(2π/D)[(u + pD)x + (v + qD)y]}

= Σ_x Σ_y f(x, y) e^{−j(2π/D)(ux + vy)} e^{−j2π(px + qy)}

Since x, y and p, q are integers, e^{−j2π(px + qy)} = 1, so,

= Σ_x Σ_y f(x, y) e^{−j(2π/D)(ux + vy)} = F(u, v)

∴ F(u + pD, v + qD) = F(u, v)

DIGITAL IMAGE PROCESSING [JNTU-HYDERABAD]

4. Rotation

The rotation property of the 2-D Fourier transform states that, "if a function f(x, y) is rotated by an angle θ₀, its corresponding Fourier transform also gets rotated by the same angle θ₀".

For instance, if f(x, y) is expressed in polar coordinates as f(r cos θ, r sin θ), then the discrete Fourier transforms obey,

DFT[f(r cos θ, r sin θ)] → F(R cos φ, R sin φ)

and

DFT[f(r cos(θ + θ₀), r sin(θ + θ₀))] → F(R cos(φ + θ₀), R sin(φ + θ₀))

A diagrammatic representation of the rotation of an image is as shown in figure (2).

Figure (2): Diagrammatic Representation of Rotation — (a) Original image, (b) Image rotated by 45°, (c) Spectrum of original image, (d) Spectrum of rotated image

5. Distribution

Statement

The distributivity of the 2-D discrete Fourier transform over the sum of two functions, each of two variables, i.e., f₁(x, y) + f₂(x, y), is,

DFT{f₁(x, y) + f₂(x, y)} = DFT{f₁(x, y)} + DFT{f₂(x, y)}

Proof

L.H.S:

DFT{f₁(x, y) + f₂(x, y)} = Σ_x Σ_y [f₁(x, y) + f₂(x, y)] e^{−j2π(ux + vy)/N}

= Σ_x Σ_y f₁(x, y) e^{−j2π(ux + vy)/N}
+ Σ_x Σ_y f₂(x, y) e^{−j2π(ux + vy)/N}

= F₁(u, v) + F₂(u, v)

∴ DFT{f₁(x, y) + f₂(x, y)} = DFT{f₁(x, y)} + DFT{f₂(x, y)}

i.e., L.H.S = R.H.S

6. Scaling

The scaling property of the discrete Fourier transform is given by the expressions,

(i) af(x, y) ⇔ aF(u, v)

(ii) f(ax, by) ⇔ (1/|ab|) F(u/a, v/b)

Proof

From the definition of the discrete Fourier transform,

F(u, v) = F{f(x, y)} = Σ_x Σ_y f(x, y) e^{−j2π(ux + vy)/N}   ... (10)

F{f(ax, by)} = Σ_x Σ_y f(ax, by) e^{−j2π(ux + vy)/N}   ... (11)

Multiplying and dividing the term in the exponent containing u by 'a' and the term containing v by 'b' in equation (11) (equivalently, substituting m = ax and n = by), we get,

F{f(ax, by)} = (1/|ab|) Σ_m Σ_n f(m, n) e^{−j2π[(u/a)m + (v/b)n]/N}

∴ F{f(ax, by)} = (1/|ab|) F(u/a, v/b)

Hence proved.

7. Average Value

The average value property of the discrete Fourier transform relates the average value of f(x, y),

f̄(x, y) = (1/N²) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y)

to F(0, 0). From the definition,

F(u, v) = (1/N) Σ_x Σ_y f(x, y) e^{−j2π(ux + vy)/N}   ... (12)

Substituting u = v = 0 in equation (12), we get,

F(0, 0) = (1/N) Σ_x Σ_y f(x, y)

so that,

f̄(x, y) = (1/N) F(0, 0)

8. Laplacian

The Laplacian of a two-variable function f(x, y) is expressed as,

∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²

and, from the definition of the 2-D discrete Fourier transform,

DFT{∇²f(x, y)} ⇔ −(2π)²(u² + v²) F(u, v)
Proof

L.H.S:

DFT{∇²f(x, y)} = DFT[∂²f/∂x²] + DFT[∂²f/∂y²]

Since the differentiation property of the DFT gives,

DFT[∂²f/∂x²] = (j2πu)² F(u, v) and DFT[∂²f/∂y²] = (j2πv)² F(u, v)

we get,

DFT{∇²f(x, y)} = (j2πu)² F(u, v) + (j2πv)² F(u, v)

= −(2π)²u² F(u, v) − (2π)²v² F(u, v)

= −(2π)²(u² + v²) F(u, v)

= R.H.S

Hence proved.

9. Convolution and Correlation

Convolution

Statement

The 2-dimensional convolution of two arrays f(x, y) and g(x, y) is expressed as,

f(x, y) * g(x, y) = Σ_m Σ_n f(m, n) g(x − m, y − n)

Proof

The convolution of f(x, y) and g(x, y), from the definition of the 2-D discrete Fourier transform, is,

DFT{f(x, y) * g(x, y)} = Σ_x Σ_y [Σ_m Σ_n f(m, n) g(x − m, y − n)] e^{−j2π(ux + vy)/N}   ... (13)

Separating the summations over (m, n) from those over (x, y) in equation (13), we get,

DFT{f(x, y) * g(x, y)} = Σ_m Σ_n f(m, n) [Σ_x Σ_y g(x − m, y − n) e^{−j2π(ux + vy)/N}]   ... (14)

Splitting e^{−j2π(ux + vy)/N} as e^{−j2π[u(x − m) + v(y − n)]/N} e^{−j2π(um + vn)/N} in equation (14), we get,

= Σ_m Σ_n f(m, n) e^{−j2π(um + vn)/N} Σ_x Σ_y g(x − m, y − n) e^{−j2π[u(x − m) + v(y − n)]/N}

= F(u, v) G(u, v)

∴ DFT{f(x, y) * g(x, y)} = F(u, v) G(u, v)

Correlation

Statement

The cross-correlation of two functions x(n) and h(n) is equivalent to performing the convolution of one function with the folded version of the other function.

Proof

The discrete Fourier transform of the correlation of the two functions x(n) and h(n) is given as,

DFT{R_xh(m)} = DFT{ Σ_{n=0}^{N−1} x(n) h(n + m) }   ... (15)

Where, R_xh represents the correlation between the two functions x(n) and h(n),

R_xh(m) = Σ_{n=0}^{N−1} x(n) h(n + m)
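The convolution theorem reached above, DFT{f * g} = F(u, v)G(u, v), can be checked numerically for the circular (periodic) convolution implied by the DFT; the 8 × 8 arrays are assumed random examples.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
f = rng.random((N, N))
g = rng.random((N, N))

# Circular 2-D convolution straight from the definition
# (f * g)(x, y) = sum_m sum_n f(m, n) g(x - m, y - n), indices taken mod N.
conv = np.zeros((N, N))
for x in range(N):
    for y in range(N):
        for m in range(N):
            for n in range(N):
                conv[x, y] += f[m, n] * g[(x - m) % N, (y - n) % N]

# Convolution theorem: the same result via pointwise products of DFTs.
via_fft = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)).real
```

The direct sum costs O(N⁴) operations while the FFT route costs O(N² log N), which is why convolution of large images is normally carried out in the frequency domain.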
