
July 2011
Master of Computer Application (MCA) – Semester 6
MC0086 – Digital Image Processing – 4 Credits

(Book ID: B1007)

Assignment Set – 1 (60 Marks)
Answer all questions. Each question carries fifteen marks.

1. Discuss the following with respect to Digital Image Processing: a. Origins of Digital Image Processing b. Examples of Fields that use Digital Image Processing c. Components of an Image Processing System

A) Processing of a digital image involves the following steps, carried out in sequence: image acquisition, image enhancement, image restoration, color image processing, wavelets and multiresolution processing, compression, morphological processing, segmentation, representation and description, and finally object recognition. Image acquisition is the first process. It requires an imaging sensor and the capability to digitize the signal produced by the sensor. The sensor could be a monochrome or a color TV camera that produces an entire image of the problem domain every 1/30 of a second. The imaging sensor could also be a line-scan camera that produces a single image line at a time. If the output of the camera or other imaging sensor is not already in digital form, an analog-to-digital converter digitizes it. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage also involves preprocessing, such as scaling.
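As a rough sketch of the digitization step described above (illustrative Python; the scan-line signal model and all names are invented for the example), an analog sensor signal is sampled at discrete positions and quantized to 8-bit values:

import numpy as np

# Hypothetical analog sensor output: brightness along one scan line.
def analog_line(x):
    return 0.5 + 0.4 * np.sin(2 * np.pi * x)

samples = analog_line(np.linspace(0.0, 1.0, 640))    # spatial sampling
digital = np.round(samples * 255).astype(np.uint8)   # 8-bit quantization (A/D conversion)
print(digital[:10])                                  # one digitized image line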

B) Image enhancement is one of the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because “it looks better”. It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images on the Internet. Color is used as the basis for extracting features of interest in an image. Wavelets are the foundation for representing images in various degrees of resolution; in particular, they are used for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.

C) Mass storage capability is a must in image processing applications. An image of size 1024×1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. When dealing with thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge. Digital storage for image processing applications falls into three principal categories: (1) short-term storage for use during processing; (2) on-line storage for relatively fast recall; and (3) archival storage, characterized by infrequent access. Storage is measured in bytes (eight bits), Kbytes (one thousand bytes), Mbytes (one million bytes), Gbytes (meaning giga, or one billion, bytes), and Tbytes (meaning tera, or one trillion, bytes).

Image displays in use today are mainly color (preferably flat screen) TV monitors. Monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system. Seldom are there requirements for image display applications that cannot be met by display cards available commercially as part of the computer system. In some cases, it is necessary to have stereo displays, and these are implemented in the form of headgear containing two small displays embedded in goggles worn by the user.

Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units, such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written material. For presentations, images are displayed on film transparencies or in a digital medium if image projection equipment is used. The latter approach is gaining acceptance as the standard for image presentations.

Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth. In dedicated networks, this typically is not a problem, but communications with remote sites via the Internet are not always as efficient. Fortunately, this situation is improving quickly as a result of optical fiber and other broadband technologies.
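The one-megabyte figure quoted above is simple arithmetic; a quick Python check (purely illustrative):

# Storage for an uncompressed 1024 x 1024 image with 8-bit pixels.
width, height, bits_per_pixel = 1024, 1024, 8
total_bytes = width * height * bits_per_pixel // 8
print(total_bytes)                # 1048576 bytes
print(total_bytes / 2**20, "MB")  # exactly 1 Mbyte; a million such images need about 1 Tbyte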

2. Explain the following terms: A) Adjacency B) Connectivity C) Regions and Boundaries

Specifically, the adjacency matrix of a finite graph G on n vertices is the n × n matrix where the non-diagonal entry aij is the number of edges from vertex i to vertex j, and the diagonal entry aii is, depending on the convention, either once or twice the number of edges (loops) from vertex i to itself. Undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention. In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal. If the graph is undirected, the adjacency matrix is symmetric. There exists a unique adjacency matrix for each isomorphism class of graphs (up to permuting rows and columns), and it is not the adjacency matrix of any other isomorphism class of graphs. The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory.
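As a small illustration of these definitions (a Python sketch with an arbitrary example graph), the adjacency matrix of an undirected simple graph is symmetric with zeros on its diagonal:

import numpy as np

# Undirected simple graph on 4 vertices with edges (0,1), (0,2), (1,2), (2,3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] += 1   # number of edges from vertex i to vertex j
    A[j, i] += 1   # undirected, so the matrix is symmetric

print(A)
assert (A == A.T).all() and (np.diag(A) == 0).all()   # (0,1)-matrix, zero diagonal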

3. Describe the following with respect to Image Segmentation: a. Detection of Discontinuities b. Edge Linking and Boundary Detection

a) The purpose of detecting sharp changes in image brightness is to capture important events and changes in properties of the world. It can be shown that under rather general assumptions for an image formation model, discontinuities in image brightness are likely to correspond to[1][2]:
• discontinuities in depth,
• discontinuities in surface orientation,
• changes in material properties, and
• variations in scene illumination.

In the ideal case, the result of applying an edge detector to an image may lead to a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, as well as curves that correspond to discontinuities in surface orientation. Thus, applying an edge detection algorithm to an image may significantly reduce the amount of data to be processed and may therefore filter out information that may be regarded as less relevant, while preserving the important structural properties of an image. If the edge detection step is successful, the subsequent task of interpreting the information contents in the original image may therefore be substantially simplified. However, it is not always possible to obtain such ideal edges from real-life images of moderate complexity. Edges extracted from non-trivial images are often hampered by fragmentation (meaning that the edge curves are not connected), missing edge segments, as well as false edges not corresponding to interesting phenomena in the image, thus complicating the subsequent task of interpreting the image data.[3] Edge detection is one of the fundamental steps in image processing, image analysis, image pattern recognition, and computer vision techniques. During recent years, however, substantial (and successful) research has also been made on computer vision methods[which?] that do not explicitly rely on edge detection as a pre-processing step.

b) Rosenfeld and Troy have proposed a measure of the number of edges in a neighborhood as a textural measure. As a first step in their process, an edge map array E(j, k) is produced by some edge detector such that E(j, k) = 1 for a detected edge and E(j, k) = 0 otherwise. Usually, the detection threshold is set lower than the normal setting for the isolation of boundary points. This texture measure is defined as

T(j, k) = (1/W²) · Σ Σ E(j + m, k + n),   summed over −w ≤ m, n ≤ w

where W = 2w + 1 is the dimension of the observation window. A variation of this approach is to substitute the edge gradient G(j, k) for the edge map array in the equation above.
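A minimal Python sketch of this edge-density measure (assuming NumPy and SciPy are available; the random edge map merely stands in for real detector output) computes T(j, k) as the windowed mean of E:

import numpy as np
from scipy.ndimage import uniform_filter

# Binary edge map E(j, k): 1 where an edge was detected, 0 otherwise.
rng = np.random.default_rng(0)
E = (rng.random((128, 128)) > 0.9).astype(float)

w = 3
W = 2 * w + 1                  # dimension of the observation window
T = uniform_filter(E, size=W)  # windowed mean = (1/W^2) * sum of E over W x W
print(T.max(), T.mean())       # busier textures give larger edge densities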

4. Describe the following with respect to Image Feature Extraction: a. Image Feature Evaluation b. Amplitude Features c. Transform Coefficient Features d. Texture Definition e. Visual Texture Discrimination

a) This paper is concerned with feature evaluation for content-based image retrieval. Here we concentrate our attention on the evaluation of image features amongst three alternatives, namely the Harris corners, the maximally stable extremal regions and the scale invariant feature transform. To evaluate these image features in a content-based image retrieval setting, we have used the KD-tree algorithm. We use the KD-tree algorithm to match those features corresponding to the query image with those recovered from the images in the data set under study. With the matches at hand, we use a nearest neighbour approach to threshold the Euclidean distances between pairs of corresponding features. In this way, the retrieval is such that those features whose pairwise distances are small 'vote' for a retrieval candidate in the data set. This voting scheme allows us to arrange the images in the data set in order of relevance and permits the recovery of measures of performance for each of the three alternatives. In our experiments, we focus on the evaluation of the effects of scaling and rotation on the retrieval performance.

b) Image segmentation decisions are typically based on some measure of image amplitude in terms of luminance, spectral value, color value, or other units. The amplitude values may be used directly or may result from some transformation of the original pixel values. In this section we describe amplitude features that result from measurements over a neighborhood of pixels. For example, the average or mean amplitude in a (2M+1) × (2M+1) neighborhood centered on (x, y) is given by

Ā(x, y) = (1/(2M+1)²) · Σ Σ A(x + m, y + n),   summed over −M ≤ m, n ≤ M

Other commonly used amplitude features include range, median, variance, energy, and entropy.
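A short Python sketch of such neighborhood amplitude features (illustrative; it relies on SciPy's ndimage filters and a random test image) computes the mean of the equation above together with the median, range, and variance:

import numpy as np
from scipy.ndimage import uniform_filter, median_filter, maximum_filter, minimum_filter

rng = np.random.default_rng(1)
A = rng.random((64, 64))       # image amplitudes (luminance, spectral value, ...)

M = 2
size = 2 * M + 1               # (2M+1) x (2M+1) neighborhood

mean_amp = uniform_filter(A, size)                            # average amplitude
med_amp  = median_filter(A, size)                             # median
rng_amp  = maximum_filter(A, size) - minimum_filter(A, size)  # range
var_amp  = uniform_filter(A**2, size) - mean_amp**2           # variance
print(mean_amp[32, 32], var_amp[32, 32])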

c) Discrete wavelet transform has become a widely used feature extraction tool in pattern recognition and pattern classification applications. However, using all wavelet coefficients as features is not desirable in most applications -- the enormity of data and irrelevant wavelet coefficients may adversely affect the performance. Therefore, this paper presents a novel feature extraction method based on discrete wavelet transform. In this method, Shannon's entropy measure is used for identifying competent wavelet coefficients. The features are formed by calculating the energy of coefficients clustered around the competent clusters. The method is applied to the lung sound classification problem. The experimental results show that the new method performs better than a well-known feature extraction method that is known to give the best results for the lung sound classification problem.

d) Texture is:
• the visual and especially tactile quality of a surface: rough texture;
• the characteristic structure of the interwoven or intertwined threads, strands, or the like, that make up a textile fabric: coarse texture;
• the characteristic physical structure given to a material, an object, etc., by the size, shape, arrangement, and proportions of its parts: soil of a sandy texture; a cake with a heavy texture;
• an essential or characteristic quality; essence.
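The following Python sketch is only a loose interpretation of the wavelet feature scheme in c): it assumes the PyWavelets package is installed, and the entropy-based selection rule shown is an illustrative guess rather than the paper's exact criterion:

import numpy as np
import pywt

rng = np.random.default_rng(2)
signal = rng.standard_normal(1024)             # stand-in for a lung sound recording

coeffs = pywt.wavedec(signal, 'db4', level=4)  # multilevel discrete wavelet transform

def shannon_entropy(c):
    p = c**2 / np.sum(c**2)                    # normalized coefficient energies
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

scores = [shannon_entropy(c) for c in coeffs]
keep = np.argsort(scores)[:2]                             # "competent" subbands (illustrative rule)
features = [float(np.sum(coeffs[i]**2)) for i in keep]    # energy features
print(features)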

Master of Computer Application (MCA) – Semester 6
MC0086 – Digital Image Processing – 4 Credits
(Book ID: B1007)

Assignment Set – 2 (60 Marks)
Answer all questions. Each question carries fifteen marks.

1. Describe the following features of Image Extraction: A) Image Feature Evaluation B) Amplitude Features C) Transform Coefficient Features

a) In this paper, we evaluate various image features and different search strategies for fitting Active Shape Models (ASM) to bone object boundaries in digitized radiographs. The original ASM method iteratively refines the pose and shape parameters of the point distribution model driving the ASM by a least squares fit of the shape to updated target points at the estimated object boundary position, as determined by a suitable object boundary criterion. We also evaluate various measures for capturing local image appearance around each boundary point and conclude that the Mahalanobis distance applied to normalized image intensity profiles extracted normal to the shape is the most suitable criterion among the tested ones for guiding the ASM optimization. We propose an improved search procedure that is more robust against outlier configurations in the boundary target points by requiring subsequent shape changes to be smooth, which is imposed by a smoothness constraint on the displacement of neighbouring target points at each iteration and implemented by a minimal cost path approach. We compare the original ASM search method and our improved search algorithm with a third method that does not rely on iteratively refined target point positions, but instead optimizes a global Bayesian objective function derived from statistical a priori contour shape and image models. Extensive validation of these methods on a database containing more than 400 images of the femur, humerus and calcaneus, using the manual expert segmentation as ground truth, shows that our minimal cost path method is the most robust.
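As an illustration of the Mahalanobis boundary criterion mentioned above, here is a minimal Python sketch (the profiles are random stand-ins, not radiograph data):

import numpy as np

# Normalized intensity profiles sampled normal to the shape at one landmark;
# rows are training profiles used to estimate a mean and covariance.
rng = np.random.default_rng(3)
train = rng.standard_normal((50, 9))
mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False) + 1e-6 * np.eye(9))  # regularized inverse

def mahalanobis(profile):
    d = profile - mu
    return float(np.sqrt(d @ cov_inv @ d))

candidate = rng.standard_normal(9)
print(mahalanobis(candidate))   # lower distance = better boundary candidate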

b) Image segmentation decisions are typically based on some measure of image amplitude in terms of luminance, spectral value, color value, or other units. The amplitude values may be used directly or may result from some transformation of the original pixel values. In this section we describe amplitude features that result from measurements over a neighborhood of pixels. For example, the average or mean amplitude in a (2M+1) × (2M+1) neighborhood centered on (x, y) is given by

Ā(x, y) = (1/(2M+1)²) · Σ Σ A(x + m, y + n),   summed over −M ≤ m, n ≤ M

Other commonly used amplitude features include range, median, variance, energy, and entropy.

c) Discrete wavelet transform has become a widely used feature extraction tool in pattern recognition and pattern classification applications. However, using all wavelet coefficients as features is not desirable in most applications -- the enormity of data and irrelevant wavelet coefficients may adversely affect the performance. Therefore, this paper presents a novel feature extraction method based on discrete wavelet transform. In this method, Shannon's entropy measure is used for identifying competent wavelet coefficients. The features are formed by calculating the energy of coefficients clustered around the competent clusters. The method is applied to the lung sound classification problem. The experimental results show that the new method performs better than a well-known feature extraction method that is known to give the best results for the lung sound classification problem.

2. Describe the following with respect to Image Enhancement: a. Contrast Manipulation b. Histogram Modification c. Noise Cleaning

a) I want to perform contrast and brightness manipulation in the GPU over a displayed image. So, let’s start from image manipulation. While being in flight, I had to learn new features introduced in .NET 3.5 SP1 and Visual Studio 2008 SP1. In order to begin, you should download and install .NET 3.5 SP1. Meanwhile (it’s about 500 MB of download) we’ll learn how to write a custom shader effect.
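While the download runs, here is the per-pixel arithmetic such a shader would apply, sketched on the CPU in Python/NumPy (the mid-grey pivot and parameter ranges are illustrative choices, not the actual WPF shader API):

import numpy as np

def adjust(image, contrast=1.0, brightness=0.0):
    # Scale around mid-grey for contrast, then add a brightness offset;
    # a pixel shader evaluates the same formula once per sample on the GPU.
    x = image.astype(np.float32) / 255.0
    x = (x - 0.5) * contrast + 0.5 + brightness
    return (np.clip(x, 0.0, 1.0) * 255.0).astype(np.uint8)

img = np.tile(np.arange(256, dtype=np.uint8), (32, 1))   # grey-ramp test image
out = adjust(img, contrast=1.5, brightness=0.1)
print(img[0, ::64], out[0, ::64])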

b) Many image processing operations result in changes to the image's histogram. The class of histogram modifications which we consider here includes operations where the changes to pixel levels are computed so as to change the histogram in a particular way.

Histogram Stretching. The simplest form of histogram modification is histogram stretching. For example, if the image is under-exposed, its values would only occupy the lower part of the dynamic range. You can perform the histogram stretching for an 8-bit image using:

ImageStats/Q imageWave
Variable normalization=255/(V_max-V_min)
ImageWave=normalization*(ImageWave-V_min)

The normalization variable makes the subsequent operation a bit more efficient. The following images illustrate histogram stretching. In each case the image is shown on the left and the corresponding luminance histogram is shown on the right. Below the image we show the histograms of the RGB components.
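For readers not using Igor, an equivalent stretch in Python/NumPy (an illustrative sketch) is:

import numpy as np

def stretch(image):
    # Map [min, max] linearly onto the full 8-bit range [0, 255],
    # mirroring the Igor commands above.
    v_min, v_max = image.min(), image.max()
    normalization = 255.0 / (float(v_max) - float(v_min))
    return (normalization * (image.astype(np.float32) - v_min)).astype(np.uint8)

under_exposed = np.random.default_rng(4).integers(10, 90, (64, 64)).astype(np.uint8)
stretched = stretch(under_exposed)
print(under_exposed.min(), under_exposed.max(), "->", stretched.min(), stretched.max())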

c) While there are dozens of different kinds of noise reduction, the first widely used audio noise reduction technique was developed by Ray Dolby in 1966. Intended for professional use, Dolby Type A was an encode/decode system in which the amplitude of frequencies in four bands was increased during recording (encoding), then decreased proportionately during playback (decoding). When it was played back, the decoder reversed the process, in effect reducing the noise level by up to 10 dB.

The Dolby B system (developed in conjunction with Henry Kloss) was a single band system designed for consumer products. In particular, when recording quiet parts of an audio signal, the frequencies above 1 kHz would be boosted. This had the effect of increasing the signal-to-noise ratio on tape by up to 10 dB depending on the initial signal volume. The Dolby B system, while not as effective as Dolby A, had the advantage of remaining listenable on playback systems without a decoder.

Dbx was the competing analog noise reduction system developed by dbx laboratories. It used a root-mean-squared (RMS) encode/decode algorithm with the noise-prone high frequencies boosted, and the entire signal fed through a 2:1 compander. Dbx operated across the entire audible bandwidth and, unlike Dolby B, was unusable as an open-ended system. However, it could achieve up to 30 dB of noise reduction. Since analog video recordings use frequency modulation for the luminance part (composite video signal in direct colour systems), which keeps the tape at saturation level, audio-style noise reduction is unnecessary.

Dynamic Noise Reduction. Dynamic Noise Reduction (DNR) is an audio noise reduction system, introduced by National Semiconductor to reduce noise levels on long-distance telephony.[1] First sold in 1981, DNR is frequently confused with the far more common Dolby noise reduction system.[2] However, unlike Dolby and dbx Type I & Type II noise reduction systems, DNR is a playback-only signal processing system that does not require the source material to first be encoded, and it can be used together with other forms of noise reduction.[3] It was a development of the unpatented Philips Dynamic Noise Limiter (DNL) system, introduced in 1971, with the circuitry on a single chip.[4][5]

3. Describe the following with respect to Image Segmentation: A) Detection of Discontinuities B) Edge Linking and Boundary Detection

a) A Detection Algorithm for the localisation of unknown fault lines of a surface from scattered data is given. The method is based on a local approximation scheme using thin plate splines, and we show that this yields approximation of second order accuracy instead of first order as in the global case. Furthermore, the Detection Algorithm works with triangulation methods, and we show their utility for the approximation of the fault lines. The output of our method provides polygonal curves which can be used for the purpose of constrained surface approximation.

1 Introduction. Feature recognition has become an attractive field for research, especially within industrial applications. A feature of a surface f : R² → R typically reflects characteristic properties of f, such as discontinuities across planar curves. In geophysical sciences these discontinuities are referred to as fault lines [1], [9], [20]. The motivation for this work is given by applications from the oil industry.

b) Edge detectors yield pixels in an image that lie on edges. The next step is to try to collect these pixels together into a set of edges. Thus, our aim is to replace many points on edges with a few edges themselves. The practical problem may be much more difficult than the idealised case:
• small pieces of edges may be missing, and
• small edge segments may appear to be present due to noise where there is no real edge, etc.

In general, edge linking methods can be classified into two categories:
• Local Edge Linkers -- where edge points are grouped to form edges by considering each point's relationship to any neighbouring edge points.
• Global Edge Linkers -- where all edge points in the image plane are considered at the same time and sets of edge points are sought according to some similarity constraint, such as points which share the same edge equation.

4. Describe the following with respect to Edge Detection: a. First-Order Derivative Edge Detection b. Second-Order Derivative Edge Detection c. Edge-Fitting Edge Detection d. Luminance Edge Detector Performance

a. Edge detection is a fundamental tool in image processing and computer vision, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The same problem of finding discontinuities in 1D signals is known as step detection.
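As a minimal illustration of first-order derivative edge detection, the following Python sketch (using SciPy's Sobel operators on a synthetic step-edge image; the threshold is an arbitrary choice) finds pixels of large gradient magnitude:

import numpy as np
from scipy.ndimage import sobel

# Synthetic test image: a vertical step edge plus a little noise.
rng = np.random.default_rng(5)
image = np.zeros((64, 64))
image[:, 32:] = 1.0
image = image + 0.05 * rng.standard_normal(image.shape)

gx = sobel(image, axis=1)      # first derivative along x
gy = sobel(image, axis=0)      # first derivative along y
magnitude = np.hypot(gx, gy)   # gradient magnitude

edges = magnitude > 0.5 * magnitude.max()   # simple global threshold
print(np.argwhere(edges[32]).ravel())       # detected edge columns in row 32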

b) John Canny considered the mathematical problem of deriving an optimal smoothing filter given the criteria of detection, localization and minimizing multiple responses to a single edge.[7] He showed that the optimal filter given these assumptions is a sum of four exponential terms. He also showed that this filter can be well approximated by first-order derivatives of Gaussians.[8] Canny also introduced the notion of non-maximum suppression, which means that given the presmoothing filters, edge points are defined as points where the gradient magnitude assumes a local maximum in the gradient direction. Looking for the zero crossing of the 2nd derivative along the gradient direction was first proposed by Haralick. It took less than two decades to find a modern geometric variational meaning for that operator that links it to the Marr–Hildreth (zero crossing of the Laplacian) edge detector. That observation was presented by Ron Kimmel and Alfred Bruckstein.[9] Although his work was done in the early days of computer vision, the Canny edge detector (including its variations) is still a state-of-the-art edge detector.[10] Unless the preconditions are particularly suitable, it is hard to find an edge detector that performs significantly better than the Canny edge detector.
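A rough Python sketch of the non-maximum suppression idea described above (illustrative only: it uses a crude four-way quantization of the gradient direction and omits the hysteresis thresholding of the full Canny algorithm):

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def non_max_suppression(mag, gx, gy):
    # Keep a pixel only if its gradient magnitude is a local maximum along
    # the (coarsely quantized) gradient direction.
    angle = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    sector = (np.round(angle / 45.0).astype(int) % 4) * 45
    offsets = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    out = np.zeros_like(mag)
    for r in range(1, mag.shape[0] - 1):
        for c in range(1, mag.shape[1] - 1):
            dr, dc = offsets[int(sector[r, c])]
            if mag[r, c] >= mag[r + dr, c + dc] and mag[r, c] >= mag[r - dr, c - dc]:
                out[r, c] = mag[r, c]
    return out

image = np.zeros((32, 32))
image[:, 16:] = 1.0                           # vertical step edge
smoothed = gaussian_filter(image, sigma=1.0)  # presmoothing
gx = sobel(smoothed, axis=1)
gy = sobel(smoothed, axis=0)
thin = non_max_suppression(np.hypot(gx, gy), gx, gy)
print(np.count_nonzero(thin), "pixels survive suppression")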

precisely. We present an edge detection and line fitting procedure which ascribes a direction. Moreover. c) The detection and tracing of edges of varying diffusion is a problem of importance in image analysis. We discuss predictor-corrector procedures for performing this edge tracing where predicted and calculated lines and confidences are used to generate a better fitting line. The performance of the procedures is demonstrated using both synthetic and satellite meteorological images. So the first problem encountered with modeling this biological process is that of defining. although starting from a discrete viewpoint and then leading to a set of recursive filters for image smoothing instead of exponential filters or Gaussian filters. In particular. The usual approach is to simply define edges as step discontinuities in the image signal. However. d) It seems clear. of the objects in a scene. or occluding boundaries. and quality of fit to the edge within a square segment of a controlled size or ``scope. there is much physiological evidence suggesting that one form of this compression involves finding edges and other information-high features in images. a measure of gradient. that some form of data compression occurs at a very early stage in image processing.[11] The differential edge detector described below can be seen as a reformulation of Canny's method from the viewpoint of differential invariants computed from a scale-space representation leading to a number of advantages in terms of both theoretical analysis and subpixel implementation. large luminance changes can also correspond to surface markings on objects. and consequently they often indicate the edges.The Canny-Deriche detector was derived from similar mathematical criteria as the Canny edge detector. The method of localising these discontinuities often then becomes one of finding local maxima in the derivative of the signal. it is of interest for the segmentation of meteorological and physiological pictures where the boundaries of objects are possibly not well defined or are obscured to a varying extent by noise. what an edge might be.'' To detect and fit edges to diffuse objects the scope is adaptively altered based on the confidence of fit to permit tracing of the object's boundary. or zero-crossings in the . both from biological and computational evidence. Points of tangent discontinuity in the luminance signal (rather than simple discontinuity) can also signal an object boundary in the scene. Edges often occur at points where there is a large variation in the luminance values in an image.

This idea was first suggested to the AI community, both biologically and computationally, by Marr [5], and later developed by Marr and Hildreth [6], Canny [1,2], and many others [3,4]. In computer vision, edge detection is traditionally implemented by convolving the signal with some form of linear filter, usually a filter that approximates a first or second derivative operator. An odd symmetric filter will approximate a first derivative, and peaks in the convolution output will correspond to edges (luminance discontinuities) in the image. An even symmetric filter will approximate a second derivative operator. Zero-crossings in the output of convolution with an even symmetric filter will correspond to edges; maxima in the output of this operator will correspond to tangent discontinuities, often referred to as bars, or lines.
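A one-dimensional Python demonstration of these two filter types (the small kernels are standard finite-difference approximations) shows the peak and zero-crossing behaviour at a step edge:

import numpy as np

signal = np.concatenate([np.zeros(20), np.ones(20)])   # 1D step edge

odd_filter  = np.array([-1.0, 0.0, 1.0])   # odd symmetric: first derivative
even_filter = np.array([1.0, -2.0, 1.0])   # even symmetric: second derivative

first  = np.convolve(signal, odd_filter,  mode='same')
second = np.convolve(signal, even_filter, mode='same')

print(np.argmax(np.abs(first)))   # peak of the odd-filter output marks the edge
print(second[18:22])              # even-filter output crosses zero at the edge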
