A Technical PRESENTATION ON IMAGE PROCESSING

ADITYA ENGINEERING COLLEGE RAJAHMUNDRY

PRESENTED BY G.M.M.V. KRISHNA & P. SANDEEP REDDY
2nd B.TECH, ELECTRONICS AND COMMUNICATION ENGINEERING
mkrishna612@gmail.com
sandeepreddy.padala@gmail.com

ABSTRACT

In the present world, computer graphics plays an important role. The areas where computer graphics is used include entertainment, presentations, education and training, visualization, design, image processing, and graphical user interfaces. Among all of these, image processing has its own importance.

Image processing deals with improving the clarity of an image and with manipulating an image, which is a very important application of computer graphics. In image processing we perform operations on an image.

This paper mainly concentrates on what an image is, how processing takes place, and the digital image. It also deals with the characteristics of image operations, such as types of operations and types of neighborhoods, video parameters, statistics of images, and contour representations such as chain code, crack code, and run code.

This paper also deals with the noise that contaminates images acquired from modern sensors, and with one of the main applications of image processing: cameras.

INTRODUCTION:

Modern digital technology has made it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers. The goal of this manipulation can be divided into three categories:

• Image Processing: image in -> image out
• Image Analysis: image in -> measurements out
• Image Understanding: image in -> high-level description out
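The first two categories can be contrasted in a short sketch (a minimal NumPy illustration; the toy array and the particular operations chosen here are invented for the example):

```python
import numpy as np

# A toy 2-D "image": a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Image Processing: image in -> image out (here, a 3x3 mean smoothing).
padded = np.pad(image, 1, mode="edge")
smoothed = sum(
    padded[dy:dy + 8, dx:dx + 8] for dy in range(3) for dx in range(3)
) / 9.0

# Image Analysis: image in -> measurements out (here, a bright-pixel count).
bright_pixels = int((image > 0.5).sum())

print(smoothed.shape)   # still an image: (8, 8)
print(bright_pixels)    # a single measurement: 16
```

Image understanding would go one step further, turning such measurements into a high-level description ("one square object near the center").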

An image defined in the "real world" is considered to be a function of two real variables, for example, a(x,y), with a as the amplitude (e.g. brightness) of the image at the real coordinate position (x,y). An image may be considered to contain sub-images, sometimes referred to as regions of interest (ROIs) or simply regions. This concept reflects the fact that images frequently contain collections of objects, each of which can be the basis for a region. In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image (region) might be processed to suppress motion blur while another part might be processed to improve color rendition.

The amplitudes of a given image will almost always be either real numbers or integer numbers. The latter is usually the result of a quantization process that converts a continuous range to a discrete number of levels. In certain image-forming processes, however, the signal may involve photon counting, which implies that the amplitude is inherently quantized. In other image-forming procedures, such as magnetic resonance imaging, the direct physical measurement yields a complex number in the form of a real magnitude and a real phase.

DIGITAL IMAGE:

A digital image a[m,n], described in a 2D discrete space, is derived from an analog image a(x,y) in a 2D continuous space through a sampling process that is frequently referred to as digitization. The 2D continuous image a(x,y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m,n], with {m = 0,1,2,…,M–1} and {n = 0,1,2,…,N–1}, is a[m,n]. In fact, in most cases a(x,y), which we might consider to be the physical signal that impinges on the face of a 2D sensor, is actually a function of many variables including depth (z), color (λ), and time (t).

[Figure: Digitization of a continuous image.] In the figure, the pixel at coordinates [m=10, n=3] has the highest brightness value.
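The sampling and quantization steps can be sketched as follows (a minimal NumPy illustration; the continuous image a(x,y) used here is an invented smooth function, and 256 gray levels are assumed):

```python
import numpy as np

# Digitization sketch: sample a continuous image a(x, y) on an N-row by
# M-column grid, then quantize the amplitudes to 256 integer levels.
def a_continuous(x, y):
    # An invented example image with values in [0, 1].
    return 0.5 * (np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y) + 1.0)

N, M = 16, 16                                  # rows, columns
n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
samples = a_continuous(m / M, n / N)           # a(x, y) at the sample points
a = np.round(samples * 255).astype(np.uint8)   # quantized digital image a[m, n]

print(a.shape, a.dtype)                        # (16, 16) uint8
```

Each entry a[m, n] is now one pixel: a real-valued amplitude reduced to one of 256 discrete levels.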

CHARACTERISTICS OF IMAGE OPERATIONS:

There are a variety of ways to classify and characterize image operations. The reason for doing so is to understand what type of results we might expect to achieve with a given type of operation, and what the computational burden associated with a given operation might be.

Types of operations:

The types of operations that can be applied to digital images to transform an input image a[m,n] into an output image b[m,n] (or another representation) can be classified into three categories:

• Point – the output value at a specific coordinate depends only on the input value at that same coordinate.
• Local – the output value at a specific coordinate depends on the input values in the neighborhood of that same coordinate.
• Global – the output value at a specific coordinate depends on all the values in the input image.
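The three categories can be illustrated with one operation of each kind (a minimal NumPy sketch; the brightness shift, 3x3 maximum, and contrast normalization are example operations chosen for the illustration, not prescribed by the classification):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(8, 8)).astype(float)   # input image a[m, n]

# Point operation: each output pixel depends only on the same input pixel.
b_point = np.clip(a + 40, 0, 255)                     # brightness shift

# Local operation: each output pixel depends on a 3x3 neighborhood.
padded = np.pad(a, 1, mode="edge")
b_local = np.maximum.reduce(
    [padded[i:i + 8, j:j + 8] for i in range(3) for j in range(3)]
)

# Global operation: each output pixel depends on every input pixel,
# here through the image-wide mean and standard deviation.
b_global = (a - a.mean()) / a.std()

print(b_point.shape, b_local.shape, b_global.shape)
```

The classification also hints at the computational burden: a point operation costs one operation per pixel, a local operation costs roughly the neighborhood size per pixel, and a global operation must visit the whole image.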

[Figure: Illustration of various types of image operations. Note: complexity is specified in operations per pixel.]

Types of neighborhoods:

Neighborhood operations play a key role in modern digital image processing. It is therefore important to understand how images can be sampled and how that relates to the various neighborhoods that can be used to process an image.

• Rectangular sampling – In most cases, images are sampled by laying a rectangular grid over the image, as in the digitization figure above.
• Hexagonal sampling – An alternative sampling scheme, termed hexagonal sampling.

Both sampling schemes have been studied extensively, and both represent a possible periodic tiling of the continuous image space. We will restrict our attention, however, to rectangular sampling, as it remains, due to hardware and software considerations, the method of choice. Some of the most common neighborhoods are the 4-connected neighborhood and the 8-connected neighborhood in the case of rectangular sampling, and the 6-connected neighborhood in the case of hexagonal sampling.

[Figure: (a) rectangular sampling, 4-connected; (b) rectangular sampling, 8-connected; (c) hexagonal sampling, 6-connected.]
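On a rectangular grid, the 4-connected and 8-connected neighborhoods can be enumerated directly (a small sketch; the function name and its bounds-checking behavior are choices made for this example):

```python
# Neighbors of pixel [m, n] in an M-column by N-row image, using either
# 4-connectivity (edge neighbors) or 8-connectivity (edges + diagonals).
def neighbors(m, n, M, N, connectivity=4):
    if connectivity == 4:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connected: also include the four diagonal neighbors
        offsets = [(dm, dn) for dm in (-1, 0, 1) for dn in (-1, 0, 1)
                   if (dm, dn) != (0, 0)]
    # Keep only neighbors that fall inside the image.
    return [(m + dm, n + dn) for dm, dn in offsets
            if 0 <= m + dm < M and 0 <= n + dn < N]

print(len(neighbors(1, 1, 4, 4, connectivity=4)))  # interior pixel: 4
print(len(neighbors(0, 0, 4, 4, connectivity=8)))  # corner pixel: 3
```

An interior pixel has 4 or 8 neighbors; border and corner pixels have fewer, which is why neighborhood operations need a border policy such as the padding used earlier.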

IMPORTANCE OF PHASE AND MAGNITUDE:

Both the magnitude and the phase functions are necessary for the complete reconstruction of an image from its Fourier transform. Figure (a) shows what happens when Figure (1) is restored solely on the basis of the magnitude information, and Figure (b) shows what happens when Figure (1) is restored solely on the basis of the phase information. Neither the magnitude information nor the phase information alone is sufficient to restore the image. The magnitude-only image (Figure a) is unrecognizable and has severe dynamic range problems. The phase-only image (Figure b) is barely recognizable, that is, severely degraded in quality.

NOISE:

Images acquired through modern sensors may be contaminated by a variety of noise sources. By noise we refer to stochastic variations as opposed to deterministic distortions such as shading or lack of focus. We will assume for this section that we are dealing with images formed from light using modern electro-optics. In particular we will assume the use of modern charge-coupled device (CCD) cameras, where photons produce electrons that are commonly referred to as photoelectrons. Nevertheless, most of the observations we shall make about noise and its various sources hold equally well for other imaging modalities.

PHOTON NOISE:

When the physical signal that we observe is based upon light, the quantum nature of light plays a significant role. A single photon at λ = 500 nm carries an energy of E = hν = hc/λ ≈ 3.97 × 10⁻¹⁹ Joules. Modern CCD cameras are sensitive enough to be able to count individual photons.

THERMAL NOISE:

An additional, stochastic source of electrons in a CCD well is thermal energy. Electrons can be freed from the CCD material itself through thermal vibration and then, trapped in the CCD well, be indistinguishable from "true" photoelectrons. By cooling the CCD chip it is possible to significantly reduce the number of "thermal electrons" that give rise to thermal noise or dark current.

AMPLIFIER NOISE:

The standard model for this type of noise is additive, Gaussian, and independent of the signal. In modern well-designed electronics, amplifier noise is generally negligible. The most common exception to
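The dependence of reconstruction on both magnitude and phase can be demonstrated numerically (a minimal NumPy sketch of the experiment described above; the toy test image is invented for the illustration):

```python
import numpy as np

# Test image: an invented toy pattern (a bright rectangle).
a = np.zeros((32, 32))
a[8:24, 12:20] = 1.0

F = np.fft.fft2(a)
magnitude, phase = np.abs(F), np.angle(F)

# Magnitude-only reconstruction: the phase is discarded (set to zero).
mag_only = np.real(np.fft.ifft2(magnitude))

# Phase-only reconstruction: every component's magnitude is set to one.
phase_only = np.real(np.fft.ifft2(np.exp(1j * phase)))

# Full reconstruction uses both and recovers the image exactly.
both = np.real(np.fft.ifft2(magnitude * np.exp(1j * phase)))
print(np.allclose(both, a))   # True
```

Only the reconstruction that combines magnitude and phase matches the original; each partial reconstruction is visibly degraded, as in Figures (a) and (b).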

this is in color cameras, where more amplification is used in the blue channel than in the green or red channels, leading to more noise in the blue channel.

APPLICATIONS:

CAMERAS:

The cameras and recording media available for modern digital image processing applications are changing at a significant pace.

Video cameras – Shutter speeds as low as 500 ns are available with commercially available CCD video cameras, although the more conventional exposure times for video are 33.37 ms (NTSC) and 40.0 ms (PAL, SECAM). Values as high as 30 s may also be achieved with certain video cameras, although this means sacrificing a continuous stream of video images that contain signal in favor of a single integrated image amongst a stream of otherwise empty images. Subsequent digitizing hardware must be capable of handling this situation.

Scientific cameras – Again, values as low as 500 ns are possible and, with cooling techniques based on Peltier cooling or liquid-nitrogen cooling, integration times in excess of one hour are readily achieved.

CONCLUSIONS:

We conclude that image processing is one of the pioneering applications of computer graphics. Due to its vitality, image processing is treated as a subject in its own right. In image processing, many techniques have been developed, and new ones are still being developed, to overcome the disturbances created by noise when acquiring images through modern sensors. In present technology, movies consist largely of animation and graphics, and image processing plays a major role in animation. So in the future the importance of image processing will increase to a very large extent. Image processing requires high-level involvement, an understanding of the system aspects of graphics software, a realistic feeling for graphics, system capabilities, and ease of use.

REFERENCES:

Russ, J.C., The Image Processing Handbook
Gonzalez, R.C. and Woods, R.E., Digital Image Processing
WWW.GOOGLE.CO.IN\IMAGE PROCESSING
