
SEMINAR REPORT

Image Processing: Introduction and Applications

Guided By: Mayank Srivastava, Asst. Professor, ECE Dept., RKGIT

By: Ravi Kumar Verma, ECE 3rd Year, 0803331092


CERTIFICATE

This is to certify that Ravi Kumar Verma of ECE 6th semester has worked hard under my guidance on the seminar topic assigned to him. He has been sincere and determined throughout the seminar.

GUIDE FACULTY: Mr. Mayank Srivastava, Asst. Professor, Dept. of ECE, RKGIT

ACKNOWLEDGMENT

I extend my sincere gratitude to Prof. K. K. Tripathi, Head of Department, for sharing his invaluable knowledge and wonderful technical guidance. I also express my thanks to Mr. Mayank Srivastava, who guided me and provided me with all the useful information for presenting this seminar.

I also thank all the other faculty members of ECE department and my friends for their help and support.

Ravi Kumar Verma, ECE 3rd Year, 0803331092

ABSTRACT

Image processing, in its broadest and most literal sense, aims to provide practical, reliable and affordable means to allow machines to cope with images while assisting man in his general endeavours. The term image processing itself has become firmly associated with the much more limited objective of modifying images such that they are either: a. corrected for errors introduced during acquisition or transmission (restoration); or b. enhanced to overcome the weaknesses of the human visual system (enhancement).

TABLE OF CONTENTS
(a) Acknowledgment
(b) Abstract
1. Introduction
2. Image and Image Processing
3. Vision and Computer Vision
4. Types of Image Processing
5. Steps Involved in Image Processing
6. Components of Image Processing
7. Image Sensors (CCD and CMOS)
8. Applications
9. Conclusion
10. References

Introduction
Images are a vital and integral part of everyday life. On an individual or person-to-person basis, images are used to reason, interpret, illustrate, represent, memorize, educate, communicate, evaluate, navigate, survey, entertain, and so on. We do this continuously and almost entirely without conscious effort. As man builds machines to facilitate his ever more complex lifestyle, the only reason for NOT providing them with the ability to exploit or transparently convey such images is a weakness of available technology.

Interest in image processing stems from two principal application areas:
a) Improvement of pictorial information for better human interpretation
b) Processing of scene data for autonomous machine perception

One of the first applications of image processing techniques in the first category was in improving digitized newspaper pictures sent by submarine cable between London and New York. From then until today, image processing has continuously been improving human vision. The field has grown so vigorously that it is now used to solve a variety of problems, ranging from improving vision for the space program to geographical information systems, medicine and surveillance. Geographers use the same techniques to study pollution patterns from aerial and satellite imagery. Image enhancement and restoration techniques are used to process degraded images of unrecoverable objects or experimental results too expensive to duplicate. In archaeology, image processing methods have successfully restored blurred pictures that were the only available records of rare artifacts lost or damaged after being photographed. In physics and related fields, computer techniques routinely enhance images of experiments in areas such as high-energy plasmas and electron microscopy. Similarly successful applications of image processing can be found in astronomy, biology, nuclear medicine, law enforcement, defense and industry.

Typical problems in machine perception that routinely utilize image processing techniques are automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic processing of fingerprints, screening of X-rays and blood samples, and machine processing of aerial and satellite imagery for weather prediction and crop assessment.

IMAGE
An image (from the Latin imago) is an artifact, for example a two-dimensional picture, that has a similar appearance to some subject, usually a physical object or a person. Mathematically, an image can be defined as a two-dimensional light intensity function f(x, y), where the value of f at a spatial location (x, y) is the intensity of the image at that point. A digital image is obtained by sampling and quantizing the function f(x, y). The function f(x, y) can be a measure of reflected light (photography), X-ray attenuation (radiography) or any other physical parameter. A digital image is thus an image discretized in both spatial coordinates and brightness. It can be considered a matrix whose row and column indices identify a point in the image and whose corresponding element value identifies the gray level at that point. The elements of such a digital array are called image elements, picture elements, pixels, or pels.
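To make the matrix view concrete, here is a minimal sketch in Python (using NumPy and the Pillow library as one possible toolset; the file name is only a placeholder) that loads a picture as a grayscale digital image and reads individual gray levels f(x, y):

    import numpy as np
    from PIL import Image  # Pillow, used here only to read the image file

    # The stored file already contains a sampled and quantized image;
    # loading it gives the matrix of gray levels.
    img = np.asarray(Image.open("photo.png").convert("L"))  # "photo.png" is a placeholder

    print(img.shape)    # (rows, columns): the spatial sampling grid
    print(img.dtype)    # uint8: 256 quantized gray levels (0-255)
    print(img[10, 20])  # the gray level of the pixel in row 10, column 20

Each matrix element is one pixel, and its value is the gray level at that point, exactly as in the definition above.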

IMAGE PROCESSING
In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. In short, image processing is the act of examining images for the purpose of identifying objects and judging their significance.

An image may be considered to contain sub-images sometimes referred to as regions-of-interest, ROIs, or simply regions. This concept reflects the fact that images frequently contain collections of objects each of which can be the basis for a region. In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image (region) might be processed to suppress motion blur while another part might be processed to improve color rendition.
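As a rough illustration of processing selected regions, the sketch below (Python with OpenCV, one of the software packages named later in this report; the file name and the choice of operations are illustrative, not the only possibilities) smooths one quarter of an image while raising the contrast of another:

    import cv2

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    h, w = img.shape

    # Region 1 (top-left quarter): suppress noise with a small Gaussian blur
    roi1 = img[:h // 2, :w // 2].copy()
    img[:h // 2, :w // 2] = cv2.GaussianBlur(roi1, (5, 5), 0)

    # Region 2 (bottom-right quarter): improve contrast with histogram equalization
    roi2 = img[h // 2:, w // 2:].copy()
    img[h // 2:, w // 2:] = cv2.equalizeHist(roi2)

    cv2.imwrite("scene_regions.png", img)

The same pattern, select a region, apply an operation, write the result back, extends to arbitrarily shaped regions of interest.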

WHY DO WE NEED IMAGE PROCESSING?


A) Improvement of pictorial information for human interpretation
B) Processing of scene data for autonomous machine perception

Improvement of pictorial information for human interpretation

a) Involves selection of printing procedures and distribution of brightness levels
b) Improvements in processing methods for transmitted digital pictures

Application areas include:
a) Archaeology
b) Astronomy
c) Biology
d) Industrial applications
e) Law enforcement
f) Medical imaging
g) Space program, etc.

Processing of scene data for autonomous machine perception

Focuses on procedures for extracting, from an image, information in a form suitable for computer processing. NOTE: Often this information bears little resemblance to the visual features that human beings use in interpreting the content of an image.

Application areas include:


a) Automatic optical character recognition
b) Machine vision for product assembly and inspection
c) Military reconnaissance
d) Automatic fingerprint matching, etc.

Vision and Computer Vision


Whatever human eyes see and then perceive of the world around us - VISION
To duplicate the human eye by electronically perceiving and understanding an image by any means - COMPUTER VISION

[Figure: Vision and Computer Vision]

TYPES OF IMAGE PROCESSING

Based on the mode of the techniques used, image processing can be broadly categorized into the following three types:

A) Analog Image Processing
B) Digital Image Processing
C) Optical Image Processing

ANALOG IMAGE PROCESSING
Is any image processing task conducted on two-dimensional analog signals by analog means.

DIGITAL IMAGE PROCESSING


Is the use of computer algorithms to perform image processing on digital images.

OPTICAL IMAGE PROCESSING


Is the use of optical techniques to process image for increasing clarity and extracting information from the image.

BASED ON THE TRANSFORMATIONS INVOLVED, IMAGE PROCESSING IS CLASSIFIED INTO THE FOLLOWING TYPES:

a) Image-to-image transformation b) Image-to-information transformation c) Information-to-image transformation

IMAGE TO IMAGE TRANSFORMATION


Enhancement (making the image more useful or pleasing)
Restoration (deblurring, grid and line removal)
Geometry (scaling, sizing, zooming, morphing, etc.)
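The categories above can be illustrated with a short sketch in Python using OpenCV (assumed here; the file names, kernel size and zoom factor are arbitrary illustrative choices):

    import cv2

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # placeholder input image

    enhanced = cv2.equalizeHist(img)                       # enhancement: redistribute gray levels
    smoothed = cv2.medianBlur(img, 3)                      # restoration-style noise removal (a simple stand-in)
    zoomed = cv2.resize(img, None, fx=2.0, fy=2.0,
                        interpolation=cv2.INTER_LINEAR)    # geometry: 2x zoom

    cv2.imwrite("enhanced.png", enhanced)
    cv2.imwrite("smoothed.png", smoothed)
    cv2.imwrite("zoomed.png", zoomed)

In each case the input is an image and the output is another image, which is exactly what distinguishes this class of transformation.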

IMAGE TO INFORMATION TRANSFORMATION


Image statistics (histograms)
Image compression
Image analysis (segmentation, feature extraction)
Computer-aided detection and diagnosis (CAD)
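As a small sketch of the image-to-information direction (Python with NumPy and Pillow, assumed here; the file name is a placeholder), the code below reduces an image to its gray-level histogram, a compact statistical summary rather than another picture:

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("input.png").convert("L"))   # placeholder file name

    # Count how many pixels take each of the 256 possible gray levels
    hist, _ = np.histogram(img, bins=256, range=(0, 256))

    print("most frequent gray level:", int(hist.argmax()))
    print("fraction of dark pixels :", hist[:64].sum() / img.size)

Segmentation, feature extraction and compression follow the same pattern: an image goes in, and numbers (or a shorter bit stream) come out.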

INFORMATION TO IMAGE TRANSFORMATION

Decompression of compressed image data
Reconstruction of images
Computer graphics, animation and virtual reality
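Going the other way, the sketch below (Python with OpenCV and NumPy, assumed here; the file name is a placeholder) decodes a compressed byte stream, for example the contents of a JPEG file, back into a displayable pixel matrix:

    import cv2
    import numpy as np

    # Compressed image data: here simply the raw bytes of a JPEG file
    with open("photo.jpg", "rb") as f:                     # placeholder file name
        compressed = np.frombuffer(f.read(), dtype=np.uint8)

    # Decompression/reconstruction: bytes in, pixel matrix out
    img = cv2.imdecode(compressed, cv2.IMREAD_COLOR)
    print("reconstructed image shape:", img.shape)

Image reconstruction and computer graphics rendering are more elaborate instances of the same information-to-image idea.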

STEPS INVOLVED IN IMAGE PROCESSING


Image processing encompasses a broad range of hardware, software and theoretical underpinnings.

The following are the important steps involved in image processing:

IMAGE ACQUISITION
The first step in the process is image acquisition, that is, to acquire a digital image. To do so requires the following elements:

a) Imaging sensor
b) Digitizer

The imaging sensor acquires the image and the digitizer converts that image into a digital form that a computer can understand. The imaging sensor could be a monochrome or colour TV camera that produces an entire image of the problem domain every 1/30 second. The imaging sensor could also be a line-scan camera that produces a single image line at a time; in this case the object's motion past the line scanner produces a two-dimensional image. If the output of the camera or other imaging sensor is not already in digital form, the conversion is achieved by an ADC (analog-to-digital converter). The nature of the sensor and the image it produces are determined by the application; mail-reading applications, for example, rely greatly on line-scan cameras.
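In software, the acquisition step often amounts to grabbing one already-digitized frame from a camera whose sensor and ADC are built in. A rough sketch using OpenCV (listed later in this report among common image processing software; camera index 0 is an assumption for the default device):

    import cv2

    cam = cv2.VideoCapture(0)        # open the default camera (index 0 assumed)
    ok, frame = cam.read()           # sensor + digitizer deliver one digital frame
    cam.release()

    if ok:
        cv2.imwrite("acquired.png", frame)   # store the acquired digital image
    else:
        print("No frame could be acquired from the camera.")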

PREPROCESSING
After a digital image has been acquired, the next step deals with preprocessing of that image. The key function of preprocessing is to improve the image in ways that increase the chances of success for the other processes. Typically, preprocessing deals with techniques for enhancing contrast, removing noise, and isolating regions whose texture indicates a likelihood of alphanumeric information.
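A minimal preprocessing sketch along these lines (Python with OpenCV, assumed; the kernel size and CLAHE parameters are arbitrary illustrative choices) removes noise and then boosts local contrast:

    import cv2

    img = cv2.imread("acquired.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

    # Noise suppression with a small median filter
    denoised = cv2.medianBlur(img, 3)

    # Local contrast enhancement (CLAHE); parameter values are illustrative
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(denoised)

    cv2.imwrite("preprocessed.png", enhanced)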

SEGMENTATION
The next stage deals with segmentation. Broadly defined, segmentation partitions an input image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. On the one hand, a rugged segmentation procedure brings the process a long way toward a successful solution of an imaging problem. On the other hand, erratic segmentation almost always leads to eventual failure. The output of the segmentation stage is raw pixel data, constituting either the boundary of a region or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some situations both representations coexist.
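One simple, concrete way to perform segmentation (by no means the only one) is automatic global thresholding. The sketch below (Python with OpenCV, assumed; the file name is a placeholder, and the OpenCV 4 return convention for findContours is used) splits the image into object and background pixels and then extracts each region's boundary:

    import cv2

    img = cv2.imread("preprocessed.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

    # Otsu's method picks the threshold automatically:
    # pixels above it become object (255), the rest background (0)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Boundary representation of each segmented region (OpenCV 4 returns 2 values)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print("regions found:", len(contours))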

REPRESENTATION AND DESCRIPTION


Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting features that result in some quantitative information of interest, or features that are basic for differentiating one class of objects from another.
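To make the description step concrete, the sketch below (Python with OpenCV, assumed; the binary mask file is a placeholder, and area, perimeter and bounding box are just three typical descriptors among many) computes a few quantitative features for each segmented region:

    import cv2

    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # placeholder binary segmentation result
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for i, c in enumerate(contours):
        area = cv2.contourArea(c)            # regional descriptor
        perimeter = cv2.arcLength(c, True)   # boundary descriptor
        x, y, w, h = cv2.boundingRect(c)     # coarse location/size descriptor
        print(f"region {i}: area={area:.0f}, perimeter={perimeter:.1f}, box=({x},{y},{w},{h})")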

RECOGNITION AND INTERPRETATION


The last stage involves recognition and interpretation. Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. For example, identifying a character as, say, 'c' requires associating the descriptors for that character with the label 'c'. Interpretation then attempts to assign meaning to the whole set of labeled entities.
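A deliberately tiny sketch of the recognition idea (plain Python; the descriptor values and class prototypes below are invented purely for illustration): an unknown object is given the label of the nearest stored prototype in descriptor space.

    # Hypothetical (area, perimeter) prototypes for two known object classes
    prototypes = {
        "bolt":   (120.0, 52.0),
        "washer": (300.0, 75.0),
    }

    def recognize(descriptor):
        """Return the label of the nearest prototype (minimum Euclidean distance)."""
        def dist(p):
            return sum((a - b) ** 2 for a, b in zip(descriptor, p)) ** 0.5
        return min(prototypes, key=lambda label: dist(prototypes[label]))

    print(recognize((115.0, 50.0)))   # -> "bolt"

Real recognizers apply the same principle with richer descriptors and learned decision rules; interpretation then reasons about the whole set of labels.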

KNOWLEDGE BASE
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge base may be as simple as detailing regions of an image where the information of interest is known to be located. It can also be quite complex, such as an interrelated list of all major possible defects in a materials-inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition, the knowledge base also controls the interaction between the processing modules.

COMPONENTS OF IMAGE PROCESSING


Image sensors
Image displays
Image processing software (OpenCV, MATLAB, CImg)
Image processing hardware
Memory

IMAGE SENSORS
Sensors are devices which convert illumination energy into digitized form. An image sensor is a device that converts an optical image into an electrical signal. It is used mostly in digital cameras and other imaging devices. Early sensors were video camera tubes, but a modern sensor is typically a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) active pixel sensor. The following sensors are used dominantly in image processing:

Charge-Coupled Devices (CCD)
Complementary MOS Sensors (CMOS)

CHARGE COUPLED DEVICES (CCD)

A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example converted into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins. Often the device is integrated with an image sensor, such as a photoelectric device, to produce the charge that is being read, thus making the CCD a major technology for digital imaging. Although CCDs are not the only technology to allow for light detection, they are widely used in professional, medical and scientific applications where high-quality image data are required.

COMPLEMENTARY MOS SENSORS (CMOS)

CMOS sensors, also known as active pixel sensors (APS), use integrated circuits (transistors) at each pixel that amplify and move the charge using more traditional wires.

The CMOS approach is more flexible as each pixel can be read individually.

CCD VS CMOS
Most digital still cameras use either a CCD image sensor or a CMOS sensor. Both types of sensor accomplish the same task of capturing light and converting it into electrical signals. A CCD is an analog device: when light strikes the chip, it is held as a small electrical charge in each photo sensor. The charges are converted to voltage one pixel at a time as they are read from the chip, and additional circuitry in the camera converts the voltage into digital information. A CMOS chip is a type of active pixel sensor (APS) made using the CMOS semiconductor process. Extra circuitry next to each photo sensor converts the light energy to a voltage, and additional circuitry on the chip may be included to convert the voltage to digital data. Neither technology has a clear advantage in image quality. On the one hand, CCD sensors are more susceptible to vertical smear from bright light sources when the sensor is overloaded, although high-end CCDs do not suffer from this problem. On the other hand, CMOS sensors can potentially be implemented with fewer components, use less power, and provide faster readout than CCDs. CCD is a more mature technology and is in most respects the equal of CMOS, while CMOS sensors are less expensive to manufacture than CCD sensors.

APPLICATIONS
Medicine
Defense
Meteorology
Environmental science
Manufacturing
Surveillance
Crime investigation
Script recognition
Optical character recognition
Handwritten signature verification


CONCLUSION
Using image processing techniques, we can sharpen images, adjust contrast to make a graphic display more useful, reduce the amount of memory required for storing image information, and much more. Owing to such techniques, image processing is applied in image recognition, as in factory-floor quality assurance systems; image enhancement, as in satellite reconnaissance systems; image synthesis, as in law-enforcement suspect identification systems; and image reconstruction, as in plastic surgery design systems.

REFERENCES
[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Third Edition.
[2] G. W. Awcock and R. Thomas (1996), Applied Image Processing.
[3] M. A. Sid-Ahmed (1995), Image Processing.
[4] W. K. Pratt (1978), Digital Image Processing.
[5] C. Watkins, A. Sadun and S. Marenka, Modern Image Processing.
[6] Wikipedia.
[7] Google.
