
ABSTRACT

Measuring objects within an image or frame is an important capability for many
applications where computer vision must replace physical measurement. This application
note covers a basic step-by-step algorithm for isolating a desired object and measuring
its diameter. MATLAB is a high-level language and interactive environment for numerical
computation, visualization, and programming. Image Processing Toolbox is an add-on
product for MATLAB that provides a comprehensive set of reference-standard algorithms,
functions, and apps for image processing, analysis, visualization, and algorithm
development. Together, these tools provide a fast and convenient way to process and
analyze images without advanced knowledge of a complex programming language, and they
are used here to measure the dimensions of an object within an image without making
physical measurements.

INDEX

S.NO CONTENT

1 INTRODUCTION
  1.1 LITERATURE SURVEY

2 IMAGE PROCESSING AND MEASURING IN IMAGE
  2.1 DIGITAL IMAGE
    2.1.1 PIXEL
  2.2 DIGITAL IMAGE PROCESSING
  2.3 NON-CONTACT MEASUREMENT
    2.3.1 MEASURING IN IMAGE
  2.4 IMAGE NOISE
    2.4.1 HOLES
    2.4.2 BLOB

3 SOFTWARE USED
  3.1 MATLAB
    3.1.1 IMAGE PROCESSING TOOLBOX
  3.2 BENEFITS

4 SYSTEM DEVELOPMENT
  4.1 ALGORITHM
    4.1.1 IMAGE IMPORT
    4.1.2 SEGMENTATION
    4.1.3 THRESHOLDING
    4.1.4 REMOVE NOISE
    4.1.5 MEASURING

5 MODELLING

6 RESULT

7 REFERENCE

LIST OF FIGURES

S.NO NAME OF THE FIGURE

1. PIXEL

2. ORIGINAL IMAGE

3. SEGMENTED IMAGE

4. THRESHOLDED AND HOLES FILLED IMAGE

5. NOISE REMOVAL

1. INTRODUCTION

Nowadays, non-contact distance measurement has become a very popular method. Non-contact
measurement offers the ability to assess the dimensional integrity or profile of a part
quickly without touching its surface; distance is measured without any physical contact
between the distance meter and the measured object. The importance of determining the
dimensions of objects remotely has increased, especially in the robotics and control
systems industry.

If you want to measure objects that are represented by simple shapes like circles, ellipses,
rectangles, or lines, and you have approximate knowledge about their positions,
orientations, and dimensions, you can use 2D metrology to determine the exact shape
parameters. In particular, the values of the initial shape parameters are refined by a
measurement that is based on the exact location of edges within so-called measure regions.

1.1 Measurement tasks

Measuring in images, across a broad range of different 2D measurement tasks, consists of
the extraction of specific features of objects. 2D features that are often extracted
comprise:

1. The area of an object, i.e., the number of pixels representing the object.

2. The orientation of the object.

3. The angle between objects or segments of objects.

4. The position of an object.

5. The dimension of an object, i.e., its diameter, width, height, or the distance between
objects or parts of objects.

6. The number of objects.

To extract these features, several tools are available. Which tool to choose depends on
the goal of the measuring task, the required accuracy, and the way the object is
represented in the image.
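Several of the features listed above can be computed directly from a binary mask. The
following is a minimal, hypothetical sketch in pure Python (the project code itself is
MATLAB); it only assumes the mask is stored as a list of rows, with 1 marking object
pixels and 0 marking background:

```python
# Compute simple 2D features (area, bounding-box width/height, centroid)
# from a binary mask stored as a list of rows.
def measure_features(mask):
    area = 0
    rows, cols = [], []
    for r, row in enumerate(mask):
        for c, v in enumerate(row):
            if v:
                area += 1
                rows.append(r)
                cols.append(c)
    if area == 0:
        return None                       # no object present
    height = max(rows) - min(rows) + 1    # vertical extent in pixels
    width = max(cols) - min(cols) + 1     # horizontal extent in pixels
    centroid = (sum(rows) / area, sum(cols) / area)  # object position
    return {"area": area, "width": width, "height": height, "centroid": centroid}

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(measure_features(mask))
# -> {'area': 4, 'width': 2, 'height': 2, 'centroid': (1.5, 1.5)}
```

Counting objects and measuring angles between them require the image to be split into
connected components first, as discussed in Section 1.3.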

1.2 Preprocess Image or Region

Preprocessing is recommended if the conditions during image acquisition are not ideal,
e.g., the image is noisy or cluttered, or the object is disturbed or overlapped by objects
of small extent, so that small spots or thin lines prevent the actual object of interest
from being described by a homogeneous region. Often-applied preprocessing steps comprise
the elimination of noise using a mean or binomial filter and the suppression of small
spots or thin lines with a median filter. Smoothing of the image can be realized with a
smoothing function; if you want to smooth the image while preserving edges, you can apply
anisotropic diffusion instead. For regions, holes can be filled up using a fill-up or
morphological operator. Morphological operators modify regions to suppress small areas,
regions of a given orientation, or regions that are close to other regions. When the
background is inhomogeneous, a shading correction is suitable to compensate for its
influence: a reference image of the background without the object to measure is taken and
subtracted from the images containing the object.
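The shading-correction step described above amounts to a per-pixel subtraction. A
hypothetical pure-Python sketch (all values below are illustrative assumptions; 8-bit
grayscale images are stored as lists of rows):

```python
# Shading correction: subtract a background reference image from the image
# containing the object, compensating for inhomogeneous illumination.
def shading_correction(image, background):
    corrected = []
    for img_row, bg_row in zip(image, background):
        # Clamp at 0 so the result stays a valid intensity value.
        corrected.append([max(p - b, 0) for p, b in zip(img_row, bg_row)])
    return corrected

image      = [[120, 130], [200, 60]]   # object over an uneven background
background = [[100, 110], [100, 110]]  # reference shot without the object
print(shading_correction(image, background))  # -> [[20, 20], [100, 0]]
```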

1.3 Segment the Image into Regions


After preprocessing, the image must be segmented into suitable regions that represent the
objects of interest. Several kinds of threshold operators are available that segment a
gray-value image, or a single channel of a multichannel image, according to its gray-value
distribution. When choosing a threshold manually, it may be helpful to get information
about the gray-value distribution of the image. To split the resulting region into several
regions, i.e., one region for every connected component, a connection operator must be
applied. In many cases, a modification of the regions is necessary; for example, small
gaps or small connections can be eliminated by morphological operators.
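The connection step above, splitting a thresholded region into one region per connected
component, can be sketched in pure Python with a simple stack-based flood fill. This is a
hypothetical illustration using 4-connectivity, not the toolbox implementation:

```python
# Label the connected components of a binary image (list of rows of 0/1).
# Returns the component count and a label image of the same shape.
def label_components(binary):
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sr in range(h):
        for sc in range(w):
            if binary[sr][sc] and not labels[sr][sc]:
                current += 1               # start a new component
                stack = [(sr, sc)]
                labels[sr][sc] = current
                while stack:               # flood-fill this component
                    r, c = stack.pop()
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if (0 <= nr < h and 0 <= nc < w
                                and binary[nr][nc] and not labels[nr][nc]):
                            labels[nr][nc] = current
                            stack.append((nr, nc))
    return current, labels

binary = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
count, labels = label_components(binary)
print(count)  # -> 3 (two blobs and one isolated pixel)
```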

1.4 Literature Survey

1.4.1 Traditional system

The most common approach, which you have probably noticed in crime scene photos, is to
place an object of known size, such as a pencil or, even better, a ruler, next to the
object of interest. From this it is very easy to estimate the size of the object of
interest.

As long as you know the focal length and the object distance, both of which some lenses
report to the camera, you can calculate the real size of an item in the image. The focal
length gives you the lens field of view in degrees.

For example, a 100 mm lens has a vertical field of view of 14 degrees on a full-frame
camera. Something that fills half your frame vertically therefore spans 7 degrees of the
field of view. If you know the distance, you can now build a right-angled triangle with
internal angles of 7, 90, and 83 degrees (because the internal angles of a triangle add
up to 180 degrees). At this point you have enough information to use the law of tangents
(you need two angles and a length, or two lengths and an angle) to calculate the
remaining sides of the triangle and hence the size of the object.

Another method would be to use some clever mathematics based on whatever information you
can gather, such as the focal length of a camera or the angle of your camera's view
frustum.
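The triangle construction above can be sketched as a short calculation. For an object
centred on the optical axis that spans a known angle of the field of view at a known
distance, its size is 2 * distance * tan(angle / 2). The numbers below (10 m distance,
7-degree span) are illustrative assumptions:

```python
# Estimate real-world object size from the angle it spans in the lens's
# field of view and its distance from the camera.
import math

def object_size(distance_m, spanned_angle_deg):
    # Split the spanned angle into two right triangles about the optical axis.
    return 2 * distance_m * math.tan(math.radians(spanned_angle_deg) / 2)

# An object filling half the vertical frame of a 100 mm lens (14-degree
# vertical field of view on full frame) spans about 7 degrees.
size = object_size(10.0, 7.0)   # object 10 m away
print(round(size, 3))           # roughly 1.22 m tall
```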

1.4.2 Detailed study of some research and journal papers

Image segmentation algorithm research and improvement (IEEE) [1]: this paper proposes
some improvements to the traditional algorithms used in image segmentation.

Rajagopalan and Chaudhuri [2] divided the image into many sub-images and superimposed the
sub-images by using a block shift-variant blur model.

Thresholding: A Pixel-Level Image Processing by H. K. Anasuya Devi [3]: this paper
studies the methodology employed for preprocessing archaeological images.

Supervised Classification Methods for object identification, IJESMR journal [4]: due to
the increasing spatiotemporal dimensions of remote sensing data, traditional
classification algorithms have exposed weaknesses, necessitating further research. An
efficient classifier is needed to classify Google map imagery and extract information.
This paper compares different classification methods and their performances.

Algorithms for image Thresholding by Mohmad A. El-Sayed [5]: this paper describes various
thresholding algorithms, such as spatial thresholding and improved adaptive thresholding.

Image denoising using wavelet thresholding by Kaur [6]: this paper proposes an adaptive
threshold estimation method for image denoising in the wavelet domain, based on
generalized Gaussian distribution (GGD) modelling of subband coefficients.

R. Thilepa and M. Thanikachalam (2010) [7] describe an image processing technique using
MATLAB that identifies faults present in fabrics. First an image is taken; then noise
filtering, histogram, and thresholding techniques are applied to the image to obtain the
output. The steps are: the colour image of the fabric fault is input to MATLAB; the
colour image is converted to a gray image; noise removal and filtering are performed; the
noise-removed output is converted to a binary image; the histogram output is obtained;
and finally the thresholding technique is applied.

Ching Yee Yong et al. (2012) [8] note that MATLAB is among the best-known software
packages used in image processing. The paper first introduces a general view of the
visualization tools in medical image processing, then states the objectives of the study,
and then discusses the background of the study, the literature review, and the study
implementation. The computer environment, development tools, and the processing and
analysis of various medical images are then discussed, followed by the results,
conclusions, future developments, and possible enhancements and improvements to the study.

Sachin V. Bhalerao and Dr. A. N. Pawar (2012) [9] developed a systematic method for
automatic interpretation of thermal paints. First, surface preparation is done and an
image is captured. Then image acquisition, image segmentation, and image processing using
a proper algorithm for surface temperature interpretation are performed; this stage
consists of three further stages: image filtration and enhancement, boundary detection,
and interpretation.

Anita Chaudhary and Sonit Sukhraj Singh (2012) [10] describe three stages in their study:
a preprocessing stage, a feature extraction stage, and a lung cancer cell identification
stage. The watershed segmentation method is used to separate the lung in a CT image, and
then a small scanning window is applied to check whether any pixel is part of a disease
lump. Most of the lumps can be detected if the parameters are carefully selected; the
main purpose of the study is to computerize these selections.

Ms. Jyoti Atwal and Mr. Satyajit Sen Purkayastha (2012) [11] developed a digital image
processing technique that has been used to find different types of characteristics to
identify different rice varieties. Applications based on image processing perform
hard-core processing techniques such as the HSI model for morphological property analysis
and raster scanning for dimensional analysis. The morphological features were extracted
and processed by linear discriminant analysis to improve the efficiency of the
identification process. In this paper they work on the physical separation and nutrient
content of seeds using different methods such as erosion and dilation, the watershed
model, and the line-draw method.

M. S. Mallikarjuna Swamy and Mallikarjun S. Holi (2012) [12] state that magnetic
resonance imaging (MRI) is generally used to image the knee joint because it gives
risk-free and high-resolution results. Many algorithms are available for knee joint image
segmentation; they are classified into pixel-based and model-based methods. Segmentation
methods are also classified as manual, semi-automatic, and fully automatic methods.

Dr. N. Senthilkumaran (2012) [13] presents the theory of edge detection for dental X-ray
image segmentation using a neural network approach. Neural networks have been applied to
edge detection based on their adaptive learning ability and nonlinear mapping ability;
once fully trained, they can detect edges and serve as nonlinear filters. The same author
also studies an edge detection method for dental X-ray image segmentation based on a
genetic algorithm approach. It first selects a random point, which divides the 2D array
into four parts, and then exchanges the genetic material in two of the four parts. The
mutation operation is used, just like the traditional one: random genes are selected, and
then the bit is toggled.

2. IMAGE PROCESSING AND MEASURING IN IMAGE

2.1 Digital Image

A digital image is a numeric representation, normally binary, of a two-dimensional image.


Depending on whether the image resolution is fixed, it may be of vector or raster type. By
itself, the term "digital image" usually refers to raster images or bitmapped images.

2.1.1 Pixel

In digital imaging, a pixel, pel, or picture element is a physical point in a raster
image, or the smallest addressable element in an all-points-addressable display device;
it is thus the smallest controllable element of a picture represented on the screen.

Fig 1. Pixels

Each pixel is a sample of an original image; more samples typically provide more accurate
representations of the original. The intensity of each pixel is variable. In color imaging
systems, a color is typically represented by three or four component intensities such as red,
green, and blue, or cyan, magenta, yellow, and black.

2.2 Digital Image Processing

Digital image processing is the use of computer algorithms to perform image processing on
digital images. As a subcategory or field of digital signal processing, digital image processing
has many advantages over analog image processing. It allows a much wider range of
algorithms to be applied to the input data and can avoid problems such as the build-up of
noise and signal distortion during processing. Since images are defined over two dimensions
(perhaps more) digital image processing may be modeled in the form of multidimensional
systems. The generation and development of digital image processing are mainly affected by
three factors: first, the development of computers; second, the development of mathematics
(especially the creation and improvement of discrete mathematics theory); third, the demand
for a wide range of applications in environment, agriculture, military, industry and medical
science has increased. Digital image processing allows the use of much more complex
algorithms, and hence, can offer both more sophisticated performance at simple tasks, and the
implementation of methods which would be impossible by analog.

In particular, digital image processing is the only practical technology for tasks such
as the following.

2.2.1 Image editing

Image editing encompasses the processes of altering images, whether they are digital
photographs, traditional photo-chemical photographs, or illustrations. Traditional analog
image editing is known as photo retouching, using tools such as an airbrush to modify
photographs, or editing illustrations with any traditional art medium. Graphic software
programs, which can be broadly grouped into vector graphics editors, raster graphics editors,
and 3D modelers, are the primary tools with which a user may manipulate, enhance, and
transform images. Many image editing programs are also used to render or create computer
art from scratch.

2.2.2 Image restoration

Image restoration is the operation of taking a corrupt/noisy image and estimating the
clean, original image. Corruption may come in many forms, such as motion blur, noise, and
camera mis-focus. Image restoration is performed by reversing the process that blurred
the image; this is done by imaging a point source and using the point source image, which
is called the Point Spread Function (PSF), to restore the image information lost to the
blurring process.

2.2.3 Neural networks

Artificial neural networks (ANN) or connectionist systems are computing systems that are
inspired by, but not identical to, biological neural networks that constitute animal brains.
Such systems "learn" to perform tasks by considering examples, generally without being
programmed with task-specific rules. For example, in image recognition, they might learn to
identify images that contain cats by analyzing example images that have been manually
labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this
without any prior knowledge of cats, for example, that they have fur, tails, whiskers and cat-
like faces. Instead, they automatically generate identifying characteristics from the examples
that they process.

2.2.4 Pattern recognition

Pattern recognition is the automated recognition of patterns and regularities in data. Pattern
recognition is closely related to artificial intelligence and machine learning, together with
applications such as data mining and knowledge discovery in databases (KDD), and is often
used interchangeably with these terms. However, these are distinguished: machine learning is
one approach to pattern recognition, while other approaches include hand-crafted (not
learned) rules or heuristics; and pattern recognition is one approach to artificial intelligence,
while other approaches include symbolic artificial intelligence.

2.2.5 Measuring dimensions

Using images such as X-ray images, we can detect the length of tumors and cracks, and
also find the dimensions of objects.

2.2.6 Filtering

Digital filters are used to blur and sharpen digital images. Filtering can be performed
by convolution with specifically designed kernels (filter arrays) in the spatial domain,
or by masking specific frequency regions in the frequency (Fourier) domain.
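Spatial-domain filtering by convolution can be sketched in a few lines. The following
hypothetical pure-Python example convolves a grayscale image (stored as a list of rows)
with a standard 3x3 sharpening kernel; border pixels are left unchanged for simplicity:

```python
# Convolve a grayscale image with a 3x3 kernel (spatial-domain filtering).
def convolve3x3(image, kernel):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]          # borders copied through
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            acc = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    acc += kernel[dr + 1][dc + 1] * image[r + dr][c + dc]
            out[r][c] = acc
    return out

sharpen = [[0, -1, 0],
           [-1, 5, -1],
           [0, -1, 0]]                       # boosts the centre pixel
image = [
    [10, 10, 10],
    [10, 20, 10],
    [10, 10, 10],
]
print(convolve3x3(image, sharpen))  # centre: 5*20 - 4*10 = 60
```

Swapping the kernel for an averaging (box-blur) kernel turns the same routine into a
smoothing filter.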

2.3 Non-contact measurement

Non-contact measurement means distance measurement without any physical contact between
the distance meter and the measured object. The importance of determining the dimensions
of objects remotely has increased, especially in the robotics and control systems
industry. Measuring objects within an image or frame can be an important capability for
many applications where computer vision is required instead of making physical
measurements.

2.3.1 Measuring in Image

A digital image is a string of numbers, displayed in a rectangular array according to a
lookup table. An image has three dimensions: width, height, and bit depth. The power of
image processing is its ability to make measurements in these dimensions:

(i) Spatial measurements: measurements of distance, area, and volume. These involve the
first two dimensions of the image, its width and height.

(ii) Density measurements: measurements involving the third dimension, the pixel values.
Pixel values can represent temperature, elevation, salinity, population density, or
virtually any phenomenon you can quantify.

Before you can make meaningful measurements, you need to calibrate the image, that is,
"tell" the software what a pixel represents in real-world terms of size or distance
(spatial calibration), in terms of what the pixel values mean (density calibration), or
both. In this section, you will learn how to spatially calibrate digital images.
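Spatial calibration reduces to establishing a scale factor from a reference of known
size. A hypothetical pure-Python sketch (all numbers below are illustrative assumptions,
e.g. a 50 mm ruler visible in the scene):

```python
# Spatial calibration: derive millimetres-per-pixel from a known reference,
# then convert any pixel measurement to real-world units.
def calibrate(known_length_mm, known_length_px):
    """Return millimetres per pixel."""
    return known_length_mm / known_length_px

def to_real_units(length_px, mm_per_px):
    return length_px * mm_per_px

scale = calibrate(50.0, 200.0)            # a 50 mm ruler spans 200 pixels
diameter_px = 88.0                        # measured diameter of the object
print(to_real_units(diameter_px, scale))  # -> 22.0 (mm)
```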

2.4 Image noise

Image noise is random variation of brightness or color information in images, and is usually
an aspect of electronic noise. It can be produced by the sensor and circuitry of a scanner or
digital camera. Image noise can also originate in film grain and in the unavoidable shot noise
of an ideal photon detector. Image noise is an undesirable by-product of image capture that
obscures the desired information.

2.4.1 Holes

Holes arise in a binary image when, for example, the boundary of the required object is
dark and distinct from its white background. Simple image thresholding is used to
separate the boundary from the background: pixels with intensities above a certain value
(the threshold) are labelled background and the rest foreground, with black representing
background and white representing foreground. Unfortunately, even though the boundary is
nicely extracted (it is solid white), the interior of the object has intensities similar
to the background and is therefore left as a hole.

2.4.2 BLOB

A Binary Large Object (BLOB) is a collection of binary data stored as a single entity in
a database management system. Blobs are typically images, audio, or other multimedia
objects, though sometimes binary executable code is stored as a blob. A blob is a data
type that can store binary data; this differs from most other data types used in
databases, such as integers, floating-point numbers, characters, and strings, which store
letters and numbers. Since blobs can store binary data, they can be used to store images
or other multimedia files. For example, a photo album could be stored in a database using
a blob data type for the images and a string data type for the captions.

Because blobs are used to store objects such as images, audio files, and video clips, they
often require significantly more space than other data types. The amount of data a blob can
store varies depending on the database type, but some databases allow blob sizes of several
gigabytes.

3. SOFTWARE USED

3.1 MATLAB software

MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and


proprietary programming language developed by MathWorks. MATLAB allows matrix
manipulations, plotting of functions and data, implementation of algorithms, creation of user
interfaces, and interfacing with programs written in other languages.

MATLAB was first adopted by researchers and practitioners in control engineering, Little's
specialty, but quickly spread to many other domains. It is now also used in education, in
particular the teaching of linear algebra and numerical analysis, and is popular amongst
scientists involved in image processing.

MATLAB also supports object-oriented programming, including classes, inheritance, virtual
dispatch, packages, pass-by-value semantics, and pass-by-reference semantics. However,
the syntax and calling conventions are significantly different from other languages.
MATLAB has value classes and reference classes, depending on whether the class has handle
as a superclass (for reference classes) or not (for value classes).

3.1.1 Image Processing Toolbox

Image Processing Toolbox provides a comprehensive set of reference-standard algorithms


and workflow apps for image processing, analysis, visualization, and algorithm development.
You can perform image segmentation, image enhancement, noise reduction, geometric
transformations, and image registration using deep learning and traditional image processing
techniques. The toolbox supports processing of 2D, 3D, and arbitrarily large images.

Image Processing Toolbox apps let you automate common image processing workflows. You
can interactively segment image data, compare image registration techniques, and batch-
process large datasets. Visualization functions and apps let you explore images, 3D volumes,
and videos; adjust contrast; create histograms; and manipulate regions of interest.
Image analysis is the process of extracting meaningful information from images such as
finding shapes, counting objects, identifying colors, or measuring object properties. The

toolbox provides a comprehensive suite of reference-standard algorithms and visualization
functions for image analysis tasks such as statistical analysis and property measurement.

3.1.2 Color Thresholder

The Color Thresholder app lets you threshold color images by manipulating the color
components of these images, based on different color spaces. Using this app, you can create a
segmentation mask for a color image.

1.Image Segmentation Using the Color Thresholder App

This example shows how to segment an image based on regions with similar color. You can
display the image in different color spaces to differentiate objects in the image.

2. Acquire Live Images in the Color Thresholder App

You can perform color thresholding on an image acquired from a live USB webcam.

3. Image Segmentation Using Point Clouds in the Color Thresholder App

Use point cloud control to segment an image by selecting a range of colors belonging to the
object to isolate.

3.2 Benefits of MATLAB


 Accelerated image processing
 Performance improvement in image segmentation
 Displaying large images
 Block processing for large images
 Parallel computing
 Wide range of functions

4. SYSTEM DEVELOPMENT

4.1 ALGORITHM

The algorithm consists of the following steps:

1. Import image (input)

2. Segmentation

   (i) Red plane segmentation

   (ii) Green plane segmentation

   (iii) Blue plane segmentation

3. Thresholding

4. Remove noise

5. Measuring image

Coming to the details of the algorithm:

4.1.1 Import image

Capture the desired image and give it as input to the code.

4.1.2 Segmentation

In image processing, image segmentation is the process of partitioning a digital image into
multiple segments (sets of pixels, also known as image objects). The goal of segmentation is
to simplify and/or change the representation of an image into something that is more
meaningful and easier to analyze. Image segmentation is typically used to locate objects
and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the
process of assigning a label to every pixel in an image such that pixels with the same label
share certain characteristics.

The result of image segmentation is a set of segments that collectively cover the entire
image, or a set of contours extracted from the image (see edge detection). Each of the
pixels in a region is similar with respect to some characteristic or computed property,
such as color, intensity, or texture, while adjacent regions are significantly different
with respect to the same characteristic(s). When applied to a stack of images, as is
typical in medical imaging, the contours resulting from image segmentation can be used to
create 3D reconstructions with the help of interpolation algorithms like marching cubes.

4.1.3 Thresholding
The simplest method of image segmentation is called the thresholding method. This method
is based on a clip-level (or a threshold value) to turn a gray-scale image into a binary image.

The key to this method is the selection of the threshold value (or values, when multiple
levels are selected). Several popular methods are used in industry, including the maximum
entropy method, balanced histogram thresholding, Otsu's method (maximum variance), and
k-means clustering.

Recently, methods have been developed for thresholding computed tomography (CT) images.
The key idea is that, unlike Otsu's method, the thresholds are derived from the radiographs
instead of the (reconstructed) image.

New methods have suggested the use of multi-dimensional fuzzy rule-based non-linear
thresholds. In these works, the decision over each pixel's membership of a segment is
based on multi-dimensional rules derived from fuzzy logic and evolutionary algorithms,
taking the image lighting environment and application into account.
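Otsu's method, mentioned above, chooses the threshold that maximises the between-class
variance of the gray-level histogram. A hypothetical pure-Python sketch (the image is
assumed to be a flat list of 8-bit gray values):

```python
# Otsu's method: pick the threshold t that maximises the between-class
# variance w0 * w1 * (mean0 - mean1)^2 over the gray-level histogram.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    best_t, best_var = 0, -1.0
    w0 = 0          # background pixel count so far
    sum0 = 0.0      # background intensity sum so far
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mean0 = sum0 / w0
        mean1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mean0 - mean1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two clearly separated intensity clusters: the threshold lands between them.
pixels = [10] * 50 + [12] * 50 + [200] * 40 + [210] * 40
print(otsu_threshold(pixels))  # -> 12
```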

4.1.4 Remove noise


There is usually quite a bit of noise, and the image must be cleaned up significantly to
improve the accuracy of the diameter measurement. Noise can be removed using filtering
techniques; MATLAB provides built-in functions that apply such filters.
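One common noise-removal filter is the median filter, which is effective against
salt-and-pepper noise. A hypothetical pure-Python sketch of a 3x3 median filter (the
image is a list of rows; border pixels are left unchanged for simplicity):

```python
# 3x3 median filter: replace each interior pixel with the median of its
# 3x3 neighbourhood, suppressing isolated noise spikes.
def median_filter3x3(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(
                image[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
            )
            out[r][c] = window[4]   # median of the nine values
    return out

noisy = [
    [10, 10, 10],
    [10, 255, 10],   # a single salt-noise pixel
    [10, 10, 10],
]
print(median_filter3x3(noisy))  # the spike is replaced by the median, 10
```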

4.1.5 Measuring image


The diameter of the object in the image is measured after completing the above steps.

5. MODELLING

Here we implement the algorithm as MATLAB code that can be used to determine the diameter
of an object within an image.

The implementation is as follows.

5.1 Import image

To import an image, we use the imread function, which reads an image and converts it into
a three-dimensional matrix in the RGB color space, of size rows x columns x 3; the final
dimension corresponds to the red, green, and blue intensity levels. Use imshow to view
the image in a new window.

Fig 2. Original image

Code:
clear;
clc;
obj = imread('filename.jpg');   % replace with your image file
imshow(obj);

5.2 Segment Image based on intensities
We divide the image into its respective RGB intensity planes.

Fig 3. Segmented image

Code:
red = obj(:, :, 1);
green = obj(:, :, 2);
blue = obj(:, :, 3);
figure(1)
subplot(2,2,1);
imshow(obj);
title('Original image');
subplot(2,2,2);
imshow(red);
title('Red plane');
subplot(2,2,3);
imshow(green);
title('Green plane');
subplot(2,2,4);
imshow(blue);
title('Blue plane');

5.3 Thresholding

The blue plane is the best choice for image thresholding because it provides the most
contrast between the desired object (foreground) and the background. Image thresholding
takes an intensity image and converts it into a binary image based on the desired level:
a value between 0 and 1 determines which pixels, based on their value, will be set to 1
(white) or 0 (black). To choose the value best suited to your application, right-click on
the value, select "Increment Value and Run Section" at the top of the menu, set the
increment value to 0.01, and choose the best value at which to threshold. Figure 4 shows
the result of thresholding at 0.37; the image has been segmented between the object we
desire to measure and the background.

Fig 4. Thresholded and holes-filled images

Code:

figure(2);
level = 0.37;
bw2 = im2bw(blue, level);
subplot(2,2,1);
imshow(bw2);
title('Blue plane threshold');

5.4 Noise removal


Blobs, in this document, are any collection of touching white pixels that form a
cohesive, distinct object. We use the imfill function to fill the holes in the image, the
imclearborder function to clear blobs touching the image border, and strel together with
imopen to remove blobs smaller than a 7-pixel disk.

Fig 5. Image after noise removal

Code:
fill = imfill(bw2, 'holes');
subplot(2,2,2);
imshow(fill);
title('Holes filled');
clear = imclearborder(fill);
subplot(2,2,3);
imshow(clear);
title('Remove blobs on border');
se = strel('disk', 7);
open = imopen(clear, se);
subplot(2,2,4);
imshow(open);
title('Remove small blobs')

5.5 Measuring the image


The regionprops function is the tool that provides the major axis length of the blob in
the image. The diameter is displayed in the Command Window.

Code:
stats = regionprops(open, 'MajorAxisLength');
diameter = stats.MajorAxisLength
figure(3)
imshow(obj)
d = imdistline;

6. RESULT

The diameter is displayed in the Command Window as a value in pixels. This was verified
by using the imdistline function: as the two figures show, the value calculated by the
code was very close to the manual measurement. We learned how to obtain accurate
measurements within an image using MATLAB and the Image Processing Toolbox. This gave us
a better understanding of the principle of the system and practice with the different
measuring functions for a variety of objects. The main lesson we gained from this project
is that, when it comes to measurements, it is best to make them as precise as possible,
since this is an essential skill that will be required more often as we progress in the
field of engineering.

7.REFERENCE

[1] Kenneth R. Castleman, Digital Image Processing, Prentice-Hall, 1996. ISBN
0-13-211467-4.

[2] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Second Ed.,
Prentice-Hall, 2001. ISBN 0-20-118075-8.

[3] Bernd Jahne, Digital Image Processing, Fourth Ed., Springer-Verlag, 1997. ISBN
3-540-62724-3.

[4] Anil K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, 1989. ISBN
0-13-336165-9.

[5] Wayne Niblack, An Introduction to Digital Image Processing, Prentice-Hall
International, 1985. ISBN 0-13-480674-3.

[6] William Pratt, Digital Image Processing, Third Ed., Wiley-Interscience, 2001. ISBN
0-471-37407-5.

