
Chapter 1

INTRODUCTION

As the speed, capability, and economic advantages of modern signal processing devices continue to increase, there is a parallel increase in efforts aimed at developing sophisticated, real-time automatic systems capable of emulating human abilities. Image Processing is concerned with acquiring and processing an image. In simple words, an image is a representation of a real scene, either in black and white or in colour, and either in print form or in digital form; technically, an image is a two-dimensional light intensity function. There is a growing demand for Image Processing in diverse application areas, such as multimedia computing, secured image data communication, biomedical imaging, biometrics, remote sensing, pattern recognition, texture understanding, compression and so on.

1.1 OVERVIEW OF AN IMAGE

Figure 1.1: 2D view of an Image


An image is a rectangular grid of pixels. It has a definite height and a definite width counted in pixels. It is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows. Each pixel is square and has a fixed size on a given display; however, different computer monitors may use different sized pixels. Each pixel has a colour. The colour is a 24-bit integer: the first eight bits determine the redness of the pixel, the next eight bits the greenness, and the last eight bits the blueness.
Pure white is 255 red, 255 green, 255 blue (255 255 255). Pure red is 255 red, 0 green, 0 blue (255 0 0).

Figure 1.2: Representation of an image pixel
The RGB colour model relates very closely to the way we perceive colour with the r, g and b receptors in our retinas. RGB uses additive colour mixing and is the basic colour model used in television or any other medium that projects colour with light. It is the basic colour model used in computers and for web graphics, but it cannot be used for print production. The secondary colours of RGB (cyan, magenta, and yellow) are formed by mixing two of the primary colours (red, green or blue) and excluding the third colour: red and green combine to make yellow, green and blue make cyan, and blue and red form magenta. The combination of red, green, and blue at full intensity makes white. In Photoshop, using the screen mode for the different layers of an image makes the intensities mix together according to the additive colour mixing model; this is analogous to stacking slide images on top of each other and shining light through them.
Each of these values can be interpreted as an unsigned byte between 0 and 255. Within a colour channel, higher numbers are brighter; thus a red of 0 is no red at all, while a red of 255 is a very bright red. In an 8-bit grayscale image each picture element has an assigned intensity that ranges from 0 to 255. A grayscale image is what people normally call a black and white image, but the name emphasizes that such an image also includes many shades of grey.
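As a small illustration of this 24-bit representation (an illustrative sketch, not part of the project code; the red-in-the-high-byte ordering is just one common convention), the following C fragment packs three 8-bit channel values into a single integer and extracts them again:

#include <stdio.h>

int main(void)
{
    unsigned char r = 255, g = 200, b = 0;            /* one byte per channel, 0-255   */
    unsigned long colour = ((unsigned long)r << 16)   /* first eight bits: red         */
                         | ((unsigned long)g << 8)    /* next eight bits: green        */
                         |  (unsigned long)b;         /* last eight bits: blue         */

    /* unpack the channels again by shifting and masking */
    unsigned char red   = (unsigned char)((colour >> 16) & 0xFF);
    unsigned char green = (unsigned char)((colour >> 8)  & 0xFF);
    unsigned char blue  = (unsigned char)( colour        & 0xFF);

    printf("colour = %06lX  R=%u G=%u B=%u\n", colour, red, green, blue);
    return 0;
}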

Figure 1.3: RGB Color Model


There are two general groups of 'images': vector graphics (or line art) and bitmaps (pixel-based 'images'). Some of the most common file formats are:

GIF - an 8-bit (256 colour), non-destructively compressed bitmap format. Mostly used for the web. Has several sub-standards, one of which is the animated GIF.

BMP - also known as a "bump" file, it is the native, bitmapped graphics format in Windows. A BMP can be saved in several color options: 1-, 4-, 8- and 24-bit color provide 2, 16, 256 and 16,000,000 colors respectively. BMP files use the .BMP or .DIB file extensions.

JPEG - a very efficient (i.e. much information per byte) destructively compressed 24-bit (16 million colors) bitmap format. Widely used, especially for the web and the Internet (bandwidth-limited).

TIFF - the standard 24-bit publication bitmap format. Compresses non-destructively with, for instance, Lempel-Ziv-Welch (LZW) compression.

PS - PostScript, a standard vector format. Has numerous sub-standards and can be difficult to transport across platforms and operating systems.

PSD - a dedicated Photoshop format that keeps all the information in an image, including all the layers.

1.2 INTRODUCTION TO IMAGE PROCESSING


Image processing techniques were first developed in the 1960s through the collaboration of a wide range of scientists and academics. The main focus of their work was to develop medical imaging, character recognition and the creation of high quality images at the microscopic level. During this period, equipment and processing costs were prohibitively high. Modern image processing tends to refer to the digital domain, where the color of each pixel is specified by a string of binary digits.

Figure 1.4: An Example of Processed Image


Image processing involves treating a two-dimensional image as the input of a system
and outputting a modified image or a set of defining parameters related to the image.
There are many transformations and techniques, usually derived from the field of signal
processing. There are standard geometric transformations such as enlargement, size
reduction, linear translation and rotation. It is possible to modify the colors in images
such as enhancing contrasts or even transforming the image into an entirely different
color palette according to some specific mapping system. Compositions of images are
frequently conducted to merge portions from multiple images. Basically, images
retrieved in some contexts are sparse with missing pixels. Standard techniques involve
simply estimating the missing pixels based on the color of the nearest known pixels.
More sophisticated techniques may involve using algorithms to judge the missing pixels
usually by factoring in the relative colors of all surrounding pixels. Techniques to align
images are also quite straightforward. Segmentation tends to involve decomposing
images into smaller sections based on some common quality such as color or light
intensity. It is possible to extend the dynamic range of photos by combining images that
have variation in light exposure. Some of the most sophisticated techniques include morphology. The Holy Grail of image processing tends to be object recognition, where software is trained to recognize and categorize the parts of an image based on colors and outlines.
1.2.1 STAGES IN IMAGE PROCESSING
Image Processing techniques are used to enhance, improve, or otherwise alter an
image and to prepare it for image analysis. Usually, during image processing
information is not extracted from the image. The intention is to remove faults, trivial
information, or information that may be important, but not useful, and to improve the
image. Image processing is divided into many sub-processes, including Histogram Analysis, Thresholding, Masking, Edge Detection, Segmentation, and others.

Figure 1.5: Stages of Image Processing


1. Image Acquisition: An image is captured by a sensor (such as a monochrome or color TV camera) and digitized. If the output of the camera or sensor is not already in digital form, an analog-to-digital converter digitizes it.
2. Recognition and Interpretation: Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation is assigning meaning to an ensemble of recognized objects.
3. Segmentation: Segmentation is the generic name for a number of different techniques that divide the image into its constituent segments. The purpose of segmentation is to separate the information contained in the image into smaller entities that can be used for other purposes.
4. Representation and Description: Representation and Description transform raw data into a form suitable for the Recognition processing.
5. Knowledge base: A knowledge base details the regions of an image where the information of interest is known to be located. It helps to limit the search.
Thresholding is the process of dividing an image into different portions by picking a certain grayness level as a threshold, comparing each pixel value with the threshold, and then assigning the pixel to one of the portions depending on whether the pixel's grayness level is below or above the threshold value. Thresholding can be performed either at a single level or at multiple levels, in which case the image is processed by dividing it into layers, each with a selected threshold. Various techniques are available to choose an appropriate threshold, ranging from simple routines for binary images to sophisticated techniques for complicated images.
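A minimal sketch of single-level thresholding on an 8-bit grayscale buffer is shown below; the buffer layout, its size and the threshold value are assumptions made purely for illustration:

#include <stddef.h>

/* Set every pixel below the threshold to 0 (black) and every other
   pixel to 255 (white), producing a two-level (binary) image.       */
void threshold(unsigned char *pixels, size_t count, unsigned char level)
{
    size_t i;
    for (i = 0; i < count; i++)
        pixels[i] = (pixels[i] < level) ? 0 : 255;
}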
Connectivity: Sometimes we need to decide whether neighboring pixels are somehow
connected or related to each other. Connectivity establishes whether they have the same
property, such as being of the same region, coming from the same object, having a similar
texture, etc. To establish the connectivity of neighboring pixels, we first have to decide
upon a connectivity path.
Noise Reduction: Like other signal processing media, vision systems contain noise. Some noise is systematic and comes from dirty lenses, faulty electronic components, bad memory chips and low resolution. Other noise is random and is caused by environmental effects or bad lighting. The net effect is a corrupted image that needs to be preprocessed to reduce or eliminate the noise. In addition, sometimes images are not of good quality, due to both hardware and software inadequacies; thus, they have to be enhanced and improved before other analysis can be performed on them.
Convolution Masks: A mask may be used for many different purposes, including filtering operations and noise reduction. Noise and edges produce higher frequencies in the spectrum of a signal. It is possible to create masks that behave like a low-pass filter, such that the higher frequencies of an image are attenuated while the lower frequencies are not changed very much; thereby the noise is reduced.
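For example, a 3x3 averaging mask is one simple low-pass filter of the kind described above. The sketch below (a generic illustration, not the project's code) applies it to an 8-bit grayscale image stored row by row, leaving the one-pixel border unchanged:

/* Apply a 3x3 averaging (box) mask to a w x h grayscale image.
   src and dst are row-major arrays of w*h bytes; border pixels are copied as-is. */
void box_filter(const unsigned char *src, unsigned char *dst, int w, int h)
{
    int x, y, i, j, sum;
    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++) {
            if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
                dst[y * w + x] = src[y * w + x];        /* keep border pixels      */
                continue;
            }
            sum = 0;
            for (j = -1; j <= 1; j++)                   /* 3x3 neighbourhood       */
                for (i = -1; i <= 1; i++)
                    sum += src[(y + j) * w + (x + i)];
            dst[y * w + x] = (unsigned char)(sum / 9);  /* equal weights: low-pass */
        }
}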
Edge Detection: This is a general name for a class of routines and techniques that operate on an image and result in a line drawing of the image. The lines represent changes in values such as cross-sections of planes, intersections of planes, textures, lines, and colors, as well as differences in shading and textures. Some techniques are mathematically oriented, some are heuristic, and some are descriptive. All generally operate on the differences between the gray levels of pixels or groups of pixels, through masks or thresholds. The final result is a line drawing or similar representation that requires much less memory to store, is much simpler to process, and saves computation and storage costs. Edge detection is also necessary in subsequent processes, such as segmentation and object recognition.
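As one concrete instance of such mask-based operators, a Sobel-type gradient detector can be sketched as follows. This is a generic textbook example given for illustration only; the project's own, simpler comparison-based algorithm is described in Chapter 3:

#include <stdlib.h>

/* Sobel gradient magnitude on a w x h grayscale image (row-major bytes).
   Only interior pixels are written, so the caller should clear dst first.
   Pixels whose gradient magnitude exceeds 'thresh' are marked white.     */
void sobel_edges(const unsigned char *src, unsigned char *dst,
                 int w, int h, int thresh)
{
    int x, y;
    for (y = 1; y < h - 1; y++)
        for (x = 1; x < w - 1; x++) {
            int gx = -src[(y-1)*w + (x-1)] + src[(y-1)*w + (x+1)]
                     - 2*src[y*w + (x-1)] + 2*src[y*w + (x+1)]
                     - src[(y+1)*w + (x-1)] + src[(y+1)*w + (x+1)];
            int gy = -src[(y-1)*w + (x-1)] - 2*src[(y-1)*w + x] - src[(y-1)*w + (x+1)]
                     + src[(y+1)*w + (x-1)] + 2*src[(y+1)*w + x] + src[(y+1)*w + (x+1)];
            int mag = abs(gx) + abs(gy);                  /* cheap |G| approximation   */
            dst[y*w + x] = (mag > thresh) ? 255 : 0;      /* line-drawing style output */
        }
}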
Image Data Compression: Electronic images contain large amounts of information and
thus require data transmission lines with large bandwidth capacity. The requirements for
the temporal and spatial resolution of an image, the number of images per second, and the
number of gray levels are determined by the required quality of the image.

1.2.2 APPLICATIONS OF IMAGE PROCESSING


There is a large number of applications of Image Processing across a diverse spectrum of human activities, from remotely sensed scene interpretation to biomedical image interpretation. There have been several successful cases in which criminals have been identified within large crowds, such as sports stadiums, through the use of image processing techniques. Image processing techniques allow the automation of studies to identify sources of malignancy reliably and efficiently. They enable doctors to perform guided surgery by planning their incisions and insertions through the maze of the human body. Successful techniques have allowed scientists to judge the presence of craters, soil and atmospheric characteristics. The main applications of Image Processing can be categorized as follows:
Biomedical Applications: In the field of medicine, image processing is highly applicable in areas like medical imaging, scanning, ultrasound and X-rays. Image Processing is widely used for MRI (Magnetic Resonance Imaging) and CT (Computed Tomography) scans. Tomography is an imaging technique that generates an image of a thin cross-sectional slice of a test piece.

Figure1.6: An example of Biomedical Application


Robotics: Image Processing is widely implemented in vision systems in robotics. Robots capture real-time images using cameras and process them to fulfil the desired action. A simple application in robotics using vision systems is a robot hand-eye coordination system. Consider that the robot's task is to move an object from one point to another. Here the robot is fitted with cameras to view the object that is to be moved. The hand of the robot and the object to be grasped are observed by the cameras fixed to the robot, and this real-time image is processed by image processing techniques to obtain the actual distance between the hand and the object. The base wheel of the robot's hand is then rotated through an angle proportional to this distance. A point on the target is obtained by using the edge detection technique. The operation to be performed is controlled by a micro-controller connected to the ports of the fingers of the robot's hand. Using software programs, the operations to be performed are assigned keys on the keyboard; by pressing the relevant key the hand moves appropriately.
Defense Surveillance: The application of image processing techniques in defense surveillance is an important area of study. Suppose we are interested in locating the type and the formation of naval vessels in an aerial image of the ocean surface. The primary task here is to segment the different objects in the water body part of the image. After this, parameters like area, location, perimeter and aspect ratio are found in order to classify each of the segmented objects. To describe all possible formations of the vessels, we should be able to identify the distribution of objects in eight possible directions. From the spatial distribution of these objects it is possible to interpret the entire oceanic scene.
Remotely Sensed Scene Interpretation: Information regarding natural resources, such as agricultural, hydrological, mineral, forest and geological resources, can be extracted based on remotely sensed image analysis. For remotely sensed image analysis, images of the earth's surface are captured by cameras in remote sensing satellites and transmitted to earth stations for further processing.
Law Enforcement: Police and detective agencies use intelligent software that is
able to zoom in on suspicious behavior usually triggered by sounds, the presence
of packages for protracted periods of time or clustering of many people. Image
processing allows the comparison of people on video surveillance images to
suspected rogues. There have been several successful implementation cases where
criminals have been identified within large crowds such as sports stadiums
through the use of image processing techniques.


Figure1.7: An example of Law Enforcement Application

1.2.3 ADVANTAGES AND DISADVANTAGES


ADVANTAGES:
In medicine, the use of Image Processing techniques has increased sophistication, leading to technological advancement.
Vision Systems are flexible, inexpensive, powerful tools that can be used with ease.
In Space Exploration robots play a vital role, and they in turn use image processing techniques.
Image Processing is used for astronomical observations.
Also used in Remote Sensing and Geological Surveys for detecting mineral resources etc.
Also used for character recognition techniques and inspection for abnormalities in industries.

DISADVANTAGES:
A person needs knowledge in many fields to develop an application, or part of an application, using image processing.
Calculations and computations are difficult and complicated, so an expert in the related field is needed. Hence it is unsuitable and unbeneficial to ordinary programmers with mediocre knowledge.

1.3 FORMAT OF BMP IMAGE


1.3.1 INTRODUCTION
A BMP computer image is the easiest to understand because it does not use compression,
making pixel data retrieval much easier. The table below shows how the pixel data is
stored from the first byte to the last.
TABLE 1: BMP File Structure

Byte # to fseek file pointer        Information
0                                   Signature
2                                   File size
18                                  Width (number of columns)
22                                  Height (number of rows)
28                                  Bits/pixel
46                                  Number of colors used
54                                  Start of color table
54 + 4*(number of colors)           Start of raster data

The first 14 bytes are dedicated to the header information of the BMP. The next 40 bytes are dedicated to the info header, from which one can retrieve such characteristics as width, height, file size, and number of colors used. Next is the color table, which is 4 x (number of colors used) bytes long. So for an 8-bit grayscale image (number of colors is 256), the color table would be 4 x 256 bytes long, or 1024 bytes. The last block of data in a BMP file is the pixel data, or raster data. The raster data starts at byte 54 (header + info header) + 4 x number of colors (color table). For an 8-bit grayscale image, the raster data would start at byte 54 + 1024 = 1078. The size of the raster data is (width x height) x 1 bytes. Therefore, a 100 row by 100 column 8-bit grayscale image would have 100 x 100 x 1 = 10,000 bytes of raster data, starting at byte 1078 and continuing to the end of the BMP.
In terms of image processing, the most important information is the following:
(1) Number of columns - byte #18
(2) Number of rows - byte #22
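A short sketch of retrieving these two values with fseek() is given below. It combines the two little-endian bytes as a + 256*b, which is also how the project code in Chapter 3 does it (the full BMP width and height fields are four bytes each, but two bytes suffice for small images); the file name is only an example:

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("TEST.bmp", "rb");      /* example file name             */
    long width, height;
    int a, b;
    if (fp == NULL) return 1;

    fseek(fp, 18L, SEEK_SET);                /* byte #18: number of columns   */
    a = fgetc(fp); b = fgetc(fp);
    width = a + 256L * b;                    /* little-endian: low byte first */

    fseek(fp, 22L, SEEK_SET);                /* byte #22: number of rows      */
    a = fgetc(fp); b = fgetc(fp);
    height = a + 256L * b;

    printf("width = %ld, height = %ld\n", width, height);
    fclose(fp);
    return 0;
}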

1.3.2 READING BMP RASTER DATA


TEST.bmp is a 20 row by 20 column BMP image which we will use to read raster data from. In an 8-bit BMP image, black is 0 and white is 255. The top left corner of TEST.bmp starts at a pixel value of 0 (black) and progressively works its way down the diagonal to a pixel value of 255 (white). Thinking of rows and columns in a BMP is not the same as thinking of rows and columns in a matrix. In a matrix, row 0 and column 0 would start you at the top left corner of the matrix. However, in a BMP the rows increase from bottom to top; therefore, row 0 and column 0 in a BMP correspond to the bottom left corner.
TEST.bmp is shown below, scaled up to 100 by 100 so that it is easier to see:

Figure 1.8: TEST.bmp

TEST.bmp contains 20 rows and 20 columns, so we know we will have 400 bytes of raster data. We also know the raster data will start at byte #(54 + 4 x number of colors). The number of colors of TEST.bmp is 256 because it is a grayscale image with colors ranging from 0 to 255. Therefore, the raster data will start at byte #1078 and the file size will be 1078 + 400 = 1478 bytes.
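The offsets worked out above can be verified with a few lines of arithmetic; the values for TEST.bmp (20 x 20 pixels, 256 colors) are plugged in purely as a worked example:

#include <stdio.h>

int main(void)
{
    long width = 20, height = 20, colors = 256;    /* values for TEST.bmp                  */

    long raster_start = 54 + 4 * colors;           /* header + info header + color table   */
    long raster_size  = width * height * 1;        /* 1 byte per pixel for 8-bit grayscale */
    long file_size    = raster_start + raster_size;

    printf("raster starts at byte %ld\n", raster_start);   /* 1078 */
    printf("raster size is %ld bytes\n", raster_size);     /* 400  */
    printf("file size is %ld bytes\n", file_size);         /* 1478 */
    return 0;
}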

1.4 BRIEF VIEW OF EDGE DETECTION


1.4.1 Edge Detection
Strictly speaking, images do not contain edges in a physical sense; an edge is a sharp change in the intensity of an image. But since the overall goal is to locate edges of real-world objects via an image, the term edge detection is commonly used. An edge is not a physical entity, just like a shadow.

Figure 1.9: An example of Edge Detection


Edge detection is one of the subjects of basic importance in image processing. The parts of an image where immediate changes in grey tones occur are called edges. Benefiting from the direct relation between the physical qualities of materials and their edges, these qualities can be recognized from edges. Because of this, edge detection techniques gain importance in image processing. Edge detection techniques transform images into edge images by exploiting the changes of grey tones in the images; the most productive methods of finding final edges are those that designate the immediate changes of grey level.
Edge detection is one of the most frequently used techniques in digital image processing.
Its application area reaches from astronomy to medicine where isolation of objects
focused on from the unwanted background is of great interest. Edge detection has also
found application for photogrammetric purposes. In typical images, edges characterize
object boundaries and are therefore useful for segmentation, registration, and
identification of objects in a scene.
Edge Detection is a technique used in image detection and identification fields. The
algorithm basically tries to identify (as the name suggests) edges in the image by looking
for color variations that are sharp in nature, thereby indicating the presence of an edge. In
specialized cases like Facial detection (think 'drawing a box around a face' like modern
digital cameras do), edge detection techniques are used to detect the presence of edges
that denote a face, 2 eyes, a nose, mouth, etc. Once these are detected with a certain
confidence, the system assumes a face and responds appropriately.
1.4.2 Types of Edges
All edges are locally directional. Therefore, the goal in edge detection is to find out what
occurred perpendicular to an edge. The following is a list of commonly found edges.

Figure 1.10: Types of Edges (a) Sharp step (b) Gradual step (c) Roof (d) Trough
A Sharp Step, as shown in Figure 1.10(a), is an idealization of an edge. Since an image is always band limited, this type of graph can never actually occur. A Gradual Step, as shown in Figure 1.10(b), is very similar to a Sharp Step, but it has been smoothed out; the change in intensity is not as quick or sharp. A Roof, as shown in Figure 1.10(c), is different from the first two edges: the derivative of this edge is discontinuous. A Roof can have a variety of sharpnesses, widths, and spatial extents. The Trough, shown in Figure 1.10(d), is the inverse of a Roof.
1.4.3 Criteria for Edge Detection
There is a large number of edge detection operators available, each designed to be sensitive to certain types of edges. The quality of edge detection can be measured objectively against several criteria. Some criteria are proposed in terms of mathematical measurement, while others are based on application and implementation requirements. In all five cases listed below, a quantitative evaluation of performance requires the use of images where the true edges are known.


Good detection: There should be a minimum number of false edges. Usually, edges are detected after a threshold operation. A high threshold will lead to fewer false edges, but it also reduces the number of true edges detected.
Noise sensitivity: A robust algorithm can detect edges in certain acceptable noise (Gaussian, uniform and impulsive noise) environments. In practice, an edge detector detects and also amplifies the noise simultaneously. Strategic filtering, consistency checking and post-processing (such as non-maximum suppression) can be used to reduce noise sensitivity.
Good localization: The edge location must be reported as close as possible to the correct position, i.e. edge localization accuracy (ELA).
Orientation sensitivity: The operator not only detects edge magnitude, but it also detects edge orientation correctly. Orientation can be used in post-processing to connect edge segments, reject noise and suppress non-maximum edge magnitudes.
Speed and efficiency: The algorithm should be fast enough to be usable in an image processing system. An algorithm that allows recursive implementation or separable processing can greatly improve efficiency.

Criteria of edge detection will help to evaluate the performance of edge detectors.
Correspondingly, different techniques have been developed to find edges based upon the
above criteria, which can be classified into linear and non linear techniques.
1.4.4 Motivation behind Edge Detection
The purpose of detecting sharp changes in image brightness is to capture important
events and changes in properties of the world. For an image formation model,
discontinuities in image brightness are likely to correspond to:a) Discontinuities in depth
b) Discontinuities in surface orientation
c) Changes in material properties
d) Variations in scene illumination
In the ideal case, the result of applying an edge detector to an image is a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, as well as curves that correspond to discontinuities in surface orientation. If the edge detection step is successful, the subsequent task of interpreting the information content of the original image may therefore be substantially simplified. Edges extracted from non-trivial images are often hampered by fragmentation (i.e. the edge curves are not connected), missing edge segments, false edges etc., which complicate the subsequent task of interpreting the image data.
1.4.5 Edge Detection a non-trivial task
To illustrate why edge detection is not a trivial task, let us consider the problem of
detecting edges in the following one-dimensional signal. Here, we may intuitively say
that there should be an edge between the 4th and 5th pixels.

152 148 149

Figure 1.11: Edge Detection - a non-trivial task


If the intensity difference were smaller between the 4th and the 5th pixels and if the
intensity differences between the adjacent neighbouring pixels were higher, it would,
however, not be as easy to say that there should be an edge in the corresponding region
or, indeed, if there even could be multiple edges. Hence, to firmly state a specific
threshold on how large the intensity change between two neighbouring pixels must be for
us to say that there should be an edge between these pixels is not always a simple
problem. Indeed, this is one of the reasons why edge detection may be a non-trivial
problem unless the objects in the scene are particularly simple and the illumination
conditions can be well controlled.
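The intuition above can be written down directly: mark an edge between two neighbouring pixels whenever their intensity difference exceeds a chosen threshold. The sketch below uses a hypothetical one-dimensional signal and a hand-picked threshold purely for illustration:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* example one-dimensional signal with a clear jump in the middle */
    int signal[] = {5, 7, 6, 4, 152, 148, 149};
    int n = sizeof(signal) / sizeof(signal[0]);
    int threshold = 50;                       /* chosen by hand for this example */
    int i;

    for (i = 0; i < n - 1; i++)
        if (abs(signal[i + 1] - signal[i]) > threshold)
            printf("edge between pixel %d and pixel %d\n", i + 1, i + 2);
    return 0;
}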

1.5 Applications of Edge Detection


In Computer Vision:
Object recognition
Line drawing analysis
Motion analysis
In Image Analysis:
Segmentation
Enhancement


1.6 Blur Image

Figure 1.12: An example of Blur Image


Blur is an image effect that is used to remove detail, resulting in an image that appears as if viewed through a translucent lens.
A blurred image has less abrupt spatial changes than the original. Thus, we can blur by averaging every pixel with some of the energy of its neighbours. Blurring acts as a low-pass filter, removing the high spatial frequencies which cause abrupt changes.

1.7 Grayscale Image


Figure 1.13: An example of Grayscale Image


Grayscale images are images without color, or achromatic images. The levels of a
grayscale range from 0 (black) to 1 (white).

Figure 1.10: GrayScale Range


A grayscale is an image in which the value of each pixel is a single sample, that is, it
carries only intensity information. Images of this sort, also known as black-and-white, are
composed exclusively of shades of gray, varying from black at the weakest intensity to
white at the strongest. Grayscale images are distinct from one-bit black-and-white
images, which in the context of computer imaging are images with only the two colors,
black, and white (also called bilevel or binary images). Grayscale images have many
shades of gray in between. Grayscale images are also called monochromatic, denoting the
absence of any chromatic variation. Grayscale images are often the result of measuring
the intensity of light at each pixel in a single band of the electromagnetic spectrum (e.g.
infrared, visible light, ultraviolet, etc.) and in such cases they are monochromatic proper
when only a given frequency is captured.
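A minimal sketch of converting one RGB pixel to its grey level is shown below. It assumes the simple equal-weight mean that the project itself uses in Chapter 3; weighted luminance formulas are also common:

/* Convert one RGB pixel to a grey level using the equal-weight mean. */
unsigned char to_gray(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned char)(((int)r + (int)g + (int)b) / 3);
}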

1.8 Negative Image

Figure 1.14: An example of Negative Image


A positive image is a normal image. A negative image is a tonal inversion of a positive image, in which light areas appear dark and dark areas appear light; a colour negative is additionally colour-reversed. Negative images are useful for enhancing white or grey details embedded in the dark regions of an image.
Film negatives usually also have much less contrast than the final images. This is compensated for by the higher-contrast reproduction of photographic paper, or by increasing the contrast when scanning and post-processing the scanned images.
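A per-pixel sketch of this inversion, using the common 255 - value form (the project's own code in Chapter 3 uses 256 - mean), is:

/* Invert one 8-bit intensity: light values become dark and vice versa. */
unsigned char to_negative(unsigned char value)
{
    return (unsigned char)(255 - value);
}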

1.9 SUMMARY

Image processing is a critical area of study which plays a vital role in the modern world, as it involves advanced use of science and technology. The advances in technology have created tremendous opportunities for Vision Systems and Image Processing. There is no doubt that the trend will continue into the future. From the above discussion we can conclude that this field has relatively more advantages than disadvantages and hence is very useful in varied branches. [1]


Chapter 2
LITERATURE: A REVIEW

2.1 Edge Detection


We are in the midst of a visually enchanting world, which manifests itself with a variety
of forms and shapes, colors and textures, motion and tranquility. The human perception
has the capability to acquire, integrate, and interpret all this abundant visual information
around us. It is challenging to impart such capabilities to a machine in order to interpret
the visual information embedded in still images, graphics, and video or moving images in
our sensory world. It is thus important to understand the techniques of storage, processing,
transmission, recognition, and finally interpretation of such visual scenes. A two
dimensional image that is recorded by sensors is the mapping of the three-dimensional
visual world. The captured two dimensional signals are sampled and quantized to yield
digital images.

In image processing and computer vision, edge detection treats the localization of significant variations of a gray level image and the identification of the physical and geometrical properties of the objects in the scene. Edge detection is a difficult issue; many difficulties arise from complex image content such as noise, varying contrast, and orientation sensitivity.
Digital image processing allows one to enhance image features of interest while
attenuating detail irrelevant to a given application, and then extract useful information
about the scene from the enhanced image. Images are produced by a variety of physical
devices, including still and video cameras, x-ray devices, electron microscopes, radar, and
ultrasound, and used for a variety of purposes, including entertainment, medical, business
(e.g. documents), industrial, military, civil (e.g. traffic), security, and scientific. The goal in each case is for an observer, human or machine, to extract useful information about the scene being imaged. Traditional edge detection techniques, such as the Roberts operator, Sobel operator and Laplacian of Gaussian operator, are widely used. Most of the existing techniques are either very sensitive to noise or do not give satisfactory results in low-contrast areas.
A fuzzy theory based edge detector avoids these problems and is a better method for edge information detection and noise filtering than the traditional methods. Edge detection using fuzzy logic provides an alternative approach to detecting edges. As noted earlier, the parts of an image where immediate changes in grey tones occur are called edges, and the physical qualities of materials can be recognized from their edges; fuzzy edge detection techniques likewise transform images into edge images by exploiting the changes of grey tones in the images.
2.1.1 History of Edge Detection
In this section, work done in the area of edge detection is reviewed, with a focus on detecting the edges of digital images. Edge detection is a problem of fundamental importance in image analysis. In typical images, edges characterize object boundaries and are therefore useful for segmentation, registration, and identification of objects in a scene. Edge detection of an image significantly reduces the amount of data and filters out information that may be regarded as less relevant, while preserving the important structural properties of an image.
In 1997 Ng Geok See and Chan Khue Hiang proposed a technique for edge detection based on a neural network. A neural network has many processing elements joined together, usually organized into groups called layers. Training is provided to the neural network in supervised or unsupervised learning mode, to force the network to yield a particular result for a specific input.

In 1998 Zhengquan He and M. Y. Siyal proposed a new technique based on a neural network. Most of the existing techniques, like the Sobel operator, are effective in certain senses but require more computation time. In the proposed edge detection technique a three-layer BP (back-propagation) neural network is employed to classify the edge elements in binary images into one of the predefined categories. To detect edges, the image is first binarized by choosing a threshold according to some optimal criterion, and the edge patterns of the binary image are classified into different categories. The neural network is trained on these patterns and on their noisy versions. After the network is trained, it can recognize the input pattern as the most likely pattern in the edge pattern bank. This technique is more flexible with respect to the edge structures in the image: it can extract not only straight lines but also corner and arc edges.
In 2005 Zhang, Zhao and Li Su proposed a technique based on the integer logarithmic ratio of gray levels. In order to improve the ability of noise rejection, they proposed using the ratio of gray levels between two successive image points, rather than the difference of gray levels, to denote the variation in the gray levels. In this scheme the division operation becomes a subtraction of the logarithms of the gray levels, which is more convenient for calculation.
In 2005 Stamatia Giannarou and Tania Stathaki proposed a technique that combines the outputs of different edge detection operators in order to yield improved edge detection results. This is called Receiver Operating Characteristics (ROC) analysis. The technique uses a statistical approach to automatically form an optimum edge map by combining edge images from different detectors. The characteristic of this method is that it produces accurate and noise-free results. One possible concern regarding such techniques is the selection of the edge detectors to be combined.
In 2006 M. Hanmandlu, Rohan Raj Kalra and Vamsi Krishna Madasu proposed a fuzzy technique based on the Univalue Segment Assimilating Nucleus (USAN) area. The USAN characterizes the structure of the edge present in the neighbourhood of a pixel and can thus be considered a unique feature of the pixel, which is fuzzified. This technique is best at yielding a large number of long edge segments. It is used for applications like face recognition and fingerprint identification, as it does not distort the shape of the image and is able to retain all the important edges. An appropriate fuzzification function and threshold selection are important for the success of the proposed edge detection algorithm.
Later on, a fast fuzzy edge detection technique was proposed. Heuristic membership functions, simple fuzzy rules, and fuzzy complements were used to develop new edge detectors. Then a fuzzy edge detector using entropy optimization was proposed. This fuzzy edge detector involves two phases: global contrast intensification and local fuzzy edge detection. In the first phase, a modified Gaussian membership function is chosen to represent each pixel in the fuzzy plane. To realize fast and accurate detection of edges in blurry images, the Fast Multilevel Fuzzy Edge Detection (FMFED) algorithm was proposed. The FMFED algorithm first enhances the image contrast by means of a simple transformation function based on two image thresholds. Second, the edges are extracted from the enhanced image by a two-stage edge detection operator that identifies the edge candidates based on the local characteristics of the image and then determines the true edge pixels using an edge detection operator based on the extrema of the gradient values.
The goal of the edge detection process in a digital image is to determine the frontiers of all represented objects, based on automatic processing of the color or gray level information in each pixel. Edge detection has many applications in image processing and computer vision, and is an indispensable technique in both biological and robot vision. The main objective of edge detection in image processing is to reduce data storage while retaining the topological properties of the image, to reduce transmission time and to facilitate the extraction of morphological outlines from the digitized image.
In our research problem we have used a simple algorithm to find edges in the image which is based on the colour of the image. The major stress is therefore on the development of algorithms for improving the quality of edge detection.

2.2 INTRODUCTION TO C
C is a general-purpose programming language.
C was invented and first implemented by Dennis Ritchie with the Unix operating system in 1972.
C is often called a middle-level computer language.
C is a structured language.
Data types supported by the C language include integer, float, double, character, etc.
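A minimal C program illustrating a few of these basic data types (purely an illustrative example) is:

#include <stdio.h>

int main(void)
{
    int    count   = 42;        /* integer   */
    float  ratio   = 3.14f;     /* float     */
    double precise = 2.718281;  /* double    */
    char   letter  = 'C';       /* character */

    printf("%d %f %f %c\n", count, ratio, precise, letter);
    return 0;
}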


Figure 2.1 C Compilation Model


2.2.1 History of the C language
ALGOL 60 (1960): the first programming language with block structures, control constructs, and recursion.
BCPL (Basic Combined Programming Language) (1967): developed by Martin Richards at Cambridge; it built a foundation for many C elements.
B (1970): developed by Ken Thompson at Bell Laboratories for the first UNIX system. BCPL and B are typeless languages, whereas C offers a variety of data types.
C (1972): written by Dennis Ritchie at Bell Labs for the implementation of UNIX. With the publication of The C Programming Language by Kernighan and Ritchie in 1978, it evolved into the de facto standard for C.

2.2.2 Usage of C
C's primary use is for system programming, including implementing operating systems
and embedded system applications.
C has also been widely used to implement end-user applications, although as
applications became larger much of that development shifted to other, higher-level
languages.
One consequence of C's wide acceptance and efficiency is that the compilers,
libraries, and interpreters of other higher-level languages are often implemented in
C.
You will be able to read and write code for a large number of platforms even
microcontrollers.
2.2.3 Characteristics of C
Portability: it is easy to adapt software written for one type of computer or operating system to another type.
Structured programming language: it makes use of subroutines (functions) and local (temporary) variables.
Efficient memory control: it makes use of the concept of pointers.
Various applications: wide usage in many upcoming fields.
Lack of nested function definitions.
Variables may be hidden in nested blocks.
Partially weak typing; for instance, characters can be used as integers.
Low-level access to computer memory by converting machine addresses to typed pointers.
Function and data pointers supporting ad hoc run-time polymorphism.
Array indexing as a secondary notion, defined in terms of pointer arithmetic.
A preprocessor for macro definition, source code file inclusion, and conditional compilation.
Complex functionality such as I/O, string manipulation, and mathematical functions consistently delegated to library routines.
A relatively small set of reserved keywords.
2.2.4 C Advantages
Programming and program test time is drastically reduced
Knowledge of the processor instruction set is not required. Only rudimentary knowledge
of the memory structure of the CPU is desirable, although not necessary.
Details like register allocation, the addressing of the various memory, and data types are
managed by the compiler.
Programs get a formal structure and can be divided into separate functions. This provides
better program structure.
The ability to combine variable selections with specific operations improves program
readability.
Keywords and operational functions can be used that closely resemble the human thought
process.
The supplied and supported C libraries contain many standard routines such as formatted
output, numeric conversions, and floating point arithmetic.
Existing program parts can be more easily included into new programs by the use of the
modular programming techniques.
The C language is a very portable language (standardized to ANSI X3J11), enjoys wide support, and is easily obtained for most systems. This means that any existing program investment can be quickly adapted to other microcontrollers.

Chapter 3

DESIGN AND ALGORITHMS

3.1 DATA FLOW DIAGRAM


Level Zero DFD:-

Figure 3.1: Level Zero DFD

Level One DFD:-


Figure 3.2: Level One DFD

3.2 ALGORITHMS
3.2.1 Algorithm for Edge Detection


This is a very efficient algorithm through which we can detect the edges in an image. The algorithm takes an image as input and gives its edge-detected version as output. Now let us see how this algorithm works.
Firstly, the input image file is opened in read mode, pointed to by the pointer fr, and another file is opened in write mode, pointed to by the pointer fw. The header of the input file is copied into the file opened in write mode. After that, the three bytes of each pixel are read and stored in an array named pix[]. For a particular pix[], pixb[] indicates the previous pixel, pixu[] indicates the upper pixel and pixd[] indicates the bottom pixel. The 19th and 20th bytes of the header give the number of columns in the image, calculated using the formula a + b*256, where a and b are the two bytes read (each in the range 0-255, the range one byte can hold); for example, bytes 44 and 1 give 44 + 1*256 = 300 columns. Similarly, the 23rd and 24th bytes of the header give the number of rows in the image, calculated using the same formula. Two built-in functions are used, fseek() and ftell():
fseek() takes three parameters: the pointer to the file, the offset of the location to move to in the file, and the whence value (beginning, current, or end).
ftell() returns the current position of the file pointer.
After that, the size of the file is calculated and stored in a variable named size. The three bytes stored earlier in pix[] are compared with the pixels in the upper row, the lower row and the previous pixel. For this comparison a function named check() is used, which returns 1 if no change is found and 0 if a change is found, where a change means an edge is detected. Once an edge is detected, the value 200 (a light, near-white colour) is written to the output file pointed to by fw; otherwise the value 0 (black) is written, indicating that no change was found. Finally the result is stored in the output file (EDGE_all.bmp).
The algorithm is as follows:

EDGE(Image)
1.  Fr <- initial address of image
2.  Fw <- initial address of output file
3.  for i=0 to 61 { c=fgetc(fr); fputc(c,fw); }            // copy header
4.  Read the 19th & 20th locations: Col = a + 256*b         // calculate the width of the image
5.  Read the 23rd & 24th locations: Row = a + 256*b         // calculate the height of the image
6.  Go to the END of the file: size = ftell(fr)             // file size
7.  Go to the 63rd location of the file
8.  while (k is not greater than size)
9.      cur = ftell(fr)
10.     for j=0 to 3 { pix[j]=fgetc(fr); k++; }
11.     if ((k-3)==0) for j=0 to 3 pixb[j]=pix[j]            // if first pixel
        if (k/(3*wd)==0) for j=0 to 3 pixu[j]=pix[j]         // if first row?
        else
            fseek(fr,cur-(wd*3),SEEK_SET)
            for j=0 to 3 pixu[j]=fgetc(fr)
12.     if (k>(wd*(ht-1))*3) for j=0 to 3 pixd[j]=pix[j]     // if last row?
        else
            fseek(fr,cur+(wd*3),SEEK_SET)
            for j=0 to 3 pixd[j]=fgetc(fr)
13.     fseek(fr,cur+3,SEEK_SET)
14.     eql=check(pix,pixb,pixu,pixd)
15.     if (eql) { for j=0 to 3 fputc(0,fw) }
        else     { for j=0 to 3 fputc(200,fw) }
16.     for j=0 to 3 pixb[j]=pix[j]

Check(pix, pixb, pixu, pixd)
for i=0 to 3
    if (*(pix+i)==*(pixb+i) && *(pix+i)==*(pixu+i) && *(pix+i)==*(pixd+i))
        j=1
    else
        j=0
return j
3.2.2 Algorithm for Blur Image
The blur algorithm takes an image as input and gives its blurred version as output. Now let us see how this algorithm works.
Firstly, the input file is opened in read mode, pointed to by the pointer fr, and another file is opened in write mode, pointed to by the pointer fw. The header of the input file is copied into the file opened in write mode. After that, the three bytes of each pixel are read and stored in an array named pix[]. For a particular pix[], pixb[] indicates the previous pixel, pixu[] indicates the upper pixel and pixd[] indicates the bottom pixel. The 19th and 20th bytes of the header give the number of columns in the image, calculated using the formula a + b*256, where a and b are the two bytes read (each in the range 0-255). Similarly, the 23rd and 24th bytes of the header give the number of rows in the image, calculated using the same formula. The two built-in functions fseek() and ftell() are used:
fseek() takes three parameters: the pointer to the file, the offset of the location to move to in the file, and the whence value (beginning, current, or end).
ftell() returns the current position of the file pointer.
After that, the size of the file is calculated and stored in a variable named size. The three bytes stored earlier in pix[] are compared with the pixels in the upper row, the lower row and the previous pixel.
For each pixel we first take the red byte and calculate the mean of all four red bytes (current, previous, upper and lower), then replace the central pixel's red byte with the calculated mean value. The same process is repeated for the green and blue bytes of the pixel. Proceeding in this manner, the blurred image is obtained.
BLUR(Image)
1.  Fr <- initial address of image
2.  Fw <- initial address of output file
3.  for i=0 to 61 { c=fgetc(fr); fputc(c,fw); }            // copy header
4.  Read the 19th & 20th locations: Col = a + 256*b         // calculate the width of the image
5.  Read the 23rd & 24th locations: Row = a + 256*b         // calculate the height of the image
6.  Go to the END of the file: size = ftell(fr)             // file size
7.  Go to the 63rd location of the file
8.  while (k is not greater than size)
9.      cur = ftell(fr)
10.     for j=0 to 3 { pix[j]=fgetc(fr); k++; }
11.     if ((k-3)==0) for j=0 to 3 pixb[j]=pix[j]            // if first pixel
        if (k/(3*wd)==0) for j=0 to 3 pixu[j]=pix[j]         // if first row?
        else
            fseek(fr,cur-(wd*3),SEEK_SET)
            for j=0 to 3 pixu[j]=fgetc(fr)
12.     if (k>(wd*(ht-1))*3) for j=0 to 3 pixd[j]=pix[j]     // if last row?
        else
            fseek(fr,cur+(wd*3),SEEK_SET)
            for j=0 to 3 pixd[j]=fgetc(fr)
13.     fseek(fr,cur+3,SEEK_SET)
14.     eql=check(pix,pixb,pixu,pixd)
15.     if (eql) { for j=0 to 3 fputc(pix[j],fw) }
        else     { for j=0 to 3 fputc((pix[j]+pixb[j]+pixu[j]+pixd[j])/4,fw) }
16.     for j=0 to 3 pixb[j]=pix[j]

Check(pix, pixb, pixu, pixd)
for i=0 to 3
    if (*(pix+i)==*(pixb+i) && *(pix+i)==*(pixu+i) && *(pix+i)==*(pixd+i))
        j=1
    else
        j=0
return j
3.2.3 Algorithm for Grayscale Image
Grayscale is a very efficient algorithm through which we can convert a colourful image into a grayscale (black and white) image. This algorithm takes an image as input and gives its grayscale version as output. Now let us see how this algorithm works.
Firstly, the input file is opened in read mode, pointed to by the pointer fr, and another file is opened in write mode, pointed to by the pointer fw. The header of the input file is copied into the file opened in write mode. After that, the three bytes of each pixel are read and stored in an array named pix[]; each pixel of the image consists of three bytes: red (0-255), blue (0-255) and green (0-255). Finally, for each pixel the mean of the three colours is taken, stored in a variable mean, and written for all three bytes, so that a grayscale image is obtained.
GRAYSCALE(Image)
1. Fr <- initial address of image
2. Fw <- initial address of output file
3. for j=0 to 62 { c=fgetc(fr); fputc(c,fw); }     // copy header
4. while (!feof(fr))
       for j=0 to 3 pix[j]=fgetc(fr)
       mean=(pix[0]+pix[1]+pix[2])/3
       for j=0 to 3 fputc(mean,fw)
3.2.4 Algorithm for Negative Image
In order to obtain the negative of an image, the input file is opened in read mode, pointed to by the pointer fr, and another file is opened in write mode, pointed to by the pointer fw. The header of the input file is copied into the file opened in write mode. After that, the three bytes of each pixel are read and stored in an array named pix[]; each pixel of the image consists of three bytes: red (0-255), blue (0-255) and green (0-255). For each pixel the mean of the three colours is taken and stored in a variable mean. After that, the mean is subtracted from 256, so that light colours appear in the dark range and dark colours appear in the light range. Ultimately the negative of the image is stored in the file opened in write mode. The algorithm is as follows:

NEGATIVE(Image)
1. Fr <- initial address of image
2. Fw <- initial address of output file
3. for j=0 to 62 { c=fgetc(fr); fputc(c,fw); }     // copy header
4. while (!feof(fr))
       for j=0 to 3 pix[j]=fgetc(fr)
       mean=(pix[0]+pix[1]+pix[2])/3
       mean=256-mean
       for j=0 to 3 fputc(mean,fw)

3.3 SPECIFICATIONS
Software Requirements:
One of the following Operating Systems:
Windows(R) XP Professional with Service Pack 1.
Windows 2000 Professional with Service Pack 2 or higher.
Windows NT(R) Workstation or Server Version 4.0 with Service Pack 6a or higher.
Red Hat, Version 7.2.
Red Hat, Version 8.0.
Turbo C (version 1.5 or higher )
Hardware Requirements:
Intel(R) Pentium(R) II processor minimum
(Pentium III 500 MHz or higher is recommended)
256 MB RAM minimum (512 MB RAM is recommended)
Display resolution:

800 x 600 display minimum (1024 x 768 recommended)

3.4 SYSTEM IMPLEMENTATION AND TESTING


3.4.1 Implementation Issues

The implementation phase of software development is concerned with translating the design specifications into source code. After the system has been designed comes the
stage of putting it into actual usage known as the implementation of the system. This
involves putting up of actual practical usage of the theoretically designed system. The
primary goal of implementation is to write the source code and the internal
documentation so that conformance of the code to its specifications can easily be verified
and so the debugging, modifications and testing are eased. This goal can be achieved by
making the source code as clear and as straightforward as possible. Simplicity, elegance and clarity are the hallmarks of good programs, whereas complexity is an indication of inadequate design and misdirected thinking. The system implementation is a fairly
complex and expensive task requiring numerous inter-dependent activities. It involves the
effort of a number of groups of people: user and the programmers and the computer
operating staff etc. This needs a proper planning to carry out the task successfully. Thus it
involves the following activities:
Writing and testing of programs individually
Testing the system as a whole using the live data
Training and Education of the users and supervisory staff
Source code clarity is enhanced by using structured coding techniques, an efficient coding style, appropriate supporting documents, efficient internal comments and the features provided by modern programming languages.
The following are the structured coding techniques:
1) Single Entry, Single Exit
2) Data Encapsulation
3) Using recursion for appropriate problems
3.4.2 Testing
The most important activity at the implementation stage is the system testing with the
objective of validating the system against the designed criteria. During the development
cycle, user was involved in all the phases that are analysis, design and coding. After each
phase the user was asked whether he was satisfied with the output and the desired
rectification was done at that moment. During coding, generally a bottom-up technique is used. Firstly the lower-level modules are coded and then they are integrated together.
Thus before implementation, it involves the testing of the system. The testing phase
involves testing first of separate parts of the system and then finally of the system as a
whole. Each independent module is tested first and then the complete system is tested.
This is the most important phase of the system development. The user carries out this
testing and test data is also prepared by the user to check for all possible combinations of
correct data as well as the wrong data that is trapped by the system. So the testing phase
consists of the following steps:
Unit testing:
In the bottom-up coding technique, each module is tested individually. Firstly the module is tested with some test data that covers all the possible paths, and then the actual data is fed in to check the results.

Integration testing:
After all the modules are ready and duly tested, these have to be integrated into the
application. This integrated application was again tested first with the test data and then
with the actual data.
Parallel testing:
The third in the series of tests before handing the system over to the user is the parallel processing of the old and the new system. At this stage, complete and thorough testing is done, and support is provided for any event that goes wrong. This provides better practical support to the persons using the system for the first time, who may be uncertain or even nervous about using it.
The testing will be performed considering the following points:
1) Clerical procedure for collection and disposal of results
2) Flow of data
3) Accuracy of output
4) Software testing which involves testing of all the programs together.
5) Incomplete data formats
6) Halts due to various reasons and the restart procedures.
7) Range of items and incorrect formats
8) Invalid combination of data records.
3.5 SUMMARY

This chapter provides the design of the project and the various algorithms used for the project. It also states the software and hardware specifications of the project, plus the various testing and implementation issues of the project.


Chapter 4
RESULT AND DISCUSSION
4.1 FRONT PAGE

This is the first page that opens whenever you want to process an image. This page displays the project name (Edge Detection Technique in Image).

Figure 4.1: Front Page

4.2 INPUT HANDLER


This screen prompts the user to enter the name of the input image and to select one of the options available on the screen.


Figure 4.2: Input Handler

4.3 INPUT IMAGE


This is the Input image whose name is given in order to detect edges in it.


Figure 4.3: Input Image

4.4 Output of Edge Detection

An output image is formed which contains all the edges of the input image, named EDGE_ALL.bmp.


Figure 4.4: Edge Detected Image

4.5 Output of Grayscale

An output image is formed which is the grayscale form of the input image, named BANDW.bmp.


Figure 4.5: Grayscale Image

4.6 Output of Negative


An output image is formed which is the negative form of the input image, named NEGATIVE.bmp.

Figure 4.6: Negative Image

4.7 Output of Blur


An output image is formed which is the blurred form of the input image, named BLUR.bmp.

Figure 4.7: Blurred Image

4.8 SUMMARY

This chapter gives an outlook of the implemented project. It shows the Graphical User Interface (GUI) provided to the user and the various options the user can access in the project. This chapter also acts as a guide for the usage of this project.


Chapter 5
CONCLUSION AND FUTURE WORK

5.1 CURRENT SCOPE


Edge detection is at the forefront of image processing for object detection, so it is crucial to have a good understanding of it. Edge detection is a fundamental step in computer vision and the initial step in object recognition, and it is necessary to point out the true edges to get the best results. Edge detection techniques are used in computer vision and image processing applications such as object recognition, image segmentation/compression and so forth.
5.2 FUTURE SCOPE
The project has a wide scope for enhancing its power of detecting the edges in an image. The project can currently detect edges in BMP images, so in the future it can be improved further to detect edges in other image formats as well. Noise is the main problem in edge detection, so further work has to be done in the future to reduce the effect of noise on edge detection. The advances in technology have created tremendous opportunities for Vision Systems and Image Processing, and there is no doubt that the trend will continue into the future. Developing a program that reduces human intervention to a minimum was targeted, and our studies on this subject are ongoing. If complete automation can be provided by using this type of program in the future, good results can be obtained in different areas through this kind of study.

5.3 SUMMARY
This chapter briefs what has already been done in the project, i.e., it tells the present scope of the project and the areas it covers. It also states what could be added in the future to make it more generalised, useful and efficient.


APPENDIX I
SOURCE CODE
INPUT HANDLER
#include<conio.h>
#include<stdio.h>
#include<stdlib.h>
int main()
{
int i;
char image[80];                     /* buffer for the path of the input image */
void EDGE(char *);
void BLUR(char *);
void GRAYSCALE(char *);
void NEGATIVE(char *);
clrscr();
textcolor(7);
textbackground(0);
gotoxy(30,10);
cprintf("EDGE DETECTION IN IMAGE");
getch();
clrscr();
window(10,5,70,40);
textcolor(3);
textbackground(1);
clrscr();
while(1)
{
cprintf("PLEASE ENTER THE PATH OF THE IMAGE: ");   // PATH OF THE IMAGE
scanf("%s",image);
gotoxy(1,3);
cprintf("PLEASE ENTER UR CHOICE\n\r1) DETECT THE EDGES\n\r");   // MENU
cprintf("2) BLUR THE IMAGE\n\r3) BLACK N WHITE THE IMAGE\n\r");
cprintf("4) NEGATIVE of THE IMAGE\n\r");
cprintf("5) I M Done Please Take Me To My KIN Now\n\r");
scanf("%d",&i);
getch();
switch(i)
{
case 1: EDGE(image);
break;
case 2: BLUR(image);
break;
case 3: GRAYSCALE(image);
break;
case 4: NEGATIVE(image);
break;
case 5: exit(0);
default: cprintf("Oh! U have entered WRONG CHOICE. PLZ ENTER IT AGAIN");
}
}
return 0;
}

EDGE DETECTION FUNCTION


void EDGE(char *image)
{
int i,c,eql,j=0,pix[3],pixb[3],pixu[3],pixd[3],a,b;
FILE *fr,*fw;
long wd,ht,k=0,cur,size;
clrscr();
fr=fopen(image,"rb");
fw=fopen("EDGE_all.bmp","wb");
for(i=0;i<62;i++){ c=fgetc(fr); fputc(c,fw);}        // copy header
fseek(fr,18L,SEEK_SET);                              // no of COLUMNS
a=fgetc(fr);b=fgetc(fr);
wd=a+b*256;
fseek(fr,22L,SEEK_SET);                              // no of ROWS
a=fgetc(fr);b=fgetc(fr);
ht=a+b*256;
fseek(fr,0L,SEEK_END);
size= ftell(fr);                                     // file size
fseek(fr,62L,SEEK_SET);
while(k<=size)                                       // !feof(fr)
{ cur=ftell(fr);
for(j=0;j<3;j++){pix[j]=fgetc(fr); k++;}
if((k-3)==0) for(j=0;j<3;j++) pixb[j]=pix[j];        // if first pixel
if(k/(3*wd)==0) for(j=0;j<3;j++) pixu[j]=pix[j];     // if first row?
else
{ fseek(fr,cur-(wd*3),SEEK_SET);
for(j=0;j<3;j++) pixu[j]=fgetc(fr);
}
if(k>(wd*(ht-1))*3) for(j=0;j<3;j++) pixd[j]=pix[j]; // if last row?
else
{ fseek(fr,cur+(wd*3),SEEK_SET);
for(j=0;j<3;j++) pixd[j]=fgetc(fr);
}
fseek(fr,cur+3,SEEK_SET);
eql=check(pix,pixb,pixu,pixd);
if(eql) {for(j=0;j<3;j++)fputc(0,fw);}
else {for(j=0;j<3;j++)fputc(255,fw);}
for(j=0;j<3;j++) pixb[j]=pix[j];
}
fclose(fr);
fclose(fw);
}
int check(int p[],int pb[],int pu[],int pd[])
{
int i;
for(i=0;i<3;i++)
{
if(*(p+i)==*(pb+i));
else return 1;
if(*(p+i)==*(pu+i));
else return 1;
if(*(p+i)==*(pd+i));
else return 1;
}
return 0;
}

BLUR FUNCTION
void BLUR(char *image)
{
int i,c,eql,j=0,pix[3],pixb[3],pixu[3],pixd[3],a,b;
FILE *fr,*fw;
long wd,ht,k=0,cur,size;
clrscr();
fr=fopen(image,"rb");
fw=fopen("BLUR.bmp","wb");
for(i=0;i<62;i++){ c=fgetc(fr); fputc(c,fw);}        // copy header
fseek(fr,18L,SEEK_SET);                              // no of COLUMNS
a=fgetc(fr);b=fgetc(fr);
wd=a+b*256;
fseek(fr,22L,SEEK_SET);                              // no of ROWS
a=fgetc(fr);b=fgetc(fr);
ht=a+b*256;
fseek(fr,0L,SEEK_END);
size= ftell(fr);                                     // file size
fseek(fr,62L,SEEK_SET);
while(k<=size)                                       // !feof(fr)
{ cur=ftell(fr);
for(j=0;j<3;j++){pix[j]=fgetc(fr); k++;}
if((k-3)==0) for(j=0;j<3;j++) pixb[j]=pix[j];        // if first pixel
if(k/(3*wd)==0) for(j=0;j<3;j++) pixu[j]=pix[j];     // if first row?
else
{ fseek(fr,cur-(wd*3),SEEK_SET);
for(j=0;j<3;j++) pixu[j]=fgetc(fr);
}
if(k>(wd*(ht-1))*3) for(j=0;j<3;j++) pixd[j]=pix[j]; // if last row?
else
{ fseek(fr,cur+(wd*3),SEEK_SET);
for(j=0;j<3;j++) pixd[j]=fgetc(fr);
}
fseek(fr,cur+3,SEEK_SET);
// eql=check(pix,pixb,pixu,pixd);
// if(eql) {for(j=0;j<3;j++)fputc(pix[j],fw);}
// else
for(j=0;j<3;j++)fputc((pix[j]+pixb[j]+pixu[j]+pixd[j])/4,fw);  // average with neighbours
for(j=0;j<3;j++) pixb[j]=pix[j];
}
fclose(fr);
fclose(fw);
}
GRAYSCALE FUNCTION

void GRAYSCALE(char *image)
{
int i,c,j=0,pix[3],mean;
FILE *fr,*fw;
clrscr();
fr=fopen(image,"rb");
fw=fopen("GRAYSCALE.bmp","wb");
for(i=0;i<62;i++)                        // copy header
{ c=fgetc(fr); fputc(c,fw);
}
while(!feof(fr))
{
for(j=0;j<3;j++)pix[j]=fgetc(fr);
mean=(pix[0]+pix[1]+pix[2])/3;           // average of the three colour bytes
for(j=0;j<3;j++)fputc(mean,fw);
}
fclose(fr);
fclose(fw);
}
NEGATIVE FUNCTION
void NEGATIVE(char *image)
{
int i,c,j=0,pix[3],mean;
FILE *fr,*fw;
clrscr();
fr=fopen(image,"rb");
fw=fopen("NEGATIVE.bmp","wb");
for(i=0;i<62;i++)                        // copy header
{
c=fgetc(fr); fputc(c,fw);
}
while(!feof(fr))
{
for(j=0;j<3;j++)pix[j]=fgetc(fr);
mean=(pix[0]+pix[1]+pix[2])/3;           // average of the three colour bytes
mean=256-mean;                           // invert to obtain the negative
for(j=0;j<3;j++)fputc(mean,fw);
}
fclose(fr);
fclose(fw);
}

APPENDIX II
REFERENCES

[1] Image Processing and Applications - By Gonzalez & Woods
[2] Feature extraction and image processing - By Mark S. Nixon, Alberto S. Aguado
[3] The image processing handbook -By John C. Russ
[4] New edge detection method in image processing- By Renyan Zhang,
Guoling Zhao, Li Su
[5] http://www.pages.drexel.edu/~weg22/edge.html
[6] http://www.altera.com/literature/cp/gspx/edge-detection.pdf
[7] http://mecadserv1.technion.ac.il/public_html/LabCourses/MachineVision
/Documents/Material/Tutorial4.pdf
[8] http://en.wikipedia.org/wiki/Image_processing


