Chapter 1
INTRODUCTION
However, different computer monitors may use different-sized pixels. Each pixel has a color, stored as a 24-bit integer: the first eight bits determine the redness of the pixel, the next eight bits the greenness, and the last eight bits the blueness.
Figure: Layout of a 24-bit pixel value, with eight bits each for the Red, Green, and Blue components.
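To make the packing concrete, here is a small C sketch (illustrative only, not part of the project code; it assumes the red component occupies the most significant of the three bytes):

#include <stdio.h>

/* Pack 8-bit red, green, and blue components into one 24-bit value. */
unsigned long pack_rgb(unsigned r, unsigned g, unsigned b)
{
    return ((unsigned long)r << 16) | ((unsigned long)g << 8) | b;
}

int main(void)
{
    unsigned long c = pack_rgb(200, 100, 50);
    /* Unpack the components again by shifting and masking. */
    printf("R=%lu G=%lu B=%lu\n",
           (c >> 16) & 0xFF, (c >> 8) & 0xFF, c & 0xFF);
    return 0;
}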
A grayscale image is sometimes called a black and white image, but the name emphasizes that such an image will also include many shades of grey.
JPEG - a very efficient (i.e. much information per byte) destructively compressed 24-bit (16 million colors) bitmap format. Widely used, especially for the bandwidth-limited web and Internet.
TIFF - the standard 24-bit publication bitmap format. Compresses non-destructively with, for instance, Lempel-Ziv-Welch (LZW) compression.
PSD - a dedicated Photoshop format that keeps all the information in an image, including all the layers.
Segmentation divides images into smaller sections based on some common quality, such as color or light intensity. It is possible to extend the dynamic range of photos by combining images that vary in light exposure. Some of the most sophisticated techniques include morphology. The Holy Grail of image processing tends to be object recognition, where software is trained to recognize and categorize the parts of an image based on colors and outlines.
1.2.1 STAGES IN IMAGE PROCESSING
Image processing techniques are used to enhance, improve, or otherwise alter an image and to prepare it for image analysis. Usually, during image processing, information is not extracted from the image. The intention is to remove faults, trivial information, or information that may be important but not useful, and to improve the image. Image processing is divided into many sub-processes, including histogram analysis, convolution masking, edge detection, and image data compression. Captured images are seldom of good quality, due to both hardware and software inadequacies; thus, they have to be enhanced and improved before other analysis can be performed on them.
Convolution Masks: A mask may be used for many different purposes, including filtering operations and noise reduction. Noise and edges produce higher frequencies in the spectrum of a signal. It is possible to create masks that behave like a low-pass filter, such that the higher frequencies of an image are attenuated while the lower frequencies are not changed very much. Thereby the noise is reduced.
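As a concrete illustration, the following C sketch applies a 3x3 averaging mask, one simple low-pass filter, to a grayscale array held in memory (illustrative only; boundary pixels are left unchanged for brevity):

/* Replace each interior value with the mean of its 3x3 neighbourhood,
   i.e. convolve with a mask whose nine weights are all 1/9. */
void average3x3(const unsigned char *in, unsigned char *out, int w, int h)
{
    int x, y, i, j, sum;
    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++) {
            if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
                out[y*w + x] = in[y*w + x];   /* copy the border as-is */
                continue;
            }
            sum = 0;
            for (j = -1; j <= 1; j++)
                for (i = -1; i <= 1; i++)
                    sum += in[(y + j)*w + (x + i)];
            out[y*w + x] = (unsigned char)(sum / 9);
        }
}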
Edge Detection: This is a general name for a class of routines and techniques that operate on an image and result in a line drawing of the image. The lines represent changes in values such as cross-sections of planes, intersections of planes, textures, and lines, as well as differences in shading and textures. Some techniques are mathematically oriented, some are heuristic, and some are descriptive. All generally operate on the differences between the gray levels of pixels or groups of pixels through masks or thresholds. The final result is a line drawing or similar representation that requires much less memory to store, is much simpler to process, and saves computation and storage costs. Edge detection is also necessary in subsequent processes, such as segmentation and object recognition.
Image Data Compression: Electronic images contain large amounts of information and
thus require data transmission lines with large bandwidth capacity. The requirements for
the temporal and spatial resolution of an image, the number of images per second, and the
number of gray levels are determined by the required quality of the image.
Physicians now perform image-guided surgery, planning their incisions and insertions through the maze of the human body. Successful techniques have allowed scientists to judge the presence of craters, soil, and atmospheric characteristics. The main applications of image processing can be categorized as follows:
Biomedical Applications: In the field of medicine, image processing is highly applicable in areas like medical imaging, scanning, ultrasound, X-rays, etc. It is widely used for MRI (Magnetic Resonance Imaging) and CT (Computed Tomography) scans. Tomography is an imaging technique that generates an image of a thin cross-sectional slice of a test piece.
assigned keys from the keyboard. By pressing the corresponding key on the keyboard, the hand moves appropriately.
Defense Surveillance: The application of image processing techniques in defense surveillance is an important area of study. Suppose we are interested in locating the type and formation of naval vessels in an aerial image of the ocean surface. The primary task here is to segment the different objects in the water-body part of the image. After this, parameters like area, location, perimeter, and aspect ratio are found to classify each of the segmented objects. To describe all possible formations of the vessels, we should be able to identify the distribution of objects in eight possible directions. From the spatial distribution of these objects it is possible to interpret the entire oceanic scene.
Remotely Sensed Scene Interpretation: Information regarding natural resources, such as agricultural, hydrological, mineral, forest, and geological resources, can be extracted based on remotely sensed image analysis. For remotely sensed image analysis, images of the earth's surface are captured by cameras on remote sensing satellites and transmitted to earth stations for further processing.
Law Enforcement: Police and detective agencies use intelligent software that is able to zoom in on suspicious behavior, usually triggered by sounds, the presence of packages for protracted periods of time, or the clustering of many people. Image processing allows the comparison of people on video surveillance images with suspected rogues. There have been several successful cases in which criminals have been identified within large crowds, such as sports stadiums, through the use of image processing techniques.
DISADVANTAGES:
Information           Starting byte
Signature             0
File size             2
Width (columns)       18
Height (rows)         22
Bits/pixel            28
Number of colors      46
Color table           54
Raster data           54 + 4*(number of colors)
The first 14 bytes are dedicated to the header information of the BMP. The next 40 bytes are dedicated to the info header, where one can retrieve such characteristics as width, height, file size, and number of colors used. Next is the color table, which is 4 x (number of colors used) bytes long. So for an 8-bit grayscale image (number of colors is 256), the color table would be 4 x 256 bytes long, or 1024 bytes. The last block of data in a BMP file is the pixel data, or raster data. The raster data starts at byte 54 (header + info header) + 4 x number of colors (color table). For an 8-bit grayscale image, the raster data would start at byte 54 + 1024 = 1078. The size of the raster data is (width x height) x 1 bytes. Therefore, a 100 row by 100 column 8-bit grayscale image would have (100 x 100) x 1 = 10,000 bytes of raster data, starting at byte 1078 and continuing to the end of the BMP.
In terms of image processing, the most important information is the following:
(1) Number of columns - byte #18
(2) Number of rows - byte #22
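The following C sketch reads these two fields using the same little-endian a + b*256 convention as the algorithms in Chapter 3, and computes where the raster data begins (the file name TEST.bmp is from the example below; the 256-color assumption corresponds to an 8-bit grayscale BMP):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("TEST.bmp", "rb");
    int a, b;
    long cols, rows, raster;
    if (f == NULL) return 1;
    fseek(f, 18L, SEEK_SET);              /* number of columns at byte 18 */
    a = fgetc(f); b = fgetc(f);
    cols = a + b * 256;
    fseek(f, 22L, SEEK_SET);              /* number of rows at byte 22 */
    a = fgetc(f); b = fgetc(f);
    rows = a + b * 256;
    raster = 54 + 4L * 256;               /* header + info header + color table */
    printf("%ld x %ld, raster data starts at byte %ld\n", cols, rows, raster);
    fclose(f);
    return 0;
}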
TEST.bmp is scaled up here to a 100 by 100 BMP for display. The figure is shown below:
TEST.bmp contains 20 rows and 20 columns, so we know we will have 400 bytes of raster data. We also know the raster data will start at byte #(54 + 4 x number of colors). The number of colors of TEST.bmp is 256, because it is a grayscale image with colors ranging from 0 to 255. Therefore, the raster data will start at byte #1078 and the file size will be 1078 + 400 = 1478 bytes.
In typical images, edges characterize object boundaries and are therefore useful for segmentation, registration, and identification of objects in a scene.
Edge Detection is a technique used in image detection and identification fields. The
algorithm basically tries to identify (as the name suggests) edges in the image by looking
for color variations that are sharp in nature, thereby indicating the presence of an edge. In
specialized cases like Facial detection (think 'drawing a box around a face' like modern
digital cameras do), edge detection techniques are used to detect the presence of edges
that denote a face, 2 eyes, a nose, mouth, etc. Once these are detected with a certain
confidence, the system assumes a face and responds appropriately.
1.4.2 Types of Edges
All edges are locally directional. Therefore, the goal in edge detection is to find out what occurs perpendicular to an edge. The following is a list of commonly found edges.
Figure 1.10: Types of Edges (a) Sharp step (b) Gradual step (c) Roof (d) Trough
A Sharp Step, as shown in Figure 1.10(a), is an idealization of an edge. Since an image is always band-limited, this type of edge can never actually occur. A Gradual Step, as shown in Figure 1.10(b), is very similar to a Sharp Step, but it has been smoothed out; the change in intensity is not as quick or sharp. A Roof, as shown in Figure 1.10(c), is different from the first two edges: the derivative of this edge is discontinuous. A Roof can have a variety of sharpnesses, widths, and spatial extents. The Trough, also shown in Figure 1.10(d), is the inverse of a Roof.
1.4.3 Criteria for Edge Detection
There is a large number of edge detection operators available, each designed to be sensitive to certain types of edges. The quality of edge detection can be measured objectively against several criteria. Some criteria are proposed in terms of mathematical measurement; others are based on application and implementation requirements. In all cases, a quantitative evaluation of performance requires the use of images where the true edges are known.
A higher detection threshold will lead to fewer false edges, but it also reduces the number of true edges detected.
Noise sensitivity: A robust algorithm can detect edges in environments with a certain acceptable level of noise (Gaussian, uniform, or impulsive). In practice, an edge detector amplifies the noise at the same time as it detects edges. Strategic filtering, consistency checking, and post-processing (such as non-maximum suppression) are used to connect edge segments, reject noise, and suppress non-maximum edge magnitudes.
Speed and efficiency: The algorithm should be fast enough to be usable in an image processing system. An algorithm that allows recursive implementation or separable processing can greatly improve efficiency.
These criteria of edge detection help to evaluate the performance of edge detectors. Correspondingly, different techniques have been developed to find edges based upon the above criteria; they can be classified into linear and non-linear techniques.
1.4.4 Motivation behind Edge Detection
The purpose of detecting sharp changes in image brightness is to capture important
events and changes in properties of the world. For an image formation model,
discontinuities in image brightness are likely to correspond to:
a) Discontinuities in depth
b) Discontinuities in surface orientation
c) Changes in material properties
d) Variations in scene illumination
In the ideal case, the result of applying an edge detector to an image leads to a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, as well as curves that correspond to discontinuities in surface orientation. If the edge detection step is successful, the subsequent task of interpreting the information contents of the original image may therefore be substantially simplified. Edges extracted from non-trivial images are, however, often hampered by fragmentation (the edge curves are not connected), missing edge segments, false edges, etc., which complicate the subsequent task of interpreting the image data.
1.4.5 Edge Detection: A Non-Trivial Task
To illustrate why edge detection is not a trivial task, let us consider the problem of
detecting edges in the following one-dimensional signal. Here, we may intuitively say
that there should be an edge between the 4th and 5th pixels.
5   7   6   4   152   148   149
Figure 1.11: A one-dimensional signal with an intensity edge between the 4th and 5th pixels.
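A minimal C sketch of the naive approach, differencing adjacent samples and flagging any jump above a threshold, locates exactly this edge (the threshold of 50 is an arbitrary illustrative choice):

#include <stdio.h>

int main(void)
{
    int s[] = {5, 7, 6, 4, 152, 148, 149};
    int i, d, thresh = 50;
    for (i = 1; i < 7; i++) {
        d = s[i] - s[i-1];
        if (d < 0) d = -d;                /* absolute difference */
        if (d > thresh)
            printf("edge between pixel %d and pixel %d\n", i, i + 1);
    }
    return 0;
}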
1.9 SUMMARY
It is a critical field of study, which plays a vital role in the modern world, as it involves advanced use of science and technology. The advances in technology have created tremendous opportunities for vision systems and image processing. There is no doubt that the trend will continue into the future. From the above discussion we can conclude that this field has relatively more advantages than disadvantages and hence is very useful in varied branches.
Chapter 2
LITERATURE: A REVIEW
In image processing and computer vision, edge detection treats the localization of significant variations in a gray-level image and the identification of the physical and geometrical properties of the objects in the scene. Edge detection is a difficult issue. Many difficulties come from complex image contents, such as noise, varying contrast within an image, and orientation sensitivity.
Digital image processing allows one to enhance image features of interest while
attenuating detail irrelevant to a given application, and then extract useful information
about the scene from the enhanced image. Images are produced by a variety of physical
devices, including still and video cameras, x-ray devices, electron microscopes, radar, and
ultrasound, and used for a variety of purposes, including entertainment, medical, business
(e.g. documents), industrial, military, civil (e.g. traffic), security, and scientific. The goal
in each case is for an observer, human or machine, to extract useful information about the
scene being imaged. Traditional edge detection techniques, such as the Roberts operator, the Sobel operator, and the Laplacian of Gaussian operator, are widely used. However, most of the existing techniques are either very sensitive to noise or do not give satisfactory results in low-contrast areas.
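For reference, the Sobel operator named above can be sketched in C as follows; the function computes the gradient magnitude |Gx| + |Gy| at one interior pixel of a row-major grayscale image (function and parameter names are illustrative):

/* Sobel gradient magnitude at interior pixel (x, y) of a w-wide image;
   the result is compared against a threshold to decide if it is an edge. */
int sobel_mag(const unsigned char *img, int w, int x, int y)
{
    int gx = -img[(y-1)*w + (x-1)] + img[(y-1)*w + (x+1)]
             - 2*img[y*w + (x-1)] + 2*img[y*w + (x+1)]
             - img[(y+1)*w + (x-1)] + img[(y+1)*w + (x+1)];
    int gy = -img[(y-1)*w + (x-1)] - 2*img[(y-1)*w + x] - img[(y-1)*w + (x+1)]
             + img[(y+1)*w + (x-1)] + 2*img[(y+1)*w + x] + img[(y+1)*w + (x+1)];
    if (gx < 0) gx = -gx;
    if (gy < 0) gy = -gy;
    return gx + gy;
}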
A fuzzy theory based Edge Detector avoids these problems and is a better method for
edge information detection and noise filtering than the traditional methods. Edge
detection using fuzzy logic provides an alternative approach to detect edges. Edge
detection is one of the subjects of basic importance in image processing. The parts on
which immediate changes in grey tones occur in the images are called edges.
Benefiting from the direct relation between physical qualities of the materials and their
edges, these qualities can be recognized from edges. Because of these qualities, edge
detection techniques gain importance in terms of image processing. Edge detection
techniques transform images to edge images benefiting from the changes of grey tones in
the images. One of the most productive methods of finding final edges is to designate the immediate changes of grey level.
2.1.1 History of Edge Detection
In this section, work done in the area of edge detection is reviewed, with the focus on detecting the edges of digital images. Edge detection is a problem of
fundamental importance in image analysis. In typical images, edges characterize object
boundaries and are therefore useful for segmentation, registration, and identification of
objects in a scene. Edge detection of an image reduces significantly the amount of data
and filters out information that may be regarded as less relevant, preserving the important
structural properties of an image.
In 1997, Ng Geok See and Chan Khue Hiang proposed a technique for edge detection based on a neural network. A neural network has many processing elements joined together, usually organized into groups called layers. Training is provided to the neural network in supervised or unsupervised learning mode, to force the network to yield a particular result for a specific input.
In 1998, Zhengquan He and M. Y. Siyal proposed a new technique based on a neural network. Most of the existing techniques, like the Sobel operator, are effective in certain senses but require more computation time. In the proposed edge detection technique, a three-layer BP (back-propagation) neural network is employed to classify the edge elements in binary images into one of the predefined categories. To detect edges, the image is first binarized by choosing a threshold according to some optimal criterion, and the edge patterns of the binary image are classified into different categories. The neural network is trained on these patterns and on their noisy versions. After the network is trained, it can recognize an input pattern as the most similar pattern in the edge pattern bank. This technique is more flexible with respect to the edge structures in the image: it can extract not only straight lines but also corners and arc edges.
In 2005, Zhang, Zhao and Li Su proposed a technique based on the integer logarithm ratio of gray levels. In order to improve the ability of noise rejection, they proposed using the ratio of gray levels between two successive image points, rather than the difference of gray levels, to denote the variation in the gray levels. In this scheme, the division operation becomes a subtraction of the logarithmic values of the gray levels, which is more convenient for calculation.
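The idea can be sketched as follows; this is a plain floating-point illustration of how the ratio turns into a subtraction of logarithms, not the integer formulation of the cited work:

#include <math.h>

/* Variation between two successive gray levels measured as a ratio;
   log(g2/g1) = log(g2) - log(g1), so the division becomes a subtraction.
   The +1 offsets avoid taking the logarithm of zero. */
double log_ratio_variation(double g1, double g2)
{
    return fabs(log(g2 + 1.0) - log(g1 + 1.0));
}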
In 2005, Stamatia Giannarou and Tania Stathaki proposed a technique that combines the outputs of different edge detection operators in order to yield improved results for edge detection in an image, using Receiver Operating Characteristics (ROC) analysis. This technique uses a statistical approach to automatically form an optimum edge map by combining edge images from different detectors. The method is characterized by accurate and noise-free results. One possible concern regarding such techniques is the selection of the edge detectors to be combined.
In 2006, M. Hanmandlu, Rohan Raj Kalra and Vamsi Krishna Madasu proposed a fuzzy technique based on the Univalue Segment Assimilating Nucleus (USAN) area. The USAN characterizes the structure of the edge present in the neighborhood of a pixel; it can thus be considered a unique feature of the pixel, and it is fuzzified. This technique is best at yielding a large number of long edge segments. It is used in applications like face recognition and fingerprint identification, as it does not distort the shape of the image and is able to retain all the important edges. An appropriate fuzzification function and threshold selection are important for the success of the proposed edge detection algorithm.
Later on, a fast fuzzy edge detection technique was proposed. Heuristic membership functions, simple fuzzy rules, and fuzzy complements were used to develop new edge detectors. Then a fuzzy edge detector using entropy optimization was proposed. The proposed fuzzy edge detector involves two phases: global contrast intensification and local fuzzy edge detection. In the first phase, a modified Gaussian membership function is chosen to represent each pixel in the fuzzy plane. To realize fast and accurate detection of edges in blurry images, the Fast Multilevel Fuzzy Edge Detection (FMFED) algorithm was proposed. The FMFED algorithm first enhances the image contrast by means of a simple transformation function based on two image thresholds. Second, the edges are extracted from the enhanced image by a two-stage edge detection operator that identifies the edge candidates based on the local characteristics of the image and then determines the true edge pixels using an edge detection operator based on the extrema of the gradient values.
The goal of the edge detection process in a digital image is to determine the frontiers of all represented objects, based on automatic processing of the color or gray-level information in each pixel. Edge detection has many applications in image processing and computer vision, and is an indispensable technique in both biological and robot vision. The main objectives of edge detection in image processing are to reduce data storage while retaining the topological properties of the image, to reduce transmission time, and to facilitate the extraction of morphological outlines from the digitized image. In our research problem we have used a simple algorithm to find edges in the image, based on the colour of the image. The major stress will therefore be on the development of algorithms for improving the quality of detected edges.
2.2 INTRODUCTION TO C
C is a general-purpose programming language.
C was invented and first implemented by Dennis Ritchie, together with the Unix operating system, in 1972.
C is often called a middle-level computer language.
C is a structured language.
Data types supported by the C language include integer, float, double, character, etc.
2.2.2 Usage of C
C's primary use is for system programming, including implementing operating systems
and embedded system applications.
C has also been widely used to implement end-user applications, although as
applications became larger much of that development shifted to other, higher-level
languages.
One consequence of C's wide acceptance and efficiency is that the compilers,
libraries, and interpreters of other higher-level languages are often implemented in
C.
You will be able to read and write code for a large number of platforms, even microcontrollers.
2.2.3 Characteristics of C
Portability
It is easy to adapt software written for one type of computer or operating system to another type.
Structured programming language
It makes use of subroutines with their own temporary (local) variables.
Efficient memory control
It provides the concept of pointers (see the sketch after this list).
Various applications
It enjoys wide usage in all upcoming fields.
Lack of nested function definitions
Variables may be hidden in nested blocks.
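A minimal C sketch of the subroutine and pointer points above (names are illustrative):

#include <stdio.h>

/* A subroutine with its own temporary variable; the pointer parameter
   gives it direct access to the caller's memory. */
void increment(int *p)
{
    int step = 1;        /* temporary variable local to the subroutine */
    *p = *p + step;
}

int main(void)
{
    int n = 41;
    increment(&n);
    printf("%d\n", n);   /* prints 42 */
    return 0;
}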
Chapter 3
3.2 ALGORITHMS
3.2.1 Algorithm for Edge Detection
It is a very efficient algorithm through which we can detect the edges in an image. This algorithm takes an image as input and gives its edge-detected version as output. Let us now see how this algorithm works:
Firstly, we open the input file in read mode, pointed to by the pointer fr, and another file in write mode, pointed to by the pointer fw. The header of the file is copied into the file opened in write mode. After that, three bytes of the image at a time are read and stored in an array named pix[]. For a particular pix[], pixb[] holds the previous pixel, pixu[] the pixel in the row above, and pixd[] the pixel in the row below. The 19th and 20th bytes of the header give the number of columns in the image, calculated using the formula a + b*256, where a and b are the two bytes, each in the range 0-255 that one color byte can take. Similarly, the 23rd and 24th bytes of the header give the number of rows, calculated using the same formula. Two built-in functions are used, fseek() and ftell():
fseek() - It takes three parameters: the pointer to the file, the offset of the location to move to in the file, and lastly the whence (beginning, current, or end).
ftell() - It returns the current position of the file pointer.
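Their combined use for finding the size of a file, as done in the algorithms below, can be sketched as:

#include <stdio.h>

/* Return the size of an open file in bytes using fseek() and ftell(). */
long file_size(FILE *fr)
{
    long size;
    fseek(fr, 0L, SEEK_END);   /* move to the end of the file */
    size = ftell(fr);          /* current position = size in bytes */
    fseek(fr, 0L, SEEK_SET);   /* rewind for subsequent reads */
    return size;
}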
After that, the size of the file is calculated and stored in a variable named size. The three bytes stored earlier in pix[] are compared with the corresponding pixels in the upper row, in the lower row, and with the previous pixel. For this comparison we use a function named check(), which returns the value 1 if no change is found and 0 if a change is found, where a change means an edge is detected. Once an edge is detected we write the value 200 (white) into the file opened in write mode pointed to by fw; otherwise we write the value 0 (black), indicating that no change was found. Finally, the result is stored in the output file EDGE_all.bmp.
The algorithm is as follows:
EDGE(Image)
1. fr <- open Image for reading
2. fw <- open the output file EDGE_all.bmp for writing
3. for j=0 to 61 { c=fgetc(fr); fputc(c,fw); }                // copy header
4. read bytes 18-19: wd=a+b*256                               // no of columns
5. read bytes 22-23: ht=a+b*256                               // no of rows
6. fseek(fr,0,SEEK_END); size=ftell(fr)                       // file size
7. fseek(fr,62,SEEK_SET)
8. while(k<=size)
9.    cur=ftell(fr);
10.   for j=0 to 2 { pix[j]=fgetc(fr); k=k+1; }
11.   if(k-3==0) for j=0 to 2 pixb[j]=pix[j]                  // if first pixel
      if(k/(3*wd)==0) for j=0 to 2 pixu[j]=pix[j]             // if first row
      else { fseek(fr,cur-(wd*3),SEEK_SET); for j=0 to 2 pixu[j]=fgetc(fr) }
12.   if(k>(wd*(ht-1))*3) for j=0 to 2 pixd[j]=pix[j]         // if last row
      else { fseek(fr,cur+(wd*3),SEEK_SET); for j=0 to 2 pixd[j]=fgetc(fr) }
13.   fseek(fr,cur+3,SEEK_SET);
14.   eql=check(pix,pixb,pixu,pixd);
15.   if(eql) { for j=0 to 2 fputc(0,fw); }                   // no change: black
      else { for j=0 to 2 fputc(200,fw); }                    // edge: white
16.   for j=0 to 2 pixb[j]=pix[j];
3.2.2 Algorithm for Blur Image
As in edge detection, the input file is opened in read mode (fr) and an output file in write mode (fw), the header is copied, the number of columns and rows is calculated from the 19th-20th and 23rd-24th header bytes using the formula a + b*256, and the built-in functions fseek() and ftell() are used to find the size of the file, which is stored in a variable named size. The three bytes stored in pix[] are then compared with the pixels in the upper row, the lower row, and the previous pixel.
For each pixel, we first take the red byte and calculate the mean of the four red bytes (the pixel itself and its three neighbours), then replace the central pixel's red byte with the calculated mean. The same process is repeated for the green and blue bytes of the pixel. Proceeding in this manner, the blurred image is obtained.
BLUR(Image)
1. fr <- open Image for reading
2. fw <- open the output file BLUR.bmp for writing
3. for j=0 to 61 { c=fgetc(fr); fputc(c,fw); }                // copy header
4. read bytes 18-19: wd=a+b*256; read bytes 22-23: ht=a+b*256
5. fseek(fr,0,SEEK_END); size=ftell(fr); fseek(fr,62,SEEK_SET)
6. while(k<=size)
7.    cur=ftell(fr);
8.    for j=0 to 2 { pix[j]=fgetc(fr); k=k+1; }
9.    if(k-3==0) for j=0 to 2 pixb[j]=pix[j]                  // if first pixel
10.   read pixu[] from the row above and pixd[] from the row below, as in EDGE
11.   fseek(fr,cur+3,SEEK_SET);
12.   eql=check(pix,pixb,pixu,pixd);
13.   if(eql) { for j=0 to 2 fputc(pix[j],fw); }              // no change: keep pixel
      else { for j=0 to 2 fputc((pix[j]+pixb[j]+pixu[j]+pixd[j])/4,fw); }  // average
14.   for j=0 to 2 pixb[j]=pix[j];
Check(pix, pixb, pixu, pixd)
1. for i=0 to 2
2.    if(pix[i]!=pixb[i] OR pix[i]!=pixu[i] OR pix[i]!=pixd[i])
3.       return 0                     // a change is found (edge)
4. return 1                          // no change found
3.2.3 Algorithm for Grayscale Image
Grayscale is a very efficient algorithm through which we can convert a colour image into a grayscale image. This algorithm takes an image as input and gives its grayscale version as output. Let us now see how this algorithm works.
Firstly, we open the input file in read mode, pointed to by the pointer fr, and another file in write mode, pointed to by the pointer fw. The header of the file is copied into the file opened in write mode. After that, we read three bytes of the image at a time and store them in an array named pix[]. Each pixel of the image consists of three colour bytes, each in the range 0-255. Finally, for each pixel the mean of the three colours is taken and stored in a variable mean, and a grayscale image is obtained by writing this mean for all three bytes.
GRAYSCALE(Image)
1. fr <- address of the input image file
2. fw <- address of the output file
3. for j=0 to 61 { c=fgetc(fr); fputc(c,fw); }   // copy header
4. while(!feof(fr))
      for j=0 to 2 pix[j]=fgetc(fr);
      mean=(pix[0]+pix[1]+pix[2])/3;
      for j=0 to 2 fputc(mean,fw);
3.2.4 Algorithm for Negative Image
In order to obtain the negative of an image, the input file is opened in read mode, pointed to by the pointer fr, and another file is opened in write mode, pointed to by the pointer fw. The header of the file is copied into the file opened in write mode. After that, we read three bytes of the image at a time and store them in an array named pix[]. Each pixel of the image consists of three colour bytes, each in the range 0-255. For each pixel the mean of the three colours is taken and stored in a variable mean. After that, we subtract the mean from 255, so that light colours appear in the dark range and dark colours appear in the light range. Ultimately, the negative of the image is stored in the file opened in write mode. The algorithm is as follows:
NEGATIVE(Image)
1. fr <- address of the input image file
2. fw <- address of the output file
3. for j=0 to 61 { c=fgetc(fr); fputc(c,fw); }   // copy header
4. while(!feof(fr))
      for j=0 to 2 pix[j]=fgetc(fr);
      mean=(pix[0]+pix[1]+pix[2])/3;
      mean=255-mean;
      for j=0 to 2 fputc(mean,fw);
3.3 SPECIFICATIONS
Software Requirements:
One of the following Operating Systems:
Windows(R) XP Professional with Service Pack 1.
Windows 2000 Professional with Service Pack 2 or higher.
Windows NT(R) Workstation or Server Version 4.0 with Service Pack 6a or higher.
Red Hat, Version 7.2.
Red Hat, Version 8.0.
Turbo C (version 1.5 or higher)
Hardware Requirements:
Intel(R) Pentium(R) II processor minimum
(Pentium III 500 MHz or higher is recommended)
256 MB RAM minimum (512 MB RAM is recommended)
Display resolution:
Integration testing:
After all the modules are ready and duly tested, they have to be integrated into the application. This integrated application was again tested, first with the test data and then with the actual data.
Parallel testing:
The third in the series of tests before handing the system over to the user is the parallel processing of the old and the new system. At this stage, complete and thorough testing is done, and anything that goes wrong is sorted out. This provides better practical support to the persons using the system for the first time, who may be uncertain or even nervous about using it.
The testing will be performed considering the following points:
1) Clerical procedures for the collection and disposal of results
2) Flow of data
3) Accuracy of output
4) Software testing, which involves testing all the programs together
5) Incomplete data formats
6) Halts due to various reasons, and the restart procedures
7) Range of items and incorrect formats
8) Invalid combinations of data records
3.5 SUMMARY
This chapter provides the design of the project and the various algorithms used in it. It also states the software and hardware specifications of the project, plus the various testing and implementation issues.
Chapter 4
RESULT AND DISCUSSION
4.1 FRONT PAGE
This is the first page that opens whenever you want to process an image. This page displays the project name (Edge Detection Technique in Image).
An output image is formed which contains all the edges of the input image, named EDGE_all.bmp.
An output image is formed which is the grayscale form of the input image, named BANDW.bmp.
4.8 SUMMARY
This chapter gives an outlook on the implemented project. It shows the Graphical User Interface (GUI) provided to the user and the various options the user can access in the project. This chapter also acts as a guide for the usage of the project.
Chapter 5
CONCLUSION AND FUTURE WORK
5.3 SUMMARY
This chapter summarizes what has already been done in the project, i.e., it tells the present scope of the project and the areas it covers. It also states what could be added in the future to make it more generalised, useful, and efficient.
APPENDIX I
SOURCE CODE
INPUT HANDLER
#include<conio.h>
#include<stdio.h>
int main()
{
int i;
char image[80];   /* buffer for the path of the input image */
void EDGE(char *);
void BLUR(char *);
void GRAYSCALE(char *);
void NEGATIVE(char *);
clrscr();
textcolor(7);
textbackground(0);
gotoxy(30,10);
cprintf("EDGE DETECTION IN IMAGE");
getch();
clrscr();
window(10,5,70,40);
textcolor(3);
textbackground(1);
clrscr();
while(1)
{
cprintf("PLEASE ENTER THE PATH OF THE IMAGE: "); //PATH OF THE IMAGE
scanf("%s",image);
gotoxy(1,3);
cprintf("PLEASE ENTER YOUR CHOICE\n\r1) DETECT THE EDGES\n\r"); //MENU
cprintf("2) BLUR THE IMAGE\n\r3) BLACK N WHITE THE IMAGE\n\r");
cprintf("4) NEGATIVE OF THE IMAGE\n\r");
scanf("%d",&i);                    /* read the user's choice */
switch(i)                          /* dispatch to the selected routine */
{
case 1: EDGE(image); break;
case 2: BLUR(image); break;
case 3: GRAYSCALE(image); break;
case 4: NEGATIVE(image); break;
}
getch();
clrscr();
}
}
EDGE FUNCTION
void EDGE(char *image)
{
int i,c,eql,j=0,pix[3],pixb[3],pixu[3],pixd[3],a,b;
FILE *fr,*fw;
long wd,ht,k=0,cur,size;
clrscr();
fr=fopen(image,"rb");
fw=fopen("EDGE_all.bmp","wb");
for(i=0;i<62;i++){ c=fgetc(fr); fputc(c,fw);} // copy header
fseek(fr,18L,SEEK_SET);
//no of COLUMNS
a=fgetc(fr);b=fgetc(fr);
wd=a+b*256;
fseek(fr,22L,SEEK_SET);
//NO OF ROWS
a=fgetc(fr);b=fgetc(fr);
ht=a+b*256;
fseek(fr,0L,SEEK_END);
size= ftell(fr);
// file size
fseek(fr,62L,SEEK_SET);
while(k<=size)
{
cur=ftell(fr);
for(j=0;j<3;j++){pix[j]=fgetc(fr); k++;}
if((k-3)==0) for(j=0;j<3;j++) pixb[j]=pix[j];            // if first pixel
if(k/(3*wd)==0) for(j=0;j<3;j++) pixu[j]=pix[j];         // if first row
else { fseek(fr,cur-(wd*3),SEEK_SET); for(j=0;j<3;j++) pixu[j]=fgetc(fr); }
if(k>(wd*(ht-1))*3) for(j=0;j<3;j++) pixd[j]=pix[j];     // if last row
else { fseek(fr,cur+(wd*3),SEEK_SET); for(j=0;j<3;j++) pixd[j]=fgetc(fr); }
fseek(fr,cur+3,SEEK_SET);
eql=check(pix,pixb,pixu,pixd);
if(eql) {for(j=0;j<3;j++)fputc(0,fw);}                   // no change: black
else {for(j=0;j<3;j++)fputc(200,fw);}                    // edge: white
for(j=0;j<3;j++) pixb[j]=pix[j];
}
fclose(fr);
fclose(fw);
}
int check(int p[],int pb[],int pu[],int pd[])
{
int i;
/* returns 1 if no change is found, 0 if any byte differs (an edge) */
for(i=0;i<3;i++)
{
if(*(p+i)!=*(pb+i)) return 0;
if(*(p+i)!=*(pu+i)) return 0;
if(*(p+i)!=*(pd+i)) return 0;
}
return 1;
}
BLUR FUNCTION
void BLUR(char *image)
{
int i,c,eql,j=0,pix[3],pixb[3],pixu[3],pixd[3],a,b;
FILE *fr,*fw;
long wd,ht,k=0,cur,size;
clrscr();
fr=fopen(image,"rb");
fw=fopen("BLUR.bmp","wb");
for(i=0;i<62;i++){ c=fgetc(fr); fputc(c,fw);}            // copy header
fseek(fr,18L,SEEK_SET); a=fgetc(fr); b=fgetc(fr); wd=a+b*256;   // no of COLUMNS
fseek(fr,22L,SEEK_SET); a=fgetc(fr); b=fgetc(fr); ht=a+b*256;   // NO OF ROWS
fseek(fr,0L,SEEK_END);
size= ftell(fr);
// file size
fseek(fr,62L,SEEK_SET);
while(k<=size)
{ cur=ftell(fr);
for(j=0;j<3;j++){pix[j]=fgetc(fr); k++;}
if((k-3)==0) for(j=0;j<3;j++) pixb[j]=pix[j];            // if first pixel
if(k/(3*wd)==0) for(j=0;j<3;j++) pixu[j]=pix[j];         // if first row
else { fseek(fr,cur-(wd*3),SEEK_SET); for(j=0;j<3;j++) pixu[j]=fgetc(fr); }
if(k>(wd*(ht-1))*3) for(j=0;j<3;j++) pixd[j]=pix[j];     // if last row
else { fseek(fr,cur+(wd*3),SEEK_SET); for(j=0;j<3;j++) pixd[j]=fgetc(fr); }
fseek(fr,cur+3,SEEK_SET);
eql=check(pix,pixb,pixu,pixd);
if(eql) {for(j=0;j<3;j++)fputc(pix[j],fw);}              // no change: keep pixel
else {for(j=0;j<3;j++)fputc((pix[j]+pixb[j]+pixu[j]+pixd[j])/4,fw);}  // average of 4
for(j=0;j<3;j++) pixb[j]=pix[j];
}
fclose(fr);
fclose(fw);
}
GRAYSCALE FUNCTION
void GRAYSCALE(char *image)
{
int i,c,j,pix[3],mean;
FILE *fr,*fw;
clrscr();
fr=fopen(image,"rb");
fw=fopen("BANDW.bmp","wb");
for(i=0;i<62;i++){ c=fgetc(fr); fputc(c,fw); }           // copy header
while(!feof(fr))
{
for(j=0;j<3;j++)pix[j]=fgetc(fr);
mean=(pix[0]+pix[1]+pix[2])/3;                           // mean of the three color bytes
for(j=0;j<3;j++)fputc(mean,fw);
}
fclose(fr);
fclose(fw);
}
NEGATIVE FUNCTION
void NEGATIVE(char *image)
{
int i,c,eql,j=0,pix[3],mean;
FILE *fr,*fw;
clrscr();
fr=fopen(image,"rb");
fw=fopen("NEGATIVEATIVE.bmp","wb");
for(i=0;i<62;i++)
// copy header
{
c=fgetc(fr); fputc(c,fw);
}
while(!feof(fr))
{
for(j=0;j<3;j++)pix[j]=fgetc(fr);
mean=(pix[0]+pix[1]+pix[2])/3;
mean=255-mean;                                           // invert: light becomes dark
for(j=0;j<3;j++)fputc(mean,fw);
}
fclose(fr);
fclose(fw);
}