
Course: Software metrics


Faculty: prof. Ramathunnisa U

TEAM MEMBERS:
A. NOMIKA-19MIS0417
J. SOWMYA-19MIS0288
A. DUHITHA HARSHA-19MIS0271
M.V. KRISHNAMANAIDU-19MIS0193

Table of Contents:

Abstract
1. Introduction
1.1 Overview
1.2 Work Breakdown
1.3 Gantt Chart
2. Project Resource Requirements
2.1 Software Requirements
2.2 Hardware Requirements
3. Literature Survey
4. System Architecture
5. Use Case Diagram
6. Module Description
7. Software Metrics (used by Project Manager, Team Manager, Developer and Tester)
8. Work done by each member and the software metrics used
9. Output & Screenshots
10. Conclusion and Future Work
References

1. Introduction:
1.1 Overview
The human eye has a natural tendency to recognize shapes based on prior knowledge, and vision therefore plays an important role in human learning. We can correlate this and apply the same operation in computers to help software recognize shapes. Many existing systems recognize shapes based on their colour and size, but since different shapes may have identical colour and size values, these parameters are not sufficient to identify and recognize shapes. In this project, a new system is proposed that recognizes shapes based on the shape's edges in order to increase accuracy. We use the Canny edge detection method, which finds the edges of a shape by looking for local maxima of the image gradient. Shape recognition finds application in fingerprint analysis, robotics, handwriting mapping, remote sensing and many other fields. In pattern recognition systems, recognizing and identifying shapes is one of the significant research areas, and the main focus of pattern recognition is the classification of objects.
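Our implementation (Section 9) is in MATLAB; as a rough illustration of the core idea described above, that edges appear where the image gradient peaks, here is a minimal Python/NumPy sketch. The toy image and the Sobel kernels are only for illustration, not part of the project code:

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Approximate the image gradient with Sobel kernels; edges lie
    where the gradient magnitude peaks (the idea behind Canny)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-derivative kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_gradient_magnitude(img)
```

Running this, the gradient magnitude is zero in the flat regions and peaks on the two columns straddling the intensity step, which is exactly where an edge detector should respond.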

1.2 Work Breakdown

Hierarchy:

Team Manager: DUHITHA
Developer 1: NOMIKA
Developer 2: SOWMYA
Tester: KRISHNA

1.3 Gantt Chart

2. Project Resource Requirements:

2.1 Software Requirements:


 IDE : MATLAB
 Project Management : INSTAGANTT

2.2 Hardware Requirements


 Processor : Intel Core i5
 Hard Disk Drive (HDD) : 2-4 GB
 RAM : 2-4 GB

3. Literature Survey:

1. Author: Krystian Mikolajczyk, Andrew Zisserman, Cordelia Schmid
Title: Shape recognition with edge-based features
Review: Introduces a new edge-based feature detector that is invariant to similarity transformations. The features are localized on edges, and a neighborhood is estimated in a scale-invariant manner.
Methodology used: Lowe's SIFT method

2. Author: Mohd Firdaus Zakaria, Hoo Seng Choon, and Shahrel Azmin Suandi
Title: Object Shape Recognition in Image for Machine Vision Application
Review: This is analogous to machine vision tasks such as shape recognition, which are important nowadays. The paper proposes a shape recognition method in which circle, square and triangle objects in an image are recognized by the algorithm. The proposed method uses the intensity values of the input image, thresholded by Otsu's method, to obtain a binary image.
Methodology used: Parameter estimation algorithm

3. Author: Jelmer de Vries
Title: Object Recognition: A Shape-Based Approach using Artificial Neural Networks
Review: The approach is shape-based and works towards recognition under a broad range of circumstances, from varied lighting conditions to affine transformations. The main emphasis is on the neural elements that allow the system to learn to recognize objects about which it has no prior information.
Methodology used: SUSAN algorithm, insertion sort algorithm

4. Author: Ohtani, Kozo; Baba, Mitsuru; Konishi, Tadataka
Title: Position and posture measurements and shape recognition of columnar objects using an ultrasonic sensor array and neural networks
Review: In most past methods of this kind, the characteristic quantities have been based on either time-of-flight methods or acoustic holographic methods. With these methods, measuring and recognizing the width and depth directions simultaneously at high resolution has been difficult in principle.
Methodology used: Time-of-flight methods, acoustic holographic methods

5. Author: Das, Manas Ranjan and Barla, Sunil
Title: Object Shape Recognition
Review: The approach is to classify some of the common objects around us and decide whether they belong to any geometric shape or not. The shape of an object can be represented by a feature space that may then be used for recognizing the object's shape.
Methodology used: Corner detection method, signature method and chain code method

6. Author: El Abbadi, Nidhal & Saadi, Lamis
Title: Automatic Detection and Recognize Different Shapes in an Image
Review: This is analogous to machine vision tasks such as shape recognition, an important field nowadays. The paper introduces a new approach for recognizing two-dimensional shapes in an image that also recognizes the shape type.
Methodology used: Statistical method, structural method; contrast-limited adaptive histogram equalization (CLAHE)

7. Author: Pedro F. Felzenszwalb
Title: Representation and Detection of Shapes in Images
Review: The methods revolve around a particular shape representation based on the description of objects using triangulated polygons. This representation is similar to the medial axis transform and has important properties from a computational perspective.
Methodology used: Segmentation algorithm, generic optimization methods

8. Author: Gulce Bal, Julia Diebold, Erin Wolf Chambers, Ellen Gasparovic, Ruizhen Hu, Kathryn Leonard, Matineh Shaker, and Carola Wenk
Title: Skeleton-Based Recognition of Shapes in Images via Longest Path Matching (in Research in Shape Modeling)
Review: Presents a novel image recognition algorithm, based on the Blum medial axis, that identifies shape information present in unsegmented input images.
Methodology used: Geometric algorithms, SAT-EDF

9. Author: Jaruwan Toontham and Chaiyapon Thongchaisuratkrul
Title: An Object Recognition and Identification System Using the Hough Transform Method
Review: This paper presents an object recognition and identification system using the Hough Transform method. The process starts with images imported into the system by webcam; image edges are detected by a fuzzy method, the object is recognized by the Hough Transform, and the objects are separated by a robot arm.
Methodology used: Hough Transform method and Sobel edge detection algorithm

10. Author: A.P. Ashbrook and N.A. Thacker
Title: Algorithms For 2-Dimensional Object Recognition
Review: The representation of arbitrary shapes for the purposes of visual recognition is an unsolved problem. The task of representation is intimately constrained by the recognition process, and one cannot be solved without some solution for the other. The authors have already done some work on the use of an associative neural network system for hierarchical pattern recognition of the sort that may ultimately be useful for generic object recognition.
Methodology used: Stereo matching algorithm and thinning algorithm

4. System Architecture:

5. Use case diagram:

6. Module Description:

Image Pre-processing module:


The pre-processing performed well, as expected, on the black-and-white images. For real colour images performance is of course less exact, especially for noisy or complex images. Looking at the shapes abstracted from different images of the same object, the pre-processing component performed rather well in terms of consistency. The quality of the edges is increased by applying differentiation.

Image Acquisition module:


This module captures an image and adds it to the dataset used to identify the shape in the image.

Morphological Processing module:


This module performs morphological dilation and erosion. Morphology is a broad set of image processing operations that process images based on shape. Morphological operations apply a structuring element to an input image, creating an output image of the same size. The goal is to remove imperfections while accounting for the form and structure of the objects in the image.
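To make the dilation/erosion idea concrete, here is a small pure-NumPy sketch. The 3x3 structuring element and the toy binary image are invented for illustration; the project itself relies on MATLAB's built-in morphological operations:

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: a pixel becomes 1 if the structuring element,
    centred on it, overlaps any foreground (1) pixel of the input."""
    h, w = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)  # zero-pad so the window always fits
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            region = padded[i:i + 2 * k + 1, j:j + 2 * k + 1]
            out[i, j] = 1 if np.any(region[se == 1]) else 0
    return out

def erode(img, se):
    """Binary erosion: a pixel stays 1 only if the structuring element,
    centred on it, fits entirely inside the foreground."""
    h, w = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            region = padded[i:i + 2 * k + 1, j:j + 2 * k + 1]
            out[i, j] = 1 if np.all(region[se == 1]) else 0
    return out

se = np.ones((3, 3), dtype=int)      # 3x3 structuring element
img = np.zeros((7, 7), dtype=int)
img[2:5, 2:5] = 1                    # 3x3 foreground square
opened = dilate(erode(img, se), se)  # erosion then dilation = opening
```

Opening (erosion followed by dilation) removes specks smaller than the structuring element while restoring shapes that survive the erosion, which is exactly the "remove imperfections while keeping structure" goal stated above.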

Feature extraction module:


Feature extraction reduces the large input data to a smaller representation so that it takes less time to process, while the extracted features still carry the important information. We use the Canny edge detector to extract the edges. Feature extraction can be based on morphology, colour, edges, texture, etc.

Edge Detection module:


This module classifies the shape in an image based on measures such as area and perimeter. A software environment was designed to test and use the proposed method, and to evaluate its speed and accuracy.

7. Software Metrics:

Metrics used by Project and Team Managers:

1. Productivity Metrics
These metrics can be used to track and measure how efficiently our team gets its tasks done. They are used to manage and improve the performance of our project as well as to highlight where we need to improve. Some productivity metrics used in our project are as follows:
Planned-to-done ratio: The planned-to-done ratio calculates what percentage of the assigned tasks were completed adequately. It also helps in tracking whether the project is getting done the way we planned it.
Effort per team member: This metric helps us assess the effort of each team member, which in turn helps determine the overall productivity of the project and the team.

2. Cost Effectiveness metrics:


Cost effectiveness is a category of metrics that are used to measure the results of strategies,
programs, projects and operations where benefits are non-financial. These metrics will help
the project manager to find the open-source tools by comparing the cost of tools and
strategies.

3. Maintainability Index:
Calculates an index value between 0 and 100 that represents the relative ease of maintaining the code; a high value means better maintainability. Colour-coded ratings can be used to quickly identify trouble spots in the code: a green rating (20 to 100) indicates good maintainability, a yellow rating (10 to 19) indicates moderate maintainability, and a red rating (0 to 9) indicates low maintainability.
Maintainability Index = 171 - 5.2 * ln(Halstead Volume) - 0.23 * (Cyclomatic Complexity) - 16.2 * ln(Lines of Code)
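The index and its rating bands are easy to script; a minimal Python sketch, evaluated with the Halstead volume, cyclomatic complexity and line count measured for our code later in this report:

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic Maintainability Index; higher is easier to maintain."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

def rating(mi):
    """Colour-coded bands described above."""
    if mi >= 20:
        return "green"
    if mi >= 10:
        return "yellow"
    return "red"

# Values measured for our MATLAB source: volume 6480.31, complexity 13, 110 LOC.
mi = maintainability_index(6480.31, 13, 110)
```

With these inputs the index comes out at roughly 46, which falls in the green band, matching the calculation shown in the Metrics Calculation section.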

4. Quality metrics:
A major challenge in software maintenance is understanding the existing code, and this is where code quality metrics can have a big impact. These metrics also help to improve project efficiency.

5. Defect detection efficiency:
This metric helps us assess the performance and productivity of the tester in order to ensure the quality of the product.
Defect detection efficiency = (Number of defects detected / Total number of defects) * 100
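The formula is a one-liner; a small Python sketch, evaluated with the morphological-processing module figures reported later in this document:

```python
def defect_detection_efficiency(defects_detected, total_defects):
    """Percentage of all known defects that testing caught."""
    return defects_detected / total_defects * 100

# Morphological processing module: 2 of 3 defects caught by testing.
dde = defect_detection_efficiency(2, 3)
```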

Metrics used by Developers:

1. Lines of Code (LOC):
Lines of code can be used by developers to measure the size of the code developed. This in turn helps the developer estimate related parameters such as the number of lines of code developed per day and the effort to develop them.

2. Cyclomatic Complexity:
This metric indicates the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code, and it gives the developer an idea of the complexity of the code to be developed.
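Two equivalent ways of computing cyclomatic complexity can be sketched in a few lines of Python (the 9-edge/8-node graph in the comment is a made-up example, not our project's control-flow graph):

```python
def cyclomatic_complexity_from_graph(edges, nodes, components=1):
    """McCabe's definition: V(G) = E - N + 2P for a control-flow graph
    with E edges, N nodes and P connected components."""
    return edges - nodes + 2 * components

def cyclomatic_complexity_from_predicates(predicate_nodes):
    """Equivalent shortcut for structured code with binary decisions:
    V(G) = d + 1, where d is the number of predicate (decision) nodes."""
    return predicate_nodes + 1

# Our MATLAB source, analysed later in this report, has 12 predicate nodes.
vg = cyclomatic_complexity_from_predicates(12)

# Hypothetical graph: 9 edges, 8 nodes, 1 component -> V(G) = 3.
vg_graph = cyclomatic_complexity_from_graph(9, 8)
```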

3. Function point metrics:


This provides a standardized method for measuring the various functions of a software
application. It measures the functionality from the user's point of view, that is, on the basis of
what the user requests and receives in return. This would be helpful for the developer to
develop the code efficiently.

4. Efficiency:

This metric helps the developer assess the efficiency of the developed code. It gives a clear picture of whether the code provides accurate results according to the user's needs and also evaluates the code's performance.

5. Time complexity:
Time complexity is the computational complexity that describes the amount of computer time
it takes to run an algorithm. Thus, the amount of time taken and the number of elementary
operations performed by the algorithm are taken to differ by at most a constant factor.

Metrics Used by Testers
1. Code Coverage:
Code coverage is a metric that helps testers understand how much of the source code is exercised by tests. This in turn helps to assess the quality of the test suite and to find bugs.
Code Coverage Percentage = (Number of lines of code executed by a testing algorithm / Total number of lines of code in a system component) * 100.

2. Defect Density:
Defect density is the number of confirmed defects in a software/module during a specific period of operation. This metric helps the testers find the density of bugs in the developed code.

3. Portability:
Portability measures how usable the same software is in different environments. It relates to
platform independency. There isn’t a specific measure of portability. But there are several
ways you can ensure portable code. It’s important to regularly test code on different platforms,
rather than waiting until the end of development.

4. Bug Find Rate:

One of the most important metrics used to track the testing effort is the bug find rate. It measures the number of defects/bugs found by the team during the testing process.

5. Accuracy:

This metric helps the tester to check and assess the accuracy of the developed code by giving
various sample data. This helps to calculate the accuracy rate of the software. In our project,
we take different shapes of different parameters and calculate the efficiency of the output.

6. Severity of the defects:

By measuring the severity of defects, the developers get a clear picture of the impact of each defect on the quality of the application.

Metrics Calculation:
TEAM A: DUHITHA AND SOWMYA
TEAM B: NOMIKA AND KRISHNA

1. Planned-to-done Ratio:

Month: September
Team A:
  Planned: 1. Literature survey (4 papers); 2. Module description and analysis; 3. Requirements gathering and analysis; 4. Algorithm analysis
  Done: 1. Literature survey (4 papers); 2. Module description and analysis; 3. Requirements gathering and analysis
  Planned-to-done ratio: 4:3
Team B:
  Planned: 1. Literature survey (4 papers); 2. Use case diagram; 3. Requirements gathering and analysis; 4. Architecture diagram
  Done: all four planned activities
  Planned-to-done ratio: 4:4
Testers:
  Planned: 1. Literature survey (2 papers); 2. Analysis of literature survey
  Done: both planned activities
  Planned-to-done ratio: 2:2

Month: October
Team A:
  Planned: 1. Development of image acquisition module; 2. Development of morphological processing module; 3. Algorithm analysis; 4. Gather software metrics for developers
  Done: 1. Development of image acquisition module; 2. Development of morphological processing module; 3. Algorithm analysis; 4. Find metrics for developers
  Planned-to-done ratio: 4:4
Team B:
  Planned: 1. Development of image pre-processing module; 2. Development of direction detection module; 3. Algorithm analysis; 4. Find metrics for managers; 5. Gather software metrics for testers
  Done: 1. Development of image pre-processing module; 2. Development of direction detection module; 3. Algorithm analysis; 4. Find metrics for managers
  Planned-to-done ratio: 5:4
Testers:
  Planned: 1. Find errors in developed modules; 2. Report errors; 3. Gather software metrics for testers
  Done: 1. Find errors in developed modules; 2. Report errors
  Planned-to-done ratio: 3:2

Month: November
Team A:
  Planned: 1. Development of edge detection module; 2. Evaluate metrics of manager; 3. Evaluate metrics of developers
  Done: 1. Development of edge detection module; 2. Evaluate metrics of manager
  Planned-to-done ratio: 3:2
Team B:
  Planned: 1. Development of edge detection module; 2. Evaluate metrics of manager; 3. Evaluate metrics of developers
  Done: 1. Development of edge detection module; 2. Evaluate metrics of manager
  Planned-to-done ratio: 3:2
Testers:
  Planned: 1. Gather software metrics for testers; 2. Evaluate metrics of developers; 3. Test the source code
  Done: 1. Gather software metrics for testers; 2. Test the source code
  Planned-to-done ratio: 3:2

Month: November
Team A:
  Planned: 1. Evaluate metrics of developers; 2. Documentation; 3. Evaluate accuracy metrics
  Done: all three planned activities
  Planned-to-done ratio: 3:3
Team B:
  Planned: 1. Evaluate metrics of developers; 2. Documentation; 3. Evaluate accuracy metrics
  Done: all three planned activities
  Planned-to-done ratio: 3:3
Testers:
  Planned: 1. Evaluate test metrics; 2. Documentation; 3. Evaluate accuracy metrics
  Done: all three planned activities
  Planned-to-done ratio: 3:3

2. Effort per Team member:

Team Member | Activities assigned | Activities performed | Weeks worked
DUHITHA (Project manager) | 10 | 10 | 13 weeks
NOMIKA (Developer) | 12 | 12 | 13 weeks
SOWMYA (Developer) | 7 | 7 | 13 weeks
KRISHNA (Tester) | 6 | 6 | 10 weeks

3. Maintainability Index:
Maintainability Index = 171 - 5.2 * ln(Halstead Volume) - 0.23 * (Cyclomatic Complexity) - 16.2 * ln(Lines of Code)
= 171 - 5.2 * ln(6480.31) - 0.23 * (13) - 16.2 * ln(110)
= 171 - (5.2 * 8.7765) - 2.99 - (16.2 * 4.7)
= 171 - 45.6378 - 2.99 - 76.14
= 46.2322 (good maintainability)

4. Defect detection efficiency:
This metric will help us to assess the performance and productivity of the tester in order to
ensure the quality of the product.
Defect detection efficiency = (Number of defects detected / Total Number of defects) *100

Module | Defects detected | Total defects | Defect detection efficiency
Image acquisition module | 1 | 1 | 100%
Morphological processing module | 2 | 3 | 66.67%
Image pre-processing module | 1 | 2 | 50%
Direction detection module | 1 | 1 | 100%
Edge detection module | 0 | 0 | 100%

5. Lines of Code:

Manual approach: 110 lines
Halstead approach:
Operators Number of occurrences Operators Number of occurrences
Clear all 1 size 2
clc 1 for 8
; 57 if 4
= 45 < 11
imread 1 + 14
() 120 end 12
‘’ 4 zeros 3
: 9 >= 9
\ 5 > 9
. 1 && 9
, 115 || 12
figure 4 elseif 8
imshow 3 colorbar 1
rgb2gray 1 .^ 2
double 1 sqrt 1
[] 7 - 16
/ 3 == 8
.* 3 max 8
conv2 3 uint8 1
atan2 1 % 15
* 3
imagesc 1

U1(Number of distinct operators in the program) =42
N1(Total number of occurrences of operators in the program) =542

Operands Number of Operands Number of


occurrences occurrences
img 7 arah 24
strings 72 180 1
T_Low 4 pi 1
T_High 12 pan 8
0.075 1 leb 8
0.175 1 i 64
B 4 j 64
2 11 360 1
4 8 arah2 10
5 4 22.5 2
9 4 157.5 2
12 4 202.5 2
15 1 337.5 2
1 28 360 1
159 1 67.5 2
A 5 247.5 2
KGx 2 45 3
KGy 2 112.5 2
-1 4 295.5 2
0 12 90 3
-2 2 8 1
Filtered_X 3 135 3
Filtered_Y 3 255 1
magnitude 2 magnitude2 18
BW 18 T_res 5
edge_final 2

U2(Number of distinct operands in the program) = 51


N2(Total number of occurrences of operands in the program) = 449

 Program vocabulary: µ = µ1 + µ2 = 42 + 51 = 93
 Program length: N = N1 + N2 = 542 + 449 = 991
 Program volume: V = N * log2(µ) = 6480.31
 Program level: L = (2/µ1) * (µ2/N2) = 0.0054
 Program difficulty: D = 1/L = 184.88
 Effort: E = V * D = 1198094.29
 Programming time: T = E/β = E/18 = 66560.79 seconds (β = 18, Stroud's number)
 Program bugs: B = E^(2/3) / 3000 = 3.76
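All of the Halstead quantities above follow mechanically from the four raw counts; a small Python sketch reproduces them (β = 18 is the conventional Stroud number for programming time):

```python
import math

def halstead_metrics(u1, n1, u2, n2, beta=18):
    """Halstead software-science measures.
    u1/u2: distinct operators/operands; n1/n2: total occurrences."""
    vocabulary = u1 + u2                     # mu = mu1 + mu2
    length = n1 + n2                         # N = N1 + N2
    volume = length * math.log2(vocabulary)  # V = N * log2(mu)
    level = (2 / u1) * (u2 / n2)             # L
    difficulty = 1 / level                   # D = 1/L
    effort = volume * difficulty             # E = V * D
    time_seconds = effort / beta             # T = E / beta
    bugs = effort ** (2 / 3) / 3000          # B = E^(2/3) / 3000
    return {"vocabulary": vocabulary, "length": length, "volume": volume,
            "level": level, "difficulty": difficulty, "effort": effort,
            "time_seconds": time_seconds, "bugs": bugs}

# Counts tabulated above for our MATLAB source.
m = halstead_metrics(u1=42, n1=542, u2=51, n2=449)
```

Evaluating this with our counts reproduces the figures listed above (vocabulary 93, length 991, volume about 6480.3, difficulty about 184.88, about 3.76 predicted bugs).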

6. Cyclomatic complexity:

d (number of predicate nodes) = 12
V(G) = 1 + d = 1 + 12 = 13

7. Efficiency:
Module | Images given as input | Images achieving expected output = actual output | Efficiency
Image acquisition module | 3 | 1 | 33.33%
Morphological processing module | 2 | 2 | 100%
Image pre-processing module | 2 | 2 | 100%
Direction detection module | 2 | 2 | 100%
Edge detection module | 1 | 1 | 100%

8. Code coverage:
 Testing approach used: Unit testing
 Code Coverage Percentage = (Number of lines of code executed by a testing algorithm / Total number of lines of code in a system component) * 100.
 (110/110) * 100 = 100%

9. Defect Density:
Module | Defects detected
Image acquisition module | 1
Morphological processing module | 3
Image pre-processing module | 4
Direction detection module | 4
Edge detection module | 3

10. Bug Find Rate (per week):


Week 1st week 2nd week 3rd week 4th week

Bugs Found 3 5 4 3

Average bug find rate: (3 + 5 + 4 + 3) / 4 = 15/4 = 3.75 (about 4) bugs found per week

11. Accuracy:
Accuracy = (correctly predicted images / total test images) * 100%

12. Fixed Defects percentage:


Module | Defects detected | Defects fixed | Percentage
Image acquisition module | 1 | 0 | 0
Morphological processing module | 2 | 2 | 100%
Image pre-processing module | 2 | 2 | 100%
Direction detection module | 1 | 1 | 100%
Edge detection module | 0 | 0 | 0

13. Number of Test Cases Passed:

(Passed test cases / total number of test cases) * 100
(8/10) * 100 = 80%

14. Severity of the defects:

Category | Number of defects found
Critical defects | 1
High | 2
Medium | 5
Low | 7
Total number of defects identified | 15

Category | Impact | Probability of occurrence
Critical | 90%-100% | 0-20%
High | 50%-75% | 20%-40%
Medium | 10%-50% | 40%-60%
Low | 0-10% | 60%-100%

15. Test Analysis:

S.No | Test metric | Data recorded during development and testing
1 | Number of requirements for the project | 6
2 | Total number of test cases written for all requirements | 10
3 | Total number of test cases executed | 10
4 | Total number of test cases passed | 8
5 | Total number of test cases failed | 2
6 | Total number of test cases not executed | 0

16. Requirement Creep

(Total number of requirements added/No of initial requirements) X 100


(2/6) * 100 = 33.3%

17. Number of defects per test hour

Total number of defects/Total number of test hours


15/5 = 3

18. Cost of finding a defect in testing

(Total effort spent on testing/ defects found in testing)


300/15 = 20 mins per bug

19. Accepted Defects Percentage

(Defects Accepted as Valid by Team /Total Defects Reported) X 100


(15/15) * 100 = 100%

20. Number of tests run per time period

Number of tests run/Total time


10/5 = 2 tests run per hour

21. Test design efficiency

Number of tests designed /Total time

10 / 2 = 5 test cases designed per hour

8. Work done by each member and the software metrics used:

Project Manager: DUHITHA HARSHA

Work done:
 As Project Manager, I divided the work among the team members based on their roles.
 With the help of references, I wrote the abstract.
 I created the use case diagram for the shape recognition system.
 Prepared the schedule of deadlines and activities.
 Evaluated various productivity and quality metrics.
 Coordinated the team and resolved miscommunication.
 Assessed the performance of each team member.
 Computed some cost metrics.

Software Metrics used:


 Quality metrics
 Proper Functioning of team members
 Overcoming communication pitfalls
 Proper Scheduling
 Estimating project outcomes
 Customer Satisfaction
 Productivity metrics
 Plan to done ratio
 Effort per Team member
 Maintainability Index
 Cost Effectiveness metrics

Developer: SOWMYA

Work done:
 As a developer, I have measured the sensitivity of shape recognition and detection.

 Helped in designing system architecture diagram.
 Helped in Literature Survey.
 Suggested some modules for our project.
 Evaluated various code metrics.
 Assessed the efficiency of the algorithm used in our implementation.

Software Metrics used:


 Lines of code
 Program Difficulty
 Effort
 Time complexity
 Cyclomatic Complexity
 Function point metrics
 Accuracy
 Accepted defects percentage

Developer: NOMIKA

Work done:
 As a developer, I developed part of the code using the Canny edge detection algorithm.
 We take an image as input and filter it in the horizontal and vertical directions to identify the shape in the image.
 Assessed various algorithms for shape recognition.
 Evaluated various code metrics to produce error-free code.
 Helped in the literature survey.

Software Metrics used:


 Lines of code
 Program Volume
 Program Level

 Time complexity
 Cyclomatic Complexity
 Function point metrics
 Efficiency
 Requirement’s creep

Tester: M.V. KRISHNAMANAIDU

Work Done:
 We tested the code in MATLAB by taking images in ".png" format as input, checking whether the edges are detected accurately and whether the input image format is accepted by the code.
 Performed test cases as per the schedule.
 Helped in the literature survey by finding journals related to our topic.
 Evaluated various test metrics.
 Suggested some improvements to the code.
 Helped with the documentation.
Software Metrics used:
 Test case execution productivity metrics
 Code Coverage
 Defect Density
 Accuracy
 Bug Find Rate
 Fixed defects percentage
 Number of Test cases passed
 Number of defects per test hour
9. Output & screenshots:

Source Code:

clear all;
clc;

% Input image
img = imread('C:\Users\dokkuakash\Downloads\Canny\House.jpg');
% Show input image
figure, imshow(img);
img = rgb2gray(img);
img = double(img);

% Values for thresholding
T_Low = 0.075;
T_High = 0.175;

% Gaussian filter coefficient
B = [2, 4, 5, 4, 2; 4, 9, 12, 9, 4; 5, 12, 15, 12, 5; 4, 9, 12, 9, 4; 2, 4, 5, 4, 2];
B = 1/159 .* B;

% Convolution of image by Gaussian coefficient
A = conv2(img, B, 'same');

% Filters for horizontal and vertical direction
KGx = [-1, 0, 1; -2, 0, 2; -1, 0, 1];
KGy = [1, 2, 1; 0, 0, 0; -1, -2, -1];

% Convolution of image by horizontal and vertical filters
Filtered_X = conv2(A, KGx, 'same');
Filtered_Y = conv2(A, KGy, 'same');

% Calculate directions/orientations
arah = atan2(Filtered_Y, Filtered_X);
arah = arah * 180 / pi;

pan = size(A, 1);
leb = size(A, 2);

% Adjustment for negative directions, making all directions positive
for i = 1:pan
    for j = 1:leb
        if (arah(i, j) < 0)
            arah(i, j) = 360 + arah(i, j);
        end;
    end;
end;

arah2 = zeros(pan, leb);

% Adjusting directions to nearest 0, 45, 90, or 135 degrees
for i = 1:pan
    for j = 1:leb
        if ((arah(i, j) >= 0) && (arah(i, j) < 22.5) || (arah(i, j) >= 157.5) && (arah(i, j) < 202.5) || (arah(i, j) >= 337.5) && (arah(i, j) <= 360))
            arah2(i, j) = 0;
        elseif ((arah(i, j) >= 22.5) && (arah(i, j) < 67.5) || (arah(i, j) >= 202.5) && (arah(i, j) < 247.5))
            arah2(i, j) = 45;
        elseif ((arah(i, j) >= 67.5 && arah(i, j) < 112.5) || (arah(i, j) >= 247.5 && arah(i, j) < 292.5))
            arah2(i, j) = 90;
        elseif ((arah(i, j) >= 112.5 && arah(i, j) < 157.5) || (arah(i, j) >= 292.5 && arah(i, j) < 337.5))
            arah2(i, j) = 135;
        end;
    end;
end;

figure, imagesc(arah2); colorbar;

% Calculate magnitude
magnitude = (Filtered_X.^2) + (Filtered_Y.^2);
magnitude2 = sqrt(magnitude);

BW = zeros(pan, leb);

% Non-maximum suppression
for i = 2:pan-1
    for j = 2:leb-1
        if (arah2(i, j) == 0)
            BW(i, j) = (magnitude2(i, j) == max([magnitude2(i, j), magnitude2(i, j+1), magnitude2(i, j-1)]));
        elseif (arah2(i, j) == 45)
            BW(i, j) = (magnitude2(i, j) == max([magnitude2(i, j), magnitude2(i+1, j-1), magnitude2(i-1, j+1)]));
        elseif (arah2(i, j) == 90)
            BW(i, j) = (magnitude2(i, j) == max([magnitude2(i, j), magnitude2(i+1, j), magnitude2(i-1, j)]));
        elseif (arah2(i, j) == 135)
            BW(i, j) = (magnitude2(i, j) == max([magnitude2(i, j), magnitude2(i+1, j+1), magnitude2(i-1, j-1)]));
        end;
    end;
end;

BW = BW .* magnitude2;
figure, imshow(BW);

% Hysteresis thresholding
T_Low = T_Low * max(max(BW));
T_High = T_High * max(max(BW));

T_res = zeros(pan, leb);

for i = 1:pan
    for j = 1:leb
        if (BW(i, j) < T_Low)
            T_res(i, j) = 0;
        elseif (BW(i, j) > T_High)
            T_res(i, j) = 1;
        % Using 8-connected components
        elseif (BW(i+1, j) > T_High || BW(i-1, j) > T_High || BW(i, j+1) > T_High || BW(i, j-1) > T_High || BW(i-1, j-1) > T_High || BW(i-1, j+1) > T_High || BW(i+1, j+1) > T_High || BW(i+1, j-1) > T_High)
            T_res(i, j) = 1;
        end;
    end;
end;

edge_final = uint8(T_res .* 255);
% Show final edge detection result
figure, imshow(edge_final);

Screenshots of Implementation:
Input Image-1:

Outputs

Figure2:

Figure3:

Figure4:

Input Image-2:

Outputs:

Input Image-3:

Outputs:

10. Conclusion and future work:

We proposed an algorithm to detect a shape in any input image and to recognize the edges of that shape. After applying the proposed algorithm to test images, we observed that it gives very good results even when there are many shapes in one photo, by relying on the shape-factor value proposed in our project. Comparing our work with others, most other works focus on detecting and recognizing specific shapes, whereas our work detects all kinds of shapes. Furthermore, many software metrics were used by the project manager, team manager, developers and testers to improve the productivity and quality of our project. These metrics also helped to improve the efficiency and effectiveness of our application.

Future Work:
We are interested in a detailed study of real-world applications of the Canny edge detection algorithm, and in a brief study of its utility in developing applications in fields such as biometrics and medicine. In the future, we would like to further improve the efficiency of our application by improving edge detection for complex images with high noise and background complexity. Apart from this, we would like to extend the applicability of our project if we come across new and innovative ideas.

References:
[1] K. Mikolajczyk, A. Zisserman, and C. Schmid, "Shape recognition with edge-based features," in British Machine Vision Conference (BMVC'03), vol. 2, pp. 779-788, The British Machine Vision Association, September 2003.

[2] M. F. Zakaria, H. S. Choon, and S. A. Suandi, "Object Shape Recognition in Image for Machine Vision Application," International Journal of Computer Theory and Engineering, vol. 4, no. 1, February 2012.

[3] J. de Vries, "Object Recognition: A Shape-Based Approach using Artificial Neural Networks," supervised by M. Wiering.

[4] K. Ohtani and M. Baba, "Shape Recognition and Position Measurement of an Object Using an Ultrasonic Sensor Array," Hiroshima Institute of Technology and Ibaraki University, Japan.

[5] M. R. Das and S. Barla, "Object Shape Recognition," doctoral dissertation, 2012.

[6] N. El Abbadi and L. Al Saadi, "Automatic Detection and Recognize Different Shapes in an Image," Computer Science Department, University of Kufa, Najaf, Iraq.

[7] P. F. Felzenszwalb, "Representation and Detection of Shapes in Images," Ph.D. thesis, Massachusetts Institute of Technology, September 2003.

[8] G. Bal, J. Diebold, E. W. Chambers, E. Gasparovic, R. Hu, K. Leonard, M. Shaker, and C. Wenk, "Skeleton-Based Recognition of Shapes in Images via Longest Path Matching," in K. Leonard and S. Tari (eds.), Research in Shape Modeling, Association for Women in Mathematics Series 1, Springer International Publishing, 2015. DOI 10.1007/978-3-319-16348-2_6.

[9] A. P. Ashbrook and N. A. Thacker, "Algorithms For 2-Dimensional Object Recognition," Imaging Science and Biomedical Engineering Division, Medical School, University of Manchester, Stopford Building, Oxford Road, Manchester, M13 9PT, 1 December 1998.

[10] J. Toontham and C. Thongchaisuratkrul, "An Object Recognition and Identification System Using the Hough Transform Method," International Journal of Information and Electronics Engineering, vol. 3, no. 1, January 2013.