
ALERT SYSTEM TO PROTECT

AGRICULTURAL LAND

A PROJECT REPORT

Submitted by

SHANMUGAVEL M 412416205080
JAWAHAR M 412416205026
KATHIRAVAN A 412416205032

in partial fulfillment for the award of the degree

of

BACHELOR OF ENGINEERING

in

INFORMATION TECHNOLOGY

SRI SAIRAM INSTITUTE OF TECHNOLOGY, WEST TAMBARAM


ANNA UNIVERSITY: CHENNAI 600 025

2019-2020
BONAFIDE CERTIFICATE

Certified that this project report “ALERT SYSTEM TO PROTECT AGRICULTURAL


LAND” is the bonafide work of “SHANMUGAVEL M (412416205080), JAWAHAR M
(412416205026), KATHIRAVAN A (412416205032)” who carried out the project work
under my supervision.

SIGNATURE                                      SIGNATURE

Dr. V. BRINDHA DEVI, M.E., Ph.D.               R. SHOBANA LAKSHMI, M.TECH.
HEAD OF THE DEPARTMENT                         SUPERVISOR
Associate Professor                            Assistant Professor
Department of Information Technology           Department of Information Technology
Sri Sairam Institute of Technology             Sri Sairam Institute of Technology
West Tambaram                                  West Tambaram
Chennai-600044                                 Chennai-600044

Submitted for the viva voce examination held on 22/09/2020 at Sri Sai Ram
Institute of Technology, Chennai-44.

R. SHOBANA LAKSHMI        Dr. D. GOKULAKRISHNAN        Dr. V. BRINDHA DEVI

SUPERVISOR                EXAMINER                     HOD/PROJECT COORDINATOR
CERTIFICATE OF EVALUATION

COLLEGE NAME: SRI SAIRAM INSTITUTE OF TECHNOLOGY


BRANCH AND SEMESTER: INFORMATION TECHNOLOGY & VIII SEMESTER

S.NO   NAME OF THE PROJECT MEMBERS   TITLE OF THE PROJECT    NAME OF THE SUPERVISOR
                                                             WITH DESIGNATION

1.     SHANMUGAVEL M                 ALERT SYSTEM TO         R. SHOBANA LAKSHMI,
2.     JAWAHAR M                     PROTECT AGRICULTURAL    M.TECH., ASSISTANT
3.     KATHIRAVAN A                  LAND                    PROFESSOR GRADE-III

The report of the project work submitted by the above students in partial fulfillment
for the award of the Bachelor of Engineering degree in Information Technology of
Anna University, Chennai, was evaluated and confirmed to be a report of the work
done by the above students.
Submitted for Anna University Project Viva Voce held on 22/09/2020.

Dr. V. BRINDHA DEVI                          R. SHOBANA LAKSHMI

HEAD OF THE DEPARTMENT GUIDE


ACKNOWLEDGEMENT

"A successful man is one who can lay a firm foundation with the bricks others have
thrown at him." -- David Brinkley

Such a successful personality is our beloved founder Chairman, Thiru. MJF. Ln.
LEO MUTHU. At the outset, we express our sincere gratitude to our beloved
Chairman through prayers, who is in the form of the Guiding Star and who has
spread his wings of eternal support with immortal blessings.

We express our gratitude to our Chief Executive Officer Mr. J. SAI PRAKASH
LEO MUTHU and our Trustee Ms. J. SHARMILA RAJAA for their constant
support in completing the project by making the resources available at the right time.
We express our solemn thanks to our beloved Principal,
Dr. K. PALANIKUMAR for having given us spontaneous and wholehearted
encouragement for completing this project.
We are indebted to our Head of the Department Dr. V. BRINDHA DEVI for her
support during the entire course of this project work.
We express our gratitude and sincere thanks to our guide R. SHOBANALAKSHMI
for her valuable suggestions and insights leading to the successful completion of this
project.
Our sincere thanks to our project coordinator, Dr. R. MURUGARADHA
DEVI, Associate Professor – I, for her support and guidance in bringing out this
project successfully.

We thank all the Teaching and Non-Teaching staff members of the Department
of Information Technology and all others who contributed directly or indirectly for
the successful completion of the project.
CONTENTS

CHAPTER NO.   TITLE

              ABSTRACT
              LIST OF FIGURES
              LIST OF TABLES
              LIST OF ABBREVIATIONS
1.            INTRODUCTION
                1.1 Video Transmission to System from CCTV
                1.2 Object Detection
2.            LITERATURE SURVEY
                2.1 Applications built on detection of animals play a very
                    vital role in providing solutions to various real-life
                    problems
                2.2 Detection, identification and tracking of objects
                    during the motion
                2.3 Object detection and tracking – a survey
3.            SYSTEM ANALYSIS
                3.1 Existing System
                3.2 Proposed System
4.            REQUIREMENTS
                4.1 Software Requirements
                4.2 Hardware Requirements
5.            SYSTEM ENVIRONMENT
                5.1 Raspberry Pi
                5.2 PIR Motion Detecting Sensor
                5.3 CCTV
                5.4 Python
                5.5 TensorFlow
6.            ALGORITHM
7.            SYSTEM DESIGN
                7.1 Architecture Diagram
8.            MECHANISM OF PROPOSED SYSTEM
                8.1 Overall Working Mechanism
                8.2 Mechanism of Object Detection
9.            DEVELOPMENT PROCESS
                9.1 Project Life Cycle
                9.2 Development Model
                9.3 Feasibility Study
                9.4 Feasibility Analysis
                9.5 Security Feasibility
                9.6 Operational Feasibility
                9.7 Economic Feasibility
                9.8 Technical Feasibility
                9.9 Hardware Feasibility
                9.10 Software Feasibility
                9.11 Requirement Analysis and Specification
                  9.11.1 Requirements Gathering and Analysis
                  9.11.2 Software Requirements and Specification (SRS)
                9.12 Design
10.           SYSTEM TESTING
                10.1 Unit Testing
                10.2 Integration Testing
                10.3 System Testing
                10.4 Validation Testing
                10.5 Usability Testing
              APPENDIX I
              APPENDIX II
11.           CONCLUSION AND FUTURE ENHANCEMENT
                11.1 Conclusion
                11.2 Future Work
ABSTRACT

In the field of agriculture, the loss of crops to animal attacks is one of the
major problems faced by farmers. There is a need both to protect the agricultural
land and to protect the animals from electric fences. To address these problems, a
system is developed that intimates the farmer about animal movement and produces
a siren to repel the animal from the agricultural land before the farmer reaches the
field. The methodology used is adapted from systems that detect animals on
highways to assist autonomous driving using image processing techniques. The
system we developed detects animals near the protected land in the live video
stream from a CCTV camera using a TensorFlow Lite model, compares each
detected object against the list of classes the model can detect, and, if a class is
found, checks whether the detected class of object is harmful to the agricultural
land. If so, it produces a siren to repel the animal from the protected land in order
to save the land; it can also be used to save animals from electric fences.
LIST OF FIGURES

S.NO   FIGURE         NAME

1      Figure 1.1     Representation of Object Detection
2      Figure 3.1     Processing of the Image in the Existing Method
3      Figure 3.2     Proposed Alert System
4      Figure 3.3     Expected Output in the Proposed System
5      Figure 5.1     Raspberry Pi
6      Figure 5.2     PIR Motion Detecting Sensor
7      Figure 5.3     CCTV Camera
8      Figure 7.1     Architecture Diagram of the Proposed System
9      Figure 8.1     Working Mechanism of the Proposed System
10     Figure 8.2     Flow of the Object Detection Mechanism
11     Figure 9.1     Project Life Cycle
12     Figure 9.2     Waterfall Model
13     Screenshot 1   Motion is detected and an elephant is found in the image
14     Screenshot 2   Detected objects in the land
15     Screenshot 3   Accuracy of the object detected
16     Screenshot 4   Accuracy of the object detected (75%, Dog)
LIST OF TABLES

S.NO   TABLE NO.   NAME

1.     Table 4.1   Software Requirements
2.     Table 4.2   Hardware Requirements
LIST OF ABBREVIATIONS

S.NO   ABBREVIATION   EXPANSION

1.     CCTV           Closed-Circuit Television
2.     PIR            Passive Infrared
CHAPTER 1

INTRODUCTION

1.1 VIDEO TRANSMISSION TO SYSTEM FROM CCTV

CCTV systems provide surveillance capabilities used in the protection
of people, assets, and systems. A CCTV system serves mainly to monitor an area,
providing surveillance over a larger area, for more of the time, than would be
feasible with security personnel alone. CCTV systems are often used to support
comprehensive security systems by incorporating video coverage and security
alarms for barriers, intrusion detection, and access control. For example, a CCTV
system can provide the means to assess an alarm generated by an intrusion detection
system and record the event. The CCTV camera sends the video to the video
monitoring device using a direct transmission system for further processing. The
camera used in this project is a USB camera, and it sends the video to the video
processing system, which analyses it.
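
As an illustration, the following minimal sketch shows how the live video from a
USB camera can be read frame by frame with OpenCV; the device index 0 is an
assumption that depends on how the camera enumerates on the Raspberry Pi.

# Minimal sketch: read live video from a USB camera with OpenCV.
# Device index 0 is an assumption; it depends on the Raspberry Pi setup.
import cv2

stream = cv2.VideoCapture(0)        # open the first USB camera
while True:
    grabbed, frame = stream.read()  # grab the next frame
    if not grabbed:
        break                       # camera disconnected or stream ended
    cv2.imshow('CCTV feed', frame)  # hand the frame to the processing system
    if cv2.waitKey(1) == ord('q'):  # press 'q' to stop monitoring
        break
stream.release()
cv2.destroyAllWindows()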

1.2 OBJECT DETECTION

Object detection is an important process in surveillance and security-related
systems. Using object detection, the system can detect and localize objects in an
image or a video. Object detection is a computer vision technique used to detect
and track the movement of physical objects in an image or a video. This technique
is used in surveillance systems, face recognition, theft detection, and so on.

Figure 1.1: Representation of Object detection.

CHAPTER 2
LITERATURE SURVEY

Applications built on the detection of animals play a vital role in
providing solutions to various real-life problems. The base for most of these
applications is the detection of animals in video or images. However, in the existing
systems we surveyed, the hardware merely detects the movement of any object in
the area where it is placed and notifies the user who installed it. With this
methodology, it is not certain which animal has entered the protected area. The
system needed must detect which animal has entered the land and produce a sound
to repel it. The following papers were surveyed to assess the current state of animal
detection technology.

2.1 APPLICATIONS BUILT ON DETECTION OF ANIMALS PLAY A


VERY VITAL ROLE IN PROVIDING SOLUTIONS TO VARIOUS REAL-
LIFE PROBLEMS. THE BASE FOR MOST OF THE APPLICATIONS IS
THE DETECTION OF ANIMALS IN THE VIDEO OR IMAGE.

In this paper, a simple and low-cost approach for automatic animal detection
on highways, for preventing animal-vehicle collisions using computer vision
techniques, is proposed. A method for finding the distance of the animal from the
camera-mounted vehicle in real-world units is also proposed. The proposed system
is trained on more than 2200 positive and negative images and tested on various
video clips of animals on highways with varying vehicle speed.

2.2 DETECTION, IDENTIFICATION AND TRACKING OF OBJECTS
DURING THE MOTION

This paper intends to introduce the detection of various objects, object
classification, and object tracking algorithms, including an analysis and comparison
of the different techniques used at different stages of tracking. The purpose of
tracking objects is to segment a region of interest from a video scene and keep
tracking its movement, position, and match. Detection and classification of the
object are the steps preceding object tracking in a sequence of images. Object
detection is performed to confirm the existence of objects in the video and locate
them; the detected objects can then be classified into categories such as people,
vehicles, floating trees, trees, and other moving objects. The paper also elaborates
an autonomous tracking system, demonstrated on human tracking, which can be
distributed across more than one control centre; the key components of the system
include the camera and its interface, Arduino boards, and a PC-based control centre.

2.3 OBJECT DETECTION AND TRACKING – A SURVEY

In computer vision applications, tracking is an important aspect which
involves activity analysis, classification, and recognition of an object. Animal
detection has many applications, but it remains a challenging task to reduce the
distortion present in the sequence; visual tracking also causes many problems in
practical applications and produces different issues due to the effect of noise or
disturbance. To overcome these drawbacks, many object tracking applications
have been developed.
CHAPTER 3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
The existing methodology is found in systems that alert the driver about the
interference of animals on highways while driving. The methodology works based
on OpenCV technology. A video camera is placed inside the car and sends live
video to the object detecting system. First the video is converted into a continuous
sequence of image frames; the images are then transformed to grayscale and passed
as input to the object detection algorithm. The algorithm processes the input, detects
the objects it is trained on, and matches them against a database of processed images.
If a match is found and the matched object is an animal, a notification is sent to the
driver so that he can apply the brakes or pay attention to the animal on the roadway.
If such an accident occurs and the animal is big, both the animal and the vehicle can
be affected, so it needs to be considered. This technique is used in the Tesla
Autopilot car.
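
A rough sketch of this frame-and-grayscale preprocessing step is given below,
assuming OpenCV; the file name 'drive.mp4' is illustrative and not part of the
original system.

# Sketch of the existing method's preprocessing: video -> frames -> grayscale.
# 'drive.mp4' is an illustrative file name, not part of the original system.
import cv2

capture = cv2.VideoCapture('drive.mp4')
while True:
    grabbed, frame = capture.read()   # one frame of the continuous sequence
    if not grabbed:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale input for the detector
    # ... pass 'gray' to the object detection algorithm and match against the database
capture.release()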

Figure 3.1: Processing of the Image in the Existing method.

3.2 PROPOSED SYSTEM
You may have heard about the loss of agricultural land and crops, and about
animals that have died due to the electric fences around agricultural land. The
system we developed will alert the farmer or the owner of the land when the arrival
of an animal is noticed in the area where the system is implemented. The system
finds the animals by applying object detection technology to the video from the
camera. The working methodology of the system is as follows. The passive motion
detector sensor placed in the agricultural land senses the movement of living things
in the land. When movement is detected, it sends a signal to the Raspberry Pi, which
acts as the processing system, turns on the camera, and starts live recording of the
land. A TensorFlow Lite object detection model is used to analyse the image and
check whether the detected object is harmful to the agricultural land; if so, the alarm
is activated to repel the animal from the land, and a notification about the arrival of
the animal is sent to the owner of the land.

Fig 3.2: Proposed alert system

Example Output of System Processed:

Figure 3.3: Expected Output in the Proposed System.

CHAPTER 4

REQUIREMENTS

4.1 SOFTWARE REQUIREMENTS
Python

Open CV

TensorFlow

Raspbian OS

Table 4.1 Software Requirements

4.2 HARDWARE REQUIREMENTS


CCTV camera

Raspberry Pi with 2 GB RAM (minimum)

16GB Memory Card

Passive Motion Detector Sensor

Bread Board

Speaker

Emergency Light

Connecting Wires

Table 4.2 Hardware Requirements

CHAPTER 5
SYSTEM ENVIRONMENT
5.1 Raspberry Pi

The Raspberry Pi is a low cost, credit-card sized computer that plugs


into a computer monitor or TV, and uses a standard keyboard and mouse. It is a
capable little device that enables people of all ages to explore computing, and to
learn how to program in languages like Scratch and Python. It’s capable of doing
everything you’d expect a desktop computer to do, from browsing the internet and
playing high-definition video, to making spreadsheets, word-processing, and playing
games. What’s more, the Raspberry Pi has the ability to interact with the outside
world, and has been used in a wide array of digital maker projects, from music
machines and parent detectors to weather stations and tweeting birdhouses with
infra-red cameras. We want to see the Raspberry Pi being used by kids all over the
world to learn to program and understand how computers work. Here we use the
Raspberry Pi as a hub to connect the CCTV camera, the motion detector sensor, a
GSM module, and a speaker to alert the farmer. The Raspberry Pi is used as the
environment to process the image, detect the object, and notify the farmer about the
arrival of the animal into the protected land.

Figure 5.1 Raspberry Pi


5.2 PIR Motion Detecting Sensor

A passive infrared sensor (PIR sensor) is an electronic sensor that


measures infrared (IR) light radiating from objects in its field of view. They are most
often used in PIR-based motion detectors. PIR sensors are commonly used in
security alarms and automatic lighting applications. PIR sensors detect general
movement, but do not give information on who or what moved. For that purpose,
an imaging IR sensor is required. PIR sensors are commonly called simply "PIR",
or sometimes "PID", for "passive infrared detector". The term passive refers to the
fact that PIR devices do not radiate energy for detection purposes. They work
entirely by detecting infrared radiation (radiant heat) emitted by or reflected from
objects. A PIR sensor can detect changes in the amount of infrared radiation
impinging upon it, which varies depending on the temperature and surface
characteristics of the objects in front of the sensor. When an object, such as a person,
passes in front of the background, such as a wall, the temperature at that point in the
sensor's field of view will rise from room temperature to body temperature, and then
back again. The sensor converts the resulting change in the incoming infrared
radiation into a change in the output voltage, and this triggers the detection. Objects
of similar temperature but different surface characteristics may also have a different
infrared emission pattern, and thus moving them with respect to the background may
trigger the detector as well. PIRs come in many configurations for a wide variety
of applications. The most common models have numerous Fresnel lenses or mirror
segments, an effective range of about 10 meters (30 feet), and a field of view less
than 180°. Models with wider fields of view, including 360°, are available, typically
designed to mount on a ceiling. Some larger PIRs are made with single segment
mirrors and can sense changes in infrared energy over 30 meters (100 feet) from the
PIR. There are also PIRs designed with reversible orientation mirrors which allow
either broad coverage (110° wide) or very narrow "curtain" coverage, or with
individually selectable segments to "shape" the coverage.

Figure 5.2 PIR Motion Detecting Sensor
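
A minimal sketch of how the PIR sensor can be read on the Raspberry Pi with the
gpiozero library follows; the sensor's output pin being wired to GPIO 4 is the same
assumption made in the full program in Appendix I.

# Sketch: wait for the PIR sensor to report motion before starting the camera.
# GPIO pin 4 matches the wiring assumed in Appendix I.
from gpiozero import MotionSensor

pir = MotionSensor(4)
while True:
    print("Ready for motion")
    pir.wait_for_motion()      # blocks until the infrared level changes
    print("Motion detected")   # now the camera and detector can be started
    pir.wait_for_no_motion()   # wait until the area is quiet again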

5.3 CCTV

A Closed-Circuit Television Camera can produce images or


recordings for surveillance or other private purposes. Cameras can be either video
cameras, or digital stills cameras. Walter Bruch was the inventor of the CCTV
camera. The main purpose of a CCTV camera is to capture light and convert it into
a video signal. Underpinning a CCTV camera is a CCD sensor (charge-coupled
device). The CCD converts light into an electrical signal and then signal processing
converts this electrical signal into a video signal that can be recorded or displayed
on the screen.

Figure 5.3 CCTV Camera

5.4 Python

Python is a high-level language and one of the most used technologies in
this decade of the technology world. Python can be used to develop desktop GUI
applications. Python processes the high-level language by converting it into a code
called bytecode. Here Python is used as the base programming language to code the
object detection algorithm in this project, with the help of the TensorFlow
framework. A Python file can be created with a text editor, or with an IDE such as
Jupyter Notebook in the Anaconda environment or Google Colaboratory, with the
extension .py or .ipynb. A Python file is nothing but a text file containing the
instructions to be followed by a computer or a device. The Python file is processed
by an interpreter, a form of compiler that converts the high-level Python code into
low-level machine language.

5.5 TENSORFLOW

TensorFlow is an open-source machine learning library and acts as a framework
to build neural network models. TensorFlow was developed to carry out the
numerical computations used in machine learning. TensorFlow provides an
accessible and readable syntax, which is essential for making these programming
resources easier to use; complex syntax is the last thing developers need given
machine learning's advanced nature. Basically, an image is an array of numbers,
and hence TensorFlow helps in the processing of images. TensorFlow can be used
for classification, perception, understanding, discovering, prediction, and creation.
Here we use a TensorFlow image classifier, that is, a process of classifying the
image based on TensorFlow's feature matching of the given images.
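
To illustrate the point that an image is just an array of numbers, a small sketch
using NumPy and OpenCV is shown below; the file name 'field.jpg' is illustrative.

# Sketch: an image is an array of numbers that TensorFlow can process.
# 'field.jpg' is an illustrative file name.
import cv2
import numpy as np

image = cv2.imread('field.jpg')        # load the image as a NumPy array
print(image.shape)                     # e.g. (480, 640, 3): height x width x colour channels
print(image.dtype)                     # uint8: each pixel value is a number from 0 to 255
batch = np.expand_dims(image, axis=0)  # add a batch dimension: shape (1, H, W, 3) for the model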

CHAPTER 6
ALGORITHM
In this project, the algorithm and methodology used are based on TensorFlow
Lite models for the detection and classification of objects. When the system is
activated and the motion detector connected to the Raspberry Pi detects movement
in the desired land, it sends a signal to the system, which activates the camera to
record video. The TensorFlow Lite model identifies the objects in the video, and a
notification, like an emergency call, is then sent by SMS to the user's mobile via
the GSM module attached to the Raspberry Pi.

Given an image or a video stream, an object detection model can identify


which of a known set of objects might be present and provide information about
their positions within the image. An object detection model is trained to detect the
presence and location of multiple classes of objects. For example, a model might be
trained with images that contain various pieces of fruit, along with a label that
specifies the class of fruit they represent (e.g. an apple, a banana, or a strawberry),
and data specifying where each object appears in the image. When we subsequently
provide an image to the model, it will output a list of the objects it detects, the
location of a bounding box that contains each object, and a score that indicates the
confidence that the detection was correct.

Here, we are going to train the model with an image dataset containing more than
ten different objects, e.g., cow, elephant, dog, sheep, horse, and other animals that
are harmful to agricultural land. Hence, when our model is activated, the video from
the camera is analyzed and objects are detected; if a detected object is harmful to
the agriculture, the alarm is activated and a notification is sent to the user's mobile
with the help of the GSM module.
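
A condensed sketch of a single detection pass is shown below. It follows the
structure of the full program in Appendix I; the model file name, the tflite_runtime
import, the input image name, and the SSD-style output tensor ordering are
assumptions carried over from there.

# Condensed sketch of one detection pass with a TensorFlow Lite model.
# 'detect.tflite', 'frame.jpg' and the output tensor order are assumptions
# carried over from Appendix I; a quantized model is assumed here (float
# models additionally need the input normalization shown in Appendix I).
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path='detect.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height, width = input_details[0]['shape'][1], input_details[0]['shape'][2]

frame = cv2.imread('frame.jpg')                 # illustrative input frame
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # model expects RGB input
input_data = np.expand_dims(cv2.resize(rgb, (width, height)), axis=0)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # bounding boxes
classes = interpreter.get_tensor(output_details[1]['index'])[0]  # class indices
scores = interpreter.get_tensor(output_details[2]['index'])[0]   # confidence scores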
CHAPTER 7

SYSTEM DESIGN

7.1 Architecture Diagram:

Figure 7.1: Architecture Diagram of proposed System.

CHAPTER 8

MECHANISM OF THE PROPOSED SYSTEM

8.1 Overall Working Mechanism

Figure 8.1: Working mechanism of an alert system

When the system is activated and the motion detector connected to the Raspberry
Pi detects movement in the desired land, it sends a signal to the system, which
activates the camera to record video. Given a video stream, an object detection
model can identify which of a known set of objects might be present and provide
information about their positions within the image. An object detection model is
trained to detect the presence and location of multiple classes of objects in the video
stream. For example, a model might be trained with images that contain various
objects, along with a label that specifies the class of object they represent (e.g. an
elephant, deer, cow, or sheep), and data specifying where each object appears in the
image. When we subsequently provide an image to the model, it will output a list
of the objects it detects, the location of a bounding box that contains each object,
and a score that indicates the confidence that the detection was correct. The score
determines the detection's accuracy. When the detected object is harmful to the land,
the siren is turned on and plays a loud, frightening sound to stop the animal
approaching the land and save it from the electric fences. This methodology includes
preprocessing of the image to remove the noise, shadows, and unwanted pixels that
reduce the accuracy of object detection.
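
The alert decision can be summarised in a short sketch. The set of harmful classes
matches the ones checked in Appendix I, and the warning-light pin (GPIO 17) and
siren file ('audio.mp3') are the same assumptions made there.

# Sketch of the alert decision: sound the siren only for harmful animals.
# GPIO 17 and 'audio.mp3' follow the wiring assumed in Appendix I.
from gpiozero import LED
from playsound import playsound

HARMFUL_CLASSES = {"cow", "elephant", "horse", "sheep"}
warning_light = LED(17)

def alert_if_harmful(object_name):
    if object_name in HARMFUL_CLASSES:
        warning_light.on()      # flash the emergency light
        playsound('audio.mp3')  # play the siren that repels the animal
    else:
        warning_light.off()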

8.2 Mechanism of Object Detection

Figure 8.2: Flow of Object detection mechanism.

The first step in this mechanism is image acquisition, which provides the
input to the image processing system. The second step is background modelling,
performed to extract the features of the image and determine which objects are
present. The third step is shadow removal, which improves the accuracy of the
detected objects. The further steps determine which class of objects known to the
model each detection belongs to, and draw boxes in the video to show the location
of each object in the video stream. The final step is the presentation of the detected
objects in the window on screen.
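
As a sketch of the background modelling and shadow removal idea, OpenCV's
MOG2 background subtractor can be used; by default it marks shadow pixels with
a distinct value so they can be discarded. This illustrates the preprocessing concept
rather than the project's exact implementation.

# Sketch: background modelling with shadow removal using OpenCV's MOG2 subtractor.
# Illustrates the preprocessing idea; the project's detector itself is TensorFlow Lite.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def foreground_without_shadows(frame):
    mask = subtractor.apply(frame)  # 255 = foreground, 127 = shadow, 0 = background
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    return mask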

CHAPTER 9

DEVELOPMENT PROCESS

9.1 PROJECT LIFE CYCLE

The Software Development Life Cycle (SDLC) is the process by which


software is created. Some companies don't have anything more than an ad hoc
approach to software development, but these environments still have an SDLC; it's
just a bad one. Likewise, each company that has a formal SDLC probably has its
own, unique flavour of the SDLC. Sometimes the SDLC is a very complicated,
detailed approach that requires a whole team of project managers just to keep it going.
The lifecycle of a project may include the following steps:

Figure 9.1: Project Life Cycle


By applying this simple set of definite steps and deliverables at the
outset, and relating this to a transparent time and cost structure, we provide clients
with an effective framework against which to measure expectation, quality,
progress, and cost for their project.

9.2 DEVELOPMENT MODEL

Waterfall Model
Planning the development process involves several important
considerations. The first consideration is to define a product life-cycle model. A
software life-cycle model encompasses all the activities required to define, develop,
and test the software.

Figure 9.2 Waterfall model

9.3 FEASIBILITY STUDY

The main aim of a feasibility study is to determine whether developing
the product is financially and technically feasible. The feasibility study involves:
 An abstract definition of the problem.
 Formation of different solution strategies.
 Examination of alternative solution strategies and their benefits, indicating the
resources required, development cost, and time for each alternative solution. A
cost-benefit analysis is performed to determine which solution is the best; at this
stage, it may also be determined that some solutions are not feasible due to high
cost, resource constraints, or extraordinary technical reasons. The module is
totally feasible in all respects: technically it reduces time consumption, and
economically it reduces cost.

9.4 FEASIBILITY ANALYSIS

The development of a computer-based system is more likely to be plagued
by a scarcity of resources and difficult delivery dates. A feasibility study is not
warranted for a system in which the economic justification is obvious, technical risk
is low, few legal problems are expected, and no reasonable alternative exists.
Although there are different types of feasibility study reports, the following were
given importance for this project:
 Security Feasibility
 Operational Feasibility
 Economic Feasibility
 Technical Feasibility

9.5 SECURITY FEASIBILITY
The security of the data logged is the same as that of any other data in
the user's device. It is very important that users keep their data secure, especially
when they share important personal data.

9.6 OPERATIONAL FEASIBILITY


People are inherently resistant to change, and computers have been
known to facilitate change. Given the simplicity and usefulness of the system
environment, it is safe to assume that the end users are eager to use the software.
This would relieve them from any difficulty in keeping their private data safe.

9.7 ECONOMIC FEASIBILITY


Economic feasibility is most frequently used for evaluating the
effectiveness of an information system. More commonly known as cost-benefit
analysis, the procedure is to determine the benefits and savings that are expected
from the information system and compare them with the costs. If the benefits
outweigh the costs, then the decision is made to design and implement the system;
otherwise, further justification or alterations to the proposed system will have to be
made if it is to have a chance of being approved. Keeping in view the number of
users who would be using this software across different places and locations, this
project was developed with the lowest-capacity hardware devices available at the
time in mind.

9.8 TECHNICAL FEASIBILITY
Technical feasibility involves the financial considerations needed to
accommodate technical enhancement. If the budget is a serious constraint, then the
project is judged 'not feasible'. Here, the cost is incurred in finding appropriate
software, which may be handled by a few people and hidden from others; since it
reduces the searching time, the project is technically fit. The module reduces the
workload of the different end users and gives a clear picture of the software
required, so the users can use this module; hence, this system is operationally
feasible.

9.9 HARDWARE FEASIBILITY


For this application to work properly, a minimum set of hardware is
required; if the hardware requirement is not met, the application may not work as
advertised. Similarly, for the application to be developed, a minimum set of
requirements must be satisfied:
• 2 GB RAM minimum, 4 GB RAM recommended
• 16 GB hard disk space

9.10 SOFTWARE FEASIBILITY


These software packages are needed to develop and implement this project:
1. Raspbian OS
2. Python 3.7
3. TensorFlow
4. OpenCV
5. Python platform

9.11 REQUIREMENT ANALYSIS AND SPECIFICATION

Before starting to design a software product, it is extremely important
to understand the precise requirements of the customer and to document them
properly. The requirements analysis and specification phase starts once the
feasibility study phase is complete and the project is found to be financially sound
and technically feasible. The goal of the requirements analysis and specification
phase is to clearly understand the customer requirements and to systematically
organize them in a specification document. This phase consists of the following
two activities:
 Requirements gathering and analysis
 Requirements specification

9.11.1 REQUIREMENTS GATHERING AND ANALYSIS


We can elaborate the two main activities involved in the requirements
gathering and analysis phase:

1) Requirements Gathering:
This activity typically involves interviewing the farmers and
agriculturalists and studying the existing documents and problems in real time to
collect all possible information regarding the system. If the project involves
automating some existing procedures, then the task of the system analyst becomes
a little easier, as he can obtain the input and output data formats and the details of
the existing operation; less imagination and creativity on the part of the system
analyst is then required.

2) Analysis of Gathered Requirements:
The main purpose of this activity is to clearly understand the exact
problem in the agricultural lands due to animals. The following basic questions
pertaining to the project should be clearly understood by the analyst in order to
obtain a good grasp of the problem:
 What is the problem?
 What are the possible solutions to the problem?
 What exactly are the data inputs to the system and what exactly are the data
outputs required of the system?
 What are the likely complexities that might arise while solving the problem?
 If there are external software or hardware with which the developed software
has to interface, then what exactly would the data interchange formats with the
external system be?

9.11.2 SOFTWARE REQUIREMENTS AND SPECIFICATION (SRS)


After the analyst has collected all the required information regarding
the software to be developed, and has removed all incompleteness, inconsistencies,
and anomalies from the specification, he starts to systematically organize the
requirements in the form of an SRS document. The SRS document usually contains
all the user requirements in an informal form.
Contents of the SRS Document
An SRS document should clearly document the following aspects of a system:
 Functional requirements
 Non-functional requirements
 Goals of implementation

9.12 DESIGN
Software design deals with transforming the customer requirements, as
described in the SRS document, into a form that is implementable using a
programming language. For a design to be easily implementable in a
programming language, the following items must be designed during the
design phase:
 The different modules required to implement the design solution, with the
hardware components interacting with each other.
 The design, interaction, and implementation of the individual modules.
 The algorithms required to implement the individual modules.
 The algorithm used to implement the customer requirements.

CHAPTER 10
SYSTEM TESTING

10.1 UNIT TESTING


Modules form the functionally testable units in the application under
discussion. Every single module's functionality was tested separately before the
modules were integrated, and any inaccuracy found in a module was taken seriously
and acted upon.
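
For example, the module that decides whether a detected class should trigger the
siren can be tested in isolation; the test below is a hypothetical sketch of such a
unit test, with illustrative names.

# Hypothetical unit test for the alert-decision module (names are illustrative).
def is_harmful(object_name):
    return object_name in {"cow", "elephant", "horse", "sheep"}

def test_harmful_animal_triggers_alert():
    assert is_harmful("elephant")

def test_person_does_not_trigger_alert():
    assert not is_harmful("person")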

10.2 INTEGRATION TESTING


However successful the modules/units are in module-level testing, it is
a lot trickier than we might think to make them all work together in unison. The
modules are integrated together and then what we call integration testing is done;
if this test is passed successfully, it can be safely assumed that all the modules of
the system are properly integrated, with proper interfaces for each module to
communicate with the other modules.

10.3 SYSTEM TESTING


Testing is vital to the success of the system. System testing makes the
logical assumption that if all the parts of the system are correct, the goal will be
successfully achieved. It is the major quality measure used to determine the status
and usefulness of the system. Its basic function is to find errors in the software by
examining all possible loopholes. The goal of testing is to point out uncovered
requirements, design or coding errors, or invalid acceptance or storage of data.
During system testing, the system is used experimentally to ensure that the
software runs according to its specification and in the way users expect. Special
test data is given as the input for processing, and the results are examined. The
process of testing has been divided into three distinct stages, and different tests are
performed at different levels.

10.4 VALIDATION TESTING


Validation testing is the process of evaluating software during or at the
end of the development process to determine whether it satisfies the specified
business requirements. Validation testing ensures that the product actually meets
the client's requirements. It can also be defined as demonstrating that the product
fulfils its intended use when deployed in an appropriate environment. In this
project, the client will receive their expected output. The motive of the project is
to provide security to the agricultural lands and to protect the animals from the
fences. So here we validate both the hardware and the software in order to verify
that the modules satisfy the objective and requirement constraints.

10.5 USABILITY TESTING


Usability testing refers to evaluating a product or service by testing it
with representative users. Typically, during a test, participants will try to complete
typical tasks while observers watch, listen, and take notes. The goal is to identify
any usability problems, collect qualitative and quantitative data, and determine the
participants' satisfaction with the product. In this project, the modules were tested
on the client side in order to check whether the objective of the client is satisfied.
The outcome of this test process was successfully verified.

APPENDIX I

PYTHON CODE:

# Import packages
import os
import argparse
import cv2
import numpy as np
import sys
import time
from threading import Thread
import importlib.util
from gpiozero import LED
from gpiozero import MotionSensor
from playsound import playsound

# Define VideoStream class to handle streaming of video from webcam in
# separate processing thread
# Source - Adrian Rosebrock, PyImageSearch:
# https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/
class VideoStream:
    """Camera object that controls video streaming from the webcam"""

    def __init__(self, resolution=(640, 480), framerate=30):
        # Initialize the camera image stream
        self.stream = cv2.VideoCapture(0)
        ret = self.stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
        ret = self.stream.set(3, resolution[0])
        ret = self.stream.set(4, resolution[1])

        # Read first frame from the stream
        (self.grabbed, self.frame) = self.stream.read()

        # Variable to control when the camera is stopped
        self.stopped = False

    def start(self):
        # Start the thread that reads frames from the video stream
        Thread(target=self.update, args=()).start()
        return self

    def update(self):
        # Keep looping indefinitely until the thread is stopped
        while True:
            # If the camera is stopped, stop the thread
            if self.stopped:
                # Close camera resources
                self.stream.release()
                return

            # Otherwise, grab the next frame from the stream
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # Return the most recent frame
        return self.frame

    def stop(self):
        # Indicate that the camera and thread should be stopped
        self.stopped = True

# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
                    required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
                    default='detect.tflite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
                    default='labelmap.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
                    default=0.5)
parser.add_argument('--resolution', help='Desired webcam resolution in WxH. If the webcam does not '
                    'support the resolution entered, errors may occur.',
                    default='1280x720')
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
                    action='store_true')

args = parser.parse_args()

MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
min_conf_threshold = float(args.threshold)
resW, resH = args.resolution.split('x')
imW, imH = int(resW), int(resH)
use_TPU = args.edgetpu

# Import TensorFlow libraries
# If tensorflow is not installed, import interpreter from tflite_runtime,
# else import from regular tensorflow
# If using Coral Edge TPU, import the load_delegate library
pkg = importlib.util.find_spec('tensorflow')
if pkg is None:
    from tflite_runtime.interpreter import Interpreter
    if use_TPU:
        from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    if use_TPU:
        from tensorflow.lite.python.interpreter import load_delegate

# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
    # If user has specified the name of the .tflite file, use that name,
    # otherwise use default 'edgetpu.tflite'
    if (GRAPH_NAME == 'detect.tflite'):
        GRAPH_NAME = 'edgetpu.tflite'

# Get path to current working directory
CWD_PATH = os.getcwd()

# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH, MODEL_NAME, GRAPH_NAME)

# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH, MODEL_NAME, LABELMAP_NAME)

# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
    del(labels[0])

# Load the TensorFlow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
    interpreter = Interpreter(model_path=PATH_TO_CKPT,
                              experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    print(PATH_TO_CKPT)
else:
    interpreter = Interpreter(model_path=PATH_TO_CKPT)

interpreter.allocate_tensors()

# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]

floating_model = (input_details[0]['dtype'] == np.float32)

input_mean = 127.5
input_std = 127.5

# Initialize frame rate calculation
frame_rate_calc = 1
freq = cv2.getTickFrequency()

# GPIO devices for the alert: PIR sensor on pin 4, warning light on pin 17.
# Creating them once here avoids re-opening the pins on every detection.
ms = MotionSensor(4)
led = LED(17)

def detect(objectName):
    # Turn on the warning light and play the siren if the detected object
    # is one of the animal classes harmful to the land
    if (objectName == "cow" or objectName == "elephant"
            or objectName == "horse" or objectName == "sheep"):
        led.on()
        time.sleep(0.5)
        playsound('audio.mp3')
    else:
        led.off()

while True:
    print("Ready for Motion")
    ms.wait_for_motion()
    print("Motion Detected")

    # Initialize video stream
    videostream = VideoStream(resolution=(imW, imH), framerate=30).start()
    time.sleep(1)

    # Run the detector for 10 seconds after each motion event
    timeout = time.time() + 10
    while True:
        if time.time() > timeout:
            print("timeout exit")
            break

        # Start timer (for calculating frame rate)
        t1 = cv2.getTickCount()

        # Grab frame from video stream
        frame1 = videostream.read()

        # Acquire frame and resize to expected shape [1xHxWx3]
        frame = frame1.copy()
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame_resized = cv2.resize(frame_rgb, (width, height))
        input_data = np.expand_dims(frame_resized, axis=0)

        # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
        if floating_model:
            input_data = (np.float32(input_data) - input_mean) / input_std

        # Perform the actual detection by running the model with the image as input
        interpreter.set_tensor(input_details[0]['index'], input_data)
        interpreter.invoke()

        # Retrieve detection results
        boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # Bounding box coordinates of detected objects
        classes = interpreter.get_tensor(output_details[1]['index'])[0]  # Class index of detected objects
        scores = interpreter.get_tensor(output_details[2]['index'])[0]   # Confidence of detected objects

        # Loop over all detections and draw detection box if confidence is above minimum threshold
        for i in range(len(scores)):
            if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):

                # Get bounding box coordinates and draw box
                # Interpreter can return coordinates that are outside of image
                # dimensions, need to force them to be within image using max() and min()
                ymin = int(max(1, (boxes[i][0] * imH)))
                xmin = int(max(1, (boxes[i][1] * imW)))
                ymax = int(min(imH, (boxes[i][2] * imH)))
                xmax = int(min(imW, (boxes[i][3] * imW)))

                cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)

                # Draw label
                object_name = labels[int(classes[i])]  # Look up object name from "labels" array using class index
                print("Object detected: " + object_name)
                detect(object_name)
                label = '%s: %d%%' % (object_name, int(scores[i] * 100))  # Example: 'person: 72%'
                labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2)  # Get font size
                label_ymin = max(ymin, labelSize[1] + 10)  # Make sure not to draw label too close to top of window
                cv2.rectangle(frame, (xmin, label_ymin - labelSize[1] - 10),
                              (xmin + labelSize[0], label_ymin + baseLine - 10),
                              (255, 255, 255), cv2.FILLED)  # Draw white box to put label text in
                cv2.putText(frame, label, (xmin, label_ymin - 7),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2)  # Draw label text

        # Draw framerate in corner of frame
        cv2.putText(frame, 'FPS: {0:.2f}'.format(frame_rate_calc), (30, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 0), 2, cv2.LINE_AA)

        # All the results have been drawn on the frame, so it's time to display it.
        cv2.imshow('Object detector', frame)

        # Calculate framerate
        t2 = cv2.getTickCount()
        time1 = (t2 - t1) / freq
        frame_rate_calc = 1 / time1

        # Press 'q' to quit
        if cv2.waitKey(1) == ord('q'):
            break

    # Clean up
    cv2.destroyAllWindows()
    videostream.stop()

    print("Starting next loop")

APPENDIX II
SCREENSHOTS
The system is initialized, before any motion is detected by the sensor.

Screenshot 1: When motion is detected and an elephant is found in the image.

After the PIR sensor detects movement, an object is detected by the system,
which shows what object was detected.

Screenshot 2: Detected objects in the land.

Elephant movement is detected in the land. The elephant needs to stay away
from the land, hence the system plays a loud, frightening sound to make it stay
away.

Screenshot 3: Showing the accuracy of the object detected.

Screenshot 3: Showing the accuracy of the object detected, identified as Cow.

Screenshot 4: Showing the accuracy of the object detected (75%), identified as Dog.
CHAPTER 11

CONCLUSION AND FUTURE ENHANCEMENT

CONCLUSION:

We proposed an alert system that helps the farmer safeguard the crops and
land from animal attacks while preventing animal deaths from electric fences.
This system will be very helpful when implemented as a real-time application in
agricultural lands. It can also be used by people living in mountain ranges who
suffer from animal attacks in their living area; there, the alert system can likewise
be used to safeguard their surroundings.

FUTURE WORK:

Possible ideas for future work include implementing the system with more
cameras and more sensors to cover a larger area. Another idea is to add a
temperature sensor to rescue animals from forest fires by alerting them with the
loud sound, based on the reading of the temperature sensor. A GSM module can
also be incorporated to connect to the user's mobile and notify the user of animal
movements in the agricultural lands and of animals reaching the electric fences.

