

CHAPTER-1
INTRODUCTION

1.1 INTRODUCTION
One of the major social problems in present-day agriculture is the damage caused to crops by wild animals. In South India, animals such as deer, monkeys and elephants act as a threat to crops. This problem must be addressed immediately, and an effective solution must be devised and put into practice. This project aims to address that problem.
Animal attacks on farmland are a common story in India nowadays. In the absence of any detection system, such attacks destroy the crops, and due to the lack of proper safety measures the villagers are left helpless. The crops of villagers are also destroyed by the frequent interference of animals. Crops and paddy fields cannot always be fenced, so the possibility of crops being eaten away by cows and goats is very real, which can result in huge wastage of the crops produced by the farmers. Animals such as deer, wild boars, rabbits, moles, elephants and monkeys may cause serious damage to crops, either by feeding on plant parts or simply by running over the field and trampling the crops.
The foraging activities of cropland bird species such as the House Crow cause damage to wheat, while pigeons and doves damage pearl millet and sunflower. Parakeets and crows have also been found to inflict more damage to the crops than they actually consume.
Farmers are worried about wild animals such as elephants and gaurs damaging their crops along the Western Ghats in Kansapuram. “Not a day passes without wild elephants or gaurs or wild boars damaging the crops,” complained P.M. Radhakrishnan, 54, who has his farm in Pandaram Parai. He said that mango trees and coconut palms on his neighbour K. Dharmalingam’s farm were damaged by wild elephants on Friday night. “Invasions of wild elephants, gaurs, bears and wild boars have become so frequent that farmers are afraid to stay back in the farms at night to keep vigil,” he said. He added that five coconut palms and four mango trees were uprooted on Mr. Dharmalingam’s farm. Elephants have been raiding the farms for the last few months and damaging the crops, said another farmer, Pon Arunachalam, 69, of Aththikoil. Forest officials have not supplied crackers to farmers to drive away the wild animals.

The existing systems mainly provide surveillance functionality and do not provide protection from wild animals, especially in such an application area. Other methods commonly used by farmers to prevent crop vandalization by animals include building physical barriers, using electric fences, IoT-based sensor monitoring and manual surveillance, all of which are exhausting and sometimes dangerous.
Our goal is to build a crop protection system based on Artificial Intelligence, called the “AI Based Scarecrow”. The system detects animals entering the crop field and plays a repellent sound that the detected animal fears most. We use the YOLOv3 algorithm, a real-time object detection algorithm, to detect animals in the live video; the cv2 module is used to capture video and the playsound module is used to play the repellent sound. We use the coco.names dataset to identify the animals detected by YOLOv3 and their names. Its accuracy is higher than that of existing systems.

1.2 MOTIVATION
Traditional farming has been, is, and will continue to be a manual and labor-intensive industry. … To address these challenges, efforts and research are under way to improve the quality and quantity of agricultural products by making them ‘connected’ and ‘intelligent’ through “smart farming”. A labor shortage, more stringent legislation, an increasing global population and a declining number of farmers force farmers to look for new solutions, and technologies such as the Internet of Things (IoT), Big Data and Analytics, Artificial Intelligence (AI) and Machine Learning (ML) are entering almost every industry. By monitoring the eating habits of livestock, and thus giving the livestock higher-quality food at more optimal times, the amount of CO2 emissions can be reduced. By implementing all of these measures, a great step can be made towards climate-smart agriculture, a form of sustainable smart farming.

1.3 PROBLEM STATEMENT


➢ One of the major social problems in present-day agriculture is the damage caused to crops by wild animals. A method is required to protect crops from wild animals and birds.
➢ Building physical barriers, using electric fences and similar measures are dangerous to animals. In order to prevent crop vandalization by animals, we use AI-based technology.

1.4 EXISTING SYSTEM


Manual methods such as constructing different kinds of fences and using natural repellents are effective, but they are not cost-efficient, and it is not possible to keep increasing the manpower. So, initial projects were taken up to drive away the animals automatically by using hardware components such as controllers and sensors.
One such approach interfaces a camera with a Raspberry Pi module. The camera captures an image of the wild animal and sends the captured image to the Raspberry Pi, which compares it with images in a database. After comparison, if a wild animal is detected, the Raspberry Pi commands a GSM module, which is used to send a message to the owner of the farm. The problem here is the large number of hardware components, which is not cost-efficient and is also hard to maintain.
Another approach is an Arduino Uno based system using a microcontroller. This system uses a motion sensor to detect wild animals approaching the field; in such a case the sensor signals the microcontroller to take action. The microcontroller then sounds an alarm to drive the animals away from the field and sends an SMS to the farmer so that the farmer knows about the issue. In recent times, research has been taken up to solve this problem using Artificial Intelligence.
Limitations of Existing System:
• Electric fences are dangerous to animals and humans.
• IoT-based sensor monitoring does not provide accurate results.

1.5 PROPOSED SYSTEM


Our goal is to build a crop protection system based on Artificial Intelligence, called the “AI Based Scarecrow”. The system detects animals entering the crop field and plays a repellent sound that the detected animal fears most. We use the YOLOv3 algorithm, a real-time object detection algorithm, to detect animals in the live video; the cv2 module is used to capture video and the playsound module is used to play the repellent sound. We use the coco.names dataset to identify the animals detected by YOLOv3 and their names. Its accuracy is higher than that of existing systems.

CHAPTER-2
LITERATURE SURVEY

2.1 LITERATURE SURVEY


The purpose of animal detection systems on roads is to prevent or reduce the number of animal-vehicle collisions. Such systems are specifically aimed at the wild animals that can cause human death, injury and property damage, and they detect the wild animals before they enter the road. Historically, animal-vehicle collisions have been addressed by putting up signs that warn people of potential animal crossings. In other cases, wildlife warning reflectors or wildlife fences have been installed to keep animals away from the road, and in some selected areas wildlife fencing has been combined with a series of wildlife crossing structures. When we look at images or videos, we can easily locate and identify the objects of our interest within moments. Object detection has found application in a wide variety of domains such as video surveillance, image retrieval systems, autonomous driving vehicles and many more. Various algorithms can be used for object detection, but we will be focusing on the YOLOv3 algorithm.
1. TITLE: IOT BASED SMART AGRICULTURE- 2022
Authors: PROF.N.GONDACHWAR, DR.R.S.KAWITKAR,
Electronics and Telecommunication, college of engineering Pune.
Findings: The sensors and microcontroller of three nodes are successfully interfaced with a Raspberry Pi, and wireless communication is achieved between the various nodes. All observations and experimental tests prove that the project is a complete solution to field activities, irrigation problems and storage problems, using a remote-controlled smart robot irrigation system and a smart warehouse management system respectively. Implementation of such a system in the field can definitely help to improve the crops and overall production. When things like household appliances are connected to a network, they can work together in cooperation to provide the ideal service as a whole, not as a collection of independently working devices. This is useful for many real-world applications and services; one could, for example, apply it to build a smart residence. Automated inspection has therefore become a popular substitute for human inspection, as machines are good at repeating the same task without fatigue; to date, most automated crop inspection machines are realized using visual inspection systems.

2. TITLE: IOT BASED SMART CROP MONITORING IN FARM LAND- 2022
Authors: N.GOWTHAMAN, V.NANDINI, S.MITHRA, N.PRIYA– Assistant
professor, Department of ECE, SNS college of Technology, Coimbatore.
TN- INDIA- UG Student Department - ECE, SNS college of Technology TN-INDIA.
Findings: In this paper a method for efficient crop monitoring of an agricultural field is proposed. With the application of IoT, the data can be stored and retrieved from anywhere; in this proposed work, the sensor part is limited to crop monitoring only. Traditionally, farming is watched over by the human eye, with farms protected by scarecrows made of grass pots acting as inspectors. However, the accuracy of human inspection is unstable due to fatigue, and it is challenging for human eyes to detect bird attacks on the crops; to avoid this, people usually use scarecrows or ropes to protect the crops. Automated inspection has therefore become a popular substitute for human inspection, as machines are good at repeating the same task without fatigue; to date, most automated crop inspection machines are realized using visual inspection systems.
3. ANIMAL REPELLENT SYSTEM FOR SMART FARMING USING AI AND
DEEP LEARNING- 2022
Authors: Mathanraj s1, Akshay k2, Muthukrishnan R3
Agricultural farm safety is much needed these days. To achieve this, a vision-based system is proposed and implemented using Python and OpenCV, and an animal repellent device is developed to drive away the animals. The implementation required the design and development of a complex device for smart animal deterrence, which integrates recently developed software components and makes it possible to recognize the presence and species of animals in real time and thereby avoid crop damage caused by the animals. Based on the classification of the recognized animal, the edge computing unit executes its DCNN animal recognition model to locate the target and, if an animal is detected, sends a message back to the animal repulsor module along with the type of ultrasound to generate for that class of animal. The proposed CNN was evaluated on the animal data set that was created. The overall performance was measured using different numbers of training and testing images. The experimental results of the conducted analyses show that the proposed CNN gives a good recognition rate over a wide range of input training images (accuracy of around 98%). This work provides a real-time monitoring solution based on AI to deal with the problem of crop damage caused by animals.
4. DESIGN, DEVELOPMENT AND EVALUATION OF AN INTELLIGENT
ANIMAL REPELLING SYSTEM FOR CROP PROTECTION BASED ON
EMBEDDED EDGE-AI- 2021
Authors: Davide Adami, Mike Ojo, Stefano Giordano
In recent years, edge computing has become an essential technology for real-time
application development by moving processing and storage capabilities close to end
devices, thereby reducing latency, improving response time and ensuring secure data
exchange. In this work, we focus on a Smart Agriculture application that aims to protect
crops from ungulate attacks, and therefore to significantly reduce production losses,
through the creation of virtual fences that take advantage of computer vision and
ultrasound emission. Starting with an innovative device capable of generating
ultrasound to drive away ungulates and thus protect crops from their attack, this work
provides a comprehensive description of the design, development and assessment of an intelligent animal repulsion system that can detect and recognize ungulates and generate ultrasonic signals tailored to each ungulate species. Taking into
account the constraints coming from the rural environment in terms of energy supply
and network connectivity, the proposed system is based on IoT platforms that provide
a satisfactory compromise between performance, cost and energy consumption. More
specifically, in this work, we deployed and evaluated various edge computing devices
(Raspberry Pi, with or without a neural compute stick, and NVIDIA Jetson Nano)
running real-time object detector (YOLO and Tiny-YOLO) with custom-trained
models to identify the most suitable animal recognition HW/SW platform to be
integrated with the ultrasound generator. Experimental results show the feasibility of
the intelligent animal repelling system through the deployment of the animal detectors
on power efficient edge computing devices without compromising the mean average
precision and also satisfying real-time requirements. In addition, for each HW/SW
platform, the experimental study provides a cost/performance analysis, as well as
measurements of the average and peak CPU temperature. Best practices are also
discussed, and lastly the article explains how the combined technologies can help farmers and agronomists in their decision-making and management processes.

S.No | Title | Year | Description
1 | IoT Based Smart Agriculture | 2022 | The sensors and microcontroller of three nodes are successfully interfaced with a Raspberry Pi, and wireless communication is achieved between the various nodes.
2 | IoT Based Smart Crop Monitoring in Farm Land | 2022 | With the application of IoT, the data can be stored and retrieved from anywhere; in this proposed work, the sensor part is limited to crop monitoring only.
3 | Animal Repellent System for Smart Farming Using AI and Deep Learning | 2022 | DCNN and RPN algorithms are used to identify the animal species; ultrasonic waves are then generated to annoy the animals and make them run far away from the fields.
4 | Design, Development and Evaluation of an Intelligent Animal Repelling System for Crop Protection Based on Embedded Edge-AI | 2021 | Various edge computing devices (Raspberry Pi, with or without a neural compute stick, and NVIDIA Jetson Nano) running real-time object detectors (YOLO and Tiny-YOLO) with custom-trained models were deployed and evaluated to identify the most suitable animal recognition HW/SW platform to be integrated with the ultrasound generator. Experimental results show the feasibility of deploying the animal detectors on power-efficient edge computing devices without compromising the mean average precision while satisfying real-time requirements.

Table 2.1 Literature Survey

2.2 SOFTWARE REQUIREMENT SPECIFICATION
2.2.1 Hardware Requirements
The hardware requirements may serve as the basis for a contract for the implementation
of the system and should therefore be a complete and consistent specification of the
whole system. They are used by software engineers as the starting point for the system
design. It shows what the system does and not how it should be implemented.
• Windows 10
• Intel® Core™ i5 @ 1.70 GHz processor
• 4 GB RAM
• Web Camera or external camera

2.2.2 Software Requirements


The software requirements document is the specification of the system. It should
include both a definition and a specification of requirements. It is a set of what the
system should do rather than how it should do it. The software requirements provide a
basis for creating the software requirements specification. It is useful in estimating cost, planning team activities, performing tasks and tracking the team’s progress throughout the development activity.
• Python
• Open CV
• Playsound
• Numpy

CHAPTER-3
METHODOLOGY AND DESIGN ANALYSIS

3.1 METHODOLOGY
• Firstly, the webcam feed is captured with the help of OpenCV.
• Using the YOLOv3 algorithm, the animal is detected in each frame.
• Using the coco.names dataset, the name of the animal is identified.
• After the animal is identified, a repellent sound is produced by the playsound module.
• If an animal is detected, the sound is played.
• Otherwise, the process is terminated.
A minimal sketch of this pipeline is shown below; the full program is listed in Chapter 4.
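The sketch assumes the standard yolov3.cfg, yolov3.weights and coco.names files are available in the working directory and uses a placeholder sound file name, repellent.mp3; it is only an outline of the flow, not the project's complete code.

import cv2
import numpy as np
from playsound import playsound

# Load class labels and the pre-trained YOLOv3 network (file names assumed).
classes = open('coco.names').read().strip().split('\n')
net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
out_layers = [net.getLayerNames()[i - 1] for i in net.getUnconnectedOutLayers()]

# COCO animal classes that should trigger the repellent sound.
ANIMALS = {'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
           'elephant', 'bear', 'zebra', 'giraffe'}

cap = cv2.VideoCapture(0)                                   # step 1: capture from webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 1 / 255, (320, 320), swapRB=True, crop=False)
    net.setInput(blob)
    detections = np.vstack(net.forward(out_layers))         # step 2: run YOLOv3 on the frame
    for det in detections:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5 and classes[class_id] in ANIMALS:
            print('Detected:', classes[class_id])           # step 3: identify the animal name
            playsound('repellent.mp3')                       # step 4: play the repellent sound
    cv2.imshow('preview', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):                    # press q to terminate
        break
cap.release()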

3.2 MODULES
3.2.1 Monitoring Module
• In this project, the entire farm is monitored at regular intervals through a camera that records the surroundings throughout the day.
• The module successfully runs and captures the video with the camera.
3.2.2 Detection Module
• We use the YOLOv3 algorithm, a real-time object detection algorithm that detects animals in the live video.
• The captured video is divided into frames and each frame is analysed. If an animal is detected in a frame, the animal and its name are identified with the help of the coco.names dataset, and detection of the animal is successfully done.
3.2.3 Alert Module
• After the name of the animal is identified, a sound is produced based on that name with the help of the playsound module.
• This sound helps to drive the animal away from the field.

3.3 SYSTEM ARCHITECTURE
A system architecture is the conceptual model that defines the structure, behavior, and other views of a system. An architecture description is a formal description and
representation of a system, organized in a way that supports reasoning about the
structures and behaviors of the system. A system architecture can consist of
system components and the sub-systems developed, that will work together to
implement the overall system. There have been efforts to formalize languages to
describe system architecture, collectively these are called architecture description
languages (ADLs).

Fig 3.1 System Architecture

3.4 SYSTEM DESIGN
3.4.1 UML Diagrams
The Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created, by the Object Management Group (OMG). It was first added to the list of OMG adopted technologies in 1997 and has since become the industry standard for modeling software-intensive systems. UML is a language that provides a vocabulary and the rules for combining words in that vocabulary for the purpose of communication. A modeling language is a language whose vocabulary and rules focus on the conceptual and physical representation of a system. Modeling yields an understanding of a system. The UML is used to specify, visualize, modify, construct and document the artifacts of an object-oriented, software-intensive system under development. UML is an essential part of developing object-oriented software and the software development process, and it uses mostly graphical notations to express the design of software projects.
GOALS: The primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the object-oriented tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and components.
7. Integrate best practices.
UML is a general-purpose, developmental modeling language in the field of software engineering that is intended to provide a standard way to visualize the design of a system.

3.4.1.1 Use case diagram:
A use case diagram in the Unified Modelling Language (UML) is a type of behavioural
diagram defined by and created from a Use-case analysis. A use case diagram is used
to represent the dynamic behavior of a system. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals
(represented as use cases), and any dependencies between those use cases. The main
purpose of a use case diagram is to show what system functions are performed for which
actor. Roles of the actors in the system can be depicted.

Fig 3.2 Use case diagram

Description:
Our use case diagram shows that the camera captures real-time video of the farm at regular intervals. The system then extracts the features and, using the algorithm and dataset, identifies the animal and its name. According to the name of the animal, the corresponding sound is produced, and the animal that entered the field hears it. In this way, the animal is driven away.

3.4.1.2 Activity Diagram
Activity diagrams are graphical representations of workflows of stepwise activities and
actions with support for choice, iteration and concurrency. In the Unified Modelling
Language, activity diagrams can be used to describe the business and operational step-
by-step workflows of components in a system. An activity diagram shows the overall
flow of control. It is also termed as an object-oriented flowchart. It encompasses
activities composed of a set of actions or operations that are applied to model the
behavioral diagram.

Fig 3.3 Activity Diagram

Description:
In our activity diagram, the input data is first taken and analysed. It is compared with the training dataset and the input data is tested. After testing, the result is obtained according to the input data, i.e. a sound is produced corresponding to the animal that is detected.

3.4.1.3 Sequence Diagram
A sequence diagram in Unified Modelling Language (UML) is a kind of interaction
diagram that shows how processes operate with one another and in what order. The
sequence diagram represents the flow of messages in the system and is also termed as
an event diagram. It helps in envisioning several dynamic scenarios. It portrays the
communication between any two lifelines as a time-ordered sequence of events, such
that these lifelines take part in at run time. In UML, a lifeline is shown as a vertical dashed line extending down the page from the object it represents, and messages are shown as horizontal arrows between lifelines. It incorporates iterations as well as branching. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams.

Fig 3.4 Sequence Diagram

Description:
In our sequence diagram, the input data is passed to the application and then to the algorithm. In the algorithm, the data is analysed and compared with the training data. After testing, the appropriate results are taken as the output. The dataset stores the animal names; after the captured image is taken as input, a sound is produced according to the animal's name to drive the animal away. This is shown in the sequence diagram above.

3.4.1.4 Class Diagram
In software engineering, a class diagram in the Unified Modelling Language (UML) is
a type of static structure diagram that describes the structure of a system by showing
the system's classes, their attributes, operations (or methods), and the relationships
among the classes. It explains which class contains which information. Class diagrams are the blueprints of your system or subsystem and are useful in many stages of system design.

Fig 3.5 Class Diagram


Description:
In this class diagram, we have various classes with their methods and attributes. The webcam class has a findobj() method, which is used to find the object and its coordinates. It is linked to the image module, in which the configuration and weights are applied; using the trained model and coco.names, the animal's name is identified and the corresponding sound is produced.

3.4.1.5 Collaboration Diagram
The collaboration diagram is used to show the relationship between the objects in a
system. Both the sequence and the collaboration diagrams represent the same
information but differently. Instead of showing the flow of messages, it depicts the
architecture of the object residing in the system as it is based on object-oriented
programming. An object consists of several features. Multiple objects present in the
system are connected to each other. The collaboration diagram, which is also known as
a communication diagram, is used to portray the object's architecture in the system. The
collaborations are used when it is essential to depict the relationship between objects. Collaboration diagrams are best suited for analysing use cases.

Fig 3.6 Collaboration Diagram

Description:
In the collaboration diagram, the camera input is taken by capturing an image. Pre-processing is then done by computing the configuration and weights for the input. From the dataset, the system checks whether it is an animal and identifies its name. If an animal is detected, the sound is produced and the animal receives the alert. If no animal is detected, the system returns to the camera input and captures the next image. This process continues as long as the camera is on, so the system guards the field around the clock.

3.4.2 Flow Chart
A flowchart is a type of diagram that represents a workflow or process. A flowchart can
also be defined as a diagrammatic representation of an algorithm, a step-by-step
approach to solving a task. First, the image is taken as input to the YOLOv3 algorithm and the network is configured with its configuration file and weights. The detections are then mapped to the coco.names dataset and the name of the object is obtained. If it is an animal, a repellent sound is produced; otherwise the next frame is fed as input.

Fig 3.7 Flow chart of animal detection

3.5 ANALYSIS
Strengths
• Well aimed and Precise
• No need of Human Attention
• Causes No harm to Animal and Human

Weaknesses
• Cannot identify very small animals
• Requires more memory and a powerful GPU

Opportunities
• High requirement for Crop protection
• More attention towards AI in Agriculture

Threats
• Easily prone to physical damage
• Rain can damage the system

Applications
• This project will help farmers protect their crops and fields, save them from significant financial losses, and spare them the unproductive efforts they endure for the protection of the fields.
• It will also help them achieve better crop yields, thus contributing to their economic well-being.

CHAPTER-4
IMPLEMENTATION

4.1 INTRODUCTION TO ARTIFICIAL INTELLIGENCE


Artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the
project of developing systems endowed with the intellectual processes characteristic of
humans, such as the ability to reason, discover meaning, generalize, or learn from past
experience. Since the development of the digital computer in the 1940s, it has been
demonstrated that computers can be programmed to carry out very complex tasks—as,
for example, discovering proofs for mathematical theorems or playing chess—with
great proficiency. Still, despite continuing advances in computer processing speed and
memory capacity, there are as yet no programs that can match human flexibility over
wider domains or in tasks requiring much everyday knowledge. On the other hand,
some programs have attained the performance levels of human experts and
professionals in performing certain specific tasks, so that artificial intelligence in this
limited sense is found in applications as diverse as medical diagnosis, computer search
engines, and voice or handwriting recognition.
What is intelligence?
All but the simplest human behaviour is ascribed to intelligence, while even the most
complicated insect behaviour is never taken as an indication of intelligence. What is the
difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the
female wasp returns to her burrow with food, she first deposits it on the threshold,
checks for intruders inside her burrow, and only then, if the coast is clear, carries her
food inside. The real nature of the wasp’s instinctual behaviour is revealed if the food
is moved a few inches away from the entrance to her burrow while she is inside: on
emerging, she will repeat the whole procedure as often as the food is displaced.
Intelligence—conspicuously absent in the case of Sphex—must include the ability to
adapt to new circumstances.
Psychologists generally do not characterize human intelligence by just one trait but by
the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.
Learning
There are a number of different forms of learning as applied to artificial intelligence.
The simplest is learning by trial and error. For example, a simple computer program for
solving mate-in-one chess problems might try moves at random until mate is found.
The program might then store the solution with the position so that the next time the
computer encountered the same position it would recall the solution. This simple
memorizing of individual items and procedures—known as rote learning—is relatively
easy to implement on a computer.
Reasoning
To reason is to draw inferences appropriate to the situation. Inferences are classified as
either deductive or inductive. An example of the former is, “Fred must be in either the
museum or the café. He is not in the café; therefore, he is in the museum,” and of the
latter, “Previous accidents of this sort were caused by instrument failure; therefore this
accident was caused by instrument failure.” The most significant difference between
these forms of reasoning is that in the deductive case the truth of
the premises guarantees the truth of the conclusion, whereas in the inductive case the
truth of the premise lends support to the conclusion without giving absolute assurance.
Inductive reasoning is common in science, where data are collected and tentative
models are developed to describe and predict future behaviour—until the appearance
of anomalous data forces the model to be revised. Deductive reasoning is common
in mathematics and logic, where elaborate structures of irrefutable theorems are built
up from a small set of basic axioms and rules.
Problem solving
Problem solving, particularly in artificial intelligence, may be characterized as a
systematic search through a range of possible actions in order to reach some predefined
goal or solution. Problem-solving methods divide into special purpose and general
purpose. A special-purpose method is tailor-made for a particular problem and often
exploits very specific features of the situation in which the problem is embedded. In
contrast, a general-purpose method is applicable to a wide variety of problems. One
general-purpose technique used in AI is means-end analysis—a step-by-step,
or incremental, reduction of the difference between the current state and the final goal.
Some examples are finding the winning move (or sequence of moves) in a board game,
devising mathematical proofs, and manipulating “virtual objects” in a computer-generated world.
Perception
In perception the environment is scanned by means of various sensory organs, real or
artificial, and the scene is decomposed into separate objects in various spatial
relationships. Analysis is complicated by the fact that an object may appear different
depending on the angle from which it is viewed, the direction and intensity of
illumination in the scene, and how much the object contrasts with the surrounding field.
At present, artificial perception is sufficiently well advanced to enable optical sensors
to identify individuals, autonomous vehicles to drive at moderate speeds on the open
road, and robots to roam through buildings collecting empty soda cans. One of the
earliest systems to integrate perception and action was Freddy, a stationary robot with
a moving television eye and a pincer hand, constructed at the University of Edinburgh,
Scotland, during the period 1966–73 under the direction of Donald Michie. Freddy was able to recognize a variety of objects and could be instructed to assemble simple artifacts, such as a toy car, from a random heap of components.
Language
A language is a system of signs having meaning by convention. In this sense, language
need not be confined to the spoken word. Traffic signs, for example, form a
minilanguage, it being a matter of convention that ⚠ means “hazard ahead” in some
countries. It is distinctive of languages that linguistic units possess meaning by
convention, and linguistic meaning is very different from what is called natural
meaning, exemplified in statements such as “Those clouds mean rain” and “The fall in
pressure means the valve is malfunctioning.”

Machine Learning
Before we take a look at the details of various machine learning methods, let's start by
looking at what machine learning is, and what it isn't. Machine learning is often
categorized as a subfield of artificial intelligence, but I find that categorization can often
be misleading at first brush. The study of machine learning certainly arose from
research in this context, but in the data science application of machine learning
methods, it's more helpful to think of machine learning as a means of building models
of data. Fundamentally, machine learning involves building mathematical models to
help understand data. Machine learning is a sub field of artificial intelligence, which is
broadly defined as the capability of a machine to imitate intelligent human behaviour.
Artificial intelligence systems are used to perform complex tasks in a way that is similar
to how humans solve problems. "Learning" enters the fray when we give these models tunable parameters that can be adapted to observed data; in this way the program can be considered to be "learning" from the data. Once these models have been fit to previously seen data, they can be used to predict and understand aspects of newly observed data, and understanding the problem setting in machine learning is essential to using these tools effectively. Machine learning is one of the most rapidly growing technologies, and according to researchers we are in the golden years of AI and ML. The Python programming language fits machine learning well because it is platform-independent, and it is used to solve many complex real-world problems that cannot be solved with a traditional approach.
Machine learning enables computers to solve tasks without being explicitly
programmed to solve them. State-of-the-art methods teach machines via supervised
learning (i.e., by showing them correct pairs of inputs and outputs. For example, when
classifying images, the machine is trained with many pairs of images and their
corresponding labels, where the image is the input and its correct label (e.g., “buffalo”)
is the output.
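As a toy illustration of this input-output-pair structure (with made-up numeric features rather than real images), the hedged sketch below "learns" one centroid per class from labelled training pairs and predicts the label of a new input by nearest centroid; it is not part of this project's code.

import numpy as np

# Toy training pairs: 2-D feature vectors (inputs) with class labels (outputs).
X_train = np.array([[1.0, 1.2], [0.9, 1.1], [4.0, 4.2], [4.1, 3.9]])
y_train = np.array(['buffalo', 'buffalo', 'bird', 'bird'])

# "Learning": store one centroid (mean feature vector) per class.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}

# Prediction: assign a new input to the class with the nearest centroid.
x_new = np.array([1.1, 1.0])
pred = min(centroids, key=lambda c: np.linalg.norm(x_new - centroids[c]))
print(pred)   # -> 'buffalo'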

Deep Learning Approach


A deep approach to learning concentrates on the meaning of what is learned. That
concentration may involve testing the material against general knowledge, everyday
experience, and knowledge from other fields or courses. A student taking a deep
approach seeks principles to organize information. In contrast a student using a surface
approach tries to capture material in total, rather than understand it. An example is the
student who busily copies down a diagram without listening to the explanation of it. As
the above problem stated is still prevailing despite of all the methods taken, we
approached the problem using deep learning to drive away the animals automatically.
In our project, we used packages like Kera’s and Play sound to do the pre-processing
steps involved and to create. Here, the input is received from the CCTV (Closed Circuit
Television), the code does the processing and prediction of the frames received from
the camera and appropriate repellent sound is played to drive away the detected animal.

Object Detection
Object Detection is a computer technology related to computer vision and image
processing that deals with detecting instances of semantic objects of a certain class
(such as humans, buildings, or cars) in digital images and videos.[1] Well-researched
domains of object detection include face detection and pedestrian detection. Object
detection has applications in many areas of computer vision, including image
retrieval and video surveillance.
When we look at images or videos, we can easily locate and identify the objects of our
interest within moments. Passing on of this intelligence to computers is nothing but
object detection — locating the object and identifying it. Object Detection has found its
application in a wide variety of domains such as video surveillance, image retrieval
systems, autonomous driving vehicles and many more. Various algorithms can be used
for object detection but we will be focusing on YoloV3 algorithm.

4.2 INTRODUCTION TO PYTHON


Python is a popular, multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of its features support functional programming. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It also features dynamic name resolution (late binding), which binds method and variable names during program
Often, programmers fall in love with Python because of the increased productivity it
provides. Since there is no compilation step, the edit-test-debug cycle is incredibly fast.
Debugging Python programs is easy: a bug or bad input will never cause a segmentation
fault. Instead, when the interpreter discovers an error, it raises an exception. When the
program doesn't catch the exception, the interpreter prints a stack trace. A source level
debugger allows inspection of local and global variables, evaluation of arbitrary
expressions, setting breakpoints, stepping through the code a line at a time, and so on.
The debugger is written in Python itself, testifying to Python's introspective power. On
the other hand, often the quickest way to debug a program is to add a few print
statements to the source: the fast edit-test-debug cycle makes this simple approach very
effective.

Python is dynamically typed and garbage-collected, and it has dynamic semantics. We used Python to achieve the goal of animal detection. Python makes every implementation easy to understand; writing the same code in another language would be more difficult, whereas Python provides a lot of support through external libraries and is easy to run. It also has user-friendly data structures, which make storing and manipulating the binary image representations and detected frames easy. We used libraries such as OpenCV, NumPy and playsound.

Open CV
OpenCV is a huge open-source library for computer vision, machine learning, and
image processing. OpenCV supports a wide variety of programming languages like
Python, C++, Java, etc. OpenCV helps you learn image processing from basics to advanced topics, such as operations on images and videos, through a huge set of OpenCV programs and projects. The CV component contains mainly the basic image processing and higher-level computer vision algorithms; MLL, the machine learning library,
includes many statistical classifiers as well as clustering tools.
Computer vision is an approach to understanding how photos and movies are stored, as
well as manipulating and extracting information from them. Artificial Intelligence
depends on or is mostly based on computer vision. Self-driving cars, robotics, and
picture editing apps all rely heavily on computer vision.
Human vision has a resemblance to that of computer vision. Human vision learns from
the various life experiences and deploys them to distinguish objects and interpret the
distance between various objects and estimate the relative position.
The main features to choose OpenCV:
• Specific: OpenCV was designed for image processing; every function and data structure has been designed with an image processing application in mind. Matlab, by contrast, is quite generic: you can get almost anything by means of toolboxes, whether financial toolboxes or specialized DNA toolboxes.
• Speedy: After doing some real-time image processing with both Matlab and OpenCV, we usually got very low speeds with Matlab, a maximum of about 4-5 frames processed per second. With OpenCV, however, we get actual real-time processing at around 30 frames per second.

• Efficient: With OpenCV, we can get away with as little as 10 MB of RAM for a real-time application. Although with today’s computers the RAM factor is not a big concern, our animal detection system is meant to run continuously in the field on modest hardware, so a low processing requirement is vital.
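As a minimal illustration of how OpenCV is used for the monitoring part of this project, the hedged sketch below opens the default webcam (camera index 0 is an assumption) and displays frames until the q key is pressed.

import cv2

cap = cv2.VideoCapture(0)                    # open the default camera (index 0 assumed)
while cap.isOpened():
    ok, frame = cap.read()                   # grab one frame as a NumPy array (BGR order)
    if not ok:
        break
    cv2.imshow('farm monitor', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):    # press q to stop monitoring
        break
cap.release()
cv2.destroyAllWindows()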

Numpy
NumPy is a Python library used for working with arrays. It also has functions for
working in domain of linear algebra, fourier transform, and matrices. NumPy was
created in 2005 by Travis Oliphant. It is an open source project and you can use it freely.
NumPy stands for Numerical Python. Numpy is core library of scientific computing in
python. It provides powerful tools to deal with various multi-dimensional arrays in
python. NumPy is a library for the Python programming language, adding support for
large, multi- dimensional arrays and matrices, along with a large collection of high-level
mathematical functions to operate on these arrays. NumPy is open-source software and
has many contributors. Mathematical algorithms written in pure Python often run much slower than compiled equivalents. NumPy addresses this slowness partly by providing multidimensional arrays, along with functions and operators that operate efficiently on arrays; taking advantage of this requires rewriting some code, mostly inner loops, using NumPy. This allows the user to write fast programs as long as most operations work on arrays or matrices instead of scalars. Using the NumPy array as a universal data structure in OpenCV for images, extracted feature points, filter kernels and more vastly simplifies the programming workflow and debugging.
Working with machine learning and deep learning applications involve complex
numerical operations with large datasets. NumPy makes implementing these operations
relatively simple and effective when compared to their pure Python implementation.
At its core, NumPy implements its ndarray (n-dimensional array) data structure, which is similar to a regular Python list. Most programming languages only have the concept of arrays; Python implements lists, which can work as arrays, but there are differences.
Operations using Numpy are:
• Mathematical and logical operations on arrays.
• Fourier transforms and routines for shape manipulations.

NumPy stores the images in arrays, which makes looping over the images easy and efficient. Reshaping the frames properly to an exact size was also done with the help of NumPy.
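For instance, every frame captured by OpenCV is already a NumPy array, so its shape, data type and pixel statistics can be inspected directly; the short sketch below assumes a webcam is available, as in the earlier capture example.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()                           # frame is a NumPy ndarray (BGR)
cap.release()
if ok:
    print(frame.shape, frame.dtype)              # e.g. (480, 640, 3) uint8
    resized = cv2.resize(frame, (320, 320))      # resize to the YOLOv3 input size
    print(np.mean(resized, axis=(0, 1)))         # mean intensity per colour channel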

Playsound
The playsound module contains a single function, playsound(). It requires one argument: the path to the file with the sound you’d like to play, which may be a local file or a URL. There is an optional second argument, block, which is set to True by default; setting it to False makes the function run asynchronously. On Windows, WAVE and MP3 files have been tested and are known to work, and other file formats may work as well. In this project we used playsound to output the alert sound when an animal is detected. Playsound makes it easy to attach an alarm sound audio file, and there is no delay in producing the output alert sound with the help of the playsound library.
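A minimal usage sketch is given below; alert.mp3 is a placeholder name for one of the repellent sound files used in this project.

from playsound import playsound

# Blocking call: returns only after the clip has finished playing.
playsound('alert.mp3')

# Non-blocking call: detection can continue while the sound plays.
playsound('alert.mp3', block=False)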

YOLOv3 Algorithm:
YOLOv3 (You Only Look Once, Version 3) is a real-time object detection algorithm
that identifies specific objects in videos, live feeds, or images. YOLO uses features
learned by a Deep Convolutional Neural Network to detect an object. Versions 1-3 of
YOLO were created by Joseph Redmon and Ali Farhadi.
The first version of YOLO was created in 2016, and version 3, which is used in this project, was released two years later in 2018. YOLOv3 is an improved version of YOLO and YOLOv2, and it can be implemented using the Keras or OpenCV deep learning libraries.
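In this project the network is loaded through OpenCV's dnn module, as sketched below; the configuration and weight file names follow the standard Darknet release, and field.jpg is a placeholder test image.

import cv2

# Load the Darknet configuration and the pre-trained COCO weights.
net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

img = cv2.imread('field.jpg')                                   # placeholder test image
blob = cv2.dnn.blobFromImage(img, 1 / 255, (320, 320), swapRB=True, crop=False)
net.setInput(blob)

# Forward pass through the YOLO output layers.
layer_names = net.getLayerNames()
out_names = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
outputs = net.forward(out_names)
print([o.shape for o in outputs])   # each row: 4 box coords + objectness + 80 class scores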

Coco.names Dataset:
A COCO dataset consists of five sections of information that describe the entire dataset: info (general information about the dataset), licenses (license information for the images), images, annotations and categories; the format for a COCO object detection dataset is documented in the COCO data format specification. The COCO dataset, meaning “Common Objects in Context”, is a set of challenging, high-quality datasets for computer vision, mostly used with state-of-the-art neural networks, and the name is also used for the annotation format used by those datasets. The coco.names file used in this project simply lists the class labels of the COCO detection task, one per line.
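The sketch below reads coco.names and lists the animal labels that the detection code reacts to; the animal set mirrors the classes handled in the sample code in Section 4.3.

# Read the 80 COCO class labels, one per line.
with open('coco.names', 'rt') as f:
    class_names = f.read().rstrip('\n').split('\n')

animals = {'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
           'elephant', 'bear', 'zebra', 'giraffe'}
print(len(class_names))                           # 80 classes in total
print([c for c in class_names if c in animals])   # animal classes used for alerts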

4.3 CODE TEMPLATES

Fig 4.1 Screen shots of code

Sample Code:
import os
import cv2
import numpy as np
from playsound import playsound

cap = cv2.VideoCapture(0)          # open the default webcam
wht = 320                          # YOLOv3 input width/height
confThreshold = 0.5                # minimum confidence to accept a detection
nmsThreshold = 0.3                 # non-maximum suppression threshold

# Load the COCO class labels, one per line.
classesFile = 'coco.names'
with open(classesFile, 'rt') as f:
    classNames = f.read().rstrip('\n').split('\n')

# Load the YOLOv3 network from its Darknet configuration and weights.
modelConfi = 'yolov3.cfg'
modelwei = 'yolov3.weights'
net = cv2.dnn.readNetFromDarknet(modelConfi, modelwei)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

SOUND_DIR = 'C:\\Users\\jeswa\\PycharmProjects\\Scarecrow\\sounds'


def findobj(outputs, img):
    """Draw boxes around detected objects and play a repellent sound for animals."""
    ht, wt, ct = img.shape
    bbox, classIds, confs = [], [], []
    for output in outputs:
        for det in output:
            scores = det[5:]
            classId = np.argmax(scores)
            confidence = scores[classId]
            if confidence > confThreshold:
                w, h = int(det[2] * wt), int(det[3] * ht)
                x, y = int((det[0] * wt) - w / 2), int((det[1] * ht) - h / 2)
                bbox.append([x, y, w, h])
                classIds.append(classId)
                confs.append(float(confidence))
    # Suppress overlapping boxes and keep only the strongest detections.
    indices = cv2.dnn.NMSBoxes(bbox, confs, confThreshold, nmsThreshold)
    for i in indices:
        x, y, w, h = bbox[i]
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 0), 2)
        cv2.putText(img, f'{classNames[classIds[i]].upper()} {int(confs[i] * 100)}%',
                    (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 255, 255), 3)
        name = classNames[classIds[i]]
        # Play a different repellent sound depending on the detected animal.
        if name in ('horse', 'elephant', 'zebra', 'giraffe'):
            playsound(os.path.join(SOUND_DIR, 'CrackersSound.mp3'))
        elif name in ('cat', 'dog', 'sheep'):
            playsound(os.path.join(SOUND_DIR, 'LionRoarSound.mp3'))
        elif name == 'bird':
            playsound(os.path.join(SOUND_DIR, 'OwlSound.mp3'))
        elif name == 'cow':
            playsound(os.path.join(SOUND_DIR, 'RunningSound.mp3'))
        elif name == 'bear':
            playsound(os.path.join(SOUND_DIR, 'PanSound.mp3'))


while True:
    success, img = cap.read()
    if not success:
        break
    # Convert the frame to a blob and run it through the network.
    blob = cv2.dnn.blobFromImage(img, 1 / 255, (wht, wht), [0, 0, 0], 1, crop=False)
    net.setInput(blob)
    layer_names = net.getLayerNames()
    outputnames = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
    outputs = net.forward(outputnames)
    findobj(outputs, img)
    cv2.imshow('image', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):    # press q to stop monitoring
        break

cap.release()
cv2.destroyAllWindows()

4.4 TESTING
4.4.1 Introduction
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system
meets its requirements and user expectations and does not fail in an unacceptable
manner. There are various types of tests. Each test type addresses a specific testing
requirement.

4.4.2 Developing Methodologies


The test process is initiated by developing a comprehensive plan to test the general
functionality and special features on a variety of platform combinations. Strict quality
control procedures are used. The process verifies that the application meets the
requirements specified in the system requirements document and is bug free. The
following are the considerations used to develop the framework from developing the
testing methodologies.

Build the Test Plan


Any project can be divided into units on which detailed processing can be performed. A testing strategy is then carried out for each of these units. Unit testing helps to identify possible bugs in the individual components, so that the components that have bugs can be identified and rectified.

4.4.3 TESTING STRATEGY
Types of Testing
1. Integration Testing
2. Unit Testing
3. Functional Testing
4. System Testing
5. White Box Testing
6. Black Box Testing
7. Acceptance Testing

INTEGRATION TESTING:
Integration tests are designed to test integrated software components to determine if
they actually run as one program. Testing is event driven and is more concerned with
the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically
aimed at exposing the problems that arise from the combination of components.

UNIT TESTING:
Unit testing involves the design of test cases that validate that the internal program logic
is functioning properly, and that program inputs produce valid outputs. All decision
branches and internal code flow should be validated. It is the testing of individual software units of the application. It is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of its
construction and is invasive. Unit tests perform basic tests at component level and test
a specific business process, application, and/or system configuration. Unit tests ensure
that each unique path of a business process performs accurately to the documented
specifications and contains clearly defined inputs and expected results.

FUNCTIONAL TESTING:
Functional tests provide systematic demonstrations that functions tested are available
as specified by the business and technical requirements, system documentation, and
user manual.

Functional testing is centered on the following items:
Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

SYSTEM TESTING:
System testing ensures that the entire integrated software system meets requirements.
It tests a configuration to ensure known and predictable results. An example of system
testing is the configuration-oriented system integration test. System testing is based on
process descriptions and flows, emphasizing pre-driven process links and integration
points.

WHITE BOX TESTING:


White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.

BLACK BOX TESTING:


Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black-box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is a kind of testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works. Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
ACCEPTANCE TESTING:
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

4.4.4 FEASIBILITY STUDY


The major step in analysis is to verify the feasibility of the proposed system. “All projects are feasible given unlimited resources and infinite time.” But in reality, both resources and time are scarce. Projects should prove to be time-effective and optimal in their consumption of resources. This plays a constant role in the approval of any project.
Three key considerations involved in the feasibility analysis are:
• Technical Feasibility
• Operational Feasibility
• Economical Feasibility

Technical Feasibility
This study is based on the compatibility of the proposed system with different working environments. It includes the software and hardware maintenance required for the successful running of the project. The organization should maintain the required software; this project uses Python and OpenCV-based technologies, which are widely employed worldwide.

Operational Feasibility
The developed application should be user-friendly and should satisfy all the client requirements gathered in the requirement analysis phase of the software development life cycle. This system is operationally feasible since the users only need to operate a computer with a camera, and hence there is no need to specially train the personnel.

Economical Feasibility
This deals with the cost analysis of the proposed project. The analysis is based on the effectiveness of the proposed system, which should attain maximum benefit within the estimated cost. The system requires only average computing capabilities and access to the internet, which are very basic requirements; hence it does not incur any additional economic overheads, which renders the system economically feasible.

4.5 TEST CASES
Test Case 1:
We have tested that the monitoring module runs perfectly. Camera monitoring is successful and the image of an animal was captured successfully. The configuration and weights were applied to the captured frame successfully, so this test case is a success.

Test Case 2:
We have tested that if the monitoring module is not running, i.e. the camera is not being monitored, then no images are captured and there is no need to compute the weights and configuration. This test case is also a success.

Test Case 3:
We have tested that if an animal is detected, the configuration and weights are applied successfully and the name is detected from the coco.names dataset; the sound corresponding to that particular animal is then produced. This test case also ran successfully.

Test Case 4:
We have tested that if no animal is detected, there is no need to apply the weights or look up the animal in the dataset, so no sound should be produced in this case. We checked whether any sound was being produced, and none was, so this test case is a success.

CHAPTER- 5
RESULT ANALYSIS

5.1 RESULTS
Implementation of animal detection with Python and OpenCV was done, which includes the following steps:
• The system successfully runs and captures the video with the camera.
• The captured video is divided into frames and each frame is analysed; detection of the animal is successfully done.
If an animal is detected in a captured frame, the system identifies the name of the animal and a sound is then produced based on that name with the help of the playsound module. This sound helps to drive the animal away from the field. During detection, the network weights and configuration (cfg) file are applied to each frame. A sample animal detection and the playing of the repellent sound are shown in the images below.

5.2 OUTPUT SCREENS
Animal detection and playing of the repellent sound

Fig 5.1 Screen shots of Output

CHAPTER -6
CONCLUSION AND FUTURE SCOPE

6.1 Conclusion
AI Based Scarecrow is a system that repels the wild animals trying to enter the field by playing sounds that they fear. It can therefore be concluded that we can recognize and turn away animals before they enter the field by playing various repelling sounds. The problem of crop vandalization by wild animals has become a major social problem at the present time and requires urgent attention and an effective solution.
At the same time, every farmer should keep in mind that animals are living beings and need to be protected from any potential suffering; this system drives them away without harming them. By doing so, we reduce crop loss and manpower. The project is very useful and affordable for the farmer. The module is not dangerous to animals or human beings, and it protects the farm. Thus, this project carries great social relevance, as it will help farmers protect their fields, save them from significant financial losses, and spare them the unproductive efforts they endure for the protection of their fields.
This helps ensure the safety of crops from the animals that damage them.

6.2 FUTURE SCOPE


We are using an integrative approach in the field of Artificial Intelligence to overcome this problem. The goal of this work is to provide a repelling and monitoring system for crop protection against animal attacks. In future work, we will extend the current functionality of the model, for example by increasing the dataset so as to achieve higher accuracy, and investigate the possibility of applying the model to other sectors.
The system could also be built into a robot that performs actions such as moving its arms to scare animals, and it could be extended to detect human intrusion in order to prevent the theft of crops.

BIBLIOGRAPHY

1. Analogy on YOLOv3. Wikipedia, Mar 2018.
2. M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
3. C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg. DSSD: Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659, 2017.
4. D. Gordon, A. Kembhavi, M. Rastegari, J. Redmon, D. Fox, and A. Farhadi. IQA: Visual question answering in interactive environments. arXiv preprint arXiv:1712.03316, 2017.
5. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
6. J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. Speed/accuracy trade-offs for modern convolutional object detectors.
7. I. Krasin, T. Duerig, N. Alldrin, V. Ferrari, S. Abu-El-Haija, A. Kuznetsova, H. Rom, J. Uijlings, S. Popov, A. Veit, S. Belongie, V. Gomes, A. Gupta, C. Sun, G. Chechik, D. Cai, Z. Feng, D. Narayanan, and K. Murphy. Open Images: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2017.

REFERENCES

Review: YOLOv3 — You Only Look Once (Object Detection) | by Sik-Ho Tsang | Towards Data Science

Getting started with COCO dataset | by Jakub Adamczyk | Towards Data Science

Play sound in Python - GeeksforGeeks

Introduction to NumPy (w3schools.com)

https://github.com/pjreddie/darknet/blob/master/data/coco.names

https://pjreddie.com/darknet/yolo/
