
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
IIMT COLLEGE OF ENGINEERING, Greater Noida

AN

“INTERNSHIP REPORT”
Submitted
In Partial Fulfilment of the Requirements
For the Degree of
Bachelor of Technology
IN
Computer Science & Engineering
By
Sriyut Singh
(1821610105)

ACKNOWLEDGEMENT
I would like to express my sincerest gratitude and indebtedness to those who gave me moral and technical support and whose kind assistance was instrumental in the completion of this internship.

It gives me immense pleasure to express my humble gratitude to my mentor, Mr. Ritesh Yadav, for his indispensable guidance and for providing the necessary ideas and facilities to carry out this internship.

I would like to place on record my best regards and deepest sense of gratitude to Mr. Ritesh Yadav (Senior Project Manager) of iNeuron.ai Pvt. Ltd. for his careful and precious guidance, which was extremely valuable for my study both theoretically and practically.

Signature of Student
Sriyut Singh

INDEX

1. Learning Objectives/Internship Objectives
2. Introduction
3. Objectives and Applications
4. Requirements
5. Technology
6. Functional Design
7. Model Building
8. Conclusion
9. Bibliography

Learning Objectives/Internship Objectives

• Internships are generally thought to be reserved for college students looking to gain experience in a particular field. However, a wide array of people can benefit from training internships in order to receive real-world experience and develop their skills.

• An objective for this position should emphasize the skills you already possess in the area and your interest in learning more.

• Internships are utilized in a number of different career fields, including architecture, engineering, healthcare, economics, advertising and many more.

• Some internships are used to allow individuals to perform scientific research, while others are specifically designed to let people gain first-hand working experience.

• Utilizing internships is a great way to build your resume and develop skills that can be emphasized in it for future jobs. When you are applying for a training internship, make sure to highlight any special skills or talents that can set you apart from the rest of the applicants so that you have an improved chance of landing the position.

INTRODUCTION

In this internship I worked on a project named "Unmanned Aerial Vehicle for Agricultural Solution". An agricultural drone is an unmanned aerial vehicle used to help optimize agricultural operations, increase crop production, and monitor crop growth. Sensors and digital imaging capabilities can give farmers a richer picture of their fields. Using an agricultural drone and gathering information from it may prove useful in improving crop yields and farm efficiency.

The aerial view provided by a drone can reveal many issues such as irrigation problems, soil
variation, and pest and fungal infestations. Multispectral images show a near-infrared view as
well as a visual spectrum view. The combination shows the farmer the differences between
healthy and unhealthy plants, a difference not always clearly visible to the human eye. Thus,
these views can assist in assessing crop growth and production. Crops can be surveyed at any
time using agricultural drones, allowing for rapid identification of problems.

There is a large capacity for growth in the area of agricultural drones. With technology constantly improving, imaging of the crops will need to improve as well. With the data that drones record from the crops, farmers are able to analyse their crops and make educated decisions on how to proceed given the accurate crop information. Software programs for analysing and correcting crop production have the potential to grow in this market. Farmers will fly a drone over their crops, accurately identify an issue in a specific area, and take the necessary actions to correct the problem. This gives the farmer time to focus on the overall task of production instead of spending time surveying their crops. Additional uses include keeping track of livestock, surveying fences, and monitoring for plant pathogens.

In this project my task was to build models for various agricultural problems such as cattle detection and plant disease detection.

Objective:

The objective of the project is to prepare a drone that will be able to perform a set of tasks such as:

• Seeding and spraying.
• Mapping of the field with different camera sensors (healthy plants reflect infrared light).
• Heat-signature imaging (this would help at night too).
• Livestock monitoring.

The drone should be able to do all the tasks without human intervention, and the data from the sensors will be used to prepare various reports such as Air Quality Index, temperature, moisture, etc.

Applications:

Health assessment: It’s essential to assess crop health and spot bacterial or fungal infections
on trees. By scanning a crop using both visible and near-infrared light, drone-carried devices
can identify which plants reflect different amounts of green light and NIR light. This
information can produce multispectral images that track changes in plants and indicate their
health.

Crop spraying: Drones can scan the ground and spray the correct amount of liquid, modulating
distance from the ground and spraying in real time for even coverage. The result: increased
efficiency with a reduction in the amount of chemicals penetrating into groundwater. In fact,
experts estimate that aerial spraying can be completed up to five times faster with drones than
with traditional machinery.

Crop monitoring: Vast fields and low efficiency in crop monitoring together create farming's largest obstacle. Monitoring challenges are exacerbated by increasingly unpredictable weather conditions, which drive risk and field maintenance costs.

Irrigation: Drones with hyperspectral, multispectral, or thermal sensors can identify which parts of a field are dry or need improvements. Additionally, once the crop is growing, drones allow the calculation of the vegetation index, which describes the relative density and health of the crop, and show the heat signature, the amount of energy or heat the crop emits.
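As an illustration of the vegetation-index calculation mentioned above, here is a minimal Python sketch of NDVI (the Normalized Difference Vegetation Index), assuming the drone's multispectral camera provides aligned red and near-infrared (NIR) bands as NumPy arrays; the sample values are invented.

import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red).

    Values near +1 suggest dense, healthy vegetation; values near zero
    or below suggest bare soil, water, or stressed plants.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    denom = np.where(nir + red == 0, 1e-6, nir + red)  # avoid division by zero
    return (nir - red) / denom

# Invented 2x2 band values for demonstration:
nir = np.array([[200, 50], [180, 30]])
red = np.array([[40, 45], [60, 28]])
print(ndvi(nir, red))  # high values where NIR reflectance dominates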

Requirements:

Hardware Requirements-

• Drone
• Cameras
• Nvidia Jetson Nano

Software Requirements-

• Python 3.x
• TensorFlow 2.x or PyTorch for model training
• ROS Noetic
• Gazebo Simulator
• Power BI or Tableau
• Ubuntu 20.04 LTS
12
TECHNOLOGY

Python- Python is an interpreted high-level general-purpose programming language. Its design philosophy emphasizes code readability with its use of significant indentation. Its language constructs as well as its object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. It is often described as a "batteries included" language due to its comprehensive standard library.

Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Its implementation began in December 1989. Van Rossum shouldered sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from his responsibilities as Python's "benevolent dictator for life", a title the Python community bestowed upon him to reflect his long-term commitment as the project's chief decision-maker. In January 2019, active Python core developers elected a five-member "Steering Council" to lead the project.

Python 2.0 was released on 16 October 2000, with many major new features, including a cycle-detecting garbage collector (in addition to reference counting) for memory management and support for Unicode. Python 3.0 was released on 3 December 2008. It was a major revision of the language that is not completely backward-compatible. Many of its major features were backported to the Python 2.6.x and 2.7.x version series. Releases of Python 3 include the 2to3 utility, which automates the translation of Python 2 code to Python 3. Python 2.7's end-of-life date was initially set for 2015, then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. No more security patches or other improvements will be released for it. With Python 2's end-of-life, only Python 3.6.x and later are supported. Python 3.9.2 and 3.8.8 were expedited because all versions of Python (including 2.7) had security issues leading to possible remote code execution and web cache poisoning.

TensorFlow- TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
TensorFlow was developed by the Google Brain team for internal Google use in research and production. The initial version was released under the Apache License 2.0 in 2015. Google released the updated version of TensorFlow, named TensorFlow 2.0, in September 2019.
TensorFlow can be used in a wide variety of programming languages, most notably Python, as well as JavaScript, C++, and Java. This flexibility lends itself to a range of applications in many different sectors.
TensorFlow is Google Brain's second-generation system. Version 1.0.0 was released on February 11, 2017. While the reference implementation runs on single devices, TensorFlow can run on multiple CPUs and GPUs (with optional CUDA and SYCL extensions for general-purpose computing on graphics processing units). TensorFlow is available on 64-bit Linux, macOS, Windows, and mobile computing platforms including Android and iOS.
Its flexible architecture allows for the easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices.
TensorFlow computations are expressed as stateful dataflow graphs. The name TensorFlow
derives from the operations that such neural networks perform on multidimensional data
arrays, which are referred to as tensors. During the Google I/O Conference in June 2016, Jeff
Dean stated that 1,500 repositories on GitHub mentioned TensorFlow, of which only 5 were
from Google.
In December 2017, developers from Google, Cisco, RedHat, CoreOS, and CaiCloud
introduced Kubeflow at a conference. Kubeflow allows operation and deployment of
TensorFlow on Kubernetes.
In March 2018, Google announced TensorFlow.js version 1.0 for machine learning in JavaScript.
In January 2019, Google announced TensorFlow 2.0. It became officially available in September 2019.
In May 2019, Google announced TensorFlow Graphics for deep learning in computer graphics.
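As a small illustration of the dataflow-graph model described above, the following sketch (assuming TensorFlow 2.x) uses tf.function to trace a simple tensor computation into a reusable graph; the tensor values are arbitrary.

import tensorflow as tf

@tf.function  # traces the Python function into a TensorFlow dataflow graph
def affine(x, w, b):
    # A single matrix multiply plus bias, the basic operation the
    # text describes neural networks performing on tensors.
    return tf.matmul(x, w) + b

x = tf.constant([[1.0, 2.0]])    # shape (1, 2)
w = tf.constant([[3.0], [4.0]])  # shape (2, 1)
b = tf.constant([0.5])
print(affine(x, w, b))           # tf.Tensor([[11.5]], shape=(1, 1), dtype=float32)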

Computer Vision-
Computer vision is an interdisciplinary scientific field that deals with how computers can gain
high-level understanding from digital images or videos. From the perspective of engineering, it
seeks to understand and automate tasks that the human visual system can do.
Computer vision tasks include methods for acquiring, processing, analysing and understanding
digital images, and extraction of high-dimensional data from the real world in order to produce
numerical or symbolic information, e.g. in the forms of decisions. Understanding in this
context means the transformation of visual images (the input of the retina) into descriptions of
the world that make sense to thought processes and can elicit appropriate action. This image
understanding can be seen as the disentangling of symbolic information from image data using
models constructed with the aid of geometry, physics, statistics, and learning theory.
The scientific discipline of computer vision is concerned with the theory behind artificial
systems that extract information from images. The image data can take many forms, such as
video sequences, views from multiple cameras, multi-dimensional data from a 3D scanner, or
medical scanning device. The technological discipline of computer vision seeks to apply its
theories and models to the construction of computer vision systems.
Sub-domains of computer vision include scene reconstruction, object detection, event
detection, video tracking, object recognition, 3D pose estimation, learning, indexing, motion
estimation, visual servoing, 3D scene modelling, and image restoration.
"Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding."

Functional Design-

1. The drone will have multiple cameras and sensors; using these, the drone will capture infrared images of crops and thermal images of cattle.
2. These images will be passed to the deep learning models, which will detect and:
   a. Identify the health of the crops
      i. If the crops are unhealthy, the system determines the affected area and sprays water or pesticides based on the health of the crops.
   b. Perform livestock monitoring
      i. If the cattle leave the bounded area, an alarm will be raised.
3. The sensor data from the drone will be used to generate reports like Air Quality Index, temperature, moisture, etc.
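The flow above can be summarized in a short Python sketch. Every function below is a hypothetical stub standing in for the trained models and drone controls; none of this is a real drone API.

def detect_unhealthy_regions(infrared_image):
    # Stub: the trained crop-health model would return affected regions.
    return []

def cattle_in_bounds(thermal_image):
    # Stub: the livestock model would compare detections with a geofence.
    return True

def spray(region):
    print("spraying water/pesticide on", region)     # step 2.a.i

def raise_alarm():
    print("ALARM: cattle outside the bounded area")  # step 2.b.i

def process_frame(infrared_image, thermal_image):
    for region in detect_unhealthy_regions(infrared_image):  # step 2.a
        spray(region)
    if not cattle_in_bounds(thermal_image):                  # step 2.b
        raise_alarm()

process_frame(infrared_image=None, thermal_image=None)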

ROS and UAV

ROS team will be responsible for:

1. Creating a basic drone and field model in Gazebo.
2. Programming the drone in such a way that it is able to start on its own.
3. Installing the various cameras and sensors.
4. Using ROS to program the drone to work in the simulation environment.
5. Using the drone to collect data from the simulation environment.
6. Once the deep learning models have been trained, integrating those models into ROS to evaluate them in the simulation environment.
7. Integrating edge devices like the Jetson Nano.

What is ROS?

The Robot Operating System (ROS) is a set of software libraries and tools that help you build
robot applications. From drivers to state-of-the-art algorithms, and with powerful
developer tools, ROS has what you need for your next robotics project. And it's all open
source.
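To make this concrete, here is a minimal ROS Noetic node in Python using rospy, the kind of building block the drone software would be composed of; the node and topic names are illustrative, not part of the actual project code.

#!/usr/bin/env python3
import rospy
from std_msgs.msg import String

def talker():
    # Publish a status string once per second on an illustrative topic.
    pub = rospy.Publisher('drone/status', String, queue_size=10)
    rospy.init_node('status_talker', anonymous=True)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='drone status: OK'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass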

What is Gazebo and Why Gazebo?


Gazebo is an open-source 3D robotics simulator. Gazebo can use multiple high-performance physics engines, such as ODE, Bullet, etc. (the default is ODE). It provides realistic rendering of environments including high-quality lighting, shadows, and textures. It can model sensors that "see" the simulated environment, such as laser range finders, cameras (including wide-angle), Kinect-style sensors, etc.

The real motive behind using Gazebo is that it is not feasible to use a real drone for testing. Gazebo is a well-designed simulator that makes it possible to rapidly test algorithms, design robots, perform regression testing, and train AI systems using realistic scenarios. At your fingertips are a robust physics engine, high-quality graphics, and convenient programmatic and graphical interfaces. Best of all, Gazebo is free, with a vibrant community.

Types of Sensors we could use for the Project:

1) Accelerometers - Accelerometer sensors are ICs that measure acceleration, which is the change in speed (velocity) per unit time. Measuring acceleration makes it possible to obtain information such as object inclination and vibration.

2) Distance sensors (ultrasonic distance sensor) - An ultrasonic sensor is an electronic device that measures the distance of a target object by emitting ultrasonic sound waves and converting the reflected sound into an electric signal.

3) Compass Pressure Sensor Module - This is a type of digital sensor module which consists of a high-resolution piezo-resistive pressure sensor, a compass sensor and an MCU. By using this module, one can measure pressure, temperature and compass parameters. The output data is digitally calibrated and users can easily access related data through the I2C interface, which shortens the development time and greatly simplifies the work of designers (see the I2C sketch after this list).

4) Gas sensor for air quality - A gas sensor is a device which detects the presence or concentration of gases in the atmosphere. Based on the concentration of the gas, the sensor produces a corresponding potential difference by changing the resistance of the material inside it, which can be measured as an output voltage. Based on this voltage value, the type and concentration of the gas can be estimated.

5) Infrared - An infrared (IR) sensor is an electronic device that measures and detects infrared radiation in its surrounding environment. Infrared radiation was accidentally discovered by an astronomer named William Herschel in 1800. While measuring the temperature of each color of light (separated by a prism), he noticed that the temperature just beyond the red light was highest. IR is invisible to the human eye, as its wavelength is longer than that of visible light (though it is still on the same electromagnetic spectrum). Anything that emits heat (everything with a temperature above around five kelvin) gives off infrared radiation.
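As a rough illustration of talking to the I2C-based module from item 3, here is a hedged Python sketch using the smbus2 library; the bus number, device address, and register offsets are placeholders that would come from the module's datasheet.

from smbus2 import SMBus

I2C_BUS = 1          # assumption: I2C bus 1 (typical on a Jetson Nano)
DEVICE_ADDR = 0x76   # placeholder device address
PRESSURE_REG = 0x00  # placeholder register offsets
TEMP_REG = 0x02

with SMBus(I2C_BUS) as bus:
    # Raw word reads; a real module needs datasheet-specific scaling
    # to convert these values into hPa and degrees Celsius.
    raw_pressure = bus.read_word_data(DEVICE_ADDR, PRESSURE_REG)
    raw_temp = bus.read_word_data(DEVICE_ADDR, TEMP_REG)
    print("raw pressure:", raw_pressure, "raw temperature:", raw_temp)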

Model Building
To enable the UAV to perform various tasks, we will need to integrate object detection or object segmentation. This will be a crucial step in determining the area that the drone has to visit or focus on to perform any task.

Model Building team will be responsible for:

1. Collection of data - Data will be collected from simulation environments and various open-source datasets available on Google and Baidu.

2. Annotation and labelling - Labelling and annotating the data will be done using LabelImg and LabelMe, according to the task (object detection/segmentation) and model format such as COCO and Pascal VOC.

3. Preprocessing of the data - Data/image preprocessing involves various steps like resizing the image, denoising images (noise removal) and morphological transformations (see the preprocessing sketch after this list).

4. Training a baseline model - The TensorFlow Object Detection API will be used to create a baseline model, and we will also experiment with different models like YOLOv4, YOLOv5, Faster R-CNN, SSD and Mask R-CNN. We will also convert the data into TFRecords for faster processing and loading into batches.

5. Inferencing - We will use the test set, which includes short video clips and images, to check how the model is performing. The performance of the model is calculated over various evaluation metrics like mAP and IoU.

6. Hyperparameter tuning - Once the baseline model has been obtained and trained over the train set, we will perform hyperparameter tuning over parameters like learning rate and training steps, experimenting with various optimizers like the weight-decay optimizer, momentum, etc.

7. Selecting the final model - Comparing the results by examining the metrics of the various models, the best model that works well in the production environment will be used as the final model.

8. Model quantization - The initial model size will be reduced to enable deployment on edge devices, using techniques like model quantization and model pruning.
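To make step 3 concrete, here is a minimal preprocessing sketch using OpenCV; the target size and kernel shape are arbitrary example values, not project-specified parameters.

import cv2

def preprocess(path):
    img = cv2.imread(path)                      # load BGR image from disk
    img = cv2.resize(img, (640, 640))           # resize to the model's input size
    img = cv2.fastNlMeansDenoisingColored(img)  # denoising (noise removal)
    # Morphological opening to suppress small bright speckles.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    return img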

Predicted Results-

Predicted Data-
Deployment of the model-
Once the model has been trained and tested, it will be deployed on the Nvidia Jetson Nano. The Jetson Nano will be integrated with ROS and the drone to enable use in the real world.
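As one hedged example of shrinking the model for edge deployment (model quantization, step 8 above), post-training quantization with the TensorFlow Lite converter might look like the sketch below; the paths are placeholders, and a Jetson Nano deployment could equally use TensorRT, which is not shown.

import tensorflow as tf

saved_model_dir = "exported_model/saved_model"  # placeholder path to the trained detector
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default quantization
tflite_model = converter.convert()

with open("detector_quantized.tflite", "wb") as f:
    f.write(tflite_model)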

Dashboarding-
The reports generated from the drone's sensor data (Air Quality Index, temperature, moisture, etc.) will be visualized in dashboards built with Power BI or Tableau.

Conclusion-
The final outcome of this project should be a drone that is able to perform the set of tasks defined throughout this document with little to no human intervention. The result will be an automated ecosystem able to take care of a variety of aspects related to agriculture.

BIBLIOGRAPHY

References-

• https://www.python.org/
• https://www.tensorflow.org/tfx/guide/keras
• https://www.ros.org/
• https://en.wikipedia.org/wiki/Gazebo
• https://ubuntu.com/
