
Modular Armed Advanced Robotic System 2018 -19

CHAPTER 1

INTRODUCTION
Over the last few decades, robots have become very popular and common in military organizations. One such robot is MAARS. These robots have many advantages compared to human soldiers. Most importantly, they can perform missions remotely in the field without any actual danger to human lives, which shows the great impact of military robots. These robots are sturdier and more capable of withstanding damage than humans, and therefore offer greater chances of success in dangerous environments. Whenever a robot is shot down, the military can simply roll out a new one. However, the wider effects and impact of military robots should not be forgotten.

MAARS is a robot that can perform given tasks such as locomotion, sensing, localization, and motion planning without human control while the task is in progress. The MAARS robot is designed for reconnaissance, surveillance, and target acquisition (RSTA) to increase security at forward locations. It can be configured for non-lethal, less-lethal, and lethal effects.

In this proposed system, such a military robot is designed to detect unknown persons in border areas, war zones, or any other region, along with obstacle detection, bomb detection, and a gun-targeting system based on facial recognition. A Wi-Fi network is used to send the data to the host system wirelessly. All these functions are performed automatically or manually with the help of LabVIEW or SSH software installed on the host system.

The entire control resides with the microcontroller. In addition, bomb detection, facial recognition, and gun-trigger control are included. The robot can move over rugged surfaces, and it can be controlled from a remote location with a computer or any other smart computer-based device.

Dept. of ECE, YDIT Page 1


CHAPTER 2

LITERATURE SURVEY
The first MAARS robot was introduced by QinetiQ to the military on 5 June 2008, under a contract from the Explosive Ordnance Disposal/Low-Intensity Conflict (EOD/LIC) program. On 5 August 2008, MAARS participated in a demonstration to showcase technology for battlefield and urban environments. Its exercise was a traffic-control-point encounter with a suspected suicide bomber or vehicle-emplaced explosive. In another scenario, MAARS provided overwatch as a different robot attached an explosive charge to a door. After the door was blown open, MAARS entered the doorway, encountered hostile fire, and returned fire with its machine gun.

Design and implementation of a multi-functional mobile robot

In this paper, an intelligent multi-functional mobile robot is presented. The hardware comprises an ultrasonic sensor, a Bluetooth device, a wireless camera, DC servo motors, and a mechanical gripper. A single ultrasonic sensor is programmed to seek an object and complete the object localization. A human-machine interface is developed to remotely control the mobile robot. Through wireless communication and the camera, exploration of tiny and harsh environments can be carried out. A hardware description language is used in the controller design and the peripheral I/O circuit, and the human-machine interface is implemented in C.

Date of Conference: 18-21 Aug. 2009


Date Added to IEEE Xplore: 13 November 2009
Authors
Ying J. Huang
Department of Electrical Engineering, Yuan Ze University, Taiwan
Yuan Z. Chen
Department of Electrical Engineering, Yuan Ze University, Taiwan
Tzu C. Kuo
Department of Electrical Engineering, Ching Yun University, Taiwan
Hong S. Yu
Robot Control Technology Department, Industrial Technology Research Institute, Taiwan
Underwater construction robot for rubble leveling on the seabed for port construction
This research develops an underwater construction robot to level rubble on the seabed for port construction. The rubble leveling is carried out by an underwater robot equipped with imaging sonars, underwater cameras, LBL and gyroscope sensors. A virtual reality system is developed to visualize the robot's figure and the topography of the working environment; hence, the robot is virtually tele-operated by an

operator. This paper presents the robot's system and control, and it describes the working process of the rubble leveling carried out by the robot. In addition, the performance of the robot is demonstrated through subsea experimental results. The working speed of the robot is faster than that of a human diver, and the robot can work longer than a diver, who can work only for a limited time to prevent submarine sickness. The robot is expected to have much higher efficiency in deep water where a human diver is unable to work.
Date of Conference: 22-25 Oct. 2014
Date Added to IEEE Xplore: 18 December 2014
Authors
Tae Sung Kim
Research Institutes of Mechatronics of Changwon National University, Gyeongsangnam-do, 641-773, Korea
In Sung Jang
Korea Institute of Ocean Science & Technology, Gyeonggi-do, 426-744, Korea
Chang Joo Shin
Korea Institute of Ocean Science & Technology, Gyeonggi-do, 426-744, Korea
Min Ki Lee
Department of Control and Instrumentation Engr. of Changwon National University, Gyeongsangnam-do,
641-773, Korea

Multi-Functional Intelligent Robot DOC-2


This paper presents the intelligent robot DOC-2, the second, upgraded generation of the intelligent robot DOC-1. DOC-2 is an autonomous multifunctional robot that can teach and entertain people. DOC-2 can speak and spell English words when shown a word image card. It can also solve simple algebraic problems presented on a white board. DOC-2 is also a simple version of a speaking encyclopedia through which people can learn things easily. For entertainment, DOC-2 can play the Gobang game, Chess, and Chinese Chess with humans. DOC-2 has quite superb vision capabilities with which it can recognize human faces in front of it. Furthermore, it can interpret the facial expression of a human face. DOC-2 is capable of recognizing specific persons, its masters. Upon request, DOC-2 will serve tea to its designated master. Schematically identical to its ancestor DOC-1, DOC-2 has two hands, one CCD camera, and two driving wheels. However, the mechatronics design is very different from DOC-1. In DOC-2, the artificial-intelligence software system is much more complex, and a PDA device is designed to conduct interactive control programs. Using the Windows-based PDA device, people can communicate with DOC-2 easily. In this paper, we introduce the specification, mechanism, electronics, and intelligent software of DOC-2.
Date of Conference: 4-6 Dec. 2006
Date Added to IEEE Xplore: 26 February 2007
Authors
C. Y. Lin

Mechanical Engineering Department, National Taiwan University of Science and Technology, Taipei,
Taiwan. jerrylin@mail.ntust.edu.tw, D9403106@mail.ntust.edu.tw, D9303102@mail.ntust.edu.tw
P. C. Jo
Mechanical Engineering Department, National Taiwan University of Science and Technology, Taipei,
Taiwan. D9403106@mail.ntust.edu.tw, D9303102@mail.ntust.edu.tw
C. K. Tseng
Mechanical Engineering Department, National Taiwan University of Science and Technology, Taipei,
Taiwan. D9403106@mail.ntust.edu.tw, D9303102@mail.ntust.edu.tw

The design of intelligent interactive service robot


This study aims to develop an intelligent, interactive, multi-functional robot to provide companionship and entertainment. To obtain environmental information, we use the Kinect depth image as the visual system platform and perform the image processing operations on it. The image processing results are then applied to the robot behavior planning, integrated with omni-directional wheels, high-power motors, and an FPGA and ARM-based development board as the motion system. The lower body of the robot is constructed from four omni-directional wheels, and the upper body is of humanoid design. In this study, we use the information extracted from the Kinect depth image stream and integrate the robot localization system to build a real-time environment map and achieve obstacle avoidance. In addition, we use Kinect skeleton detection and human facial expression recognition to accomplish the task of intelligent interaction.

Date of Conference: 6-8 Sept. 2017


Date Added to IEEE Xplore: 22 February 2018
Authors

Shu-Yin Chiang
Department of Information and Telecommunications Engineering, Ming Chuan University, Taiwan
Yi-Quan Jiang
Department of Information and Telecommunications Engineering, Ming Chuan University, Taiwan
Hsin-Tieh Yang
Department of Information and Telecommunications Engineering, Ming Chuan University, Taiwan
Chia-Chin Wang
Department of Information and Telecommunications Engineering, Ming Chuan University, Taiwan
Yu-Chen Lee
Department of Information and Telecommunications Engineering, Ming Chuan University, Taiwan


CHAPTER 3

TECHNICAL BACKGROUND
3.1 Raspberry Pi Processor Architecture

In this project, the ARM Cortex-A53 architecture is used. ARM, previously Advanced RISC Machine, originally Acorn RISC Machine, is a family of Reduced Instruction Set Computing (RISC) architectures for computer processors, configured for various environments. ARM Holdings licenses the architectures to companies who design their own products that implement one of those architectures, including systems-on-chip (SoC) and systems-on-module (SoM) that incorporate memory, interfaces, radios, etc. It also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate those core designs into their own products.

Fig 3.1: ARM Cortex-A53 block diagram

Processors that have a RISC architecture typically require fewer transistors than those with a complex instruction set computing (CISC) architecture (such as the x86 processors found in most personal computers), which improves cost, power consumption, and heat dissipation. These characteristics are desirable for light, portable, battery-powered devices, including smartphones, laptops, tablet computers, and other embedded systems. For supercomputers, which consume large amounts of electricity, ARM could also be a power-efficient solution.


Fig 3.2: ARM Cortex A53 configuration

ARM Holdings periodically releases updates to its architectures and core designs. All of them support a 32-bit address space (only pre-ARMv3 chips, made before ARM Holdings was formed, as in the original Acorn Archimedes, had a smaller one) and 32-bit arithmetic. ARM Holdings cores have 32-bit fixed-length instructions, but later versions of the architecture also support a variable-length instruction set that provides both 32- and 16-bit instructions for improved code density. Some older cores can also provide hardware execution of Java byte-codes. The ARMv8-A architecture, announced in October 2011, adds support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set.

3.2 OpenCV
OpenCV is a library of programming functions mainly aimed at real-time computer vision. It has a
modular structure, which means that the package includes several shared or static libraries. We are using
image processing module that includes linear and non-linear image filtering, geometrical image
transformations (resize, affine and perspective warping, and generic table-based remapping), color space
conversion, histograms, and so on. Our project includes libraries such as Viola-Jones or Haar classifier, LBPH
(Lower Binary Pattern histogram) face recognizer, Histogram of oriented gradients (HOG).

3.3 Facial recognition:

The total system is divided into three modules: database creation, training the dataset, and testing, with sending alert messages as an extension.
Database creation
a) Initialize the camera and set an alert message to grab the attention of the subject.
b) Get user id as input

c) Convert the image into gray scale, detect the face, and
d) Store it in the database using the given input as the label, for up to 20 frames.
Training
a) Initialize LBPH face recognizer.
b) Get faces and IDs from the database folder to train the LBPH face recognizer.
c) Save the trained data as xml or yml file.

Testing
Load Haar classifier, LBPH face recognizer and trained data from xml or yml file.
a) Capture the image from camera,
b) Convert it into gray scale,
c) Detect the face in it and
d) Predict the face using the above recognizer.

This proposed system uses the Viola-Jones algorithm for face detection, which uses trained Haar cascades for detection. The Raspberry Pi is the main component in the project. We will be using a USB webcam to capture photos. We can access the Raspberry Pi's console either by using SSH from a laptop or by using a keyboard and mouse with a display device, such as a TV, connected to the Pi. Firstly, the algorithm needs a lot of positive and negative images to train the Haar cascade classifier. Positive images are images with clear faces, whereas negative images are those without any faces.

3.4 Haar Cascades:


Each feature is represented as a single value obtained by subtracting the sum of the pixels under the white rectangle from the sum of the pixels under the black rectangle. All possible sizes and locations of each kernel are used to calculate plenty of features. As the number of features increases, the arithmetic computations take a long time. To avoid this, we use the concept of the integral image. In image processing, an integral image (also called a summed-area table) is a data structure and algorithm for quickly and efficiently generating the sum of values in a rectangular subset of a grid. The integral image is derived using the formula

Integral image: ii(x, y) = sum of i(x', y') over all x' <= x and y' <= y

where i(x, y) is the pixel value of the original image and ii(x, y) is the corresponding integral-image value.
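A small pure-NumPy sketch of the integral image and the constant-time rectangle sum it enables (the function names here are illustrative, not from the report):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] in O(1) using four table lookups."""
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
# Rectangle rows 1..2, cols 1..3 -> 5+6+7+9+10+11 = 48
print(rect_sum(ii, 1, 1, 2, 3))
```

This is why Haar feature evaluation stays cheap: each rectangle sum costs at most four lookups regardless of the rectangle's size.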
To reduce the number of classifiers applied during calculation, we use the AdaBoost machine learning algorithm, which is built into the OpenCV library as the cascade classifier, to eliminate redundant classifiers. Any classifier which has a probability of 50% or more in detection is treated as a weak classifier. The sum of all weak classifiers gives a strong classifier, which makes the decision about detection. Since it is too vague to classify with one strong classifier, we use a cascade of classifiers.

Classification takes place in stages: if the selected region fails in the first stage, we discard it, and we do not apply the remaining classifiers to a discarded region. A region that passes all the stages, i.e. all strong classifiers, is treated as a detected face. Detected faces are passed to the face recognition phase.
In this phase we use the Local Binary Patterns algorithm for face recognition. The local binary pattern is a simple yet very efficient texture operator which labels each pixel of the image by thresholding its neighbouring pixels against it, producing a binary result. The detected face image is subjected to this local binary pattern operator, and the resulting decimal codes are represented as a histogram for every image. Face recognition is extremely vulnerable to environmental changes such as brightness, facial expressions, and position. Face pre-processing is the module that reduces the problems that make the picture unclear for recognizing the face, such as low brightness, contrast problems, and noise in the image, and it makes sure the facial features stay in a constant position. In this project we use histogram equalization for face pre-processing. For efficiency we apply separate histogram equalization to the left and right halves of the face; histogram equalization is therefore done three times, once for the whole face and once for each side.
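A tiny NumPy illustration of the basic 3x3 LBP operator described above (illustrative code, not the report's implementation): each neighbour greater than or equal to the centre contributes a 1-bit, and the eight bits, read clockwise from the top-left neighbour, form a decimal code.

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code for the centre pixel of a 3x3 patch.
    Bits are taken clockwise starting at the top-left neighbour."""
    c = patch[1, 1]
    # Clockwise neighbour coordinates starting at the top-left.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for (y, x) in coords:
        code = (code << 1) | int(patch[y, x] >= c)
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))
```

Sliding this operator over the whole face and histogramming the resulting codes gives the feature vector that the LBPH recognizer compares between images.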

3.5 OpenCV-Python
Python is a general-purpose programming language started by Guido van Rossum, which became very popular in a short time mainly because of its simplicity and code readability. It enables the programmer to express ideas in fewer lines of code without reducing readability.

Compared to languages like C/C++, Python is slower. But another important feature of Python is that it can be easily extended with C/C++. This feature helps us write computationally intensive code in C/C++ and create a Python wrapper for it, so that we can use these wrappers as Python modules. This gives us two advantages: first, our code is as fast as the original C/C++ code (since it is the actual C++ code working in the background), and second, it is very easy to code in Python. This is how OpenCV-Python works: it is a Python wrapper around the original C++ implementation. And the support of NumPy makes the task even easier. NumPy is a highly optimized library for numerical operations with a MATLAB-style syntax. All the OpenCV array structures are converted to and from NumPy arrays. So whatever operations you can do in NumPy, you can combine with OpenCV, which increases the number of weapons in your arsenal. Besides that, several other libraries like SciPy and Matplotlib, which support NumPy, can be used with it. So OpenCV-Python is an appropriate tool for fast prototyping of computer vision problems.

OpenCV-Python working
OpenCV introduces a new set of tutorials which will guide you through the various functions available in OpenCV-Python. This guide is mainly focused on OpenCV 3.x (although most of the tutorials also work with OpenCV 2.x).

Prior knowledge of Python and NumPy is required before starting, because they won't be covered in this guide. In particular, good knowledge of NumPy is a must to write optimized code in OpenCV-Python.

This tutorial has been started by Abid Rahman K. as part of Google Summer of Code 2013 program, under the
guidance of Alexander Mordvintsev.

OpenCV Needs us..


Since OpenCV is an open-source initiative, all are welcome to make contributions to this library, and the same applies to this tutorial. So, if you find any mistake in this tutorial (whether a small spelling mistake or a big error in code or concepts), feel free to correct it. That is a good task for freshers beginning to contribute to open-source projects. Just fork OpenCV on GitHub, make the necessary corrections, and send a pull request to OpenCV. The OpenCV developers will check your pull request and give you important feedback, and once it passes the reviewer's approval, it will be merged into OpenCV. Then you become an open-source contributor. The same holds for the other tutorials, documentation, etc.

As new modules are added to OpenCV-Python, this tutorial will have to be expanded. So those who know a particular algorithm can write a tutorial covering its basic theory, together with code showing basic usage, and submit it to OpenCV.

Getting Started with Images

Goals
 Here, you will learn how to read an image, how to display it and how to save it back
 You will learn these functions : cv2.imread(), cv2.imshow() , cv2.imwrite()
 Optionally, you will learn how to display images with Matplotlib

Using OpenCV
Read an image
Use the function cv2.imread() to read an image. The image should be in the working directory, or a full path to the image should be given.

The second argument is a flag which specifies the way the image should be read.

 cv2.IMREAD_COLOR : Loads a color image. Any transparency of image will be neglected. It is the
default flag.
 cv2.IMREAD_GRAYSCALE : Loads image in grayscale mode
 cv2.IMREAD_UNCHANGED : Loads image as such including alpha channel

Display an image
Use the function cv2.imshow() to display an image in a window. The window automatically fits to the image size.

The first argument is a window name, which is a string. The second argument is our image. You can create as many windows as you wish, but with different window names.

cv2.waitKey() is a keyboard binding function. Its argument is the time in milliseconds. The function waits the specified number of milliseconds for any keyboard event. If you press any key in that time, the program continues. If 0 is passed, it waits indefinitely for a key stroke. It can also be set to detect specific key strokes, e.g. whether the key a is pressed, which we will discuss below.

cv2.destroyAllWindows() simply destroys all the windows we created. If you want to destroy any specific
window, use the function cv2.destroyWindow() where you pass the exact window name as the argument.

3.6 Image processing module

3.6.1 Purpose of Image processing

The purpose of image processing can be divided into five groups. They are:


1. Visualization- Observe the objects that are not visible.
2. Image sharpening and restoration- To create a better image.
3. Image retrieval- Seek for the image of interest.
4. Measurement of pattern– Measures various objects in an image.
5. Image Recognition– Distinguish the objects in an image.

i. Haar Classifier

This object detection framework provides competitive object detection rates in real time, for example detection of faces in an image. A human can do this easily, but a computer needs precise instructions and constraints. To make the task more manageable, Viola-Jones requires full-view, frontal, upright faces. Thus, in order to be detected, the entire face must point towards the camera and should not be tilted to either side. While it seems these constraints could diminish the algorithm's utility somewhat, because the detection step is most often followed by a recognition step, in practice these limits on pose are quite acceptable. The characteristics of the Viola-Jones algorithm which make it a good detection algorithm are:
a) Robust - very high detection rate (true-positive rate) and very low false-positive rate.
b) Real time - for practical applications, at least 2 frames per second must be processed.

c) Face detection only (not recognition) - the goal is to distinguish faces from non-faces (detection is the first step of the recognition process). This algorithm includes a Haar feature selection process.

All human faces share some similar properties. These regularities may be matched using Haar Features.
A few properties common to human faces:
a) The eye region is darker than the upper-cheeks.
b) The nose bridge region is brighter than the eyes.

Composition of properties forming matchable facial features:


a) Location and size: eyes, mouth, bridge of nose
b) Value: oriented gradients of pixel intensities

ii. Histogram of oriented gradients (HOG)

Histogram of oriented gradients (HOG) is a feature descriptor used to detect objects in computer vision and image processing. The HOG descriptor technique counts occurrences of gradient orientation in localized portions of an image - a detection window, or region of interest (ROI).

Implementation of the HOG descriptor algorithm is as follows:


1. Divide the image into small connected regions called cells, and for each cell compute a histogram of
gradient directions or edge orientations for the pixels within the cell.
2. Discretize each cell into angular bins according to the gradient orientation.
3. Each pixel in a cell contributes a weighted gradient to its corresponding angular bin.
4. Groups of adjacent cells are considered as spatial regions called blocks. The grouping of cells into a block
is the basis for grouping and normalization of histograms.
5. Normalized group of histograms represents the block histogram. The set of these block histograms
represents the descriptor.

OpenCV modules

 cv - Main OpenCV functions.

 cvaux - Auxiliary (experimental) OpenCV functions.

 cxcore - Data structures and linear algebra support.

 highgui - GUI functions.

OpenCV working with video capturing

OpenCV supports capturing images from a camera or a video file (AVI); the snippets below use the legacy C API.

 Initializing capture from a camera:

CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0

 Initializing capture from a file:

CvCapture* capture = cvCaptureFromAVI("infile.avi");

Capturing a frame:
IplImage* img = 0;
if (!cvGrabFrame(capture)) {              // capture a frame
    printf("Could not grab a frame\n\7");
    exit(0);
}
img = cvRetrieveFrame(capture);           // retrieve the captured frame

To obtain images from several cameras simultaneously, first grab an image from each camera.
Retrieve the captured images after the grabbing is complete.

 Releasing the capture source: cvReleaseCapture(&capture);

3.6.2 Advantages of OpenCV over MATLAB

 Speed: Matlab is built on Java, and Java is built upon C. So when you run a Matlab program, your computer is busy interpreting all that Matlab code, turning it into Java, and only then executing it. OpenCV, on the other hand, is basically a library of functions written in C/C++. So ultimately you get more image processing done per processing cycle, and less interpreting.

As a result, programs written in OpenCV run much faster than similar programs written in Matlab. OpenCV is very fast in terms of speed of execution. For example, we might write a small program to detect smiles in a sequence of video frames. In Matlab, we would typically get 3-4 frames analysed per second. In OpenCV, we would get at least 30 frames per second, resulting in real-time detection.

 Resources needed: Due to the high-level nature of Matlab, it uses a lot of your system's resources. Matlab code can require over a gigabyte of RAM to run through video. In comparison, typical OpenCV programs only require about 70 MB of RAM to run in real time.

 Cost: The list price for the base (no toolboxes) MATLAB (commercial, single-user license) is around USD 2150. OpenCV (BSD license) is free.

 Portability: MATLAB and OpenCV run equally well on Windows, Linux, and macOS. However, any device that can run C can, in all probability, run OpenCV.

CHAPTER 4

SYSTEM DESIGN
4.1 BLOCK DIAGRAM

This project targets the development of a modular robot that can be used for military applications, i.e. on the front lines of a war zone, in rescue missions, and for bomb detection and defusal. Modular robots are robots to which equipment/sensors can be added or from which they can be removed according to the requirements. This design consists of two modules: a Raspberry Pi module and an Arduino module. Each module has its own function: the Raspberry Pi module consists of the gun-trigger control, motor-driver control, PIR sensor, and metal detector, whereas the Arduino module consists of a motor driver for controlling the motion of the land drone.

Fig 4.1: Raspberry pi module block diagram


Fig 4.2: Arduino module block diagram

4.2 Analysis and project plan

This project is implemented using a Raspberry Pi board with an ARM Cortex-A53 processor and an Arduino UNO microcontroller board (ATmega328P). Here the Raspberry Pi module consists of the camera, IR sensor, metal detector, relay control for the gun trigger, and motor driver for motor control. The OS of the Raspberry Pi system is installed on an SD card that is inserted in its memory slot.

The Arduino module is used in the land-drone control system, where it consists of a motor driver that controls two motors via the GPIO pins, and also a wireless transceiver for controlling the motors from an external device.

The Raspberry Pi module is fitted with a USB Wi-Fi module so that it is connected to the network to send programmable SMS messages on sensor activity and on facial recognition when an unknown person is identified. It is also connected to a network so that an external device can connect to the Raspberry Pi system via SSH (Secure Shell) using the PuTTY application from any device, with the help of its IP address, to start the VNC server application for screen mirroring.

4.3 TOOLS USED

List of hardware used in the project

 Raspberry Pi 3B
 Arduino UNO (ATmega328P)

 IR sensor
 Metal detector
 Motor Drivers (L293)
 DC motors
 Camera
 Relay
 SD card
 ZigBee wireless module
 Misc (resistors, wires, switches, battery, etc.)

List of Software used in the project

 Raspbian OS
 OpenCV
 Python 2
 VNC server and client
 Arduino IDE

Here all the accessories are interconnected, and the common devices to control and access data from them are laptops, desktop computers, Android mobile phones, or tablets. The host and the client device are installed with the VNC server and client respectively. To access the host, the client system must use the IP address to connect. Before that, SSH and the VNC server must be installed and configured on the Raspberry Pi board.

The baud rate for transmission of the signals controlling the motors via the Arduino is set to 9600.

4.4 Components Detail:

A sensor is a transducer that measures a physical quantity and converts it into a signal which can be read by an observer or by an instrument. Ideally, the output signal of a sensor is linearly proportional to the value of the measured property. Hence the sensors used in this project convert real-life parameters to an output voltage that can be read and displayed.

Those voltage signals are usually given to microcontrollers or other processing devices, which convert the electrical values back to real-life parameters for display.

4.4.1 Raspberry Pi 3 model B

The Raspberry Pi is a series of small single-board computers developed in the United Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in schools and

in developing countries. The original model became far more popular than anticipated, selling outside
its target market for uses such as robotics. It does not include peripherals (such as keyboards and mice)
and cases. However, some accessories have been included in several official and unofficial bundles.

Fig 4.3: Raspberry Pi 3 model B

Features of Raspberry pi 3 model B

 Quad-core 1.2 GHz Broadcom BCM2837 64-bit CPU


 1GB RAM
 BCM43438 wireless LAN and Bluetooth Low Energy (BLE) on board
 100 Base-T Ethernet
 40-pin extended GPIO
 4 USB 2 ports
 4 Pole stereo output and composite video port
 Full size HDMI
 CSI camera port for connecting a Raspberry Pi camera
 DSI display port for connecting a Raspberry Pi touchscreen display
 Micro SD port for loading your operating system and storing data


Fig 4.4: Raspberry pi 3 pinout

4.4.2 Arduino Uno (ATMega328P)

The Arduino UNO is an open-source microcontroller board based on the Microchip ATmega328P microcontroller and developed by Arduino.cc. The board is equipped with sets of digital and analog input/output (I/O) pins that may be interfaced to various expansion boards (shields) and other circuits. The board has 14 digital pins and 6 analog pins, and is programmable with the Arduino IDE (Integrated Development Environment) via a type-B USB cable. It can be powered by the USB cable or by an external 9-volt battery, though it accepts voltages between 7 and 20 volts.

Features of Arduino Uno

 Operating Voltage: 5 Volts


 It is an 8-bit microcontroller
 Input Voltage: 7 to 20 Volts
 Digital I/O Pins: 14 (of which 6 provide PWM output)
 Analog Input Pins: 6
 DC Current per I/O Pin: 20 mA
 DC Current for 3.3V Pin: 50 mA
 Flash Memory: 32 KB of which 0.5 KB used by bootloader

 SRAM: 2 KB
 EEPROM: 1 KB
 Clock Speed: 16 MHz
 Length: 68.6 mm
 Width: 53.4 mm
 Weight: 25 g

Fig 4.5: Arduino Uno

4.4.3 IR Sensor

Fig 4.6: IR proximity sensor


An infrared sensor is an electronic device that emits infrared light in order to sense some aspect of its surroundings. An IR sensor can measure the heat of an object as well as detect motion. Sensors that only measure infrared radiation, rather than emitting it, are called passive IR sensors. All objects radiate some form of thermal radiation in the infrared spectrum; this radiation is invisible to our eyes but can be detected by an infrared sensor. In the module used here, the emitter is an IR LED (Light Emitting Diode) and the detector is an IR photodiode that is sensitive to IR light of the same wavelength as that emitted by the LED. When IR light reflected from an obstacle falls on the photodiode, its resistance and output voltage change in proportion to the magnitude of the IR light received.
An infrared sensor circuit is one of the most basic and popular sensor modules in electronics. It is analogous to human vision and is commonly used for real-time obstacle detection.
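As a concrete illustration, the obstacle-detection logic above can be sketched in Python. This is a hedged example: the GPIO pin number (17) and the active-low output behaviour are assumptions that depend on the particular IR module and wiring, and the RPi.GPIO calls only run on an actual Raspberry Pi.

```python
# Hedged sketch: polling a digital IR proximity sensor.
# Pin number and active-low behaviour are assumptions; check your module.

def obstacle_detected(pin_level: int, active_low: bool = True) -> bool:
    """Most IR proximity modules pull their OUT pin LOW on detection."""
    return (pin_level == 0) if active_low else (pin_level == 1)

if __name__ == "__main__":
    try:
        import RPi.GPIO as GPIO  # only available on a Raspberry Pi
        IR_PIN = 17              # assumed BCM pin; change to match wiring
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(IR_PIN, GPIO.IN)
        print("Obstacle!" if obstacle_detected(GPIO.input(IR_PIN)) else "Clear")
        GPIO.cleanup()
    except ImportError:
        # Off-board demonstration with simulated pin readings
        print(obstacle_detected(0), obstacle_detected(1))
```

On the robot, this check would run in the main control loop so the drive motors can be stopped as soon as an obstacle is reported.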

4.4.4 Metal detector

Fig 4.7: Metal detector sensor

Metal detectors are inductive proximity sensors that operate on the electrical principle of inductance: a fluctuating current, which by definition has a magnetic component, induces an electromotive force (emf) in a target object. To amplify the inductance effect, the sensor manufacturer winds wire into a tight coil and runs a current through it. An inductive proximity sensor has four components: the coil, an oscillator, a detection circuit and an output circuit. The oscillator generates a fluctuating, doughnut-shaped magnetic field around the winding of the coil, which is located in the device's sensing face. When a metal object moves into the sensor's field of detection, eddy currents build up in the metallic object, magnetically push back, and finally reduce the sensor's own oscillation field. The detection circuit monitors the oscillator's strength and triggers an output from the output circuitry when the oscillation is damped to a sufficient level.
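The damping behaviour described above can be modelled with a short Python sketch. The amplitude values and the 0.6 threshold ratio are illustrative assumptions, not values from a real sensor datasheet:

```python
# Hedged sketch of the detection-circuit logic: the output trips once
# eddy-current damping pulls the oscillator amplitude below a threshold.
# Amplitude units and the 0.6 ratio are illustrative only.

def metal_present(amplitude: float, free_air_amplitude: float,
                  threshold_ratio: float = 0.6) -> bool:
    """True when the oscillation has been damped to a sufficient level."""
    return amplitude < threshold_ratio * free_air_amplitude

# A metal target entering the field progressively damps the oscillator:
readings = [1.00, 0.95, 0.80, 0.55, 0.30]
states = [metal_present(a, free_air_amplitude=1.0) for a in readings]
print(states)  # the last two readings trip the detector
```

In the real module this comparison happens in analog hardware; the robot's controller only sees the resulting digital output line.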

Fig 4.8: Internal structure of a metal detector

4.4.5 Motor Drivers (L293D)

The L293 has two H-bridges; each can provide about 1 A continuously and occasional peak loads up to 2 A. Motors typically controlled with this driver are near the size of a 35 mm film canister.

Fig 4.9: L293 board


You often see motors between the size of a 35 mm film canister and a coke can driven by this type of H-bridge. The LMD18200 has one H-bridge on board, can handle about 2 to 3 A and a peak of about 6 A; it can usually handle an average motor about the size of a coke can. There are several more commercially available H-bridge chips as well.

It works on the concept of an H-bridge: a circuit that allows voltage to be applied across a load in either direction. Since the polarity of the voltage must be reversed to rotate the motor clockwise or anticlockwise, H-bridge ICs are ideal for driving a DC motor.

Fig 4.10: L293 IC

Fig 4.11: L293 pin out

There are two enable pins on the L293D, pin 1 and pin 9. To be able to drive a motor, the corresponding enable pin must be high: pin 1 enables the left H-bridge and pin 9 enables the right H-bridge. If either pin 1 or pin 9 goes low, the motor in the corresponding section will stop working; it acts like a switch.
In order to activate a channel of the L293D, its enable pin must be set high. When C=H and D=L, the motor rotates in the clockwise direction (upward movement of the elevator). When C=L and D=H, the motor rotates in the anticlockwise direction (downward movement of the elevator). When C=D, the motor stops rotating (the elevator stops moving).


Table 4.1: L293 Truth table
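The truth table can be encoded as a small Python function modelling one L293D channel (the C/D naming follows the description above; treating it in software is purely illustrative, since on the robot these lines are driven by the microcontroller's output pins):

```python
# Hedged sketch of one L293D channel per the truth table above:
# EN (enable) plus the two direction inputs C and D.

def motor_action(enable: int, c: int, d: int) -> str:
    if not enable:
        return "free-running stop"   # channel disabled
    if c == 1 and d == 0:
        return "clockwise"           # e.g. upward movement
    if c == 0 and d == 1:
        return "anticlockwise"       # e.g. downward movement
    return "stop"                    # C == D brakes the motor

print(motor_action(1, 1, 0))  # clockwise
print(motor_action(1, 0, 1))  # anticlockwise
print(motor_action(1, 1, 1))  # stop
print(motor_action(0, 1, 0))  # free-running stop
```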

4.4.6 DC Motors

Fig 4.12: DC motor


Electric motors are everywhere! In your house, almost every mechanical movement that you
see around you is caused by an AC (alternating current) or DC (direct current) electric motor. Let's start by
looking at the overall plan of a simple two-pole DC electric motor. A simple motor has six parts, as shown in
the diagram below:
 Armature or rotor
 Commutator
 Brushes
 Axle
 Field magnet
 DC power supply of some sort
An electric motor is all about magnets and magnetism: A motor uses magnets to create motion.
If you have ever played with magnets you know about the fundamental law of all magnets: Opposites attract
and likes repel. So if you have two bar magnets with their ends marked "north" and "south," then the north end

of one magnet will attract the south end of the other. On the other hand, the north end of one magnet will repel
the north end of the other (and similarly, south will repel south). Inside an electric motor, these attracting and
repelling forces create rotational motion.

Fig 4.13: Working principle of a DC motor


In the above diagram, you can see two magnets in the motor: the armature (or rotor) is an electromagnet, while the field magnet is a permanent magnet (the field magnet could be an electromagnet as well, but in most small motors it is a permanent magnet in order to save power).
To understand how an electric motor works, the key is to understand how the electromagnet works.
An electromagnet is the basis of an electric motor. You can understand how things work in the motor
by imagining the following scenario. Say that you created a simple electromagnet by wrapping 100 loops of
wire around a nail and connecting it to a battery. The nail would become a magnet and have a north and south
pole while the battery is connected.
Now say that you take your nail electromagnet, run an axle through the middle of it and suspend it in
the middle of a horseshoe magnet as shown in the figure below. If you were to attach a battery to the
electromagnet so that the north end of the nail appeared as shown, the basic law of magnetism tells you what
would happen: The north end of the electromagnet would be repelled from the north end of the horseshoe
magnet and attracted to the south end of the horseshoe magnet. The south end of the electromagnet would be
repelled in a similar way. The nail would move about half a turn and then stop in the position shown.


Fig 4.14: Electromagnet in a horseshoe magnet
You can see that this half-turn of motion is simply due to the way magnets naturally attract and repel
one another. The key to an electric motor is to then go one step further so that, at the moment that this half-
turn of motion completes, the field of the electromagnet flips. The flip causes the electromagnet to complete
another half-turn of motion. You flip the magnetic field just by changing the direction of the electrons flowing
in the wire (you do that by flipping the battery over). If the field of the electromagnet were flipped at precisely
the right moment at the end of each half-turn of motion, the electric motor would spin freely.

4.4.7 Camera

Fig 4.15: Web cam


A webcam is a video camera that feeds or streams its image in real time to or through
a computer to a computer network. The term "webcam" (a clipped compound) may also be used in its original
sense of a video camera connected to the Web continuously for an indefinite time, rather than for a particular
session, generally supplying a view for anyone who visits its web page over the Internet. Some of them, for
example, those used as online traffic cameras, are expensive, rugged professional video cameras.

Specifications:
 HD video calling (1280 x 720 pixels) with recommended system
 Video capture: Up to 1280 x 720 pixels
 Photos: Up to 3.0 megapixels (software enhanced)
 Built-in mic with Logitech RightSound™ technology
 Hi-Speed USB 2.0 certified (recommended)
 Universal clip fits laptops, LCD or CRT monitors

Webcams typically include a lens, an image sensor, support electronics, and may also include
one or even two microphones for sound.

Fig 4.16: USB webcam PCB with and without lens close up


Image sensors can be CMOS or CCD, the former being dominant for low-cost cameras, but CCD cameras do not necessarily outperform CMOS-based cameras in the low-price range. Most consumer webcams are capable of providing VGA-resolution video at a frame rate of 30 frames per second. Many newer devices can produce video in multi-megapixel resolutions, and a few can run at high frame rates, such as the PlayStation Eye, which can produce 320×240 video at 120 frames per second. The Wii
Remote contains an image sensor with a resolution of 1024×768 pixels.

As the Bayer filter is proprietary, any webcam contains some built-in image processing, separate from compression.

Various lenses are available, the most common in consumer-grade webcams being a
plastic lens that can be manually moved in and out to focus the camera. Fixed-focus lenses, which have no
provision for adjustment, are also available. As a camera system's depth of field is greater for small image
formats and is greater for lenses with a large f-number (small aperture), the systems used in webcams have a
sufficiently large depth of field that the use of a fixed-focus lens does not impact image sharpness to a great
extent.

Most models use simple, focal-free optics (fixed focus, factory-set for the usual distance between the user and the monitor to which the camera is fastened) or manual focus.

4.4.8 ZigBee Wireless transceiver

Fig 4.17: ZigBee 16 MHz transceiver

A ZigBee RF module (radio frequency module) is a (usually) small electronic device used to transmit and/or receive radio signals between two devices. In an embedded system it is often desirable to communicate with another device wirelessly. This wireless communication may be accomplished through optical communication or through radio frequency (RF) communication. For many applications the medium of choice is RF, since it does not require line of sight. RF communications incorporate a transmitter and a receiver. Modules come in various types and ranges; some can transmit up to 500 feet. RF modules are widely used in electronic designs owing to the difficulty of designing radio circuitry: good radio design is notoriously complex because of the sensitivity of radio circuits and the accuracy of components and layouts required to achieve operation on a specific frequency. In addition, a reliable RF communication circuit requires careful monitoring of the manufacturing process to ensure that the RF performance is not adversely affected.
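To give a flavour of how drive commands might travel over this link, here is a hedged Python sketch of a tiny command frame with a start byte, a length byte and an XOR checksum. This framing convention is a hypothetical example for illustration, not the ZigBee/XBee API frame format; on the robot the resulting bytes would be written to the transceiver's serial port (for example with pyserial).

```python
# Hypothetical command frame for the ZigBee serial link:
# [0x7E start byte][payload length][payload bytes][XOR checksum].

def frame_command(payload: bytes) -> bytes:
    checksum = 0
    for b in payload:
        checksum ^= b                     # simple XOR checksum over payload
    return bytes([0x7E, len(payload)]) + payload + bytes([checksum])

frame = frame_command(b"FWD")             # e.g. a "drive forward" command
print(frame.hex())                        # 7e0346574455
```

The receiver would resynchronise on the 0x7E start byte and discard any frame whose checksum does not match, which keeps corrupted radio bytes from becoming spurious motor commands.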

4.4.9 Relay Module

Fig 4.18: Single channel relay module

The Single Relay Board can be used to turn lights, fans and other devices on/off while keeping
them isolated from your microcontroller. The Single Relay Board allows you to control high-power devices
(up to 10 A) via the on-board relay. Control of the relay is provided via a 1 x 3 header – friendly to servo
cables and convenient to connect to many development boards.
Specification

 Supply Voltage: 5 V
 Control high-power devices up to 10 A with a simple high/low signal
 Provides isolation between the microcontroller and the device being controlled
 Screw terminals for relay connections
 3-pin servo-style header for power/signal interface
 Equipped with a high-current relay (10 A @ 28 VDC)
 2 LEDs that show the current state of the relay
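Because the relay sits between the controller and the gun-trigger mechanism, the firing decision reduces to a logic level on the relay's signal pin. The sketch below is an assumed safety rule for illustration (relay energised only when the system is armed and the face is unrecognised); the real arming policy would be defined by the operator software.

```python
# Hedged sketch tying the relay to the recognition result described in the
# introduction. The arming rule itself is an assumption for illustration.

HIGH, LOW = 1, 0

def relay_signal(identity: str, armed: bool) -> int:
    """Return the logic level to drive the relay module's signal pin."""
    if not armed:
        return LOW                           # system disarmed: relay always off
    return HIGH if identity == "unknown" else LOW

print(relay_signal("unknown", armed=True))   # 1 -> relay closes
print(relay_signal("operator1", armed=True)) # 0 -> authorized, firing inhibited
```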


CHAPTER 5
HARDWARE DESCRIPTION
5.1 RASPBERRY PI 3B
The Raspberry Pi is a series of small single-board computers developed in the United Kingdom by the Raspberry Pi Foundation to promote the teaching of basic computer science in schools and in developing countries. The original model became far more popular than anticipated, selling outside its target market for uses such as robotics. It does not include peripherals (such as a keyboard and mouse) or a case; however, some accessories have been included in several official and unofficial bundles.

5.1.1 Architecture of Raspberry pi 3B

Fig 5.1: Architecture of Raspberry pi

5.1.2 Features of Raspberry pi 3 model B

 Quad Core 1.2 GHz Broadcom BCM2837 64-bit CPU
 1 GB RAM
 BCM43438 wireless LAN and Bluetooth Low Energy (BLE) on board
 10/100 Base-T Ethernet
 40-pin extended GPIO
 4 USB 2.0 ports
 4-pole stereo output and composite video port
 Full-size HDMI
 CSI camera port for connecting a Raspberry Pi camera
 DSI display port for connecting a Raspberry Pi touchscreen display
 Micro SD port for loading your operating system and storing data
 Upgraded switched Micro USB power source, up to 2.5 A

5.1.3 Description of Raspberry Pi 3B


The Raspberry Pi is a multipurpose system that offers high-performance computing with very low power consumption. The GPU provides OpenGL ES 2.0, hardware-accelerated OpenVG, and 1080p30 H.264 high-profile decode, and is capable of 1 Gpixel/s, 1.5 Gtexel/s or 24 GFLOPS of general-purpose compute. What does all that mean? It means that if you plug the Raspberry Pi 3 into your HDTV, you can watch Blu-ray quality video, using H.264 at 40 Mbit/s.
The Raspberry Pi 3's four built-in USB ports provide enough connectivity for a mouse, keyboard, or anything else that you feel the RPi needs, but if you want to add even more you can still use a USB hub. Keep in mind, it is recommended that you use a powered hub so as not to overtax the on-board voltage regulator. Powering the Raspberry Pi 3 is easy: just plug any USB power supply into the micro-USB port. There's no power button, so the Pi will begin to boot as soon as power is applied; to turn it off, simply remove power. The four built-in USB ports can even output up to 1.2 A, enabling you to connect more power-hungry USB devices (this does require a 2 A micro-USB power supply).
The Raspberry Pi 3 features the same 40-pin general-purpose input-output (GPIO) header as all
the Pis going back to the Model B+ and Model A+. Any existing GPIO hardware will work without
modification; the only change is a switch to which UART is exposed on the GPIO’s pins, but that’s handled
internally by the operating system.
The Raspberry Pi 3 shares the same SMSC LAN9514 chip as its predecessor, the Raspberry Pi
2, adding 10/100 Ethernet connectivity and four USB channels to the board. As before, the SMSC chip
connects to the SoC via a single USB channel, acting as a USB-to-Ethernet adaptor and USB hub.
5.2 Arduino UNO (ATMega328P)
The Arduino UNO is an open-source microcontroller board based on the Microchip ATmega328P microcontroller and developed by Arduino.cc. The board is equipped with sets of digital and analog input/output (I/O) pins that may be interfaced to various expansion boards (shields) and other circuits. The board has 14 digital pins and 6 analog pins, and is programmable with the Arduino IDE (Integrated Development Environment) via a Type B USB cable. It can be powered by the USB cable or by an external 9-volt battery, though it accepts voltages between 7 and 20 volts.

5.2.1 Architecture of Arduino UNO

Fig 5.2: Architecture of Arduino

5.2.2 Features
 Operating Voltage: 5 V
 8-bit AVR microcontroller
 Input Voltage: 7 to 20 V
 Digital I/O Pins: 14 (of which 6 provide PWM output)
 Analog Input Pins: 6
 DC Current per I/O Pin: 20 mA
 DC Current for 3.3V Pin: 50 mA
 Flash Memory: 32 KB, of which 0.5 KB is used by the bootloader
 SRAM: 2 KB
 EEPROM: 1 KB
 Clock Speed: 16 MHz
 Length: 68.6 mm
 Width: 53.4 mm
 Weight: 25 g


5.2.3 Description of Arduino UNO


The Arduino Uno R3 is a microcontroller board based on the ATmega328 (datasheet). It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started.

The Uno differs from all preceding boards in that it does not use the FTDI USB-to-serial driver chip. Instead,
it features the Atmega16U2 (Atmega8U2 up to version R2) programmed as a USB-to-serial converter.
Revision 2 of the Uno board has a resistor pulling the 8U2 HWB line to ground, making it easier to put
into DFU mode.
Revision 3 of the board has the following new features:

 1.0 pinout: added SDA and SCL pins near the AREF pin, and two other new pins placed near the RESET pin: the IOREF, which allows shields to adapt to the voltage provided by the board (in future, shields will be compatible both with boards that use the AVR, which operate at 5 V, and with the Arduino Due, which operates at 3.3 V), and a second pin that is not connected and is reserved for future purposes.
 Stronger RESET circuit.
 ATmega16U2 replaces the 8U2.

"Uno" means one in Italian and is named to mark the upcoming release of Arduino 1.0. The Uno and version 1.0 will be the reference versions of Arduino moving forward. The Uno is the latest in a series of USB Arduino boards, and the reference model for the Arduino platform.

5.2.4 ON-CHIP FLASH PROGRAM MEMORY


The Arduino has 32 KB of flash memory. Flash memory is used to store your program image and any initialized data. You can execute program code from flash, but you can't modify data in flash memory from your executing code; to modify the data, it must first be copied into SRAM. Flash memory is the same technology used for thumb drives and SD cards. It is non-volatile, so your program will still be there when the system is powered off. Flash memory has a finite lifetime of about 100,000 write cycles, so if you upload 10 programs a day, every day, for the next 27 years, you might wear it out.
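The wear-out figure quoted above checks out arithmetically:

```python
# Quick check of the flash-endurance arithmetic: 10 uploads a day for
# 27 years comes in just under the ~100,000-cycle endurance figure.

uploads_per_day = 10
years = 27
total_writes = uploads_per_day * 365 * years
print(total_writes)  # 98550, just under the ~100,000-cycle limit
```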

5.2.5 ON-CHIP STATIC RAM
SRAM or Static Random Access Memory, can be read and written from your executing
program. SRAM memory is used for several purposes by a running program:
 Static Data - This is a block of reserved space in SRAM for all the global and static variables
from your program. For variables with initial values, the runtime system copies the initial value
from Flash when the program starts.
 Heap - The heap is for dynamically allocated data items. The heap grows from the top of the
static data area up as data items are allocated.
 Stack - The stack is for local variables and for maintaining a record of interrupts and function calls. The stack grows from the top of memory down towards the heap. Every interrupt, function call and/or local variable allocation causes the stack to grow. Returning from an interrupt or function call will reclaim all stack space used by that interrupt or function.

Most memory problems occur when the stack and the heap collide. When this happens,
one or both of these memory areas will be corrupted with unpredictable results. In some cases it will
cause an immediate crash. In others, the effects of the corruption may not be noticed until much later.


CHAPTER 6
SOFTWARE DESCRIPTION
6.1 RASPBIAN OS
The Raspberry Pi debuted in February 2012. The group behind the computer's development -
the Raspberry Pi Foundation - started the project to make computing fun for students, while also creating
interest in how computers work at a basic level. Unlike using an encased computer from a manufacturer, the
Raspberry Pi shows the essential guts behind the plastic. The Raspberry Pi is believed to be an ideal learning
tool, in that it is cheap to make, easy to replace and needs only a keyboard and a TV to run. These same
strengths also make it an ideal product to jumpstart computing in the developing world.

Fig 6.1: Raspbian OS logo

The idea behind a tiny and affordable computer for kids came in 2006, when Eben Upton, Rob
Mullins, Jack Lang and Alan Mycroft, based at the University of Cambridge‘s Computer Laboratory, became
concerned about the year-on-year decline in the numbers and skills levels of the A Level students applying to
read Computer Science. From a situation in the 1990s where most of the kids applying were coming to
interview as experienced hobbyist programmers, the landscape in the 2000s was very different; a typical
applicant might only have done a little web design.

6.1.1 Setting up Raspbian OS:

Let’s first connect the board with all the necessary accessories to install and run an operating
system.

Step 1: Take the Pi out of its anti static cover and place it on the non-metal table.

Step 2: Connect the display – Connect the HDMI cable to the HDMI port on the Pi and the other end of the
HDMI cable to the HDMI port of the TV.

Step 3: Connect your Ethernet cable from the Router to the Ethernet port on the Pi

Step 4: Connect your USB mouse to one of the USB ports on the Pi

Step 5: Connect your USB Keyboard to the other USB port on the Pi

Step 6: Connect the micro USB charger to the Pi but don’t connect it to the power supply yet

Step 7: Flash the SD card with the Raspbian OS.

 To prepare the card for use with the Pi, we will need to put an OS on the card. We certainly cannot drag and drop the OS files onto the card, but flashing the card is not too difficult either.
 Since we have already decided to install Raspbian, let's download the Raspbian image from the following link: http://www.raspberrypi.org/downloads/.
 Unzip the contents of the Zip file into a folder on your machine; one of the unzipped files will be a .img file, which is what needs to be flashed onto the SD card. [The current version of the Zip contains only this file and none other.]
 Flashing from Linux instructions.
 Start the terminal on your Linux OS
 Insert the empty SD Card into the card reader of your machine.
 Type sudo fdisk -l to see all the disks listed. Find the SD card by its size, and note the device address (/dev/sdX, where X is a letter identifying the storage device; some systems with integrated SD card readers may use the /dev/mmcblkX format, so just change the target in the following instructions accordingly).
 Use cd to change to the directory with the .img file you extracted from the Zip archive.
 Type sudo dd if=imagefilename.img of=/dev/sdX bs=2M to write the file imagefilename.img to the SD card at that device address. Replace imagefilename.img with the actual name of the file extracted from the Zip archive. This step takes a while, so be patient! During flashing, nothing will be shown on the screen until the process is fully complete.


Fig 6.2: Flashing Raspbian OS from Windows

1. The Image Writer for Windows is used in place of dd. Designed specifically for creating USB or SD card images of Linux distributions, it features a simple graphical user interface that makes the creation of a Raspberry Pi SD card straightforward. Download the latest version of Image Writer for Windows from the website: https://launchpad.net/win32-image-writer. Below are the steps.

i. Download the binary (not source) Image Writer for Windows Zip file, and extract it to a
folder on your computer.

ii. Plug your blank SD card into a card reader connected to the PC.

iii. Double-click the Win32DiskImager.exe file to open the program, and click the blue folder
icon to open a file browse dialogue box.

iv. Browse to the imagefilename.img file you extracted from the distribution archive, replacing
imagefilename.img with the actual name of the file extracted from the Zip archive, and then click the Open
button.

v. Select the drive letter corresponding to the SD card from the Device drop-down dialogue
box. If you're unsure which drive letter to choose, open My Computer or Windows Explorer to check.

vi. Click the Write button to flash the image file to the SD card.

2. Once the OS is flashed, insert the SD card into the Pi SD Card slot
3. Connect the MicroUSB to the power source and switch it on.

4. Now the system boots into the screen below and the LEDs on the board will blink. Below is a small GIF showing the boot screen.

Fig 6.3: Boot screen of Raspberry PI

5. Now you will need to login with username/password combination of pi/raspberry.


6. If you would like to use the GUI interface type startx. Below is the image showing the previous two
steps.

6.2 Arduino IDE


The Arduino integrated development environment (IDE) is a cross-platform application (for Windows, macOS, Linux) that is written in the programming language Java. It is used to write and upload programs to Arduino-compatible boards, and also, with the help of third-party cores, to other vendor development boards.[2]
The source code for the IDE is released under the GNU General Public License, version 2.[3] The Arduino IDE supports the languages C and C++ using special rules of code structuring. The Arduino IDE supplies a software library from the Wiring project, which provides many common input and output procedures. User-written code only requires two basic functions, for starting the sketch and the main program loop, that are compiled and linked with a program stub main() into an executable cyclic executive program

with the GNU toolchain, also included with the IDE distribution. The Arduino IDE employs the
program avrdude to convert the executable code into a text file in hexadecimal encoding that is loaded into the
Arduino board by a loader program in the board's firmware.
The figure below gives an overview of the Arduino IDE.

Fig 6.4: Arduino Ide Software page

1. Menu Bar: Gives you access to the tools needed for creating and saving Arduino sketches.
2. Verify Button: Compiles your code and checks for errors in spelling or syntax.
3. Upload Button: Sends the code to the board that’s connected such as Arduino Uno in this case. Lights on
the board will blink rapidly when uploading.
4. New Sketch: Opens up a new window containing a blank sketch.
5. Sketch Name: When the sketch is saved, the name of the sketch is displayed here.
6. Open Existing Sketch: Allows you to open a saved sketch or one from the stored examples.
7. Save Sketch: This saves the sketch you currently have open.
8. Serial Monitor: When the board is connected, this will display the serial information of your Arduino
9. Code Area: This area is where you compose the code of the sketch that tells the board what to do.
10. Message Area: This area tells you the status on saving, code compiling, errors and more.

11. Text Console: Shows the details of an error messages, size of the program that was compiled and
additional info.
12. Board and Serial Port: Tells you what board is being used and what serial port it's connected to.

6.2.1 Upload a program


Open the LED blink example sketch: File > Sketchbook > Examples > led_blink.

Fig 6.5: opening a sketch

Here's what the code for the LED blink example looks like.

Fig 6.6: Led Blink Program editing


Select the serial device of the Arduino board from the Tools | Serial Port menu. On Windows, this should
be COM1 or COM2 for a serial Arduino board, or COM3, COM4, or COM5 for a USB board. On the Mac,
this should be something like /dev/cu.usbserial-1B1 for a USB board, or something
like /dev/cu.USA19QW1b1P1.1 if using a Keyspan adapter with a serial board (other USB-to-serial adapters
use different names).

Fig 6.7: Tools tab

Push the reset button on the board then click the Upload button in the IDE. Wait a few seconds. If successful,
the message "Done uploading." will appear in the status bar.

Fig 6.8: RESET button being pressed


Fig 6.9: Sketch compiling and uploading tab

If the Arduino board doesn't show up in the Tools | Serial Port menu, or you get an error while uploading,
please see the FAQ for troubleshooting suggestions.

A few seconds after the upload finishes, you should see the amber (yellow) LED on the board start to blink.

6.3 OpenCV

OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage and then Itseez (which was later acquired by Intel). The library is cross-platform and free for use under the open-source BSD license. OpenCV supports the deep learning frameworks TensorFlow, Torch/PyTorch and Caffe.

6.3.1 OpenCV modules

 cv - Main OpenCV functions.

 cvaux - Auxiliary (experimental) OpenCV functions.

 cxcore - Data structures and linear algebra support.

 highgui - GUI functions.

6.3.2 OpenCV working with video capturing

OpenCV supports capturing images from a camera or a video file (AVI).

 Initializing capture from a camera:

CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0

 Initializing capture from a file:

CvCapture* capture = cvCaptureFromAVI("infile.avi");

Capturing a frame:

IplImage* img = 0;
if(!cvGrabFrame(capture)) {              // capture a frame
    printf("Could not grab a frame\n\7");
    exit(0);
}
img = cvRetrieveFrame(capture);          // retrieve the captured frame

To obtain images from several cameras simultaneously, first grab an image from each camera.
Retrieve the captured images after the grabbing is complete.

 Releasing the capture source: cvReleaseCapture(&capture);

6.3.3 OpenCV installation Guide


1. sudo raspi-config
2. sudo reboot
3. sudo apt-get purge wolfram-engine
4. sudo apt-get purge libreoffice*
5. sudo apt-get clean
6. sudo apt-get autoremove
7. sudo apt-get update && sudo apt-get upgrade
8. sudo apt-get update -y
9. sudo apt-get install build-essential cmake pkg-config
10. sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
11. sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
12. sudo apt-get install libxvidcore-dev libx264-dev
13. sudo apt-get install libgtk2.0-dev libgtk-3-dev
14. sudo apt-get install libatlas-base-dev gfortran
15. sudo apt-get install python2.7-dev python3-dev
16. cd ~
17. wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.3.0.zip
18. unzip opencv.zip
19. wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.3.0.zip
20. unzip opencv_contrib.zip
21. wget https://bootstrap.pypa.io/get-pip.py
22. sudo python get-pip.py
23. sudo python3 get-pip.py
24. pip install numpy
25. cd ~/opencv-3.3.0/
26. mkdir build
27. cd build
28. cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
-D ENABLE_PRECOMPILED_HEADERS=OFF \
-D BUILD_EXAMPLES=ON ..
29. sudo nano /etc/dphys-swapfile
(set CONF_SWAPSIZE=1024)
30. sudo /etc/init.d/dphys-swapfile stop
31. sudo /etc/init.d/dphys-swapfile start
32. make -j4
33. sudo make install
34. sudo ldconfig
(verify the build: python -c "import cv2")
35. rm -rf opencv-3.3.0 opencv_contrib-3.3.0
36. sudo nano /etc/dphys-swapfile
(restore CONF_SWAPSIZE=100)
37. sudo /etc/init.d/dphys-swapfile stop
38. sudo /etc/init.d/dphys-swapfile start


CHAPTER 7
RESULTS AND DISCUSSION

The main objective of this project is to detect unauthorized persons and detain them, either in
war zones or in hostage-rescue situations, and to locate land mines or any metal-based explosives in order to
avoid casualties. The pictures below show how the system operates.

Fig 7.1: Complete MAARS project with Raspberry Pi and Arduino combined

Fig 7.2: Adding a person's facial data to the facial dataset


Fig 7.3: The person seen by the camera is authorized; the system displays the person's name and
prevents the gun module from firing

Fig 7.4: The person viewed by the camera is unauthorized; the system displays "Unknown" over
the image frame of the person.

ADVANTAGES
 Consistency of performance.
 24/7 continuous working.
 Improved quality of product.
 It can move from one location to another.
 Robotic workers never get tired.
 Do not need to be paid.
 Can be made to perform even the most dangerous tasks without concern.
 Wide acceptance.

DISADVANTAGES
 Wireless network range is limited.
 Power backup has to be provided periodically, since the power consumption is high.
 Facial-recognition processing is slow for real-time operation.
 Vision of the camera is nil at night or in dark regions, so it requires additional modules for night
vision.
 Cost of the project is high.


CHAPTER 8
CONCLUSION
The proposed system is aimed at the welfare of our infantry and the surveillance of
warzone areas to minimize casualties to a great extent. It detects metal objects such as land mines using a
metal detector. Our system will also be able to detect smoke and fire and take evasive action. It can measure
infrared (IR) light radiating from objects in its field of view using the PIR sensor and thus detect heat
radiation emitted by humans and animals alike. The robot can be manually controlled, but it will be able to
take precautionary measures to protect itself and remain undetected. Hence, our system is sure to create a
revolution in its own field and ensure complete support from people of different societies.

8.1 APPLICATIONS
 It can be used to monitor any suspicious object where the presence of humans may be dangerous.
 It can be used in mining due to presence of gas detector and fire detector.
 It is used in gas industries to detect leaks which can be hazardous.
 It can be used in military; dangerous tasks can be carried out by the robot without worrying about loss
of human life.
 Military and aerospace embedded software applications
 Communication applications
 Industrial automation and process control software
 Mastering the complexity of applications.
 Reduction of product design time.
 Real-time processing of ever-increasing amounts of data.
 Intelligent, autonomous sensors.

8.2 FUTURE SCOPE


 A robotic arm can be added for pick-and-place tasks.
 A water tank can be fitted so the robot can be used as a fire extinguisher.
 The normal camera can be replaced with a night-vision camera.
 Zigbee technology can be replaced with another technology to operate the robot over longer distances.
 An RF sensor can be added so that the robot does not collide with obstacles when not controlled
manually.


CHAPTER 9
BIBLIOGRAPHY
[1] S. Y. Harmon and D. W. Gage, "Current Technical Research Issues of Autonomous Robots Employed in
Combat," in Proc. 17th Annual Electronics and Aerospace Conference, Washington DC, 11-13 September 1984.
[2] Surya Singh and Scott Thayer, "ARMS (Autonomous Robots for Military Systems): A Survey of
Collaborative Robotics Core Technologies and Their Military Applications," Tech. Report CMU-RI-TR-01-16,
Robotics Institute, Carnegie Mellon University, July 2001.
[3] E. Callaway, P. Gorday, L. Hester, J. A. Gutierrez, M. Naeve, B. Heile, et al., "Home Networking with
IEEE 802.15.4: A Developing Standard for Low-Rate Wireless Personal Area Networks," IEEE Communications
Magazine, August 2002, pp. 70-77.
[4] R. C. Luo, K. L. Su, S. H. Shen, and K. H. Tsai, "Networked Intelligent Robots Through the Internet:
Issues and Opportunities," Proc. IEEE, Vol. 91, March 2003, pp. 371-382.
[5] J. Khurshid (School of Computer Sci. & Technol., Harbin Inst. of Technol., China), "Military Robots - A
Glimpse from Today and Tomorrow," in Proc. 8th Control, Automation, Robotics and Vision Conference
(ICARCV 2004), Vol. 1, 2004.
[6] Raj Reddy, "Robotics and Intelligent Systems in Support of Society," IEEE Intelligent Systems, Vol. 21,
No. 3, May/June 2006.


APPENDIX A
Raspberry Pi board schematics:

Fig A.1: Schematics of Raspberry Pi 3 Model B


APPENDIX B
Arduino Uno schematics:

Fig B.1: Arduino Uno R3 schematics


ATmega328P pin diagram:

Fig B.2: ATmega328P pin diagram

Arduino Boot loader:


The behaviour described above happens thanks to a special piece of code that is executed at
every reset of the microcontroller and that looks for a sketch to be uploaded from the serial/USB port using a
specific protocol and speed. If no connection is detected, the execution is passed to the code of your sketch.

This little piece of code (usually 512 bytes) is called the "Bootloader"; it resides in an area of the
microcontroller's memory, at the end of the address space, that cannot be overwritten by a regular sketch
and has been reserved for this purpose.

Fig B.3: Memory Map of an ATmega328P

To program the bootloader and make the microcontroller compatible with the
Arduino Software (IDE), you need an In-circuit Serial Programmer (ISP): a device that connects
to a specific set of pins of the microcontroller to program its entire flash memory, bootloader
included. The ISP programming procedure also includes the writing of fuses: a special set of bits
that define how the microcontroller behaves under specific circumstances.


APPENDIX C
L293D schematics:

Fig C.1: L293D schematics

Relay module schematics:

Fig C.2: Single channel relay schematics


IR proximity sensor schematics:

Fig C.3: IR proximity sensor schematics


APPENDIX D
Raspberry Pi boot sequence flow chart:

Fig D.1: Raspberry Pi boot sequence flow chart
