CHAPTER 1
INTRODUCTION
For the last few decades, robots have become increasingly popular and common in military organizations.
One such robot is MAARS. These robots offer many advantages over human soldiers. Most importantly, they
have the capability to perform missions remotely in the field without any actual danger to human lives, which
shows the great impact of military robots. These robots are sturdier and more capable of withstanding damage
than a human, and therefore give greater chances of success in dangerous environments. Whenever a robot is
shot down, the military can simply roll out a new one. Nevertheless, one should not forget the broader effects
and impact of military robots.
MAARS is a robot that can perform given tasks such as locomotion, sensing, localization, and motion
planning without human control while the task is in progress. The MAARS robot is designed for
reconnaissance, surveillance, and target acquisition (RSTA) to increase security at forward locations. It can be
configured for non-lethal, less-lethal, and lethal effects.
In the proposed system, such a military robot is designed to detect unknown persons in border areas,
war zones, or any other region, and to provide obstacle detection, bomb detection, and a gun-targeting system
based on facial recognition. A Wi-Fi network is used to send the data to the host system wirelessly. All these
functions can be performed automatically or manually with the help of LabVIEW or SSH software installed
on the host system.
The entire control resides with the microcontroller. In addition, bomb detection, facial recognition
and gun-trigger control are included. The robot can move over rugged surfaces, and it can be controlled from
a remote location with a computer or any other smart computing device.
CHAPTER 2
LITERATURE SURVEY
The first MAARS robot was introduced by QinetiQ to the military on 5 June 2008, under a contract
from the Explosive Ordnance Disposal/Low-Intensity Conflict (EOD/LIC) program. On 5 August 2008, the
MAARS participated in a demonstration to showcase technology for battlefield and urban environments.
Its exercise was a traffic-control-point encounter with a suspected suicide bomber or vehicle-emplaced
explosive. In another scenario, the MAARS provided overwatch as a different robot attached an explosive
charge to a door. After the door was blown open, MAARS entered the doorway, encountered hostile fire, and
returned fire with its machine gun.
In this paper, an intelligent multi-functional mobile robot is presented. The hardware comprises an
ultrasonic sensor, a Bluetooth device, a wireless camera, DC servo motors, and a mechanical gripper. A single
ultrasonic sensor is programmed to seek the object and complete the object localization. A human-machine
interface is developed to remotely control the mobile robot. Through wireless communication and the camera,
the exploration of tiny and harsh environments can be carried out. A hardware description language is used in
the controller design and the peripheral I/O circuit; the human-machine interface is written in the C language.
Shu-Yin Chiang, Yi-Quan Jiang, Hsin-Tieh Yang, Chia-Chin Wang, and Yu-Chen Lee
Department of Information and Telecommunications Engineering, Ming Chuan University, Taiwan
CHAPTER 3
TECHNICAL BACKGROUND
3.1 Raspberry Pi Processor Architecture
In this project, the ARM Cortex-A53 architecture is used. ARM, previously Advanced RISC Machine,
originally Acorn RISC Machine, is a family of Reduced Instruction Set Computing (RISC) architectures for
computer processors, configured for various environments. ARM Holdings licenses the architecture to other
companies, who design their own products that implement one of those architectures, including systems-on-
chips (SoC) and systems-on-modules (SoM) that incorporate memory, interfaces, radios, etc. It also designs
cores that implement this instruction set and licenses these designs to a number of companies that incorporate
those core designs into their own products.
Processors that have a RISC architecture typically require fewer transistors than those with a complex
instruction set computing (CISC) architecture (such as the x86 processors found in most personal computers),
which improves cost, power consumption, and heat dissipation. These characteristics are desirable for light,
portable, battery-powered devices, including smartphones, laptops, tablet computers, and other embedded
systems. For supercomputers, which consume large amounts of electricity, ARM can also be a power-efficient
solution.
ARM Holdings periodically releases updates to its architectures and core designs. All of them support
a 32-bit address space (only pre-ARMv3 chips, made before ARM Holdings was formed, as in the original
Acorn Archimedes, had a smaller one) and 32-bit arithmetic. ARM cores originally had 32-bit fixed-length
instructions, but later versions of the architecture also support a variable-length instruction set that provides
both 32- and 16-bit instructions for improved code density. Some older cores can also provide hardware
execution of Java bytecodes. The ARMv8-A architecture, announced in October 2011, adds support for a
64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set.
3.2 OpenCV
OpenCV is a library of programming functions mainly aimed at real-time computer vision. It has a
modular structure, which means that the package includes several shared or static libraries. We use the
image processing module, which includes linear and non-linear image filtering, geometrical image
transformations (resize, affine and perspective warping, and generic table-based remapping), color space
conversion, histograms, and so on. Our project uses algorithms such as the Viola-Jones (Haar cascade)
classifier, the LBPH (Local Binary Patterns Histogram) face recognizer, and the Histogram of Oriented
Gradients (HOG).
The total system is divided into three modules: database creation, training the dataset, and testing,
with sending alert messages as an extension.
Database creation
a) Initialize the camera and set an alert message to grab the attention of the subject.
b) Get the user ID as input.
Testing
Load the Haar classifier, the LBPH face recognizer, and the trained data from the XML or YML file.
a) Capture the image from the camera,
b) convert it into grayscale,
c) detect the face in it, and
d) predict the face using the above recognizer.
This proposed system uses the Viola-Jones algorithm for face detection, which uses modified Haar
cascades for detection. The Raspberry Pi is the main component in the project. We use a USB webcam to
capture photos. We can access the Raspberry Pi's console either by using SSH from a laptop or by using a
keyboard and mouse with a display device, such as a TV, connected to the Pi. First, the algorithm needs a
large number of positive images and negative images to train the Haar cascade classifier. Positive images are
images with clear faces, whereas negative images are those without any faces.
Integral image:
To reduce the number of classifier evaluations, we use the AdaBoost machine learning algorithm,
built into the OpenCV library as the cascade classifier, to eliminate redundant classifiers. Any classifier with a
detection probability of 50% or more is treated as a weak classifier. The weighted sum of many weak
classifiers gives a strong classifier, which makes the decision about detection. Because classification with one
strong classifier alone is still too coarse, we use a cascade of classifiers.
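The integral image itself can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the OpenCV-internal implementation: the table lets any rectangular Haar feature be evaluated with only four lookups, regardless of the rectangle's size.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of all pixels above and to the
    left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of pixels in the rectangle [y0..y1] x [x0..x1], inclusive,
    using four table lookups."""
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]   # added back: subtracted twice above
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 3, 3))   # 120, the sum of 0..15
```

A Haar feature is then just the difference of two or three such rectangle sums.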
3.5 OpenCV-Python
Python is a general-purpose programming language started by Guido van Rossum, which became
very popular in a short time mainly because of its simplicity and code readability. It enables the programmer
to express ideas in fewer lines of code without reducing readability.
Compared to languages like C/C++, Python is slower. But another important feature of Python is
that it can easily be extended with C/C++. This feature allows us to write computationally intensive code in
C/C++ and create a Python wrapper for it, so that the wrapper can be used as a Python module. This gives us
two advantages: first, the code is as fast as the original C/C++ code (since it is the actual C++ code working in
the background), and second, it is very easy to code in Python. This is how OpenCV-Python works: it is a
Python wrapper around the original C++ implementation. And the support of Numpy makes the task even
easier. Numpy is a highly optimized library for numerical operations with a MATLAB-style syntax. All
the OpenCV array structures are converted to and from Numpy arrays, so whatever operations you can do in
Numpy, you can combine with OpenCV, which increases the number of weapons in your arsenal. Besides
that, several other libraries that support Numpy, such as SciPy and Matplotlib, can be used alongside it.
OpenCV-Python is therefore an appropriate tool for fast prototyping of computer vision problems.
OpenCV-Python working
OpenCV provides a set of tutorials which will guide you through the various functions available in
OpenCV-Python. This guide is mainly focused on OpenCV 3.x (although most of the tutorials will also work
with OpenCV 2.x).
This tutorial series was started by Abid Rahman K. as part of the Google Summer of Code 2013 program,
under the guidance of Alexander Mordvintsev.
As new modules are added to OpenCV-Python, the tutorial set has to be expanded; anyone who knows a
particular algorithm well can write up a tutorial, including the basic theory of the algorithm and code showing
its basic usage, and submit it to OpenCV.
Goals
Here, you will learn how to read an image, how to display it and how to save it back
You will learn these functions: cv2.imread(), cv2.imshow(), cv2.imwrite()
Optionally, you will learn how to display images with Matplotlib
Using OpenCV
Read an image
Use the function cv2.imread() to read an image. The image should be in the working directory, or a full path
to the image should be given.
The second argument is a flag which specifies the way the image should be read:
cv2.IMREAD_COLOR: loads a color image. Any transparency of the image will be neglected. It is the
default flag.
cv2.IMREAD_GRAYSCALE: loads the image in grayscale mode.
cv2.IMREAD_UNCHANGED: loads the image as such, including the alpha channel.
For cv2.imshow(), the first argument is a window name, which is a string; the second argument is our image.
You can create as many windows as you wish, but with different window names.
cv2.waitKey() is a keyboard binding function. Its argument is the time in milliseconds. The function waits for
the specified milliseconds for any keyboard event. If you press any key in that time, the program continues.
If 0 is passed, it waits indefinitely for a key stroke. It can also be set to detect specific key strokes, e.g., if key
a is pressed, which we will discuss below.
cv2.destroyAllWindows() simply destroys all the windows we created. If you want to destroy a specific
window, use the function cv2.destroyWindow(), where you pass the exact window name as the argument.
i. Haar Classifier
This object detection framework provides competitive object detection rates in real time, for example for the
detection of faces in an image. A human can do this easily, but a computer needs precise instructions and
constraints. To make the task more manageable, Viola-Jones requires full-view frontal upright faces. Thus, in
order to be detected, the entire face must point towards the camera and should not be tilted to either side.
While it seems these constraints could diminish the algorithm's utility somewhat, the detection step is most
often followed by a recognition step, so in practice these limits on pose are quite acceptable. The
characteristics of the Viola-Jones algorithm which make it a good detection algorithm are:
a) Robust: very high detection rate (true-positive rate) and very low false-positive rate.
b) Real time: for practical applications, at least 2 frames per second must be processed.
All human faces share some similar properties. These regularities may be matched using Haar Features.
A few properties common to human faces:
a) The eye region is darker than the upper-cheeks.
b) The nose bridge region is brighter than the eyes.
OpenCV modules
Capturing a frame:
IplImage* img = 0;
if (!cvGrabFrame(capture)) {              // capture a frame
    printf("Could not grab a frame\n\7");
    exit(0);
}
img = cvRetrieveFrame(capture);           // retrieve the captured frame
To obtain images from several cameras simultaneously, first grab an image from each camera.
Retrieve the captured images after the grabbing is complete.
Speed: Matlab is built on Java, and Java is built upon C. So when you run a Matlab program, your
computer is busy trying to interpret all that Matlab code, turn it into Java, and only then execute it.
OpenCV, on the other hand, is basically a library of functions written in C/C++. So ultimately you get
more image processing done per processing cycle, and less interpreting.
As a result, programs written with OpenCV run much faster than similar programs written in
Matlab; OpenCV is very fast in terms of execution speed. For example, we might write a small program
to detect smiles in a sequence of video frames. In Matlab, we would typically get 3-4 frames analysed
per second. In OpenCV, we would get at least 30 frames per second, resulting in real-time detection.
Resources needed: Due to the high-level nature of Matlab, it uses a lot of your system's resources.
Matlab code can require over a gigabyte of RAM just to run through a video. In comparison, typical
OpenCV programs only require about 70 MB of RAM to run in real time.
Portability: MATLAB and OpenCV run equally well on Windows, Linux and macOS. Beyond that,
any device that can run C can, in all probability, run OpenCV.
CHAPTER 4
SYSTEM DESIGN
4.1 BLOCK DIAGRAM
This project targets the development of a modular robot that can be used for military applications,
i.e., in the front lines of a war zone, rescue missions, and bomb detection and defusal. Modular robots are
robots to which equipment and sensors can be added, or from which they can be removed, according to the
requirements. This design consists of two modules: a Raspberry Pi module and an Arduino module. Each
module has its own function: the Raspberry Pi module consists of the gun-trigger control, motor-driver
control, PIR sensor and metal detector, whereas the Arduino module consists of a motor driver for controlling
the motion of the land drone.
This project is implemented using a Raspberry Pi board with the ARM Cortex-A53 processor and an
Arduino UNO microcontroller (ATmega328P). The Raspberry Pi module consists of the camera, IR sensor,
metal detector, relay control for the gun trigger, and motor driver for motor control. The OS of the Raspberry
Pi system is installed on an SD card that is inserted in its memory slot.
The Arduino module is used in the land-drone control system, where it consists of a motor driver to
control two motors using the GPIO pins, and also a wireless transceiver for controlling the motors from an
external device.
The Raspberry Pi module is fitted with a USB Wi-Fi module so that it is connected to the network
to send programmable SMS messages on sensor activity and on facial recognition when an unknown person
is identified. It is also connected to a network so that an external device can connect to the Raspberry Pi
system via SSH (Secure Shell) using the PuTTY application from any device, with the help of its IP address,
to start the VNC server application for screen mirroring.
Raspberry Pi 3B
Arduino UNO (ATmega328P)
Raspbian OS
OpenCV
Python 2.0
VNC server and client
Arduino IDE
Here all the accessories are interconnected, and the common devices used to control and access data
from them are laptops, desktop computers, Android mobile phones and tablets. The host and the client device
are installed with the VNC server and client, respectively. To access the host, the client system must use its
IP address to connect. Before that, SSH and the VNC server must be installed and configured on the
Raspberry Pi board.
The baud rate for transmission of the motor-control signals to the Arduino is set to 9600.
A sensor is a transducer that measures a physical quantity and converts it into a signal that can be
read by an observer or by an instrument. Ideally, the output signal of a sensor is linearly proportional to the
value of the measured property. Hence, the sensors used in this project convert real-life parameters to an
output voltage that can be read on a dashboard.
Those voltage signals are usually fed to microcontrollers or other processing devices, which convert
the electrical values back to real-life parameters for display.
4.4.3 IR Sensor
Metal detectors are inductive proximity sensors that operate under the electrical principle of
inductance: the phenomenon where a fluctuating current, which by definition has a magnetic component,
induces an electromotive force (emf) in a target object. To amplify a device's inductance effect, a sensor
manufacturer twists wire into a tight coil and runs a current through it. An inductive proximity sensor has
four components: the coil, oscillator, detection circuit and output circuit. The oscillator generates a fluctuating
magnetic field, in the shape of a doughnut, around the winding of the coil, which is located in the device's
sensing face.
The L293 has two H-bridges and can provide about 1 A to each, with occasional peak loads up
to 2 A. Motors typically controlled with this driver are near the size of a 35 mm film canister.
There are two enable pins on the L293D, pin 1 and pin 9. To be able to drive a motor, the
corresponding enable pin must be high: pin 1 enables the left H-bridge, and pin 9 enables the right H-bridge.
If either pin 1 or pin 9 goes low, the motor in the corresponding section stops working. It acts like a switch.
In order to activate the L293D, the enable pin must be set high. When C = H and D = L, the motor
rotates in the clockwise direction (upward movement of the elevator). When C = L and D = H, the motor
rotates in the anti-clockwise direction (downward movement of the elevator). When C = D, the motor stops
rotating (the elevator stops moving).
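The enable/direction logic above can be sketched as a small truth-table function. The pin names C and D follow the text; the returned state strings are illustrative labels, not driver API values.

```python
def motor_state(enable, c, d):
    """Direction logic for one H-bridge channel of the L293D.

    enable -- the H-bridge enable pin (pin 1 or pin 9)
    c, d   -- the two direction inputs of that bridge
    """
    if not enable:
        return "free-run stop"       # bridge disabled: motor coasts
    if c and not d:
        return "clockwise"           # e.g. elevator moves up
    if d and not c:
        return "anti-clockwise"      # e.g. elevator moves down
    return "stop"                    # C == D: motor stops rotating

print(motor_state(True, 1, 0))       # clockwise
```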
4.4.6 DC Motors
4.4.7 Camera
Webcams typically include a lens, an image sensor, support electronics, and may also include
one or even two microphones for sound.
Fig 4.16: USB webcam PCB with and without lens close up
Image sensors can be CMOS or CCD, the former being dominant for low-cost cameras, but
CCD cameras do not necessarily outperform CMOS-based cameras in the low-price range. Most consumer
webcams are capable of providing VGA-resolution video at a frame rate of 30 frames per second. Many
newer devices can produce video in multi-megapixel resolutions, and a few can run at high frame rates, such
as the PlayStation Eye, which can produce 320×240 video at 120 frames per second. The Wii
Remote contains an image sensor with a resolution of 1024×768 pixels.
As the Bayer filter is proprietary, any webcam contains some built-in image processing,
separate from compression.
Various lenses are available, the most common in consumer-grade webcams being a
plastic lens that can be manually moved in and out to focus the camera. Fixed-focus lenses, which have no
provision for adjustment, are also available. As a camera system's depth of field is greater for small image
formats and is greater for lenses with a large f-number (small aperture), the systems used in webcams have a
sufficiently large depth of field that the use of a fixed-focus lens does not impact image sharpness to a great
extent.
Most models use simple, focal-free optics (fixed focus, factory-set for the usual distance from
the monitor to which it is fastened to the user) or manual focus.
The Single Relay Board can be used to turn lights, fans and other devices on/off while keeping
them isolated from your microcontroller. The Single Relay Board allows you to control high-power devices
(up to 10 A) via the on-board relay. Control of the relay is provided via a 1 x 3 header – friendly to servo
cables and convenient to connect to many development boards.
Specification
Supply Voltage-5V
Dept. of ECE, YDIT Page 28
Modular Armed Advanced Robotic System 2018 -19
Control high-power devices up to 10 A with a simple high/low signal
Provides isolation between the microcontroller and device being controlled
Screw terminals for relay connections
3-pin servo-style header for power/signal interface
Equipped with a high-current relay (10A @ 28VDC)
Two LEDs that show the current state of the relay
CHAPTER 5
HARDWARE DESCRIPTION
5.1 RASPBERRY PI 3B
The Raspberry Pi is a series of small single-board computers developed in the United
Kingdom by the Raspberry Pi Foundation to promote teaching of basic computer science in schools and
in developing countries. The original model became far more popular than anticipated, selling outside
its target market for uses such as robotics. It does not include peripherals (such as keyboards and mice)
or cases; however, some accessories have been included in several official and unofficial bundles.
5.2.2 Features
Operating Voltage: 5 Volts
It is an 8-bit microcontroller
Input Voltage: 7 to 20 Volts
Digital I/O Pins: 14 (of which 6 provide PWM output)
Analog Input Pins: 6
DC Current per I/O Pin: 20 mA
DC Current for 3.3V Pin: 50 mA
Flash Memory: 32 KB of which 0.5 KB used by bootloader
SRAM: 2 KB
EEPROM: 1 KB
Clock Speed: 16 MHz
Length: 68.6 mm
Width: 53.4 mm
Weight: 25 g
The Uno differs from all preceding boards in that it does not use the FTDI USB-to-serial driver chip. Instead,
it features the Atmega16U2 (Atmega8U2 up to version R2) programmed as a USB-to-serial converter.
Revision 2 of the Uno board has a resistor pulling the 8U2 HWB line to ground, making it easier to put
into DFU mode.
Revision 3 of the board has the following new features:
1.0 pinout: added SDA and SCL pins that are near to the AREF pin and two other new pins placed
near to the RESET pin, the IOREF that allow the shields to adapt to the voltage provided from the
board. In future, shields will be compatible both with the board that use the AVR, which operate with
5V and with the Arduino Due that operate with 3.3V. The second one is a not connected pin, that is
reserved for future purposes.
Stronger RESET circuit.
ATmega16U2 replaces the 8U2.
"Uno" means one in Italian and was chosen to mark the upcoming release of Arduino 1.0. The Uno and
version 1.0 will be the reference versions of Arduino moving forward. The Uno is the latest in a series of
USB Arduino boards, and the reference model for the Arduino platform.
Most memory problems occur when the stack and the heap collide. When this happens,
one or both of these memory areas will be corrupted with unpredictable results. In some cases it will
cause an immediate crash. In others, the effects of the corruption may not be noticed until much later.
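A desktop analogue of this failure mode can be sketched in Python, which bounds its call stack and raises RecursionError instead of letting the stack grow into the heap; a bare ATmega328P has no such guard, so a runaway stack silently corrupts memory.

```python
import sys

def recurse(n):
    """Unbounded recursion: each call consumes one stack frame."""
    return recurse(n + 1)

sys.setrecursionlimit(2000)   # Python's guard on stack depth
try:
    recurse(0)
except RecursionError:
    print("stack limit reached before any memory was corrupted")
```

On the Arduino, the equivalent defence is keeping recursion shallow and large buffers static.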
CHAPTER 6
SOFTWARE DESCRIPTION
6.1 RASPBIAN OS
The Raspberry Pi debuted in February 2012. The group behind the computer's development -
the Raspberry Pi Foundation - started the project to make computing fun for students, while also creating
interest in how computers work at a basic level. Unlike using an encased computer from a manufacturer, the
Raspberry Pi shows the essential guts behind the plastic. The Raspberry Pi is believed to be an ideal learning
tool, in that it is cheap to make, easy to replace and needs only a keyboard and a TV to run. These same
strengths also make it an ideal product to jumpstart computing in the developing world.
The idea behind a tiny and affordable computer for kids came in 2006, when Eben Upton, Rob
Mullins, Jack Lang and Alan Mycroft, based at the University of Cambridge's Computer Laboratory, became
concerned about the year-on-year decline in the numbers and skills levels of the A Level students applying to
read Computer Science. From a situation in the 1990s where most of the kids applying were coming to
interview as experienced hobbyist programmers, the landscape in the 2000s was very different; a typical
applicant might only have done a little web design.
Let’s first connect the board with all the necessary accessories to install and run an operating
system.
Step 1: Take the Pi out of its anti-static cover and place it on a non-metal table.
Step 3: Connect your Ethernet cable from the Router to the Ethernet port on the Pi
Step 4: Connect your USB mouse to one of the USB ports on the Pi
Step 5: Connect your USB Keyboard to the other USB port on the Pi
Step 6: Connect the micro USB charger to the Pi but don’t connect it to the power supply yet
To prepare the card for use with the Pi, we will need to put an OS on the card. We certainly cannot
drag and drop the OS files onto the card, but flashing the card is not too difficult either.
Since we have already decided to install Raspbian, download the Raspbian image from the following
link: http://www.raspberrypi.org/downloads/.
Unzip the contents of the Zip file into a folder on your machine; one of the unzipped files will be a
.img file, which is what needs to be flashed onto the SD card. (The current version of the Zip contains only
this file and none other.)
Flashing from Linux instructions.
Start the terminal on your Linux OS
Insert the empty SD Card into the card reader of your machine.
Type sudo fdisk -l to see all the disks listed. Find the SD card by its size, and note the device address
(/dev/sdX, where X is a letter identifying the storage device; some systems with integrated SD-card
readers may use the /dev/mmcblkX format instead, in which case just change the target in the following
instructions accordingly).
Use cd to change to the directory with the .img file you extracted from the Zip archive.
Type sudo dd if=imagefilename.img of=/dev/sdX bs=2M to write the file imagefilename.img to the
SDcard connected to the device address. Replace imagefilename.img with the actual name of the file
extracted from the Zip archive. This step takes a while, so be patient! During flashing, nothing will be
shown on the screen until the process is fully complete.
1. On Windows, Image Writer for Windows is used in place of dd. Designed specifically for creating USB or
SD-card images of Linux distributions, it features a simple graphical user interface that makes the
creation of a Raspberry Pi SD card straightforward. Download the latest version of Image Writer for
Windows from the website: https://launchpad.net/win32-image-writer. Below are the steps.
i. Download the binary (not source) Image Writer for Windows Zip file, and extract it to a
folder on your computer.
ii. Plug your blank SDcard into a card reader connected to the PC.
iii. Double-click the Win32DiskImager.exe file to open the program, and click the blue folder
icon to open a file browse dialogue box.
iv. Browse to the imagefilename.img file you extracted from the distribution archive, replacing
imagefilename.img with the actual name of the file extracted from the Zip archive, and then click the Open
button.
v. Select the drive letter corresponding to the SDcard from the Device drop-down dialogue
box. If you’re unsure which drive letter to choose, open My Computer or Windows Explorer to check.
vi. Click the Write button to flash the image file to the SDcard.
2. Once the OS is flashed, insert the SD card into the Pi SD Card slot
3. Connect the MicroUSB to the power source and switch it on.
1. Menu Bar: Gives you access to the tools needed for creating and saving Arduino sketches.
2. Verify Button: Compiles your code and checks for errors in spelling or syntax.
3. Upload Button: Sends the code to the board that’s connected such as Arduino Uno in this case. Lights on
the board will blink rapidly when uploading.
4. New Sketch: Opens up a new window containing a blank sketch.
5. Sketch Name: When the sketch is saved, the name of the sketch is displayed here.
6. Open Existing Sketch: Allows you to open a saved sketch or one from the stored examples.
7. Save Sketch: This saves the sketch you currently have open.
8. Serial Monitor: When the board is connected, this will display the serial information of your Arduino.
9. Code Area: This area is where you compose the code of the sketch that tells the board what to do.
10. Message Area: This area tells you the status on saving, code compiling, errors and more.
Here's what the code for the LED blink example looks like.
Push the reset button on the board then click the Upload button in the IDE. Wait a few seconds. If successful,
the message "Done uploading." will appear in the status bar.
If the Arduino board doesn't show up in the Tools | Serial Port menu, or you get an error while uploading,
please see the FAQ for troubleshooting suggestions.
A few seconds after the upload finishes, you should see the amber (yellow) LED on the board start to blink.
6.3 OpenCV
OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at
real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage and then
Itseez (which was in turn acquired by Intel). The library is cross-platform and free for use under the
open-source BSD license. OpenCV supports the deep learning frameworks TensorFlow, Torch/PyTorch
and Caffe.
Chapter 7
Results and Discussion
The main objective of this project is to detect unauthorized persons and detain them, either in
war zones or in hostage-rescue situations, and to locate land mines or any metal-based explosives in order
to avoid casualties. Pictures are shown to indicate how the system is operated.
Fig 7.1: Complete MAARS Project with Raspberry and Arduino combined
Fig 7.2: Image shows process of adding facial data of person to facial dataset
Fig 7.3: Image shows the person seen by the camera is authorized and displays the name of the person and
prevents the gun module from firing
Fig 7.4: The image shows that the person viewed by the camera is an unauthorized person and displays
"unknown" over the image of the person.
ADVANTAGES
Consistency of performance.
24/7 continuous working.
Improved quality of product.
It can move from one location to another.
Robotic workers never get tired.
Do not need to be paid.
Can be made to perform even the most dangerous tasks without concern.
Wide acceptance
DISADVANTAGES
Wireless network range is limited.
Power backup has to be provided after a certain amount of time, since the power consumption is high.
Facial recognition processing is slow for real-time operation.
The camera cannot see at night or in dark regions, hence additional modules are required for night
vision.
The cost of the project is high.
CHAPTER 8
CONCLUSION
The proposed system is aimed at the welfare of our infantry and the surveillance of
war-zone areas to minimize casualties to a great extent. It detects metal objects such as land mines using a
metal detector. Our system will also be able to detect smoke and fire and take evasive action. It can measure
infrared (IR) light radiating from objects in its field of view using the PIR sensor and thus detect any heat
radiations emitting from humans or animals alike. The robot can be manually controlled but it will be able to
take precautionary measures to protect itself and remain undetected. Hence, our system is sure to create a
revolution in its own field and ensure complete support from people of different societies.
8.1 APPLICATIONS
It can be used to monitor any suspicious object where the presence of a human may be dangerous.
It can be used in mining due to presence of gas detector and fire detector.
It is used in gas industries to detect leaks which can be hazardous.
It can be used in military; dangerous tasks can be carried out by the robot without worrying about loss
of human life.
Military and aerospace embedded software applications
Communication applications
Industrial automation and process control software
Mastering the complexity of applications.
Reduction of product design time.
Real time processing of ever increasing amounts of data.
Intelligent, autonomous sensors.
CHAPTER 9
BIBLIOGRAPHY
[1] S. Y. Harmon and D. W. Gage, "Current Technical Research Issues of Autonomous Robots Employed in
Combat," 17th Annual Electronics and Aerospace Conference, Washington DC, 11-13 September 1984.
[2] Surya Singh and Scott Thayer, "ARMS (Autonomous Robots for Military Systems): A Survey of
Collaborative Robotics Core Technologies and Their Military Applications," tech. report CMU-RI-TR-01-16,
Robotics Institute, Carnegie Mellon University, July, 2001
[3] E. Callaway, P. Gorday, L. Hester, J. A. Gutierrez, M. Naeve, B. Hile, et al., "Home Networking with
IEEE 802.15.4: A Developing Standard for Low-Rate Wireless Personal Area Networks," IEEE
Communications Magazine, August 2002, pp. 70-77.
[4] R. C. Luo, K. L. Su, S. H. Shen, K. H. Tsai, "Networked Intelligent Robots through the Internet: Issues
and Opportunities," Proc. IEEE, vol.91, March, 2003, pp.371-382.
[5] J. Khurshid (School of Computer Sci. & Technol., Harbin Inst. of Technol., China), "Military Robots: A
Glimpse from Today and Tomorrow," Control, Automation, Robotics and Vision Conference (ICARCV
2004), 8th, Volume 1, 2004.
[6] Raj Reddy, Robotics and Intelligent Systems in Support of Society, IEEE Intelligent Systems, Vol.21,
No.3, May/June 2006.
APPENDIX A
Raspberry pi Board schematics:
APPENDIX B
Arduino Uno schematics:
To program the bootloader and give the microcontroller compatibility with the Arduino Software
(IDE), you need to use an In-Circuit Serial Programmer (ISP): a device that connects to a specific set of pins
of the microcontroller to program its whole flash memory, bootloader included. The ISP programming
procedure also includes the writing of fuses: a special set of bits that define how the microcontroller works
under specific circumstances.
APPENDIX C
L293D schematics:
APPENDIX D
Raspberry pi boot sequence flow chart: