

Advanced Robotics
A report submitted for the partial fulfilment of the subject ‘Viva Voce on Vocational Training’ for the course of
Bachelor of Technology in Mechanical Engineering

25th September 2016

Submitted to:

Department of Mechanical Engineering


Heritage Institute of Technology

Prepared by:

Sudipto Shekhor Mondol


Roll Number: 1357024
Department of Mechanical Engineering
Heritage Institute of Technology

Under the Guidance of

Mr. Tunir Saha


Founder, Skubotics
Declaration of Originality and
Compliance of Academic Ethics
I hereby declare that this report contains original research work by the
undersigned student.
All the information in this document has been obtained and presented in
accordance with academic rules and ethical conduct.
I also declare that, as required by these rules and conduct, I have fully cited and
referred to all material and results that are not original to this work.

Sudipto Shekhor Mondol


Department of Mechanical Engineering
Heritage Institute of Technology
Acknowledgment
I am indebted to a number of people who helped and motivated me to bring out
this project. I would like to thank Mr. Tunir Saha for constantly guiding me in my
endeavours.

Sudipto Shekhor Mondol


Department of Mechanical Engineering
Heritage Institute of Technology
Nomenclature
PC Personal Computer
DTMF Dual Tone Multi Frequency
v Velocity
ω Angular Velocity (rad/s)
N Angular Velocity (rpm)
ASIMO Advanced Step in Innovative Mobility
GPS Global Positioning System
NASA National Aeronautics and Space Administration
BO Battery Operated
INS Inertial Navigation System
LED Light-Emitting Diode
LDR Light Dependent Resistor
IR Infrared
D Distance
c Speed of Light
t Time
IC Integrated Circuit
CdS Cadmium Sulphide
TFT Thin Film Transistor
PWM Pulse Width Modulation
Hz Hertz
GND Ground
List of Figures
Figure 1: Standard Wheel
Figure 2: Ball Caster Wheel
Figure 3: Support Polygon for the Robots in the Current Project
Figure 4: Mars Rover
Figure 5: BO2 Type 60mA Motor
Figure 6: Light Dependent Resistor
Figure 7: IR Pair
Figure 8: Concept of Ultrasonic Sensing
Figure 9: Schematic of Inaccuracy of Ultrasonic Sensors
Figure 10: Schematic of Inaccuracies arising from using Two Ultrasound Sensors
Figure 11: "Arduino Uno" SMD Revision 3
Figure 12: RoboBoard 1
Figure 13: RoboBoard 1 Uploader Connections
Figure 14: Motor Driver SN7544 | L293D
Figure 15: Line Follower Robot
Figure 16: Arena for the Line Follower Robot
Figure 17: Internal structure of photoresistor
Figure 18: Block diagram of Photovore Robot
Figure 19: Photovore Robot
Figure 20: Schematic Diagram of Ultrasonic Sensor Working
Figure 21: HC-SR04 Sensor Diagram
Figure 22: Ultrasonic Sensor
Figure 23: Obstacle Detecting and Avoiding Flowchart
Figure 24: Obstacle Detecting and Avoiding Robot
Figure 25: DTMF Decoder
Figure 26: Block Diagram of Cell Phone Operated Robot
Figure 27: Flow Chart for Mobile Controlled Robot
Figure 28: DTMF based Mobile Controlled Robot

List of Tables
Table 1: Sensor Classification
Contents

Abstract
1. Introduction
2. Robot Components
3. Robot Locomotion
4. Sensor Technology
5. Embedded Systems
6. Line Follower Robot
7. Photovore Robot
8. PC Controlled Robot
9. Obstacle Detecting and Avoiding Robot
10. Sumobots
11. DTMF based Mobile Controlled Robot
12. Conclusion
References
Abstract
In this project, six different robots were assembled, programmed and tested. They are: PC
Controlled Robot, Photovore Robot, Obstacle Detecting and Avoiding Robot, Basic Line
Follower Robot, DTMF based Mobile Controlled Robot and Sumo Robot. This report contains
the basics of automation in manufacturing processes, the robot assembly procedures and the
programs that were uploaded to them.

1. Introduction
Until the early 1950s, most operations in a typical manufacturing plant were carried out on
traditional machinery, such as lathes, milling machines, drill presses, and various equipment
for forming, shaping, and joining materials. Such equipment generally lacked flexibility, and
it required considerable skilled labour to produce parts with acceptable dimensions and
characteristics. Moreover, each time a different product had to be manufactured, the machinery
had to be retooled, fixtures had to be prepared or modified, and the movement of materials
among various machines had to be rearranged. The development of new products and of parts
with complex shapes required numerous trial-and-error attempts by the operator to set the
proper processing parameters on the machines. Furthermore, because of human involvement,
making parts that were exactly alike was often difficult, time consuming, and costly.
During a typical manufacturing operation, raw materials and parts in progress are moved from
storage to machines, from machine to machine, from assembly to inventory, and, finally, to
shipment. For example, (a) workpieces are loaded on machines, as when a forging is mounted
on a milling-machine bed for machining, (b) sheet metal is fed into a press for stamping, (c)
parts are removed from one machine and loaded onto another, as when a machined forging is
to be subsequently ground for better surface finish and dimensional accuracy, and (d) finished
parts are assembled into a final product. Similarly, tools, moulds, dies, and various other
equipment and fixtures also are moved around in manufacturing plants. Cutting tools are
mounted on lathes, dies are placed in presses or hammers, grinding Wheels are mounted on
spindles, and parts are mounted on special fixtures for dimensional measurement and
inspection. These materials must be moved either manually or by some mechanical means, and
it takes time to transport them from one location to another. Material handling is defined as the
functions and systems associated with the transportation, storage, and control of materials and
parts in the total manufacturing cycle of a product. The total time required for actual
manufacturing operations depends on the part size and shape and on the set of operations to be
performed. Note that idle time and the time required for transporting materials can constitute
the majority of the time consumed, thus reducing productivity. Material handling must be an
integral part of the planning, implementing, and control of manufacturing operations;
furthermore, material handling should be repeatable and predictable.
The word robot was coined in 1920 by the Czech author K. Capek in his play R.U.R.
(Rossum’s Universal Robots). It is derived from the word robota, meaning “worker.”
An industrial robot has been described by the International Organization for Standardization
(ISO) as “a machine formed by a mechanism including several degrees of freedom, often
having the appearance of one or several arms ending in a wrist capable of holding a tool, a
workpiece, or an inspection device.” In particular, an industrial robot’s control unit must use a
memorizing method and may use sensing or adaptation features to take the environment and
special circumstances into account.
The first industrial robots were introduced in the early 1960s. Computer controlled robots were
commercialized in the early 1970s, and the first robot controlled by a microcomputer appeared
in 1974. Industrial robots were first used in hazardous operations, such as the handling of toxic
and radioactive materials, and the loading and unloading of hot workpieces from furnaces and
in foundries. Simple rule-of-thumb applications for robots are described as the three D’s (dull,
dirty, and dangerous, with a fourth D, demeaning, sometimes added) and the three H’s (hot,
heavy, and hazardous). As described further in this section, industrial robots are now essential
components in all manufacturing operations and have greatly improved productivity at reduced
labour costs.
A mobile robot is an autonomous system that exists in the physical world, is not fixed to one
location, can sense its environment, and act on it to achieve some goals. Mobile robots can be
classified in two ways. The first way of classification is based on the kind of environment that
they work in. Unmanned ground vehicles work on the land, unmanned aerial vehicles work in
the sky, or in the air, and autonomous underwater vehicles operate underwater. The second,
and probably more common, way of classifying robots is based upon how they move. On land
there are wheel-based robots, tracked robots, and legged robots (biped, quadruped, and hexapod
robots). In the air there are quadcopters, drones, and other unmanned aircraft, and underwater
there are propeller-driven robots and swimming robots.

2. Robot Components
Four common components of a robot are:
2.1 The manipulator: The manipulator is the body of a robot, made of a collection of
mechanical linkages connected at joints to form an open-loop kinematic chain. The manipulator
is capable of movement in various directions and does the work of the robot. It can conveniently
be compared with the arm of a human. At the joint, the individual link can either rotate (revolute
joint) or make translatory motion (prismatic joint) by means of electric motors (servo or
stepper) and hydraulic or pneumatic cylinders. Through a combination of motions of the joint,
the manipulator can achieve different desired positions and orientations. A manipulator can
have as many as eight joints, and a robot manipulator with six joints (six degrees of freedom) is
considered quite versatile for most robot tasks. A manipulator generally has three structural
elements: the arm, the wrist and the hand (end effector). The end effector is individually
designed to grip individual tools or jobs, and simulates the palm of a human hand.
2.2 Sensory devices: These elements inform the robot controller about the status of the
manipulator. These sensors may be (i) non-visual or (ii) visual.
Non-visual sensors provide information about position, velocity, force, etc. connected with the
manipulator motion, while visual sensors are used for tracking an object, recognizing it and
grasping it.
These are comparable to human senses such as kinaesthesia, touch and vision.
2.3 The controller: Robot controllers generally perform three functions, which are:
(i) Initiation and termination of motion of the different joints in a desired sequence and at
specified points.
(ii) Storage of positional and sequence data in memory.
(iii) Interfacing the robot with the outside world through the sensors.
Generally a microcomputer or minicomputer serves as the robot controller and acts as the brain
of the robot.
2.4 The power conversion unit: This component provides the necessary power for movement of
the manipulator to do work. It can be an electrical power source with amplifiers to feed servo
motors, a compressor, or a hydraulic power pack. With proper programming of the robot
controller, the manipulator can be made to undergo the desired sequence of linkage motions
repeatedly and accurately, and thus the robot can perform its desired task.
Another advantage of a robot is that by changing the programme, the manipulator can instantly
change from one set of tasks to another, making it flexible and versatile equipment.

3. Robot Locomotion
3.1 Wheeled Robots: Using wheels on a robot is the most common way of propelling a robot
around, and it also only requires two motors, so it's actually a really efficient way of moving.
But there are also limitations of using a wheeled robot, and these include the kind of terrain it
can handle. For instance, a typical wheeled robot can't really handle rough terrain. It often gets
stuck on the carpet when you drive it around at home, so you can't really run it outside, either.
In fact, they're quite limited to only flat surfaces, and they tend to only work inside. Another
problem is the type of obstacles that can be handled. The type of obstacles that a robot can get
over is typically limited to the size of the wheel. If the object is higher than the radius of the
wheel, then the robot just won't be able to get that wheel over it, and so it will get stuck. Another
problem is with holes in the ground. If a robot is travelling along and its driving wheels get in
such a situation that they're no longer touching the ground, there's nothing, actually, to propel
the robot to go forward, and so it also gets stuck. One solution is to use bigger wheels, but this
soon becomes impractical.
There are many different ways of laying your wheels out on your robot. The first requirement
when you lay out your wheels is that once the wheels are on, no part of the body
remains touching the ground. Robots typically have two, three, or four wheels. They don't tend
to have many more than that, and even having a two wheeled robot is pretty unusual. The
reason that it's unusual to have a two wheeled robot is that if we had two wheels, we'd have to
line them up like a bicycle, and we would end up with a very unstable robot. If we do use two
wheels, we tend to locate them on the same axis, so the wheels will share a common axis
between them. This means it can drive forwards and backwards quite easily. But, much like
the bicycle style, it's also wobbly. So what we often do is we use another wheel at the back,
and that just allows the robot that extra degree of stability that it needs.
There are also lots of different kinds of wheels that can be used. They are: standard wheel,
caster wheel, omnidirectional wheel or spherical wheel and mecanum wheels. In our present
project all the robots have two standard wheels and a spherical wheel (or a ball caster). Standard
wheels are just a tyre on a wheel, with a mounting hub for the shaft to go through. The
standard wheel has two degrees of freedom: rotation around the wheel axle and free rotation
around the contact point.

Figure 1: Standard Wheel


A caster wheel is similar to a standard wheel, but the wheel centre is offset from the steering
axle. Caster wheels have three degrees of freedom. Ball caster wheels have a little ball bearing
inside a plastic or a metallic cover, which means that the robot can travel in any direction it
likes very easily; they are great third contact points for small differential-drive robots.
The spherical wheels and mecanum wheels also have three degrees of freedom.

Figure 2: Ball Caster Wheel


3.2 Gears: Typically, the wheels are run by a motor. The torque and speed requirements of the
robots are usually different from those of a motor. Robots generally run at low speeds and require
high torque to overcome obstacles. Gears are defined as toothed wheels or multilobed cams,
which transmit power and motion from one shaft to another by means of successive
engagement of teeth.
Pitch line velocity, v = ω1r1 = ω2r2 = (2πN1/60)r1 = (2πN2/60)r2
or, ω1/ω2 = N1/N2 = r2/r1
where
N = angular velocity (rpm)
ω = angular velocity (rad/s)
r = pitch radius of gear
A gear train is a mechanical system formed by mounting gears on a frame so that the teeth of
the gears engage. The Gear Ratio is defined as the input speed relative to the output speed. It
is typically written as:
Gear Ratio = ωin : ωout
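As a quick worked example with hypothetical numbers: a motor turning at Nin = 6000 rpm driving through a 50:1 reduction gear train turns the output shaft at Nout = 6000/50 = 120 rpm, while the available torque rises by roughly the same factor, neglecting friction losses.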
3.3 Robot Steering: There are two ways of steering a robot- differential steering and
synchronous drive steering.
Differential drive steering means that there are only two driving wheels on the robot. The wheels
are connected to one motor each, so two motors are needed for differential steering. The way
that we steer is to drive each wheel independently. To go in a straight line, the two wheels, or
the two motors, rotate at the same rate, and the robot will travel in a straight line. But if we drive
one of the wheels slightly slower than the other wheel, then the robot will veer slightly towards
one direction. If we want the robot to turn a tight corner, one of the wheels is driven very slowly,
and the other one goes very fast. If one wheel is stopped altogether and the other turns very fast,
the robot will actually pivot on the stopped wheel. One of the really great things about differential drive
steering is that we can actually turn the robot on the spot. If we rotate one wheel forward and
one wheel in the opposite direction backwards, the robot will actually rotate on the spot.
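To make the idea concrete, a small sketch of a differential-drive helper is given below. It is only an illustration: the L293D enable and direction pins (3, 11 and 5, 6, 9, 10) and the speed values are assumptions, not the wiring used in this project.

// Illustrative differential-drive helper (assumed pins, not this project's wiring).
const int ENA = 3,  IN1 = 5, IN2 = 6;    // left wheel: PWM enable + direction pins (assumed)
const int ENB = 11, IN3 = 9, IN4 = 10;   // right wheel (assumed)

// speed: -255 (full reverse) to 255 (full forward)
void driveWheel(int en, int inA, int inB, int speed) {
  digitalWrite(inA, speed >= 0 ? HIGH : LOW);
  digitalWrite(inB, speed >= 0 ? LOW : HIGH);
  analogWrite(en, abs(speed));
}

void setup() {
  int pins[] = {ENA, IN1, IN2, ENB, IN3, IN4};
  for (int i = 0; i < 6; i++) pinMode(pins[i], OUTPUT);
}

void loop() {
  driveWheel(ENA, IN1, IN2, 200); driveWheel(ENB, IN3, IN4, 200);  delay(2000); // same rate: straight line
  driveWheel(ENA, IN1, IN2, 200); driveWheel(ENB, IN3, IN4, 120);  delay(1000); // right wheel slower: veer right
  driveWheel(ENA, IN1, IN2, 200); driveWheel(ENB, IN3, IN4, -200); delay(1000); // opposite directions: spin on the spot
}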
Synchronous drive steering means that all the wheels rotate in the same direction at the same
speed all the time. When the wheels point straight ahead, the robot drives in a straight line, and
a mechanical mechanism, a lot of gears, rotates all the wheels by the same amount so that the
robot can drive off in a different direction.
3.4 Robot Stability: Robots need to be stable. They're really not much use if they're just
wobbling around all over the place. So we have to think about stability when we build a robot.
There are actually two types of stability when it comes to robotics: Static Stability and Dynamic
Stability. Static stability is the ability of a robot to remain upright when at rest, or under
acceleration and deceleration. Dynamically stable means that we have to move the robot to
balance. Stability all comes down to something called the centre of gravity. The centre of
gravity is a point from which the weight of a body or system may be considered to act. In
uniform gravity it is the same as the centre of mass. When a robot or a human is standing, the
centre of gravity must be above what's called the support polygon. Basically, the support
polygon is the area enclosed by the legs or the wheels of a robot.
Figure 3: Support Polygon for the Robots in the Current Project
An alternative to static stability is dynamic stability. A dynamically stable robot means that it
has to be moving in order to balance. An example is a one-legged robot. They look a lot like a
pogo stick, and it has to hop around in order to move. It can't stop, because then it won't be
balancing. Two-legged robots like ASIMO are also dynamically stable robots. They have to
balance on one leg as they swing the other leg around. And they're constantly adjusting in order
to maintain that balance. Basically, robots can be statically stable as long as they have enough
legs. But statically stable walking means that you're only picking up one or two legs at a time,
so it's not very efficient. So there is a trade-off between what's better: dynamically stable but
a little bit unbalanced, or statically stable but not very fast.
3.5 Tracked Robots: Wheeled robots can't move over rough terrain. They just get stuck. And
if there's a large hole in the ground, and the driving wheel goes over that hole, then it can't go
anywhere. So legged robots can get around these sorts of things. But the number of joints that
you have to control means that it's really computationally expensive. Tracks actually have
wheels inside them that are driven much like a differential drive: each track is controlled
independently. Thus, tracked robots are actually much better at handling rough terrain than
normal wheeled robots. The tracks actually mean that so much more of the robot is in contact
with the ground when it moves around. This means that the weight is more distributed over the
area, and the robot won't sink into soft ground. This larger contact area also means that it's
much less likely that the robot will get stuck over those big holes in the ground. The wheels
won't be left hanging. Instead, the track will cover the whole lot, and the movement will be
able to be distributed, and the robot will be able to continue without getting stuck. One of the
problems with tracked robots is that they're actually much less efficient than normal wheeled
robots. When the robot starts to turn, instead of turning on a single point like a wheeled robot
would, the whole track has to move. What that means is that when it turns, it actually drags
the robot around, and this friction hampers the robot's ability to move, making it much less
efficient. It also means that the tracks are sometimes turning when there's very little movement
happening, so we can't use things like shaft encoders to keep track of how far the robot has travelled.
Instead, we have to rely on exteroceptive sensors, like GPS, to know where the robot is located.
Some of the most famous robots are the Mars rovers, and they navigate the surface of Mars,
which is pretty rough terrain.

Figure 4: Mars Rover (Courtesy: NASA)


These robots have wheels on them. The wheels on the Mars rovers are actually connected to
the legs. These legs act as a form of suspension, which means that each wheel is actually
independent of the others. So that means it can move over a large bump in the ground without
having to disrupt the position of the other wheels. It's a great way of moving around.
3.6 Robot Design: The things that are important are the weight (the robot definitely needs to be
lightweight) and the size of the robot (it definitely needs to be quite small). As far as the base
goes, additional requirements that we need are things like making sure that we've got space to
put all our sensors, as well as to put all our motors, the batteries and the RoboBoard 1.

Figure 5: BO2 Type 60mA Motor


This motor is actually really beneficial, because it actually has the gearing inside, so we don't
need to worry about using gear boxes in order to change the speed of the motors to make it
slow enough for us to work with. A differential drive system is probably the best for this situation.
It will give us the fine motor control we need. We can turn on the spot so we can follow that
line. If we were to use a synchronous drive system, we would end up with our sensor only
pointing in one direction, which means we might not be able to follow the line if it turned a
corner. Also, if we had a steering wheel, we might end up steering it all over the place, so
a differential drive system is probably the best here. So the first thing we're going to do is build
our differential drive system.
The two motors need to be on the same axis. The motors are mounted on the plastic chassis
with the help of double sided tape. Screwing them to the chassis was another option, but the
use of double sided tape makes the assembly more flexible; i.e., the configuration of the motors,
sensors and embedded circuit can be changed more readily. The next thing we have to think about
is how to balance the robot. Those two motors are not going to balance very easily, so we really
should have a third wheel (a ball caster wheel) in order to provide a bigger support polygon
for our centre of gravity. The RoboBoard 1 is mounted on another rectangular plastic frame,
which is in turn mounted above the motors. It's really good engineering practice to keep
testing your robots as you go along. This way you can make sure they're working before you
move to the next step, and you don't have to do a lot of debugging at the end to get it to move.
So, a simple program was uploaded on to the RoboBoard 1 to test the motors. The program is
as follows:
All the commands were executed and the motors worked fine. So, our test was successful.
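The original listing is not reproduced here. Purely as an illustration, a minimal motor test of this kind, assuming the L293D direction inputs are wired to digital pins 5, 6, 9 and 10 (an assumption, not the actual wiring), could look like the following:

// Illustrative motor test only (assumed pins); drives forward, stops, then spins.
const int L1 = 5, L2 = 6;    // left motor direction inputs (assumed)
const int R1 = 9, R2 = 10;   // right motor direction inputs (assumed)

void setup() {
  pinMode(L1, OUTPUT); pinMode(L2, OUTPUT);
  pinMode(R1, OUTPUT); pinMode(R2, OUTPUT);
}

void loop() {
  digitalWrite(L1, HIGH); digitalWrite(L2, LOW);   // both wheels forward for 2 s
  digitalWrite(R1, HIGH); digitalWrite(R2, LOW);
  delay(2000);
  digitalWrite(L1, LOW); digitalWrite(R1, LOW);    // stop for 1 s
  delay(1000);
  digitalWrite(L1, HIGH); digitalWrite(L2, LOW);   // wheels in opposite directions: spin for 1 s
  digitalWrite(R1, LOW);  digitalWrite(R2, HIGH);
  delay(1000);
}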

4. Sensor Technology
In the broadest definition, a sensor is a device that produces a signal in response to its detecting
or measuring a specific property, such as position, force, torque, pressure, temperature,
humidity, speed, acceleration, or vibration. There are two types of sensors used in robots:
Proprioceptive and Exteroceptive Sensors.
The robot measures a signal originating from within using proprioceptive sensors. These
sensors are responsible for monitoring self-maintenance and controlling internal status.
Common uses of proprioceptive measurements are for battery monitoring, current sensing, and
heat monitoring.
Examples of Proprioceptive Sensors:
Global Positioning System (GPS) - The drawbacks of GPS sensors include a slow refresh rate
and sensitivity to blackouts.
Inertial Navigation System (INS) - The drawbacks of INS include the tendency to drift over
time and with temperature.
Shaft Encoders - A shaft encoder, also known as a rotary encoder, is an electro-mechanical device
that works as a transducer to convert the angular position of a shaft or axle to an analog or
digital code.
Compass - A compass sensor is used to detect direction and accurately correct motion.
Inclinometer - An inclinometer sensor measures the tilt or angle of an axis.
Exteroceptive sensors determine the measurements of objects relative to a robot's frame of
reference. These sensors are categorized as proximity sensors. Proximity sensors enable a
robot to tell when it is near an object. These sensors keep the robot from colliding with other
objects. They can also be used to measure distance from the robot to another object.
There are three main types of exteroceptive sensors.
Contact Sensors: Contact sensors are typically simple mechanical switches that send a signal
when physical contact is made. Contact sensors are used to detect the positive contact between
two mating parts and/or to measure the interaction forces and torques which appear while the
robot manipulator conducts part mating operations. Another type of contact sensors are tactile
sensors. These measure a multitude of parameters of the touched object surface.
Range Sensors: Range sensors measure the distance to objects in their operation area. A range
sensor can also be a distance detection device that provides a simple binary signal when a
particular threshold is detected. Range sensors are used for robot navigation, obstacle
avoidance, or to recover the third dimension for monocular vision. Range sensors are based on
one of two principles: time of flight and triangulation.
Vision Sensors: Robot vision is a complex sensing process. It involves extracting,
characterizing and interpreting information from images in order to identify or describe objects
in the environment.
Remote sensing systems which measure energy that is naturally available are called passive
sensors. Passive sensors can only be used to detect energy when the naturally occurring energy
is available. Active sensors, on the other hand, provide their own energy source for detection.
The sensor emits radiation which is directed toward the target to be investigated. The radiation
reflected from that target is detected and measured by the sensor.
Sensor: Proprioceptive or Exteroceptive; Passive or Active
Contact Switch, Bumper: Exteroceptive; Passive
Shaft Encoder: Proprioceptive; Active
Electronic Compass: Proprioceptive; Passive
Inclinometer: Proprioceptive; Passive
Accelerometer: Proprioceptive; Passive
GPS: Proprioceptive; Active
Active Optical or RF Beacons: Exteroceptive; Active
Ultrasonic Sensor: Exteroceptive; Active
Laser Range Finder: Exteroceptive; Active
Photoresistor: Exteroceptive; Passive
Camera: Exteroceptive; Passive
Table 1: Sensor Classification
4.1 Light Sensors: The light sensor measures the amount of light in the environment. One of
the simplest types of light sensing is called a photoresistor. A photoresistor will have a different
kind of resistance depending upon how much light is hitting it. Resistance is a really important
property in electronics, and it's a measure of how difficult it is for the current to pass through
a component. The more light that hits the photoresistor, the lower the resistance becomes and
vice versa. Photoresistors are a really good way of measuring light, and they can actually
measure a lot of light that we can't see. For instance, we can't see ultraviolet or infrared light,
but photoresistors can. Reflective optosensors use reflected light to operate.

Figure 6: Light Dependent Resistor


Unlike photoresistors, which are passive sensors and just gather light from their surroundings, a reflective
optosensor sends light out and then receives the light coming back to detect whether there's an
object in the way or not. So a reflective optosensor has an emitter and a detector. The emitter
is usually an LED, or something like an LED. An LED sends out a specific wavelength of light.
The detector is a phototransistor which measures a specific wavelength of light and turns it into
a voltage or a current. A reflective optosensor can be arranged in two ways. One of the ways
is to arrange them side by side so the detector is next to the emitter. When an obstacle comes
in the way, or an object, the light is reflected from the object and goes back to the detector and
it detects that there's an object in the way. The other way to arrange them is called a break beam
sensor. They are made to sit facing one another. And if an obstacle comes in the way, it no
longer detects the light coming through and so it knows that there's an obstacle there. The
amount of light that's reflected off a surface depends on a number of things. It depends on the
colour of the surface. For instance, a black surface reflects a lot less light than a white surface.
The roughness of the surface also makes a difference. So a really shiny surface reflects a lot of
light, whereas a matt or rough surface doesn't reflect much light at all. The colour of
the light's very important as well. Red light reflecting off a blue background reflects very little.
Whereas if you reflect it off a red background, it reflects a lot. And also, the distance of the
light from the object itself also has an impact on how much light is actually reflected back. One
of the biggest difficulties with detecting light is something called ambient light. Ambient light
is the amount of light that already exists in an environment or in a room. Now, if we have a
reflective optosensor that's measuring the amount of light that's reflected back, we only want
to measure the amount of light that's come from the original LED that's reflected back. We
don't want to account for all the ambient light as well. One way around this is to use calibration.
We calibrate the sensor so that it doesn't count all the ambient light. One way to do this is to
take measurements of the room with the sensor while the emitter is off. That way we've got a number
which represents the ambient light. And we can remove that from our calculations so we're just
left with the amount that's coming from the LED. Now, our ambient light levels are constantly
changing. To accommodate this, we have to recalibrate often. The best time to recalibrate is
just before you want to use the robot. That way, you've taken into account the ambient light
levels that you've got in your current state.
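A minimal sketch of this calibration idea is shown below. The analog pin, the emitter pin, the sample count and the use of the serial monitor are all assumptions made for the illustration.

// Illustrative ambient-light calibration (assumed pins and sample count).
const int SENSOR_PIN  = A0;   // assumed analog pin for the detector
const int EMITTER_PIN = 7;    // assumed digital pin driving the emitter LED
int ambient = 0;

void setup() {
  Serial.begin(9600);
  pinMode(EMITTER_PIN, OUTPUT);
  digitalWrite(EMITTER_PIN, LOW);       // emitter off: measure ambient light only
  delay(100);
  long sum = 0;
  for (int i = 0; i < 20; i++) {        // average a few samples of the ambient level
    sum += analogRead(SENSOR_PIN);
    delay(10);
  }
  ambient = sum / 20;
  digitalWrite(EMITTER_PIN, HIGH);      // emitter on for normal operation
}

void loop() {
  int reading = analogRead(SENSOR_PIN) - ambient;  // reflected component only
  if (reading < 0) reading = 0;
  Serial.println(reading);
  delay(100);
}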
An IR sensor is a device which detects IR radiation falling on it. There are numerous types of
IR sensors that are built and can be built depending on the application. Proximity sensors (Used
in Touch Screen phones and Edge Avoiding Robots), contrast sensors (Used in Line Following
Robots) and obstruction counters/sensors (Used for counting goods and in Burglar Alarms) are
some examples, which use IR sensors. An IR sensor is basically a device which consists of a
pair of an IR LED and a photodiode which are collectively called a photo-coupler or an opto-
coupler. The IR LED emits IR radiation, and the reception and/or intensity of reception of this
radiation by the photodiode dictates the output of the sensor.
IR sensors are the main triggers of the whole line following robot’s action mechanism. IR
sensors are the ones which detect the colour of the surface underneath them and send a signal to
the microcontroller or the main circuit which then takes decisions according to the algorithm
set by the creator of the bot. The sensors used in them are based on reflective/non-reflective
indirect incidence. The IR LED emits IR radiation, which in normal state gets reflected back
to the module from the white surface around the black line, which gets incident on the
photodiode. But, as soon as the IR radiation falls on the black line, the IR radiation gets
absorbed completely by the black colour, and hence there is no reflection of the IR radiation
going back to the sensor module.

Figure 7: IR Pair
4.2 Ultrasonic Sensors: Ultrasonic sensors rely on echolocation to determine the distance
between the sensor and an object. Bats use echolocation to determine where they are in a dark
cave. And dolphins use echolocation to find out where they are underwater. Ultrasound works
by sending out a pulse of noise. That pulse then bounces off an object and is reflected back.
How long the reflection takes to come back tells the robot how far away
the object is. The human range of hearing goes from 20 hertz, which is a very low sound, to 20
kilohertz which is a very high-pitched whistle. Ultrasound occurs above that high-pitched
whistle range. An ultrasonic sensor sends out a ping or a chirp, and that then gets bounced off an
object and returns back to it. The ping or the chirp is actually a sound wave, occurring at a much
higher frequency than we can hear. The sound wave bounces off an object
and then is returned back to the sensor, which detects it. If we start a timer the instant that the
sound wave is sent out and record the amount of time it takes for the sound wave to travel back,
we can actually calculate how far away the object is. To do this, we need to know the speed of
sound. The speed of sound is about 340 metres per second, and we use a simple equation to
relate distance and time:
D = ct
where D is the distance that has been travelled by the sound wave, c is the speed of sound
and t is the time. The speed of sound depends on many things like the humidity in the air, the
temperature, the amount of dust particles that are in the air, etc.
Now we've calculated the distance that has been travelled, but now we need to divide it by 2
because when we send the signal out from our sensor, it goes to the object, it's reflected, and
then it bounces back. That means it's actually travelling that distance twice.
So the equation now becomes: 2D = ct or, D = ct/2.
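On an Arduino-style board such as the RoboBoard 1, this calculation can be sketched as follows. The trigger and echo pins and the sensor type (an HC-SR04, as used later in section 9) are assumptions for the illustration.

// Illustrative distance measurement using D = c*t/2 (assumed pins).
const int TRIG_PIN = 12;   // assumed trigger pin
const int ECHO_PIN = 11;   // assumed echo pin

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  // send a short ultrasonic chirp
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // time until the echo returns, in microseconds (0 if nothing is heard)
  unsigned long t = pulseIn(ECHO_PIN, HIGH, 30000UL);

  // D = c*t/2, with c ~ 340 m/s = 0.034 cm per microsecond
  float distance_cm = (t * 0.034f) / 2.0f;
  Serial.println(distance_cm);
  delay(100);
}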

Figure 8: Concept of Ultrasonic Sensing (Courtesy: Swinburne University of


Technology)
Some of the common errors that we face while using an ultrasonic sensor are:
One of the biggest errors we have has to do with the type of surface that we're looking at. If
we're looking at a really shiny surface like a mirror or piece of glass, the ultrasonic wave
actually bounces in and then bounces straight off again. It doesn't reflect back to our sensor, it
reflects off into the distance. It means that when we use our sensor we actually detect that
there's no object there, when in actual fact there is an object and we're likely to hit it. One way
around this is to use a diffused surface. So a diffused surface is a really rough surface.
Roughness is actually lots of teeny, tiny little surfaces which are bunched together at lots of
different angles. When the ultrasonic wave reflects off a surface like this, chances are it will hit
one of those tiny surfaces that will reflect the sound straight back to the sensor.
Ultrasonic sensors can be really inaccurate. And one of the reasons this occurs is because the
ultrasonic beam doesn't come out in a straight line; it actually comes out in a cone of about 30
degrees.

Figure 9: Schematic of Inaccuracy of Ultrasonic Sensors


(Courtesy: Swinburne University of Technology)
What this means is that if we're at an angle to our object, what we expect to receive from our
ultrasonic sensor is the distance to ‘B’, and that's what we would expect to get. But what's
actually happening is that the sound that goes from the sensor to point ‘C’ returns before the
sound that goes to point ‘B’ returns. So we get this reflection coming back, and it indicates that
the object is much closer to our sensor than it actually is. This occurs because we're at an angle
to the object. If we were at right angles to the object, ‘B’ would be the closest point and we
wouldn't have this problem.
Another problem with ultrasound in robotics occurs when you've got multiple sensors on a
single robot.

Figure 10: Schematic of Inaccuracies arising from using Two Ultrasound Sensors
(Courtesy: Swinburne University of Technology)
The ultrasonic sensor 1 sends out a chirp. It bounces off an object, it bounces off another object,
and then it comes back to a different ultrasound sensor (ultrasound sensor 2). This makes for
some pretty confusing signals for the sensor. This phenomenon is called crosstalk, and it means
that the signal from one sensor is being received by another. This makes for some very strange
results, and it's not a good way of using ultrasound.
Another thing to consider when you're using ultrasound is environmental noise. Environmental
noise is noise that's caused by things like electric lights, arc welding, electric beaters. These
kind of things cause signals to come to the ultrasound that cause it to receive a signal when
there isn't actually anything being sent out.
So overall, we can see that ultrasound is a great method of detecting distance, but it's prone to
error. We use it mostly as an indication of what's going on around the robot, or we use it in
combination with other sensors to get a feel for the space that's around the robot.

5. Embedded Systems
An embedded system is a computer system with a dedicated function within a larger
mechanical or electrical system, often with real-time computing constraints. It is embedded as
part of a complete device often including hardware and mechanical parts. Embedded systems
control many devices in common use today. Ninety-eight percent of all microprocessors are
manufactured as components of embedded systems.
Arduino is an open-source prototyping platform based on easy-to-use hardware and software.
Arduino boards are able to read inputs - light on a sensor, a finger on a button, or a Twitter
message - and turn it into an output - activating a motor, turning on an LED, publishing
something online. You can tell your board what to do by sending a set of instructions to the
microcontroller on the board. To do so you use the Arduino programming language (based on
Wiring), and the Arduino Software (IDE), based on Processing.
Over the years Arduino has been the brain of thousands of projects, from everyday objects to
complex scientific instruments. A worldwide community of makers - students, hobbyists,
artists, programmers, and professionals - has gathered around this open-source platform, their
contributions have added up to an incredible amount of accessible knowledge that can be of
great help to novices and experts alike.

Figure 11: "Arduino Uno" SMD Revision 3


(Courtesy: SparkFun Electronics from Boulder, USA)
Arduino was born at the Ivrea Interaction Design Institute as an easy tool for fast prototyping,
aimed at students without a background in electronics and programming. As soon as it reached
a wider community, the Arduino board started changing to adapt to new needs and challenges,
differentiating its offer from simple 8-bit boards to products for IoT applications, wearables, 3D
printing, and embedded environments. All Arduino boards are completely open-source,
empowering users to build them independently and eventually adapt them to their particular
needs. The software, too, is open-source, and it is growing through the contributions of users
worldwide.
Thanks to its simple and accessible user experience, Arduino has been used in thousands of
different projects and applications. The Arduino software is easy-to-use for beginners, yet
flexible enough for advanced users. It runs on Mac, Windows, and Linux. Teachers and
students use it to build low cost scientific instruments, to prove chemistry and physics
principles, or to get started with programming and robotics. Designers and architects build
interactive prototypes, musicians and artists use it for installations and to experiment with new
musical instruments. Makers, of course, use it to build many of the projects exhibited at the
Maker Faire, for example. Arduino is a key tool to learn new things. Anyone - children,
hobbyists, artists, programmers - can start tinkering just following the step by step instructions
of a kit, or sharing ideas online with other members of the Arduino community.
There are many other microcontrollers and microcontroller platforms available for physical
computing. Parallax Basic Stamp, Netmedia's BX-24, Phidgets, MIT's Handyboard, and many
others offer similar functionality. All of these tools take the messy details of microcontroller
programming and wrap it up in an easy-to-use package. Arduino also simplifies the process of
working with microcontrollers, but it offers some advantage for teachers, students, and
interested amateurs over other systems:
Inexpensive - Arduino boards are relatively inexpensive compared to other microcontroller
platforms. The least expensive version of the Arduino module can be assembled by hand, and
even the pre-assembled Arduino modules cost less than $50.
Cross-platform - The Arduino Software (IDE) runs on Windows, Macintosh OSX, and Linux
operating systems. Most microcontroller systems are limited to Windows.
Simple, clear programming environment - The Arduino Software (IDE) is easy-to-use for
beginners, yet flexible enough for advanced users to take advantage of as well. For teachers,
it's conveniently based on the Processing programming environment, so students learning to
program in that environment will be familiar with how the Arduino IDE works.
Open source and extensible software - The Arduino software is published as open source tools,
available for extension by experienced programmers. The language can be expanded through
C++ libraries, and people wanting to understand the technical details can make the leap from
Arduino to the AVR C programming language on which it's based. Similarly, you can add
AVR-C code directly into your Arduino programs if you want to.
Open source and extensible hardware - The plans of the Arduino boards are published under a
Creative Commons license, so experienced circuit designers can make their own version of the
module, extending it and improving it. Even relatively inexperienced users can build the
breadboard version of the module in order to understand how it works and save money.
In the current project, RoboBoard 1 has been used. RoboBoard 1 is a product of Skubotics. It
is similar to the Arduino Uno, except that the uploader and the motor driver integrated circuit
are not embedded in the main circuit board. The uploader is placed off board to make the board
more robust. This also enables the parts to be replaced with ease if damaged, since the uploader
IC is sensitive to high currents circulating on the board and can get damaged by them.

Figure 12: RoboBoard 1 (Courtesy: Skubotics)

Figure 13: RoboBoard 1 Uploader Connections (Courtesy: Skubotics)


Figure 14: Motor Driver SN7544 | L293D (Courtesy: Skubotics)

6. Line Follower Robot


A line follower robot is a robotic car that can follow a path. The path can be visible, like a black
line on a white surface (or vice versa). It is an integrated design drawing on the knowledge of
Mechanical, Electrical and Computer engineering.
6.1 Application: Line followers can be used to deliver mail within an office building and
deliver medications in a hospital. The technology has been suggested for running buses and
other mass transit systems, and may end up as part of autonomous cars navigating the freeway.
The line follower can be used in guidance systems for industrial robots moving on the shop floor.
An example might be a warehouse where the robots follow 'tracks' to and from the shelves
they stock and retrieve from. Electronically guided tractors, which follow a painted white or
black line, are used for point-to-point pickup and delivery of trailers.
6.2 Items Required: Two IR Pairs, Two BO2 Type 60mA Motors, Ball Caster, Plastic Chassis,
Two Batteries, Two Standard Wheels, Double Sided Tape and RoboBoard 1.
6.3 Working Principle: These robots usually use an array of IR (Infrared) sensors in order to
calculate the reflectance of the surface beneath them. The basic criterion is that the black
line has a lower reflectance value (black absorbs light) than the lighter surface around it.
This low reflectance value is the parameter the robot uses to detect the position of the line;
the surrounding surface gives the higher reflectance value.
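To illustrate this decision logic only (this is not the program that was uploaded), a sketch assuming the left and right IR receivers on analog pins A0 and A1, the motor driver inputs on pins 5, 6, 9 and 10, and an arbitrary threshold might look like:

// Illustrative two-sensor line-follower logic (all pins and the threshold are assumed).
const int LEFT_IR = A0, RIGHT_IR = A1;
const int L1 = 5, L2 = 6, R1 = 9, R2 = 10;
const int THRESHOLD = 500;   // assumed: readings above this mean "over the black line"

void setWheels(int l1, int l2, int r1, int r2) {
  digitalWrite(L1, l1); digitalWrite(L2, l2);
  digitalWrite(R1, r1); digitalWrite(R2, r2);
}

void setup() {
  pinMode(L1, OUTPUT); pinMode(L2, OUTPUT);
  pinMode(R1, OUTPUT); pinMode(R2, OUTPUT);
}

void loop() {
  bool leftOnLine  = analogRead(LEFT_IR)  > THRESHOLD;   // low reflectance under the left sensor
  bool rightOnLine = analogRead(RIGHT_IR) > THRESHOLD;   // low reflectance under the right sensor

  if (leftOnLine && !rightOnLine)      setWheels(LOW,  LOW, HIGH, LOW);  // line drifting left: turn left
  else if (!leftOnLine && rightOnLine) setWheels(HIGH, LOW, LOW,  LOW);  // line drifting right: turn right
  else                                 setWheels(HIGH, LOW, HIGH, LOW);  // line between the sensors: go straight
}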
6.4 Assembly: The main robot body is assembled as described in section 3.6. The IR Pairs are
mounted at the front of the chassis. The receiver and the emitter are bent to detect the line. To
ensure that the IR from the emitter does not enter the receiver directly without hitting the line,
the receiver is bent slightly more than the emitter (or the converse can be done).
Figure 15: Line Follower Robot
6.5 Source Code: The following program was uploaded into the RoboBoard 1:
6.6 Testing: The robot was programmed to follow the first black line on a white background.

Figure 16: Arena for the Line Follower Robot


The robot successfully followed the black line. To view it, follow the link:
https://youtu.be/gLdfTAbM1no
7. Photovore Robot
This project is a robot which chases light. This type of robot is best suited for military
applications. Light can be chased using a light sensor. For a photovore to work, the robot needs
at least two light-detecting sensors, typically photoresistors or IR emitter/detector pairs, out in
front and spaced apart from each other: one on the left side of the robot, the other on the right
side.
The RoboBoard 1 is used for implementing the robot. The analog pins of the RoboBoard 1 read
the analog values from both sensors. The program then does a comparison: the sensor that reads
more light gives the direction the robot should turn. For example, if the left photoresistor reads
more light than the right photoresistor, the robot should turn or tend towards the left. If both
sensors read about the same value, meaning they both get the same amount of light, then the
robot should drive straight.
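A sketch of this comparison, for illustration only (not the uploaded program), assuming the two LDR voltage dividers on analog pins A0 and A1 and the motor driver inputs on pins 5, 6, 9 and 10:

// Illustrative photovore comparison (assumed pins and tolerance).
const int LEFT_LDR = A0, RIGHT_LDR = A1;   // assumed: brighter light = higher reading
const int L1 = 5, L2 = 6, R1 = 9, R2 = 10;
const int DEADBAND = 50;                   // assumed tolerance for "about the same amount of light"

void setWheels(int l1, int l2, int r1, int r2) {
  digitalWrite(L1, l1); digitalWrite(L2, l2);
  digitalWrite(R1, r1); digitalWrite(R2, r2);
}

void setup() {
  pinMode(L1, OUTPUT); pinMode(L2, OUTPUT);
  pinMode(R1, OUTPUT); pinMode(R2, OUTPUT);
}

void loop() {
  int left  = analogRead(LEFT_LDR);
  int right = analogRead(RIGHT_LDR);

  if (abs(left - right) < DEADBAND)  setWheels(HIGH, LOW, HIGH, LOW);  // similar light: drive straight
  else if (left > right)             setWheels(LOW,  LOW, HIGH, LOW);  // more light on the left: turn left
  else                               setWheels(HIGH, LOW, LOW,  LOW);  // more light on the right: turn right
}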
7.1 Application:
Smoke detector: Smoke detector is a device that detects smoke, typically as an indicator of fire.
Commercial, industrial, and mass residential devices issue a signal to a fire alarm system while
household detectors, known as smoke alarms, generally issue a local audible or visual alarm
from the detector itself. A simple home smoke detector circuit of this kind warns the user against
fire accidents. It relies on the smoke that is produced in the event of a fire: when smoke passes
between a bulb and an LDR, the amount of light falling on the LDR decreases. This type of
circuit is called an optical smoke detector (it should not, of course, replace a certified home
smoke detector). The reduced light causes the resistance of the LDR to increase, and the voltage
at the base of the transistor is pulled high, due to which the supply to the chip on board (COB)
is completed. The sensitivity of the smoke detector depends on
the distance between the bulb and the LDR as well as the setting of preset VR1. Thus, by placing
the bulb and the LDR at appropriate distances, one may vary preset VR1 to get optimum sensitivity.
Energy Detection: The robot can be used to detect where the maximum energy is received from
the sun. If a solar panel is kept facing one direction the whole day, it cannot receive the maximum
intensity of sunlight. A photovore robot can be used to detect the direction of maximum intensity
and change the panel's orientation accordingly, so that the solar panel can receive maximum
intensity throughout the day.
7.2 Items Required: Two LDR, Two BO2 Type 60mA Motors, Ball Caster, Plastic Chassis,
Two Batteries, Two Standard Wheels, Double Sided Tape and RoboBoard 1.
7.3 Working Principle: A photoresistor is made of a high resistance semiconductor. If light
falling on the device is of high enough frequency, photons absorbed by the semiconductor give
bound electrons enough energy to jump into the conduction band. The resulting free electron
(and its hole partner) conduct electricity, thereby lowering resistance. A photoelectric device
can be either intrinsic or extrinsic. An intrinsic semiconductor has its own charge carriers and
is not an efficient semiconductor, e.g. silicon. In intrinsic devices the only available electrons
are in the valence band, and hence the photon must have enough energy to excite the electron
across the entire band gap. Extrinsic devices have impurities, also called dopants, added whose
ground state energy is closer to the conduction band; since the electrons do not have as far to
jump, lower energy photons (i.e., longer wavelengths and lower frequencies) are sufficient to
trigger the device. If a sample of silicon has some of its atoms replaced by phosphorus atoms
(impurities), there will be extra electrons available for conduction. This is an example of an
extrinsic semiconductor. Photoresistors are basically photocells.
The sensor that we used in our project is the Light Dependent Resistor (LDR). It is made of
cadmium sulphide (CdS). CdS is mainly used as a pigment. CdS and cadmium selenide (CdSe)
are used in the manufacture of photoresistors (light dependent resistors) sensitive to visible and
near infrared light.

Figure 17: Internal structure of photoresistor


In thin-film form, CdS can be combined with other layers for use in certain types of solar cells.
CdS was also one of the first semiconductor materials to be used for thin-film transistors
(TFTs). However, interest in compound semiconductors for TFTs largely waned after the
emergence of amorphous silicon technology in the late 1970s.
The following block diagram shows the working of the robot:

Figure 18: Block diagram of Photovore Robot


The block diagram shown above illustrates the working of a photovore robot. There are two
sensors on the right and left sides of the robot, placed in front of the motors. Both sensors are
connected to input pins of the Arduino board. The output is taken from the PWM pins of the
Arduino and goes to the motor driver, which drives the motors.
7.4 Assembly: The main robot body is assembled as described in section 3.6. The two LDRs
are mounted at the front of the chassis equispaced from the centre.
Figure 19: Photovore Robot
7.5 Source Code: The following program was uploaded into the RoboBoard 1:
7.6 Testing: The photovore robot fulfilled all the objectives that were planned. To view it,
follow the link: https://youtu.be/RMr1LNzlBJc

8. PC Controlled Robot
The robots that are controlled by interfacing with a personal computer are called computer
controlled robots. A computer is an integral part of every robot system and contains a control
program and a task program. The control program is provided by the manufacturer and controls
each joint of the robot manipulator. The task program is provided by the user and
specifies the manipulating motions that are required to complete a specific job.
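As an illustration of how such a task program can take keyboard commands from a PC, the sketch below reads single characters over the board's serial link; the command letters, baud rate and motor pins are all assumptions, not the program actually used:

// Illustrative serial command reader for a PC controlled robot (assumed pins and keys).
const int L1 = 5, L2 = 6, R1 = 9, R2 = 10;   // assumed motor driver pins

void setWheels(int l1, int l2, int r1, int r2) {
  digitalWrite(L1, l1); digitalWrite(L2, l2);
  digitalWrite(R1, r1); digitalWrite(R2, r2);
}

void setup() {
  pinMode(L1, OUTPUT); pinMode(L2, OUTPUT);
  pinMode(R1, OUTPUT); pinMode(R2, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();             // one keystroke sent from the PC
    switch (c) {
      case 'f': setWheels(HIGH, LOW,  HIGH, LOW);  break;  // forward
      case 'b': setWheels(LOW,  HIGH, LOW,  HIGH); break;  // backward
      case 'l': setWheels(LOW,  LOW,  HIGH, LOW);  break;  // turn left
      case 'r': setWheels(HIGH, LOW,  LOW,  LOW);  break;  // turn right
      case 's': setWheels(LOW,  LOW,  LOW,  LOW);  break;  // stop
    }
  }
}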
8.1 Application: Computer controlled robots can be used in different fields such as medical,
military, industrial and agricultural operations. They can also implement uplink communication
from the robots to a GUI application through a base station (a wireless PC controlled robot).
8.2 Items Required: Two BO2 Type 60mA Motors, Ball Caster, Plastic Chassis, Two
Batteries, Two Standard Wheels, Double Sided Tape and RoboBoard 1.
8.3 Source Code: The following program was uploaded into the RoboBoard 1:
9. Obstacle Detecting and Avoiding
Robot
The project “Obstacle Detecting and Avoiding Robot” deals with detection and avoidance of
the various obstacles found in an environment. Obstacle avoidance is a primary requirement of
any autonomous mobile robot. The obstacle avoidance robot is designed to allow the robot to
navigate in an unknown environment by avoiding collisions. The obstacle avoiding robot senses
obstacles in its path, avoids them and resumes its run.
9.1 Applications: The obstacle avoiding technique is very useful in real life. It can also be used
as a vision belt for blind people by replacing the IR sensor with a kinetic sensor, a type of
microwave sensor whose sensing range is very high and whose output varies according to
changes in the object's position. This technique enables a blind person to navigate around
obstacles easily: three vibrators are placed at the left, right and centre of a belt, named the
VISION BELT, allowing a blind person to walk without outside help. On top of the obstacle
avoiding robot, temperature or pressure sensors can be added to monitor the surrounding
atmospheric conditions. This is useful in places where the environment is not suitable for humans.
9.2 Items Required: Ultrasonic Sensor, Two BO2 Type 60mA Motors, Ball Caster, Plastic
Chassis, Two Batteries, Two Standard Wheels, Double Sided Tape and RoboBoard 1.
9.3 Working Principle: The ultrasonic sensor continuously emits high-frequency signals; when
an obstacle is detected, these signals are reflected back and are then taken as input by the sensor.
The ultrasonic sensor contains a multivibrator fixed at its base. The multivibrator is a
combination of a resonator and a vibrator, and the ultrasonic waves generated by the vibration
are delivered to the resonator. The ultrasonic sensor actually consists of two parts: the emitter,
which produces a 40 kHz sound wave, and the detector, which detects the 40 kHz sound wave
and sends an electrical signal back to the microcontroller.

Figure 20: Schematic Diagram of Ultrasonic Sensor Working


In our project, we are using the HC-SR04 ultrasonic sensor, which has 4 pins: VCC, Trigger,
Echo and GND.
Figure 21: HC-SR04 Sensor Diagram
Features:
Power Supply: +5 V DC
Working Current: 15 mA
Effectual Angle: <15 degrees
Ranging Distance: 2 cm – 400 cm / 1 in – 13 ft
Resolution: 0.3 cm
Measuring Angle: 30 degrees
Trigger Input Pulse Width: 10 µs

Figure 22: Ultrasonic Sensor


Figure 23: Obstacle Detecting and Avoiding Flowchart
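The detect-and-avoid loop summarized in the flowchart can be illustrated roughly as below. The pins, the 20 cm threshold and the fixed turn timings are assumptions, not the program that was uploaded:

// Illustrative obstacle detect-and-avoid loop (assumed pins, threshold and timings).
const int TRIG = 12, ECHO = 11;            // assumed HC-SR04 connections
const int L1 = 5, L2 = 6, R1 = 9, R2 = 10; // assumed L293D inputs

float readDistanceCm() {
  digitalWrite(TRIG, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH); delayMicroseconds(10);   // 10 us trigger pulse
  digitalWrite(TRIG, LOW);
  unsigned long t = pulseIn(ECHO, HIGH, 30000UL);    // echo time in microseconds
  return (t * 0.034f) / 2.0f;                        // D = c*t/2, c ~ 0.034 cm/us
}

void setWheels(int l1, int l2, int r1, int r2) {
  digitalWrite(L1, l1); digitalWrite(L2, l2);
  digitalWrite(R1, r1); digitalWrite(R2, r2);
}

void setup() {
  pinMode(TRIG, OUTPUT); pinMode(ECHO, INPUT);
  pinMode(L1, OUTPUT); pinMode(L2, OUTPUT);
  pinMode(R1, OUTPUT); pinMode(R2, OUTPUT);
}

void loop() {
  float d = readDistanceCm();
  if (d > 0 && d < 20.0f) {                 // obstacle closer than an assumed 20 cm
    setWheels(LOW, HIGH, LOW, HIGH);        // back up briefly
    delay(300);
    setWheels(HIGH, LOW, LOW, HIGH);        // spin on the spot to a new heading
    delay(400);
  } else {
    setWheels(HIGH, LOW, HIGH, LOW);        // path clear: go forward
  }
  delay(60);
}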
9.4 Assembly: The main robot body is assembled as described in section 3.6. The ultrasonic
sensor is mounted at the front of the chassis with double sided tape. It was positioned vertically
so as to ensure error-free obstacle detection.
Figure 24: Obstacle Detecting and Avoiding Robot
9.5 Source Code: The following program was uploaded into the RoboBoard 1:
9.6 Testing: The Obstacle Detecting and Avoiding Robot successfully fulfilled all the objectives
that were planned. To view it, follow the link:
https://www.facebook.com/SudiptoShekhorMondol/videos/1761049450851463/
10. Sumobots
Robot-sumo, or pepe-sumo, is a sport in which two robots attempt to push each other out of a
circle (in a similar fashion to the sport of sumo). The robots used in this competition are called
sumobots. The engineering challenges are for the robot to find its opponent (usually
accomplished with infrared or ultrasonic sensors) and to push it out of the flat arena. A robot
should also avoid leaving the arena, usually by means of a sensor that detects the edge.
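For illustration only (not the uploaded program), a simple "search and push" behaviour using the same ultrasonic ranging idea, with assumed pins and an assumed 60 cm detection range, might look like:

// Illustrative sumobot search-and-push behaviour (assumed pins and range).
const int TRIG = 12, ECHO = 11;            // assumed HC-SR04 connections
const int L1 = 5, L2 = 6, R1 = 9, R2 = 10; // assumed L293D inputs

float readDistanceCm() {
  digitalWrite(TRIG, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG, LOW);
  unsigned long t = pulseIn(ECHO, HIGH, 30000UL);
  return (t * 0.034f) / 2.0f;
}

void setWheels(int l1, int l2, int r1, int r2) {
  digitalWrite(L1, l1); digitalWrite(L2, l2);
  digitalWrite(R1, r1); digitalWrite(R2, r2);
}

void setup() {
  pinMode(TRIG, OUTPUT); pinMode(ECHO, INPUT);
  pinMode(L1, OUTPUT); pinMode(L2, OUTPUT);
  pinMode(R1, OUTPUT); pinMode(R2, OUTPUT);
}

void loop() {
  float d = readDistanceCm();
  if (d > 0 && d < 60.0f) {          // opponent detected within the assumed range
    setWheels(HIGH, LOW, HIGH, LOW); // charge straight ahead to push it
  } else {
    setWheels(HIGH, LOW, LOW, HIGH); // no target: spin on the spot to search
  }
  delay(50);
}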
10.1 Items Required: Ultrasonic Sensor, Two BO2 Type 60mA Motors, Ball Caster, Plastic
Chassis, Two Batteries, Two Standard Wheels, Double Sided Tape and RoboBoard 1.
10.2 Source Code: The following program was uploaded into the RoboBoard 1:
10.3 Testing: The Sumobot successfully chases and attacks other Sumobots.
11. DTMF based Mobile Controlled
Robot
Dual-tone multi-frequency signalling (DTMF) is an in-band telecommunication signalling
system using the voice-frequency band over telephone lines between telephone equipment and
other communications devices and switching centers. DTMF was first developed in the Bell
System in the United States, and became known under the trademark Touch-Tone for use in
push-button telephones supplied to telephone customers, starting in 1963.
The Touch-Tone system using a telephone keypad gradually replaced the use of rotary dial and
has become the industry standard for landline and mobile service. Other multi-frequency
systems are used for internal signalling within the telephone network. In this project, the robot
is controlled by a mobile phone. In the course of a call, if any button is pressed, a tone, called
DTMF, corresponding to the button pressed is heard at the other end of the call. The robot
perceives this DTMF tone with the help of the phone stacked in the robot.
11.1 Applications: A cell phone controlled robot can be used at the borders for detecting hidden
land mines. Military usage of remotely controlled military vehicles dates back to the first half
of the 20th century. The Soviet Red Army used remotely controlled Teletanks during the 1930s in
the Winter War and the early stages of World War II. Remote control vehicles have various scientific
uses including hazardous environments, working in deep oceans, and space exploration. The
majority of the probes to the other planets in our solar system have been remote control
vehicles, although some of the more recent ones were partially autonomous. The sophistication
of these devices has fuelled greater debate on the need for manned spaceflight and exploration.
The robot can be used anywhere there is coverage from the service provider of the connection
on the phone mounted on the robot.
11.2 Items Required: DTMF Decoder, 3.5 mm Audio Cable, Two BO2 Type 60mA Motors,
Ball Caster, Plastic Chassis, Two Batteries, Two Standard Wheels, Double Sided Tape and
RoboBoard 1.
11.3 Working Principle: The DTMF based robotic vehicle circuit consists of a DTMF decoder IC,
the L293D motor driver IC and motors. The DTMF decoder IC used is the HT9107B, which has
18 pins. The tone from the DTMF encoder is given to the DTMF decoder IC. Internally, the
decoder IC consists of an operational amplifier whose output is given to pre-filters to separate
the low and high frequencies. The signal is then passed to a code detector circuit, which decodes
the incoming tone into 4 bits of binary data. This data at the output is given directly to the driver
IC to drive the two motors, which rotate according to the decoded output.
If the button pressed on the mobile is ‘1’, it gives a decoded output of ‘0001’. Thus the motor
connected to the first two pins gets 0 volts, while the second motor has 5 volts on one pin and
0 volts on the other. The second motor therefore starts rotating while the first motor stays off,
so the robot moves in one direction, either to the left or to the right. If the robot is to move
forward or backward, then the binary value should be either ‘0101’ or ‘1010’; these values make
the two motors rotate in the same direction, i.e. either forward or backward.
A typical 4x4 keypad is shown below:
1 2 3 A
4 5 6 B
7 8 9 C
* 0 # D
Each of the 16 keys has a key code from 0-15.
Code Represented Key
0 D
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 0
11 *
12 #
13 A
14 B
15 C
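To illustrate how the decoded output can be used (this is not the program that was uploaded), the sketch below reads the decoder's four output lines and maps a few key codes from the table above to motor actions; the pins and the chosen keys are assumptions:

// Illustrative DTMF key-code reader (assumed pins and key mapping).
const int D0 = 2, D1 = 3, D2 = 4, D3 = 5;        // assumed decoder output pins, D0 = least significant bit
const int L1 = 6, L2 = 9, R1 = 10, R2 = 11;      // assumed motor driver pins

void setWheels(int l1, int l2, int r1, int r2) {
  digitalWrite(L1, l1); digitalWrite(L2, l2);
  digitalWrite(R1, r1); digitalWrite(R2, r2);
}

void setup() {
  pinMode(D0, INPUT); pinMode(D1, INPUT); pinMode(D2, INPUT); pinMode(D3, INPUT);
  pinMode(L1, OUTPUT); pinMode(L2, OUTPUT);
  pinMode(R1, OUTPUT); pinMode(R2, OUTPUT);
}

void loop() {
  // assemble the 4-bit key code from the decoder lines
  int code = digitalRead(D0) | (digitalRead(D1) << 1)
           | (digitalRead(D2) << 2) | (digitalRead(D3) << 3);

  switch (code) {   // assumed mapping: key 2 = forward, 8 = backward, 4 = left, 6 = right, 5 = stop
    case 2: setWheels(HIGH, LOW,  HIGH, LOW);  break;
    case 8: setWheels(LOW,  HIGH, LOW,  HIGH); break;
    case 4: setWheels(LOW,  LOW,  HIGH, LOW);  break;
    case 6: setWheels(HIGH, LOW,  LOW,  LOW);  break;
    case 5: setWheels(LOW,  LOW,  LOW,  LOW);  break;
  }
}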
Audio Signal is taken out from the mobile phone using the 3.5 mm audio socket. For this a
3.5mm audio cable is used. It has 3.5 mm plugs on both the sides.
One side is connected to the mobile phone and the other to the DTMF decoder module. The
DTMF decoder module must be powered from a regulated 5 V supply. The positive supply should
be given to the pin labelled 5V and the negative to GND.

Figure 25: DTMF Decoder


Figure 26: Block Diagram of Cell Phone Operated Robot

Figure 27: Flow Chart for Mobile Controlled Robot


11.4 Assembly: The main robot body is assembled as described in section 3.6. The DTMF
decoder is connected to a mobile phone mounted on the chassis. The remote mobile phone dials
the phone on board the chassis and the call is answered. An audio link is established once the
call is received, and this link carries the DTMF signals. The on-board mobile passes the received
DTMF tones to the decoder, which then converts the tones to binary numbers.

Figure 28: DTMF based Mobile Controlled Robot

1: Standard Wheel
2: Jumper Wires
3: 3.5mm Audio Cable (Plugs on both sides)
4: Mobile Phone (On board Chassis)
5: DTMF Decoder
6: Robot Chassis
7: Uploader Unit
8: RoboBoard 1 (Manufacturer: Skubotics)
9: Battery
11.5 Source Code: The following program was uploaded into the RoboBoard 1:
11.6 Testing: The DTMF based Mobile Controlled Robot successfully fulfilled all the objectives
that were planned. To watch the robot, follow the link:
https://youtu.be/3fQpNdhyfFA
12. Conclusion
Today we are in the world of robotics. Knowingly or unknowingly, we have been using
different types of robots in our daily life. This project had the goal of manipulating different
parts of the robot to make it react according to our desire. The objective of the line follower
robot was to make the robot follow a black track just by programming it to use the sensors
attached to it. The photovore robot successfully followed light. The PC controlled robot was easily
manoeuvred by commands from the keyboard. The obstacle avoider skilfully averted the
hindrances. The sumobot attacked other robots as programmed. The DTMF based mobile
controlled robot successfully executed all the commands given via the remote mobile phone.
The sensing range can be increased efficiently by improving the sensor quality.
References
[1] Serope Kalpakjian and Steven R. Schmid, “Manufacturing Engineering and Technology”,
sixth edition, “Automation of Manufacturing Processes”.
[2] Dr. Siddhartha Ray, “Introduction to Materials Handling”, New Age International
Publishers, 2013 ed.
[3] Dr. Michelle Dunn, “Mobile Robotics”, Swinburne University of Technology.
[4] SS Ratan, “Theory of Machines”, McGraw Hill Education, Fourth Edition.
[5] “What is Arduino?” and “Why Arduino?”, Arduino,
https://www.arduino.cc/en/Guide/Introduction
[6] M. S. Islam & M. A. Rahman, “Design and Fabrication of Line Follower Robot”, Asian
Journal of Applied Science and Engineering, Volume 2, No 2 (2013), Pages: 27-32
[7] Pravin Kumar Singh, “Arduino Based Photovore Robot”, International Journal of Scientific
& Engineering Research, Volume 4, Issue 4, April-2013, Pages: 1003-1015.
[8] Kirti Bhagat, Sayalee Deshmukh, Shraddha Dhonde and Sneha Ghag, “Obstacle Avoidance
Robot”, International Journal of Science, Engineering and Technology Research (IJSETR),
Volume 5, Issue 2, February 2016, Pages: 439-442
[9] Dhiraj Singh Patel, Dheeraj Mishra, Devendra Pandey, Ankit Sumele, Asst. Prof. Ishwar
Rathod, “Mobile Operated Spy Robot”, International Journal of Emerging Technology and
Advanced Engineering Volume 3, Special Issue 2, January 2013, Pages: 23-26.
