
AUTOMATIC SCRAP COLLECTING ROBOT

The rules basically went like this:

o gather and retrieve objects of unknown size/weight
o battery power is severely limited
o robot can be remote controlled
o allowable robot height/length/width is limited
o each additional object gained you more points
o time limit of 3 minutes

Specifications
The robot used one HS-311 servo for the actuated storage bucket,
one modified HS-805BB for the 1 DOF robot arm, one servo for the
robot gripper end-effector, and two modified servos for the differential
drive train.
The bucket was built from bent aluminum sheet metal, and the frame
was both milled and CNC-machined from raw aluminum stock.

Specially shaped foam was used inside the bucket to keep the objects
inside from rolling out while wall climbing.
1) The wheels on each side were linked together by a timing belt
for tank-style driving.
2) A rubber band was used as a belt tensioner.
3) The four wheels were custom CNC machined.
4) Conveyor belt material was glued onto the wheels and
grippers for its high-friction properties.
5) The RC receiver antenna was wrapped so as not to tangle.

Control and the Driver

The remote control was a Hitec Laser 6 (six channels); each channel
controlled one servo individually.

The agility of a remote control robot is very much a function of driver
skill. If you ever enter a remote control robot contest, driver skill can
significantly affect robot performance. Practice, practice, practice. Know
exactly how your robot will perform, and practice in realistic settings, too.
We went as far as to reconstruct the test course ourselves, timing
everything the robot did for speed optimization and pushing the limits to
see what the robot could do. In the video I was operating 5 servos
simultaneously with two hands on the remote, a skill that took many,
many hours of practice. But it all paid off . . .

An image of a prototype version climbing a wall in our recreated test
course:
You probably did not gather this from the video, but the arm was used to
shift the robot's weight for balance as it climbed the wall - not just as a
lifting mechanism. The claws also had to be opened up during the climb
so as not to break.

This early plastic prototype attempted to climb the wall before we
learned about the weight-shifting role of the arm. Embarrassingly, the
basket was lowered accidentally and the bot got stuck on its way over.
The gripper on this version was made from nylon and broke during the
climb.

Difficulties
There were two main difficulties. The first is that the conveyor belt
material, in combination with tank-style driving friction issues, made
turning on carpeting very difficult. At the end of the video you will
notice the robot making weird, dance-like motions as it turns around.
This is driver skill attempting to compensate for the problem. The other
major problem was arm torque. A lot of effort was put into making the
arm very light yet strong enough to support the weight of the robot while
wall climbing. If you plan to make one of your own, make sure you do
the moment arm calculations first to ensure it is strong enough. We had
to gear down the servo with external gears to have just barely enough
torque . . .

Results
So how did we fare? Beating out 12 teams in regionals, the team (four of
us) made it to nationals in California, where we placed 7th (out of 14).
The competition was really impressive, and I wish I had videos to show it . . .

The SolidWorks CAD file of this robot is available for download
(7.1 MB). If you use the CAD (or any part of it) for your robot, please
give credit and link back to this page.

CONSTRUCTION OF ARM:
Mobile Manipulators
A mobile robot fitted with a robot arm is a sub-class of robotic arms. It
works just like any other robotic arm, but the DOF of the vehicle is added
to the DOF of the arm. For example, a differential drive robot (2 DOF)
with a 5 DOF robot arm attached (see the yellow robot below) would
give the arm a total of 7 DOF. What do you think the workspace of this
type of robot would be?
Force Calculations of Joints
This is where this tutorial starts getting heavy with math. Before
continuing, I strongly recommend you read the mechanical engineering
tutorials on statics and dynamics. They will give you a fundamental
understanding of moment arm calculations.

The point of doing force calculations is for motor selection. You must
make sure that the motor you choose can not only support the weight of
the robot arm, but also what the robot arm will carry (the blue ball in the
image below).

The first step is to label your FBD (free body diagram), with the robot
arm stretched out to its maximum length.

Choose these parameters:

o weight of each linkage
o weight of each joint
o weight of object to lift
o length of each linkage

Next you do a moment arm calculation, multiplying downward force
times the linkage lengths. This calculation must be done for each lifting
actuator. This particular design has just two DOF that require lifting,
and the center of mass of each linkage is assumed to be at Length/2.

Torque About Joint 1:

M1 = L1/2 * W1 + L1 * W4 + (L1 + L2/2) * W2 + (L1 + L3) * W3

Torque About Joint 2:

M2 = L2/2 * W2 + L3 * W3

As you can see, for each DOF you add, the math gets more complicated
and the joint weights get heavier. You will also see that shorter arm
lengths allow for smaller torque requirements.
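To make the sums concrete, here is a minimal Python sketch of the two
torque equations above. The function name and the example values are my
own additions for illustration, not from the tutorial; the symbols L1-L3
and W1-W4 follow the FBD labels used above.

def lifting_torques(L1, L2, L3, W1, W2, W3, W4):
    """Worst-case moments about the two lifting joints, arm stretched out.

    L1, L2 are linkage lengths; L3 is the distance from joint 2 to the object.
    W1, W2 are linkage weights, W3 is the object weight, W4 is the joint weight.
    Lengths in meters and weights in newtons give torques in newton-meters.
    """
    M1 = L1 / 2 * W1 + L1 * W4 + (L1 + L2 / 2) * W2 + (L1 + L3) * W3
    M2 = L2 / 2 * W2 + L3 * W3
    return M1, M2

# Made-up example: 0.1 m links, 2 N links, 1 N joint, 0.5 N object.
# Pick motors rated comfortably above M1 and M2.
M1, M2 = lifting_torques(L1=0.1, L2=0.1, L3=0.1, W1=2.0, W2=2.0, W3=0.5, W4=1.0)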

Too lazy to calculate forces and torques yourself? Try my robot arm
calculator to do the math for you.

Forward Kinematics
Forward kinematics is the method for determining the orientation and
position of the end effector, given the joint angles and link lengths of the
robot arm. To calculate forward kinematics, all you need is high school
trig and algebra.

For our robot arm example, here we calculate end effector location with
given joint angles and link lengths. To make visualization easier for you,
I drew blue triangles and labeled the angles.

Assume that the base is located at x=0 and y=0. The first step would be
to locate x and y of each joint.

Joint 0 (with x and y at the base equaling 0):

x0 = 0
y0 = L0

Joint 1 (with x and y at J1 equaling 0):

cos(psi) = x1/L1 => x1 = L1*cos(psi)
sin(psi) = y1/L1 => y1 = L1*sin(psi)

Joint 2 (with x and y at J2 equaling 0):

sin(theta) = x2/L2 => x2 = L2*sin(theta)
cos(theta) = y2/L2 => y2 = L2*cos(theta)

End Effector Location (make sure your signs are correct):

x = x0 + x1 + x2 = 0 + L1*cos(psi) + L2*sin(theta)
y = y0 + y1 + y2 = L0 + L1*sin(psi) + L2*cos(theta)
z equals alpha, in cylindrical coordinates

The angle of the end effector, in this example, is equal to theta + psi.
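As a sanity check, the equations above drop straight into code. This is a
minimal Python sketch of my own (not the tutorial's), which assumes psi is
measured from the horizontal and theta from the vertical, matching the
triangles described above:

import math

def forward_kinematics(L0, L1, L2, psi, theta):
    """End effector (x, y) for the three-link example arm; angles in radians."""
    x0, y0 = 0.0, L0                                     # joint 0: vertical base link
    x1, y1 = L1 * math.cos(psi), L1 * math.sin(psi)      # joint 1: psi from horizontal
    x2, y2 = L2 * math.sin(theta), L2 * math.cos(theta)  # joint 2: theta from vertical
    return x0 + x1 + x2, y0 + y1 + y2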

Too lazy to calculate forward kinematics yourself?
Check out my Robot Arm Designer v1 in Excel.

Inverse Kinematics
Inverse kinematics is the opposite of forward kinematics: you have a
desired end effector position, but need to know the joint angles required
to achieve it. The robot sees a kitten and wants to grab it - what angles
should each joint go to? Although far more useful than forward
kinematics, this calculation is also much more complicated. As such, I
will not show you how to derive the equation based on your robot arm
configuration.

Instead, I will just give you the equations for our specific robot design:

psi = arccos((x^2 + y^2 - L1^2 - L2^2) / (2 * L1 * L2))
theta = arcsin((y * (L1 + L2 * c2) - x * L2 * s2) / (x^2 + y^2))

where c2 = (x^2 + y^2 - L1^2 - L2^2) / (2 * L1 * L2)
and s2 = sqrt(1 - c2^2)
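Here is the same pair of equations as a small Python sketch (my own
translation, not code from the tutorial). It picks the positive square root
for s2, i.e. one of the two elbow solutions, and raises an error when the
target is out of reach - which previews the "zero solutions" problem
discussed next:

import math

def inverse_kinematics(x, y, L1, L2):
    """Joint angles (psi, theta) in radians for target (x, y), per the equations above."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target is outside the workspace")  # zero solutions
    s2 = math.sqrt(1 - c2**2)        # positive root: one of the two elbow solutions
    psi = math.acos(c2)
    theta = math.asin((y * (L1 + L2 * c2) - x * L2 * s2) / (x**2 + y**2))
    return psi, theta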

So what makes inverse kinematics so hard? Well, other than the fact that
it involves non-linear simultaneous equations, there are other reasons
too.

First, there is the very likely possibility of multiple, sometimes infinitely
many, solutions (as shown below). How would your arm choose which is
optimal, based on torques, previous arm position, gripping angle, etc.?

There is also the possibility of zero solutions. Maybe the location is
outside the workspace, or maybe the point within the workspace must be
gripped at an impossible angle.

Singularities, places of infinite required acceleration, can blow up
equations and/or leave motors lagging behind (motors can't achieve
infinite acceleration).

And lastly, exponential equations take forever to calculate on a
microcontroller. There is no point in having advanced equations on a
processor that can't keep up.

Too lazy to calculate inverse kinematics yourself?
Check out my Robot Arm Designer v1 in Excel.

Motion Planning
Motion planning on a robot arm is fairly complex, so I will just give you
the basics.
Suppose your robot arm has objects within its workspace - how does the
arm move through the workspace to reach a certain point? To do this,
assume your robot arm is just a simple mobile robot navigating in 3D
space. The end effector will traverse the space just like a mobile robot,
except now it must also make sure the other joints and links do not
collide with anything. This is extremely difficult to do . . .

What if you want your robot end effector to draw straight lines with a
pencil? Getting it to go from point A to point B in a straight line is
relatively simple to solve. What your robot should do, using inverse
kinematics, is go through many intermediate points between point A and
point B; the final motion will come out as a smooth straight line. This
method works not only with straight lines, but with curved ones too. On
expensive professional robotic arms, all you need to do is program two
points and tell the robot how to go between them (straight line, as fast as
possible, etc.). For further reading, you could use the wavefront
algorithm to plan this two-point trajectory. A small code sketch of the
waypoint approach follows.
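This is a minimal sketch of that idea, reusing the inverse_kinematics()
sketch from earlier; the step count and function name are mine, for
illustration only:

def straight_line(ax, ay, bx, by, L1, L2, steps=20):
    """Joint angles for evenly spaced targets along the line from A to B."""
    waypoints = []
    for i in range(steps + 1):
        t = i / steps                      # 0.0 at point A, 1.0 at point B
        x = ax + t * (bx - ax)
        y = ay + t * (by - ay)
        waypoints.append(inverse_kinematics(x, y, L1, L2))
    return waypoints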

Velocity (and more Motion Planning)

Calculating end effector velocity is mathematically complex, so I will go
into only the basics. The simplest way is to assume your robot arm (held
straight out) is a rotating wheel of radius L. The joint rotates at Y rpm,
so the velocity is

velocity of end effector on straight arm = 2 * pi * radius * rpm
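In code, with L in meters and Y in rpm (a trivial sketch of my own, with
the unit conversion spelled out since units are the usual trap):

import math

def tip_speed(L, rpm):
    """Speed of the tip of a straight arm of radius L spinning at rpm."""
    return 2 * math.pi * L * rpm / 60.0   # meters per second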

However, the end effector does not just rotate about the base; it can move
in many directions, following a straight line, a curve, etc.

With robot arms, the quickest way between two points is often not a
straight line. If two joints have different motors, or carry different
loads, then their maximum velocities can differ. When you tell the end
effector to go from one point to the next, you have two choices: have it
follow a straight line between the points, or tell all the joints to go as
fast as possible - leaving the end effector to possibly swing wildly
between those points.

In the image below, the end effector of the robot arm is moving from the
blue point to the red point. In the top example, the end effector travels a
straight line; this is the only possible motion this arm can perform to
travel a straight line. In the bottom example, the arm is told to get to the
red point as fast as possible. Given many different trajectories, the arm
takes whichever path lets the joints rotate the fastest.

Which method is better? There are many deciding factors. Usually you
want straight lines when the object the arm moves is really heavy, as that
minimizes the momentum change required for movement (momentum =
mass * velocity). But for maximum speed (perhaps the arm isn't carrying
anything, or only light objects) you would want maximum joint speeds.

Now suppose you want your robot arm to operate at a certain rotational
velocity - how much torque would a joint need? First, let's go back to our
FBD:

Now let's suppose you want joint J0 to rotate 180 degrees in under 2
seconds - what torque does the J0 motor need? Well, J0 is not affected by
gravity, so all we need to consider is momentum and inertia. Putting this
in equation form we get:

torque = moment_of_inertia * angular_acceleration

Breaking that equation into sub-components we get:

torque = (mass * distance^2) * (change_in_angular_velocity / change_in_time)

and

change_in_angular_velocity = (angular_velocity1) - (angular_velocity0)
angular_velocity = change_in_angle / change_in_time

Now, assuming that at start time 0 angular_velocity0 is zero, we get:

torque = (mass * distance^2) * (angular_velocity / change_in_time)

where distance is defined as the distance from the rotation axis to the
center of mass of the arm:

center of mass of the arm = distance = 1/2 * (arm_length)   (use arm mass)

but you also need to account for the object your arm holds:

center of mass of the object = distance = arm_length   (use object mass)

So calculate the torque for the arm, then again for the object, and add the
two torques together for the total:

torque(of_object) + torque(of_arm) = torque(for_motor)

And of course, if J0 were additionally affected by gravity, you would add
the torque required to lift the arm to the torque required to reach the
velocity you need. To avoid doing this by hand, just use the robot arm
calculator.
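Putting the whole chain together, here is a minimal Python sketch of the
J0 sizing calculation above. The arrangement and example masses are mine;
the lumped-mass assumptions (arm mass at L/2, object at the tip) are the
ones stated in the text:

import math

def j0_torque(arm_mass, arm_length, object_mass, angle, move_time):
    """Torque to rotate J0 through `angle` radians in `move_time` seconds, from rest."""
    angular_velocity = angle / move_time            # starting from rest
    angular_accel = angular_velocity / move_time
    torque_arm = arm_mass * (arm_length / 2) ** 2 * angular_accel   # arm mass at L/2
    torque_object = object_mass * arm_length ** 2 * angular_accel   # object at the tip
    return torque_arm + torque_object

# The example above: 180 degrees in 2 seconds (mass and length values made up).
print(j0_torque(arm_mass=0.5, arm_length=0.3, object_mass=0.2,
                angle=math.pi, move_time=2.0))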

But it gets harder . . . the above equation is for rotational motion and not
for straight line motions. Look up something called a Jacobian if you
enjoy mathematical pain =P
OBSTACLE AVOIDER CONSTRUCTION:

The photo below shows my test setup with some IR LEDs (dark blue) as a
light source and two phototransistors in parallel for the receiver. You could
use one of each, but I wanted to spread them out to cover a wider area. This
setup works like a FritsLDR but with IR. It has a range of about 10-15cm
(4-6 inches) with my hand as the object being detected.

I'm only running my LEDs at about 20mA. My LEDs are capable of 50mA
continuous, and some LEDs are capable of 100mA (see "Getting the most
from a LED").

I'm using this setup on Junior as a general-purpose object avoidance sensor
to prevent him backing into anything. I'm getting a good response: less
than a volt when my hand is up close reflecting the IR, and over 4.5V
with no IR.

To get this to work well with an A/D input it needs to have a much lower
impedance (it needs to let more current through). You can do this with an
op-amp, but most op-amps like more than 5V and are usually more expensive
than my one transistor and three resistors. This is a simple one-transistor
amplifier that gives my ADC good resolution. Click on the schematic for a
larger picture.

Starting from the left you can see my two IR LEDs with a resistor and
transistor in series. The transistor allows the processor to turn the LEDs on
or off. This is necessary to tell the difference between the ambient IR from
daylight and indoor lighting and the reflected light from the LEDs that
indicates the presence of an object.

Next are my two phototransistors in parallel, with a 1M resistor in series.
You could use only one, but I wanted to cover a wider area, so my transistors
point in slightly different directions. If either one detects IR it will allow
more current to flow. Since volts = current x resistance, even a small increase
in current will create a reasonable increase in voltage across the 1M resistor.
Unfortunately, the low input impedance of many A/D converters will act like
a small resistor in parallel with the 1M resistor and dramatically reduce the
output to the processor. This is where our BC549 transistor comes in to save
the day. In conjunction with the 1K and 10K resistors, it amplifies the signal
so that the analog input on your processor gets a nice strong signal. The
BC549 is not too critical - just about any general-purpose signal transistor
should do. My transistor had a hfe of 490 when measured with a multimeter.
You should probably have a hfe of at least 200-300.
As you can see, my sensor is made from liberal amounts of hotglue. Click
the image for a bigger picture. This has the advantage that you can flex the
LEDs and transistors outward to cover a larger area. This is Junior's
reversing sensor to prevent him reversing into anything, and as such it will
cover a wide area. I will make single LED/phototransistor sensors for front
left and front right. This will allow him to avoid crashing into obstacles
when his rangefinder/object tracker is looking elsewhere.

Note that the phototransistors are slightly forward of the blue LEDs. This
helps stop stray light from the LEDs being detected.

Below is the sensor hooked up to Junior's mainboard, which has three of my
amplifiers built in.
Using a simple test program that turns on the IR LEDs, stores the value of
the ADC, turns off the LEDs, reads the ADC again, and then subtracts the
stored value from the recent value, I was getting readings from 6 to 940.
This was with the curtains closed and the lights off. When the reading was 6,
my hand was about 300mm (1ft) away. With the lights on, the values ranged
from about 60 to 940, with a value of 60 being with my hand only about
150mm (6 inches) away. Considering the maximum possible resolution with
a 10-bit ADC is 0 to 1023, I thought 60-940 with the lights on was a very
good result.
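The test program itself isn't reproduced here, but the read-with-LEDs-on /
read-with-LEDs-off trick looks roughly like this MicroPython-flavored
sketch (pin numbers and names are placeholders of mine, not from the
original; the picaxe BASIC would differ in syntax but not in logic):

import time
from machine import ADC, Pin   # assumes a MicroPython board

ir_leds = Pin(2, Pin.OUT)       # drives the LED switching transistor
sensor = ADC(0)                 # output of the one-transistor amplifier

def read_object():
    """Reflected-IR reading with ambient light subtracted out."""
    ir_leds.on()
    time.sleep_ms(1)                    # let the phototransistor settle
    lit = sensor.read_u16()             # voltage drops when IR is reflected
    ir_leds.off()
    time.sleep_ms(1)
    ambient = sensor.read_u16()
    return ambient - lit                # bigger value = closer object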

After a comment about using sleeves, I repeated these tests with heatshrink
sleeves on the LEDs and phototransistors. The sleeves actually had a
negative effect and reduced the range. After I removed the sleeves, I did not
get the same reduction in range with the lights on. I don't know if it is
because during the first test it was daylight outside and the curtains didn't
block it all, or if it was the way I held the sensor, but the second set of tests
gave an almost identical range of approximately 300mm (12 inches)
regardless of the lights being on or off. I'll have to try again tomorrow
when it is daylight again. It seems my initial test was at fault - maybe the
way I held the sensor?

This is the single version of the sensor and will cost about half as much. In
the photo you can see the current-limiting resistor for the LED. Ignore the
value, as I had different requirements for Junior - use the values shown in
the schematic.

I've joined the positives together so there are only three wires going back to
the mainboard.

Note that the phototransistor is slightly in front of the LED to prevent stray
light from the LED being detected.
Once again I've used hotglue and heatshrink to make it solid and well
insulated.

This is the schematic for the single version. Click on it and the photos for
larger images.

Because this sensor has only a single phototransistor it isn't quite as
sensitive. To compensate, I've increased the current to the LED to almost
50mA, which is the maximum continuous current allowed. Because the LED is
pulsed on and off, this is quite safe and could have been increased to 100mA.
The problem with pushing a LED to its limits when controlled by a processor
is that if a fault occurs in the software then the LED could be destroyed.

When tested, the readings from the ADC of the picaxe ranged from about
100-910 regardless of background lighting. Despite the slightly reduced
resolution due to the single phototransistor, the range was about 400mm
(16 inches). This increased range was due to the increased power to the LED.

Make certain your LED and phototransistor are parallel to each other for
good range.

It was asked how wide the detection area is. Using my hand as the object at a
distance of approximately 300mm (12 inches) from the single sensor, the
detection area was about 150mm (6 inches) wide. The double sensor can
detect a wider area if the phototransistors are spread out at different angles.

Using my hand side-on to the single sensor, the detection area was only about
60-70mm (2-3 inches). This is reasonably narrow due to the lenses in the
LEDs and the phototransistors.

It should be noted that this is not a linear sensor, because the intensity of
light from the LEDs falls off as 1 divided by distance squared. In other
words, when the object is twice the distance away, the IR from the LEDs is
1/4 as strong. As a result, the closer the object, the better the resolution.

This would be a useful sensor to fill in the dead zone of other IR sensors
such as the Sharp GP2D12. To prevent interference, one should be disabled
when using the other.

As mentioned at the start, I've also experimented with using two of these
sensors for a simple object tracker inspired by Mintvelt's "four eyes". This
version can't tell the size or distance of an object, but it can track an object
well enough for a robot to recognise a moving object and give chase. Wish I
still had a cat - imagine a robot with a waterpistol chasing a cat around the
house :)

I've attached the code used in the video as well as an improved version
(V1.7) that eliminated the servo jitter.
This is the latest version of my object tracker, as used in SplatBot. I've used
20 IR LEDs to increase the range. They are limited to 50mA at the moment so
that they can't be damaged by faulty code. If I were to push them to their
limit then the range could be increased further, but they could then be
damaged by something like an interrupt routine occurring while the LEDs
are on.
This is the schematic.

Click on it for a larger picture. I found that with all the LEDs on, the sensors
were swamped by reflected IR from my hand even at a distance of about
400mm. The circuit works fine and I definitely get a lot more range, but I'm
going to have to remove the sensors from the board and mount them
separately, so that I can adjust their distance relative to each other to optimise
tracking and better shield them from ambient IR.

I've experimented with improving and simplifying the detection circuit. This
will give you better range.
The MPSA13 is a high-gain Darlington transistor with a hfe of over 5000. If
you get the MPSA14, it has about twice the gain. By adjusting the 500 ohm
trimpot you should get much better range than with the old circuit.

Micro-Controller USED:
A microcontroller is an entire computer manufactured on a single chip.
Microcontrollers are usually dedicated devices embedded within an
application, e.g. as engine controllers in automobiles and as exposure and
focus controllers in cameras. In order to serve these applications, they have
a high concentration of on-chip facilities such as serial ports, parallel
input/output ports, timers, counters, interrupt control, analog-to-digital
converters, random access memory, read only memory, etc. The I/O, memory,
and on-chip peripherals of a microcontroller are selected depending on the
specifics of the target application. Since microcontrollers are powerful
digital processors, the degree of control and programmability they provide
significantly enhances the effectiveness of the application.
Embedded control applications also distinguish the microcontroller from its
relative, the general-purpose microprocessor. Embedded systems often
require real-time operation and multitasking capabilities. Real-time
operation refers to the fact that the embedded controller must be able to
receive and process the signals from its environment as they are received.
Multitasking is the capability to perform many functions in a simultaneous
or quasi-simultaneous manner (Yeralan & Emery, 2000, p. 2).
Figure 2. Block diagram of a microcontroller (microElectronika, 2004)
The various components of the MCU shown in Figure 2 are explained below:
Random Access Memory (RAM): RAM is used for temporary storage of data
during runtime.
ROM: ROM is the memory which stores the program to be executed.
SFR Registers: Special Function Registers are special elements of RAM.
Program Counter: This is the "engine" which starts the program and points
to the memory address of the instruction to be executed. Immediately upon
its execution, the value of the counter increments by 1.
Control Logic: As the name implies, it supervises and controls every aspect
of operations within the MCU, and it cannot be manipulated. It comprises
several parts, the most important ones being the instruction decoder, the
Arithmetic Logic Unit (ALU), and the Accumulator.
A/D Converter: A/D stands for analog to digital. A/D converters convert
analog signals to digital signals.
I/O Ports: To be of any practical use, microcontrollers have ports which are
connected to the pins on the case. Every pin can be designated as either
input or output to suit the user's needs.
Oscillator: This is the rhythm section of the MCU. The stable pace provided
by this instrument allows harmonious and synchronous functioning of all
other parts of the MCU.
Timers: Timers can be used for measuring time between two occurrences
and can also behave like a counter. The Watchdog Timer resets the MCU
every time it overflows, and the program execution starts anew (much as if
the power had just been turned on).
Power Supply Circuit: This powers the MCU (microElectronika, 2004).
Methodology
The method employed in designing and constructing the robotic arm is
based on the operational characteristics and features of the microcontrollers,
the stepper motors, the electronic circuit diagram, and most importantly the
programming of the microcontroller and stepper motors.
Block Diagram
The block diagram of our work is as shown in Figure 3. It comprises the
control unit (MCU), the keypad, the power supply unit, the motors with
their transistor motor drivers, and the magnetic sensors with their transistor
sensor driver.
Figure 3. Block diagram
Circuit Diagram
The electronic circuit diagram of the development board, with the
connections of the identified components and devices, is shown in Figure 4.
The components shown are: the MCU, the LATCH 74LS373, the EPROM
2732, the Intel 8255 PIO, diodes, resistors, capacitors, inductors,
transistors, and op-amps. These components work together to achieve the
set goal of controlling the anthropomorphic-like arrangement of the stepper
motors. The microcontroller is the processing device that coordinates the
activities of all the components for proper functioning.
Power Supply
This is used to power the whole system, i.e. the Control Unit, the Magnetic
Sensing Unit, and the Stepper Motors. The transformer is a 220/12V
step-down transformer. We used a bridge rectifier to convert the 12V
alternating current to direct current.
The unregulated output from the filtering circuit is fed into the LM7805 and
LM7812 voltage regulators. These two are chosen for the design because the
LM7805 has an output of +5V, which is required to power the Control Unit
and the Magnetic Coil, while the LM7812 has an output of +12V, which is
required to power the Stepper Motors.
The TIP41 connected to the IC regulators functions as an emitter follower
amplifier, making sure that at least the voltage required by the Control Unit,
the Magnetic Coil, and the Stepper Motors is produced.
MCU 8051
This is the processor. It coordinates the operation of the robotic arm by
collecting information from the EPROM, the LATCH, and the PIO; it then
interprets and executes the instructions. It is the heart of the whole system.
LATCH 74LS373
This is a D-type transparent latch. It is an 8-bit register that has 3-state
bus-driving outputs, full parallel access for loading, and buffer control
inputs. It is transparent because when the enable (EN) input is high, the
output will look exactly like the D input. This latch separates the data and
address information from the MCU before sending it to the instructed
destination. The high-impedance state and increased high-logic-level drive
give these registers the capability of being connected directly to, and
driving, the bus lines in a bus-organized system without the need for
interface or pull-up components. These latches are particularly attractive for
implementing buffer registers, I/O ports, bidirectional bus drivers, and
working registers.
We are using this latch because there is a need to separate our 8-bit data
and 8-bit address information from the common line of the MCU, and send
them to the appropriate device(s).
8255 PIO
This is a programmable input/output device. It interfaces the 8051, the
LATCH 74LS373, and the EPROM 2732 to external devices such as the
stepper motors (as in our case), thereby allowing for communication.
EPROM 2732
EPROM stands for Erasable Programmable Read Only Memory. We made
use of this external EPROM specifically because it makes the controller
cheaper, allows for longer programs, and because its contents are retained
after the power is off and can be erased and reprogrammed when needed.
The overall layout of the complete circuit diagram of the whole control unit
is shown in Figure 4.
Figure 4. Control Unit (Digitouch, 2006)
Stepper Motor
The stepping motor is a motor that is driven and controlled by an electrical
pulse train generated by the MCU (or another digital device). Each pulse
drives the stepping motor by a fraction of one revolution, called the step
angle.
The Magnetic Sensing Unit
The magnetic sensing unit consists of a magnetic coil which can be
magnetized simply by the action of port P1.0 of the 8051. Port 1.0 was used
because, when designated as output, each of its pins can drive up to four
TTL inputs. That is why we have connected pin 1.0 to the magnetic coil
through three TTL logic gates.
The design is such that on the downward movement of the wrist, the 8051
sends an electrical signal to the Darlington pair connected to the magnetic
coil.
The magnetic sensing unit is powered by three BC548 NPN Darlington pair
transistors, each through a diode and a 5k resistor. The pair amplifies the
current and turns the magnetic coil into a magnet. Any magnetic material
can then be picked up (by attraction) and the movement continues.
The magnetic material can then be dropped at another point when the wrist
is made to come down; this also is an action from the 8051, as it withdraws
the electrical signal from the coil.
Control Circuit
This is the control panel of the system, as it oversees the operations of the
mechanical arm and the magnetic sensing unit. The MCU 8051 of the
control unit acts as the brain of the control panel, coordinating the activities
of all the other devices. When power (+5V) was supplied to the control unit,
the MCU started off by loading the program from the EPROM M2732A,
then interpreted and executed the instruction codes through the various
operational principles described in detail in chapter three (section 3.2).
The 8051 then sends signals to the stepper motors, which move 9° per step.
The stepper motor at the wrist (M3) first moves five times (45°), turning the
gears to cause a downward movement of the hand. The stepper motor at the
shoulder (M2) moves next, stepping five times (45°) and making the
connected gears move the arm 45° forward. Then the stepper motor at the
base (M1) moves either ten times (90°) or twenty times (180°), depending
on the button pressed, causing the whole structure to turn from right to left
(or vice versa) through the connected gears.
The magnetic coil resting on the hand becomes magnetized immediately
after the last gear on the hand stops moving. It then magnetizes (picks up)
any magnetic material it can find; M3 and M2 move the arm up while M1
rotates the structure from left to right (or vice versa), and then the 8051
demagnetizes the magnetic coil, making the hand drop the metallic object.
A sketch of this sequence in code form follows.
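For illustration only, here is a high-level Python sketch of that pick-and-place
sequence. The step() and set_magnet() helpers are hypothetical stand-ins of
mine for the 8051's pulse-train and port-P1.0 logic described above; only the
step counts and ordering come from the text.

STEP_ANGLE = 9  # degrees per step, as stated above

def step(motor, steps, direction=+1):
    """Hypothetical stand-in for the MCU's pulse-train driver."""
    print(f"{motor}: {steps} steps = {steps * STEP_ANGLE} degrees, direction {direction:+d}")

def set_magnet(on):
    """Hypothetical stand-in for (de)magnetizing the coil via port P1.0."""
    print("magnet on" if on else "magnet off")

def pick_and_place(base_steps):   # base_steps: 10 for 90 degrees, 20 for 180
    step("M3", 5)                 # wrist down 45 degrees
    step("M2", 5)                 # shoulder forward 45 degrees
    step("M1", base_steps)        # rotate the whole structure
    set_magnet(True)              # pick up any magnetic material
    step("M3", 5, -1)             # wrist back up
    step("M2", 5, -1)             # shoulder back
    step("M1", base_steps, -1)    # rotate back the other way
    set_magnet(False)             # drop the object

pick_and_place(10)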
Results
This work successfully accomplishes the defined functionality. A sample
robot which can rotate, magnetize an object, and lower and raise its arm,
controlled by the 8051 microcontroller, was built successfully. The 8051
development board was soldered following the procedure required for
correct operation of the controller. The 8051 development board has been
interfaced to the stepper motors such that the anthropomorphic-like
structure can be controlled from the buttons at the base of the structure
(robotic arm).
There are four buttons controlled by the control unit at the base of the arm:
• ON/OFF: the ON button turns on the system, while the OFF button turns
off the system.
• START/STOP: the START button starts the movement of the whole arm
from its reset point, while the STOP button takes the arm back to its reset
point after completion of its movement.
• RIGHT-LEFT/LEFT-RIGHT: when this button is switched to the
RIGHT-LEFT position it causes movement from right to left, while the
LEFT-RIGHT position causes movement from left to right.
• 180/90: when the button is on 180, it causes a rotation of 180 degrees of
the base stepper motor; when it is on 90, it causes a rotation of 90 degrees.
Conclusion
In this paper we have interfaced the robot with different kinds of I/O
devices, and our method allows for storing more programs to enhance
functionality. From our work, we deduced that, in comparison to humans,
robots can be much stronger and are therefore able to lift heavier weights
and exert larger forces. They can be very precise in their movements,
reduce labor costs, improve working conditions, reduce material wastage,
and improve product quality (Mair, 1988). This is why they are very
important in industry, because the overall objective of industrial
engineering is productivity (Hicks, 1994, p. 61).
Meanwhile, intelligent control is the discipline that implements Intelligent
Machines (IMs) to perform anthropomorphic tasks with minimum
supervision and interaction with a human operator. This project can be
further enhanced as a multi-disciplinary project involving electrical and
computer engineers working together to create more complex, intelligent
robots controlled by the 8051 micro-controller.
References
Bejczy, A. K., & Jau, B. M. (1986). Smart hand systems for robotics and teleoperation. Proceedings of the Sixth CISM-IFToMM Symposium on Theory and Practice of Robots and Manipulators. Hermes Publishing.
Engelberger, J. F. (1989). Robotics in service (pp. 224-226). Kogan Page.
Hall, D. V. (2004). Microprocessors and interfacing: Programming and hardware (pp. 278-279). McGraw-Hill.
Hicks, P. E. (1994). Industrial engineering and management. McGraw-Hill.
Mair, G. M. (1988). Industrial robotics. Prentice Hall.
microElectronika. (2004). Architecture and programming of 8051 MCU (Chapter 1).
Pont, M. J. (2002). Embedded C. Addison-Wesley.
Robotics Research Group. (n.d.). Learn more: History. Retrieved from http://www.robotics.utexas.edu/rrg/learn_more/history/
Selig, J. M. (1992). Introductory robotics. Prentice Hall.
Valavanis, K. P., & Saridis, G. N. (1992). Intelligent robotics. Academic Publishers.
Yeralan, S., & Emery, H. (2000). Programming and interfacing the 8051 microcontroller in C and Assembly. Rigel Press.
