
Robot Sensors

Unit-IV
Need for sensors, types of sensors, robot vision systems, robot tactile
systems, robot proximity sensors, robot speech and hearing, speech
synthesis, noise command systems, speech recognition systems.
Need for Sensing System
•Human operators are constantly receiving and acting upon large amounts of sensory feedback. It is via such feedback that
human operators are able to make sense of their surroundings and perform both simple and complex tasks. It also enables
human operators to make sense of uncertain situations and adapt themselves accordingly.
•If industrial robots are to be capable of performing the same types of tasks without constant human supervision they too
must be equipped with sensory feedback. The feedback from different sensors is then analysed via a digital computer and
associated software.
•A robot which can see and feel is easier to train to perform complex tasks. There are many reasons for
employing sensory feedback in robots; some are as follows:
 To provide positional and velocity information concerning the joint, arm and end-effector status.
 To prevent damage to robot itself, its surroundings and humans
 To provide identification and real time information indicating the presence of different types of components and
concerning the nature of tasks performed.
• As the tasks to which industrial robots are applied become more demanding, there is an increasing need for sensory
feedback. Sensory feedback is needed whenever there is uncertainty and unpredictability.
Sensory devices
• A sensor is a device that converts a physical variable into an electrical variable
• A sensor needs calibration before it can be used for measurement
• Calibration is the process of establishing the relation between the measured variable and the converted
output signal (a minimal calibration sketch is given after the list of characteristics below).
• Important characteristics of sensing devices:
• Range, Response, Accuracy, Sensitivity and Linearity
 Range: The minimum and maximum values of the input signal to which the sensor can respond
(a wide range is desirable).
 Response: The time taken by the sensor to respond to a change in input (ideally minimal)
 Accuracy: The output of the sensor should reflect the true value of the sensed input (high)
 Sensitivity: The change in output for a given change in input (high)
 Linearity: The sensor should have the same sensitivity over a wide range of operating conditions
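As a minimal illustration of calibration, the sketch below fits a straight line between known input values and the corresponding sensor outputs; the sensor, its gain/offset and all readings are assumed for illustration only.

```python
# Minimal calibration sketch (assumed linear sensor, made-up sample data).
# A least-squares fit establishes the relation between the measured
# variable and the sensor's electrical output.
import numpy as np

reference = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # known input values
output_v  = np.array([0.10, 1.32, 2.55, 3.71, 4.95])   # measured sensor voltages

# Fit output_v = gain * reference + offset
gain, offset = np.polyfit(reference, output_v, 1)

def measure(voltage):
    """Convert a raw sensor voltage back to the physical variable."""
    return (voltage - offset) / gain

print(round(measure(2.0), 1))   # roughly 39-40 in this made-up example
```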
Other characteristics of sensory devices:

•The following are other considerations:
1. The device should not disturb or have any effect upon the quantity it senses or measures.
2. The device should be suitable for the environment in which it is to be employed.
3. Ideally, the device should be isolated from excess signals or electrical noise that could give
rise to misoperation of, or damage to, the sensor, circuit or system.
• Also important are the physical size, cost and ease of operation
Types of sensors
• Classification of sensors:
• Contact and Non-Contact sensors
• Prominent Contact sensors: Touch, Slip, Force, and
Torque
• Prominent Non-Contact sensors: Range, Proximity and
Vision, which rely on the response of a detector to
variations in acoustic or electromagnetic radiation.
Contact Sensors
• These sensors detect contact between the manipulator hand and an
object in the workspace
e.g.: object location, recognition, force control
• Categories of Contact sensors
1.Binary or Touch
2.Analog or Force
• Binary/Touch sensors respond to the presence or absence of an
object by giving a binary response either “0” or “1”
• Analog/Force sensors respond to the magnitude of the local contact
force as well as to the contact itself.
Touch Sensors
• Touch Sensors are used to indicate that contact has been
made between two objects without regard to the
magnitude of the contacting force. Included within this
category are simple devices such as limit switches,
microswitches, etc.
• In the simplest arrangement, a switch is placed on the inner
surface of each finger of a manipulator hand as illustrated in
the figure below:

• Multiple binary touch sensors can be used on the inside or
outside surface of each finger to provide further tactile
information. The latter use is for providing control signals
useful for guiding the hand through the workspace
and is analogous to what humans do in feeling their way
in a totally dark room.
Position and Displacement (P&D) Sensors

•Position and displacement sensors are used as components of the robot control system. The control structure of a
robot needs to know the position of each joint in order to calculate the position of the end-effector thus
enabling the successful completion of the programmed task. The movements of the joints can be either linear
or angular depending on the type of robot.
• Future developments in the robotics field may allow the use of external sensors to actually measure the
end-effector position in relation to its surroundings, thereby dispensing with the need to calculate it from the
joint positions. But, for the present, internal position sensors remain the most accurate and reliable way of
determining the end-effector position within a robot control structure.
•There are two main types of position sensors: absolute and incremental, the latter being also called
displacement sensors.

• Some of the common types of position sensors are:
• Potentiometers, Encoders, Force and Torque, Wrist, Joint, Tactile Array, and Slip sensors for robot grippers
Potentiometers (POT)
• “POT” is an analog device whose output voltage is proportional to the
position of a wiper.
• POTs are inexpensive and easy to apply but temperature sensitive.
Potentiometers (POT)
•These are analog devices whose output voltage is proportional to the position (linear or angular) of a wiper.
Potentiometers may be either linear or angular. When voltage is applied across the resistive element, the output voltage
between the wiper and the ground is proportional to the ratio of the resistance on one side of the wiper to the total
resistance of the resistive element, which essentially gives the position of the wiper. The function of a potentiometer
can be represented by the function,
•Vo = K·θ, where Vo = output voltage,
•K = voltage constant of the potentiometer in volts per radian (for an angular pot)
or volts per mm (for a linear pot), and θ = position of the pot in radians or mm

• Potentiometers are relatively inexpensive and easy to apply. However, they are temperature sensitive, a characteristic
that also affects their accuracy. The wiper contact is another limiting factor, being subject to wear and producing
electrical noise.
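A minimal sketch of the relation Vo = K·θ for an angular pot; the constant K and the measured voltage below are assumed values, not from the text.

```python
# Reading joint angle from an angular potentiometer (illustrative values).
# Vo = K * theta, so theta = Vo / K once K is known from calibration.
K = 1.5          # assumed voltage constant, volts per radian
v_out = 2.1      # assumed measured wiper voltage, volts

theta = v_out / K        # joint position in radians
print(f"joint angle = {theta:.2f} rad")
```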
Encoders
• Encoders are digital devices which are non-contact position sensors
• Classification of encoders
• Incremental
• Absolute
• Incremental encoders: a disc is encoded with alternating transparent and opaque (light and
dark) stripes aligned radially
• Photo transmitter (light source) and Photo receiver (photocell) are located on either side of
the disc
• When the disc rotates, light passes through alternately, and the frequency of the received pulse
train is proportional to the speed of the disc.
• By counting the number of pulses, adding or subtracting according to the direction of rotation, it is
possible to use the encoder for position information with respect to a known starting position (a counting sketch is given below).
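A small counting sketch of the idea just described; the pulses-per-revolution figure and the direction callback are assumptions for illustration.

```python
# Sketch of incremental-encoder pulse counting (assumed parameters).
# Counting pulses, with +1/-1 depending on direction, gives position
# relative to a known starting point; the pulse rate gives speed.
PULSES_PER_REV = 1024          # assumed encoder resolution
count = 0                      # running pulse count from the start position

def on_pulse(direction):
    """Called once per detected pulse; direction is +1 (CW) or -1 (CCW)."""
    global count
    count += direction

def shaft_angle_deg():
    return 360.0 * count / PULSES_PER_REV

# Example: 256 pulses clockwise = quarter turn
for _ in range(256):
    on_pulse(+1)
print(shaft_angle_deg())   # 90.0
```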
Absolute Encoder
• Absolute encoders with which position can be known in absolute terms employ the same basic construction as
incremental encoders except that there are more tracks of stripes and a corresponding number of transmitters and
receivers. The stripes are usually aligned to provide a binary number proportional to the shaft angle and the angle can be
read directly from the encoder without any counting. The resolution of an absolute encoder depends on the number of
tracks (n) and is given by,
Resolution = 2^n
• A 3-bit encoder is illustrated in fig. the binary code at each of the eight radial positions is generated by a unique
combination of ON’s and OFF’s of photo cells.
• The resulting code is shown in the table. Malfunctions may occur if the photocells become skewed from a radial line. At the
transition points between numbers 1 and 2, 3 and 4, and 5 and 6, several photocells change their states at the same time,
which can be another source of malfunction.
• To overcome this problem the natural binary code is modified so that at any transition position a change in only one
binary digit is necessary. The resulting code is known as the Gray code. A Gray-coded disc increases reliability but requires
the use of additional decoder circuitry (a decoding sketch is given below).
Encoder and binary/gray code
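A short decoding sketch for the Gray-code idea above; the 3-track example mirrors the figure, while the specific reading is made up.

```python
# Gray-code decoding sketch for an n-track absolute encoder.
# Only one bit changes between adjacent shaft positions, so a skewed
# photocell can be off by at most one position.

def gray_to_binary(gray):
    """Convert a Gray-coded reading to its natural-binary position."""
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray
    return binary

n_tracks = 3
resolution = 2 ** n_tracks        # 8 positions for a 3-bit encoder
reading = 0b110                   # example Gray-coded photocell pattern
print(gray_to_binary(reading))    # position 4 of 0..7
```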
LVDT:

• The linear variable differential transformer (LVDT) is
another type of position sensor, whose construction is
shown in the figure below. It consists of a primary, two
secondaries, and a movable core. The primary is excited with
an a.c. source.
• When the core is in its exact central location, the amplitude
of the voltage induced in secondary 1 will be the same as that in
secondary 2.
• The secondaries are connected in series opposition so that their
outputs cancel, and the output voltage is zero at this point. The figure below
illustrates the nature of the output voltage as the core is moved
to the left or the right. Finally, the a.c. output of the LVDT can be
converted to d.c. using rectifiers (a small conversion sketch follows the figure).
LVDT OUTPUT VOLTAGE VS CORE POSITION
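A minimal sketch of using the rectified (demodulated) LVDT output: the output is proportional to core displacement, with its sign indicating the direction of travel. The sensitivity value is an assumption.

```python
# LVDT sketch (assumed sensitivity): the demodulated differential output
# is proportional to core displacement; its sign gives the direction.
SENSITIVITY = 0.25        # assumed volts per mm of core travel

def core_displacement(v_out):
    """Return core displacement in mm from the demodulated output voltage."""
    return v_out / SENSITIVITY

print(core_displacement(0.0))    # 0.0 mm  -> core at the null (central) position
print(core_displacement(-0.5))   # -2.0 mm -> core moved to the other side
```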
FORCE and TORQUE sensors
• Measurement of force permits the robot to perform different
tasks such as grasping parts, machine loading and assembly.
• One force-sensing technique is wrist sensing, where strain gauges are
mounted between the tip of the robot arm and the end-effector
• Joint sensing is another technique, where the sensor measures the
Cartesian components of force and torque acting on a robot joint
• Finally, a third technique is to form an array of force sensing elements so
that the shape and other information about the contact surface can be
determined. [https://www.youtube.com/watch?v=22XGs9I0dZs]
Strain gauge
type wrist
force sensors
• The purpose of a force sensing wrist is to provide information
about the three components of force (Fx, Fy, and Fz) and the
three moments (Mx, My, Mz) being applied at the end of the
arm.
• Wrist sensors are small, sensitive, light weight and compact in
design.
• As an example, the sensor shown in Fig. 5.7 (a) uses 8 pairs of
semiconductor strain gauges mounted on four deflection
bars, one gauge on each side of a deflection bar. Since the
eight pairs of strain gauges are oriented normal to the X, Y and Z
axes of the force co-ordinate frame, the three components of
the force, F, and the three components of the moments, M, can be
determined by properly adding and subtracting the output
voltages (a sketch using a calibration matrix is given below).
• Design of Force- Torque wrist sensor shown in Fig. 5.7(b) can
measure the three components of force and the three
components of torque in the cartesian coordinate frame.
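One common way to express the "adding and subtracting of output voltages" is a calibration matrix that maps the eight gauge readings to the six force/moment components; the matrix entries and voltages below are placeholders, not values from the text.

```python
# Sketch of resolving wrist-sensor gauge outputs into forces and moments.
# The eight gauge-pair voltages are combined by a 6x8 calibration matrix
# (values here are placeholders) to give (Fx, Fy, Fz, Mx, My, Mz).
import numpy as np

C = np.zeros((6, 8))          # calibration matrix, found experimentally
C[0, 1], C[0, 5] = 1.0, -1.0  # e.g. Fx from the difference of two gauge pairs
# ... remaining rows filled in from the sensor's calibration data ...

w = np.array([0.12, 0.40, -0.05, 0.00, 0.03, -0.38, 0.01, 0.02])  # gauge voltages
F = C @ w                      # [Fx, Fy, Fz, Mx, My, Mz]
print(F)
```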
JOINT SENSORS
• Many robots are powered by DC servo motors, for which the output
torque is linearly related to the armature current in the motor: T = kI

• Thus, it is possible to measure the torque of each robot joint by
inserting a suitable series resistor in one lead of each servo motor
and measuring the voltage across it, which is proportional to the motor
current (V = IR) and hence related to the motor torque (a small sketch follows below).

• Motor current measurement is simple and inexpensive but is not
without drawbacks: accuracy is affected by any friction in the motor
bearings, associated gears and joint bearings.

• The measurements reflect not only the force being applied at the
tool (end-effector) but also the forces and torques required to accelerate
the links of the arm and to overcome the friction and transmission losses
of the joints.
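A small sketch of the current-sensing idea (T = kI, V = IR); the torque constant and sense-resistor value are assumed.

```python
# Joint-torque estimation from DC servo motor current (assumed constants).
# The voltage across a series sense resistor gives I = V / R, and the
# motor torque is approximately T = k * I.
K_T = 0.08          # assumed torque constant, N*m per ampere
R_SENSE = 0.1       # assumed series sense resistor, ohms

def joint_torque(v_sense):
    current = v_sense / R_SENSE      # I = V / R
    return K_T * current             # T = k * I

print(joint_torque(0.25))   # 0.2 N*m for a 0.25 V drop across the resistor
```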
Different types of Tactile Sensors
Robot hand with Tactile Array Sensors
• A tactile sensor is a special type of force
sensor composed of a matrix of force-sensing
elements.
• Characteristics of the object in contact
with the array surface, such as object
presence, contact area, shape, location
and orientation, can be determined.
• Pressure and pressure distribution are
identified by tactile sensors, which can be
mounted in the fingers of the robot
gripper or attached to a worktable as a
flat touch surface.
• The light received by a phototransistor
from an LED determines the force on each
sensing element.
Robot hand with Tactile Array Sensors
• The inner surface of each finger has been covered with
the tactile sensing array.
• The external sensing plates are typically binary devices,
which are fitted to the outer jaw surface.
• The inner face has a matrix of touch buttons to sense
the work piece.
• The outer sensor may be used to detect unavoidable
obstacles and prevent the damage to the manipulator.
• The inner-mounted tactile sensor array is useful for
getting information about the object before it is acquired
at a particular location (a processing sketch is given below).
• The array may be composed of multiple individual
sensors.
• The force on each sensor acts against the washer,
displacing a vane that controls the amount of light
received by a phototransistor from an LED. The tactile
array can be made of an artificial skin
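A minimal sketch of extracting contact area, location and total force from a tactile array reading; the 4x4 pressure values and threshold are made up for illustration.

```python
# Sketch of extracting contact information from a tactile array
# (made-up 4x4 pressure readings from the finger-mounted matrix).
import numpy as np

pressure = np.array([
    [0, 0, 0, 0],
    [0, 3, 5, 0],
    [0, 4, 6, 0],
    [0, 0, 0, 0],
], dtype=float)

contact = pressure > 1.0                  # which elements are touched
area = contact.sum()                      # contact area in sensor elements
rows, cols = np.nonzero(contact)
centroid = (rows.mean(), cols.mean())     # approximate contact location
total_force = pressure.sum()              # overall grasp force (uncalibrated)

print(area, centroid, total_force)
```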
Robot Proximity Sensors
• The presence of an object can be sensed by a proximity
sensor
• Photoelectric proximity sensors control the motion of a
manipulator arm
• Proximity sensors are located on the end-effector or wrist of
a robot arm.
• A typical sensor consists of a solid-state LED, which acts as a
transmitter of IR light, and a solid-state photodiode, which acts
as a receiver.
• The sensing volume is approximately the intersection of the
two cones in front of the sensor.
• If the reflectance and the angle of incidence are fixed, the
distance can be measured with suitable calibration.
• When the received light exceeds a threshold value it
corresponds to a predetermined distance (a simple detection sketch follows below).
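A minimal sketch of the threshold idea above; the threshold level and readings are assumed, and in practice the threshold comes from calibration against a known distance.

```python
# Proximity-detection sketch: the photodiode reading rises as a surface
# enters the sensing volume; crossing a calibrated threshold corresponds
# to a predetermined distance (values here are assumed).
THRESHOLD = 0.6     # calibrated reflected-light level for the trigger distance

def object_present(photodiode_reading):
    """Return True when the reflected IR level indicates a nearby object."""
    return photodiode_reading >= THRESHOLD

print(object_present(0.2))   # False - nothing in range
print(object_present(0.8))   # True  - object closer than the trigger distance
```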
Robot Proximity Sensors
• The figure indicates a proximity sensor that
locates a part. The distance between the
target and the array of light sensors is
given by d = (1/2) tan θ.
• The surface of the target is parallel to the
sensing array.
• Optical devices, acoustics, eddy
currents, magnetic fields, etc. are
used in designing proximity
sensors.
• A proximity sensor is a contactless sensor.
• An infrared LED (transmitter) and a
photodiode (receiver) are mounted in a
small package.
Range Imaging Sensor:

• A typical range imaging sensor uses a laser
scanner, which follows one of two basic
schemes:
1. Transmitting a laser pulse and
measuring the time of arrival of the reflected
signal.
2. Transmitting an amplitude-modulated
laser beam and measuring the phase
shift of the reflected signal.
• The transmitted beam and the received light
are essentially coaxial. The principle of the range
sensor is shown in the figure (a small sketch of
both schemes is given below).
[https://www.youtube.com/watch?
v=cSe7kQiASPY]
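A small sketch of both laser-ranging schemes; the modulation frequency and timings are example values only.

```python
# Range-sensing sketch for the two laser schemes described above.
import math

C = 3.0e8   # speed of light, m/s

def range_from_time_of_flight(t_round_trip):
    """Scheme 1: distance from the round-trip time of a laser pulse."""
    return C * t_round_trip / 2.0

def range_from_phase_shift(phase_rad, mod_freq_hz):
    """Scheme 2: distance from the phase shift of an amplitude-modulated beam
    (unambiguous only within half a modulation wavelength)."""
    wavelength = C / mod_freq_hz
    return (phase_rad / (2.0 * math.pi)) * wavelength / 2.0

print(range_from_time_of_flight(66.7e-9))        # ~10 m
print(range_from_phase_shift(math.pi, 10e6))     # 7.5 m at 10 MHz modulation
```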
Robot Vision Systems
• Robot Vision or Computer Vision or Machine Vision is a prominent sensor technology used in many industrial applications
• Robot vision may be defined as a process of extracting, characterizing and Interpreting information from images of a 3D world
• Robot vision consists of three functions:
1. Sensing and digitizing: a process that yields a visual image of sufficient contrast, which is typically digitized and stored in computer
memory.
2. Image processing and analysis: the digitized image is subjected to image processing and analysis for data reduction and
interpretation of the image, i.e. preprocessing, segmentation, description, recognition and interpretation (a minimal sketch is given after this list).
 Pre-processing: deals with noise reduction and image enhancement.
 Segmentation: partitions an image into objects of interest.
 Description: computes various features such as size and shape, suitable for differentiating one object from another.
 Recognition: identifies the object.
 Interpretation: assigns meaning to an ensemble of recognized objects in the scene.
3. Application: current applications of robot vision include inspection, part identification, location and orientation.
Research is ongoing into advanced applications such as complex inspection, guidance and navigation.
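A minimal sketch of the sensing/segmentation/description steps listed above, using a stand-in image and a simple global threshold; real systems use proper preprocessing and more robust segmentation.

```python
# Minimal sensing-and-analysis sketch (assumed 8-bit grayscale image):
# thresholding segments the object from the background, and simple
# features (area, centroid) are computed for description/recognition.
import numpy as np

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image

smoothed = image.astype(float)              # a preprocessing step would go here
mask = smoothed > 128                       # segmentation by global threshold

area = mask.sum()                           # description: object size in pixels
ys, xs = np.nonzero(mask)
centroid = (ys.mean(), xs.mean()) if area else None   # object location

print(area, centroid)
```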
Machine vision system functions
Levels of Robot vision system
• Low level vision: Involves sensing and Preprocessing (Stereo-Imaging)
• Medium level vision: Consists of extraction, characterization and
labelling components in an image resulting from low level vision
• Segmentation, description, and recognition of individual objects are
treated as medium level vision functions
• High level vision: It refers to processes that attempt to emulate
cognition.
• Interpretation is treated as high level vision processing
Applications of Robot Vision System
• Inspection: Inspection is carried out by the vision system, and the robot is used in a secondary role to support
the application
• Objectives of vision system include checking for surface defects, verification of presence of
components in assembly, measuring for dimensional accuracy, discovery of flaws in labelling during
final inspection, checking for the presence of holes and other features in a part
• Identification: The action here is to recognize and classify an object rather than to inspect it, i.e.
either accept or reject
• Objectives include part sorting, palletizing and depalletizing, and picking parts that are randomly
oriented from a conveyor or a bin
• Navigation: Here, the robot takes action or moves according to the visual input.
• Objective is to control the trajectory of the robot’s end effector towards the object in workspace
• Activities include part positioning, retrieving and reorienting parts that are moving over a conveyor,
assembly, bin picking, seam tracking in continuous arc welding, and automatic robot path planning
and collision avoidance using visual data.
• [https://www.youtube.com/watch?v=6NoE-m_Jzl0]
Robot Speech and hearing
• A language is a system of communication in speech and writing.
• A natural language is one that has developed in a natural way and has not been
designed by a human.
• The main challenge of Artificial Intelligence (AI) is to identify the
correct meaning for a particular situation.
Speech synthesis:

• Speech synthesis is a process by which a machine produces speech.
• Robots working in a noisy factory have no need for a voice, but robots in a quieter environment could
benefit from one.
• Voice output, or speech synthesis, is considerably easier to add to a machine than vision processing or
voice input.
• When phonic integrated circuits are used, voice output requires very little computing power. All the
computer has to do is supply a series of addresses to the integrated circuit.
• A machine capable of speaking could communicate with a human without requiring the human to look at
the lights or other output devices on the machine. Speech is omnidirectional and is understood by most
humans; thus, humans could listen to the speech output of a machine while focusing their vision on some
other task.
• [https://www.youtube.com/watch?v=XsMRxNSDccc]
HOW IS SPEECH SYNTHESIS DONE?

• Three main methods are used to produce machine speech output:
1. using prerecorded messages produced by human voices;
2. using digitized words produced by human voices; and
3. using phonic integrated circuits.
• All of these methods rely on the use of electronics, and each method has its advantages and disadvantages. No
matter what method is used for speech synthesis, the speech must be understandable to humans.

Using Prerecorded Human Voice:
• Early electronic speech units used tape recordings of a human voice. This method produces a machine voice that
sounds like a natural human voice. The human voice is an analog form of energy composed of a complex mixture
of tones. Because it is not a simple tone, it is not easy to reproduce by machine.
• Of course using prerecorded messages requires anticipating every possible message that the machine will have to
speak. Then there is the problem of storing and accessing these messages. If they are stored on a single magnetic
tape, the tape will have to be searched each time for the appropriate message. If each message is placed on a
separate tape, you must supply a tape player for each message.
• Perhaps the messages can be put on short loops of tape and you can use a tape player with a method of finding,
playing, and storing the different loops, something like a jukebox.
Using Digitized Human Voice

• A system that uses a digitized human voice has the advantage of being able to store the sounds in the computer's
memory, like any other numeric information.
• Once again, the sounds are originally made by a human being, but in this case the sounds are converted into
numbers and stored as digital information.
• While present-day digitally recorded compact disk systems sound as good as the older analog recorded
records, digitized speech systems have not been as successful, since they cannot record all the tones in human
speech. Still, the sounds do closely resemble human speech.
• With the information being stored in the computer's memory, it is possible to find the proper message for each
occasion within less than 1 second, so the digitized system has a fast response and still sounds good.
• It is possible to record and store digitized messages one word at a time. However, humans speak not just in
individual words but in whole sentences with wide variations in tone, speed, and emphasis. When individual
words are recorded and later strung together into messages, the messages sound flat, mechanical, and
emotionless.
Using Phonic Integrated Circuits
• The least expensive and most popular method of producing speech output for machines is to use phonic integrated circuits. A phonic
integrated circuit uses phonemes, the smallest distinct units of speech, to produce speech.

• A typical phonic integrated circuit may have up to 64 phonemes, including 5 different lengths of pauses. The other 59 phonemes are
such things as the "OY" in boy and the "AY" in sky.

• Each phoneme is stored in the speech processor integrated circuit at one of 64 addresses.

• When the external parts are connected to the processor to form a working audio amplifier, all you need to do to get a sound is to give
the processor the address of the desired phoneme.

• By giving the processor a series of addresses, you can produce words and sentences. To produce the word may, for example, you
would simply address an "MM," as in milk, and then an "EY," as in beige. Producing the word six is a little more complicated. It
requires first an "SS," as in vest; then another "SS"; then an "IH" as in sit; then another "IH"; then a pause of 50 milliseconds called
"PA3''; then a "KK2," as in sky; and finally another "SS." As you can see, using a phonic integrated circuit requires a little bit of work.
WHAT IS SPEECH UNDERSTANDING?
• Speech understanding is really composed of two separate tasks:
1.Speech recognition and
2.Word understanding.
• Speech recognition deals with recognizing that a certain series of sounds
represents a certain word.
• Word understanding, or natural language understanding, deals with the
relationships between different words. It turns out that, at best, people think that
they hear and understand 80 percent of what is said. Actually, the percentage
may well be less than 50 percent.
• One factor that contributes to our low speech recognition percentage is the
prevalence in English of homonyms: words that sound alike but have different
meanings.
• Following is a short list of homonyms:
• Homonyms may also consist of two groups of words. For example, try saying the following phrases over and over quickly:
a. Recognize speech
b. Wreck a nice beach.

WHY IS SPEECH UNDERSTANDING DESIRABLE?


• Having a robot respond to spoken commands can make programming the robot easier. Currently, most robots are
programmed through the pressing of keys or buttons on a keyboard, control panel, or teaching pendant. A
person whose hands are full at a time when a command must be given to the robot cannot manage this job.
• Voice-controlled robots are also helpful to the disabled. A quadriplegic, for example, can use a voice-controlled
educational or small industrial robot to perform tasks.

HEARING:
• Speech understanding starts with hearing sounds, which must then be recognized as words. Finally, the words
must be combined to produce idea understanding.
• Hearing is the process of responding to sound waves. This generally involves having some type of specialized
sound-wave receptor, or ear. For a machine, the ear frequently takes the form of some type of microphone, which
is a transducer that converts sound waves to electrical energy.
Figures 20-5 and 20-6 show the signals produced by the words up, down, left, right, forward, and reverse. The
similarity in pattern among these words is especially a problem for a voice recognition system that only looks at
how long the words are.
NOISE COMMAND
SYSTEMS:
• Noise command systems use the most primitive level of hearing.
• The simplest of them wait for and respond to noises above some minimum volume.
• The first such sound might cause the device to start running forward; a second such
sound might cause it to reverse directions, and a third such sound might cause it to
stop.
• Even when dealing with such simple hearing, it is possible to work out a series of
commands for a robot.
• For instance, the robot might measure the length of time between noises, or it
might count the number of noises in a given time period. Thus, one noise in a given
time period might be the command to start, two noise detections in one time period
might be the command to stop, and three and four noises in the time period might be
the commands for left and right (see the sketch below).
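A minimal sketch of counting noises in a time window and mapping the count to a command; the volume threshold, window samples and command table are assumptions.

```python
# Noise-command sketch: count loud sounds inside a listening window and
# map the count to a command (threshold and window contents are assumed).
VOLUME_THRESHOLD = 0.5
COMMANDS = {1: "start", 2: "stop", 3: "left", 4: "right"}

def interpret_window(samples):
    """samples: per-interval loudness readings within one time window."""
    noises = 0
    above = False
    for level in samples:
        if level >= VOLUME_THRESHOLD and not above:
            noises += 1            # count each burst of noise once
        above = level >= VOLUME_THRESHOLD
    return COMMANDS.get(noises, "ignore")

print(interpret_window([0.1, 0.9, 0.2, 0.8, 0.1]))   # two noises -> "stop"
```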
SPEECH RECOGNITION SYSTEMS

• Speech recognition systems are used to recognize spoken words. These may work on one
word at a time or on continuous speech.
•The system may be tuned to a single person’s voice, or it may be usable by many
different persons.
•The least expensive systems operate with isolated words and are speaker-dependent.
• The most expensive systems, while still highly experimental, are being directed
toward recognition of continuous speech and toward speaker independence.
Speaker-dependent Systems
• A speaker-dependent system is a speech recognition system that recognizes only the words or commands spoken
by one particular human.
• A word spoken by a human has a very complicated waveform that is as unique as a fingerprint.
• In fact, the electronic recordings of human words are called voice prints.
• A speaker-dependent system is easier to build than a speaker-independent system; however, it must be trained
by having the intended user speak the command words while the system records them (a simple matching sketch is given below).
• One obvious problem with using a speaker-dependent system is that it must be retrained when you switch
workers. Indeed, if the machine is used on more than one shift, it may require retraining at the beginning of each
shift.
• Alternatively, the speech recognition system might be required to memorize voice prints of the same commands
as given by several persons.
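A simplified sketch of matching an utterance against stored "voice prints"; real systems use richer acoustic features and time alignment, and the feature vectors here are invented.

```python
# Sketch of speaker-dependent matching: each trained command is stored as
# a template ("voice print", here just a feature vector), and an utterance
# is matched to the nearest stored template. This only shows the idea.
import numpy as np

templates = {                      # recorded during the training session
    "start": np.array([0.8, 0.1, 0.3, 0.5]),
    "stop":  np.array([0.2, 0.9, 0.4, 0.1]),
}

def recognize(utterance_features, max_distance=0.5):
    best, best_d = None, float("inf")
    for word, template in templates.items():
        d = np.linalg.norm(utterance_features - template)
        if d < best_d:
            best, best_d = word, d
    return best if best_d <= max_distance else None   # None -> ask again

print(recognize(np.array([0.75, 0.15, 0.35, 0.45])))   # "start"
```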
Speaker-Independent
Systems
• A speaker-independent system is a voice recognition system designed to recognize commands
given to it, no matter who does the speaking. This presents many problems to the system, since
different persons say words at different speeds, frequencies and inflections. Consequently, a
speaker-independent system must analyze the speech in several steps and then base its
identification on these efforts.
• First it must break the sounds it hears down into phonemes; that is, it must find the pauses in the
words. Then it must look the phonemes up in its memory and try to find the words they
represent, which may require storing several pronunciations of the words. Then it must compare
these words against known commands. Finally, it must carry out the command. If a command is
not understood, the robot, or at least the speech recognition system, must request that the command
be given again.
• It may be necessary to train people in the use of the speaker-independent system. In particular,
they must make an effort to speak very distinctly.
• Even the speaker-independent system has problems, however, under very noisy
conditions or with persons with heavy accents.
[https://www.youtube.com/watch?v=oTbPry0tk1A]
Isolated-word
Recognition Systems
• The term isolated-word recognition is applied to a speech recognition
system that only hears one word at a time. It processes the word it hears
and performs the associated command.
• If multiple words are spoken to an isolated-word recognition system, it
treats them as a single word or command. If two commands are spoken
too close together, they are taken as a single unrecognized command. As
a result, an isolated-word recognition system is slow and cannot follow
a conversation.
• Most present-day word recognition systems operate with isolated words,
whether they are speaker-dependent or speaker-independent. These
systems are all right for giving simple commands to a machine or robot,
but they do not approach the intelligence of human speech understanding.
Isolated-word Recognition System Block diagram:
• Olivetti of Italy is working on a
listening and talking machine that
might function as a listening
typewriter.
• It is envisioned as an isolated-word,
speaker-independent word
recognition system.
• It works with a vocabulary of
10,000 words. When a word is first
spoken, the machine selects the
best 3 or 4 candidates for the actual
word and places the likeliest of
these in the text.
• If later words give a better clue to
the word just spoken, the machine
goes back and changes that word in
the text.
