
ROBOTICS
1. Introduction
One of the main goals of the study of robotics today is to understand not only the control systems used to stabilize the position, motion, and other forces of the robot, but also how to let robots work effectively via sensors and actuators in dynamic environments. The field of robotics has its origin in science fiction: the term ROBOT was derived from the English translation of a fantasy play written in Czechoslovakia around 1920. It took another 40 years before the modern technology of industrial robots began. Today robots are highly automated mechanical manipulators controlled by computers.

First, let's decide what a 'robot' is. To be a robot, a machine should have the ability to think - to make decisions. This may sound hard, but consider a simple instruction: IF FRONT LEFT WHISKER SENSOR IS ON THEN STOP, GO BACKWARDS 2 FEET, TURN RIGHT, CONTINUE. This is a very common 'IF-THEN' statement (sketched in code below). A machine that can carry out this instruction is truly a robot. So, the conclusion is that to be called a robot, you really need an on-board brain and a way to program it.

Classification of robots:

Robot-like devices:
1. Prostheses
2. Exoskeletons
3. Telecherics
4. Locomotive mechanisms

Classification by coordinate system:
1. Cylindrical coordinate robots
2. Spherical coordinate robots
3. Jointed-arm robots
4. Cartesian coordinate robots

Classification by control method:
1. Non-servo-controlled robots
2. Servo-controlled robots
3. Point-to-point servo-controlled robots
4. Continuous-path servo-controlled robots
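As an illustration, here is a minimal sketch of that whisker rule in Python. The robot object and its primitives (stop, go_backwards, turn_right, go_forward) are hypothetical placeholders for whatever a real platform provides.

```python
# Hypothetical sketch of the IF-THEN whisker rule; the robot
# primitives are assumed, not taken from any particular platform.
def step(robot):
    if robot.read_sensor("front_left_whisker"):  # contact detected
        robot.stop()
        robot.go_backwards(feet=2)
        robot.turn_right()
    robot.go_forward()                           # continue on course
```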

2. SENSORS IN ROBOTICS
The sensors used in robotics include a wide range of devices that can be divided into the following general categories:

1. Tactile sensors:
   Touch sensors
   Force sensors
   Force-sensing wrist
   Joint sensing
   Tactile array sensors
2. Proximity and range sensors
3. Miscellaneous sensors and sensor-based systems

2.1. Tactile Sensors:
Tactile sensors are devices that indicate contact between themselves and some other solid object.

2.1.1. Touch Sensors:


Touch sensors are used to indicate that contact has been made between two objects, without regard to the magnitude of the contacting force. Another use for a touch-sensing device would be as part of an inspection probe that is manipulated by the robot to measure dimensions on the work part.

2.1.2. Force Sensors:


The capacity to measure forces permits a robot to perform a number of tasks. Force sensing in robotics can be accomplished in several ways. A commonly used technique is the force-sensing wrist, which consists of a special load cell mounted between the gripper and the wrist. Another technique is to measure the torque being exerted by each joint; this is usually accomplished by sensing the motor current for each of the joint motors.

2.1.3. Force-sensing wrist:


The purpose of a force-sensing wrist is to provide information about the three components of force and three moments being applied at the end of the arm. Since forces and moments are usually applied to the wrist in combination, it is necessary to first resolve them into their six components. This kind of computation can be carried out by the robot controller or by a specialized amplifier designed for this purpose. Based on these calculations, the robot controller can obtain the required information about the forces and moments being applied at the wrist.
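The resolution step is essentially a linear mapping from the raw load-cell readings to the six force and moment components. A minimal sketch, assuming a wrist with eight strain-gauge outputs and a 6x8 calibration matrix (both illustrative; the real matrix comes from calibrating the device):

```python
import numpy as np

# Illustrative: map raw load-cell readings to the six-component wrench.
C = np.zeros((6, 8))       # 6x8 calibration matrix (from calibration)
readings = np.zeros(8)     # raw strain-gauge outputs

wrench = C @ readings      # [Fx, Fy, Fz, Mx, My, Mz]
Fx, Fy, Fz, Mx, My, Mz = wrench
```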

2.1.4. Joint sensing:


If the robot uses dc servomotors, then the torque being exerted by each motor is proportional to the current flowing through its armature. A simple way to measure this current is to measure the voltage drop across a small precision resistor placed in series with the motor and power amplifier. This simplicity makes the technique attractive. However, measuring joint torque has several disadvantages. First, measurements are made in joint space, while the forces of interest are those applied by the tool. The measurements therefore reflect not only the forces being applied at the tool, but also the forces and torques required to accelerate the links of the arm and to overcome friction and transmission losses in the joints.
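The arithmetic involved is just Ohm's law followed by the motor's torque relation. A sketch with illustrative constants:

```python
# Joint-torque estimate from armature current; the resistor value and
# torque constant below are illustrative, not from any specific motor.
R_SENSE = 0.1    # ohms, precision sense resistor in series with motor
K_T = 0.05       # Nm per ampere, dc servomotor torque constant

def joint_torque(v_drop):
    current = v_drop / R_SENSE   # Ohm's law across the sense resistor
    return K_T * current         # torque proportional to current
```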

2.1.5. Tactile array sensing:


A tactile array sensor is a special type of force sensor composed of a matrix of force-sensing elements. The force data provided by this type of device may be combined with pattern recognition to describe a number of characteristics of the impression contacting the array sensor surface. Among these characteristics are (1) the presence of an object, (2) the object's contact area, shape, location, and orientation, (3) the pressure and pressure distribution, and (4) force magnitude and location. Tactile array sensors can be mounted on the fingers of the robot gripper or attached to a worktable as a flat touch surface. The device is typically composed of an array of conductive elastomer pads. As each pad is squeezed, its electrical resistance changes in response to the amount of deflection in the pad, which is proportional to the applied force. By measuring the resistance of each pad, information about the shape of the object pressed against the array of sensing elements can be determined. Research into potential materials for tactile sensors has led to the development of a force-sensing skin of polyvinylidene fluoride (PVDF). This is a piezoelectric material, which means that it generates an output voltage when it is squeezed.
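A minimal sketch of reading such an array, assuming a hypothetical resistance-to-force calibration curve; real pads have device-specific calibrations:

```python
import numpy as np

def resistance_to_force(r, r0=1000.0, k=5.0):
    # Illustrative monotone mapping: squeezing the pad lowers its
    # resistance, so lower resistance implies higher force.
    return k * np.maximum(r0 / r - 1.0, 0.0)

def contact_image(resistances, threshold=0.1):
    forces = resistance_to_force(resistances)   # per-pad force estimate
    return forces > threshold                   # binary contact image
```

From the binary contact image, standard image-processing operations (area, centroid, principal axes) can recover the contact area, location, and orientation mentioned above.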

2.2. Proximity Sensors:


Proximity sensors are used in robotics for near-field work in connection with object grasping or avoidance. Proximity sensors are further classified as follows:

2.2.1. Inductive sensors:


Sensors based on a change of inductance due to the presence of a metallic object are among the most widely used industrial proximity sensors. Bringing the sensor into close proximity to a ferromagnetic material causes a change in the position of the flux lines of the permanent magnet. Under static conditions there is no movement of the flux lines and, therefore, no current is induced in the coil. However, as a ferromagnetic object enters or leaves the field of the magnet, the resulting change in the flux lines induces a current pulse whose amplitude and shape are proportional to the rate of change of the flux. The voltage waveform observed at the output of the coil provides an effective means of proximity sensing.

2.2.2. Hall Effect Sensors:


The Hall effect relates the voltage between two points in a conducting or semiconducting material to the magnetic field across the material. When used by themselves, Hall-effect sensors can only detect magnetized objects. However, when used in conjunction with a permanent magnet, they are capable of detecting all ferromagnetic materials. Hall-effect sensors are based on the Lorentz force, which acts on a charged particle traveling through a magnetic field along an axis perpendicular to the plane established by the direction of motion of the particle and the direction of the field. Bringing a ferromagnetic material close to the semiconductor-magnet device decreases the strength of the magnetic field, thus reducing the Lorentz force and, ultimately, the voltage across the semiconductor. This drop in voltage is the key to sensing proximity with Hall-effect sensors.
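For reference, the two relations involved in their standard forms (standard physics, not taken from this text):

```latex
% Lorentz force on a charge q moving with velocity v in field B:
\vec{F} = q\,\vec{v} \times \vec{B}
% Hall voltage across a conductor of thickness t carrying current I,
% with carrier density n and elementary charge e:
V_H = \frac{I B}{n e t}
```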

2.2.3. Capacitive sensors:


Capacitive sensors are potentially capable of detecting all solid and liquid materials. These sensors are based on detecting a change in capacitance induced by a surface that is brought near the sensing element. The sensing element is a capacitor composed of a sensitive electrode and a reference electrode. A cavity of dry air is usually placed behind the capacitive element to provide isolation. The rest of the sensor consists of electronic circuitry, which can be included as an integral part of the unit, in which case it is normally embedded in a resin to provide sealing and mechanical support. Typically, these sensors are operated in a binary mode, so that a change in capacitance greater than a preset threshold T indicates the presence of an object, while changes below the threshold indicate the absence of an object with respect to the detection limits established by the value of T.
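The binary mode reduces to a simple threshold comparison. A sketch with illustrative values:

```python
# Binary capacitive detection as described above; the baseline and
# threshold values are illustrative.
C_BASELINE = 10e-12   # farads, capacitance with nothing nearby
T = 0.5e-12           # farads, preset detection threshold

def object_present(c_measured):
    return abs(c_measured - C_BASELINE) > T
```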

2.2.4. Ultrasonic sensors:


The basic element is an electroacoustic transducer, often of the piezoelectric ceramic type. A resin layer protects the transducer from humidity, dust, and other environmental factors; it also acts as an acoustical impedance matcher. Since the same transducer is generally used for both transmitting and receiving, fast damping of the acoustic energy is necessary to detect objects at close range.
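Ranging with such a transducer uses the standard time-of-flight arithmetic (not spelled out in the text): the distance is half the echo's round-trip time multiplied by the speed of sound.

```python
# Standard ultrasonic time-of-flight ranging.
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C

def range_from_echo(round_trip_seconds):
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

print(range_from_echo(0.00583))   # a 5.83 ms echo is roughly 1 m
```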

2.2.5. Optical proximity sensors:


This sensor consists of a solid-state light-emitting diode (LED), which acts as a transmitter of infrared light, and a solid-state photodiode, which acts as the receiver. The cones of light formed by focusing the source and the detector on the same plane intersect in a long, pencil-like volume. This volume defines the field of operation of the sensor, since a reflective surface that intersects the volume is illuminated by the source and simultaneously seen by the receiver.

2.3. Miscellaneous Sensors and Sensor-based Systems:


This category includes devices with the capability to sense variables such as temperature, pressure, fluid flow, and electrical properties. Voice programming systems can be used in robotics for oral communication of instructions to the robot. Voice sensing relies on techniques of speech recognition to analyze words spoken by a human being and compare them with a set of stored word patterns. When a spoken word matches a stored word pattern, the robot performs the particular action that corresponds to the word or series of words.
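A minimal sketch of the matching idea, with placeholder feature vectors and an assumed action table; real speech recognizers use far richer models:

```python
import numpy as np

def recognize(features, templates, actions, max_distance=1.0):
    # Find the stored word pattern closest to the spoken-word features.
    word = min(templates, key=lambda w: np.linalg.norm(features - templates[w]))
    if np.linalg.norm(features - templates[word]) <= max_distance:
        actions[word]()   # perform the action bound to that word
```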

3. Evolutionary Algorithms
Charles Darwin first identified the process of natural selection in his monumental work The Origin of Species. Certain characteristics that govern an individual's chances of survival are passed to offspring during reproduction. Individuals with poor characteristics die off, making the species stronger in general. Inspired by this natural process of "survival of the fittest," evolutionary algorithms (EAs) attempt to find a solution to a problem using simulated evolution in a computer. There are two related, yet distinct, types of EAs. The first type, genetic algorithms (GAs), involves manipulating a fixed-length bit string. The bit string represents a solution to the problem being solved; it is up to the programmer to determine the meaning of the string. The second, genetic programming (GP), involves generating expression trees as used by languages such as LISP and SCHEME. With genetic programming, actual programs can be created and then executed. Generally speaking, the process used in evolutionary learning begins by randomly generating a population of individuals, where each individual is a potential solution to the problem. The population is the set of individuals generated. Each individual contains a genome, the genetic content manipulated during the execution of the EA. In the case of GAs, the fixed-length bit string is the genome.

Next, each individual in the population is evaluated using a fitness function. Fitness functions provide a method of testing solution quality for the respective problem domain. How the fitness of an individual is used depends on the implementation of the evolutionary algorithm. Usually, a higher fitness value corresponds to a greater chance of the individual being selected for reproduction. Different methods of producing the next generation exist, but most commonly two operators are employed: mutation and breeding. Mutation involves randomly altering one or more genes in an individual's genome. Breeding uses a crossover operation to combine components of two parents' genomes to produce one or more children. Once a new generation is created, the old one is discarded. The cycle of evaluation and reproduction is repeated through several generations, as specified by the programmer. The evolution process ceases after a termination criterion has been met, and the result of the evolution run is the individual found to be best so far. A summary of the components needed for an EA follows:

1. A genetic representation for potential solutions to the problem (e.g., a fixed- or variable-length bit string, or an expression tree)
2. A method to create an initial population of individuals
3. A fitness function that plays the role of the environment, rating solutions in terms of their "fitness"
4. Operators that determine the composition of children
5. Values for the various parameters the evolutionary algorithm uses (population size, probabilities of applying genetic operators, etc.)
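A minimal GA skeleton following these steps, using a placeholder fitness function (the count of 1-bits) in place of a real problem-specific measure:

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 100
MUTATION_RATE = 0.01

def fitness(genome):
    return sum(genome)                        # placeholder objective

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)     # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

# Randomly generated initial population of bit-string genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Fitness-proportionate selection of parents for breeding.
    weights = [fitness(g) for g in population]
    parents = random.choices(population, weights=weights, k=2 * POP_SIZE)
    # Breed and mutate; the old generation is discarded.
    population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                  for i in range(POP_SIZE)]

best = max(population, key=fitness)           # best individual found
```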

3.1. Robot Navigation:
As humans, we enjoy the luxury of having an amazing computer, the brain, and thousands of sensors to help navigate and interact with the real world. The product of aeons of evolution has enabled our minds to model the world around us based on the information gathered by our senses. In order to navigate successfully, we can make high-level navigation decisions, such as how to get from point A to point B, as well as low-level navigation decisions, such as how to pass through a doorway. The brain's capacity to adapt has also made it possible for people without certain sensory capabilities to navigate throughout their environments. For example, blind people can maneuver through unfamiliar areas with the aid of seeing-eye dogs or canes. Even without all of our sensors, we are able to cope with familiar and unfamiliar environments.

3.2. Applying EAs to Robotic Navigation:


Evolutionary algorithms have been implemented to solve problems in robot navigation. In particular, EAs have been used to get a robot to learn how to adapt to its limited capabilities. Using GP in this way is termed evolutionary learning. Robot navigation is a difficult problem to solve, and it becomes increasingly difficult when a robot encounters a failure in either its sensors, the devices that tell a robot about its environment, or its actuators, the devices that allow a robot to physically interact with its environment. In most other EA applications, two distinct steps occur: an initial training period is conducted by running the EA on a training set, followed by the execution of the best-fit solution. With Continuous and Embedded Learning (CEL), the two steps are linked and operate concurrently while the robot is performing its task.

Thus, the learning process is continuous. Figure 1 shows an outline of this approach.

Figure 1. The Continuous and Embedded Learning Model.

Key components of the model are:


1. Learning continues indefinitely, allowing adaptation to sensory failure.
2. Learning is done on a simulation model.
3. The simulation model is updated to reflect changes in the real robot or environment.
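A schematic of how these components interact, with hypothetical execute, monitor, update, and evolve operations standing in for the real system:

```python
# Schematic CEL loop; every component here is an assumed placeholder,
# sketching only how execution and learning run concurrently.
def continuous_embedded_learning(robot, sim_model, population):
    best = max(population, key=sim_model.fitness)
    while True:                                     # learning never stops
        robot.execute(best)                         # act with best rules so far
        failures = robot.monitor()                  # detect sensor failures
        sim_model.update(failures)                  # keep the simulation current
        population = evolve(population, sim_model)  # one EA generation
        best = max(population, key=sim_model.fitness)
```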

3.3. Performance:
In an experiment, a robot was given the task of navigating to the opposite side of a room through a passage in a wall starting from one wall and heading in a random direction from -90 degrees to 90 degrees, with 0 pointing directly to the opposite wall. An example course is shown in Figure 2. The measure of success of CEL was the percentage of times the robot successfully reached the goal without bumping into walls along the way or running out of time.

Figure 2

A Nomadic Technologies Nomad 200 mobile robot was used for the real-world tests. The robot is a three-wheeled synchronized-steering unit that features seven front-mounted sonar units. The monitor in the execution system combines the output of the sensors with information about the robot's actuator execution to determine sensor failures. If a frontal sonar unit continuously outputs zero as the distance from the robot to an obstacle while the robot keeps moving forward, then that sensor is marked as having failed. In the experiment, sonar "failure" was simulated by covering the sensor with a hard material. The initial rule set used in the experiment gave the robot a basic notion of how to get to the other side of the room, but did not take into account obstacles or walls. The simulation model initially assumed all sensors were working. With no evolution and all sensors functioning, the robot was able to navigate successfully about 25% of the time. After 50 generations of evolution, the robot's success rate increased to about 61%. With three sensors on the right side of the robot disabled after 50 generations, the success rate dropped to about 42%, but it increased over 50 more generations to about 63%. Figure 3 shows the learning curve.
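The failure-detection rule described above amounts to checking for a persistent zero reading while the robot moves. A sketch, with an assumed window length:

```python
# Sonar failure monitor as described in the text; the window length
# (number of consecutive zero readings) is an assumption.
ZERO_WINDOW = 20

def sensor_failed(history, reading, moving_forward):
    history.append(reading)                 # latest sonar range reading
    recent = history[-ZERO_WINDOW:]
    return (moving_forward
            and len(recent) == ZERO_WINDOW
            and all(r == 0 for r in recent))
```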

Figure 3

The results of the experiment show that a robot using CEL can not only learn how to improve its navigation abilities by itself, but also re-learn how to navigate after suffering the loss of some sensory capability. While the results may not be terribly impressive, improvement in navigation and adaptation to changes in capability are clearly shown.

4. Living on Their Own
Ecosystem-like settings are interesting from an artificial-life (alife) perspective. Within an ecosystem, the main goal of a robot is self-preservation: staying operational for an extended period of time. Resources, especially energy, are limited in time and space, so robots must compete for them; this competition forms the basis of all robot interactions in the system. There is a substantial amount of alife research based on simulated ecosystems. The basic ecosystem located at the Flemish Free University of Brussels (VUB) is a 5 m × 3 m space enclosed by walls (Figure 4). Initially it includes simple mobile robots, the moles (Figure 5). Photo sensors positioned at the front of each robot are used to navigate and to find objects in the ecosystem.

Figure 4


Figure 5
The arena also contains a charging station, where the robots can autonomously recharge their batteries. The robots drive into the charging station and make contact with conductive plates that connect them to the energy source. This electrical energy is food for the robots. The robots constantly monitor their energy level and in this way know when they are hungry. The robots' whole world revolves around earning and competing for this electrical food, and there is a limited amount of it in the ecosystem. The ecosystem also contains competitors: small boxes that house lamps emitting modulated light. These lamps are connected to the same global energy source as the charging station; they therefore feed on the same source as the robots. The robots must knock out the competitors (Figure 6). If a robot knocks a competitor several times, the lamps inside the competitor dim, and the robot has an additional amount of energy at its disposal from the charging station. After a while, the competitor recovers and the lamps inside return to their default intensity. The competitors thus establish a kind of work task for the robots that is paid in electrical energy.

Figure 6
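A sketch of the resulting behavior cycle for a single mole, with assumed thresholds and robot primitives:

```python
# Illustrative mole behavior loop; the threshold and all robot
# primitives are assumptions, not from the actual VUB system.
HUNGRY_LEVEL = 0.3   # fraction of battery capacity

def behave(robot):
    while robot.operational():
        if robot.energy_level() < HUNGRY_LEVEL:      # robot is "hungry"
            robot.drive_to_charging_station()
            robot.recharge()
        else:
            competitor = robot.find_lamp()           # photo sensors spot lamps
            if competitor is not None:
                robot.knock(competitor)              # dims lamp, frees energy
```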
Research on robotic ecosystems usually deals with homogeneous agents, which is also the case for the ecosystem described above. A heterogeneous set-up, in contrast, is more interesting for several reasons. First, heterogeneity substantially adds to the complexity of the environment, which is of key interest to alife-oriented research. Second, artificial ecosystems with just one species are hardly biologically relevant. The extended ecosystem features a new inhabitant in the form of the head, which consists of a camera on a pan-tilt unit and has substantial vision capabilities. However, since it is fixed in position, the head cannot access the charging station and is forced to cooperate with the mobile moles. When a mobile robot approaches a pitfall, which it cannot distinguish from the charging station, the head can warn the mobile robot; in exchange, the mobile robot can share the benefit of the saved energy with the head. Recent developments have also shown that communication between robots is very important; robots need to be able to share information and intentions. And they have to negotiate their relationships with each other: "I will show you a food source! Let me warn you of danger! What will you give me if I do this?" Recently, scientists have started to study how robots can construct a primitive language to communicate exactly these kinds of concepts.

5. Emerging Trends and Applications of Robotics

5.1 Mars Rovers:
Planetary scientists have a unique problem when trying to collect data: in the absence of an interplanetary manned space program, they must rely upon remote sensing techniques. An invaluable part of their arsenal is spacecraft that can study the region of interest in detail. Landers in particular are able to bring sensors and instruments into close contact with the planetary environment. This allows them to take direct measurements of properties such as soil mechanics and elemental composition, as well as image surface features in greater detail than is possible with orbiters or Earth-bound instruments. Landers, however, have the shortcoming that they are limited to a single site for study. An important addition, then, is a mobile robot, which can roam over a much larger segment of the terrain and can carry imagers and other instruments to a variety of features spread over this larger area. The recent Mars Pathfinder mission demonstrated the utility of such an approach. The Sojourner Rover was carried to the surface of Mars by a lander. Over an area approximately 10 meters in radius, it conducted soil experiments in several different terrains, took detailed images of rocks and soils from centimeters away, and placed its on-board spectrometer on 16 distinct targets (9 rocks and 7 soil sites).

Figure 8
Future missions plan to expand this successful technology by incorporating mobile robots (rovers) that are capable of traversing even larger distances, carrying their instruments to a wider variety of features, and even caching samples along the way. A key requirement for these new planetary robots is greater navigational autonomy, since longer distances than in prior missions must be covered between communications with Earth. The field of planetary exploration offers a rich environment for mobile robotics research, encompassing many complicated issues that must be tackled in order to produce successful missions; particularly important are the issues related to autonomous path planning for planetary rovers. With the tools provided by improvements in these areas, mobile robots will prove to be an even more useful and robust addition to the techniques available for planetary exploration.

5.2 Biological Inspiration:

When trying to re-create biological phenomena, it makes sense to look to the biological world for good initial pointers. For instance, the study of robotic locomotion is aided by observing and imitating biological systems. A robot using wheels (very un-biological) is limited in rough terrain, whereas a biologically inspired robot might have legs like those of a spider for maneuvering around and over obstacles. Another example is how a spider-like robot that loses a leg might adjust its gait to complete a mission-critical task. Consider one experiment as an example. Two legs from opposite sides of a spider were amputated. Two possibilities existed for the spider: it could either retain a diagonal rhythm of locomotion using four legs but lose its ability to balance, or it could move three legs at a time, offsetting its diagonal rhythm.

The experiment showed that the spider chose the second alternative, which was more mechanically stable. This apparently has to do with the fact that although the walking patterns are determined in the spider's central oscillator, they can be modified by feedback from sensory organs in the legs. In effect, the spider adapted by reprogramming itself.
