Andrea A. Ordean
B.A.Sc. in Electrical Engineering
University of Waterloo, 1995
Examining Committee:

Associate Professor Shahram Payandeh
Senior Supervisor

Professor Veronica Dahl
Supervisor

Date Approved:
PARTIAL COPYRIGHT LICENSE
Author:
(signature)
(name)
(date)
Acknowledgements
I would like to thank Dr. Shahram Payandeh for his guidance and support in the
course of this work. I would also like to thank Dr. Veronica Dahl for the interesting
discussions and moral support she has given me.
Glossary
An autonomous system is one which can act on its own, i.e. it is self-contained.
A distributed control architecture is one where the control is shared among many
modules. On the other hand, a centralized control architecture is one where one
module is responsible for the control of all others.
The plane in which the grasp triangle lies is called the grasp plane.
A grasp triangle is the triangle which is formed by the fingertips of three fingers
of a robotic hand, when they are in contact with an object.
The grasp configuration is the geometric figure formed by the contacts of the fin-
gers on an object.
Haptic sensory inputs refer to sensory inputs which are received from the sense of
touch.
The internal force is that component of the applied force at a contact point which
does not contribute to resisting external forces and moments acting on the object.
A modular system is one with relatively independent components, which com-
municate or interact with each other in a well defined manner.
A polygon is a shape in two dimensions which has any number of straight edges,
and no curved edges.
The shape primitive is the basic shape an object has. For example, a cup has a
handle, but the main feature of the cup is a cylinder. Thus, the cylinder is the
shape primitive of the cup.
A tip (prehension) grasp is a grasp in which only the fingertips of the fingers are
in contact with the object.
Probability Notation:
P(A,B) is the probability of A and B occurring at the same time.
P(A | B) is the probability of A occurring, given that B has occurred.
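The relation between the two notations can be illustrated with a small numeric sketch (the event probabilities below are hypothetical, not values from this thesis):

```python
# Hypothetical joint distribution over two binary events A and B.
# P(A, B) is the joint probability; P(A | B) = P(A, B) / P(B).
p_joint = {(True, True): 0.30, (True, False): 0.20,
           (False, True): 0.10, (False, False): 0.40}

p_b = sum(p for (a, b), p in p_joint.items() if b)   # P(B) = 0.40
p_a_and_b = p_joint[(True, True)]                    # P(A, B) = 0.30
p_a_given_b = p_a_and_b / p_b                        # P(A | B) = 0.75

print(p_b, p_a_and_b, p_a_given_b)
```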
Table of Contents
Approval ......................................................................................ii
Abstract ......................................................................................iii
Acknowledgements .............................................................................iv
Glossary .......................................................................................v
Table of Contents ............................................................................vii
List of Tables ................................................................................xi
List of Figures ..............................................................................xii
1 Introduction 1
1.1 Rising to the Challenge ...................................................................................2
1.2 Literature Review ............................................................................................4
1.2.1 Types of Control Architectures .........................................................4
1.2.2 Object Reconstruction ........................................................................8
1.2.3 Object Gamut .......................................................................................8
1.3 Contributions ...................................................................................................9
1.4 Thesis Layout ...................................................................................................9
2.3 The EOS Architecture .................................................................................... 16
2.3.1 Information Board (IB) ..................................................................... 18
2.3.2 Agents ................................................................................................. 19
2.3.3 Controller ...........................................................................................35
2.3.4 Data Flow ........................................................................................... 36
2.4 Rating System .................................................................................................36
2.4.1 Assigning the Default Sub-Rating ..................................................38
2.4.2 Assigning the Opportunistic Sub-Rating ....................................... 38
Figure 20: Friction Cone .............................................................................................. 65
Figure 21: Grasp Profile of Spheres and Cylinders ................................................. 66
Figure 22: Grasping with and without Friction ....................................................... 67
Figure 23: Geometrical Equivalent ............................................................................ 68
Figure 24: Agent Execution Profile ............................................................................ 72
Figure 25: info Stage Behaviour Evolution ................................................................ 73
Figure 26: Agent Execution Profile - info Stage ........................................ 74
Figure 27: Agent Execution Profile - info Stage Revisited ...................................... 75
Figure 28: grasp Stage Behaviour Evolution .............................................................76
Figure 29: Agent Execution Profile - grasp Stage .....................................................77
Figure 30: Sphere vs. Cylinder Performance - info Stage ........................................ 79
Figure 31: Sphere vs. Cylinder Performance - grasp Stage .....................................80
Figure 32: Sphere vs. Cylinder Performance ............................................ 80
Figure 33: Sphere-Based Objects ................................................................................82
Figure 34: Sphere-Based Objects Performance ........................................................ 83
Figure 35: Encountering Secondary Features of Spheres ....................................... 84
Figure 36: Cylinder-Based Objects ............................................................................. 86
Figure 37: Cylinder-Based Objects Performance .....................................................87
Figure 38: Encountering Secondary Features of Cylinders .................................... 88
Figure 39: Contact Position Reference ....................................................................... 90
Figure 40: Finger 2 Contact Positions ........................................................................92
Figure 41: Position 2a Agent Execution Profile - info Stage ...................................93
Figure 42: Position 1 Agent Execution Profile - info Stage .....................................94
Figure 43: Comparing Agent Execution Profiles .....................................................95
Figure 44: Comparing info Stage Agent Execution Profiles ................................... 96
Figure 45: System Performance for Various Weighting Factors ........................... 99
Figure 46: Parallel Reconfigurable Jaw Gripper [12] ........................................ 103
Figure 47: Parallel Jaw and Rotary Direction of Motion ...................................... 104
Figure 48: Face and Side Contact of Pins ................................................................ 104
Figure 49: Agent Pin Configurations ....................................................................... 105
Figure 50: Rotary Disc Symmetry ............................................................................ 106
Figure 51: Defining the Rotary Disc Pin Location ................................................. 106
Figure 52: Y-Axis Object Symmetry ........................................................................ 107
Figure 53: Disc Rotation Angles ............................................................................... 107
Figure 54: Object Face Representation .................................................................... 108
Figure 55: Types of Contacts ....................................................................................109
Figure 56: Parallel Reconfigurable Jaw Gripper - Algorithm .............................. 113
Figure 57: Triangular Face #1 ................................................................................... 119
Figure 58: Triangular Face #2 ................................................................................... 120
Figure 59: Triangular Face #3 ................................................................................... 121
Figure 60: Square Face #1 .......................................................................................... 122
Figure 61: Square Face #2 .......................................................................................... 122
Figure 62: Square Face #3 .......................................................................................... 123
Figure 63: Pentagon Face #1 .....................................................................................124
Figure 64: Pentagon Face #2 ..................................................................................... 124
Figure 65: Pentagon Face #3 .....................................................................................125
Figure 66: Dual EOS System ..................................................................................... 127
Figure 67: Frontal View of Robotic Hand ............................................................... 132
Figure 68: Kinematic Model of a Finger .................................................. 133
Figure 69: Finger Positions on Robotic Hand ........................................................133
Figure 70: Finger Coordinate Frame and Joint Position Variables .....................134
Figure 71: Sample Finger Configuration with Coordinate Frame ...................... 136
Figure 72: Determining Workspace Constraints ................................................... 140
Chapter 1
Introduction
The general problem which has yet to be ultimately solved in robotics is that of a
robotic hand manipulating an arbitrary object with the ease with which human
beings do. The attempts to solve this problem are numerous, however, there is still
much work to be done. This thesis contributes to the solution of this general prob-
lem by focusing on aspects of this problem which still need much research: grasp-
ing curved objects whose shape and location is unknown, the use of haptic
exploration for object reconstruction, and the design and development of an archi-
tecture which is flexible enough to support the integration of this and a variety of
other related tasks.
The task of manipulating an object involves three phases: locating the object in its
environment, establishing a contact with the object, and applying forces to the
object such that the object can be manipulated as desired (Figure 1). The location of
the object in the environment requires the robot hand to scan its environment until
the hand comes into contact with the object. This is analogous to the challenge a
visually impaired person faces in trying to find an item in a room. The contact
establishment of the object involves the hand achieving a specific grasp configura-
tion on the object. At this time it is not important how much force is applied to the
object, only the points of contact of the hand with the object. The last phase of
manipulating an object is the application of forces to the object. This means that
given the contact points of the robotic hand fingers with the object, the fingers can
then apply forces with magnitudes and directions at these points, such that the
object can be manipulated as desired. For example, the forces required to simply
hold an object are different than the forces required to move the object from one
place to another.
In this thesis, the second phase is addressed, i.e. establishing a contact. As seen in
Figure 1, this phase is further sub-divided into two stages. The goal of the first
stage, the Object Information Gathering (info) stage, is to (partially) identify the
object, while the second stage, the Evolution to Tip Grasp (grasp) stage, aims to pro-
duce a stable grasp of the object.
The system developed here is the EOS, Enhanced Opportunistic System. The EOS
is an autonomous system which is used to accommodate all task requirements for
Contact Establishment, such as: object representation, robotic hand modelling and
control thereof, object reconstruction method integration, and a grasp evaluation
method. The architecture itself is modular and flexible, while exhibiting a central-
ized control.
Objects come in many shapes and sizes. Ideally, we would like to be able to deal
with any type of object, but this is not yet realizable. Much of the current research
using only haptic exploration limits the class of objects to polygons and polyhe-
dra, [19], [32]. This work will focus on grasping curved objects, such as spheres
and cylinders.
In order to ensure that the resultant grasp is good, i.e. stable, the grasp is required
to meet the active closure criteria, as proposed by Yoshikawa [40]. The active clo-
sure criteria ensure that the resultant grasp of the robotic hand is stable, i.e. the
object can be held/moved without it slipping out of the hand which has grasped
it. Grasp stability can be achieved through two methods: direct computation or
grasp evolution. The EOS control architecture enables an evolutionary approach
to achieving a stable grasp. The goal of this evolutionary method is to help the fin-
gers move in such a way as to ultimately produce a stable grasp. Direct computa-
tion assumes exact knowledge of the shape and location of the object. It must deal
with many possible answers and must take them all into account before deciding
on one. The method of direct computation has been well studied in the literature
[1], [21], [22], [31], including a survey of the area [35].
These control modes are all suited to different types of situations. The situations
are characterized by how constrained the goals of the system are and by the
amount of uncertainty in the environment of the system. The environment
includes all objects, as well as the robot hand. Table 1 summarizes the control
modes and the situations to which they best lend themselves.
Seitz and Kraft [34] have used the goal-specific reaction control mode in vision
assisted grasping and implemented it as a set of planning algorithms controlled
by a set of pre-defined situations. A hierarchical architecture was used.
Tomovic et al. [38], Overgaard et al. [27], and Stansfield [36] have all used the reflex
control mode, thus these systems require no planning. Tomovic et al. [38] and
Overgaard et al. [27] used a distributed control architecture for this implementa-
tion, while Stansfield [36] used a knowledge-based system. The reflex control
mode is suited to these grasping problems because the shape and location of the
object to be grasped is either known or can be determined with great certainty by
use of a vision system.
Ananthanarayanan et al. [1] have combined the use of two control modes, best
classified as goal-specific reaction and dead-reckoning, within a hierarchical Black-
board architecture. An off-line planner is responsible for mapping out a series of
task primitives which are to be executed.
Halpern et al. [9] have acknowledged the need for agents of Multi-Agent systems
to compute their own knowledge. The authors distinguish between two types of
knowledge: externally ascribed knowledge and explicit knowledge. Externally
ascribed knowledge is the type of knowledge which the system programmer gives to
the system, while explicit knowledge is the type of knowledge, which the system
acquires through its sensors. The explicit knowledge is what determines an
agent's behaviour. The EOS makes use of these two types of knowledge, as
Halpern et al. have done, but in the case of the EOS this knowledge is used to
empower the agents, thus distributing some of the roles of the controller, of the
Blackboard architecture, to the autonomous modules, the agents. This agent
autonomy resembles more the action system modules of the Opportunistic Con-
trol Model. The knowledge given to the agents is used by the agents to determine
their confidence in themselves at any point in time. The confidence of an agent is
its usefulness factor in the current situation.
The approach of augmenting the Blackboard architecture with the help of Bayes'
Rule has been previously mentioned in the literature within the context of evi-
dence incorporation and hypothesis generation [6], [30], [37] in the area of speech
and image recognition. This idea can also be applied to rating agents and even to
the way in which the controller chooses the most appropriate agent.
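As a sketch of how Bayes' Rule could update an agent's rating from a piece of observed evidence (the prior and likelihood values below are illustrative assumptions, not the quantities used by the EOS):

```python
def bayes_update(prior, p_evidence_given_useful, p_evidence_given_not):
    """Posterior probability that an agent is useful, given one observation."""
    evidence = (p_evidence_given_useful * prior
                + p_evidence_given_not * (1.0 - prior))
    return p_evidence_given_useful * prior / evidence

# An agent starts at a neutral rating and observes evidence that is
# three times as likely when the agent is actually useful.
rating = 0.5
rating = bayes_update(rating, 0.6, 0.2)
print(round(rating, 2))  # 0.75
```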
1.2.2 Object Reconstruction
The idea of object reconstruction has also been explored by Seitz and Kraft [34].
However, they used vision instead of haptic exploration, as have Rodrigues et al.
[33], Stansfield [36], and Tomovic et al. [38].
The need for haptic exploration has been recognized by many researchers, as
vision is not always available or usable. For example, Okamura et al. [24] use hap-
tic exploration in the manipulation of cylindrical objects. Nagata et al. [19] use
haptic exploration to construct models of polyhedrons. Haptic explorations of
unknown curved shapes have been investigated by Charlebois et al. [2], [3], as
well as by Chen et al. [4]. Charlebois et al. have identified two types of exploratory
procedures (EPs) for the purpose of object identification. The first, EP1, requires
one fingertip to roll about a contact point without sliding, and the second, EP2,
requires three fingers to be dragged across the surface of the object. The output of
EP1 is a set of two radii of curvature of the object at the point of contact. EP2
returns a description of the shape of the patch of the object probed.
Shimoga's survey [35] traces the synthesis of grasps from 1981 to 1990 with
respect to achieving force closure grasps. Of the 15 accomplishments listed, only
one is not constrained to apply to polygonal or polyhedral objects. The area which
needs more attention is that of curved objects, thus this thesis focuses on grasping
curved objects.
1.3 Contributions
First of all, this thesis makes a contribution towards solving the general problem
of one day being able to grasp and manipulate any object with a robot hand by
extending the work of Charlebois [2] to show how curved objects such as spheres,
cylinders, and combinations thereof can be reconstructed and grasped in a suc-
cessful manner.
Lastly, the introduction of a novel rating system is proposed to enhance the Black-
board based architectures, such as the EOS. The rating system is used to rate the
utility of the agents of the EOS so as to simplify the controller's agent selection
process. By doing this, it also distributes some of the controller's work among the
agents.
Since this work deals with a simulated task, the representation of the simulated
environment, consisting of the robotic hand and objects to be grasped, is the topic
of Chapter 3.
The EOS starts off with the object in contact with one or more fingers and must go
through the Object Information Gathering stage to (partially) identify the shape of
the object to be grasped. Upon object identification, the tip-prehension grasp
begins to evolve within the Evolve to Tip Grasp stage. The manner in which the
object to be grasped is reconstructed and grasped is discussed in Chapter 4.
Finally, Chapter 7 presents the conclusions drawn from this work and discusses
ideas for future work.
Chapter 2
The EOS is developed to suit the particular needs of being able to simulate the
interaction between a robotic hand and objects in the environment. As a result,
this system's modularity is achieved through the agent representation of the phys-
ical, behavioural, and task oriented components. The novel rating system is used
to implement an evolutionary approach to achieving the desired goal and to
enforce the autonomy of the agents. Finally, the whole system is tied together
through the controller which ensures the continuity of the program execution.
The EOS has been implemented in SICStus Prolog v3.3. SICStus Prolog is a logic
programming language which is easy to learn and use and in which code can be
written with few errors. Since SICStus Prolog v3.3 does not have adequate graph-
ing capability, MATLAB v4.2 was used to visualize the status of the EOS, such as
the modeled hand and the modeled objects. This visualization is very valuable in
seeing what the system is doing at a given time.
Lastly, the match-based control process ensures that only the most suited agent is
allowed to execute its actions at the current time.
Each of these three processes is part of the EOS, although in a different fashion
than envisioned by Hayes-Roth [11]. The event-based triggering mechanism is
embedded within each agent and is covered in section 2.3. The strategic planning
process is present through the division of the task into two stages, as explained in
section 2.2 and through the presence of behavioural agents, introduced in section
2.3.2. Finally, the match-based control process is implemented as a novel rating sys-
tem, introduced by Ordean and Payandeh [26], which allows agents to rate their
own confidence and influence the rating of other agents' confidence. The novel
rating system is distributed among all the agents and is discussed in detail in sec-
tion 2.4.
The Opportunistic Control Model is based on the Blackboard system, thus so is its
architecture. As a result, the Opportunistic Control Model architecture consists of
three types of structures. Perception systems are responsible for monitoring the
environment and keeping track of relevant parameters such as the current loca-
tion of the fingers of the robot hand, the current orientation of the hand with
respect to the object, etc. The reasoning system interprets events and decides which
of the actions are to be performed. Action systems cause the physical actions to be
performed.
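The interplay of the three structure types can be sketched in a minimal blackboard-style cycle (all names and the selection rule here are illustrative assumptions, not the EOS implementation):

```python
# Perception writes state onto the blackboard, reasoning selects the
# action system whose pre-condition holds, and that action system runs.
blackboard = {}

def perceive():
    # A perception system recording the current finger positions.
    blackboard["finger_positions"] = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]

def reason(action_systems):
    # The reasoning system picks the first ready action system.
    return next(a for a in action_systems if a["ready"](blackboard))

def act(action_system):
    action_system["run"](blackboard)

move_wrist = {"ready": lambda bb: "finger_positions" in bb,
              "run": lambda bb: bb.update(wrist="moved")}

perceive()
act(reason([move_wrist]))
print(blackboard["wrist"])  # moved
```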
Section 2.3 presents the details of how the Opportunistic Control Model has
evolved into the Enhanced Opportunistic System to deal with the complex prob-
lem of establishing a stable grasp through evolution.
2.2 Stages
In the EOS, the strategic planning process component is partly satisfied by the
division of the task into stages. In this case there are two stages and the goal of the
first stage must be met before the second stage is entered.
Given an initial contact point between one of the fingers and the object, the system
derives a rough shape and location of the object and then it produces a good tip-
prehension grasp of the object by the robot hand. This is accomplished in two
stages: object information gathering (info), and evolution to tip grasp (grasp).
The goal of the info stage is to (partially) identify the object, by requiring that at
least one of the estimated shapes in the environment meet a minimum level of
confidence. An estimated shape is one which the robotic hand has already come in
contact with and, thus, has already partially identified. The confidence in an
object is increased by continuously gathering information about it, as in section
4.1.3. The realization of the goal from its initial condition is accomplished by shape
description and shape matching.
Shape description is the method of identifying an object by probing it and using the
contact point data to estimate a quantitative description of the object. This is the
way in which an object is first identified by the robotic hand. Shape matching is the
method of identifying an object by probing it and using the contact point data to
find a match for the object in a database. This method avoids the burden of having
to estimate an approximate shape of the object. Shape matching is used to verify the
objects already (partially) identified through shape description and to identify new
objects not already encountered.
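Shape matching against a database of estimated shapes can be sketched as follows (a hypothetical residual test against stored sphere estimates; the database entries, residual measure, and data are assumptions, not the thesis's method):

```python
import math

def sphere_residual(points, centre, radius):
    """Worst-case deviation of the contact points from a sphere estimate."""
    return max(abs(math.dist(p, centre) - radius) for p in points)

# Hypothetical database of (partially) identified shapes: centre, radius.
database = {"sphere_a": ((0.0, 0.0, 0.0), 1.0)}

# Contact points sensed during probing.
contacts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, -1.0)]

residuals = {name: sphere_residual(contacts, c, r)
             for name, (c, r) in database.items()}
best = min(residuals, key=residuals.get)
print(best, residuals[best])  # sphere_a 0.0
```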
[Figure 2: info Stage flow chart - iterate Shape Description (ep1 & ep1_set, post_shape) and Shape Matching (ep3, finger1/2/3, wrist) until the info stage goal is met.]
dure, see sections 2.3.2 epl and 4.1. EP1 is executed iteratively at different points
on the object. A confidence value is assigned to the data of EP1 at every execution
based on the shape estimated. These confidence values are reinforced every time
the estimated shape is contacted again. EP1 continues to be executed in different
locations on the object surface until the confidence in one of the estimated shapes
meets a certain minimum value. The minimum value was set at 0.75, although this
value can easily be changed for a higher or lower requirement, as desired. Only
one of the objects is required to meet this criterion, although here too, this can easily
be changed to include all objects.
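The reinforcement loop can be sketched as follows (the thesis fixes only the 0.75 threshold; the particular reinforcement rule and starting confidence below are assumptions for illustration):

```python
# Repeat EP1 until one estimated shape reaches the minimum confidence.
THRESHOLD = 0.75  # the minimum confidence value used in the thesis

def reinforce(confidence, gain=0.3):
    # Assumed rule: each contact converts part of the remaining
    # uncertainty into confidence.
    return confidence + gain * (1.0 - confidence)

confidence = 0.4  # assumed confidence after the first EP1 execution
probes = 0
while confidence < THRESHOLD:
    confidence = reinforce(confidence)  # one more EP1 execution
    probes += 1

print(probes, round(confidence, 3))
```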
Once at least one object has been (partially) identified, shape matching is performed
via EP3 to verify the estimated shape of the object. EP3 is a probing enveloping
grasp, see sections 2.3.2 ep3 and 4.2. The contact points between the robotic hand
and the object are sensed during EP3 and these points are then evaluated for a
match with one of the estimated shapes. The info stage is summed up with the
flow chart in Figure 2.
2.2.2 Stage 2 - Grasp
The initial condition of the second stage is the same as the goal of the first stage,
i.e. at least one object has been (partially) identified. The goal of this stage is to
produce a stable tip-prehension grasp of the object. The realization of the goal
from the initial condition is an evolutionary process guided by the behavioural
agent tip. Agent tip helps coordinate the physical agents finger1, finger2, finger3,
and wrist to achieve the desired effect, see section 2.3.2.
The finger1, finger2, finger3, and wrist agents each take the current situation under
consideration and suggest a possible course of action to transform the enveloping
grasp into a stable tip-prehension grasp. Tip-prehension is used here to denote a
type of grasp where only the fingertips of the robotic hand are in contact with the
object. The grasp stage is illustrated with the help of the flow chart in Figure 3.
[Figure 3: grasp Stage flow chart - Ensure Tip Grasp Behaviour (tip), Move Wrist (wrist), Actuate Finger1 Joints (finger1), Actuate Finger2 Joints (finger2), Actuate Finger3 Joints (finger3), repeated until the grasp stage goal is met.]
The agents correspond to the action systems and they are the ones which execute
actions when they are permitted.
[Figure 4: EOS structure - the agents (each with a rating and a body), the Information Board, and the simulated environment.]
These perceived state variables are known to the agents, controller, and environ-
ment simulation modules, but the actual state variables are only known to the
environment simulation modules.
<agent_name1>:<ib>
<agent_name2>:<ib>
...
<agent_nameN>:<ib>
Figure 5: Information Board Representation
The IB also contains agent-specific status data which can be accessed by only one
agent. An example is the number of times an agent is called. The IB can be repre-
sented as shown in Figure 5.
The way in which the status variables are stored is always the same; each variable
corresponds to one asserted clause on the IB. This clause consists of two parts, the
label of the variable and its corresponding value, i.e. [label, Value]. The
value can be a number, a string of characters, or a list thereof.
Note that in Prolog, a capitalized word represents a variable value, and a lower
case word represents the name of a predicate/function or the label of a variable on
the IB.
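The IB's asserted [label, Value] clauses behave much like a key-value store; a Python analogue (illustrative only; the thesis's IB is a set of Prolog clauses, and these helper names are assumptions) might look like:

```python
# A dict keyed by "agent:label" stands in for the IB's asserted clauses.
ib = {}

def assert_on_ib(key, value):
    # Overwriting an entry mirrors retracting and re-asserting a clause.
    ib[key] = value

def get_from_ib(key):
    return ib[key]

assert_on_ib("finger1:contact", [True, True, False])
assert_on_ib("finger1:angles", [0.1, 0.4, 0.2])
print(get_from_ib("finger1:contact"))  # [True, True, False]
```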
2.3.2 Agents
The agents are responsible for executing actions, given the current stage and sta-
tus of the system. As a result there are several of them, each one specializing in a
different type of action.
All agents have the same structure and are subject to the rating system, see section
2.4, in an equal manner. As seen in Figure 4, each agent is made up of three parts,
each part having a counterpart in the Opportunistic Control Model:
(i) The pre-condition is the triggering mechanism for controlling the partici-
pation of agents during each cycle.
(ii) The rating is a means of determining the value of the agent's utility dur-
ing each cycle. This is part of the overall rating system.
(iii) The body consists of a set of event-action clauses to be executed when
the agent is allowed to do so. Event-action clauses are actions which are
executed when the corresponding event occurs.
Given its structure, an agent is defined as in Listing 1:
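(The thesis's Listing 1 is in SICStus Prolog; purely as an illustration, a hypothetical Python rendering of the same three-part structure, with assumed names, could look like this.)

```python
class Agent:
    """Sketch of an agent: pre-condition, rating, and event-action body."""
    def __init__(self, name, precondition, rating_fn, event_actions):
        self.name = name
        self.precondition = precondition    # (i) triggering mechanism
        self.rating_fn = rating_fn          # (ii) utility this cycle
        self.event_actions = event_actions  # (iii) {event: action} clauses

    def rating(self, ib):
        # An agent whose pre-condition fails contributes no utility.
        return self.rating_fn(ib) if self.precondition(ib) else 0.0

    def run(self, ib, event):
        action = self.event_actions.get(event)
        if action:
            action(ib)

finger1 = Agent("finger1",
                precondition=lambda ib: ib.get("stage") == "info",
                rating_fn=lambda ib: 0.85,
                event_actions={"no_contact": lambda ib: ib.update(moved=True)})

ib = {"stage": "info"}
print(finger1.rating(ib))  # 0.85
finger1.run(ib, "no_contact")
print(ib["moved"])  # True
```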
These agents control physical parts of the robotic hand and the actions, which
these agents perform, are a function of the current system stage.
During the info stage, the fingers are required to make contact with the object with
at least two of their links. The algorithm used for this purpose simulates the
underactuated finger presented by Pollard [31]. This means that the finger's joint2
angle is actuated while holding joint3 angle fixed, until link2 makes contact. Once
link2 makes contact with the object, the joint3 angle is actuated until link3 makes
contact. Ensuring that link2 makes contact prior to link3, means that the wrap
grasp is guaranteed, Figure 6.
[Figure 6: Control of an Underactuated Finger - panels include (d) Link2 Contact, Joint3 Actuated; (e) Link2 Contact, Link3 Contact.]
The control of each finger during the info stage is implemented as shown in the
pseudo code segment for fingerN:
/* Get fingerN's link contact status. */
get_from_IB(fingerN:contact,
    [L1contact, L2contact, L3contact]),
if and(L2contact, not(L3contact))
then /* only link2 has made contact */
    increase(Joint3, NewJoint3),
    assert_on_IB(fingerN:angles,
        [Joint1, Joint2, NewJoint3])
elseif and(not(L2contact), L3contact)
then /* only link3 has made contact */
    assert_on_IB(fingerN:angles,
        [Reset1, Reset2, Reset3]),
    get_from_IB(wrist:opportunistic, WristO),
    increase(WristO, NewWristO),
    put_on_IB(wrist:opportunistic, NewWristO)
elseif and(not(L2contact), not(L3contact))
then /* neither link2 nor link3 has made contact */
    increase(Joint2, NewJoint2),
    assert_on_IB(fingerN:angles,
        [Joint1, NewJoint2, Joint3])
else /* both link2 and link3 have made contact */
    <do nothing>
/* Decrease fingerN's opportunistic sub-rating. */
get_from_IB(fingerN:opportunistic, FingerO),
decrease(FingerO, NewFingerO),
assert_on_IB(fingerN:opportunistic, NewFingerO).
Listing 2: Finger Control During info Stage
During the grasp stage, the joint angles of the fingers are actuated so that only the
fingertips make contact with the object, as shown in the following pseudo code
segment:
/ * Get fingerN's link contact status. * /
get_from_IB(fingerN:contact,
    [L1contact,L2contact,L3contact]),
if or(L1contact,L2contact)
then / * link1 or link2 has made contact * /
    assert_on_IB(fingerN:angles,
        [Reset1,Reset2,Reset3]),
    get_from_IB(wrist:opportunistic,WristO),
    increase(WristO,NewWristO),
    put_on_IB(wrist:opportunistic,NewWristO),
elseif L3contact
then / * link3 has made contact * /
    get_from_IB(fingerN:fingertip,TipContact),
    if TipContact = true
    then / * have fingertip contact * /
        <do nothing>
    else / * have link3 contact * /
        / * Actuate joint2 angle. * /
        decrease(Joint2,NewJoint2),
        assert_on_IB(fingerN:angles,
            [Joint1,NewJoint2,Joint3])
else / * have no finger contact * /
    / * Actuate joint2 and joint3 angles. * /
    increase(Joint2,NewJoint2),
    increase(Joint3,NewJoint3),
    assert_on_IB(fingerN:angles,
        [Joint1,NewJoint2,NewJoint3])
/ * Decrease fingerN's opportunistic sub-rating. * /
get_from_IB(fingerN:opportunistic,FingerO),
decrease(FingerO,NewFingerO),
assert_on_IB(fingerN:opportunistic,NewFingerO).
Listing 3: Finger Control During Grasp Stage
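The branching in Listing 3 reduces to a small decision rule. The following Python sketch is illustrative only; the function name, return tokens, and flag arguments are invented here to summarize the logic, and are not part of the EOS:

```python
# Illustrative summary of the grasp-stage rule in Listing 3.
# Inputs are the link contact flags and the fingertip-contact flag.
def grasp_stage_action(l1_contact, l2_contact, l3_contact, tip_contact):
    if l1_contact or l2_contact:
        # A proximal link touched the object: reset the finger
        # and raise the wrist's opportunistic sub-rating.
        return "reset_and_boost_wrist"
    if l3_contact:
        if tip_contact:
            return "do_nothing"           # desired fingertip contact
        return "decrease_joint2"          # roll link3 onto the fingertip
    # No contact yet: keep closing the finger.
    return "increase_joint2_and_joint3"
```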
wrist agent
The wrist agent is a physical agent and controls the motions of the wrist. The pos-
sible wrist directions of motion are up/down, forward/backward, and left/right.
The wrist agent can participate in execution during either stage, but its actions do
differ from one stage to another.
During the grasp stage, the wrist assists the fingers in trying to achieve only finger-
tip contact with the object. The wrap grasp has already centered the hand around
the object by having to envelop the object, thus, during this stage, achieving the
tip-prehension grasp mainly involves forward and backward motion of the wrist.
In the case of complex objects, the fingers may need to move around secondary
features, thus the up/down and left/right motions are used. Table 4 is a summary
of the events which drive the movements of the wrist during the grasp stage.
The wrist determines whether it is off-center from the object in any direction by
estimating the center of the object from its knowledge about the object.
ep3 agent
The job of the ep3 agent is twofold: (i) coordinate the behaviour of the physical
agents to achieve a wrap grasp, and (ii) verify the shape of the object in contact.
As a result, ep3 is only active during the info stage. In both cases, the opportunistic
sub-rating is used to control the behaviour of the physical agents, as discussed
below.
It is possible that ep3 acts when the fingers have not all gotten a chance to make
contact with the object. In this case, ep3 resets the opportunistic sub-ratings, see
section 2.4, of the finger agents to 0.85 to ensure that each finger can act so as to
facilitate a contact with the object. This is to ensure that if a finger agent's rating
ends up much lower than the others, the finger does not become stuck there.
Listing 4 shows the pseudo code for resetting these finger opportunistic sub-ratings.
assert_on_IB(finger1:opportunistic,0.85),
assert_on_IB(finger2:opportunistic,0.85),
assert_on_IB(finger3:opportunistic,0.85)
Listing 4: Keeping the finger Agents Going
If the fingers have all made contact, but the configuration of the wrap grasp has
not yet been achieved, ep3 resets the fingers so that they can try again:
assert_on_IB(ep3:opportunistic,0.20),
assert_on_IB(ep1_set:opportunistic,0.0)
Listing 5: Stimulating the finger Agents
Once the wrap grasp has been achieved the following pseudo code describes the
second task of ep3:
assert_on_IB(ep3:opportunistic,0.0),
assert_on_IB(finger1:opportunistic,0.0),
assert_on_IB(finger3:opportunistic,0.0),
assert_on_IB(shape_verify,true),
Listing 6: Verifying the Shape Grasped
find_foreign is discussed in more detail in section 4.2.
Putting all three pseudo code segments together, the skeleton of ep3 looks like:
Figure 7: Grasp Triangle
In task (i), tip increases the opportunistic sub-rating of wrist if the grasp plane
does not intersect the center of the object, see section 4.4.
In task (ii), the opportunistic sub-ratings of finger1, finger2, and finger3 are altered
as a function of the offset of the corresponding grasp-triangle angle, Figure 7, from
60°. The 60° angle represents the optimum configuration, see section 4.4. Using
the offset from this ideal value as a way of setting the opportunistic sub-ratings of
the finger agents is one way in which the rating system achieves its opportunistic
nature. The opportunity here is to allow the fingers which really need to act to do
just that. The following pseudo code segment illustrates the task (ii) actions.
remainder = remainder_of(#EP1_probes/3),
if remainder = 0,
then <find new point above/below - vertical move>
else <find new point in z-plane - horizontal move>
Listing 9: Choosing the Next Contact Point for EP1 Execution
Unless the number of contacts is a multiple of 3, the next point is found by mov-
ing in the horizontal direction, i.e. positive or negative y-direction, depending on
where the next contact can be found, see Figure 8(a). If the number of contact
points is a multiple of 3, then the next point is sought in the vertical direction, i.e.
a move in positive or negative z-direction, depending on where the next contact
point can be found, see Figure 8(b).
The choice of "3" as the interval at which the hand moves in the vertical
direction is arbitrary.
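The probe-direction rule above can be stated compactly. This Python fragment is a sketch under the stated convention (every third probe moves vertically); the function name is an assumption:

```python
def next_probe_direction(num_probes):
    """Direction in which to search for the next EP1 contact point."""
    if num_probes % 3 == 0:
        return "vertical"      # move in the positive or negative z-direction
    return "horizontal"        # move in the positive or negative y-direction
```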
epl agent
The task of ep1 is simply to perform the EP1 exploratory procedure during the info
stage. EP1 is performed at the current point of contact, unless a list of foreign con-
tact points (from EP3) has already been asserted. In that case, EP1 is performed at
each of the points in the list and the resultant data, see section 4.1.3, is asserted on
the IB. A foreign contact point is a contact point which ep3 cannot match to any
currently estimated shapes.
The sub-routine for actually performing EP1, ep1:do_ep1, was written by
Charlebois [1] and is called by ep1 when necessary.
post-shape agent
Once ep1 has been executed, its output is available on the IB. The task of the
post-shape agent is to postulate an object shape from this data, see section 4.1.2, and
as such it is active during the info stage. The postulated shape is appended to the list
of estimated shapes, which resides on the IB. If the shape already exists, then the
confidence in the estimated shape on the IB is increased, as discussed in section
4.1.3.
In addition to processing ep1's output data, post-shape sets the ep1_set opportunis-
tic sub-rating to 1.00, in case there is a need for more EP1 probing. It also resets its
own opportunistic sub-rating to 0.00. The following is the pseudo code segment
which is used to implement post-shape:
get_from_IB(pshape,EP1_data),
if <there is no EP1_data>
then <do nothing>,
else
    get_from_IB(est_shape,Estimated_shapes),
    translate(EP1_data to New_shapes),
    append(New_shapes to Estimated_shapes
        = New_Estimated_shapes),
    assert_on_IB(est_shape,New_Estimated_shapes),
assert_on_IB(ep1_set:opportunistic,1.00),
assert_on_IB(post_shape:opportunistic,0.00).
Listing 10: post-shape Implementation
The estimated shapes are stored under the label est_shape on the IB and the
structure of the data associated with this label is in the form shown below:
Should there be no estimated shapes at the current time, the label is associated
with an empty list. The translate predicate verifies whether the ep1 data
matches a current estimated shape by first searching for a matching shape class,
and then for a matching radius value among the list of estimated shapes,
est_shape. It is assumed that there cannot be more than one shape in the environ-
ment with the same radius value.
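The class-then-radius matching performed by translate can be sketched as follows in Python; the list layout [shape_class, radius, confidence] and the radius tolerance are assumptions made for illustration:

```python
def merge_estimate(estimated, shape_class, radius, tol=1.0):
    """Reinforce a matching estimated shape, or append a new one.

    estimated: list of [shape_class, radius, confidence] entries.
    """
    for entry in estimated:
        # Match first on shape class, then on radius value.
        if entry[0] == shape_class and abs(entry[1] - radius) <= tol:
            entry[2] += 1          # existing shape: bump its confidence
            return estimated
    estimated.append([shape_class, radius, 1])
    return estimated
```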
orientation agent
The orientation agent has the task of centering (3D translations) and orienting
(rotating about an axis) the hand in preparation for the wrap grasp. Thus, it is
active during the info stage. Although the hand is said to be oriented about the
object, it is in fact the objects in the environment which are oriented with
respect to the hand, for ease of computation. In addition, it is important to
know that, given the global x-y-z coordinate frame, the object lies somewhere
within the first quadrant of this coordinate frame and the hand approaches the
object from the "front" with approach vector (x,0,0), as in Figure 9.
Since the curvature of the object has been estimated, the hand is centered with
respect to the curved object in the y- and z-direction. For example, the following is
the code segment used to center the hand about a sphere:
/ * [Xo,Yo,Zo,Radius] are the parameters describing * /
/ * the center and radius of the sphere in contact. * /
member([sphere,[Xo,Yo,Zo,Radius]],Objects),
/ * Determine translation vector for applying to the * /
/ * sphere so that it is centered w.r.t. wrist, * /
/ * i.e. instead of moving wrist, move object. * /
eval(Xc+Radius,NewXc),
add([NewXc,Yc,Zc],[-Xo,-Yo,-Zo],Transition),
/ * Apply transition vector to Objects in the * /
/ * environment. * /
orientation:move(Transition,Objects,[],NewObjects),
/ * Update Environment Modification data on the IB. * /
bb_put(env_mod,[Transition,[Xo,Yo,Zo],0,x]),
/ * Move the wrist forward (in the x-direction) by * /
/ * (Radius-13) - this is to give the wrist an * /
/ * initial push toward the object. * /
eval(Wx+Radius-13,NewWx),
/ * New wrist relocation target is: * /
bb_put(wrist_target,[NewWx,Yc,Zc])
Listing 11: Centering the Wrist about the Grasped Object
There are two main reasons for orienting the hand with respect to the object:
The primary feature is considered to be the object which the robotic hand has
decided to grasp. The secondary features are additional object shapes which have
been identified in the environment and which may obstruct the grasp of the pri-
mary feature. If no secondary features exist, the object is said to be simple, Figure
10(a); otherwise, the object is said to be complex, Figure 10(b).
(a) Front Grasp (b) Front Side Grasp (c) Top Grasp
Figure 11: Types of Grasps
Grasping a simple object is done using a Front Grasp or a Front Side Grasp, Figure
11, depending on the simple object shape.
Grasping an object shape in the presence of secondary features requires some rea-
soning as to the location of the secondary features with respect to the primary fea-
ture. A secondary feature is classified to lie above/below/in-front/behind/
right/left/other of the primary feature, as in Table 5.
Once all secondary feature locations are classified, possible approaches are evalu-
ated. A possible approach is one which does not encounter any secondary fea-
tures en route to grasping the main feature.
Given this information, the hand attempts to orient itself so as to find a clear
approach toward the primary feature. The objects in the environment may be
rotated about the z-axis in preparation for attempting a Front Side Grasp. Table 6
illustrates how a complex shape may be grasped given the availability of an
approach. The preference of a certain approach decreases as you move down the
table rows.
location    z-rotation    grasp
above       -             Top Grasp
below       -             can't be done
in-front    -             Front Side Grasp
behind      180°          Front Side Grasp
right       -90°          Front Side Grasp
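Table 6 amounts to a lookup from the location of the clear approach to a z-rotation and a grasp type. A Python rendering of the table (the tuple encoding is illustrative; None marks "no rotation" or an impossible grasp):

```python
# (rotation about z in degrees, grasp type); ordered by preference.
APPROACH = {
    "above":    (None, "Top Grasp"),
    "below":    (None, None),              # cannot be done
    "in-front": (0,    "Front Side Grasp"),
    "behind":   (180,  "Front Side Grasp"),
    "right":    (-90,  "Front Side Grasp"),
}
```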
end agent
The task of the end agent is self explanatory. end ensures that the program exits
gracefully when deemed necessary by one of the agents or the controller. When
the ending of the program is necessary, the opportunistic sub-rating of end is set to
1.00.
2.3.3 Controller
The controller is responsible for coordinating the system's agents to achieve a
goal. In this case, the goal is that of achieving a stable grasp, i.e. an active closure
grasp. The manner in which the controller coordinates the components is through
the rating system, which is discussed in detail in section 2.4.
The controller starts off with a list of names of all agents in the system. The label
associated with this list is agents:
If an agent is not in the agents list, then it cannot be considered for action. This
list is pruned by testing the pre-condition of every agent in the agents list. If an
agent's pre-condition does not hold, then the agent's name is not included in the
pruned version of the agents list, the agents_pre list. The pre-condition usually
requires that the system be in a given stage. For example, the wrist agent's pre-con-
dition requires that the current stage be the grasp stage, else the pre-condition
fails:
wrist:pre:- bb_get(stage,S),
    if(S = 'grasp', true, fail).
Listing 12: wrist Pre-condition
Only the following agents are allowed to participate during the grasp stage: {end,
finger1, finger2, finger3, wrist, tip}. Thus, if the current stage is grasp, then the
agents_pre list is:
(agents_pre,[end,finger1,finger2,finger3,wrist,tip])
Once the agent list has been pruned, the controller's decision of which agent to
allow to execute in the current cycle is a function of the rating of each agent. Each
agent calculates its rating as shown in section 2.4 and the list, rating, is assembled
with the agent's name and rating. For example,
Given the rating list, the controller simply chooses the highest rated agent,
ExecuteAgent, as in the code segment below:
findall(Rate,member([Agent,Rate],rating),RateSet),
max_value(RateSet,MaxRate),
member([ExecuteAgent,MaxRate],rating),
call(ExecuteAgent:body)
Listing 13: Choosing the Agent to be Executed
In this case ExecuteAgent is wrist. The order of the agents in the list is not
important, unless there is a tie among agent ratings. In that case, the first item in
the list with the highest rating is selected.
The controller's request for the agent's pre-condition, <agent-name>:pre, sim-
ply succeeds or fails, depending on whether the pre-condition is true or false.
However, the controller's request for an agent's rating, <agent-name>:rating(X),
returns the value, X.
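The controller's cycle, prune by pre-condition and then execute the highest-rated survivor with ties broken by list order, can be sketched in Python (the function names and callable interfaces are assumptions made for illustration):

```python
def controller_cycle(agents, pre, rating):
    """agents: names in list order; pre/rating: per-agent callables."""
    # Prune: keep only agents whose pre-condition holds.
    agents_pre = [a for a in agents if pre(a)]
    if not agents_pre:
        return None
    # Select the first agent (in list order) with the highest rating.
    best = max(rating(a) for a in agents_pre)
    return next(a for a in agents_pre if rating(a) == best)
```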
If we let Aj be an agent, then the rating of agent Aj is P(Aj). The rating of the agent
is also the confidence in the agent. Next, we assume that each agent rating consists
of several sub-ratings, which are combined with the use of weighting factors. If
we let P(Bi) be the weighting of sub-rating Bi, then P(Aj|Bi) is the sub-rating of
agent Aj with respect to Bi. Thus, we can calculate P(Aj) as a function of P(Aj|Bi)
and P(Bi) using Bayes' Rule, as in equation (1).

P(Aj) = Σ (i = 1 to n) P(Aj|Bi) × P(Bi),  where 1 ≤ j ≤ m   (1)
The only restrictions implied by equation (1) are as indicated in equation (11-3).
Equation (1) can be used to combine several traditional competing sub-ratings
into one rating.
Halpern et al. [9] have introduced two types of knowledge: externally ascribed
and explicit knowledge. The externally ascribed knowledge is a default type of
knowledge, acquired from the system engineer, while the explicit knowledge is
knowledge gained from the system's sensors. Let us call the first type default and
the second type opportunistic. Thus, let there be two sub-ratings, i.e. n=2. The
default sub-rating has a fixed, default value and the opportunistic sub-rating is
variable and changes as discussed in sub-section 2.4.2.
Let P(Bd) be the weighting of the default sub-rating and let P(Bo) be the weighting
of the opportunistic sub-rating. Then the default sub-rating of agent Aj is
P(Aj|Bd), and the opportunistic sub-rating is P(Aj|Bo). Thus, the rating for agent
Aj is P(Aj), equation (2).
eval(DefaultSubRating * DefaultWeight +
     OpportunisticSubRating * OpportunisticWeight,
     Rating).
Note that while the sub-rating values are agent specific, the weighting factors are
not.
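With n = 2, equation (1) reduces to the weighted sum computed by the eval call above. A one-line Python equivalent (the default 0.50/0.50 weights mirror the equal-confidence split used below):

```python
def agent_rating(default_sub, opportunistic_sub,
                 w_default=0.50, w_opportunistic=0.50):
    """Equation (2): weighted sum of the two sub-ratings."""
    return default_sub * w_default + opportunistic_sub * w_opportunistic
```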
Example
Let there be six agents in the agents_pre list,
and assume that the system has equal confidence in the two sub-ratings. Then, the
weight of the default sub-rating and the weight of the opportunistic sub-rating is,
(weight,[0.50,0.50])
If the default, P(Aj|Bd), and opportunistic, P(Aj|Bo), sub-ratings of the agent Aj
(1 ≤ j ≤ 6) are as indicated in Table 7, then, using equation (2), the total rating of
finger1 is calculated as follows:

P(A1|Bd) × P(Bd) + P(A1|Bo) × P(Bo)
= 0.60 × 0.50 + 0.20 × 0.50   (3)
= 0.40

The rating of the other agents can be calculated in a similar fashion.
The total rating of the agents is as shown in Table 7. Obviously, wrist has the high-
est rating among the six agents in this example.
This flexibility allows the sub-rating to take advantage of the appropriate oppor-
tunity to influence the agent's ability to act. The utility of each of these methods is
discussed below.
First, the agent which executes its action, e.g. tip, may alter the sub-rating of any
other agent, e.g. wrist, including its own, depending on the current status of the
system. The current status of the system is defined by the perceived state variables
on the information board. These variables, e.g. GraspPlane, can be used as mea-
sures for the sub-rating.
In the following example, Listing 14, the IB label GraspPlane is associated with a
true/false value which indicates whether the grasp plane is acceptable or not
during the current cycle, see section 4.4 for desirable grasp plane location. If the
grasp plane is not acceptable, then the opportunistic sub-rating of wrist is set to a
high value, e.g. 0.7, relative to the other agents. This allows the wrist to consider a
potential movement and improve the location of the grasp plane.
if(GraspPlane,
    / * then * /
    true,
    / * else * /
    bb_put(wrist:opportunistic,0.7))
Listing 14: Currently Active Agent Sets Opportunistic Sub-rating
A second way in which an agent's, e.g. orientation, opportunistic sub-rating may
be modified is by the controller. The controller may choose to do this so as to facil-
itate the transition of the system from one stage to another according to a pre-
determined plan, or to ensure that the goal of the current stage is being achieved.
For example, the end of the shape description period in the info stage is marked
when one of the estimated shapes has achieved a confidence greater than 75%. Up
to this time the sub-rating of ep3 has been zero, thus in order to ensure that shape
matching begins, the orientation sub-rating is set to maximum, 1.0. This ensures
that the hand orients itself properly with respect to the objects it has identified,
before allowing ep3 to take over and coordinate the physical agents. The following
code segment illustrates this implementation:
The need for careful choice of the opportunistic sub-rating alteration value can be
illustrated by extending the Example from page 37. If the sub-rating of agent
finger1 is to be increased prior to the calculation of the ratings shown in Table 7,
then the value of this increase should be large enough to produce a total rating
closer to 0.65, which is the value of the highest rated agent in the table.
Given Case I, finger1 would be a contender for execution during the upcoming
cycle. However, given Case II, finger1 would need to wait until the rating of wrist
decreases below its own rating before it could be executed. Either of these scenar-
ios could be used, depending on the desired effect.
Chapter 3
Environment Simulation
As shown in Chapter 2, Figure 4, the EOS interacts with a simulated environment.
This chapter discusses the representation of this simulated environment, which
includes the robotic hand, objects to be grasped, and the method for detecting
points of contact between the robotic hand and the objects in the environment.
The parameters of the simulated environment are stored on the information
board of the EOS as discussed in the following sub-sections.
The wrist serves as a relative coordinate frame for the fingers, thus its location,
(Wx,Wy,Wz), Figure 12, in the global coordinate frame must be specified. The
coordinates of the wrist center are given the datum label wrist_coord
and are asserted on the IB as follows:
(wrist_coord,[Wx,Wy,Wz])
Figure 12: Wrist Location & Dimension
Knowing the location of the wrist in the global coordinate frame, the wrist relative
coordinate frame can be specified. This frame of reference is a translation of the
global reference frame to the wrist center, as shown in Figure 13.
Figure 13: Wrist Relative Coordinate Frame
Next, the coordinates of the origin of each finger, with respect to the wrist coordi-
nate frame, are asserted separately on the IB, by specifying the origin of link 1,
org_f1/2/3, of each finger with respect to the wrist center, Figure 14.
(org_f1,[0,-15,+40])
(org_f2,[0,+15,+40])
(org_f3,[0,0,-40])
Listing 16: Asserting Finger Origins
Figure 14: Finger Link Origins
Knowing the location of the wrist center, (Wx,Wy,Wz), in the global coordinate
frame and the location of the origin of a finger, (X,Y,Z), in the wrist coordinate
frame, the origin of the finger in the global coordinate frame, (Xo,Yo,Zo), can be
calculated as in equation (4): (Xo,Yo,Zo) = (Wx+X, Wy+Y, Wz+Z).
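Equation (4) is a component-wise frame translation; a minimal Python sketch (the function name is an assumption):

```python
def finger_origin_global(wrist, origin_in_wrist):
    """(Xo,Yo,Zo) = (Wx,Wy,Wz) + (X,Y,Z), component by component."""
    return tuple(w + o for w, o in zip(wrist, origin_in_wrist))
```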
Two more pieces of information are needed to determine the coordinate of each
fingertip contact in the global coordinate frame: the length of each link and the
angle of each joint. The link lengths are associated with the datum labels
f1_links, f2_links, and f3_links and the joint angle labels are f1_angles,
f2_angles, and f3_angles:
(f1_links,[20,50,50])
(f2_links,[20,50,50])
(f3_links,[20,50,50])
(f1_angles,[Theta_joint1,Theta_joint2,Theta_joint3])
(f2_angles,[Theta_joint1,Theta_joint2,Theta_joint3])
(f3_angles,[Theta_joint1,Theta_joint2,Theta_joint3])
Listing 17: Asserting Finger Configuration
Given (i) the origin of the finger in the global frame, (Xo,Yo,Zo), (ii) the length of
each link, and (iii) the value of each joint angle, equations (1-4) and (1-7) can be
used to calculate the coordinate of the fingertip of each finger in the global frame.
These coordinates are asserted with the labels f1_coord, f2_coord, and
f3_coord.
(f1_coord,[Tip1X,Tip1Y,Tip1Z])
(f2_coord,[Tip2X,Tip2Y,Tip2Z])
(f3_coord,[Tip3X,Tip3Y,Tip3Z])
Listing 18: Asserting Fingertip Coordinates
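For a planar three-link finger, the fingertip position follows from accumulating the joint angles along the chain. This standard forward-kinematics sketch in Python is an illustration only, not the thesis's equations (1-4) and (1-7); the function name and the planar simplification are assumptions:

```python
import math

def fingertip_planar(links, angles):
    """links: [l1, l2, l3]; angles: joint angles in radians.

    Returns the fingertip (x, y) in the finger's own plane.
    """
    x = y = 0.0
    theta = 0.0
    for length, angle in zip(links, angles):
        theta += angle                # accumulate joint rotations
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y
```

With the link lengths [20, 50, 50] from Listing 17 and all joints at zero, the fingertip lies 120 units along the finger axis.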
The fingertip calculations are performed at the end of every cycle by the respec-
tive finger simulation modules: sim:f1_coord, sim:f2_coord, and
sim:f3_coord. The finger simulation modules are responsible for:
The finger status consists of the following data, in addition to the items from List-
ing 16 to Listing 18:
f1_contact keeps track of where the contact points of each link of finger 1 are.
Link1/2/3_contact is a list of contact points, such as:
f1joint_coord contains the location of every finger joint and fingertip in global
coordinates. This information may be needed by other agents. Thus, after this
information is calculated once, its results are stored on the IB to be referred to dur-
ing the current cycle.
f1_locked maintains the status of the finger joints. If a joint is locked then it may
not be actuated during the current cycle.
The following pseudo code listing defines the finger 1 simulation module. Finger
2 and 3 simulation modules are identical in function and structure.
sim:f1_coord:-
    / * Calculate fingertip coords w.r.t. finger * /
    / * frame. * /
    eval(<fingertip x-coord> = TempX),
    eval(<fingertip y-coord> = TempY),
    eval(<fingertip z-coord> = TempZ),
    / * Calculate fingertip coords w.r.t. global * /
    / * frame. * /
    eval(<fingertip x-coord> = Xf1),
    eval(<fingertip y-coord> = Yf1),
    eval(<fingertip z-coord> = Zf1),
    assert_on_IB(f1_coord,
        [Xf1,Yf1,Zf1]),
    / * Calculate the joint coords of each link. * /
    f1_joints(Joint1,Joint2,Joint3,FingerTip),
    / * Assert the location of these joints for * /
    / * future use. * /
    assert_on_IB(f1joint_coord,
        [Joint1,Joint2,Joint3,FingerTip]),
    / * Determine contact between link1 (described * /
    / * by its end-points: Joint1 and Joint2) and * /
    / * Objects in the environment. * /
    Objects,[],Link1_contact),
    if Link1_contact
    then Lock1 = true,
    else Lock1 = fail
    / * Determine contact between link2 and Objects. * /
    Objects,[],Link2_contact),
    if Link2_contact
    then Lock2 = true,
    else Lock2 = fail
    / * Determine contact between link3 and Objects. * /
    Objects,[],Link3_contact),
    if Link3_contact
    then Lock3 = true,
    else Lock3 = fail
(env,[Object1,...,ObjectN]),
where N is the number of objects asserted in the environment. The location of the
objects in the environment is given with respect to the global coordinate frame.
Sphere Objects are described with respect to the location of their center and the
length of their radius:
Object = [sphere,[Xcenter,Ycenter,Zcenter,Radius]]
However, cylinder Objects require a few more parameters. The center of the cyl-
inder and its radius are needed, in addition to the cylinder's length and its orien-
tation. First of all, the cylinder length is restricted to lie only in the x-, y-, or z-
direction, as in Figure 15.
Figure 16: Representing a Cylinder
The objects in the environment may be displaced or rotated about the x-, y-, or z-
axis. When this happens, the env-mod label is updated as:
(env_mod,
 [Displacement,[Xo,Yo,Zo],RotAngle,RotationAxis])
where, Displacement = a vector
[Xo,Yo,Zo] = the center of the object which was in contact with the
robotic hand at the time of rotation
RotAngle = the angle through which the objects were rotated
RotationAxis = the axis about which the objects were rotated.
z = ((z2 - z1)/(y2 - y1)) × (y - y1) + z1, where (y2 - y1) ≠ 0
Checking for each of the above cases is done with a series of i f - then-else
statements.
Since the equation for each shape class is different, one contact detection algo-
rithm is needed for the sphere and one for the cylinder objects. The equation for a
sphere with center (Xcenter,Ycenter,Zcenter) and radius Radius is:

(x - Xcenter)^2 + (y - Ycenter)^2 + (z - Zcenter)^2 = Radius^2

However, due to the three possible orientations of the cylinder, there are three
possible equations for the cylinder; for example, for a cylinder whose length lies
in the x-direction:

(y - Ycenter)^2 + (z - Zcenter)^2 = Radius^2

with analogous equations for the y- and z-aligned cases.
Once the points of intersection between the line segment and the objects are
found, only one of the points is taken as a contact point. However, it is possible
that one link be in contact with more than one object shape. Assuming that the
links are rigid, as are the objects, then it is not possible to have more than one con-
tact point between a sphere or a cylinder and a line segment. The only reason why
it would appear that there is more than one contact point is that there is some
overlap of the line segment with the object. The amount of overlap between the
finger link and an object in the simulated environment is equivalent to the force
which the link is applying to the object in the real world, given that the finger link
and the object are rigid objects. The bigger the overlap, the more force is applied.
However, there is only one point at which this force is applied while the overlap
situation produces two points of intersection. The contact point which is chosen is
the one which is nearest to the preceding joint.
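The nearest-to-the-preceding-joint rule can be written directly; a Python sketch using plain squared Euclidean distance (function names are assumptions):

```python
def pick_contact(joint, intersections):
    """Of several link/object intersection points, keep the one
    closest to the preceding joint."""
    def dist2(point):
        return sum((a - b) ** 2 for a, b in zip(joint, point))
    return min(intersections, key=dist2)
```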
Given the wrist center, (Wx,Wy,Wz), and wrist radius, R, the following parame-
ters can be calculated:
Wymin = wrist Y minimum
Wymax = wrist Y maximum
Wzmin = wrist Z minimum
Wzmax = wrist Z maximum
A box is drawn around the object to be tested for wrist contact. The resultant box
is the smallest box which can be drawn around this object, i.e. the bounding box,
and it requires the following parameters to be calculated:
MinX = minimum X value for the shape
MaxX = maximum X value for the shape
MinY = minimum Y value for the shape
MaxY = maximum Y value for the shape
MinZ = minimum Z value for the shape
MaxZ = maximum Z value for the shape
Then, contact between the wrist and an object is said to have occurred if the fol-
lowing conditions are satisfied:
MinX ≤ Wx ≤ MaxX
and
{ Wymin ≤ MinY ≤ Wymax or Wymin ≤ MaxY ≤ Wymax }   (9)
and
{ Wzmin ≤ MinZ ≤ Wzmax or Wzmin ≤ MaxZ ≤ Wzmax }
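Conditions (9) form an axis-aligned bounding-box overlap test; a Python sketch under the same parameter names (the function name is an assumption):

```python
def wrist_contact(wx, wymin, wymax, wzmin, wzmax, box):
    """box = (MinX, MaxX, MinY, MaxY, MinZ, MaxZ) of the object."""
    minx, maxx, miny, maxy, minz, maxz = box
    x_ok = minx <= wx <= maxx
    y_ok = wymin <= miny <= wymax or wymin <= maxy <= wymax
    z_ok = wzmin <= minz <= wzmax or wzmin <= maxz <= wzmax
    return x_ok and y_ok and z_ok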
Chapter 4
In order to establish a stable grasp of an object we are faced with two possible sit-
uations:
(i) grasping an object with a priori knowledge
(ii) grasping an unknown object
Grasping the known object becomes an exercise in calculating points on this object
where the fingers could be placed to render the grasp stable [31]. However, in the
case of the unknown object, it is first necessary to, at least partially, reconstruct the
object's shape and determine its location, through haptic exploration. Unfortu-
nately, it is not always the case that an object can be completely known, thus the
second case must be addressed. The challenge is increased by the large variety of
objects, although the class of objects has been constrained to curved surfaces, such
as spheres and cylinders. Due to manipulator size and shape constraints, only a
sub-class of the whole set of objects is suitable for grasping with a particular
robotic hand.
Partial object reconstruction means that the object is explored to get an idea of its
size, shape class, and location. The size and shape class of the object are used to
establish the ability of the robotic hand to grasp the object. Shape classification of
the object, e.g. spherical, cylindrical, etc., is performed to get an idea as to how the
object should be approached for grasping.
Achieving this partial object reconstruction employs the aid of two haptic explor-
atory procedures (EPs): EP1 and EP3. EP1 is the exploratory procedure used for
shape description and EP3 is the exploratory procedure used for shape matching,
as already introduced in section 2.3.
Using EP3 in conjunction with EPI means that secondary features of the object are
more likely to be detected. Using the example of a mug, the mug is the primary
feature, while the handle is a secondary feature. Detecting secondary features
makes it possible to concentrate on the primary feature, the mug, and focus on
grasping it, while avoiding, if possible, secondary features, such as the handle.
The sections which follow present the details of each of these haptic EPs.
4.1.1 Assumptions
Before going into the details of EPI, it is important to keep in mind the assump-
tions which are being made:
the normal to the object at the point of contact can be derived
no slipping occurs at the contact point
the contact point detection sensor has fine to infinite resolution
the noise in sensory inputs is constant and low
the fingertips are hard and hemispherical, with a radius of one unit
4.1.2 Background
The finger roll is performed in a cross-hair pattern [3], i.e. in the direction of the
arrows and in the order indicated by the numbers adjacent to the arrows shown in
Figure 17(a). Notice that although the surface is in 3D, the probing is seemingly
done in only two directions, defined as the u- and v-direction.
Although the u-v map has only two dimensions, this map can be overlaid on any
surface, thus causing the u-v map to take the shape of any object. As a result, the
cross-hair pattern may be made up of arcs, instead of straight lines, as shown in
Figure 17(b). Indeed, the key measurements taken during EP1 are the lengths of
these arcs and the angles of rotation which the finger went through to achieve
these arcs. One last requirement is that the finger must roll with constant angular
velocity.
(a) Cross-hair pattern in u-v (b) u-v map on object
Figure 17: Executing EP1
EP1 was studied for the purpose of curvature estimation by Charlebois, Gupta,
and Payandeh [3]. The type of output which EP1 produces as a result of the rolling
procedure is in the form of two surface curvature parameters, [ku, kv], one for
each rolling direction (u, v); see Appendix III for more details. These parameters
are valid at the point of contact of a finger with the object where this EP was per-
formed. The curvature parameters can be converted to local curvature radii (ru,
rv) of the curved object, as shown in equation (10):
ru = 1/ku  and  rv = 1/kv   (10)
where, ru = the radius of curvature at the point of contact in the u-direction
rv = the radius of curvature in the v-direction
Comparing the relative sizes of the two radii makes it possible to deduce whether
the object probed is locally spherical, flat, cylindrical, parabolic, or hyperbolic.
Two curvature parameters close in value and less than a maximum value, max (i.e.
a value less than infinity, which defines the largest possible radius of curvature),
denote a locally spherical area.
{ ru ≈ rv | 0 < (ru, rv) < max }  ⇒  locally spherical   (11)
A plane is categorized by an infinite radius of curvature in both directions.
{ ru ≈ rv | (ru, rv) > max }  ⇒  locally flat surface   (12)
One of the curvature parameters of a cylinder is much higher than the other, since
a cylinder is only curved in one plane. In addition, the larger radius of curvature
is greater than the maximum value, max.

{ ru >> rv | ru > max, 0 < rv }
{ rv >> ru | rv > max, 0 < ru }   ⇒  locally cylindrical   (13)
Paraboloids have one parameter larger than the other, but both parameters are
less than the max value.

{ ru > rv | ru < max, 0 < rv }
{ rv > ru | rv < max, 0 < ru }   ⇒  locally parabolic   (14)
Hyperboloids have curvature parameters of opposite sign. In addition, the magni-
tude of both parameters is less than the rnax value, but the magnitude of one
parameter is much larger than the other:
{IruI > lrull lrul < rnax, ( r u - r v ) < 0 )
a locally hyperbolic (15)
{lrvl > lrull lrvl < max, ( r u . r v ) < 0 )
An additional assumption is made for cylinders. These objects are oriented such
that they are aligned with one of the coordinate axes (x, y or z), as shown in Fig-
ure 18.
Since EP1 can only determine the radius of curvature at the point of contact,
successive probings at different locations on the object are used
to postulate a shape for the object and its radius of curvature.
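The classification rules (11) through (15) can be read as a small decision procedure. The Python sketch below is illustrative only: the threshold `r_max` standing in for max, the tolerance used for "close in value", and the function name `classify` are all assumptions, not part of the thesis implementation.

```python
def classify(ru, rv, r_max=1000.0, tol=0.1):
    """Classify a local surface patch from its curvature radii (ru, rv)."""
    if ru * rv < 0:                   # rule (15): radii of opposite sign
        return "hyperbolic"
    big, small = max(ru, rv), min(ru, rv)
    if small > r_max:                 # rule (12): both radii effectively infinite
        return "flat"
    if big > r_max:                   # rule (13): curved in one plane only
        return "cylindrical"
    if abs(ru - rv) <= tol * big:     # rule (11): radii close in value
        return "spherical"
    return "parabolic"                # rule (14): unequal, both below r_max
```

For example, a probe returning radii (2000.0, 15.0) would be reported as cylindrical, matching rule (13).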
4.1.3 Implementation
This exploratory procedure is implemented as an agent of the EOS. The datum
labelled foreign_pts contains a list of all points resulting from the wrap grasp
which have not been matched to an already identified object. If this list has values
in it, then ep1 uses these points as locations for EP1 executions; otherwise, a
list of current contact points is determined and used for EP1 execution.
ep1:body :-
    bb_get(foreign_pts, Foreign),
    if(Foreign = [],
        /* then */
        contact_pts(Contacts),
        /* else */
        (Contacts = Foreign,
         bb_put(shape_verify, true),
         bb_put(foreign_pts, []))),
    /* Call C sub-routine by Charlebois [1]: */
    /* input = Contacts; output = Pshape_list */
    ep1:do_ep1(Contacts, [], Pshape_list),
    /* The output of the ep1 agent is asserted in */
    /* the IB under the label pshape. */
    bb_get(pshape, Plist),
    append(Plist, Pshape_list, New_plist),
    bb_put(pshape, New_plist),
    /* ep1 is subject to the rating system. */
    bb_put(post_shape:opportunistic, 1.0),
    bb_get(ep1:opportunistic, Ep1Rate0),
    eval(Ep1Rate0 * 0.90, NewEp1Rate0),
    bb_put(ep1:opportunistic, NewEp1Rate0).
Listing 21: Implementing EP1
The list of postulated shapes, pshape, has the following representation:

    (pshape, [[Shape1, Contact_point1, Output_data1],
              [Shape2, Contact_point2, Output_data2],
              ...])

ShapeN is the character of the first letter of the shape postulated. This variable is
not used later on to postulate a shape, as this piece of information must be
deduced; it is simply used as a checking means for the author. Contact_pointN
is the [X,Y,Z] coordinate of the location of the contact at which EP1 was per-
formed, and Output_dataN is the output of EP1.
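For illustration, the same record structure can be mirrored as a nested Python list; the shape characters, coordinates, and curvature outputs below are invented examples, not data from the simulations.

```python
# Hypothetical mirror of the pshape blackboard entry: each record is
# [ShapeN, Contact_pointN, Output_dataN]. All values here are invented.
pshape = [
    ["s", [10.0, 25.0, 40.0], [0.033, 0.031]],   # sphere postulate
    ["c", [55.0, 25.0, 40.0], [0.001, 0.040]],   # cylinder postulate
]

def add_postulate(plist, shape_char, contact_xyz, ep1_output):
    """Append one EP1 result, as the ep1 agent does via bb_put(pshape, ...)."""
    return plist + [[shape_char, contact_xyz, ep1_output]]

new_plist = add_postulate(pshape, "p", [0.0, 0.0, 0.0], [0.010, 0.005])
```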
Charlebois also found that the accuracy of estimating spheres was much
better than that of estimating paraboloids. Assuming that conditions (ii) and (iii)
above are kept constant, the only difference between estimating spheres vs. cylinders
is the Relative Curvature Radius (RCR) of the object w.r.t. the probe. As already
mentioned in section 4.1.2, cylinders have one radius of curvature approaching
infinity. Thus, one of the RCR values is very large, making one of the estimated
radii of curvature much less reliable. As a result, it becomes evident that the initial
confidence in the output of EP1 must be weighed against the shape which the out-
put data predicts, such that EP1 output predicting a sphere should be
weighted with higher confidence than output predicting a cylinder or parabo-
loid.
As already mentioned, repetitive iterations of EP1 on the same object increase the
confidence in the shape of the object as well. To ensure that both factors, the shape
of the object and the reinforcement of the shape of the object, are taken into
account, the confidence in an object shape, consisting of the shape clas-
sification, radii of curvature, and location, can be calculated as follows:

          n
    C(ShapeX) = Σ c_s × C_i(ShapeX)    (16)
         i=1

where c_s is the shape-dependent weight and n is the number of EP1 iterations
performed on the object.
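A minimal Python sketch of such a combined confidence, read as a shape-weighted sum over the EP1 iterations performed on the object; the shape weight values below are invented for illustration, not the thesis's actual parameters.

```python
def shape_confidence(per_probe_confidences, c_s):
    """Combine repeated EP1 probes of the same object: each probe's
    confidence is scaled by the shape-dependent weight c_s and summed."""
    return sum(c_s * c for c in per_probe_confidences)

# Spheres receive a higher shape weight than cylinders or paraboloids,
# reflecting the more reliable EP1 output (weight values are assumed).
sphere_conf   = shape_confidence([0.5, 0.5], c_s=0.9)
cylinder_conf = shape_confidence([0.5, 0.5], c_s=0.6)
```

Repeated probes raise the total, so reinforcement and shape reliability both enter the final confidence.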
Figure 19: Grasping Profile of Finger 1 and Finger 3
Since the robotic hand has three fingers, a set of six contact points are produced
between the hand and the object. Once the six contact points are achieved, each
point is tested to see if it does or does not satisfy the equation of the estimated
shape. The predicate responsible for this function is find_foreign. Before going
into the pseudo code for find_foreign, the following two instances of this
predicate illustrate special cases which have been provided for: (i) the case of no
estimated objects to test for and (ii) the case of no contact points between the fin-
gers and the object.
For case (i), the output variable, Foreign, is equated to the input variable Con-
tacts:

find_foreign(Contacts, Est_Shapes = [], Foreign) :-
    Foreign = Contacts.

For case (ii), the output variable, Foreign, is equated to an empty list:

find_foreign(Contacts = [], Est_Shapes, Foreign) :-
    Foreign = [].
The main body of find_foreign is recursive so that the Foreign points can be
assessed for belonging to any of the estimated shapes:

find_foreign(Contacts, Est_Shapes, Foreign) :-
    remove(Shape from Est_Shapes = NewEst_Shapes),
    find_foreign_pts(Contacts, Shape, [], TempForeign),
    find_foreign(TempForeign, NewEst_Shapes, Foreign).
Listing 23: Determining the Foreign Contact Points
Solving for each individual point requires a recursive predicate, find_for-
eign_pts. The recursion halts when all points in the list of contact points have
been tested. The pseudo code for the find_foreign_pts implementation is as
follows:

find_foreign_pts([], Shape, TempVar, Foreign) :-
    Foreign = TempVar.
find_foreign_pts(Contacts, Shape, TempVar, Foreign) :-
    remove(ContactPt from Contacts = New_Contacts),
    determine(ContactPt ∈ Shape, Belonging),
    if(Belonging,
        /* then */
        NewTempVar = TempVar,
        /* else */
        NewTempVar = append(ContactPt to TempVar)),
    find_foreign_pts(New_Contacts, Shape, NewTempVar, Foreign).
Listing 24: Matching Contact Points to Estimated Shapes
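In effect, Listings 23 and 24 filter the contact points through each estimated shape, keeping only the points that belong to none of them. A minimal Python sketch, assuming spherical estimated shapes and an invented membership tolerance:

```python
def fits(point, shape, tol=1.0):
    """Assumed membership test: the point lies on the sphere's surface
    (centre cx, cy, cz, radius r) to within a tolerance."""
    cx, cy, cz, r = shape
    x, y, z = point
    dist = ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5
    return abs(dist - r) <= tol

def find_foreign(contacts, est_shapes):
    """Return the contact points matching none of the estimated shapes."""
    if not est_shapes:        # case (i): nothing to test against
        return list(contacts)
    if not contacts:          # case (ii): no contact points
        return []
    foreign = contacts
    for shape in est_shapes:  # iterate over shapes, as in Listing 23
        foreign = [p for p in foreign if not fits(p, shape)]
    return foreign
```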
4.3 Tip-Prehension
Tip-prehension is a grasp configuration in which each finger makes a single con-
tact with the object and that contact is made with the fingertip. This grasp can be
achieved from many different pre-grasp approaches.
Grasping from above and from the side were the two approaches implemented by
Seitz and Kraft [34]. Their rationale was to grasp from above unless side features,
such as handles, are detected. In this case, a grasp from the front (Front Grasp),
from the side (Front Side Grasp), or from the top (Top Grasp) is used, depending
on the shape of the object, see orientation in section 2.3.2. The transition to the tip-
prehension grasp is the subject of section 4.6.
Assuming that:
a) ∃ μ at Ci, i: 1, 2, ..., N
b) ∃ {M DOF at Ci | f_app,i is arbitrary}
Assuming the Coulomb model of friction, the relationship between the normal
and tangential force components applied by a finger at a point of contact can be
expressed by equation (17):

    |f_t| ≤ μ · f_n    (17)
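A sketch of this Coulomb condition as a predicate; the function names are illustrative, not from the thesis implementation.

```python
import math

def inside_friction_cone(f_normal, f_tangential, mu):
    """Coulomb condition: the tangential component must not exceed
    mu times the normal component."""
    return abs(f_tangential) <= mu * f_normal

def cone_half_angle(mu):
    """Equivalently, the applied force must lie within the friction
    cone of half-angle atan(mu) about the contact normal."""
    return math.atan(mu)
```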
Since the internal forces must satisfy the friction cone (FC), the internal force in
the grasp plane can only be applied within the FC. Figure 22(a) and (b) show two
configurations of the fingertips of a robot hand around the grasp plane. Since the
resultant internal force must equal zero, the internal forces resulting from each
contact point must meet at one point [14] called the centroid. The FC criterion is
then satisfied in this plane if the centroid lies within the boundaries of the region
of overlapping FCs, which in turn must lie within the grasp triangle.
(a) Configuration A w/ FC   (b) Configuration B w/ FC
(c) Configuration A w/o FC  (d) Configuration B w/o FC
In the absence of friction, Figure 22(c) and (d), the only area of the overlapping
FCs is found at the intersection of the normals from the contact points, i.e. at the
centroid. Thus, small changes in the angle of the internal forces can easily render
the grasp unstable. Furthermore, note that in the absence of friction, the centroid
must still lie within the grasp triangle, thus any grasp triangle with an angle
greater than 90°, Figure 22(d), results in an unstable grasp.
Although this work deals with grasping in the presence of friction, the proposal is
to use the case of grasping without friction as a way for finding a more optimal
grasp configuration. The constraint then imposed by this optimization on a three
fingered grasp is:
This constraint is used by the tip agent to set the opportunistic sub-rating of the
finger agents. Since no grasp triangle angle is to exceed 90°, the opportunistic sub-
ratings of the finger agents are a function of the size of the grasp triangle angle at
the corresponding fingertip, see tip agent in section 2.3.2. By doing this, the equi-
lateral grasp triangle, i.e. a grasp triangle where all angles = 60°, is the grasp con-
figuration at which all finger agents have the same opportunistic sub-rating.
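The angle-based sub-rating can be sketched as follows; the linear mapping from angle to rating (equal ratings at 60°, zero at 90° and beyond) is an assumption chosen to match the constraint, not the thesis's exact formula.

```python
import math

def triangle_angles(a, b, c):
    """Interior angles (radians) of the grasp triangle with fingertip
    positions a, b, c, via the law of cosines."""
    ab, bc, ca = math.dist(a, b), math.dist(b, c), math.dist(c, a)
    A = math.acos((ab ** 2 + ca ** 2 - bc ** 2) / (2 * ab * ca))
    B = math.acos((ab ** 2 + bc ** 2 - ca ** 2) / (2 * ab * bc))
    return A, B, math.pi - A - B

def sub_rating(angle):
    """Smaller grasp-triangle angle -> higher opportunistic sub-rating;
    zero at 90 degrees and beyond (assumed linear mapping)."""
    return max(0.0, 1.0 - angle / (math.pi / 2))
```

For an equilateral grasp triangle all three angles are 60°, so all three finger agents receive the same sub-rating, as described above.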
Figure 23 illustrates the case when the grasp profile is a circle. Note that if the
grasp plane lies outside the friction cone, then so do the internal forces.
From Figure 23, the perpendicular distance between the grasp plane and the cen-
ter of the circle, d, can be calculated as follows:
    d = r · sin α    (20)

but,

    α_max = θ_max = tan⁻¹(μ)    (21)

Therefore, the maximum allowable value for d can be expressed as a function of
the radius of the circle profile, r, and the coefficient of friction, μ, as follows:

    d < r · sin(tan⁻¹(μ))    (22)
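Condition (22) can be sketched as a simple check; the function names are illustrative.

```python
import math

def max_offset(r, mu):
    """Largest allowable distance between the grasp plane and the
    centre of the circle profile: r * sin(atan(mu))."""
    return r * math.sin(math.atan(mu))

def grasp_plane_ok(d, r, mu):
    """Condition (22): the grasp-plane offset d must stay below the bound."""
    return d < max_offset(r, mu)
```

With μ = 1 (a 45° friction cone) and a circle of radius 10 units, the grasp plane may be offset from the centre by just under 10/√2 ≈ 7.07 units.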
The constraint imposed by (19) not only satisfies condition (i), but in conjunction
with equation (22), it also satisfies condition (ii) of active closure. This is the geo-
metrical equivalent proposed to test for the quality of a grasp for the case of
grasping spheres and cylinders, or any object with a grasp profile of a circle.
Given this grasp configuration and assuming that arbitrary internal forces can be
applied at the established contact points, the matter of how much force to apply is
then simple and is not the subject of this thesis. Ji and Roth [14] have presented
one method of choosing the optimal internal forces to be applied, once the grasp
configuration is established.
The two conditions for active closure are verified by the controller during every
cycle.
This coordinated, dynamic behaviour is synthesized with the help of the rating
system. The behavioural agent ep3 ensures that the desired transition is achieved
by reinforcing the opportunistic sub-rating of the physical agents and itself, every
time it is allowed to act, see ep3 agent in section 2.3.2. Once the wrap grasp is
achieved, ep3 also verifies the estimated shape of the object in contact.
The physical agents are active during this time and allowed to participate in the
competition for execution of their actions. The wrist acts to propel the hand for-
ward (toward the object) so that the fingers can wrap around the object, see wrist
agent in section 2.3.2, and the fingers actuate their joints to encompass the object,
see fingerl/2/3 agent in section 2.3.2.
The transition implemented here is coordinated through the use of the rating sys-
tem. This behaviour is tracked and enforced by the behavioural agent tip. The
opportunistic sub-ratings of the physical agents are set or reinforced every time tip
is allowed to act, thus ensuring that the resultant behaviour is going to be
achieved.
Chapter 5
The previous chapters have introduced the details of the EOS. This chapter pre-
sents the test results of the EOS simulating tip-prehension of unknown curved
objects with a three-fingered robotic hand. The analysis is divided into four areas:
(1) Resultant Behaviour
(2) Effect of Object Shape
(3) Effect of Initial Finger Contact
(4) Effect of Rating System
Please note that the rating system weights used in sections 5.1 through 5.3 are:
default weight = 0.30 and opportunistic weight = 0.70. These are the weights for
which the system was designed. The much larger opportunistic weight ensures
the greater focus on the opportunistic sub-rating vs. the default sub-rating.
In this case, initial contact is made with finger 3. Looking at the patterns of agent
execution, the two stages of the EOS implementation become very obvious. The
info stage has taken 471 cycles, while the grasp stage required only 208 cycles.
Although the cycle numbers vary as indicated above, the grasp stage does usually
take fewer cycles to complete than the info stage. The reason lies in the complexity
of each of the stage goals. The greatest challenge of the info stage is achieving the
wrap grasp; however, in the grasp stage, achieving the tip-prehension grasp is rel-
atively simple, since the wrist and fingers effectively back up from their current
positions until only the fingertips are in contact with the object. For a detailed
description of the agents' actions, please see section 2.3.2.
(Each agent's executions are plotted against the Cycle Number, cycles 0 to 600; the info and grasp stages are marked.)
Figure 24: Agent Execution Profile
The individual stages are discussed in more detail in the sub-sections to follow.
5.1.1 info Stage Behaviour
(Panels (a) through (f) show the hand and object on the x-y axes at successive cycles; (e) EP1 to EP3 Transition, (f) info Goal Met - EP3 Done.)
Figure 25: info Stage Behaviour Evolution
The behaviour shown in Figure 25 looks as though the robotic hand is snapping
at the object while moving ever closer to it, until finally the hand envelopes its tar-
get. Figure 25(a) shows the configuration of the robotic hand and the location of
the sphere at cycle #1, and the next set of diagrams show the transition of the
robotic hand with respect to the object at cycles #13, #131, #331, #429, and #471,
respectively. This behaviour shows an evolutionary achievement of the stage's
goal and is representative of the robotic hand-object interaction during this stage.
Figure 26 shows the agent execution profile during the info stage. The shape
description duration is quite short, cycles 1 - 13, while the shape matching task lasts
most of the stage, cycles 14 - 471.
Taking a closer look at the first 100 cycles of this stage, Figure 27, shows the differ-
ence in the agent execution patterns of both object reconstruction methods.
Shape description is a regular alternation among three agents: epl, post-shape, and
epl-set. This regular alternation among agents can be viewed as a pattern. The
pattern seen during the shape description period is common in all simulations, irre-
spective of the object's shape or initial finger contact, although the number of such
patterns may vary. Shape matching has more complex patterns, e.g. Pattern A,
involving all other agents active during this stage.
(Agents end, tip, wrist, finger3, finger2, finger1, orientation, ep3, ep1-set, post-shape, and ep1 are plotted against the Cycle Number; the "pattern A" annotation marks one shape matching pattern.)
Figure 27: Agent Execution Profile - info Stage Revisited
Consequently, the hand must accomplish this transition by backing up from the
wrap grasp until the tip grasp is achieved. Indeed, during the grasp stage the wrist
tends to back up from the object as the finger joints reset and make contact with
the object, Figure 28, until a tip grasp is found, which satisfies the active closure
criteria presented in section 4.3. Figure 28(a) to (f) shows instantaneous
excerpts of this transition at cycles #471, #474, #529, #562, #617, and #679 respec-
tively. This evolutionary solution to finding a tip-prehension grasp is typical of
the interactions in this stage.
The manner in which the agents are activated during this stage is as shown in the
agent execution profile of Figure 29. Between a start-up and ending period, the reg-
ular alternation among agents finger1, finger2, finger3, wrist, and tip, generates Pat-
tern B. This pattern repeats itself throughout this period.
(Agent executions are plotted against the Cycle Number, cycles 480 to 660; the "Constant Pattern" annotation marks the repeating Pattern B.)
Cycle Number
Figure 29: Agent Execution Profile - grasp Stage
Table 8 (excerpt):
    Cylinder Radius [units]  Grasped?    Sphere Radius [units]  Grasped?
    10.0                     Yes
    12.5                     Yes
    15.0                     Yes
    17.5                     Yes         17.5                   Yes
The performance of the system, i.e. the number of cycles of the system, is used in
the comparison of the two simple shapes, spheres and cylinders. The radius of
each of the simple shapes is varied in order to determine whether or not the two
shape primitives have different radii ranges which this robotic hand can accom-
modate and whether the radius variation affects the performance of the system in
any way.
Table 8 summarizes the ability of the robotic hand to grasp the two object primi-
tives. Clearly, the range of successfully grasped cylinder radii, 10.0 to 37.5 units, is
larger than that of the sphere radii, 17.5 to 37.5 units.
(Number of cycles vs. Radius [units], 5 to 40; x: cylinder, o: sphere.)
Radius [units]
Figure 30: Sphere vs. Cylinder Performance - info Stage
Figure 30 shows the number of cycles required to complete the info stage for both
shape classes: spheres and cylinders. In most cases, the sphere requires more
cycles to complete the info stage than does the cylinder. The general trend for both
shapes is that the number of cycles decreases as the radius is increased. However,
this trend is not monotonic for either the sphere or cylinder shapes. The reason
why it is easier to grasp larger objects is that, given the object fits inside the robotic
hand, the larger objects are closer to the fingers, thus they require fewer cycles for
the fingers to come into contact with them.
(Number of cycles vs. Radius [units], 5 to 40; x: cylinder, o: sphere.)
Radius [units]
Figure 31: Sphere vs. Cylinder Performance - grasp Stage
(Total number of cycles vs. Radius [units], 5 to 40; x: cylinder, o: sphere.)
Radius [units]
Figure 32: Sphere vs. Cylinder Performance
Figure 31 presents the comparison of the number of cycles for the grasp stage.
Whereas in the info stage the sphere shapes needed more cycles, in this case the
number of cycles produces no such trend. Furthermore, the general trend of each
of these shapes is still to decrease in non-monotonic fashion; however, this trend
is not as pronounced during this stage.
Adding the number of cycles for the info and grasp stages produces the graph in
Figure 32. Since the number of cycles in the info stage was generally greater than
those in the grasp stage, it is not surprising to see that the trends exhibited in Fig-
ure 32 resemble those of the info stage.
To summarize, (i) achieving tip grasp of a sphere requires more cycles than of a
cylinder, (ii) the number of cycles required to complete the info stage is generally
greater than that required to complete the grasp stage, and (iii) increasing the
radius for both shapes generally decreases the number of cycles required.
The reason for "approximately" is seen in Case V in Figure 33 and Figure 36. The
comparison among different objects is made with respect to the number of cycles
required to complete the tip-prehension grasp.
Sphere-Based Objects
In total four sphere-based objects are compared to the case of a simple sphere. A
sphere-based shape means that the primary feature is a sphere, to which a second-
ary feature, a cylinder, is attached. The cylinder can be attached to the back of the
sphere, the left side of the sphere, the top of the sphere, or the front of the sphere.
It is assumed that finger 2 is in contact with the sphere or cylinder, as in Case V, at
the y-z plane center of the object. Recall that the robotic hand grasps spheres
head-on unless secondary features are detected.
Case I: Sphere Only
The number of cycles required for each of the five cases in Figure 33 is shown
graphically in Figure 34. Clearly, the performance of the EOS varies among these
configurations, especially for Case II, Case IV, and Case V.
(Figure 34 charts the number of cycles for Case I through Case V of the sphere-based objects; filled bars: Total, open bars: info.)
A possible source of this variation is the location of the cylinder, the secondary
feature, with respect to the sphere, the primary feature. Figure 35 is a sequence of
instantaneous excerpts which show the point at which the secondary feature is
discovered, if it is discovered at all. These excerpts help identify the reason for the
differences in performance among the sphere-based objects and in comparison
with the simple sphere.
Note that the views for each of these images were chosen to best emphasize the
discovery, or lack thereof, of the secondary feature. Also, the cylinder shading has
been chosen to ensure a proper view of it.
In Case II and Case IV, the secondary feature is encountered as the wrap grasp is
attempted; however, the resultant numbers of cycles differ due to the point at
which the encounter takes place. In Case II the cylinder is encountered by finger 3,
well into the shape matching period, cycle #154. However, in Case IV the cylinder is
encountered by finger 1 and finger 2 at the beginning of the shape matching
period, cycle #15.
Due to the location of the secondary feature in Case III, the cylinder is not encoun-
tered at all, thus the number of cycles for each of the stages equals those of Case I.
Lastly, in Case V the secondary feature is encountered at the beginning of the shape
description period, as it is the point of initial contact. However, due to the small
radius of the cylinder, both shapes are discovered during the period of shape
description.
Cylinder-Based Objects
In addition to the sphere-based objects, Figure 36 shows four cylinder-based
objects investigated. A cylinder-based shape is a shape which has a cylinder as its
primary feature and a sphere as its secondary feature. As in the case of the sphere-
based objects, finger 2 is assumed to be in contact with the cylinder or sphere, as
in Case V.
Given the cylinder-based objects in Figure 36, the performance of the system for
grasping each combination is as shown in Figure 37. As opposed to spheres, cylin-
ders are grasped by default head-on and by rotating the wrist, or in this case the
object, by 90°. If secondary features are encountered, then alternate rotation
schemes are contemplated, see orientation in section 2.3.2. In this simulation, the
object is rotated, instead of the hand, for ease of computation.
Case I: Cylinder Only
Similarly to the sphere-based objects, the number of cycles for grasping each
object varies as a function of the location of the secondary feature, the sphere, with
respect to the primary one, the cylinder. In addition, no data is available for Case
III, since this particular object configuration proves to be an impossible challenge
for this program, as discussed below.
Figure 38 shows the instances at which the secondary features of the cylinder-
based objects were encountered, if at all. These figures facilitate the understanding
of the differences among these object configurations.
In Case II and Case III, the secondary feature is not discovered until the wrap grasp
is attempted. In Case II the sphere encounter is successful, but has a higher cycle
requirement overall than the simple case, Case I. However, in Case III the sphere
encounter makes it impossible to progress. The secondary feature blocks the for-
ward motion of the wrist, as the fingers cannot move around the unknown imped-
iment. The program assumes that if the object to be grasped is too big, it was
already detected, and the program would have picked a different object to grasp
or would have caused the end agent to be executed. However, the secondary fea-
ture in this case effectively increases the size of the object to be grasped and
exposes a weak point in the program.
In Case IV the secondary feature is not detected, thus the performance of this case
is similar to that of Case I.
Lastly, in Case V the secondary feature is initially in contact with the second finger,
thus it is detected during the shape description period. Consequently, the wrap
grasp is planned around the sphere.
Case II: Sphere Contact (Finger 3)    Case III: Sphere Contact (Finger 3)
Case IV: Cylinder Contact (Fingers 1, 2, & 3)    Case V: Cylinder Contact (Finger 2)
Figure 38: Encountering Secondary Features of Cylinders
In conclusion, the ability of the robotic hand to grasp an object varies according to
the shape of the object. In this case, the location of the secondary feature with
respect to the primary one, plays a very important role. Furthermore, the proba-
bility of success of the grasp is increased if the complex object is more precisely
reconstructed early on, during the shape description period.
The results of varying the initial finger in contact with each of the two object prim-
itives, is as shown in Table 9.
The results in Table 9 clearly indicate that the finger which makes initial contact
with the object, plays no role in the performance of the system and its ability to
grasp an object.
Figure 39: Contact Position Reference
Furthermore, Figure 39 shows the reference position numbers, which have been
assigned to each initial contact location. Although this is a 2D view of this map,
the contact points are points in 3D, with corresponding x-coordinates. This refer-
ence position map can be superimposed on spheres and cylinders.
As before, the number of cycles is used as the performance measure for compari-
son purposes. Table 10 shows the summary of these simulations.
Obviously, the change in contact location has some impact on the performance of
the system. The positions experiencing differences in performance have been
highlighted in the table and in Figure 40, to help visualize the affected areas. The
rest of the positions are referred to as the "majority".
Before trying to explain the reasons for the differences at these positions, it helps
to note the results for doing these simulations with finger 3 initially in contact.
The finger 3 initial contact simulations have produced an entirely uniform system
performance where every info and grasp stage at every position required a consis-
tent number of cycles, Table 11.
The main difference between allowing finger 2 vs. finger 3 to be initially in contact
with the object, is the physical location of the origin of these fingers, with respect
to the other fingers, see Figure 14 in section 3.1.
The vertical distance between fingers 1/2 and finger 3 is 80 units, while the hori-
zontal distance between finger 1 and finger 2 is only 30 units. Given that the
radius of the sphere used for this simulation is 30 units, that means that if finger 1
or finger 2 was initially in contact with the object, then the other finger may come
in contact with the object as well during the shape description period of the info
stage. However, if finger 3 is initially in contact with the object, fingers 1 and/or
finger 2 have no chance of also coming into contact with the object during shape
description.
(Agent executions are plotted against the Cycle Number, cycles 0 to 100.)
Figure 41: Position 2a Agent Execution Profile - info Stage
To verify this hypothesis, let us first take a look at the circumstances surrounding
positions 2a, 3a, 4f, 5f, 4g, and 5g. In all these cases the number of cycles for the
grasp stage does not differ from the majority of positions, and the number of cycles
for the info stage is slightly less than the majority of positions. Figure 41 shows
the agent execution profile for position 2a during the early part of the info stage,
while Figure 42 shows the equivalent status of position 1. As seen, the number of
pattern repetitions during the shape description period at position 2a is one less
than the number of pattern repetitions at position 1. In addition, each pattern
has a three-agent duration, which explains the differences for positions 2a, 3a, 4f, 5f, 4g,
and 5g.
(Agent executions are plotted against the Cycle Number, cycles 0 to 100.)
Figure 42: Position 1 Agent Execution Profile - info Stage
The reason for the difference in system performance at positions 3f and 3g remains
to be determined. Note that the number of cycles at each of these positions is iden-
tical and so is their horizontal placement in Figure 40. Obviously, there is a rela-
tion between these two positions.
A first look at the agent execution profiles for position 1 vs. 3f, Figure 43, discloses
nothing. A close look at the first part of the info stage, Figure 44, shows a similar
case of a double contact as for positions 2a, 3a, 4f, 5f, 4g, and 5g, but this does not
account for the big variation.
Figure 43: Comparing Agent Execution Profiles
Figure 44: Comparing info Stage Agent Execution Profiles
The only alternative is the relative positioning of the hand with respect to the
object. This is controlled at the onset of the shape matching period by the orienta-
tion agent. This particular configuration evidently enables the faster achievement
of the goals of each stage.
The first set of simulations, section 5.3.1, has shown that the finger which ini-
tially makes contact plays no role in the system performance. Section 5.3.2 has
shown that the location of the initial finger contact does not matter either, unless
the position of this finger facilitates a double finger contact with the object during
the shape description period. In this case, the performance is better. Thus, although
the initial finger in contact does not matter when varied on its own, when varied
in conjunction with the initial contact point, it can facilitate such dual contacts,
thus influencing the performance of the system.
The item to keep in mind is that the EOS was designed to work with a default
weight of 0.30 and an opportunistic weight of 0.70. The results in Table 12 show
that the system is unstable for:
Looking at the region of stability, the system performance changes with varying
weighting factors.
As the weights vary, so do the number of cycles required for each of the stages.
The total number of cycles starts off high, total = 932, and decreases monotonically
until Wo = 0.75 (and Wd = 0.25). Beyond this point, the numbers oscillate up and
down. In addition, the variation of the total number of cycles is mostly due to the
variation in the number of cycles during the info stage.
This analysis also provides a means for determining the optimum weights for the
EOS rating system. Assuming that it is desirable to have the least number of
cycles for both stages, the optimum weights are: Wo = 0.80 and Wd = 0.20.
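As a sketch, the overall agent rating can be read as a weighted sum of the two sub-ratings; this combining rule is an assumption consistent with the weight pairs discussed in this section, and the sub-rating values below are invented for illustration.

```python
def overall_rating(default_sub, opportunistic_sub, w_d=0.30, w_o=0.70):
    """Weighted combination of the default and opportunistic sub-ratings
    (assumed combining rule; the design weights are Wd=0.30, Wo=0.70)."""
    return w_d * default_sub + w_o * opportunistic_sub

# With the design weights, the opportunistic sub-rating dominates:
r_design  = overall_rating(0.5, 1.0)                # Wd=0.30, Wo=0.70
r_optimum = overall_rating(0.5, 1.0, 0.20, 0.80)    # weights found optimal here
```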
5.5 Discussion
The previous sections analyzed the EOS implementation for grasping curved
objects with a dexterous end-effector. The three main factors which affect the
resultant behaviour of the system are: the object's shape, the initial finger contact,
and the rating system. Varying these parameters one by one has revealed the
manner in which each affects the system.
Object Shape
Studied at the simple level, the object shape has shown that spheres, with their
3D curvature, require more execution cycles than their counterparts, the cylinders,
which are curved in only 2D. This holds in spite of the lower confidence allotted
to the EP1 output when predicting a cylindrical shape versus a spherical shape.
When each of these shape primitives has been enhanced by a secondary feature,
the sphere-based objects vary significantly in cycle requirements, Figure 34, while
the cylinder-based objects seem to have a much more stable cycle requirement.
Again, it seems that the degree of the object's curvature affects the performance of
the system.
The analysis of grasping simple shapes of varying radii also showed that both
shape classes have minimum and maximum boundaries on the size of radius
which this robotic hand configuration can accommodate. Both maxima are the
same, while the minima boundaries vary, with the cylinder's minimum being
lower than that of the sphere.
The boundaries are a direct result of the robot hand configuration. Since the verti-
cal distance between fingers 1/2 and finger 3 is 80 units, and since the fingers do
not exceed the vertical limit of their link 1 origins, this hand can only grasp objects
less than 80 units tall. In fact, if an object's size is within 5% of this upper limit, the
object is deemed not graspable and the program ends. The radius of the sphere or
cylinder determines the object's height, which is why the upper boundaries of
spheres and cylinders are the same.
If the total height of a complex object exceeds the maximum and the hand has not
been able to identify the features with which it is in contact, then this may cause
the hand to stagnate and never achieve a grasp of the desired object. Such a failure
was seen in Case III of the cylinder-based object. The radius of the cylinder was 30
units and the radius of the sphere was 10 units, thus the combined height of this
object is 80 units. Since the system did not detect the sphere at the side of the cyl-
inder during shape description, the cylinder was grasped as usual, from the side.
Unfortunately, this resulted in a configuration which was not graspable with the
given robotic hand.
Furthermore, since the horizontal distance between finger 1 and finger 2 is equal
to 30 units, and since simple spheres are grasped from the front, spheres with a
diameter of 30 units or less cannot be grasped. Simple cylinders, however, are
grasped from the side, so a cylinder's length, in addition to its diameter, deter-
mines its graspability. Thus, a cylinder with a length greater than 30 units may be
grasped even when its radius is smaller than 15 units, although eventually the
system will fail as the cylinder's radius becomes too small for this robotic hand to
grasp.
Changing the hand configuration and/or allowing the hand to open wider would
obviously change the values of the maxima and the minima boundaries.
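These geometric limits can be collected into a small feasibility check. The following Python sketch uses the dimensions quoted above (80-unit vertical finger spacing, 30-unit horizontal spacing, 5% margin); the predicate names are illustrative and not part of the thesis implementation:

```python
# Graspability bounds implied by the modeled hand geometry; the constants
# come from the text, the function names are illustrative.
V_LIMIT = 80.0    # vertical distance between fingers 1/2 and finger 3 (units)
H_LIMIT = 30.0    # horizontal distance between finger 1 and finger 2 (units)
MARGIN = 0.05     # objects within 5% of the upper limit are rejected

def sphere_graspable(radius):
    """Spheres are grasped from the front: the height 2r must clear the
    vertical limit (with margin) and the diameter must exceed H_LIMIT."""
    if 2.0 * radius >= V_LIMIT * (1.0 - MARGIN):
        return False               # too tall for this hand
    return 2.0 * radius > H_LIMIT  # otherwise too small to grasp from the front

def cylinder_graspable(radius, length):
    """Cylinders are grasped from the side: the diameter must clear the
    vertical limit, but a long enough cylinder tolerates a small radius."""
    if 2.0 * radius >= V_LIMIT * (1.0 - MARGIN):
        return False
    return length > H_LIMIT or 2.0 * radius > H_LIMIT
```

The sketch reproduces the boundary behaviour described above: a 14-unit-radius cylinder is graspable when long enough, while a 15-unit-radius sphere is not.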
Rating System
Lastly, the system was designed with the weighting factors set to [0.30, 0.70] for
the default and opportunistic sub-ratings, respectively. The desire was to empha-
size the opportunistic sub-rating much more than the default sub-rating, so that
the dynamic behaviour of the system could be achieved in fewer cycles. The expec-
tation was to have the best performance at this weighting combination. However,
varying these weights has shown that the optimum weights for this system are
[0.20, 0.80]. It was also discovered that as long as the opportunistic sub-rating is
weighted more heavily than the default sub-rating, the rating system can do its
job, although with varying system performance.
Of the three factors which played a role in determining the cycle requirement of
the system, the object shape, with respect to the radius of the main feature, had
the most influence on the system performance. Varying the radius of the sphere
and cylinder from 17.5 units to 37.5 units caused the system cycle requirement to
decrease from about 900 to about 400 cycles, see Figure 32.
The factor which least affected the cycle requirement is also tied to the shape of
the object, and it is the location of the secondary feature with respect to the pri-
mary one. Hardly any change is seen in the cylinder-based objects, Figure 37, and
the sphere-based objects exhibit a cycle range of less than 200, Figure 34.
Chapter 6
The EOS presented in the previous chapters pertained to the grasp achievement of
objects by a robotic hand. This chapter presents an example of a simple EOS
implementation in grasping with a novel end-effector - the parallel reconfigurable
jaw gripper designed by Hong and Payandeh [12][13].
6.1 Overview
The modeled end-effector is shown in Figure 46. This end-effector has two parallel
jaw grippers, each one enhanced with a reconfigurable rotary disc.
Naturally, not all pin patterns result in a force closure grasp of the object. Given a
certain object and a pre-determined configuration of the pins within each jaw disc,
the jaw discs rotate until some of the pins catch the edges of the object to be
grasped, thus preventing the disc from rotating any further. The pins which do
not make edge contact may make contact with the face of the object or they may
make no contact at all. It is the combination of pins in contact with an object edge
and face which makes all pin configurations unique. This combination of contacts
also determines whether the resultant grasp is force closure or not [12][13].
6.2 Implementation
As before, the system has been implemented in SICStus Prolog v3.3, maintaining
the same overall architecture. The initial condition of this program is that the par-
allel jaws have been actuated and both rotary discs are in contact with the object to
be grasped. The goal of the system is to achieve a certain combination of edge and
face contacts. In this implementation, that combination is: minimum of 2 edge
contacts and 1 face contact for each disc.
The pin configuration of the rotary disc is crucial to the success of the system,
thus the agents represent different pre-determined pin configurations. In this
case, four agents were chosen, see section 6.2.3 and Figure 49.
(object:face, [ [[-b,c],[0,-d]], [[0,-d],[b,c]],
               [[b,c],[0,a]],   [[0,a],[-b,c]] ])
Listing 25: Specifying an Object Face
An additional piece of information about the object's face shape is asserted. This
information consists of two parameters: shape regularity and edge parity. The need
for this information becomes evident in section 6.2.4. The regularity of the shape of
the face is classified as regular, semi-regular, or irregular. A regular shape has all
sides of the same length, as a square does. A kite is an example of a semi-regular
shape: it is symmetric, although its sides are a variety of lengths. An irregular
shape has sides of all different lengths. The parity of the number of sides is classi-
fied as odd or even, depending on whether the number of sides of the face is odd
or even. This information is asserted on the information board under the label
name object:shape. The regularity is represented by a number: 0, 1, and 2 for
irregular, semi-regular, and regular shapes respectively. The parity is a string
field indicating odd or even. Thus the shape in Figure 54 has the following data
asserted on the IB:
(object:shape,[1,even])
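The descriptor can be computed from the face's side lengths. A rough Python sketch, under assumptions consistent with the text (regular = all sides equal, semi-regular = some repeated lengths, irregular = all lengths distinct); the helper name is illustrative:

```python
def shape_descriptor(side_lengths, tol=1e-9):
    """Return [regularity, parity] as asserted under object:shape.
    regularity: 0 = irregular, 1 = semi-regular, 2 = regular."""
    rounded = [round(s / tol) for s in side_lengths]  # tolerance-robust grouping
    distinct = len(set(rounded))
    if distinct == 1:
        regularity = 2                    # all sides equal (e.g. a square)
    elif distinct < len(side_lengths):
        regularity = 1                    # repeated lengths (e.g. a kite)
    else:
        regularity = 0                    # all lengths different
    parity = "even" if len(side_lengths) % 2 == 0 else "odd"
    return [regularity, parity]
```

For a kite-shaped face such as the one in Figure 54, the descriptor evaluates to [1, even], matching the assertion above.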
Figure 55: Types of Contacts (- = face edge, @ = spring-loaded pin)
Given a list of edges as in Listing 25, a list of pins on a rotary disc, as in Listing 26,
and the current angle of rotation of the disc, rotary_angle, a set of pin status lists
is assembled. These lists are used to determine the contact status of a pin, i.e. edge
contact, face contact, or no contact.
(configB:pins, [ [0,0], [60°,2], [180°,2], [240°,2] ])
Listing 26: Pin Configuration for Rotary Disc B
Each list contains the location of one pin with respect to every edge of the object
face. The location of a pin with respect to an edge is classified as (i) possible face
contact (value = 0), (ii) edge contact (value = 1), or (iii) no contact (value = 2). Tak-
ing the triangle of Figure 55a as an example, the pin has made edge contact with
one edge. Since the pin also lies on the same side of the other two lines as the cen-
ter of the coordinate frame, the pin is categorized as "possible face contact" by the
other two sides. Thus the list generated for the pin in Figure 55a is [1,0,0].
Note that the list has three elements, one for every edge.
Once the status list is assembled, a search of this list is performed and the contact
status is reasoned as shown below:
if member(1, List)
then <pin has made edge contact>
elseif member(2, List)
then <pin has made no contact with object>
else <pin has made face contact>
Listing 27: Reasoning a Pin's Contact Status
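The same reasoning over a pin's per-edge status list can be sketched in Python (value coding as above: 0 = possible face contact, 1 = edge contact, 2 = no contact; a pin is a face contact only when it lies inside with respect to every edge):

```python
EDGE, INSIDE, OUTSIDE = 1, 0, 2   # per-edge codes from the classification above

def pin_status(status_list):
    """Classify one pin from its per-edge status list (one entry per edge)."""
    if EDGE in status_list:
        return "edge contact"      # touching at least one edge
    if OUTSIDE in status_list:
        return "no contact"        # outside at least one edge
    return "face contact"          # inside with respect to every edge

def count_contacts(status_lists):
    """Totals stored under edge_contacts / face_contacts on the IB."""
    statuses = [pin_status(s) for s in status_lists]
    return statuses.count("edge contact"), statuses.count("face contact")
```

The pin of Figure 55a, with status list [1,0,0], is classified as an edge contact because the edge-contact check is performed first.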
Once all pins have been classified, the number of pins which have made face con-
tact with the object is stored under the label face_contacts and the number of
pins which have made edge contact is stored under the edge_contacts label
on the information board.
The contact detection algorithm for establishing the pin location classification
with respect to an edge, i.e. possible face contact, edge contact, or no contact, is
called on_line.
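A per-edge classification of this kind can be sketched with a standard signed-area (cross product) test. The Python below is illustrative only, assuming the object centre lies at the origin, as in Listing 25, and testing against the infinite edge line rather than the segment:

```python
def side(a, b, p, eps=1e-9):
    """Sign of the cross product (b - a) x (p - a): which side of the
    line through a and b the point p lies on (0 = on the line)."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if abs(cross) <= eps:
        return 0
    return 1 if cross > 0 else -1

def pin_vs_edge(edge, pin):
    """Per-edge code for one pin: 1 = edge contact (on the edge line),
    0 = possible face contact (same side as the object centre, taken
    here to be the origin), 2 = no contact (opposite side).  This
    sketch ignores the segment endpoints."""
    a, b = edge
    s = side(a, b, pin)
    if s == 0:
        return 1
    return 0 if s == side(a, b, (0.0, 0.0)) else 2
```

Running the three pins of the Figure 55a discussion against a single bottom edge produces the codes 1, 0, and 2 for a pin on the edge, inside it, and outside it, respectively.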
The three components are integrated together as shown in Figure 56. The program
ends when all agents have been discarded or when a pin configuration has been
found which satisfies the goal of the program.
Figure 56: Integration of the three components: the controller (picks the highest
rated agent, allows one agent to act, and verifies the contact criteria), the agents A
through D (each rotates its disc and checks for contacts), and the information
board (stores the status variables).
Controller
The controller plays a similar role as before and is responsible for the following:
pick the highest rated agent at current cycle
intervene in the rating process by taking the current agent off the rating
list and picking the next highest rated agent for execution, if the highest
rated agent is not successful in accomplishing its goal
determine when the pin contact status meets the required contact criteria
(i.e. 2 edge contacts and 1 face contact)
An agent is deemed unsuccessful if one of the two conditions below is satisfied:
(i) the corresponding disc has rotated through 360° and has not made any
pin contacts
(ii) the resultant pin contacts do not meet the minimum contact requirement
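The controller's selection-and-retry cycle can be sketched as a loop over the rated agents. This Python sketch assumes simple data structures (a dict of ratings and an act() callback that runs one agent and reports its contact counts); names and structure are illustrative:

```python
def run_controller(ratings, act, min_edge=2, min_face=1):
    """Pick the highest rated agent each round; if it fails to meet the
    contact criteria, take it off the rating list and try the next one."""
    remaining = dict(ratings)
    while remaining:
        best = max(remaining, key=remaining.get)  # highest rated agent
        edge, face = act(best)                    # agent rotates its disc, reports contacts
        if edge >= min_edge and face >= min_face:
            return best                           # goal contact criteria met
        del remaining[best]                       # unsuccessful agent is discarded
    return None                                   # all agents discarded: no grasp
```

Returning None corresponds to the program ending with all agents discarded, as happens for the first and third pentagon faces in section 6.3.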
The agent structure is the same as before, except that there is no need for the pre-
condition component. Should the need for a pre-condition arise at a later time,
then this can be easily added, see section 6.5.
The implementations of the agents are similar to one another, as the finger agents
were in the tip grasp achievement with the three-fingered hand. The task of each
agent is to:
rotate the rotary disc by a small amount
determine the contact status of the rotary pins and the object face
assert the contact status on the IB
configA:body :-
    /* Get object face information from IB */
    bb_get(object:shape, [Regularity,_]),
    bb_get(object:face, Face),
    bb_get(configA:pins, PinConfiguration),
    /* Determine which pins from PinConfiguration */
    /* made edge or face contact */
    contact(PinConfiguration, Face,
            [], EdgeContacts,
            [], FaceContacts),
    /* Assert output on IB; the EdgeContacts and */
    /* the FaceContacts lists may be empty if no */
    /* pins have made contact. */
    bb_put(edge_contacts, EdgeContacts),
    bb_put(face_contacts, FaceContacts).
Listing 30: configA Implementation
The pin configuration of each agent is stored as <agent_name>:pins on the IB.
It is a list of N pin locations, where N is the number of pins on the rotary disc;
each pin location is given as an (x,y) coordinate, which corresponds to a location
on the disc surface, i.e. no z-coordinate is needed.
6.2.4 Rating System
Given an object, each pin-configuration-specific agent must rate its own confi-
dence in achieving the minimum contact requirement. This rating is based on four
influencing factors:
(i) the number of pins in the rotary
(ii) the arrangement of the pins in the rotary
(iii) the shape of the object face (regular, semi-regular, or irregular)
(iv) the number of sides (odd or even) of the object face
Since the number of pins and the arrangement of the pins in the rotary disc are
fixed for each agent and for all objects, influencing factors (i) and (ii) are taken into
account in the default sub-rating of the agents. Table 13 shows the default sub-rat-
ings assigned to each of the agents. The motivation behind these sub-ratings is
that the fewer pins a rotary has, the better. Also, pin arrangements which exhibit a
variety of distances are rated better than regularly spaced pin arrangements with
the same number of pins.
The last two influencing factors, (iii) and (iv), depend on the object, and thus they
change every time the program is run. These factors therefore serve as the basis of
the opportunistic sub-rating of the agents. Note that since the shape of the object
does not change throughout the program, neither do the opportunistic sub-ratings.
The sub-ratings do vary from agent to agent, as in Table 13.
The weights used for the two sub-ratings are: 0.4 for default weight and 0.6 for
opportunistic weight. Thus, slightly more emphasis is placed on the opportunistic
than default sub-rating.
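With these weights, each agent's overall rating is the weighted sum of its two sub-ratings. A minimal Python sketch; the sub-rating values below are illustrative, not the entries of Table 13:

```python
W_DEFAULT, W_OPPORTUNISTIC = 0.4, 0.6   # weights used in this implementation

def rating(default_sub, opportunistic_sub):
    """Overall rating: weighted sum of the two sub-ratings."""
    return W_DEFAULT * default_sub + W_OPPORTUNISTIC * opportunistic_sub

# Illustrative (default, opportunistic) sub-ratings -- NOT the Table 13 values.
subs = {"configA": (0.9, 0.5), "configB": (0.6, 0.8), "configD": (0.3, 0.4)}
ratings = {name: rating(d, o) for name, (d, o) in subs.items()}
best = max(ratings, key=ratings.get)    # agent scheduled first by the controller
```

Because the opportunistic weight dominates, an agent whose pin configuration suits the current face (here configB) outranks an agent with a better default sub-rating.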
Triangular Faces
Three slightly different triangular faces have been tested. The number of cycles for
each face is as shown in Table 15. Note that 631 is the maximum number of cycles
of one program execution.
Figure 57: Pin contacts of disc configurations A, B, C, and D on the first triangular
face.
Figure 58 shows the results of grasping the second triangular face. This time, disc
configuration B was successful in grasping the object face, after configurations A
and C failed to do so. Three edge contacts and one face contact ensured that the
contact criteria were met.
Figure 58: Pin contacts of the disc configurations on the second triangular face.
Figures 59 to 62: Pin contacts of the disc configurations on the third triangular
face and on the square faces.
Face  Grasped  Cycles  Agents Executed
1     No       631     A, C, B, D
2     Yes      378     A, C, B
3     No       631     A, C, B, D
Unfortunately, the first pentagon face was not successfully grasped. All four disc
configurations were allowed to run until none were left. Figure 63 shows the
unsatisfactory contacts which were made with this face.
The second pentagon face was successfully grasped, as seen in Figure 64.
Although disc configurations A and C were unsuccessful at first, configuration B
prevailed after 378 cycles.
Figure 65 shows the unsuccessful attempt of the four disc configurations at grasp-
ing the third pentagon face.
Figure 65: Pin contacts of the disc configurations on the third pentagon face.
6.4 Discussion
The three classes of object faces investigated in section 6.3 were chosen for their
simplicity. Varying the face shape slightly within each class showed that small
changes in the shape of the object affected the ability of the disc rotaries to meet
the goal contact criteria. Looking at the cases where a face failed to be grasped
successfully, it is evident that many of these cases were close, although not close
enough.
Disc configuration C was close in grasping the first triangular face, Figure 57, and
the third square face, Figure 62. Changing this configuration slightly could result
in successful grasps of these faces in the future.
Going through a test of M object faces and N disc configurations is a good way of
seeing what works and what does not. Those configurations which do not work
can then be modified so as to be potentially useful in the next set of tests. In this
case, configuration D never succeeded in grasping any of the objects. Thus, given
the set of faces tested here, this configuration can be discarded.
As in the case of grasping curved objects with a three-fingered hand, here too the
physical distribution of the pins on the rotary discs limits the range of objects
which a given pin configuration can grasp. Thus, either a large number of discs is
required to ensure the successful grasp of a wide range of objects, or each pin con-
figuration would require a much greater number of pins.
Although the task of grasping an object face with the parallel reconfigurable jaw
end-effector is not as challenging as grasping with a three-fingered hand, the EOS
still lent itself to this problem.
6.5 Extensions
The EOS architecture is very flexible and modular, thus facilitating the expansion
of this program as further functionality is incorporated.
One consequence of the lack of y-axis symmetry in the face is that each disc, the
left and right one, has to undergo its own evaluation of which rotary disc configu-
ration is best for the face in front of it. Furthermore, it is necessary to analyze the
grasp quality of these resultant pin contacts together. The manner in which this
can be accomplished is to use one EOS system for each disc and to add a higher
level controller which supervises the cooperation of these two systems, Figure 66.
Figure 66: Two EOS systems, one per disc, supervised by a high-level controller
which verifies the grasp, with a high-level IB storing the status variables.
6.5.2 Modularity
Currently the EOS for the Parallel Reconfigurable Jaw Gripper has only four
agents. However, the addition and/or deletion of agents is easily accomplished.
Adding an agent means that the new agent's body must be added to the program,
suitable sub-ratings must be decided on, and the agent's name must be added to
the list of agents, agents, on the IB. Deleting an existing agent involves only the
removal of that agent's name from the agents list; the code itself can be deleted
at a later time, if desired. Given the zero success rate of the agent responsible for
disc configuration D, the EOS modularity makes it easy to remove or modify this
agent.
7.1 Conclusions
The Enhanced Opportunistic System (EOS) was successfully implemented as the
architecture for the grasping tasks of a dexterous end-effector and of a parallel
reconfigurable jaw end-effector. This agent-based architecture exhibited oppor-
tunistic centralized control and was enhanced through a novel rating system
based on the Bayesian formalism. The rating system is used to calculate the util-
ity of agents during the current cycle and to distribute the task of scheduling the
agents to the agents themselves.
The two EOS implementations discussed in this thesis have presented the corre-
sponding EOS agents as belonging to one of three categories: physical, behav-
ioural, and task agents. These categories do not span the space of all possible
agents, thus it is possible that other grasping problems call for other categories of
agents as well. Franklin and Graesser [8] have surveyed many agent-based archi-
tectures and have found that each researcher has implemented agent categories
which match his/her problem. Other agent categories which have been identified
are reasoning agents, communicative agents, and information agents. It is up to
the researcher to determine what agent categories are required to solve the prob-
lem at hand. Once the agent categories are identified, the researcher can then start
to identify decoupled entities within a category. The agents within a category
have similar tasks, but they are responsible for different parts of the system. For
example, in the EOS implementation of the dexterous end-effector, two types of
behaviour were required, the wrap grasp and the tip grasp. Since one behaviour is
decoupled from the other, each behaviour was assigned its own agent. Keeping
these two behaviours decoupled means the system modularity can be maintained.
There is no formula which the author is aware of that can predict the number of
agent categories and the agents within each category; however, a methodology has
been identified: (i) identify the agent categories needed to address the problem
and then (ii) identify the agents within each category.
The dexterous end-effector analysis showed that the shape of curved objects such
as spheres and cylinders can be determined through haptic exploratory proce-
dures, such as EP1 and EP3. The data gathered from object reconstruction can
then be used to establish a stable tip-prehension grasp of the object. Studying
object shapes composed of combinations of spheres and cylinders has shown that
once the primary feature has been identified and the secondary feature has been
located, tip prehension of the main feature can still be accomplished in spite of
the object's more complex shape.
The flexibility of the EOS towards other grasping problems has been shown
through its use in the problem of grasping using the parallel reconfigurable jaw
end-effector.
This simple implementation has shown that not only is the EOS suited to this task,
but that the resultant system is open to many areas of expansion.
EP1 and EP3 are the two haptic exploratory procedures used in this thesis; how-
ever, they are not the only haptic exploratory procedures which have been identi-
fied and classified. Humans use a wide variety of EPs in sensing their
environment [17] and there is no reason why a robot cannot do the same. The object
reconstruction method presented here addresses only the identification of an
object's shape and size; however, objects have many more properties. For exam-
ple, identifying the texture of a surface can say a lot about what that object is.
The rating system of the EOS has successfully done its job of scheduling agents in
the grasping environment. However, currently the rating system has no means of
improving itself based on past experiences; the challenges of grasping the same
type of object are encountered with every program iteration. Consequently, work
needs to be done in allowing the system to learn from its grasping experience.
This learning factor can be implemented as a third sub-rating of the rating system,
thus pooling the knowledge acquired by three sub-ratings:

rating(Agent_j) = Σ (i = 1 to 3) Weight_i × SubRating_i,j    (24)
In this program the environment, i.e. the robotic hand and the objects to be
grasped, has been simulated, although these simulations have been isolated
from the rest of the system architecture. As a result, this EOS implementation
could easily be used with actual hardware. Assuming that the fingers of the actual
robotic hand can be controlled through the joints of the fingers, the robotic
hand can use the same inputs as the modeled hand. In addition, the output of the
modeled hand can be duplicated with outputs from an actual robotic hand, given
that the robotic hand has tactile sensors at least at the fingertips to provide con-
tact detection information. Contact with finger links can be detected with either
joint torque sensors or tactile sensors along the link.
The EOS implementation of grasping with the parallel reconfigurable jaw gripper
has shown that choosing the appropriate disc configurations for a set of object
faces is not easy. The shape of the face and the number of edges it has are a good
starting point, but there is much work to be done in being able to predict the util-
ity of a disc pin configuration given an object.
Appendix I
The robotic hand modeled has three fingers, each consisting of three links. The
fingers are attached to the wrist as shown in Figure 67:
Figure 67: Attachment of fingers 1, 2, and 3 to the wrist, and the origin of the wrist
coordinate frame.
Consequently, the origins of the fingers do not coincide with the origin of the
coordinate frame of the wrist. However, this is easily taken care of through a
simple transformation. For the purpose of deriving the inverse kinematic
equations for each finger, it is assumed that all the origins coincide.
Each finger can be modeled by three links and three joints, as shown in Figure
68.
Figure 68: Kinematic Model of a Finger
The above configuration is for the bottom finger, finger 3, since joint angles θ2
and θ3 are between 0° and 180°. This is called the elbow-down configuration.
Joint angles θ2 and θ3 of the top two fingers are between 180° and 360°, see
Figure 69, thus this configuration is called elbow-up.
Due to the difference in elbow configuration between fingers 1/2 and finger 3,
it is necessary to look at the finger kinematics from two points of view: elbow
up and elbow down.
1.1 Forward Kinematics
Figure 70 shows the coordinate frame of a finger and its corresponding joint
variables to be used for the forward kinematics of the robotic hand. Solving the
forward kinematic equations for this manipulator requires the solution of the
following three sets of coordinates with respect to the (xf, yf, zf) coordinate
frame of the finger: (x1,y1,z1), (x2,y2,z2), and (x3,y3,z3). {l1, l2, l3} are the link
lengths of links 1, 2, and 3 respectively.
(x3,y3,z3) is the last coordinate and it is the fingertip location of the finger.
Letting (x0, y0, z0) be the origin of the finger in the global frame, the fingertip
location with respect to the global frame is:
For the elbow-up configuration (fingers 1 and 2):

x3,g = (l1 + l2·cos(2π − θ2) + l3·cos(4π − θ2 − θ3))·cos θ1 + x0
y3,g = (l1 + l2·cos(2π − θ2) + l3·cos(4π − θ2 − θ3))·sin θ1 + y0    (I-4)
z3,g = −l2·sin(2π − θ2) − l3·sin(4π − θ2 − θ3) + z0

For the elbow-down configuration (finger 3):

x3,g = (l1 + l2·cos θ2 + l3·cos(θ2 + θ3))·cos θ1 + x0
y3,g = (l1 + l2·cos θ2 + l3·cos(θ2 + θ3))·sin θ1 + y0    (I-5)
z3,g = l2·sin θ2 + l3·sin(θ2 + θ3) + z0
where
−π/2 ≤ θ1 ≤ 0    for finger 1
0 ≤ θ1 ≤ π/2    for finger 2
In addition,

l2·sin θ2 + l3·sin(θ2 + θ3) = z3
and
l2·cos θ2 + l3·cos(θ2 + θ3) = √(x3² + y3²) − l1

therefore,

cos θ3 = (x3² + y3² + z3² − 2·l1·√(x3² + y3²) + l1² − l2² − l3²) / (2·l2·l3)

∴ θ3 = acos( (x3² + y3² + z3² − 2·l1·√(x3² + y3²) + l1² − l2² − l3²) / (2·l2·l3) ),
where π < θ3 < 2π
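The θ3 formula can be checked numerically with a forward/inverse round trip. A Python sketch of the elbow-down case (finger 3, finger-frame origin); link lengths and joint angles below are illustrative:

```python
from math import cos, sin, acos, sqrt, isclose

def fk(l1, l2, l3, th1, th2, th3):
    """Elbow-down forward kinematics of one finger, in the finger frame
    (the I-5 equations with the finger origin at (0, 0, 0))."""
    r = l1 + l2 * cos(th2) + l3 * cos(th2 + th3)   # radial reach in the x-y plane
    return (r * cos(th1), r * sin(th1), l2 * sin(th2) + l3 * sin(th2 + th3))

def theta3(l1, l2, l3, x, y, z):
    """Recover theta3 from a fingertip position; acos yields the
    elbow-down branch (0 < theta3 < pi)."""
    c3 = (x*x + y*y + z*z - 2.0 * l1 * sqrt(x*x + y*y)
          + l1*l1 - l2*l2 - l3*l3) / (2.0 * l2 * l3)
    return acos(c3)
```

Feeding the result of fk back into theta3 reproduces the original θ3; for the elbow-up fingers the branch π < θ3 < 2π would be taken instead.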
Given a target point P = (px, py, pz), it is important to first check that the point
is within the workspace of the finger. If the point is not within the finger's
workspace, then there is no need to go on with the calculations of the joint
angles.
Assuming that:
the coordinate frame of the finger is {xf, yf, zf}
the lengths of the finger links are l1, l2, and l3
the target point is P = (px, py, pz)
Using the notation in Figure 72, in conjunction with the Cosine Law, the third
finger constraint can be calculated. The Cosine Law relates the values c, l2, l3,
and α (the triangle angle between l2 and l3):

c² = l2² + l3² − 2·l2·l3·cos α

Therefore,

c > √(l2² + l3²)    for 0 ≤ α < π/2
c < √(l2² + l3²)    for −π/2 < α ≤ 0    (I-23)
c = √(l2² + l3²)    for α = π/2

Furthermore,

c = √(a² + b²)    (I-24)

where a = √(px² + py²) − l1 and b = pz.

Combining equations (I-23) and (I-24), the third finger constraint is:

√((√(px² + py²) − l1)² + pz²) > √(l2² + l3²)    for 0 ≤ α < π/2
√((√(px² + py²) − l1)² + pz²) < √(l2² + l3²)    for −π/2 < α ≤ 0    (I-25)
√((√(px² + py²) − l1)² + pz²) = √(l2² + l3²)    for α = π/2
1.3.2 Finger 3
The first two constraints of finger 3 differ from those of finger 1 and finger 2:
px ≥ 0 and py ≥ 0
Given a set of correlated events {A, B1, B2, ..., Bn}, the probability of event A,
denoted as P(A), can be calculated from the probability of the simultaneous occur-
rence of events A and Bi, P(A,Bi) [29]:

P(A) = Σ (i = 1 to n) P(A, Bi)    (II-1)
Then, using Bayes' Rule, P(A,Bi) can be calculated as in equation (II-2):

P(A, Bi) = P(A | Bi) · P(Bi)    (II-2)

where P(A | Bi) is the conditional probability that A will happen given Bi. Two
requirements are imposed on equations (II-1) and (II-2): the events Bi must be
mutually exclusive, and they must be exhaustive, so that their probabilities sum
to one.
Combining equations (II-1) and (II-2), P(A) can be calculated given P(A | Bi) and
P(Bi) as in equation (II-4):

P(A) = Σ (i = 1 to n) P(A | Bi) · P(Bi)    (II-4)
The probability of A, rain, can then be calculated by summing, over the days of
the week, the product of the probability of rain on a given day and the probability
of that day.
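The rain example follows directly from equation (II-4). A short Python sketch with invented probabilities:

```python
# P(rain) via the total probability rule of (II-1)/(II-4): the days of the
# week are mutually exclusive and exhaustive.  All values are illustrative.
p_day = {d: 1.0 / 7.0 for d in
         ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")}   # P(Bi)
p_rain_given_day = {"Mon": 0.2, "Tue": 0.1, "Wed": 0.3, "Thu": 0.2,
                    "Fri": 0.1, "Sat": 0.4, "Sun": 0.1}       # P(A | Bi)

# Equation (II-4): P(A) = sum_i P(A | Bi) * P(Bi)
p_rain = sum(p_rain_given_day[d] * p_day[d] for d in p_day)
```

Because the days are equiprobable, P(rain) reduces to the average of the seven conditional probabilities.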
EP1 Background
The haptic exploration investigated in this thesis is that of a finger rolling on the
surface of an object, which was defined as EP1 by Charlebois, Gupta, and Payan-
deh [3].
EP1 is executed by slightly rolling the robot finger in the neighbourhood of the
contact point on the object, rolling the finger along the object in a cross pattern in
two directions {u-direction, v-direction}.
The rolling must be done at a known and constant angular velocity around a fixed
axis in the instantaneous contact frame. The curvature estimation method is
based on an equation relating the following quantities:
p = contact point on the probe in the [u,v] directions
M = fingertip metric
K1 = curvature form of the fingertip (known)
K2 = curvature form of the object in contact with the fingertip
[ωx, ωy] = angular velocities of the fingertip's contact frame w.r.t. the
object's contact frame around the x and y axes
[vx, vy] = linear velocities of the fingertip's contact frame w.r.t. the
object's contact frame in the x and y directions ([vx, vy, vz] = [0,0,0] without
slippage)
K2 can be solved for, and the diagonal elements of K2 give the normal curvatures
in the u and v directions.
The type of information which can be retrieved about an object with EP1 is the
surface curvature, i.e. the radius (r) of the object at the point of contact.
Bibliography
[2] M. Charlebois, Exploring the Shape of Objects with Curved Surfaces using
Tactile Sensing, M.A.Sc. Thesis, Simon Fraser University, Burnaby, B.C.,
December 1996.
[4] N. Chen, R. Rink, and H. Zhang, "Local Object Shape From Tactile Sensing,"
IEEE International Conference on Robotics and Automation, 1996, pp. 3496 -
3501.
[5] M.R. Cutkosky, Robotic Grasping and Fine Manipulation, Kluwer Aca-
demic Publishers, 1985, pp. 87 - 109.
[6] L.D. Erman, F. Hayes-Roth, V.R. Lesser, and D.R. Reddy, "The Hearsay-II
Speech-Understanding System: Integrating Knowledge to Resolve Uncer-
tainty," in Engelmore and Morgan (editors), Blackboard Systems, Addison-
Wesley Publishing Company, 1988, pp. 61 - 64.
[7] J.D. Foley, A. van Dam, S.K. Feiner, and J.F. Hughes, Computer Graphics:
Principles and Practice, Addison-Wesley Publishing Company, 1990, pp. 557 -
558.
[12] M. Hong and S. Payandeh, "Novel Design of a Class of Robust and Dexter-
ous End-Effectors/Fixtures for Agile Assembly," IEEE International Confer-
ence on Systems, Man, and Cybernetics, vol. 2, 1996, pp. 1393 - 1398.
[13] M. Hong and S. Payandeh, "Design and Planning of a Novel Modular End-
Effector for Agile Assembly," IEEE International Conference on Robotics and
Automation, 1997, pp. 1529 - 1535.
[15] M. Kaneko, Y. Hino, and T. Tsuji, "On Three Phases for Achieving Envelop-
ing Grasps," IEEE International Conference on Robotics and Automation,
1997, pp. 385 - 390.
[16] M. Kaneko and K. Honkawa, "Contact Point and Force Sensing for Inner
Link Based Grasps," IEEE International Conference on Robotics and Automa-
tion, 1994, pp. 2809 - 2814.
[17] R.L. Klatzky and S.J. Lederman, "Stages of manual exploration in haptic
object identification," Perception & Psychophysics, 52 (6), 1992, pp. 661 - 670.
[18] R. Liscano, A. Manz, E.R. Stuck, R.E. Fayek, and J-Y. Tigli, "Using a Black-
board to Integrate Multiple Activities and Achieve Strategic Reasoning for
Mobile-Robot Navigation," IEEE Expert, April 1995, pp. 24 - 36.
[24] A.M. Okamura, M.L. Turner, and M.R. Cutkosky, "Haptic Exploration of
Objects with Rolling and Sliding," IEEE International Conference on Robotics
and Automation, 1997, pp. 2485 - 2490.
[27] L. Overgaard, B.J. Nelson, and P.K. Khosla, "A Multi-Agent Framework
For Grasping Using Visual Servoing and Collision Avoidance," IEEE Interna-
tional Conference on Robotics and Automation, April 1996, pp. 2456 - 2461.
[32] K.S. Roberts, "Robot Active Touch Exploration: Constraints and Strate-
gies," IEEE International Conference on Robotics and Automation, 1990, pp.
980 - 985.
[33] M.A. Rodrigues, Y.F. Li, M.H. Lee, J.J. Rowland, and C. King, "Robotic
Grasping of Complex Objects Without Full Geometrical Knowledge of the
Shape," IEEE International Conference on Robotics and Automation, 1995, pp.
737 - 742.
[34] M. Seitz and J. Kraft, "Some Approaches to Context Based Grasp Planning
for a Multi-fingered Gripper," IEEE/RSJ International Conference on Intelli-
gent Robots and Systems, September 1994, pp. 360 - 365.
[35] K.B. Shimoga, "Robot Grasp Synthesis Algorithms: A Survey," The Interna-
tional Journal of Robotics Research, vol. 15, no. 3, June 1996, pp. 230 - 266.