
DESIGN AND ANALYSIS OF AN
ENHANCED OPPORTUNISTIC SYSTEM
FOR GRASPING

Andrea A. Ordean
B.A.Sc. in Electrical Engineering
University of Waterloo, 1995

A THESIS SUBMITTED IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF APPLIED SCIENCE
in the School of Engineering Science

© Andrea A. Ordean 1997


SIMON FRASER UNIVERSITY
November, 1997

All rights reserved. This work may not be reproduced
in whole or in part, by photocopy or other means,
without the permission of the author.
Approval

Name: Andrea A. Ordean

Degree: Master of Applied Science

Title of thesis: Design and Analysis of an Enhanced

Opportunistic System for Grasping

Examining Committee:

Professor Tom Calvert
Chairperson

Associate Professor Shahram Payandeh
Senior Supervisor

Professor Veronica Dahl
Supervisor

Professor John Dill
Examiner

Date Approved:
PARTIAL COPYRIGHT LICENSE

I hereby grant to Simon Fraser University the right to lend my thesis,


project or extended essay (the title of which is shown below) to users of the
Simon Fraser University Library, and to make partial or single copies only for
such users or in response to a request from the library of any other university, or
other educational institution, on its own behalf or for one of its users. I further
agree that permission for multiple copying of this work for scholarly purposes
may be granted by me or the Dean of Graduate Studies. It is understood that
copying or publication of this work for financial gain shall not be allowed without
my written permission.

Title of Thesis/Project/Extended Essay

"Design and Analysis of an Enhanced Opportunistic System for Grasping"

Author:
(signature)

(name)

(date)
Abstract

The design and analysis of an enhanced opportunistic system (EOS) is presented
within the realm of grasp achievement through object reconstruction. The EOS is
based on a centralized opportunistic control architecture driven by a novel rating
system. This rating system is embedded within the agents of the opportunistic
architecture and forms part of the EOS control mechanism, allowing each agent
to estimate its own worth during each cycle. The grasp achievement strategies
incorporated within the EOS include object reconstruction methods for curved
objects and a measure of the quality of the resultant grasp. The analysis
investigates the ability of the EOS to generate good tip-prehension grasps of
spheres and cylinders with a three-fingered robotic hand. The robustness of the
rating system is also assessed. The EOS is modular and therefore flexible; these
features are illustrated through the implementation of grasping with a parallel
reconfigurable jaw end-effector.
Acknowledgments

I would like to thank Dr. Shahram Payandeh for his guidance and support in the
course of this work. I would also like to thank Dr. Veronica Dahl for the interesting
discussions and moral support she has given me.
Glossary

An agent is a module situated within and a part of an environment that senses
that environment and acts on it over time, in pursuit of its own agenda.

An autonomous system is one which can act on its own, i.e. it is self-contained.

A compliant fingertip is a fingertip which deforms at the point of contact, when it
comes into contact with an object, as the human fingertip does. The degree of this
deformation determines the degree of compliance of the fingertip.

A distributed control architecture is one where the control is shared among many
modules. On the other hand, a centralized control architecture is one where one
module is responsible for the control of all others.

A flexible system is one which is easily modifiable and extendable.

The plane in which the grasp triangle lies is called the grasp plane.

A grasp triangle is the triangle which is formed by the fingertips of three fingers
of a robotic hand, when they are in contact with an object.

The grasp configuration is the geometric figure formed by the contacts of the fin-
gers on an object.

Haptic sensory inputs refer to sensory inputs which are received from the sense of
touch.

The internal force is that component of the applied force at a contact point which
does not contribute to resisting external forces and moments acting on the object.
A modular system is one with relatively independent components, which com-
municate or interact with each other in a well defined manner.

Object reconstruction refers to the task of identifying an object whose shape is
unknown by using the sense of sight, touch, or other senses. The identification task
consists of incremental probings of the object and of converting the probing output
data into an object shape.

A polygon is a shape in two dimensions which has any number of straight edges,
and no curved edges.

A polyhedron is a shape in three dimensions which has any number of
polygon-shaped faces.

The shape primitive is the basic shape an object has. For example, a cup has a
handle, but the main feature of the cup is a cylinder. Thus, the cylinder is the
shape primitive of the cup.

A tip (prehension) grasp is a grasp in which only the fingertips of the fingers are
in contact with the object.

The environment of a robot is a portion of space which the robot is allowed to
move in. This portion of space contains the robot and any other objects with
which the robot interacts.

Probability Notation:
P(A,B) is the probability of A and B occurring at the same time.
P(A I B) is the probability of A occurring, given that B has occurred.
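
These two quantities are related by the standard definition of conditional probability (generic background, consistent with the Bayesian formalism of Appendix II):

```latex
% Conditional probability (standard definition):
\[
  P(A \mid B) \;=\; \frac{P(A, B)}{P(B)}, \qquad P(B) > 0,
\]
% from which Bayes' rule follows directly:
\[
  P(A \mid B) \;=\; \frac{P(B \mid A)\, P(A)}{P(B)}.
\]
```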
Table of Contents

Approval .......................................................................................................................... ii
Abstract .......................................................................................................................... iii
Acknowledgements ...................................................................................................... iv
Glossary .......................................................................................................................... v
Table of Contents ......................................................................................................... vii
List of Tables .................................................................................................................. xi
List of Figures ............................................................................................................... xii
1 Introduction ................................................................................................................. 1
1.1 Rising to the Challenge ...................................................................................2
1.2 Literature Review ............................................................................................4
1.2.1 Types of Control Architectures .........................................................4
1.2.2 Object Reconstruction ........................................................................8
1.2.3 Object Gamut .......................................................................................8
1.3 Contributions ...................................................................................................9
1.4 Thesis Layout ...................................................................................................9

2 Enhanced Opportunistic System (EOS) ...............................................................11


2.1 EOS Origins ....................................................................................................11
2.2 Stages ...............................................................................................................13
2.2.1 Stage 1 - Info .......................................................................................13
2.2.2 Stage 2 - Grasp ................................................................................... 15

2.3 The EOS Architecture .................................................................................... 16
2.3.1 Information Board (IB) ..................................................................... 18
2.3.2 Agents ................................................................................................. 19
2.3.3 Controller ...........................................................................................35
2.3.4 Data Flow ........................................................................................... 36
2.4 Rating System .................................................................................................36
2.4.1 Assigning the Default Sub-Rating ..................................................38
2.4.2 Assigning the Opportunistic Sub-Rating ....................................... 38

3 Environment Simulation .......................................................................................41


3.1 Robotic Hand Representation ...................................................................... 41
3.2 Object Representation ...................................................................................49
3.3 Contact Detection ..........................................................................................51
3.3.1 Finger Contact ....................................................................................51
3.3.2 Wrist Contact .....................................................................................53

4 Grasping & Object Reconstruction .....................................................................55


4.1 EP1: Shape Description ................................................................................. 56
4.1.1 Assumptions ......................................................................................56
4.1.2 Background ........................................................................................ 56
4.1.3 Implementation .................................................................................59
4.2 EP3: Shape Matching ..................................................................................... 62
4.3 Tip-Prehension ............................................................................................... 64
4.4 Grasp Quality ................................................................................................. 64
4.5 EP1 to EP3 Transition .................................................................................... 69
4.6 EP3 to Tip-Prehension Transition ...............................................................70

5 Analysis of the EOS for a Dexterous End-Effector ........................................71


5.1 Analysis of the Resultant Behaviour ........................................................... 71
5.1.1 info Stage Behaviour ..........................................................................73
5.1.2 grasp Stage Behaviour .......................................................................75
5.2 Effect of Object Shape ...................................................................................77
5.2.1 Sphere vs. Cylinder ...........................................................................78
5.2.2 Simple vs. Complex ..........................................................................81
5.3 Effect of Initial Finger Contact .....................................................................89
5.3.1 Finger 1 & Finger 2 vs. Finger 3 ......................................................89
5.3.2 Contact Location ................................................................................ 90
5.4 Effect of Rating System ................................................................................. 97
5.5 Discussion .....................................................................................................100

6 Analysis of the EOS for a Parallel Reconfigurable Jaw End-Effector ........103


6.1 Overview ....................................................................................................... 103
6.2 Implementation ............................................................................................105
6.2.1 Types of Objects Accommodated ................................................. 108
6.2.2 Object Contact Detection ................................................................ 109
6.2.3 EOS Architecture ............................................................................. 112
6.2.4 Rating System .................................................................................. 117
6.3 Experimental Results .................................................................................. 118
6.4 Discussion .....................................................................................................125
6.5 Extensions ..................................................................................................... 126
6.5.1 Flexibility: Dual EOS System .................................................... 126
6.5.2 Modularity ....................................................................................... 127
7 Conclusions & Future Work ................................................................................129
7.1 Conclusions .................................................................................................. 129
7.2 Future Work .................................................................................................130
Appendix I: Modeling the Robotic Hand ..............................................................132
1.1 Forward Kinematics ..................................................................................... 134
1.1.1 Finger 1 and 2: elbow up Configuration ......................................... 134
1.1.2 Finger 3: elbow down Configuration ............................................... 135
1.2 Inverse Kinematics ....................................................................................... 136
1.2.1 Finger 1 and 2: elbow up Configuration ......................................... 137
1.2.2 Finger 3: elbow down Configuration ............................................... 138
1.3 Finger Constraints ........................................................................................ 138
1.3.1 Finger 1 and Finger 2 ................................................................ 139
1.3.2 Finger 3 .............................................................................................. 140
Appendix II: The Bayesian Formalism ....................................................................141
Appendix III: EP1 Background ................................................................................143
Bibliography ...............................................................................................................145
List of Tables

Table 1: Control Modes ................................................................................................. 6


Table 2: Agent Types ................................................................................................... 20
Table 3: Info Stage Wrist Actions ...............................................................................24
Table 4: Grasp Stage Wrist Actions ...........................................................................24
Table 5: Classifying Secondary Feature Location ....................................................33
Table 6: Grasping Complex Objects ...................................................................... 34
Table 7: Rating System Scenario ................................................................................38
Table 8: Grasping a Sphere vs. a Cylinder ................................................................ 78
Table 9: Varying Finger in Contact ............................................................................89
Table 10: Finger 2 Contact Position Analysis ...........................................................91
Table 11: Finger 3 Contact Position Analysis ...........................................................92
Table 12: Varying Weighting Factors ........................................................................98
Table 13: Agent Default Sub-ratings .......................................................................117
Table 14: Agent Opportunistic Sub-ratings ............................................................ 118
Table 15: Triangular Face Summary ........................................................................ 119
Table 16: Square Face Summary ..............................................................................121
Table 17: Pentagon Faces Summary ........................................................................ 123
List of Figures

Figure 1: Manipulating an Unknown Object ............................................................. 1


Figure 2: Info Stage Flow Chart ..........................................................................14
Figure 3: Grasp Stage Flow Chart .............................................................................. 16
Figure 4: EOS Architecture and Data Flow .............................................................. 17
Figure 5: Information Board Representation ........................................................... 18
Figure 6: Control of an Underactuated Finger ......................................................... 21
Figure 7: The Grasp Triangle ......................................................................................27
Figure 8: Choosing the Next Contact Point .............................................................. 29
Figure 9: Hand Direction of Approach to the Object ........................................ 31
Figure 10: Simple vs. Complex Shapes ..................................................................... 32
Figure 11: Types of Grasps ......................................................................................... 33
Figure 12: Wrist Location & Dimension ...................................................................42
Figure 13: Wrist Relative Coordinate Frame ............................................................ 42
Figure 14: Finger Link Origins ................................................................................... 43
Figure 15: Cylinder Orientations ............................................................................... 49
Figure 16: Representing a Cylinder ...........................................................................50
Figure 17: Executing EP1 .............................................................................................57
Figure 18: Cylinder Orientation .................................................................................58
Figure 19: Grasping Profile of Finger 1 and Finger 3 ........................................ 62

Figure 20: Friction Cone .............................................................................................. 65
Figure 21: Grasp Profile of Spheres and Cylinders ................................................. 66
Figure 22: Grasping with and without Friction ....................................................... 67
Figure 23: Geometrical Equivalent ............................................................................ 68
Figure 24: Agent Execution Profile ............................................................................ 72
Figure 25: info Stage Behaviour Evolution ................................................................ 73
Figure 26: Agent Execution Profile - info Stage ........................................................ 74
Figure 27: Agent Execution Profile - info Stage Revisited ...................................... 75
Figure 28: grasp Stage Behaviour Evolution .............................................................76
Figure 29: Agent Execution Profile - grasp Stage .....................................................77
Figure 30: Sphere vs. Cylinder Performance - info Stage ........................................ 79
Figure 31: Sphere vs. Cylinder Performance - grasp Stage .....................................80
Figure 32: Sphere vs. Cylinder Performance ............................................................ 80
Figure 33: Sphere-Based Objects ................................................................................82
Figure 34: Sphere-Based Objects Performance ........................................................ 83
Figure 35: Encountering Secondary Features of Spheres ....................................... 84
Figure 36: Cylinder-Based Objects ............................................................................. 86
Figure 37: Cylinder-Based Objects Performance .....................................................87
Figure 38: Encountering Secondary Features of Cylinders .................................... 88
Figure 39: Contact Position Reference ....................................................................... 90
Figure 40: Finger 2 Contact Positions ........................................................................92
Figure 41: Position 2a Agent Execution Profile - info Stage ...................................93
Figure 42: Position 1 Agent Execution Profile - info Stage .....................................94
Figure 43: Comparing Agent Execution Profiles .....................................................95
Figure 44: Comparing info Stage Agent Execution Profiles ................................... 96
Figure 45: System Performance for Various Weighting Factors ........................... 99
Figure 46: Parallel Reconfigurable Jaw Gripper [12] ........................................ 103

Figure 47: Parallel Jaw and Rotary Direction of Motion ...................................... 104
Figure 48: Face and Side Contact of Pins ................................................................ 104
Figure 49: Agent Pin Configurations ....................................................................... 105
Figure 50: Rotary Disc Symmetry ............................................................................ 106
Figure 51: Defining the Rotary Disc Pin Location ................................................. 106
Figure 52: Y-Axis Object Symmetry ........................................................................ 107
Figure 53: Disc Rotation Angles ............................................................................... 107
Figure 54: Object Face Representation .................................................................... 108
Figure 55: Types of Contacts ....................................................................................109
Figure 56: Parallel Reconfigurable Jaw Gripper - Algorithm .............................. 113
Figure 57: Triangular Face #1 ................................................................................... 119
Figure 58: Triangular Face #2 ................................................................................... 120
Figure 59: Triangular Face #3 ................................................................................... 121
Figure 60: Square Face #1 .......................................................................................... 122
Figure 61: Square Face #2 .......................................................................................... 122
Figure 62: Square Face #3 .......................................................................................... 123
Figure 63: Pentagon Face #1 .....................................................................................124
Figure 64: Pentagon Face #2 ..................................................................................... 124
Figure 65: Pentagon Face #3 .....................................................................................125
Figure 66: Dual EOS System ..................................................................................... 127
Figure 67: Frontal View of Robotic Hand ............................................................... 132
Figure 68: Kinematic Model of a Finger ..................................................................133
Figure 69: Finger Positions on Robotic Hand ........................................................133
Figure 70: Finger Coordinate Frame and Joint Position Variables .....................134
Figure 71: Sample Finger Configuration with Coordinate Frame ...................... 136
Figure 72: Determining Workspace Constraints ................................................... 140

Chapter 1

Introduction

The general problem which has yet to be solved in robotics is that of a robotic
hand manipulating an arbitrary object with the ease with which human beings
do. Attempts to solve this problem are numerous; however, there is still much
work to be done. This thesis contributes to the solution of this general problem
by focusing on aspects which still need much research: grasping curved objects
whose shape and location are unknown, the use of haptic exploration for object
reconstruction, and the design and development of an architecture which is
flexible enough to support the integration of these and a variety of other related
tasks.

[Figure: Locate Object → Establish Contact → Apply Forces]

Figure 1: Manipulating an Unknown Object

The task of manipulating an object involves three phases: locating the object in its
environment, establishing a contact with the object, and applying forces to the
object such that the object can be manipulated as desired, Figure 1. The location of
the object in the environment requires the robot hand to scan its environment until
the hand comes into contact with the object. This is analogous to the challenge a
visually impaired person faces in trying to find an item in a room. The contact
establishment of the object involves the hand achieving a specific grasp configura-
tion on the object. At this time it is not important how much force is applied to the
object, only the points of contact of the hand with the object. The last phase of
manipulating an object is the application of forces to the object. This means that
given the contact points of the robotic hand fingers with the object, the fingers can
then apply forces with magnitudes and directions at these points, such that the
object can be manipulated as desired. For example, the forces required to simply
hold an object are different than the forces required to move the object from one
place to another.

In this thesis, the second phase is addressed, i.e. establishing a contact. As seen in
Figure 1, this phase is further sub-divided into two stages. The goal of the first
stage, the Object Information Gathering (info) stage, is to (partially) identify the
object, while the second stage, the Evolution to Tip Grasp (grasp) stage, aims to
produce a stable grasp of the object.
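
The three-phase decomposition above, with the two-stage second phase, can be sketched as a simple sequential driver. This is an illustrative sketch only; all function names and data structures are hypothetical placeholders, not code from this thesis:

```python
# Illustrative sketch of the three-phase pipeline of Figure 1 and its
# two-stage second phase. All names are hypothetical placeholders.

def locate_object(probe_results):
    """Phase 1: scan the environment until the hand first touches the object."""
    return next((p for p in probe_results if p["contact"]), None)

def establish_contact(first_touch):
    """Phase 2 (the focus of this thesis): info stage, then grasp stage."""
    info = {"first_touch": first_touch, "shape": "partially identified"}  # info stage
    return {"stage": "tip grasp", "info": info}                           # grasp stage

def apply_forces(grasp, task):
    """Phase 3: choose contact forces appropriate to the task at hand."""
    return {"grasp": grasp, "forces_for": task}

probes = [{"contact": False}, {"contact": True, "point": (0.1, 0.0, 0.2)}]
touch = locate_object(probes)
result = apply_forces(establish_contact(touch), task="hold")
print(result["forces_for"])  # hold
```

The point of the sketch is only the ordering: contact establishment sits between locating the object and force application, and produces the contact points that phase three needs.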

Addressing contact establishment of a robot hand with an unknown curved object
means overcoming several obstacles. First, an underlying system and architecture
must be developed to support the needs of this task. Second, the underlying archi-
tecture must be able to provide the type of control required to achieve this task.
Lastly, the system must contain models of all necessary environment objects and
their interactions, such as contact detection. This architecture is consequently
based on a modular, agent-based, centralized control model, such as the Opportu-
nistic Control Model [11].

1.1 Rising to the Challenge


First of all, the robot hand construction is further specified. The hand to be used in
this work is a three-fingered robotic hand, i.e. a robotic hand model which has a
wrist and three fingers. Each finger has three links and three joints, as described in
Appendix I. A choice for the fingertip hardness must also be made. There are two
main classes of fingertips: soft and hard. The human fingertip, for example, is a
soft fingertip. One property of soft fingertips is that when they make contact with
an object, this contact cannot be pin-pointed to one point, but rather to an area.
Soft fingertips are good for grasping, because they provide much more friction.
In contrast, hard fingertips are less compliant and thus result in relatively
lower frictional forces. An example of a hard fingertip is a fingertip which is made
out of a hard material, such as steel, and which may be covered with a thin rubber
skin to increase the friction at the point of contact. The hard fingertip, covered
with a thin skin of rubber, is chosen as the lower frictional forces enable the finger
to explore the object with more ease [5]. The hard fingertip is assumed to be hemi-
spherical in shape.

The system developed here is the EOS, Enhanced Opportunistic System. The EOS
is an autonomous system which is used to accommodate all task requirements for
Contact Establishment, such as: object representation, robotic hand modelling and
control thereof, object reconstruction method integration, and a grasp evaluation
method. The architecture itself is modular and flexible, while exhibiting a central-
ized control.

Objects come in many shapes and sizes. Ideally, we would like to be able to deal
with any type of object, but this is not yet realizable. Much of the current research
using only haptic exploration limits the class of objects to polygons and polyhedra
[19], [32]. This work will focus on grasping curved objects, such as spheres and
cylinders.

(Partial) Object reconstruction is very useful in achieving a tip-prehension grasp
of a curved object. In order to reconstruct an object for the purpose of object iden-
tification, haptic exploratory procedures are implemented. One of the exploratory
procedures (EPs) used here is EP1, presented by Charlebois et al. [3]. This EP is
based on rolling the fingertip of a finger on the surface of an object in order to
extract some information about the shape of the object.

Another exploratory procedure, called EP3, is proposed here, which provides a
means of verifying the shape and location of a curved object. EP3 probes the
object by wrapping the fingers around it and using the resultant contact points to
verify the estimated object shape. Naturally, the object is not fully known at this
point, unless every single point of the object has been probed. Thus, EP1, in
conjunction with EP3, can be used to reconstruct the object within a required
degree of confidence.

In order to ensure that the resultant grasp is good, i.e. stable, the grasp is required
to meet the active closure criteria, as proposed by Yoshikawa [40]. The active clo-
sure criteria ensure that the resultant grasp of the robotic hand is stable, i.e. the
object can be held/moved without it slipping out of the hand which has grasped
it. Grasp stability can be achieved through two methods: direct computation or
grasp evolution. The EOS control architecture enables an evolutionary approach
to achieving a stable grasp. The goal of this evolutionary method is to help the fin-
gers move in such a way as to ultimately produce a stable grasp. Direct computa-
tion assumes exact knowledge of the shape and location of the object. It must deal
with many possible answers and must take them all into account before deciding
on one. The method of direct computation has been well studied in the literature
[1], [21], [22], [31], including a survey of the area [35].
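
As standard background to the stability discussion above (and not the specific active closure formulation of Yoshikawa [40]), the usual point-contact-with-friction model requires the force applied at each contact to lie within its friction cone:

```latex
% Friction cone constraint at contact $i$ (generic notation, not the thesis's):
\[
  \left\| \mathbf{f}_{t,i} \right\| \;\le\; \mu_i \, f_{n,i},
  \qquad f_{n,i} \ge 0,
\]
% where $f_{n,i}$ is the normal force component, $\mathbf{f}_{t,i}$ the
% tangential component, and $\mu_i$ the coefficient of friction at contact $i$.
% Contact forces that balance the external wrench on the object while
% satisfying these constraints hold the object without slipping.
```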

1.2 Literature Review


The following is a discussion of the background information which this thesis is
based on. The topics to be addressed include: types of control architectures, object
reconstruction methods already attempted, and classes of objects which have
already been addressed.

1.2.1 Types of Control Architectures


Although the idea of the Blackboard architecture has been alive since 1962 in arti-
ficial intelligence [20], it was not until Barbara Hayes-Roth [10] came along that
this architecture was introduced to planning problems. Since then, there has been
an increased interest in the Blackboard architecture in the field of robotics [18], [23],
[39] and beyond. This type of architecture is very modular, flexible, expandable,
adaptable, and has the ability of integrating a variety of sensors [18]. It has three
basic structures: the blackboard, the knowledge sources and the controller. The black-
board is the repository of all status variables, the knowledge sources are modules
dedicated to performing specific actions, and the controller is responsible for rea-
soning about the current state of the system and for picking the most suitable
knowledge source to run at each instant.

Overgaard et al. [27] have introduced a Multi-Agent framework in grasp plan-
ning, where five types of agents work together to achieve a goal. The notion of
the Multi-Agent framework is similar in structure to the notion of the Blackboard.
The main difference is that the Multi-Agent architecture has an entirely distrib-
uted control design, whereas the Blackboard architecture emphasizes a central-
ized control design through the presence of the controller, which coordinates the
actions of the knowledge sources.

Hayes-Roth [11] has presented a Blackboard architecture reformulation, called the
Opportunistic Control Model, which simplifies the structure of the knowledge sources,
now called action systems, and gives them more autonomy as to when they can
choose to contribute. In this thesis, the action systems in the Opportunistic Control
Model are synonymous with agents, due to their increased autonomy.

In order to be able to pick an appropriate architecture for a given task, Hayes-Roth
[11] has categorized all control situations into four modes: open-loop plan execu-
tion, goal-specific reaction, dead reckoning, and reflex. Open-loop plan execution
means that a detailed plan is generated in advance and executed step by step. In
goal-specific reaction control the system commits to perform specific actions under
specific conditions, but the system must determine which conditions hold, and
thus which actions should be executed. The dead-reckoning mode allows for a very
general plan to be made a priori; at run time, the sequences of actions which
comply with a valid instance of the plan are executed. Lastly, the reflex mode can
sense all classes of run-time conditions and can then perform the actions which
the current conditions require.

These control modes are all suited to different types of situations. The situations
are characterized by how constrained the goals of the system are and by the
amount of uncertainty in the environment of the system. The environment
includes all objects, as well as the robot hand. Table 1 summarizes the control
modes and the situations to which they best lend themselves.

Table 1: Control Modes

    Control Mode                 Goal Constraints    Environment Uncertainty
    Open-loop Plan Execution     max.                min.
    Goal-specific Reaction       max.                max.
    Dead Reckoning               min.                min.
    Reflex                       min.                max.
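Table 1 amounts to a simple decision rule. As an illustration only (the thesis does not implement such a function; the name and the qualitative string levels are assumptions), the mapping can be sketched as:

```python
def select_control_mode(goal_constraints: str, env_uncertainty: str) -> str:
    """Map a control situation to its best-suited control mode, per
    Table 1. Both arguments take the qualitative levels "max" or
    "min" used in Hayes-Roth's characterization."""
    table = {
        ("max", "min"): "open-loop plan execution",
        ("max", "max"): "goal-specific reaction",
        ("min", "min"): "dead reckoning",
        ("min", "max"): "reflex",
    }
    return table[(goal_constraints, env_uncertainty)]
```

For an unknown environment with highly constrained goals, the rule returns goal-specific reaction, matching the discussion in this section.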

Seitz and Kraft [34] have used the goal-specific reaction control mode in vision
assisted grasping and implemented it as a set of planning algorithms controlled
by a set of pre-defined situations. A hierarchical architecture was used.

Tomovic et al. [38], Overgaard et al. [27], and Stansfield [36] have all used the reflex
control mode, thus these systems require no planning. Tomovic et al. [38] and
Overgaard et al. [27] used a distributed control architecture for this implementa-
tion, while Stansfield [36] used a knowledge-based system. The reflex control
mode is suited to these grasping problems because the shape and location of the
object to be grasped are either known or can be determined with great certainty by
use of a vision system.

Ananthanarayanan et al. [1] have combined the use of two control modes, best
classified as goal-specific reaction and dead-reckoning, within a hierarchical Black-
board architecture. An off-line planner is responsible for mapping out a series of
task primitives which are to be executed.

If the environment is assumed to be unknown, i.e. having the greatest amount of
uncertainty, and the goals of the program are highly constrained, the best suited
control mode is the goal-specific reaction. However, if the environment becomes less
uncertain, e.g. due to active exploration, then this mode of control is no longer as
desirable. Instead, open-loop plan execution is favoured.

Recalling the task addressed within this thesis, it is necessary to implement a
framework which can deal with the following control situations. At first, the envi-
ronment is unknown, thus the uncertainty of the environment is high; however,
due to object exploration and partial object identification, the uncertainty of the
environment decreases. Consequently, the proposed EOS must be able to accom-
modate at least these two control modes. The Blackboard architecture and the
Opportunistic Control Model would be able to deal with these control modes as
needed. The Blackboard architecture has already been shown to be able to handle
at least two competing control modes [1], as has the Opportunistic Control Model
[11].

Halpern et al. [9] have acknowledged the need for agents of Multi-Agent systems
to compute their own knowledge. The authors distinguish between two types of
knowledge: externally ascribed knowledge and explicit knowledge. Externally
ascribed knowledge is the type of knowledge which the system programmer gives to
the system, while explicit knowledge is the type of knowledge which the system
acquires through its sensors. The explicit knowledge is what determines an
agent's behaviour. The EOS makes use of these two types of knowledge, as Halp-
ern et al. have done, but in the case of the EOS this knowledge is used to
empower the agents, thus distributing some of the roles of the controller of the
Blackboard architecture to the autonomous modules, the agents. This agent
autonomy more closely resembles the action system modules of the Opportunistic
Control Model. The knowledge given to the agents is used by the agents to determine
their confidence in themselves at any point in time. The confidence of an agent is
its usefulness factor in the current situation.

The approach of augmenting the Blackboard architecture with the help of Bayes'
Rule has been previously mentioned in the literature within the context of evi-
dence incorporation and hypothesis generation [6], [30], [37] in the area of speech
and image recognition. This idea can also be applied to rating agents and even to
the way in which the controller chooses the most appropriate agent.
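As a hedged illustration of how Bayes' Rule could rate an agent (the function and the numeric likelihoods below are hypothetical, not taken from the cited works), a single evidence update on an agent's confidence might look like:

```python
def bayes_update(prior: float, p_evidence_if_useful: float,
                 p_evidence_if_not: float) -> float:
    """Posterior confidence that an agent is useful after one
    piece of evidence, via Bayes' Rule:
    P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = p_evidence_if_useful * prior
    evidence = numerator + p_evidence_if_not * (1.0 - prior)
    return numerator / evidence

# Evidence four times as likely when the agent is useful raises a
# neutral 0.5 prior to 0.8.
posterior = bayes_update(0.5, 0.8, 0.2)
```

The same update could serve either to rate an agent's confidence or to weigh competing hypotheses on the controller's side.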
1.2.2 Object Reconstruction
The idea of object reconstruction has also been explored by Seitz and Kraft [34].
However, they used vision instead of haptic exploration, as have Rodrigues et al.
[33], Stansfield [36], and Tomovic et al. [38].

The need for haptic exploration has been recognized by many researchers, as
vision is not always available or usable. For example, Okamura et al. [24] use hap-
tic exploration in the manipulation of cylindrical objects. Nagata et al. [19] use
haptic exploration to construct models of polyhedrons. Haptic explorations of
unknown curved shapes have been investigated by Charlebois et al. [2], [3], as
well as by Chen et al. [4]. Charlebois et al. have identified two types of exploratory
procedures (EPs) for the purpose of object identification. The first, EP1, requires
one fingertip to roll about a contact point without sliding, and the second, EP2,
requires three fingers to be dragged across the surface of the object. The output of
EP1 is a set of two radii of curvature of the object at the point of contact. EP2
returns a description of the shape of the patch of the object probed.

1.2.3 Object Gamut


Pollard [31] used the Salisbury robotic hand model and focused on frictionless
grasping of polyhedral objects, i.e. objects with flat surfaces. She also used the
notion of grasp equilibrium, as defined by Nguyen [22], to arrive at stable grasps.

Kaneko et al. [15] illustrated enveloping/wrap grasping of cylindrical objects,


given a known object location. Similarly, Omata and Sekiyama [25] proposed an
algorithm for reorienting cylindrical objects.

Shimoga's survey [35] traces the synthesis of grasps from 1981 to 1990 with
respect to achieving force closure grasps. Of the 15 accomplishments listed, only
one is not constrained to apply to polygonal or polyhedral objects. The area which
needs more attention is that of curved objects, thus this thesis focuses on grasping
curved objects.
1.3 Contributions
First, this thesis makes a contribution towards solving the general problem
of one day being able to grasp and manipulate any object with a robot hand by
extending the work of Charlebois [2] to show how curved objects such as spheres,
cylinders, and combinations thereof can be reconstructed and grasped success-
fully.

Second, it presents an appropriate architecture for facilitating the representa-
tion of such an interaction. This architecture is known as the EOS. The EOS design
and development is presented in detail and its analysis confirms the EOS capabil-
ities as a flexible and modular architecture.

Lastly, a novel rating system is introduced to enhance Blackboard based
architectures, such as the EOS. The rating system is used to rate the utility of the
agents of the EOS so as to simplify the controller's agent selection process. By
doing this, it also distributes some of the controller's work among the agents.

1.4 Thesis Layout


The EOS was briefly introduced in the previous sections, but is discussed in detail
in Chapter 2, along with the novel rating system which has been embedded within
it.

Since this work deals with a simulated task, the representation of the simulated
environment, consisting of the robotic hand and objects to be grasped, is the topic
of Chapter 3.

The EOS starts off with the object in contact with one or more fingers and must go
through the Object Information Gathering stage to (partially) identify the shape of
the object to be grasped. Upon object identification, the tip-prehension grasp
begins to evolve within the Evolve to Tip Grasp stage. The manner in which the
object to be grasped is reconstructed and grasped is discussed in Chapter 4.

Experimental results pertaining to the implementation of the EOS to grasp curved


objects with the three-fingered hand are presented in Chapter 5.

Chapter 6 presents an example of an equivalent EOS for the implementation of the


grasp of cylindrical objects by a parallel reconfigurable jaw gripper. In addition,
this chapter also presents some experimental results, verifying the utility of the
EOS within this framework.

Finally, Chapter 7 presents the conclusions drawn from this work and discusses
ideas for future work.
Chapter 2

Enhanced Opportunistic System (EOS)

The EOS is developed to suit the particular needs of being able to simulate the
interaction between a robotic hand and objects in the environment. As a result,
this system's modularity is achieved through the agent representation of the phys-
ical, behavioural, and task oriented components. The novel rating system is used
to implement an evolutionary approach to achieving the desired goal and to
enforce the autonomy of the agents. Finally, the whole system is tied together
through the controller which ensures the continuity of the program execution.

The EOS has been implemented in SICStus Prolog v3.3. SICStus Prolog is a logic
programming language which is easy to learn and use and in which code can be
written with few errors. Since SICStus Prolog v3.3 does not have adequate graph-
ing capability, MATLAB v4.2 was used to visualize the status of the EOS, such as
the modeled hand and the modeled objects. This visualization is very valuable in
seeing what the system is doing at a given time.

2.1 EOS Origins


Hayes-Roth [11] proposed a type of system called the Opportunistic Control Model
which has three component processes:
(i) event-based triggering process
(ii) strategic planning process
(iii) match-based control process
Given a set of agents, the event-based triggering process is used to identify a sub-set
of these agents which could be allowed to execute their actions at the current time.

The strategic planning process is a way of constructing and modifying execution
plans which in turn constrain the actions proposed by the agents. Changing the
degree of the constraints allows the system to shift between control modes.

Lastly, the match-based control process ensures that only the most suited agent is
allowed to execute its actions at the current time.

Each of these three processes is part of the EOS, although in a different fashion
than envisioned by Hayes-Roth [11]. The event-based triggering mechanism is
embedded within each agent and is covered in section 2.3. The strategic planning
process is present through the division of the task into two stages, as explained in
section 2.2 and through the presence of behavioural agents, introduced in section
2.3.2. Finally, the match-based control process is implemented as a novel rating sys-
tem, introduced by Ordean and Payandeh [26], which allows agents to rate their
own confidence and influence the rating of other agents' confidence. The novel
rating system is distributed among all the agents and is discussed in detail in sec-
tion 2.4.

The Opportunistic Control Model is based on the Blackboard system, thus so is its
architecture. As a result, the Opportunistic Control Model architecture consists of
three types of structures. Perception systems are responsible for monitoring the
environment and keeping track of relevant parameters such as the current loca-
tion of the fingers of the robot hand, the current orientation of the hand with
respect to the object, etc. The reasoning system interprets events and decides which
of the actions are to be performed. Action systems cause the physical actions to be
performed.

Section 2.3 presents the details of how the Opportunistic Control Model has
evolved into the Enhanced Opportunistic System to deal with the complex prob-
lem of establishing a stable grasp through evolution.
2.2 Stages
In the EOS, the strategic planning process component is partly satisfied by the
division of the task into stages. In this case there are two stages and the goal of the
first stage must be met before the second stage is entered.

Given an initial contact point between one of the fingers and the object, the system
derives a rough shape and location of the object and then it produces a good tip-
prehension grasp of the object by the robot hand. This is accomplished in two
stages: object information gathering (info), and evolution to tip grasp (grasp).

2.2.1 Stage 1 - Info


The initial conditions of this stage are as follows:
- the object is constrained
- the object type and location is unknown (although one finger of the hand
  is in contact with the object, the location of the object is unknown since the
  center of the object is unknown)
- one of the robot fingers is in contact with the object

The goal of the info stage is to (partially) identify the object, by requiring that at
least one of the estimated shapes in the environment meet a minimum level of
confidence. An estimated shape is one which the robotic hand has already come in
contact with and, thus, has already partially identified. The confidence in an
object is increased by continuously gathering information about it, as in section
4.1.3. The realization of the goal from its initial condition is accomplished by shape
description and shape matching.

Shape description is the method of identifying an object by probing it and using the
contact point data to estimate a quantitative description of the object. This is the
way in which an object is first identified by the robotic hand. Shape matching is the
method of identifying an object by probing it and using the contact point data to
find a match for the object in a database. This method avoids the burden of having
to estimate an approximate shape of the object. Shape matching is used to verify the
objects already (partially) identified through shape description and to identify new
objects not already encountered.

[Flow chart: the info stage loops on Shape Description (ep1, ep1_set, and
post_shape agents) until the confidence criterion is met, then performs Shape
Matching (ep3, finger1/2/3, and wrist agents) before the stage ends.]

Figure 2: Info Stage Flow Chart

Shape description is accomplished through the use of EP1, an exploratory proce-
dure, see sections 2.3.2 ep1 and 4.1. EP1 is executed iteratively at different points
on the object. A confidence value is assigned to the data of EP1 at every execution
based on the shape estimated. These confidence values are reinforced every time
the estimated shape is contacted again. EP1 continues to be executed in different
locations on the object surface until the confidence in one of the estimated shapes
meets a certain minimum value. The minimum value was set at 0.75, although this
value can easily be changed for a higher or lower requirement, as desired. Only
one of the objects is required to meet this criterion, although here too, this can easily
be changed to include all objects.

Once at least one object has been (partially) identified, shape matching is performed
via EP3 to verify the estimated shape of the object. EP3 is a probing enveloping
grasp, see sections 2.3.2 ep3 and 4.2. The contact points between the robotic hand
and the object are sensed during EP3 and these points are then evaluated for a
match with one of the estimated shapes. The info stage is summed up with the
flow chart in Figure 2.
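The info-stage loop of Figure 2 can be sketched as follows. Only the 0.75 threshold comes from the text; the callables and the reinforcement scheme below are placeholder assumptions, not the thesis implementation:

```python
CONFIDENCE_THRESHOLD = 0.75  # minimum confidence value used in the thesis

def info_stage(probe_ep1, reinforce, verify_ep3):
    """Run EP1 repeatedly, reinforcing the confidence of each
    estimated shape, until one shape meets the threshold; then
    hand off to EP3 shape matching for verification."""
    confidences = {}  # estimated shape -> confidence value
    while not any(c >= CONFIDENCE_THRESHOLD for c in confidences.values()):
        shape = probe_ep1()  # EP1 at a new point on the object
        confidences[shape] = reinforce(confidences.get(shape, 0.0))
    return verify_ep3(confidences)
```

With a reinforcement step of 0.2, for example, a shape that is repeatedly re-contacted crosses the threshold after four EP1 executions.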

2.2.2 Stage 2 - Grasp
The initial condition of the second stage is the same as the goal of the first stage,
i.e. at least one object has been (partially) identified. The goal of this stage is to
produce a stable tip-prehension grasp of the object. The realization of the goal
from the initial condition is an evolutionary process guided by the behavioural
agent tip. Agent tip helps coordinate the physical agents finger1, finger2, finger3,
and wrist to achieve the desired effect, see section 2.3.2.

The finger1, finger2, finger3, and wrist agents each take the current situation under
consideration and suggest a possible course of action to transform the enveloping
grasp into a stable tip-prehension grasp. Tip-prehension is used here to denote a
type of grasp where only the fingertips of the robotic hand are in contact with the
object. The grasp stage is illustrated with the help of the flow chart in Figure 3.
[Flow chart: the grasp stage loops through ensuring the tip grasp behaviour (tip),
moving the wrist (wrist), and actuating the joints of finger1, finger2, and finger3
in turn, until the stage ends.]

Figure 3: Grasp Stage Flow Chart

2.3 The EOS Architecture

Like the Opportunistic Control Model, the EOS has three types of structural compo-
nents: the information board, the controller, and the agents.

The information board corresponds to the perception systems of the opportunistic


control model and it holds all environment variables, so that the updated data can
be accessed by all agents and the controller alike.
The controller corresponds to the reasoning system and it is responsible for ensuring
that the program is on track and for choosing the next agent whose actions are to
be executed.

The agents correspond to the action systems and they are the ones which execute
actions when they are permitted.

The architecture of the Opportunistic Control Model is enhanced through a novel


rating system [26], which allows each agent to calculate its own confidence, thus
distributing some of the task of the reasoning system. In turn, the rating system
simplifies the controller's agent selection process. This distribution of some of the
control to the agent resembles a multi-agent type of architecture.

[Diagram: the controller selects among Agent 1 through Agent n; each agent con-
sists of a rating and a body. All agents and the controller exchange data through
the Information Board, which in turn interfaces with the simulated environment.]

Figure 4: EOS Architecture and Data Flow


Figure 4 shows the data flow among the different components which make up the
EOS. The data flow is repeated every cycle. A cycle is one iteration through the
loop shown in Figure 4, starting with the controller and ending when the executed
agent's commands are finished and the information board is updated with the
new status of the system, see section 2.3.1.

2.3.1 Information Board (IB)


The information board, IB, is a repository of all actual and perceived state vari-
ables. These variables are updated every cycle, as they reflect the current status of
the environment. Actual state variables include the exact location and description
of the objects in the environment, because they are needed for environment simu-
lation purposes. In addition, the IB contains perceived state variables such as:
- the current location of the robotic hand
- the current stage of the system
- the weighting factors of the rating system
- the current values of all agent default and opportunistic sub-ratings
- the estimated objects data, i.e. the output of the EP1 and EP3 exploratory
  procedures.

These perceived state variables are known to the agents, controller, and environ-
ment simulation modules, but the actual state variables are only known to the
environment simulation modules.

<environment ib>  <general access ib>
<agent_name1>:<ib>
<agent_name2>:<ib>
. . .
<agent_nameN>:<ib>

Figure 5: Information Board Representation

The IB also contains agent-specific status data which can be accessed by only one
agent. An example is the number of times an agent is called. The IB can be repre-
sented as shown in Figure 5.

The way in which the status variables are stored is always the same; each variable
corresponds to one asserted clause on the IB. This clause consists of two parts, the
label of the variable and its corresponding value, i.e. [label, Value]. The
value can be a number, a string of characters, or a list thereof.

Note that in Prolog, a capitalized word represents a variable value, and a lower
case word represents the name of a predicate/function or the label of a variable on
the IB.

Examples of IB clauses are:

pi value is a number: [pi, 3.141592654]
stage value is a string: [stage, 'info']
weight value is a list: [weight, [0.30, 0.70]]
wrist agent-specific variable, iterations: [wrist:iterations, 5]
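The [label, Value] clause store maps naturally onto a dictionary. The sketch below is an illustrative stand-in for the IB, not the Prolog implementation; the class and method names are invented:

```python
class InformationBoard:
    """Stand-in for the IB: each status variable is one
    label/value pair. Labels such as "wrist:iterations" scope a
    variable to a single agent, mirroring <agent_name>:<ib>."""

    def __init__(self):
        self._clauses = {}

    def assert_on_ib(self, label, value):
        # Asserting an existing label updates it for the new cycle.
        self._clauses[label] = value

    def get_from_ib(self, label):
        return self._clauses[label]

ib = InformationBoard()
ib.assert_on_ib("pi", 3.141592654)
ib.assert_on_ib("stage", "info")
ib.assert_on_ib("weight", [0.30, 0.70])
ib.assert_on_ib("wrist:iterations", 5)
```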

2.3.2 Agents
The agents are responsible for executing actions, given the current stage and sta-
tus of the system. As a result there are several of them, each one specializing in a
different type of action.

All agents have the same structure and are subject to the rating system, see section
2.4, in an equal manner. As seen in Figure 4, each agent is made up of three parts,
each part having a counterpart in the Opportunistic Control Model:
(i) The pre-condition is the triggering mechanism for controlling the partici-
pation of agents during each cycle.
(ii) The rating is a means of determining the value of the agent's utility dur-
ing each cycle. This is part of the overall rating system.
(iii) The body consists of a set of event-action clauses to be executed when
the agent is allowed to do so. Event-action clauses are actions which are
executed when the corresponding event occurs.
Given its structure, an agent is defined as in Listing 1:

/* This is what a comment looks like. */

/* Defining the agent's pre-conditions. */
<agent_name>:pre :-
    <get necessary data from IB>,
    if(<a triggering condition>,
       /* then */
       true,
       /* else */
       fail).
/* Defining an agent's calculation of its own rating. */
<agent_name>:rating(Rating) :- <calculate Rating>.
/* Defining an agent's body. */
<agent_name>:body :- <set of event-action commands>.
Listing 1: Agent Definition
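For readers unfamiliar with Prolog, the same three-part agent, together with the controller's selection of the highest-rated triggered agent, can be sketched in Python. The class and the controller function below are illustrative assumptions, not the thesis code:

```python
class Agent:
    """Three-part EOS agent: pre-condition (trigger), rating
    (self-computed utility), and body (event-action commands)."""

    def __init__(self, name, pre, rating, body):
        self.name = name
        self._pre = pre        # ib -> bool
        self._rating = rating  # ib -> float in [0, 1]
        self._body = body      # ib -> None (mutates the board)

    def pre(self, ib):
        return self._pre(ib)

    def rating(self, ib):
        return self._rating(ib)

    def execute(self, ib):
        self._body(ib)

def controller_cycle(agents, ib):
    """One cycle: among triggered agents, run the highest-rated."""
    eligible = [a for a in agents if a.pre(ib)]
    if not eligible:
        return None
    best = max(eligible, key=lambda a: a.rating(ib))
    best.execute(ib)
    return best.name
```

An agent whose pre-condition fails is simply excluded from the cycle, which is how the event-based triggering process constrains participation.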
In total there are eleven agents, which can be classified according to their use, as in
[8]. The number of agents is not magical in any way; it merely reflects the per-
ceived need and logical allocation by the author. Three separate uses, as shown in
Table 2, have been identified in trying to solve the problem of contact establish-
ment. The purpose of classifying the agents is to show that not all agents are
responsible for executing actions which directly affect the (simulated) environ-
ment. In particular, the behavioural agents are used to satisfy part of the process cri-
teria of the Opportunistic Control Model, from which the EOS is derived.

Table 2: Agent Types

    Agent Type     Agents in Type Class
    Physical       finger1, finger2, finger3, wrist
    Behavioural    ep3, tip
    Task           ep1_set, ep1, post_shape, orientation, end
In order to better understand the complexity of the system, each agent is briefly
described.

finger1, finger2, finger3 agents


Due to their many similarities, these three agents are presented together.

These agents control physical parts of the robotic hand and the actions, which
these agents perform, are a function of the current system stage.

During the info stage, the fingers are required to make contact with the object with
at least two of their links. The algorithm used for this purpose simulates the
underactuated finger presented by Pollard [31]. This means that the finger's joint2
angle is actuated while holding the joint3 angle fixed, until link2 makes contact. Once
link2 makes contact with the object, the joint3 angle is actuated until link3 makes
contact. Ensuring that link2 makes contact prior to link3 guarantees the wrap
grasp, Figure 6.

(a) Initial Condition (b) Joint2 Actuated (c) Link2 Contact

(d) Link2 Contact; Joint3 Actuated (e) Link2 Contact; Link3 Contact
Figure 6: Control of an Underactuated Finger

The control of each finger during the info stage is implemented as shown in the
pseudo code segment for fingerN:

/* Get fingerN's link contact status. */
get_from_IB(fingerN:contact,
    [L1contact, L2contact, L3contact]),
if and(L2contact, not(L3contact))
then /* only link2 has made contact */
    increase(Joint3, NewJoint3),
    assert_on_IB(fingerN:angles,
        [Joint1, Joint2, NewJoint3])
elseif and(not(L2contact), L3contact)
then /* only link3 has made contact */
    assert_on_IB(fingerN:angles,
        [Reset1, Reset2, Reset3]),
    get_from_IB(wrist:opportunistic, WristO),
    increase(WristO, NewWristO),
    put_on_IB(wrist:opportunistic, NewWristO)
elseif and(not(L2contact), not(L3contact))
then /* neither link2 nor link3 has made contact */
    increase(Joint2, NewJoint2),
    assert_on_IB(fingerN:angles,
        [Joint1, NewJoint2, Joint3])
else /* both link2 and link3 have made contact */
    <do nothing>
/* Decrease fingerN's opportunistic sub-rating. */
get_from_IB(fingerN:opportunistic, FingerO),
decrease(FingerO, NewFingerO),
assert_on_IB(fingerN:opportunistic, NewFingerO).
Listing 2: Finger Control During Info Stage
During the grasp stage, the joint angles of the fingers are actuated so that only the
fingertips make contact with the object, as shown in the following pseudo code
segment:

/* Get fingerN's link contact status. */
get_from_IB(fingerN:contact,
    [L1contact, L2contact, L3contact]),
if or(L1contact, L2contact)
then /* link1 or link2 has made contact */
    assert_on_IB(fingerN:angles,
        [Reset1, Reset2, Reset3]),
    get_from_IB(wrist:opportunistic, WristO),
    increase(WristO, NewWristO),
    put_on_IB(wrist:opportunistic, NewWristO)
elseif L3contact
then /* link3 has made contact */
    get_from_IB(fingerN:fingertip, TipContact),
    if TipContact = true
    then /* have fingertip contact */
        <do nothing>
    else /* have link3 contact */
        /* Actuate joint2 angle. */
        decrease(Joint2, NewJoint2),
        assert_on_IB(fingerN:angles,
            [Joint1, NewJoint2, Joint3])
else /* have no finger contact */
    /* Actuate joint2 and joint3 angles. */
    increase(Joint2, NewJoint2),
    increase(Joint3, NewJoint3),
    assert_on_IB(fingerN:angles,
        [Joint1, NewJoint2, NewJoint3])
/* Decrease fingerN's opportunistic sub-rating. */
get_from_IB(fingerN:opportunistic, FingerO),
decrease(FingerO, NewFingerO),
assert_on_IB(fingerN:opportunistic, NewFingerO).
Listing 3: Finger Control During Grasp Stage
wrist agent
The wrist agent is a physical agent and controls the motions of the wrist. The pos-
sible wrist directions of motion are up/down, forward/backward, and left/right.
The wrist agent can participate in execution during either stage, but its actions do
differ from one stage to another.

Table 3: Info Stage Wrist Actions

    Movement Direction    Events for Action
    up          if finger3 joint angles are much more actuated than
                finger1 or finger2 joint angles
    down        if finger1 or finger2 joint angles are much more actuated
                than finger3 joint angles
    forward     if more than 3 contacts exist with the object
    backward    if the wrist made contact with the object
    left        if finger2 joint angles are much more actuated than
                finger1 joint angles
    right       if finger1 joint angles are much more actuated than
                finger2 joint angles

Table 4: Grasp Stage Wrist Actions

    Movement Direction    Events for Action
    up          if wrist is off-center from object to the bottom
    down        if wrist is off-center from object to the top
    forward     if the grasp plane is behind the center of the object
    backward    if the grasp plane is in front of the center of the object
    left        if wrist is off-center from object to the right
    right       if wrist is off-center from object to the left
During the info stage, wrist tries to move so as to help the fingers achieve a wrap
grasp of the object. Consequently, Table 3 lists the directions of motion of the wrist
and the events which drive the movement in any of these directions.

During the grasp stage, the wrist assists the fingers in trying to achieve only finger-
tip contact with the object. The wrap grasp has already centered the hand around
the object by having to envelop the object, thus, during this stage, achieving the
tip-prehension grasp mainly involves forward and backward motion of the wrist.
In the case of complex objects, the fingers may need to move around secondary
features, thus the up/down and left/right motions are used. Table 4 is a summary
of the events which drive the movements of the wrist during the grasp stage.

The wrist determines whether it is off-center from the object in any direction by
estimating the center of the object from its knowledge about the object.
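Table 4's grasp-stage rules reduce to comparing the wrist position against the estimated object center. The sketch below assumes x points forward, y to the right, and z up; these axis conventions and the function itself are illustrative assumptions, not taken from the thesis:

```python
def grasp_stage_wrist_moves(wrist, center, tol=1e-3):
    """Return the wrist movement directions called for by Table 4.
    wrist and center are (x, y, z) tuples; x forward, y right,
    z up (assumed conventions)."""
    moves = []
    if center[2] - wrist[2] > tol:
        moves.append("up")        # wrist is off-center to the bottom
    if wrist[2] - center[2] > tol:
        moves.append("down")      # wrist is off-center to the top
    if center[0] - wrist[0] > tol:
        moves.append("forward")   # grasp plane behind the center
    if wrist[0] - center[0] > tol:
        moves.append("backward")  # grasp plane in front of the center
    if wrist[1] - center[1] > tol:
        moves.append("left")      # wrist is off-center to the right
    if center[1] - wrist[1] > tol:
        moves.append("right")     # wrist is off-center to the left
    return moves
```

A wrist already centered on the object yields no moves, at which point only the fingers need to act.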

ep3 agent
The job of the ep3 agent is twofold: (i) coordinate the behaviour of the physical
agents to achieve a wrap grasp, and (ii) verify the shape of the object in contact.
As a result, ep3 is only active during the info stage. In both cases, the opportunistic
sub-rating is used to control the behaviour of the physical agents, as discussed
below.

It is possible that ep3 acts when the fingers have not all gotten a chance to make
contact with the object. In this case, ep3 resets the opportunistic sub-ratings, see
section 2.4, of the finger agents to 0.85 to ensure that each finger can act so as to
facilitate a contact with the object. This is to ensure that if a finger agent's rating
ends up much lower than the others, the finger does not become stuck there. List-
ing 4 shows the pseudo code for resetting these finger opportunistic sub-ratings.

assert_on_IB(finger1:opportunistic, 0.85),
assert_on_IB(finger2:opportunistic, 0.85),
assert_on_IB(finger3:opportunistic, 0.85).
Listing 4: Keeping the finger Agents Going
If the fingers have all made contact, but the configuration of the wrap grasp has
not yet been achieved, ep3 resets the fingers so that they can try again:

/* Need to stimulate finger agents to create wrap */
/* grasp. */
if <ep3 just started>
then
    <do nothing>
else
    <lock joint3 of fingers>,
    assert_on_IB(wrist:opportunistic, 1.0),
    assert_on_IB(ep3:opportunistic, 0.20),
    assert_on_IB(ep1_set:opportunistic, 0.0)
Listing 5: Stimulating the finger Agents
Once the wrap grasp has been achieved the following pseudo code describes the
second task of ep3:

/* Determine which points are not part of the */
/* estimated objects. */
get_from_IB(est_shape, Est_Shapes),
determine_all_contacts(Contacts),
/* ForeignPts is the output of find_foreign and it */
/* contains a list of points which do not belong to */
/* any estimated shapes. */
find_foreign(Contacts, Est_Shapes, ForeignPts),
assert_on_IB(foreign_pts, ForeignPts),
assert_on_IB(ep1:opportunistic, 1.0),
assert_on_IB(ep3:opportunistic, 0.0),
assert_on_IB(finger1:opportunistic, 0.0),
assert_on_IB(finger3:opportunistic, 0.0),
assert_on_IB(shape_verify, true).
find_foreign is discussed in more detail in section 4.2.
Putting all three pseudo code segments together, the skeleton of ep3 looks like:

if <all fingers have made contact with the object>
then
    if <wrap grasp has been achieved>
    then <verify shape grasped>
    else <stimulate finger agents>
else
    <keep finger agents going>
Listing 7: ep3 Skeleton
tip agent
Similarly to the ep3 agent, the tip agent coordinates the behaviour of the physical
agents through their opportunistic sub-ratings to achieve a fingertip grasp of an
object in the environment. However, contrary to ep3, tip is active only during the
grasp stage. There are two main tasks which the tip agent addresses so that the
goal of the grasp stage may be achieved: (i) the location of the grasp plane, and (ii)
the configuration of the grasp triangle. The orientation of the grasp plane is kept
constant and is oriented in the y-z plane.

Figure 7: The Grasp Triangle

In task (i), tip increases the opportunistic sub-rating of wrist if the grasp plane
does not intersect the center of the object, see section 4.4.

In task (ii), the opportunistic sub-ratings of finger1, finger2, and finger3 are altered
as a function of the offset of the corresponding grasp-triangle angle, Figure 7, from
60°. The 60° angle represents the optimum configuration, see section 4.4. Using
the offset from this ideal value as a way of setting the opportunistic sub-ratings of
the finger agents is one way in which the rating system achieves its opportunistic
nature. The opportunity here is to allow the fingers which really need to act to do
just that. The following pseudo code segment illustrates the task (ii) actions.

rate1 = 1 - absolute(angle1 - 60°)/60°,
rate2 = 1 - absolute(angle2 - 60°)/60°,
rate3 = 1 - absolute(angle3 - 60°)/60°,
finger1:opportunistic = rate1,
finger2:opportunistic = rate2,
finger3:opportunistic = rate3,
Listing 8: Achieving the Desired Grasp Triangle
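For illustration only, the rating rule of Listing 8 can be sketched in Python (this is a hypothetical rendering, not part of the EOS implementation; `triangle_rates` is an assumed name):

```python
def triangle_rates(angles_deg):
    """Listing 8 rule: each finger's opportunistic sub-rating falls off
    linearly with its grasp-triangle angle's offset from the ideal 60
    degrees (1.0 at 60 degrees, 0.0 at 0 or 120 degrees)."""
    return [1.0 - abs(a - 60.0) / 60.0 for a in angles_deg]

# An equilateral grasp triangle rates every finger at 1.0; in a
# 90/30/60 triangle the two offset fingers drop to 0.5.
rates = triangle_rates([90.0, 30.0, 60.0])  # -> [0.5, 0.5, 1.0]
```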
ep1_set agent
Due to its function, ep1_set is only active during the info stage. Its task is to pick
another point at which the robotic hand can perform the EP1. The way in which
the new contact point is sought can be summarized by the following pseudo code
segment:

remainder = remainder_of(#EP1_probes/3),
if remainder = 0,
then <find new point above/below - vertical move>
else <find new point in z-plane - horizontal move>
Listing 9: Choosing the Next Contact Point for EP1 Execution
Unless the number of contacts is a multiple of 3, the next point is found by moving
in the horizontal direction, i.e. the positive or negative y-direction, depending on
where the next contact can be found, see Figure 8(a). If the number of contact
points is a multiple of 3, then the next point is sought in the vertical direction, i.e.
a move in the positive or negative z-direction, depending on where the next contact
point can be found, see Figure 8(b).

The choice of the value "3" as the point at which the hand moves in the vertical
direction is arbitrary.
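As an illustrative sketch (hypothetical Python, assuming `num_probes` is the running EP1 probe count of Listing 9), the decision reduces to a modulo test:

```python
def ep1_move_direction(num_probes):
    """Every third EP1 probe triggers a vertical (z-direction) move;
    all other probes move horizontally (y-direction)."""
    return "vertical" if num_probes % 3 == 0 else "horizontal"

[ep1_move_direction(n) for n in (1, 2, 3, 4)]
# -> ['horizontal', 'horizontal', 'vertical', 'horizontal']
```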

(a) Choosing in Horizontal Direction
(b) Choosing in Vertical Direction
Figure 8: Choosing the Next Contact Point

ep1 agent
The task of ep1 is simply to perform the EP1 exploratory procedure during the info
stage. EP1 is performed at the current point of contact, unless a list of foreign contact
points (from EP3) has already been asserted. In the case of foreign contact
points, EP1 is performed at each of the points in the list and the resultant data, see
section 4.1.3, is asserted on the IB. A foreign contact point is a contact point which
ep3 cannot match to any currently estimated shapes.

The sub-routine for actually performing EP1, ep1:do_ep1, was written by Charlebois
[1] and it is called by ep1 when necessary.

post_shape agent
Once ep1 has been executed, its output is available on the IB. The task of the
post_shape agent is to postulate an object shape from this data, see section 4.1.2, and as
such it is active during the info stage. The postulated shape is appended to the list
of estimated shapes, which resides on the IB. If the shape already exists, then the
confidence in the estimated shape on the IB is increased, as discussed in section
4.1.3.

In addition to processing ep1's output data, post_shape sets the ep1_set opportunistic
sub-rating to 1.00, in case there is a need for more EP1 probing. It also resets its
own opportunistic sub-rating to 0.00. The following is the pseudo code segment
which is used to implement post_shape:

get_from_IB(pshape,EP1_data),
if <there is no EP1_data>
then <do nothing>,
else
    get_from_IB(est_shape,Estimated_shapes),
    translate(EP1_data to New_shapes),
    append(New_shapes to Estimated_shapes
        = New_Estimated_shapes),
    assert_on_IB(est_shape,New_Estimated_shapes),
    assert_on_IB(ep1_set:opportunistic,1.00),
    assert_on_IB(post_shape:opportunistic,0.00).
Listing 10: post_shape Implementation
The estimated shapes are stored under the label est_shape on the IB and the
structure of the data associated with this label is in the form shown below:

where M is the number of estimated shapes and OldContactPoints is a list of
all previous contact points with the object.

Should there be no estimated shapes at the current time, the label is associated
with an empty list. The translate predicate verifies whether the ep1 data
matches a current estimated shape by first searching for a matching shape class,
and then for a matching radius value among the list of estimated shapes, est_shape.
It is assumed that there cannot be more than one shape in the environment
with the same radius value.
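The matching performed by translate can be sketched as follows (a hypothetical Python illustration with an assumed (class, radius) shape encoding, not the thesis predicate); it relies on the assumption above that no two shapes share a radius:

```python
def matches_existing(new_shape, estimated_shapes, tol=1e-6):
    """Return True if new_shape (class, radius) matches an already
    estimated shape: same shape class first, then same radius."""
    cls, radius = new_shape
    return any(c == cls and abs(r - radius) <= tol
               for c, r in estimated_shapes)

estimated = [("sphere", 25.0), ("cyl", 40.0)]
matches_existing(("sphere", 25.0), estimated)  # -> True  (confidence raised)
matches_existing(("sphere", 40.0), estimated)  # -> False (appended as new)
```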

orientation agent
The orientation agent has the task of centering (3D translations) and orienting
(rotating about an axis) the hand in preparation for the wrap grasp. Thus, it is
active during the info stage. When the hand is said to have been oriented about the
object, it is in fact the objects in the environment which have been oriented with
respect to the hand, for ease of computation. In addition, it is important to
know that, given the global x-y-z coordinate frame, the object lies somewhere
within the first quadrant of this coordinate frame and the hand approaches the
object from the "front" with approach vector (x,0,0), as in Figure 9.

Figure 9: Hand Direction of Approach to the Object

Since the curvature of the object has been estimated, the hand is centered with
respect to the curved object in the y- and z-direction. For example, the following is
the code segment used to center the hand about a sphere:
/* [Xo,Yo,Zo,Radius] are the parameters describing */
/* the center and radius of the sphere in contact. */
member([sphere,[Xo,Yo,Zo,Radius]],Objects),
/* Determine translation vector for applying to the */
/* sphere so that it is centered w.r.t. wrist, */
/* i.e. instead of moving wrist, move object. */
eval(Xc+Radius,NewXc),
add([NewXc,Yc,Zc],[-Xo,-Yo,-Zo],Transition),
/* Apply transition vector to Objects in the */
/* environment. */
orientation:move(Transition,Objects,[],NewObjects),
/* Update Environment Modification data on the IB. */
bb_put(env_mod,[Transition,[Xo,Yo,Zo],0,x]),
/* Move the wrist forward (in the x-direction) by */
/* (Radius-13) - this is to give the wrist an */
/* initial push toward the object. */
eval(Wx+Radius-13,NewWx),
/* New wrist relocation target. */
bb_put(wrist_target,[NewWx,Yc,Zc])
Listing 11: Centering the Wrist about the Grasped Object
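Numerically, Listing 11 amounts to the following sketch (a hypothetical Python rendering; it assumes (Xc,Yc,Zc) in the listing is the wrist center (Wx,Wy,Wz), and keeps the listing's constant 13 for the initial push):

```python
def centering_moves(wrist, sphere):
    """Translate the sphere so it is centered w.r.t. the wrist, and
    nudge the wrist forward toward the object."""
    wx, wy, wz = wrist
    xo, yo, zo, radius = sphere
    new_xc = wx + radius                          # eval(Xc+Radius,NewXc)
    transition = (new_xc - xo, wy - yo, wz - zo)  # applied to the objects
    wrist_target = (wx + radius - 13, wy, wz)     # initial push forward
    return transition, wrist_target

centering_moves((0, 0, 0), (100, 20, 30, 40))
# -> ((-60, -20, -30), (27, 0, 0))
```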
There are two main reasons for orienting the hand with respect to the object:

(i) the shape of the primary feature
(ii) the types of secondary features associated with the primary feature

(a) Simple Shape (b) Complex Shape
Figure 10: Simple vs. Complex Shapes

The primary feature is considered to be the object which the robotic hand has
decided to grasp. The secondary features are additional object shapes which have
been identified in the environment and which may obstruct the grasp of the primary
feature. If no secondary features exist, the object is said to be simple, Figure
10(a); otherwise, the object is said to be complex, Figure 10(b).

(a) Front Grasp (b) Front Side Grasp (c) Top Grasp
Figure 11: Types of Grasps

Grasping a simple object is done using a Front Grasp or a Front Side Grasp, Figure
11, depending on the simple object shape.

Grasping an object shape in the presence of secondary features requires some reasoning
as to the location of the secondary features with respect to the primary feature.
A secondary feature is classified as lying above/below/in-front/behind/right/left/other
of the primary feature, as in Table 5.

Table 5: Classifying Secondary Feature Location

  Class     Condition
  above     Zmin > Zo
  below     Zmax < Zo
  in-front  Xmax < Xo
  behind    Xmin > Xo
  right     Ymax < Yo
  left      Ymin > Yo
  other     if all previous conditions fail
In preparation for this classification, a box is drawn around the secondary feature
and the following parameters are calculated: {Xmin, Xmax, Ymin, Ymax, Zmin,
Zmax}. The center {Xo, Yo, Zo} of the primary object must also be determined.
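Tested in the table's row order, the classification can be sketched as follows (hypothetical Python, assuming the above condition mirrors the behind and left rows, i.e. Zmin > Zo):

```python
def classify_secondary(box, center):
    """Classify a secondary feature's bounding box relative to the
    primary object's center, per Table 5 (conditions in row order)."""
    xmin, xmax, ymin, ymax, zmin, zmax = box
    xo, yo, zo = center
    if zmin > zo:
        return "above"
    if zmax < zo:
        return "below"
    if xmax < xo:
        return "in-front"
    if xmin > xo:
        return "behind"
    if ymax < yo:
        return "right"
    if ymin > yo:
        return "left"
    return "other"

classify_secondary((0, 10, 0, 10, 20, 30), (5, 5, 5))  # -> "above"
classify_secondary((0, 10, 0, 10, 0, 10), (5, 5, 5))   # overlapping -> "other"
```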

Once all secondary feature locations are classified, possible approaches are evalu-
ated. A possible approach is one which does not encounter any secondary fea-
tures en route to grasping the main feature.

Given this information, the hand attempts to orient itself so as to find a clear
approach toward the primary feature. The objects in the environment may be
rotated about the z-axis in preparation for attempting a Front Side Grasp. Table 6
illustrates how a complex shape may be grasped given the availability of an
approach. The preference for a certain approach decreases as you move down the
table rows.

Table 6: Grasping Complex Objects

  Approach   Rotation   Grasp
  above      -          Top Grasp
  below      -          can't be done
  in-front   -          Front Side Grasp
  behind     180°       Front Side Grasp
  right      -90°       Front Side Grasp
  left       +90°       Front Side Grasp
  other      -          Front Side Grasp

end agent
The task of the end agent is self-explanatory. end ensures that the program exits
gracefully when deemed necessary by one of the agents or the controller. When
the ending of the program is necessary, the opportunistic sub-rating of end is set to
1.00.
2.3.3 Controller
The controller is responsible for coordinating the system's agents to achieve a
goal. In this case, the goal is that of achieving a stable grasp, i.e. an active closure
grasp. The manner in which the controller coordinates the components is through
the rating system, which is discussed in detail in section 2.4.

The controller starts off with a list of names of all agents in the system. The label
associated with this list is agents:

If an agent is not in the agents list, then it cannot be considered for action. This
list is pruned by testing the pre-condition of every agent in the agents list. If an
agent's pre-condition does not hold, then the agent's name is not included in the
pruned version of the agents list, the agents_pre list. The pre-condition usually
requires that the system be in a given stage. For example, the wrist agent's pre-condition
requires that the current stage be the grasp stage, else the pre-condition
fails:

wrist:pre :- bb_get(stage,S),
    if(S = 'grasp',true,fail).
Listing 12: wrist Pre-condition
Only the following agents are allowed to participate during the grasp stage: {end,
finger1, finger2, finger3, wrist, tip}. Thus, if the current stage is grasp, then the
agents_pre list is:
(agents_pre,[end,finger1,finger2,finger3,wrist,tip])

Once the agent list has been pruned, the controller's decision of which agent to
allow to execute in the current cycle is a function of the rating of each agent. Each
agent calculates its rating as shown in section 2.4 and the list, rating, is assembled
from each agent's name and rating. Given the rating list, the controller simply
chooses the highest rated agent, ExecuteAgent, as in the code segment below:

findall(Rate,member([Agent,Rate],rating),RateSet),
max_value(RateSet,MaxRate),
member([ExecuteAgent,MaxRate],rating),
call(ExecuteAgent:body)
Listing 13: Choosing the Agent to be Executed
In this case, ExecuteAgent is wrist. The order of the agents in the list is not
important unless there is a tie among agent ratings, in which case the first item in
the list with the highest rating is selected.
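The controller's cycle can be sketched as below (a hypothetical Python rendering of the pruning step and the Listing 13 selection; `pre` and `rating` stand in for the controller's queries to the agents):

```python
def select_agent(agents, pre, rating):
    """Prune agents whose pre-condition fails, then pick the highest
    rated survivor; ties go to the agent listed first."""
    agents_pre = [a for a in agents if pre(a)]
    if not agents_pre:
        return None
    max_rate = max(rating(a) for a in agents_pre)
    return next(a for a in agents_pre if rating(a) == max_rate)

agents = ["end", "finger1", "wrist", "tip"]
rates = {"end": 0.10, "finger1": 0.40, "wrist": 0.65, "tip": 0.65}
select_agent(agents, lambda a: True, rates.get)  # -> "wrist" (tie, listed first)
```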

2.3.4 Data Flow

Each agent can access the data residing in the IB, as can the controller. Furthermore,
the controller can request information from the agents, such as the pre-condition
status and the rating of the agent.

The controller's request for an agent's pre-condition, <agent_name>:pre, simply
succeeds or fails, depending on whether the pre-condition is true or false.
However, the controller's request for an agent's rating, <agent_name>:rating(X),
returns the value X.

2.4 Rating System

The Bayesian formalism presents a natural framework for reasoning in the presence of
uncertainty and thus it has been used as the basis of the rating system embedded
within the EOS. Presenting the rating system of the EOS is easier when drawing a
parallel between it and the probabilistic nature of Bayes' Rule (Appendix II).

If we let Aj be an agent, then the rating of agent Aj is P(Aj). The rating of the agent
is also the confidence in the agent. Next, we assume that each agent rating consists
of several sub-ratings, which are combined with the use of weighting factors. If
we let P(Bi) be the weighting of sub-rating Bi, then P(Aj|Bi) is the sub-rating of
agent Aj with respect to Bi. Thus, we can calculate P(Aj) as a function of P(Aj|Bi)
and P(Bi) using Bayes' Rule, as in equation (1).

         n
P(Aj) =  Σ  (P(Aj|Bi) × P(Bi)), where 1 ≤ j ≤ m        (1)
        i=1

The only restrictions implied by equation (1) are as indicated in equation (II-3).
Equation (1) can be used to combine several traditional competing sub-ratings
into one rating.

Halpern et al. [9] have introduced two types of knowledge: externally ascribed
and explicit knowledge. The externally ascribed knowledge is a default type of
knowledge, acquired from the system engineer, while the explicit knowledge is
knowledge gained from the system's sensors. Let us call the first type default and
the second type opportunistic. Thus, let there be two sub-ratings, i.e. n = 2. The
default sub-rating has a fixed, default value and the opportunistic sub-rating is
variable and changes as discussed in sub-section 2.4.2.

Let P(Bd) be the weighting of the default sub-rating and let P(Bo) be the weighting
of the opportunistic sub-rating. Then the default sub-rating of agent Aj is
P(Aj|Bd), and the opportunistic sub-rating is P(Aj|Bo). Thus, the rating for agent
Aj is P(Aj), equation (2).

P(Aj) = P(Aj|Bd) × P(Bd) + P(Aj|Bo) × P(Bo)        (2)

eval(DefaultSubRating * DefaultWeight +
     OpportunisticSubRating * OpportunisticWeight,
     Rating).

Note that while the sub-rating values are agent specific, the weighting factors are
not.

Example
Let there be six agents in the agents_pre list,
and assume that the system has equal confidence in the two sub-ratings. Then, the
weight of the default sub-rating and the weight of the opportunistic sub-rating are:
(weight,[0.50,0.50])

If the default, P(Aj|Bd), and opportunistic, P(Aj|Bo), sub-ratings of the agents Aj
(1 ≤ j ≤ 6) are as indicated in Table 7, then, using equation (2), the total rating of
finger1 is calculated as follows:

P(A1|Bd) × P(Bd) + P(A1|Bo) × P(Bo)
    = 0.60 × 0.50 + 0.20 × 0.50        (3)
    = 0.40
The rating of the other agents can be calculated in a similar fashion.

The total rating of the agents is as shown in Table 7. Obviously, wrist has the high-
est rating among the six agents in this example.

Table 7: Rating System Scenario
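Equation (2) and the example computation above reduce to a weighted sum; a minimal Python sketch (hypothetical function name):

```python
def agent_rating(default_sub, opportunistic_sub, weights=(0.50, 0.50)):
    """Combine the default and opportunistic sub-ratings using the
    (agent-independent) weighting factors, per equation (2)."""
    wd, wo = weights
    return default_sub * wd + opportunistic_sub * wo

agent_rating(0.60, 0.20)  # finger1 in the example -> 0.40
```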

2.4.1 Assigning the Default Sub-Rating


The default sub-rating, P(Aj|Bd), is assigned to each agent at the time of the
agent's creation. It is a fixed value throughout the program execution, as it serves as
the backup value during cycles in which there is no opportunistic sub-rating
available.

2.4.2 Assigning the Opportunistic Sub-Rating


The opportunistic sub-rating, P(Aj|Bo), referred to simply as the sub-rating for the
remainder of this sub-section, may be altered at each cycle in one of two ways:

(i) by the currently active agent
(ii) by the controller

This flexibility allows the sub-rating to take advantage of the appropriate opportunity
to influence the agent's ability to act. The utility of each of these methods is
discussed below.

First, the agent which executes its action, e.g. tip, may alter the sub-rating of any
other agent, e.g. wrist, including its own, depending on the current status of the
system. The current status of the system is defined by the perceived state variables
on the information board. These variables, e.g. GraspPlane, can be used as
measures for the sub-rating.

In the following example, Listing 14, the IB label GraspPlane is associated with a
true/false value which indicates whether the grasp plane is acceptable
during the current cycle, see section 4.4 for the desirable grasp plane location. If the
grasp plane is not acceptable, then the opportunistic sub-rating of wrist is set to a
value that is high relative to the other agents, e.g. 0.7. This allows the wrist to consider a
potential movement and improve the location of the grasp plane.

if(GraspPlane,
    /* then */
    true,
    /* else */
    bb_put(wrist:opportunistic,0.7))
Listing 14: Currently Active Agent Sets Opportunistic Sub-rating
A second way in which an agent's (e.g. orientation's) opportunistic sub-rating may
be modified is by the controller. The controller may choose to do this so as to facilitate
the transition of the system from one stage to another according to a predetermined
plan, or to ensure that the goal of the current stage is being achieved.
For example, the end of the shape description period in the info stage is marked
when one of the estimated shapes has achieved a confidence greater than 75%. Up
to this time the sub-rating of ep3 has been zero, thus in order to ensure that shape
matching begins, the orientation sub-rating is set to the maximum, 1.0. This ensures
that the hand orients itself properly with respect to the objects it has identified,
before allowing ep3 to take over and coordinate the physical agents. The following
code segment illustrates this implementation:

/* if State = 'info' and Prob > 0.75, then ... */
bb_get(ep3:opportunistic,Ep3Rate0),
if(Ep3Rate0 > 0,
    true,
    (bb_put(orientation:opportunistic,1.0),
     ...))
Listing 15: Controller Sets Opportunistic Sub-rating
Although the method of altering the opportunistic sub-rating is unique to every
case, the exact values/percentages by which the alteration is to be performed
need some thought so that the alteration achieves the desired effect.
Naturally, this consideration was given to the EOS sub-rating alterations.

The need for this care in determining the opportunistic sub-rating alteration
value can be illustrated by extending the Example from page 37. If the sub-rating
of agent finger1 is to be increased prior to the calculation of the ratings
shown in Table 7, then the value by which this increase is made should be
large enough to produce a total rating close to 0.65, which is the rating of the
highest rated agent in the table.

Case I: new opportunistic sub-rating = 1.00
therefore, rating(finger1) = 0.80

Case II: new opportunistic sub-rating = 0.60
therefore, rating(finger1) = 0.60

Given Case I, finger1 would be a contender for execution during the upcoming
cycle. However, given Case II, finger1 would need to wait until the rating of wrist
decreases below its own rating before it could be executed. Either of these scenarios
could be used, depending on the desired effect.
Chapter 3

Environment Simulation

As shown in Chapter 2, Figure 4, the EOS interacts with a simulated environment.
This chapter discusses the representation of this simulated environment, which
includes the robotic hand, the objects to be grasped, and the method for detecting
points of contact between the robotic hand and the objects in the environment.
The parameters of the simulated environment are stored on the information
board of the EOS, as discussed in the following sub-sections.

3.1 Robotic Hand Representation


The simulation of the robotic hand is presented first. The kinematic model of the
robotic hand is presented in Appendix I. The hand has three fingers, with
three links per finger, in addition to a cuboid wrist.

The wrist serves as a relative coordinate frame for the fingers, thus its location,
(Wx,Wy,Wz), Figure 12, in the global coordinate frame must be specified. The
coordinates of the wrist center are given the datum label wrist_coord
and are asserted on the IB as follows:

(wrist_coord,[Wx,Wy,Wz])

In addition to the wrist coordinates, it is necessary to know the dimensions of the
wrist. This is done through the wrist_radius parameter, Figure 12:
(wrist_radius,R) /* R = 40 */

Figure 12: Wrist Location & Dimension

Knowing the location of the wrist in the global coordinate frame, the wrist relative
coordinate frame can be specified. This frame of reference is a translation of the
global reference frame to the wrist center, as shown in Figure 13.

Figure 13: Wrist Relative Coordinate Frame

Next, the coordinates of the origin of each finger, with respect to the wrist coordinate
frame, are asserted separately on the IB, by specifying the origin of link 1,
org_f1/2/3, of each finger with respect to the wrist center, Figure 14.

(org_f1,[0,-15,+40])
(org_f2,[0,+15,+40])
(org_f3,[0,0,-40])
Listing 16: Asserting Finger Origins
Figure 14: Finger Link Origins

Knowing the location of the wrist center, (Wx,Wy,Wz), in the global coordinate
frame and the location of the origin of a finger, (X,Y,Z), in the wrist coordinate
frame, the origin of the finger in the global coordinate frame, (Xo,Yo,Zo), can be
calculated as in equation (4).

(Xo,Yo,Zo) = (Wx+X, Wy+Y, Wz+Z)        (4)

Two more pieces of information are needed to determine the coordinate of each
fingertip contact in the global coordinate frame: the length of each link and the
angle of each joint. The link lengths are associated with the datum labels
f1_links, f2_links, and f3_links and the joint angle labels are f1_angles,
f2_angles, and f3_angles:

(f1_links,[20,50,50])
(f2_links,[20,50,50])
(f3_links,[20,50,50])
(f1_angles,[Theta_joint1,Theta_joint2,Theta_joint3])
(f2_angles,[Theta_joint1,Theta_joint2,Theta_joint3])
(f3_angles,[Theta_joint1,Theta_joint2,Theta_joint3])
Listing 17: Asserting Finger Configuration
Given (i) the origin of the finger in the global frame, (Xo,Yo,Zo), (ii) the length of
each link, and (iii) the value of each joint angle, equations (I-4) and (I-7) can be
used to calculate the coordinate of the fingertip of each finger in the global frame.
These coordinates are asserted with the labels f1_coord, f2_coord, and
f3_coord.

(f1_coord,[Tip1X,Tip1Y,Tip1Z])
(f2_coord,[Tip2X,Tip2Y,Tip2Z])
(f3_coord,[Tip3X,Tip3Y,Tip3Z])
Listing 18: Asserting Fingertip Coordinates
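As an illustration only (the actual kinematics are equations (I-4) and (I-7) of Appendix I, which are not reproduced here), a fingertip position for a planar three-link chain can be computed by accumulating joint angles along the links:

```python
import math

def fingertip_in_finger_plane(link_lengths, joint_angles_deg):
    """Accumulate joint angles along a planar 3-link chain and sum the
    link vectors to locate the fingertip relative to the finger origin."""
    x = y = theta = 0.0
    for length, angle in zip(link_lengths, joint_angles_deg):
        theta += math.radians(angle)
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

fingertip_in_finger_plane([20, 50, 50], [0, 0, 0])   # fully extended finger
fingertip_in_finger_plane([20, 50, 50], [0, 90, 0])  # bent at joint 2
```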
The fingertip calculations are performed at the end of every cycle by the respective
finger simulation modules: sim:f1_coord, sim:f2_coord, and sim:f3_coord.
The finger simulation modules are responsible for:

(i) determining current joint coordinates and fingertip location
(ii) determining link or fingertip contact points between the finger and the
objects in the environment
(iii) locking links according to the algorithm for controlling an underactuated
finger, see the finger1/finger2/finger3 agent in section 2.3.2
(iv) ensuring that the finger status is updated after every action.

The finger status consists of the following data, in addition to the items from Listing
16 to Listing 18:

f1_contact keeps track of where the contact points of each link of finger 1 are.
Link1/2/3_contact is a list of contact points, such as:

where M is the number of contact points in the list and

ContactPt = [Shape_name,[<coord. of contact point>]]

In addition, the ContactStatus parameter is used to determine whether a finger
has achieved its goal during a given stage.

f1_joint_coord contains the location of every finger joint and fingertip in global
coordinates. This information may be needed by other agents. Thus, after this
information is calculated once, its results are stored on the IB to be referred to during
the current cycle.

f1_locked maintains the status of the finger joints. If a joint is locked then it may
not be actuated during the current cycle.

The following pseudo code listing defines the finger 1 simulation module. Finger
2 and 3 simulation modules are identical in function and structure.

sim:f1_coord :-
    /* Calculate fingertip coords w.r.t. finger */
    /* frame. */
    eval(<fingertip x-coord> = TempX),
    eval(<fingertip y-coord> = TempY),
    eval(<fingertip z-coord> = TempZ),
    /* Calculate fingertip coords w.r.t. global */
    /* frame. */
    eval(<fingertip x-coord> = Xf1),
    eval(<fingertip y-coord> = Yf1),
    eval(<fingertip z-coord> = Zf1),
    assert_on_IB(f1_coord,[Xf1,Yf1,Zf1]),
    /* Calculate the joint coords of each link. */
    f1_joints(Joint1,Joint2,Joint3,FingerTip),
    /* Assert the location of these joints for */
    /* future use. */
    assert_on_IB(f1_joint_coord,
        [Joint1,Joint2,Joint3,FingerTip]),
    /* Determine contact between link1 (described */
    /* by its end-points: Joint1 and Joint2) and */
    /* Objects in the environment. */
        Objects,[],Link1_contact),
    if Link1_contact
    then Lock1 = true,
    else Lock1 = fail
    /* Determine contact between link2 and Objects. */
        Objects,[],Link2_contact),
    if Link2_contact
    then Lock2 = true,
    else Lock2 = fail
    /* Determine contact between link3 and Objects. */
        Objects,[],Link3_contact),
    if Link3_contact
    then Lock3 = true,
    else Lock3 = fail

    if or(Stage = 'grasp', <at start of info>)
    then
        Lock_joint1 = Lock1,

        if <any links have made contact>
        then /* Finger contact status = true. */

        else /* No link made contact, thus contact */
             /* status = fail. */
            assert_on_IB(f1_contact,
                [fail,Link1_contact,Link2_contact,
                Link3_contact]),
        /* Assert joint status. */
        assert_on_IB(f1_locked,
            [Lock_joint1,Lock_joint2,Lock_joint3])
    else /* Must be in info stage. */
        if or(Link1_contact or Link2_contact)
        then Lock_joint1 = true,
        else Lock_joint1 = fail
        if or(Link2_contact or Link3_contact)
        then Lock_joint2 = true,
        else Lock_joint2 = fail
        if Link3_contact
        then Lock_joint3 = true
        else Lock_joint3 = fail
        assert_on_IB(f1_locked,
            [Lock_joint1,Lock_joint2,Lock_joint3])
        if Lock_joint3
        then /* Finger contact status = true. */
            assert_on_IB(f1_contact,
                [true,Link1_contact,Link2_contact,
                Link3_contact]),
        else /* Finger contact status = fail. */
            assert_on_IB(f1_contact,
                [fail,Link1_contact,Link2_contact,
                Link3_contact])
    /* Determine whether any link3 contact is a */
    /* fingertip contact or just a link contact. */
    if and(Link3_contact, Stage = 'grasp')
    then /* link3 contact during 'grasp' stage. */
        get_from_IB(f1_coord,FingerTip),
        tip_contact(Link3_contact,JointCoords,
            ResultantTipContactShape),
        if ResultantTipContactShape
        then /* Have tip contact. */
            /* Determine distance between fingertip */
            /* and link 3 contact point, to see if */
            /* finger has overlapped too much with */
            /* object. */
            [_,[ContactPt3]] = Link3_contact,
            distance(ContactPt3,FingerTip,TipDist),
            if TipDist > MaxDOverlap
            then /* Fingertip overlap too big. */
                assert_on_IB(tip1,2)
            else /* Have fingertip contact. */
                assert_on_IB(tip1,1)
        else /* Have link contact only. */
            bb_put(tip1,0)
    else /* No link3 contact, or not in 'grasp'. */
        bb_put(tip1,0)
Listing 19: finger1 Simulation Module
In addition to the finger simulation modules, there is also a wrist simulation mod-
ule. It checks whether the wrist has come in contact with any objects in the envi-
ronment and ensures that the current wrist status is updated with every cycle, as
shown in the code segment to follow:

/* Synchronize current location of wrist with */
/* desired/target location of wrist. */
bb_get(wrist_target,[Wx,Wy,Wz]),
bb_put(wrist_coord,[Wx,Wy,Wz]),
bb_get(env,Objects),
/* Determine if wrist is in contact with any */
/* object in the environment. */
contact:wrist(Objects,[],Wcontact),
if(Wcontact = [],
    /* then no wrist contact */
    bb_put(wrist_contact,[fail,[]]),
    /* else have wrist contact */
    bb_put(wrist_contact,[true,Wcontact])).
Listing 20: Wrist Simulation Module
Listing 20: Wrist Simulation Module

3.2 Object Representation


The only other entities in the simulated environment are the objects which are to
be grasped by the robotic hand. These objects are associated with the env label on
the information board and are asserted in list form. The general form is:

(env,[Object1,...,ObjectN]),

where N is the number of objects asserted in the environment. The location of the
objects in the environment is given with respect to the global coordinate frame.

Sphere Objects are described with respect to the location of their center and the
length of their radius:

Object = [sphere,[Xcenter,Ycenter,Zcenter,Radius]]

Figure 15: Cylinder Orientations

However, cylinder Objects require a few more parameters. The center of the cylinder
and its radius are needed, in addition to the cylinder's length and its orientation.
First of all, the cylinder length is restricted to lie only in the x-, y-, or z-direction,
as in Figure 15.

Second, this orientation must be represented among the object parameters.


Assuming that the cylinder is in the z-direction orientation, Figure 16, the
center of the cylinder would be at (Xcenter, Ycenter, Z). In this case, the Z-value is
meaningless, as the cylinder is elongated in this direction. Since the Z-value is
meaningless, setting it equal to 0.0 is one way of noting in which direction
the cylinder lies. Furthermore, the cylinder length extends from z = min to z =
max.

Figure 16: Representing a Cylinder

Now the parameters for the z-orientation of a cylinder are:

Object = [cyl,[Xcenter,Ycenter,0,Radius,Min,Max]]

Similarly, the x- and y-orientations of cylinders are represented as, respectively:

Object = [cyl,[0,Ycenter,Zcenter,Radius,Min,Max]]

Object = [cyl,[Xcenter,0,Zcenter,Radius,Min,Max]]
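Under this convention, the elongation axis can be recovered from whichever center coordinate is zero. A sketch with a hypothetical helper (note the encoding is ambiguous if a genuine center coordinate happens to be exactly 0.0):

```python
def cylinder_axis(params):
    """Recover a cylinder's orientation from its parameter list
    [Xcenter, Ycenter, Zcenter, Radius, Min, Max]: the zeroed center
    coordinate marks the direction of elongation."""
    xc, yc, zc, radius, cmin, cmax = params
    if xc == 0.0:
        return "x"
    if yc == 0.0:
        return "y"
    return "z"

cylinder_axis([30.0, 40.0, 0.0, 10.0, -50.0, 50.0])  # -> "z"
cylinder_axis([0.0, 40.0, 25.0, 10.0, -50.0, 50.0])  # -> "x"
```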

The objects in the environment may be displaced or rotated about the x-, y-, or z-axis.
When this happens, the env_mod label is updated as:

(env_mod,
 [Displacement,[Xo,Yo,Zo],RotAngle,RotationAxis])

where Displacement = a displacement vector
      [Xo,Yo,Zo] = the center of the object which was in contact with the
                   robotic hand at the time of rotation
      RotAngle = the angle through which the objects were rotated
      RotationAxis = the axis about which the objects were rotated.

3.3 Contact Detection


In section 3.1, the robotic hand representation was presented. Recall that the links
of the fingers were modeled as simple line segments, and the wrist was modeled
as a square. As a result, the problem of detecting contacts between the robotic
hand and objects in the environment is reduced to the problem of detecting con-
tact points between line segments or a square and objects in the environment.

3.3.1 Finger Contact


Given the two end-points of a line segment, {(x1,y1,z1),(x2,y2,z2)}, the general
equations of the line in 3D can be easily derived:

y = ((y2 - y1)/(x2 - x1))(x - x1) + y1, where (x2 - x1) ≠ 0
z = ((z2 - z1)/(y2 - y1))(y - y1) + z1, where (y2 - y1) ≠ 0        (5)

Seven special cases must be considered as a result of the line representation in
equation (5):

(i)   x2 - x1 = 0  (yz plane)
(ii)  y2 - y1 = 0  (xz plane)
(iii) z2 - z1 = 0  (xy plane)
(iv)  x2 - x1 = 0 and y2 - y1 = 0  (z axis)                         (6)
(v)   y2 - y1 = 0 and z2 - z1 = 0  (x axis)
(vi)  z2 - z1 = 0 and x2 - x1 = 0  (y axis)
(vii) x2 - x1 = 0 and y2 - y1 = 0 and z2 - z1 = 0  (a point)

Checking for each of the above cases is done with a series of if-then-else
statements.
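Those checks can be sketched as a single dispatch function. This is an illustrative Python version (the function name, return strings, and tolerance are assumptions; the thesis implementation is a Prolog if-then-else chain):

```python
def line_special_case(p1, p2, eps=1e-9):
    """Return which of the seven special cases (i)-(vii) in (6) applies to
    the segment p1-p2, or None if the general line equations (5) can be
    used without division by zero."""
    dx = abs(p2[0] - p1[0]) < eps
    dy = abs(p2[1] - p1[1]) < eps
    dz = abs(p2[2] - p1[2]) < eps
    if dx and dy and dz:
        return 'vii: a point'
    if dx and dy:
        return 'iv: parallel to z axis'
    if dy and dz:
        return 'v: parallel to x axis'
    if dz and dx:
        return 'vi: parallel to y axis'
    if dx:
        return 'i: lies in a yz plane'
    if dy:
        return 'ii: lies in an xz plane'
    if dz:
        return 'iii: lies in an xy plane'
    return None
```
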

Since the equation for each shape class is different, one contact detection algo-
rithm is needed for the sphere and one for the cylinder objects. The equation for a
sphere with center (xc, yc, zc) is:

(x - xc)² + (y - yc)² + (z - zc)² = Radius²                         (7)

However, due to the three possible orientations of the cylinder, there are three
possible equations for the cylinder:

(y - yc)² + (z - zc)² = Radius²  and  xmin ≤ x ≤ xmax
(x - xc)² + (z - zc)² = Radius²  and  ymin ≤ y ≤ ymax               (8)
(x - xc)² + (y - yc)² = Radius²  and  zmin ≤ z ≤ zmax

where {xc, yc, zc} are the coordinates of the center of the cylinder.

In order to determine the points of intersection between a line segment and an
object, the special cases in (6) must be considered prior to attempting equation
(5), so as to ensure that there is no division by zero. In either case, the line
segment equations must then be solved together with equation (7) or equations (8)
to find the points of intersection.

Once the points of intersection between the line segment and the objects are
found, only one of the points is taken as a contact point. It is, however, possible
for one link to be in contact with more than one object shape. Assuming that the
links are rigid, as are the objects, it is not possible to have more than one con-
tact point between a sphere or a cylinder and a line segment. The only reason why
it would appear that there is more than one contact point is that there is some
overlap of the line segment with the object. The amount of overlap between the
finger link and an object in the simulated environment is equivalent to the force
which the link applies to the object in the real world, given that the finger link
and the object are rigid: the bigger the overlap, the more force is applied.
However, this force is applied at only one point, while the overlap situation
produces two points of intersection. The contact point which is chosen is the one
which is nearest to the preceding joint.
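For the sphere case, the intersection-plus-selection step can be sketched by substituting the parametric segment into the sphere equation (7) and keeping the root nearest the segment start. A Python illustration (the function name and the use of the segment's first end-point as the end closest to the preceding joint are assumptions):

```python
import math

def segment_sphere_contact(p1, p2, center, radius):
    """Intersect segment p1-p2 with a sphere and return the single contact
    point nearest to p1 (the end taken to be closest to the preceding
    joint), or None if the segment misses the sphere."""
    d = [p2[i] - p1[i] for i in range(3)]      # segment direction
    m = [p1[i] - center[i] for i in range(3)]  # p1 relative to the center
    # |p1 + t*d - center|^2 = radius^2 expands to a quadratic in t.
    a = sum(di * di for di in d)
    b = 2.0 * sum(d[i] * m[i] for i in range(3))
    c = sum(mi * mi for mi in m) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    # The smaller parameter value inside [0, 1] is the intersection
    # closest to p1, hence the chosen contact point.
    for t in sorted(((-b - sq) / (2 * a), (-b + sq) / (2 * a))):
        if 0.0 <= t <= 1.0:
            return [p1[i] + t * d[i] for i in range(3)]
    return None
```
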

3.3.2 Wrist Contact


The wrist contact detection is quite simple due to the shape of the wrist. The exact
point of contact is not used in this program, thus an algorithm which compares
the wrist coordinates with each object is sufficient.

Given the wrist center, (Wx,Wy,Wz), and wrist radius, R, the following parame-
ters can be calculated:
Wymin = wrist Y minimum
Wymax = wrist Y maximum
Wzmin = wrist Z minimum
Wzmax = wrist Z maximum

A box is drawn around the object to be tested for wrist contact. The resultant box
is the smallest box which can be drawn around this object, i.e. the bounding box,
and it requires the following parameters to be calculated:
MinX = minimum X value for the shape
MaxX = maximum X value for the shape
MinY = minimum Y value for the shape
MaxY = maximum Y value for the shape
MinZ = minimum Z value for the shape
MaxZ = maximum Z value for the shape

Then, contact between the wrist and an object is said to have occurred if the fol-
lowing conditions are satisfied:

MinX ≤ Wx ≤ MaxX
and
{ Wymin ≤ MinY ≤ Wymax or Wymin ≤ MaxY ≤ Wymax }                    (9)
and
{ Wzmin ≤ MinZ ≤ Wzmax or Wzmin ≤ MaxZ ≤ Wzmax }
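Condition (9) can be sketched as below. This Python illustration assumes the wrist square lies in a plane of constant x and that Wymin/Wymax and Wzmin/Wzmax are obtained by offsetting the wrist center by the wrist radius R; the function name is hypothetical:

```python
def wrist_contact(wrist_center, wrist_radius, box):
    """Test condition (9): the wrist square, centered at wrist_center with
    half-width wrist_radius, against an object's axis-aligned bounding box
    given as (min_x, max_x, min_y, max_y, min_z, max_z)."""
    wx, wy, wz = wrist_center
    wy_min, wy_max = wy - wrist_radius, wy + wrist_radius
    wz_min, wz_max = wz - wrist_radius, wz + wrist_radius
    min_x, max_x, min_y, max_y, min_z, max_z = box
    return (min_x <= wx <= max_x
            and (wy_min <= min_y <= wy_max or wy_min <= max_y <= wy_max)
            and (wz_min <= min_z <= wz_max or wz_min <= max_z <= wz_max))
```
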
Chapter 4

Grasping & Object Reconstruction

In order to establish a stable grasp of an object we are faced with two possible
situations:
(i) grasping an object with a priori knowledge
(ii) grasping an unknown object

Grasping the known object becomes an exercise in calculating points on this object
where the fingers could be placed to render the grasp stable [31]. However, in the
case of the unknown object, it is first necessary to, at least partially, reconstruct the
object's shape and determine its location, through haptic exploration. Unfortu-
nately, it is not always the case that an object can be completely known, thus the
second case must be addressed. The challenge is increased by the large variety of
objects, although the class of objects has been constrained to curved surfaces, such
as spheres and cylinders. Due to manipulator size and shape constraints, only a
sub-class of the whole set of objects is suitable for grasping with a particular
robotic hand.

Partial object reconstruction means that the object is explored to get an idea of its
size, shape class, and location. The size and shape class of the object are used to
establish the ability of the robotic hand to grasp the object. Shape classification of
the object, e.g. spherical, cylindrical, etc., is performed to get an idea as to how the
object should be approached for grasping.

Achieving this partial object reconstruction employs the aid of two haptic explor-
atory procedures (EPs): EP1 and EP3. EP1 is the exploratory procedure used for
shape description and EP3 is the exploratory procedure used for shape matching,
as already introduced in section 2.3.

Using EP3 in conjunction with EP1 means that secondary features of the object are
more likely to be detected. Using the example of a mug, the mug is the primary
feature, while the handle is a secondary feature. Detecting secondary features
makes it possible to concentrate on the primary feature, the mug, and focus on
grasping it, while avoiding, if possible, secondary features, such as the handle.
The sections which follow present the details of each of these haptic EPs.

4.1 EP1: Shape Description


The first haptic exploration investigated is that of a rolling finger on the surface of
an object.

4.1.1 Assumptions
Before going into the details of EP1, it is important to keep in mind the assump-
tions which are being made:
- the normal to the object at the point of contact can be derived
- no slipping occurs at the contact point
- the contact point detection sensor has fine to infinite resolution
- the noise in sensory inputs is constant and low
- the fingertips are hard and hemispherical, with a radius of one unit

4.1.2 Background
The finger roll is performed in a cross-hair pattern [3], i.e. in the direction of the
arrows and in the order indicated by the numbers adjacent to the arrows shown in
Figure 17(a). Notice that although the surface is in 3D, the probing is seemingly
done in only two directions, defined as the u- and v-directions.

Although the u-v map has only two dimensions, this map can be overlaid on any
surface, thus causing the u-v map to take the shape of any object. As a result, the
cross-hair pattern may be made up of arcs, instead of straight lines, as shown in
Figure 17(b). Indeed, the key measurements taken during EP1 are the lengths of
these arcs and the angles of rotation which the finger went through to achieve
these arcs. One last requirement is that the finger must roll with constant angular
velocity.

(a) Cross-hair pattern in u-v (b) u-v map on object
Figure 17: Executing EP1

EP1 was studied for the purpose of curvature estimation by Charlebois, Gupta,
and Payandeh [3]. The output which EP1 produces as a result of the rolling
procedure is in the form of two surface curvature parameters, [ku, kv], one for
each rolling direction (u, v); see Appendix III for more details. These parameters
are valid at the point of contact of a finger with the object where this EP was per-
formed. The curvature parameters can be converted to local curvature radii (ru,
rv) of the curved object, as shown in equation (10):

ru = 1/ku  and  rv = 1/kv                                           (10)
where, ru = the radius of curvature at the point of contact in the u-direction
rv = the radius of curvature in the v-direction

Comparing the relative sizes of the two radii makes it possible to deduce whether
the object probed is locally spherical, flat, cylindrical, parabolic, or hyperbolic.

Two curvature parameters close in value and less than a maximum value, max (i.e.
a value less than infinity, which defines the largest possible radius of curvature),
denote a locally spherical area.

{ ru ≈ rv | 0 < (ru, rv) < max }  ⇒  locally spherical              (11)

A plane is categorized by an infinite radius of curvature in both directions.

{ ru ≈ rv | (ru, rv) > max }  ⇒  locally flat surface               (12)

One of the curvature parameters of a cylinder is much higher than the other, since
a cylinder is only curved in one plane. In addition, the larger radius of curvature
is greater than the maximum value, max.

{ ru >> rv | ru > max, 0 < rv }
                                  ⇒  locally cylindrical            (13)
{ rv >> ru | rv > max, 0 < ru }

Paraboloids have one parameter larger than the other, but both parameters are
less than the max value.

{ ru > rv | ru < max, 0 < rv }
                                  ⇒  locally parabolic              (14)
{ rv > ru | rv < max, 0 < ru }

Hyperboloids have curvature parameters of opposite sign. In addition, the magni-
tude of both parameters is less than the max value, but the magnitude of one
parameter is much larger than the other:

{ |ru| > |rv| | |ru| < max, (ru · rv) < 0 }
                                             ⇒  locally hyperbolic  (15)
{ |rv| > |ru| | |rv| < max, (ru · rv) < 0 }
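Rules (11)-(15) can be sketched as a small classifier. This Python illustration uses an assumed tolerance for "close in value" and treats radii above max as effectively infinite; the thresholds and function name are illustrative only:

```python
def classify_local_shape(ru, rv, r_max):
    """Classify a probed patch from its two curvature radii (ru, rv),
    following rules (11)-(15); r_max is the largest radius still treated
    as finite curvature."""
    big, small = max(abs(ru), abs(rv)), min(abs(ru), abs(rv))
    if ru * rv < 0 and big < r_max:
        return 'hyperbolic'        # radii of opposite sign        (15)
    if small > r_max:
        return 'flat'              # both radii effectively infinite (12)
    if big > r_max:
        return 'cylindrical'       # curved in one direction only  (13)
    if big / small < 1.2:          # assumed "close in value" tolerance
        return 'spherical'         #                               (11)
    return 'parabolic'             # one radius larger, both finite (14)
```
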
An additional assumption is made for cylinders. These objects are oriented such
that they are aligned with one of the coordinate axes (x, y or z), as shown in Fig-
ure 18.


Figure 18: Cylinder Orientation

Since EP1 can only be used to determine the radius of curvature at the point of
contact, successive probings in different locations on the object are used to
postulate a shape of the object and its radius of curvature.

4.1.3 Implementation
This exploratory procedure is implemented as an agent of the EOS. The datum
label foreign_pts contains a list of all points resulting from the wrap grasp
which have not been matched to an already identified object. If this list has values
within it, then ep1 uses these points as locations for EP1 executions; otherwise, a
list of current contact points is determined and used for EP1 execution.

ep1:body :-
    bb_get(foreign_pts,Foreign),
    if(Foreign=[],
        /* then */
        contact_pts(Contacts),
        /* else */
        (Contacts=Foreign,
         bb_put(shape_verify,true),
         bb_put(foreign_pts,[]))),
    /* Call C sub-routine by Charlebois [1]: */
    /* input=Contacts; output=Pshape_list */
    ep1:do_ep1(Contacts,[],Pshape_list),
    /* The output of the ep1 agent is asserted on */
    /* the IB under the label pshape. */
    bb_get(pshape,Plist),
    append(Plist,Pshape_list,New_plist),
    bb_put(pshape,New_plist),
    /* ep1 is subject to the rating system. */
    bb_put(post_shape:opportunistic,1.0),
    bb_get(ep1:opportunistic,Ep1Rate0),
    eval(Ep1Rate0*0.90,NewEp1Rate0),
    bb_put(ep1:opportunistic,NewEp1Rate0).
Listing 21: Implementing EP1
The list of postulated shapes, pshape, has the following representation:

(pshape,[[Shape1,Contact_point1,Output_data1],
         [Shape2,Contact_point2,Output_data2],
         ...])

ShapeN is the character of the first letter of the shape postulated. This variable is
not used later on to postulate a shape, as this piece of information must be
deduced; it is simply used as a checking means for the author. Contact_pointN
is the [X,Y,Z] coordinate of the location of the contact at which EP1 was per-
formed, and Output_dataN is the output of EP1.

Since a minimum confidence in at least one shape is required, EP1 is allowed to


execute its actions until the controller decides that the desired confidence is
reached. Once the desired confidence is reached, the controller intervenes to
increase the opportunistic sub-rating of the orientation and ep3 agents, as shown in
the pseudo code below.

In the info stage:

<find the maximum object confidence, Prob>


if(Prob > 0.75,
    then if <a shape has been verified>,
        then assert_on_IB(stage,'pre-grasp'),
        else
            assert_on_IB(orientation:opportunistic,
                         NewOrientationOppRate),
            assert_on_IB(ep3:opportunistic,
                         NewEP3OppRate)
    else <confidences are not high enough>
        <keep status quo and keep trying>).
Listing 22: Meeting the Minimum Object Confidence
This means that EP1 is used to assert postulated shapes on the IB of the EOS and
the confidence in these postulated shapes is increased whenever the robotic hand
comes in contact with the object again.

Confidence in the Estimated Shape


Charlebois [1] investigated the accuracy of EP1 for spheres and paraboloids and
found that the accuracy of the curvature estimation decreases as:
(i) the relative curvature radius of the object w.r.t. to the probe increases
(ii) the rolling arc length decreases
(iii) the angle through which the finger rolls decreases

Thus, Charlebois also found that the accuracy of estimating spheres was much
better than that of estimating paraboloids. Assuming that conditions (ii) and (iii)
above are kept constant, the only difference between estimating spheres vs. cylinders
is the Relative Curvature Radius (RCR) of the object w.r.t. the probe. As already
mentioned in section 4.1.2, cylinders have one radius of curvature approaching
infinity. Thus, one of the RCR values is very large, making one of the estimated
radii of curvature much less reliable. As a result, it becomes evident that the initial
confidence in the output of EP1 must be weighed against the shape which the out-
put data predicts, such that the EP1 output predicting a sphere should be
weighted with higher confidence than the output predicting a cylinder or parabo-
loid.

As already mentioned, repetitive iterations of EP1 on the same object increase the
confidence in the shape of the object as well. To ensure that both factors, the shape
of the object and the reinforcement of the shape of the object, are taken into
account, calculating the confidence in an object shape, consisting of the shape clas-
sification, radii of curvature, and location, can then be done as follows:

C(ShapeX) = Σ (from i = 1 to N) cc × ci(ShapeX)                     (16)

where, C(ShapeX) = confidence in ShapeX
       cc = confidence in having made contact with ShapeX
            (constant in value)
       ci(ShapeX) = confidence in EP1 data postulating ShapeX
       N = number of contacts of the hand with ShapeX
Thus, the confidences in the shapes are calculated in a similar manner to the agent
ratings, allowing two competing parameters to be combined into one.
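The confidence update described above can be sketched as a simple accumulation: a Python illustration in which the constant contact confidence and the per-contact EP1 confidences are supplied by the caller (the function name and argument layout are assumptions):

```python
def shape_confidence(c_contact, ep1_confidences):
    """Combine per-contact EP1 confidences into an overall confidence in
    a shape: the sum over all contacts of the constant contact confidence
    times the EP1 data confidence for that contact. Repeated contacts
    with the same shape therefore reinforce its confidence."""
    return sum(c_contact * c_i for c_i in ep1_confidences)
```
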

4.2 EP3: Shape Matching


EP3 is the second haptic exploratory procedure to be investigated here and it is
used to verify the object shape, radii of curvature, and location of the object to be
grasped. As already mentioned in section 2.3.2, EP3 is accomplished by executing
a wrap grasp of the object with the robotic hand and then using the resultant con-
tact points to mathematically verify the estimated objects.

The wrap grasp is achieved through an evolutionary process coordinated
through ep3, where the agents finger1, finger2, and finger3 are required to make
contact with the object such that link2 and link3 of each finger are in contact with
the object. The wrap grasp finger configuration is illustrated in the profile view of
finger 1 and finger 3 in Figure 19. The algorithm for the evolutionary process of
the fingers was already discussed in section 2.3.2.

Figure 19: Grasping Profile of Finger 1 and Finger 3

Since the robotic hand has three fingers, a set of six contact points is produced
between the hand and the object. Once the six contact points are achieved, each
point is tested to see whether or not it satisfies the equation of the estimated
shape. The predicate responsible for this function is find_foreign. Before going
into the pseudo code for find_foreign, the following two instances of this
predicate illustrate special cases which have been provided for: (i) the case of no
estimated objects to test for and (ii) the case of no contact points between the fin-
gers and the object.

For case (i), the output variable, Foreign, is equated to the input variable Con-
tacts:

find_foreign(Contacts,Est_Shapes = [],Foreign) :-
    Foreign = Contacts.

For case (ii), the output variable, Foreign, is equated to an empty list:

find_foreign(Contacts = [],Est_Shapes,Foreign) :-
    Foreign = [].

The main body of find_foreign is recursive so that the Foreign points can be
assessed for belonging to any of the estimated shapes:

find_foreign(Contacts,Est_Shapes,Foreign) :-
    remove(Shape from Est_Shapes = NewEst_Shapes),
    find_foreign_pts(Contacts,Shape,[],TempForeign),
    find_foreign(TempForeign,NewEst_Shapes,Foreign).
Listing 23: Determining the Foreign Contact Points
Solving for each individual point requires a recursive predicate, find_for-
eign_pts. The recursion halts when all points in the list of contact points have
been tested. The pseudo code for the find_foreign_pts implementation is as
follows:

find_foreign_pts([],Shape,TempVar,Foreign) :-
    Foreign = TempVar.
find_foreign_pts(Contacts,Shape,TempVar,Foreign) :-
    remove(ContactPt from Contacts = New_Contacts),
    determine(ContactPt ∈ Shape,Belonging),
    if(Belonging,
        then NewTempVar = TempVar,
        else NewTempVar = append(ContactPt to TempVar)),
    find_foreign_pts(New_Contacts,Shape,NewTempVar,Foreign).
Listing 24: Matching Contact Points to Estimated Shapes
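The same filtering logic can be sketched in Python, with each estimated shape represented as a membership predicate; this mirrors the two special cases and the recursive body above (the representation and names are assumptions made for illustration):

```python
def find_foreign(contacts, est_shapes):
    """Return the contact points that lie on none of the estimated shapes.
    Each shape is given as a predicate (contact point -> bool) that tests
    whether a point satisfies that shape's equation."""
    if not est_shapes:          # case (i): nothing to test against
        return list(contacts)
    if not contacts:            # case (ii): no contact points
        return []
    foreign = list(contacts)
    for belongs_to_shape in est_shapes:
        # Keep only the points not matched by this estimated shape.
        foreign = [pt for pt in foreign if not belongs_to_shape(pt)]
    return foreign
```
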

4.3 Tip-Prehension
Tip-prehension is a grasp configuration in which each finger makes a single con-
tact with the object and that contact is made with the fingertip. This grasp can be
achieved from many different pre-grasp approaches.

Grasping from above and from the side were the two approaches implemented by
Seitz and Kraft [34]. Their rationale was to grasp from above unless side features,
such as handles, are detected. In this work, a grasp from the front (Front Grasp),
from the side (Front Side Grasp), or from the top (Top Grasp) is used, depending
on the shape of the object; see the orientation agent in section 2.3.2. The transition
to the tip-prehension grasp is the subject of section 4.6.

4.4 Grasp Quality


A good grasp is a stable grasp, and stability can be analyzed by three approaches:
passive form closure, passive force closure, and active (force) closure [40]. For this
work, active closure has been chosen, because this criterion for grasp stability
takes the active joints of the fingers of the robot hand into account; thus, three
fingers can produce a stable tip-prehension grasp of objects with rotational
symmetry such as spheres and cylinders.

Assuming that:
a) ∃ μ at Ci, i: 1, 2, ..., N
b) ∃ { M DOF at Ci | fapp,i is arbitrary }

then two conditions must be met for active closure [40]:

(i)  ∃ { Ci, i: 1, 2, ..., N |
         ([C1, C2, ..., CN] ∉ straight line) ∪ (N ≥ 3) }
(ii) ∃ { fint,i ∈ FC(μ), i: 1, 2, ..., N |
         Σ fint,i = 0 and Σ mi = 0 }

where,
μ = the coefficient of friction
Ci = contact point of the ith finger
N = number of contact points
DOF = Degrees Of Freedom
M = number of DOF
fapp,i = applied force at the ith contact point
fint,i = internal force at the ith contact point
mi = resultant moment at the ith contact point
FC(μ) = friction cone as a function of μ
Since Prolog's numerical computation is limited and far slower than C/C++, it is
desirable to find a simple mathematical equivalent which can be used to check
for the active closure criteria.

Assuming the Coulomb model of friction, the relationship between the normal
and tangential force components applied by a finger at a point of contact can be
expressed by equation (17):

|ft| ≤ μ · fn                                                       (17)

tan(θmax) = μ  or  θmax = tan⁻¹(μ)                                  (18)

The above equations also denote the allowable friction cone, given the coefficient
of friction, μ; see Figure 20.

(a) 3D View    (b) 2D View
Figure 20: Friction Cone
Since the internal forces can only exist in the grasp plane, Figure 21(a) and (b)
show possible directions of these internal forces. Also, recall that this work
focuses on two classes of shapes: spheres and cylinders. Thus, the resultant grasp
profiles of these two shapes are shown in Figure 21(c). In both cases, the grasp
profile happens to be a circle.

(a) Grasping a Sphere (approach from the front)
(b) Grasping a Cylinder (approach from the side)
(c) Grasp Profile (grasp plane πG)
Figure 21: Grasp Profile of Spheres and Cylinders

Since the internal forces must satisfy the friction cone (FC), the internal force in
the grasp plane can only be applied within the FC. Figure 22(a) and (b) show two
configurations of the fingertips of a robot hand around the grasp plane. Since the
resultant internal force must equal zero, the internal forces resulting from each
contact point must meet at one point [14], called the centroid. The FC criterion is
then satisfied in this plane if the centroid lies within the boundaries of the region
of overlapping FCs, which in turn must lie within the grasp triangle.
(a) Configuration A w/ FC     (b) Configuration B w/ FC
(c) Configuration A w/o FC    (d) Configuration B w/o FC
Figure 22: Grasping with and without Friction

In the absence of friction, Figure 22(c) and (d), the only area of the overlapping
FCs is found at the intersection of the normals from the contact points, i.e. at the
centroid. Thus, small changes in the angle of the internal forces can easily render
the grasp unstable. Furthermore, note that in the absence of friction, the centroid
must still lie within the grasp triangle; thus any grasp triangle with an angle
greater than 90°, Figure 22(d), results in an unstable grasp.

Although this work deals with grasping in the presence of friction, the proposal is
to use the case of grasping without friction as a way of finding a more optimal
grasp configuration. The constraint then imposed by this optimization on a three-
fingered grasp is that no grasp triangle angle may exceed 90°:

θi ≤ 90°, i = 1, 2, 3                                               (19)

This constraint is used by the tip agent to set the opportunistic sub-rating of the
finger agents. Since no grasp triangle angle is to exceed 90°, the opportunistic sub-
ratings of the finger agents are a function of the size of the grasp triangle angle at
the corresponding fingertip; see the tip agent in section 2.3.2. By doing this, the
equilateral grasp triangle, i.e. a grasp triangle where all angles = 60°, is the grasp
configuration at which all finger agents have the same opportunistic sub-rating.
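The grasp triangle angles on which these sub-ratings depend can be computed directly from the three fingertip contact points. The normalization used for the sub-ratings below is one hypothetical choice, shown only to illustrate that an equilateral triangle yields equal sub-ratings:

```python
import math

def grasp_triangle_angles(c1, c2, c3):
    """Interior angles (degrees) of the grasp triangle formed by three
    fingertip contact points in the grasp plane."""
    def angle(a, b, c):  # angle at vertex a
        ab = [b[i] - a[i] for i in range(len(a))]
        ac = [c[i] - a[i] for i in range(len(a))]
        dot = sum(x * y for x, y in zip(ab, ac))
        norms = math.hypot(*ab) * math.hypot(*ac)
        return math.degrees(math.acos(dot / norms))
    return [angle(c1, c2, c3), angle(c2, c1, c3), angle(c3, c1, c2)]

def finger_sub_ratings(angles_deg):
    """One *hypothetical* sub-rating choice: reward the finger at the
    largest angle, driving the triangle toward 60-60-60."""
    total = sum(angles_deg)
    return [a / total for a in angles_deg]
```
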

Figure 23 illustrates the case when the grasp profile is a circle. Note that if the
grasp plane lies outside the friction cone, then so do the internal forces.

Figure 23: Geometrical Equivalent

From Figure 23, the perpendicular distance between the grasp plane and the cen-
ter of the circle, d, can be calculated as follows:

d = r · sin α                                                       (20)

but,

αmax = θmax                                                         (21)

Therefore, the maximum allowable value for d can be expressed as a function of
the radius of the circle profile, r, and the coefficient of friction, μ, as follows:

d < r · sin(tan⁻¹(μ))                                               (22)
(22)
The constraint imposed by (19) not only satisfies condition (i), but in conjunction
with equation (22), it also satisfies condition (ii) of active closure. This is the geo-
metrical equivalent proposed to test for the quality of a grasp for the case of
grasping spheres and cylinders, or any object with a grasp profile of a circle.
Given this grasp configuration and assuming that arbitrary internal forces can be
applied at the established contact points, the matter of how much force to apply is
then simple and is not the subject of this thesis. Ji and Roth [14] have presented
one method of choosing the optimal internal forces to be applied, once the grasp
configuration is established.

The two conditions for active closure are verified by the controller during every
cycle.
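Both parts of the geometrical equivalent reduce to a few lines of arithmetic. A Python sketch of the per-cycle test (the function name is assumed; angles are in degrees):

```python
import math

def circle_profile_grasp_ok(angles_deg, d, r, mu):
    """Geometric test for active closure on a circular grasp profile:
    no grasp-triangle angle may exceed 90 degrees, and the grasp plane's
    offset d from the circle center must satisfy equation (22),
    d < r * sin(atan(mu))."""
    if max(angles_deg) > 90.0:
        return False
    return d < r * math.sin(math.atan(mu))
```
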

4.5 EP1 to EP3 Transition


Recall that EP1 is a local probe of the object by one finger and EP3 is a wrap grasp
of the object with all three fingers. Consequently, there must be a coordinated
behaviour between the wrist and fingers of the robotic hand such that this change
in hand posture is accomplished.

This coordinated, dynamic behaviour is synthesized with the help of the rating
system. The behavioural agent ep3 ensures that the desired transition is achieved
by reinforcing the opportunistic sub-rating of the physical agents and itself, every
time it is allowed to act, see ep3 agent in section 2.3.2. Once the wrap grasp is
achieved, ep3 also verifies the estimated shape of the object in contact.

The physical agents are active during this time and allowed to participate in the
competition for execution of their actions. The wrist acts to propel the hand for-
ward (toward the object) so that the fingers can wrap around the object, see wrist
agent in section 2.3.2, and the fingers actuate their joints to encompass the object,
see the finger1/2/3 agents in section 2.3.2.

4.6 EP3 to Tip-Prehension Transition


After executing the wrap grasp of EP3, the robotic hand is required to achieve a
tip-prehension posture. Recall that the tip-prehension posture is a grasp of the
object where only the fingertips of the hand are in contact with the object.

Omata and Sekiyama [25] studied this type of transition in a 2D environment


using cylindrical shapes. They have used a coordinated behaviour between the
hand palm, also called the wrist here, and two fingers of their robotic manipulator
to reorient the objects.

The transition implemented here is coordinated through the use of the rating sys-
tem. This behaviour is tracked and enforced by the behavioural agent tip. The
opportunistic sub-ratings of the physical agents are set or reinforced every time tip
is allowed to act, thus ensuring that the resultant behaviour is going to be
achieved.
Chapter 5

Analysis of the EOS for a Dexterous End-Effector

The previous chapters have introduced the details of the EOS. This chapter pre-
sents the test results of the EOS simulating tip-prehension of unknown curved
objects with a three-fingered robotic hand. The analysis is divided into four areas:
(1) Resultant Behaviour
(2) Effect of Object Shape
(3) Effect of Initial Finger Contact
(4) Effect of Rating System

Please note that the rating system weights used in sections 5.1 through 5.3 are:
default weight = 0.30 and opportunistic weight = 0.70. These are the weights for
which the system was designed. The much larger opportunistic weight ensures a
greater focus on the opportunistic sub-rating than on the default sub-rating.
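Assuming the two sub-ratings are combined as a weighted sum, which is what the relative weights suggest, the per-cycle rating computation can be sketched as (function and parameter names are assumptions):

```python
def agent_rating(default_sub, opportunistic_sub,
                 w_default=0.30, w_opportunistic=0.70):
    """Combine an agent's two sub-ratings into the overall rating used to
    select which agent acts each cycle; the default weights are those
    used in sections 5.1 through 5.3."""
    return w_default * default_sub + w_opportunistic * opportunistic_sub
```
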

5.1 Analysis of the Resultant Behaviour


The EOS successfully achieved a tip-prehension grasp with the help of its eleven
agents, controller, and information board. Allowing one agent to execute its
actions during each cycle, as determined by the rating system calculations,
resulted in an agent execution profile as shown in Figure 24. The total number of
cycles required to successfully grasp the sphere of radius 30 units was 679. The
number of cycles varies as a function of the object shape and size, the initial finger
contact, and the rating system weights, as seen in sections 5.2 through 5.4.

In this case, initial contact is made with finger 3. Looking at the patterns of agent
execution, the two stages of the EOS implementation become very obvious. The
info stage has taken 471 cycles, while the grasp stage required only 208 cycles.
Although the cycle numbers vary as indicated above, the grasp stage does usually
take fewer cycles to complete than the info stage. The reason lies in the complexity
of each of the stage goals. The greatest challenge of the info stage is achieving the
wrap grasp; however, in the grasp stage, achieving the tip-prehension grasp is rel-
atively simple, since the wrist and fingers effectively back up from their current
positions until only the fingertips are in contact with the object. For a detailed
description of the agents' actions, please see section 2.3.2.

[Plot omitted: agent execution profile, agents (ep1, post-shape, ep1-set, ep3,
orientation, finger1, finger2, finger3, wrist, tip, end) vs. Cycle Number]
Figure 24: Agent Execution Profile

The individual stages are discussed in more detail in the sub-sections to follow.
5.1.1 info Stage Behaviour

(a) info Initial Condition    (b) EP1 Done
(c) EP1 to EP3 Transition     (d) EP1 to EP3 Transition
(e) EP1 to EP3 Transition     (f) info Goal Met - EP3 Done
Figure 25: info Stage Behaviour Evolution

MATLAB was used to visualize the cycle-by-cycle development of the robotic
hand and the objects in the environment. The resultant behaviour of the robotic
hand with respect to a grasped object during the info stage is illustrated in the
sequence of diagrams in Figure 25, showing the grasp of a sphere. The diagrams
shown in this figure are instantaneous samples of the sequence of events of the
info stage.

The behaviour shown in Figure 25 looks as though the robotic hand is snapping
at the object while moving ever closer to it, until finally the hand envelopes its tar-
get. Figure 25(a) shows the configuration of the robotic hand and the location of
the sphere at cycle #1, and the next set of diagrams shows the transition of the
robotic hand with respect to the object at cycles #13, #131, #331, #429, and #471,
respectively. This behaviour shows an evolutionary achievement of the stage's
goal and is representative of the robotic hand-object interaction during this stage.

[Plot omitted: agent execution profile vs. Cycle Number, cycles 1-471]
Figure 26: Agent Execution Profile - info Stage

Figure 26 shows the agent execution profile during the info stage. The shape
description duration is quite short, cycles 1 - 13, while the shape matching task lasts
most of the stage, cycles 14 - 471.

Taking a closer look at the first 100 cycles of this stage, Figure 27 shows the differ-
ence in the agent execution patterns of the two object reconstruction methods.
Shape description is a regular alternation among three agents: ep1, post-shape, and
ep1-set. This regular alternation among agents can be viewed as a pattern. The
pattern seen during the shape description period is common to all simulations, irre-
spective of the object's shape or initial finger contact, although the number of such
patterns may vary. Shape matching has more complex patterns, e.g. Pattern A,
involving all other agents active during this stage.

[Plot omitted: agent execution profile vs. Cycle Number, with the Shape
Description and Shape Matching periods and Pattern A marked]
Figure 27: Agent Execution Profile - info Stage Revisited

5.1.2 grasp Stage Behaviour


At the onset of this stage, the hand is wrapped around an object in the environ-
ment and the goal of this stage is to achieve a tip grasp of this object.

Consequently, the hand must accomplish this transition by backing up from the
wrap grasp until the tip grasp is achieved. Indeed, during the grasp stage the wrist
tends to back up from the object as the finger joints reset and make contact with
the object, Figure 28, until a tip grasp is found which satisfies the active closure
criteria presented in section 4.4. Figure 28(a) to (f) shows instantaneous
excerpts of this transition at cycles #471, #474, #529, #562, #617, and #679, respec-
tively. This evolutionary solution to finding a tip-prehension grasp is typical of
the interactions in this stage.

(a) grasp Initial Condition    (b) To Tip Grasp Transition
(c) To Tip Grasp Transition    (d) To Tip Grasp Transition
(e) To Tip Grasp Transition    (f) grasp Goal Met - Tip Grasp
Figure 28: grasp Stage Behaviour Evolution

The manner in which the agents are activated during this stage is shown in the agent execution profile of Figure 29. Between a start-up and an ending period, the regular alternation among the agents finger1, finger2, finger3, wrist, and tip generates Pattern B. This pattern repeats itself throughout the period.

[Agent execution profile over cycles 480-660; the constant Pattern B alternates among finger1, finger2, finger3, wrist, and tip.]
Figure 29: Agent Execution Profile - grasp Stage

As seen in the previous two sub-sections, the resultant behaviour is clearly divided into two stages, info and grasp, which are discernible on the agent execution profile graphs. Furthermore, the two object reconstruction methods incorporated into the EOS during the info stage also have their own distinct agent execution patterns.

5.2 Effect of Object Shape


The type of shapes investigated here are: spheres, cylinders, and combinations of
the two. Simple shapes are shapes which are made up of only one shape, i.e. one
sphere or one cylinder, and complex shapes are shapes which are made of the two
simple shapes. As a result, the first analysis of this section addresses the issue of
comparing simple shapes, while the second analysis addresses the ability of the
system to deal with simple and complex shapes.

5.2.1 Sphere vs. Cylinder

Table 8: Grasping a Sphere vs. a Cylinder

    Sphere               Cylinder
    Radius    Success?   Radius    Success?
    -         -          10.0      Yes
    -         -          12.5      Yes
    -         -          15.0      Yes
    17.5      Yes        17.5      Yes
    20.0      Yes        20.0      Yes
    22.5      Yes        22.5      Yes
    25.0      Yes        25.0      Yes
    27.5      Yes        27.5      Yes
    30.0      Yes        30.0      Yes
    32.5      Yes        32.5      Yes
    35.0      Yes        35.0      Yes
    37.5      Yes        37.5      Yes

The performance of the system, i.e. the number of cycles of the system, is used in
the comparison of the two simple shapes, spheres and cylinders. The radius of
each of the simple shapes is varied in order to determine whether or not the two
shape primitives have different radii ranges which this robotic hand can accom-
modate and whether the radius variation affects the performance of the system in
any way.

Table 8 summarizes the ability of the robotic hand to grasp the two object primitives. Clearly, the range of successfully grasped cylinder radii, 10.0 to 37.5 units, is slightly larger than that of the sphere radii, 17.5 to 37.5 units.

[Scatter plot of the number of info stage cycles (up to about 800) against radius (5-40 units); x = cylinder, o = sphere.]
Figure 30: Sphere vs. Cylinder Performance - info Stage

Figure 30 shows the number of cycles required to complete the info stage for both shape classes: spheres and cylinders. In most cases, the sphere requires more cycles to complete the info stage than does the cylinder. The general trend for both shapes is that the number of cycles decreases as the radius is increased; however, this trend is not monotonic for either shape. The reason it is easier to grasp larger objects is that, provided the object fits inside the robotic hand, larger objects are closer to the fingers and thus require fewer cycles for the fingers to come into contact with them.
[Scatter plot of the number of grasp stage cycles (up to about 250) against radius (5-40 units); x = cylinder, o = sphere.]
Figure 31: Sphere vs. Cylinder Performance - grasp Stage

[Scatter plot of the total number of cycles against radius (5-40 units); x = cylinder, o = sphere.]
Figure 32: Sphere vs. Cylinder Performance
Figure 31 presents the comparison of the number of cycles for the grasp stage. Whereas in the info stage the sphere shapes needed more cycles, here the number of cycles exhibits no such trend. Furthermore, the general trend for each of these shapes is still to decrease in a non-monotonic fashion, although this trend is not as pronounced during this stage.

Adding the number of cycles for the info and grasp stages produces the graph in
Figure 32. Since the number of cycles in the info stage were generally greater than
those in the grasp stage, it is not surprising to see that the trends exhibited in Fig-
ure 32 resemble those of the info stage.

To summarize: (i) achieving a tip grasp of a sphere requires more cycles than achieving one of a cylinder, (ii) the number of cycles required to complete the info stage is generally greater than that required to complete the grasp stage, and (iii) increasing the radius of either shape generally decreases the number of cycles required.

5.2.2 Simple vs. Complex


There are two assumptions made in this analysis:
(i) the contact between the object and the robotic hand is made with the
same finger
(ii) the contact is made at approximately the same location

The reason for "approximately" is seen in Case V in Figure 33 and Figure 36. The
comparison among different objects is made with respect to the number of cycles
required to complete the tip-prehension grasp.

Sphere-Based Objects
In total, four sphere-based objects are compared to the case of a simple sphere. A sphere-based shape means that the primary feature is a sphere, to which a secondary feature, a cylinder, is attached. The cylinder can be attached to the back of the sphere, the left side of the sphere, the top of the sphere, or the front of the sphere. It is assumed that finger 2 is in contact with the sphere or, as in Case V, the cylinder, at the y-z plane center of the object. Recall that the robotic hand grasps spheres head-on unless secondary features are detected.
Case I: Sphere Only

Case II: Cylinder in Back    Case III: Cylinder on Left

Case IV: Cylinder on Top Case V: Cylinder in Front

Figure 33: Sphere-Based Objects

The number of cycles required for each of the five cases in Figure 33 is shown graphically in Figure 34. Clearly, the performance of the EOS varies among these configurations, especially for Case II, Case IV, and Case V.
[Bar chart of Info and Total cycle counts for Case I through Case V.]
Figure 34: Sphere-Based Objects Performance

A possible source of this variation is the location of the cylinder, the secondary
feature, with respect to the sphere, the primary feature. Figure 35 is a sequence of
instantaneous excerpts which show the point at which the secondary feature is
discovered, if it is discovered at all. These excerpts help identify the reason for the
differences in performance among the sphere-based objects and in comparison
with the simple sphere.

Note that the views for each of these images were chosen to best emphasize the
discovery, or lack thereof, of the secondary feature. Also, the cylinder shading has
been chosen to ensure a proper view of it.
[Five 3D snapshots showing the point at which the secondary feature is discovered:
Case I: Sphere Contact (Fingers 1 and 2)
Case II: Cylinder Contact (Finger 3)
Case III: No Cylinder Contact
Case IV: Cylinder Contact (Fingers 1 & 2)
Case V: Sphere Contact (Finger 2)]
Figure 35: Encountering Secondary Features of Spheres

In Case II and Case IV, the secondary feature is encountered as the wrap grasp is attempted; however, the resultant numbers of cycles differ due to the point at which the encounter takes place. In Case II the cylinder is encountered by finger 3, well into the shape matching period, at cycle #154. However, in Case IV the cylinder is encountered by finger 1 and finger 2 at the beginning of the shape matching period, at cycle #15.

Due to the location of the secondary feature in Case III, the cylinder is not encoun-
tered at all, thus the number of cycles for each of the stages equals those of Case I.

Lastly, in Case V the secondary feature is encountered at the beginning of the shape
description period, as it is the point of initial contact. However, due to the small
radius of the cylinder, both shapes are discovered during the period of shape
description.

Cylinder-Based Objects
In addition to the sphere-based objects, Figure 36 shows four cylinder-based
objects investigated. A cylinder-based shape is a shape which has a cylinder as its
primary feature and a sphere as its secondary feature. As in the case of the sphere-based objects, finger 2 is assumed to be in contact with the cylinder or, in Case V, with the sphere.

Given the cylinder-based objects in Figure 36, the performance of the system for
grasping each combination is as shown in Figure 37. As opposed to spheres, cylinders are by default grasped head-on after rotating the wrist, or in this case the object, by 90°. If secondary features are encountered, then alternate rotation schemes are contemplated; see orientation in section 2.3.2. In this simulation, the object is rotated instead of the hand, for ease of computation.
Case I: Cylinder Only

Case II: Sphere in Back    Case III: Sphere on Left

Case IV: Sphere on Top Case V: Sphere in Front

Figure 36: Cylinder-Based Objects

Similarly to the sphere-based objects, the number of cycles for grasping each
object varies as a function of the location of the secondary feature, the sphere, with
respect to the primary one, the cylinder. In addition, no data is available for Case III, since this particular object configuration proves to be an impossible challenge for the program, as discussed below.

Figure 38 shows the instances at which the secondary features of the cylinder
were encountered or not. These figures facilitate the understanding of the differ-
ences among these object configurations.

[Bar chart of cycle counts, including the grasp stage, for Case I through Case V; no bars for Case III.]
Figure 37: Cylinder-Based Objects Performance

Case I illustrates a cylinder being grasped in the absence of secondary features.

In Case II and Case III, the secondary feature is not discovered until the wrap grasp is attempted. In Case II the sphere encounter is successful, but has a higher overall cycle requirement than the simple case, Case I. However, in Case III the sphere encounter makes it impossible to progress. The secondary feature blocks the forward motion of the wrist, as the fingers cannot move around the unknown impediment. The program assumes that if the object to be grasped is too big, it was already detected, and the program would have picked a different object to grasp or would have caused the end agent to be executed. However, the secondary feature in this case effectively increases the size of the object to be grasped and exposes a weak point in the program.

In Case IV the secondary feature is not detected, thus the performance of this case
is similar to that of Case I.
Lastly, in Case V the secondary feature is initially in contact with the second finger,
thus it is detected during the shape description period. Consequently, the wrap
grasp is planned around the sphere.

[Five 3D snapshots showing the secondary-feature encounters:
Case I: Cylinder Contact (Fingers 1, 2 & 3)
Case II: Sphere Contact (Finger 3)
Case III: Sphere Contact (Finger 3)
Case IV: Cylinder Contact (Fingers 1, 2, & 3)
Case V: Cylinder Contact (Finger 2)]
Figure 38: Encountering Secondary Features of Cylinders

In conclusion, the ability of the robotic hand to grasp an object varies according to
the shape of the object. In this case, the location of the secondary feature with
respect to the primary one, plays a very important role. Furthermore, the proba-
bility of success of the grasp is increased if the complex object is more precisely
reconstructed early on, during the shape description period.

5.3 Effect of Initial Finger Contact


The initial finger contact refers to the finger which is in initial contact with the
object, but it also refers to the initial contact location of the finger on the object.
These two parameters are dealt with separately in the sub-sections to follow.

5.3.1 Finger 1 & Finger 2 vs. Finger 3


In the analysis of the effect of a particular finger in contact with an object, it is
assumed that the contact location is constant, i.e. all initial contacts occur at posi-
tion #1, Figure 39.

The results of varying the initial finger in contact with each of the two object primitives are shown in Table 9.

The results in Table 9 clearly indicate that the finger which makes initial contact with the object plays no role in the performance of the system or its ability to grasp an object.

Table 9: Varying Finger in Contact

                 Sphere                       Cylinder
    Finger in    info     grasp    Total     info     grasp    Total
    Contact      Cycles   Cycles   Cycles    Cycles   Cycles   Cycles
    Finger 1     471      208      679       418      217      635
    Finger 2     471      208      679       418      217      635
    Finger 3     471      208      679       418      217      635
5.3.2 Contact Location
In the next analysis, the finger contact location is varied, while:
    the finger in contact remains constant, finger 2
    the shape of the object is constant, shape = simple sphere
    the radius of the object is constant, radius = 30 units

[Front (y-z plane) view of the sphere with the numbered contact position reference map.]
Figure 39: Contact Position Reference

Furthermore, Figure 39 shows the reference position numbers, which have been
assigned to each initial contact location. Although this is a 2D view of this map,
the contact points are points in 3D, with corresponding x-coordinates. This refer-
ence position map can be superimposed on spheres and cylinders.

Table 10: Finger 2 Contact Position Analysis

    Position    info Cycles    grasp Cycles    Total Cycles
    1           471            208             679

As before, the number of cycles is used as the performance measure for comparison purposes. Table 10 shows the summary of these simulations.

Obviously, the change in contact location has some impact on the performance of
the system. The positions experiencing differences in performance have been
highlighted in the table and in Figure 40, to help visualize the affected areas. The
rest of the positions are referred to as the "majority".

Figure 40: Finger 2 Contact Positions

Before trying to explain the reasons for the differences at these positions, it helps
to note the results for doing these simulations with finger 3 initially in contact.
The finger 3 initial contact simulations have produced an entirely uniform system
performance where every info and grasp stage at every position required a consis-
tent number of cycles, Table 11.

Table 11: Finger 3 Contact Position Analysis


info Cycles grasp Cycles Total Cycles
471 208 679

The main difference between allowing finger 2 vs. finger 3 to be initially in contact with the object is the physical location of the origins of these fingers with respect to the other fingers; see Figure 14 in section 3.1.
The vertical distance between fingers 1/2 and finger 3 is 80 units, while the hori-
zontal distance between finger 1 and finger 2 is only 30 units. Given that the
radius of the sphere used for this simulation is 30 units, that means that if finger 1
or finger 2 was initially in contact with the object, then the other finger may come
in contact with the object as well during the shape description period of the info
stage. However, if finger 3 is initially in contact with the object, fingers 1 and/or
finger 2 have no chance of also coming into contact with the object during shape
description.

[Agent execution profile for cycles 0-100, with the shape description and shape matching periods marked.]
Figure 41: Position 2a Agent Execution Profile - info Stage

To verify this hypothesis, let us first take a look at the circumstances surrounding positions 2a, 3a, 4f, 5f, 4g, and 5g. In all these cases the number of cycles for the grasp stage does not differ from that of the majority of positions, while the number of cycles for the info stage is slightly lower than that of the majority. Figure 41 shows the agent execution profile for position 2a during the early part of the info stage, while Figure 42 shows the equivalent status of position 1. As seen, the number of pattern repetitions during the shape description period at position 2a is one less than at position 1. In addition, each pattern spans three agent executions, which explains the differences for positions 2a, 3a, 4f, 5f, 4g, and 5g.

[Agent execution profile for cycles 0-100, with the shape description and shape matching periods marked.]
Figure 42: Position 1 Agent Execution Profile - info Stage

The reason for the difference in system performance at positions 3f and 3g remains
to be determined. Note that the number of cycles at each of these positions is iden-
tical and so is their horizontal placement in Figure 40. Obviously, there is a rela-
tion between these two positions.

A first look at the agent execution profiles for position 1 vs. 3f, Figure 43, discloses
nothing. A close look at the first part of the info stage, Figure 44, shows a similar
case of a double contact as for positions 2a, 3a, 4f, 5f, 4g, and 5g, but this does not
account for the big variation.
Figure 43: Comparing Agent Execution Profiles

Figure 44: Comparing info Stage Agent Execution Profiles
The only alternative is the relative positioning of the hand with respect to the
object. This is controlled at the onset of the shape matching period by the orienta-
tion agent. This particular configuration evidently enables the faster achievement
of the goals of each stage.

The first set of simulations, section 5.3.1, has shown that the finger which initially makes contact plays no role in the system performance. Section 5.3.2 has shown that the location of the initial finger contact does not matter either, unless the position of this finger facilitates a double finger contact with the object during the shape description period, in which case the performance is better. Thus, although the initial finger in contact does not matter when varied on its own, when varied in conjunction with the initial contact point it can facilitate such dual contacts, thus influencing the performance of the system.

5.4 Effect of Rating System


The following is an analysis of the rating system as a means of coordinating the execution of all the agents in the system. Given the same initial contact conditions, the opportunistic weighting factor (Wo) and the default weighting factor (Wd) are varied from 0.0 to 1.0, while keeping their sum equal to 1.0, for the reasons presented in Appendix II. In addition, the object used was a sphere of radius 30 units.

If the EOS produces a stable tip-prehension grasp, the system is considered to be stable, and the outcome of the simulation is denoted by a "✓". If the agents become caught in an infinite loop, or if the program execution is terminated inappropriately, the system is considered to be unstable, and the outcome of the simulation is marked with an "✗". An inappropriate system termination is one where the termination is caused by an execution of the end agent only because all other agents have failed to set the system on its proper course. The results are as shown in Table 12.
Table 12: Varying Weighting Factors

    Default       Opportunistic               Number of Cycles
    Weight, Wd    Weight, Wo      Outcome     info      Total

The item to keep in mind is that the EOS was designed to work with a default weight of 0.30 and an opportunistic weight of 0.70. The results in Table 12 show that the system is unstable for:

    {(Wd, Wo) | Wo ≤ Wd or Wd = 0}                                  (23)
This shows the importance of both types of sub-ratings to the rating system and
the EOS. Neither sub-rating is self-sufficient without the other. The default sub-
rating serves to ensure that in the absence of opportunistic sub-ratings, the pro-
gram can continue. On the other hand, the opportunistic sub-rating ensures the
dynamic and complementary behaviour of the agents. As long as the default
weighting factor is less than the opportunistic one, the opportunistic sub-rating
can perform its job without being hindered by the default sub-rating. The rating
system thus combines the knowledge of each of these sub-ratings in order to
achieve a stable state.
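A minimal sketch of this combination rule, assuming per-agent sub-ratings in [0, 1] and hypothetical agent data (the actual EOS is implemented in Prolog, so both function and agent names below are illustrative only):

```python
def combined_rating(default_rating, opportunistic_rating, wd=0.30, wo=0.70):
    """Convex combination of the two sub-ratings; wd + wo must equal 1."""
    assert abs(wd + wo - 1.0) < 1e-9
    return wd * default_rating + wo * opportunistic_rating

def next_agent(agents, wd=0.30, wo=0.70):
    """Pick the agent with the highest combined rating.
    `agents` maps agent name -> (default_rating, opportunistic_rating)."""
    return max(agents, key=lambda a: combined_rating(*agents[a], wd, wo))

# Illustrative ratings: the opportunistic sub-rating dominates the choice.
ratings = {"finger1": (0.5, 0.2), "wrist": (0.4, 0.9), "tip": (0.3, 0.1)}
print(next_agent(ratings))  # wrist: 0.30*0.4 + 0.70*0.9 = 0.75 is highest
```

With Wd < Wo, an agent with a strong opportunistic sub-rating wins even against a higher default sub-rating, which is the dynamic behaviour described above; with Wd = 0 the default sub-rating can never break a tie when all opportunistic sub-ratings are absent.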

Looking at the region of stability, the system performance changes with varying
weighting factors.

[Plot of the info, grasp, and total cycle counts against the opportunistic weight (= 1 - default weight).]
Figure 45: System Performance for Various Weighting Factors

As the weights vary, so do the number of cycles required for each of the stages.
The total number of cycles starts off high, total = 932, and decreases monotonically
until Wo = 0.75 (and Wd = 0.25). Beyond this point, the numbers oscillate up and
down. In addition, the variation of the total number of cycles is mostly due to the
variation in the number of cycles during the info stage.

This analysis also provides a means for determining the optimum weights for the
EOS rating system. Assuming that it is desirable to have the least number of
cycles for both stages, the optimum weights are: Wo = 0.80 and Wd = 0.20.
5.5 Discussion
The EOS implementation for grasping curved objects with a dexterous end-effector has been analyzed in the previous sections. The three main factors which affect the resultant behaviour of the system have been identified: the object's shape, the initial finger contact, and the rating system. Varying these parameters one by one has revealed the manner in which they affect the system.

Agent Execution Profile Study


Agent execution profiles have been used to identify and illustrate the two stages of the system. The interesting result is that the agent execution profiles of two runs with different cycle requirements look very similar, Figure 43, making it hard to determine any reason for the difference in their cycle requirements. This observation confirms the uniform resultant behaviour of the system, irrespective of the object shape, initial finger contact, or rating system.

Object Shape
The object shape, when studied at the simple level, has shown that spheres, with their 3D curvature, require more execution cycles than do their counterparts, the cylinders, which are curved in only 2D. This finding holds in spite of the lower confidence allotted to the EP1 output predicting a cylindrical shape vs. a spherical shape.

When each of these shape primitives has been enhanced by a secondary feature,
the sphere-based objects vary significantly in cycle requirements, Figure 34, while
the cylinder-based objects seem to have a much more stable cycle requirement.
Again, it seems that the degree of the object's curvature affects the performance of
the system.

The analysis of grasping simple shapes of varying radii also showed that both
shape classes have minimum and maximum boundaries on the size of radius
which this robotic hand configuration can accommodate. Both maxima are the
same, while the minima boundaries vary, with the cylinder's minimum being
lower than that of the sphere.
The boundaries are a direct result of the robot hand configuration. Since the vertical distance between fingers 1/2 and finger 3 is 80 units, and since the fingers do not exceed the vertical limit of their link1 origins, this hand can only grasp objects less than 80 units tall. In fact, if an object's size is within 5% of this upper limit, the object is deemed not graspable and the program ends. Obviously, this means that the radius of the sphere or cylinder is responsible for the radius maximum, and it is why the upper boundaries of spheres and cylinders are the same.

If the total height of a complex object exceeds the maximum and the hand has not
been able to identify the features with which it is in contact, then this may cause
the hand to stagnate and never achieve a grasp of the desired object. Such a failure
was seen in Case III of the cylinder-based object. The radius of the cylinder was 30
units and the radius of the sphere was 10 units, thus the combined height of this
object is 80 units. Since the system did not detect the sphere at the side of the cyl-
inder during shape description, the cylinder was grasped as usual, from the side.
Unfortunately, this resulted in a configuration which was not graspable with the
given robotic hand.

Furthermore, since the horizontal distance between finger 1 and finger 2 is equal to 30 units, and since simple spheres are grasped from the front, spheres with a diameter equal to or less than 30 units cannot be grasped. However, since simple cylinders are grasped from the side, the length of a cylinder, in addition to its diameter, is a parameter which determines its graspability. Thus, a cylinder with a length greater than 30 units may be grasped even when its radius is smaller than 15 units; however, eventually the system will fail as the cylinder's radius becomes too small for this robotic hand to grasp.
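The size boundaries described above for simple spheres can be collected into a small check. This is an illustrative Python sketch of the stated geometry (80-unit vertical span, 30-unit finger spacing, 5% margin), not code from the thesis:

```python
# Hand geometry, in the simulation's units, as stated in the text.
VERTICAL_SPAN = 80     # fingers 1/2 to finger 3
HORIZONTAL_SPAN = 30   # finger 1 to finger 2
MARGIN = 0.05          # objects within 5% of the upper limit are rejected

def sphere_graspable(radius):
    """A simple sphere is grasped from the front: its diameter must exceed
    the finger-1/finger-2 spacing and stay clear of the vertical span."""
    diameter = 2 * radius
    return HORIZONTAL_SPAN < diameter < VERTICAL_SPAN * (1 - MARGIN)

print(sphere_graspable(15.0))  # False: diameter 30 does not exceed 30
print(sphere_graspable(30.0))  # True: 30 < 60 < 76
print(sphere_graspable(38.0))  # False: diameter 76 is within 5% of 80
```

This check reproduces the sphere range of Table 8: radii from 17.5 to 37.5 units pass, while smaller radii and anything within 5% of the 80-unit limit fail.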

Changing the hand configuration and/or allowing the hand to open wider would
obviously change the values of the maxima and the minima boundaries.

Initial Finger Contact


The initial finger in contact with the object seems to be irrelevant to the system cycle requirement; however, when varied together with the initial contact location, the data seemed to indicate otherwise. This is one example which illustrates the importance of first analyzing the effect of each parameter individually, as the roles of specific parameters are hard to discern when varied in combination. The variation of both parameters can produce an effect which has been called a "double finger contact". A double finger contact occurs when two fingers make contact with the object during the shape description period of the info stage. This contact scenario can drastically decrease the system cycle requirement; perhaps this can be taken advantage of in future work. When the double contact scenario does not occur, the system cycle requirement is invariant.

Rating system
Lastly, the system was designed with the weighting factors set to [0.30, 0.70] for the default and opportunistic sub-ratings, respectively. The desire was to emphasize the opportunistic sub-rating much more than the default sub-rating, so that the dynamic behaviour of the system could be achieved in fewer cycles. The expectation was to have the best performance at this weighting combination. However, varying these weights has shown that the optimum weights for this system are [0.20, 0.80]. It was also discovered that, as long as the opportunistic weight is greater than the default weight, the rating system can do its job, although with varying system performance.

Of the three factors which played a role in determining the cycle requirement of
the system, the object shape, with respect to the radius of the main feature, had
the most influence on the system performance. Varying the radius of the sphere
and cylinder from 17.5 units to 37.5 units caused the system cycle requirement to
decrease from about 900 to about 400 cycles, see Figure 32.

The factor which least affected the cycle requirement is also tied to the shape of
the object, and it is the location of the secondary feature with respect to the pri-
mary one. Hardly any change is seen in the cylinder-based objects, Figure 37, and
the sphere-based objects exhibit a cycle range of less than 200, Figure 34.
Chapter 6

Analysis of the EOS for a Parallel


Reconfigurable Jaw End-Effector

The EOS presented in the previous chapters pertained to the grasp achievement of
objects by a robotic hand. This chapter presents an example of a simple EOS
implementation in grasping with a novel end-effector - the parallel reconfigurable
jaw gripper designed by Hong and Payandeh [12][13].

6.1 Overview
The modeled end-effector is shown in Figure 46. This end-effector has two parallel
jaw grippers, each one enhanced with a reconfigurable rotary disc.

Figure 46: Parallel Reconfigurable Jaw Gripper [12]


This disc is called a rotary disc, because it can rotate about the axis of motion of
the parallel jaws, Figure 47.

Figure 47: Parallel Jaw and Rotary Direction of Motion


The reconfigurable feature of the rotary disc stems from the spring-loaded pins,
which are embedded within it, Figure 47. These pins make contact with the object
when the parallel jaws close in on the object, passively accommodating the object
shape, and when the discs rotate, actively searching for edge contacts with the
object. Two opposing rotaries, each with their own pins, can locate, grasp, immobilize, and reorient an object [12]. Being spring-loaded, the pins can take on a variable length so as to be able to make edge contact as well as face contact with an object face, Figure 48. Edge contact is made when a pin pushes against the outer edge of an object face, and face contact is made when a pin pushes against the face of the object. These pins act as additional constraints for the gripper, thus enabling the jaw gripper to produce a force-closure grasp [21] of the object.

(a) Frontal View    (b) Side View


Figure 48: Face and Side Contact of Pins
The pins are arranged in patterns suited to locking on to an object and immobiliz-
ing it. Note that the two facing jaw discs rotate so as to oppose each other, i.e.
given the same axis of rotation, one disc rotates clockwise while the other rotates
counter-clockwise.

Naturally, not all pin patterns result in a force closure grasp of the object. Given a
certain object and a pre-determined configuration of the pins within each jaw disc,
the jaw discs rotate until some of the pins catch the edges of the object to be
grasped, thus preventing the disc from rotating any further. The pins which do
not make edge contact may make contact with the face of the object or they may
make no contact at all. It is the combination of pins in contact with an object edge
and face which makes all pin configurations unique. This combination of contacts
also determines whether the resultant grasp is force closure or not [12][13].

6.2 Implementation
As before, the system has been implemented in SICStus Prolog v3.3, maintaining
the same overall architecture. The initial condition of this program is that the par-
allel jaws have been actuated and both rotary discs are in contact with the object to
be grasped. The goal of the system is to achieve a certain combination of edge and
face contacts. In this implementation, that combination is: minimum of 2 edge
contacts and 1 face contact for each disc.

The pin configuration of the rotary disc is very important in the success of the sys-
tem, thus the agents represent different pre-determined pin configurations. In this
case, four agents, see section 6.2.3, were chosen, Figure 49.

(a) Agent A (b) Agent B (c) Agent C (d) Agent D


Figure 49: Agent Pin Configurations
As an example, the specific pin configurations were chosen to minimize the num-
ber of pins on the rotary. Thus, the first rotary disc, A, has only 3 pins, the second
and third rotary discs, B and C respectively, each have four pins, but with differ-
ent pin arrangements, and the last rotary disc has five pins. The number of possi-
ble rotary discs is infinitely large, thus only four such rotary discs were picked.
The nature of the EOS allows future rotary disc configurations to be easily added,
see section 6.2.3. In addition, the pin configurations were chosen to exhibit a wide
range of inter-pin distances (distance between pins on a rotary) in order to accom-
modate objects of various shapes and sizes.
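The variety of inter-pin distances in a configuration can be checked directly from the stored polar coordinates. The following sketch (in Python rather than the thesis' Prolog; the helper names are hypothetical) converts each pin to Cartesian coordinates and lists the pairwise distances for disc B:

```python
import math
from itertools import combinations

def pin_xy(theta_deg, d):
    """Convert a pin's polar coordinates (angle in degrees,
    distance from the disc centre) to Cartesian coordinates."""
    t = math.radians(theta_deg)
    return (d * math.cos(t), d * math.sin(t))

def inter_pin_distances(pins):
    """Pairwise straight-line distances between the pins of one
    disc; `pins` is a list of [theta_deg, d] pairs."""
    pts = [pin_xy(t, d) for t, d in pins]
    return sorted(math.dist(a, b) for a, b in combinations(pts, 2))

# Disc B from Listing 26: one pin at the centre, three on a circle
# of radius 2 at 60, 180 and 240 degrees.
config_b = [[0, 0], [60, 2], [180, 2], [240, 2]]
dists = inter_pin_distances(config_b)
```

For this configuration the six pairwise distances span from 2 up to 4 (the two antipodal rim pins), which is the kind of spread the design aims for.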

Figure 50: Rotary Disc Symmetry


Each rotary disc is defined and stored on the information board, see section 6.2.3,
as shown in the following code segment:
(configB:pins, [Pin1, Pin2, ..., PinN])
where, configB = name of disc configuration
       N       = number of pins on the disc
       Pin#    = [θ, d]
               = polar coordinates of a pin w.r.t. the disc, Figure 51

Figure 51: Defining the Rotary Disc Pin Location


The rotary disc representation does not change, thus a variable named rotary_angle
is used to store the value of the angle through which the disc has rotated.
This value is also stored on the information board. To determine the global polar
coordinates of a pin, the distance of the pin from the origin stays the same, but the
angle of the pin from the positive x-axis must be incremented by rotary_angle,
i.e. [θ+rotary_angle, d].
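The update above can be sketched in Python (a re-expression, not the thesis' Prolog; the function name is hypothetical): the stored pin angle is offset by the current rotary angle and the result converted to a global Cartesian position.

```python
import math

def global_pin_position(pin, rotary_angle_deg):
    """A pin is stored as [theta_deg, d] relative to the disc.
    Rotation leaves d unchanged and adds the current rotary
    angle to theta; returns the global Cartesian position."""
    theta_deg, d = pin
    t = math.radians(theta_deg + rotary_angle_deg)
    return (d * math.cos(t), d * math.sin(t))

# A pin stored at [90 deg, 2], after the disc rotates a further
# 90 degrees, sits at 180 degrees, i.e. at (-2, 0).
x, y = global_pin_position([90, 2], 90)
```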

Figure 52: Y-Axis Object Symmetry


The first assumption being made in this system is that the rotary discs are of the
same size and have the same pin arrangement, Figure 50. The second assumption
is that the two end faces of the object in front of each of the discs are identical. In
addition, the faces are simple closed polygons with y-axis symmetry e.g. Figure
52. These assumptions are made to simplify the problem of grasping an object
with the parallel rotary jaw gripper.

Frontal View: Left Disc Back View: Right Disc


Figure 53: Disc Rotation Angles
This assumption allows the task to be simplified such that only one rotation angle,
θ in Figure 53, need be found for one particular face shape. If θ is found for the
Left Disc in Figure 53, then -θ can be used for the Right Disc. In addition, the
resultant contact points from the rotation of each disc oppose each other. Relaxing
this assumption can easily be accommodated and is discussed in section 6.5.

6.2.1 Types of Objects Accommodated


There is a wide variety of objects which the parallel reconfigurable jaw end-effec-
tor can accommodate, such as prismatic shapes, spherical objects, and cylindrical
objects. However, this implementation looks at shapes which have y-axis symme-
try and flat end faces.

Figure 54: Object Face Representation


The objects to be grasped are described by the end face, i.e. the face to be grasped.
Each shape face is a planar polygon, thus each edge is a line segment and each line
segment is described by a set of two end points. As a result, the object face in Figure
54 requires four pairs of coordinates to fully describe its face. Taking the object
face shown in Figure 54 as an example, the following would be object face repre-
sentation:

(object:face, [ [[-b,c],[0,-d]], [[0,-d],[b,c]],
                [[b,c],[0,a]],  [[0,a],[-b,c]] ])
Listing 25: Specifying an Object Face
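The edge list of Listing 25 follows directly from an ordered walk around the polygon's vertices. A small Python sketch (hypothetical helper, with illustrative numeric values substituted for a, b, c, d) builds it:

```python
def edges_from_vertices(vertices):
    """Build an edge list in the style of Listing 25 from an
    ordered list of polygon vertices: each edge pairs consecutive
    vertices, and the last vertex closes back to the first."""
    n = len(vertices)
    return [[vertices[i], vertices[(i + 1) % n]] for i in range(n)]

# The Figure 54 face with illustrative values b=2, c=1, a=2, d=1.
face = edges_from_vertices([[-2, 1], [0, -1], [2, 1], [0, 2]])
```

The result is four edges, each a pair of end points, matching the four-coordinate-pair description in the text.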
An additional piece of information about the object's face shape is asserted. This
information consists of two parameters: shape regularity and edge parity. The need
for this information becomes evident in section 6.2.4. The regularity of the shape of
the face is classified as: regular, semi-regular, or irregular. A regular object is an
object which has all sides the same length, as a square does. A kite is an example
of a semi-regular shape, because it is symmetric, although the sides are a variety
of lengths. Irregular shapes have sides of all different lengths. The parity of the
number of sides is classified as odd or even, depending on whether the number of
sides of the face is odd or even. This information is asserted on the information
board under the label name object:shape. The object regularity is represented
by a number: 0, 1, and 2 for irregular, semi-regular, and regular shapes respec-
tively. The parity is a string field indicating odd or even. Thus the shape in Figure
54 has the following data asserted on the IB:
(object:shape, [1,even])
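The regularity/parity descriptor can be derived from the edge list itself. The sketch below is a Python approximation (not from the thesis): it uses repeated edge lengths as a proxy for the thesis' semi-regular class, which is an assumption on my part, since the thesis defines semi-regularity via symmetry.

```python
import math

def shape_descriptor(edges):
    """Classify a face as in object:shape: regularity is 2 if all
    edge lengths match, 1 if some lengths repeat but not all are
    equal (e.g. a kite), 0 if all lengths differ; parity is the
    string 'odd' or 'even' for the number of sides."""
    lengths = [round(math.dist(a, b), 6) for a, b in edges]
    distinct = len(set(lengths))
    if distinct == 1:
        regularity = 2          # regular, e.g. a square
    elif distinct < len(lengths):
        regularity = 1          # semi-regular, e.g. a kite
    else:
        regularity = 0          # irregular
    parity = "even" if len(edges) % 2 == 0 else "odd"
    return [regularity, parity]

# A kite-like face in the spirit of Figure 54 (illustrative values).
edges = [[[-2, 1], [0, -1]], [[0, -1], [2, 1]],
         [[2, 1], [0, 2]], [[0, 2], [-2, 1]]]
desc = shape_descriptor(edges)
```

For this kite the descriptor comes out as [1, "even"], matching the (object:shape, [1,even]) entry asserted on the IB.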

6.2.2 Object Contact Detection


It is assumed that the origin of the rotary disc (0,O) lies at the y-coordinate centroid
of the object, within the object, as in Figure 54. This follows from the assumption
of face symmetry. This coordinate map is used to determine whether a pin has made edge
contact, face contact, or no contact, Figure 55, with respect to a given object face:

(a) Edge Contact (b) Face Contact (c) No Contact

- = face edge
@ = spring-loaded pin
Figure 55: Types of Contacts
Given a list of edges as in Listing 25, a list of pins on a rotary, as in Listing 26, and
the current angle of rotation of the disc, rotary_angle, a set of pin status lists is
assembled. These lists are used to determine the contact status of a pin, i.e. edge
contact, face contact, or no contact.
(configB:pins, [[0,0], [60°,2], [180°,2], [240°,2]])
Listing 26: Pin Configuration for Rotary Disc B
Each list contains the location of one pin with respect to every edge of the object
face. The location of a pin with respect to an edge, is classified as (i) possible face
contact (value = 0), (ii) edge contact (value = 1), or (iii) no contact (value = 2). Tak-
ing the triangle of Figure 55a as an example, the pin has made edge contact with
one edge. Since the pin also lies on the same side of the other two lines as the cen-
ter of the coordinate frame, the pin is categorized as "possible face contact" by the
other two sides. Thus the list generated for the pin in Figure 55a is [1, 0, 0].
Note that the list has three elements, one for every edge.

Once the status list is assembled, a search of this list is performed and the contact
status is reasoned as shown below:

if member(1,List)
then <pin has made edge contact>
elseif member(2,List)
then <pin has made no contact with object>
else <pin has made face contact>
Listing 27: Reasoning a Pin's Contact Status
Once all pins have been classified, the number of pins which have made face con-
tact with the object is stored under the label face-contacts and the number of
pins which have made edge contact are stored under the edge-contacts label
on the information board.
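The reasoning over a status list, and the tallying of the edge_contacts and face_contacts counts, can be sketched in Python (a re-expression, not the thesis' Prolog; a pin counts as "no contact" as soon as any edge reports status 2, since that places it outside the object):

```python
def contact_status(status_list):
    """Reason a pin's contact from its per-edge status list
    (0 = possible face contact, 1 = edge contact, 2 = no
    contact): any 1 means edge contact, otherwise any 2 means
    the pin is outside the object, else it is face contact."""
    if 1 in status_list:
        return "edge"
    if 2 in status_list:
        return "none"
    return "face"

def count_contacts(status_lists):
    """Tally edge and face contacts over all pins, as stored
    under edge_contacts and face_contacts on the IB."""
    kinds = [contact_status(s) for s in status_lists]
    return kinds.count("edge"), kinds.count("face")

# The Figure 55a pin ([1,0,0]) plus one interior and one outside pin.
edge_count, face_count = count_contacts([[1, 0, 0], [0, 0, 0], [2, 0, 0]])
```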

The contact detection algorithm for establishing the pin location classification
with respect to an edge, i.e. possible face contact, edge contact, or no contact, is
called on_line and it is implemented as in the following code segment:

/* The two end points of an edge, pointA(Xa,Ya) */
/* and pointB(Xb,Yb) are given, while the pin   */
/* location is (X,Y). The Status is the output  */
/* variable containing the value 0, 1, or 2.    */
on_line([[Xa,Ya],[Xb,Yb]], [X,Y], Status):-
    /* line equation: Y=(Yb-Ya)/(Xb-Xa)*(X-Xb)+Yb */
    eval(Xb-Xa, DeltaX),
    /* Calculate difference between right and */
    /* left side of the line equation.        */
    if(DeltaX < 0.01,
       (/* Cannot divide by zero. */
        eval(Xb-X, Diff),
        eval(Xb, Orig)),
       (/* Division by zero not a problem. */
        eval((Yb-Ya)/DeltaX*(X-Xb)+Yb, Yest),
        eval(Yest-Y, Diff),
        eval((Yb-Ya)/DeltaX*(-Xb)+Yb, Orig))),
    /* Since it is assumed that the origin lies  */
    /* within the object, the Pin and the Origin */
    /* must lie on the same side of the edge     */
    /* for face contact consideration.           */
    if(Diff > 0, Above1 = true, Above1 = fail),
    if(Orig > 0, Above2 = fail, Above2 = true),
    /* Must also ensure that the Pin lies between  */
    /* the end points of the edge in order to have */
    /* edge contact.                               */
    /* Distance between PointA and PointB. */
    dist([Xa,Ya], [Xb,Yb], MaxDist),
    /* Distance between PointA and the Pin. */
    dist([Xa,Ya], [X,Y], DistA),
    /* Distance between PointB and the Pin. */
    dist([Xb,Yb], [X,Y], DistB),
    /* Determine which of the three distances is */
    /* the largest.                              */
    eval(max(DistA,DistB), Dist),
    eval(max(MaxDist,Dist), Biggest),
    /* If the pin lies between the two end points, */
    /* the largest distance is between the two     */
    /* end points.                                 */
    if(Biggest = MaxDist,
       Boundary = true,
       Boundary = fail),
    if(and([abs(Diff) < 0.10,
            Above1 = Above2, Boundary]),
       /* Then, Pin is on the edge. */
       Status = 1,
       /* Elseif the Pin and Origin lie on      */
       /* opposite sides of the edge, then have */
       /* no contact.                           */
       if(Above1 = Above2,
          /* Point is outside object. */
          Status = 2,
          /* Else have possible face contact. */
          Status = 0)).
Listing 28: Determining the Pin Location Classification
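The same geometric test can be re-expressed compactly in Python (a sketch, not the thesis' code). It uses the cross product for the side-of-line test instead of the slope form of the line equation, which removes the vertical-edge special case; the function name and tolerance are assumptions.

```python
import math

def edge_status(edge, pin, tol=0.10):
    """Classify a pin against one edge: 1 = on the edge,
    0 = same side of the edge line as the origin (possible
    face contact), 2 = opposite side (no contact)."""
    (xa, ya), (xb, yb) = edge
    x, y = pin
    # Cross products give the signed side of the infinite line
    # AB for the pin and for the origin; this avoids the
    # division-by-zero case of the slope form.
    side_pin = (xb - xa) * (y - ya) - (yb - ya) * (x - xa)
    side_org = (xb - xa) * (0 - ya) - (yb - ya) * (0 - xa)
    # Edge contact: near the line AND between the end points
    # (the largest of the three pairwise distances is then the
    # edge length itself, as in the thesis' Biggest test).
    length = math.dist((xa, ya), (xb, yb))
    near = abs(side_pin) / length < tol
    between = max(math.dist((xa, ya), pin),
                  math.dist((xb, yb), pin)) <= length
    if near and between:
        return 1
    return 0 if side_pin * side_org > 0 else 2

# An edge from (-1,-1) to (1,-1); the origin lies above it.
on_edge = edge_status([(-1, -1), (1, -1)], (0, -1))   # pin on the edge
inside = edge_status([(-1, -1), (1, -1)], (0, 0))     # origin side
outside = edge_status([(-1, -1), (1, -1)], (0, -3))   # far side
```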

6.2.3 EOS Architecture


The structure of the EOS remains the same, i.e. it has the same three building
blocks: the information board, the controller, and the agents, as in Figure 4 in
Chapter 2. The rating system is used again to coordinate the execution of the
agents. The rating system details, as they pertain to this implementation, are dis-
cussed in section 6.2.4.

The three components are integrated together as shown in Figure 56. The program
ends when all agents have been discarded or when a pin configuration has been
found which satisfies the goal of the program.
CONTROLLER
- pick highest rated agent
- allow one agent to act
- verify contact criteria

AGENT A ... AGENT D
- rotate
- check contact

INFORMATION BOARD
- store status variables

Figure 56: Parallel Reconfigurable Jaw Gripper - Algorithm


The role of each of the components is described below.

Information Board (IB)


The information board retains the exact same representation, Figure 5 in Chapter
2. A large portion of the data residing on the IB is available to all agents and the
controller. However, each agent also has its own private IB, where data which
pertains only to that agent is stored; for example, the pin configuration of the
rotary disc which the agent controls.

Controller
The controller plays a similar role as before and is responsible for the following:
pick the highest rated agent at current cycle
intervene in the rating process by taking the current agent off the rating
list and picking the next highest rated agent for execution, if the highest
rated agent is not successful in accomplishing its goal
determine when the pin contact status meets the required contact criteria
(i.e. 2 edge contacts and 1 face contact)
An agent is deemed to have been not successful if one of the two conditions below
is satisfied:
(i) the corresponding disc has rotated through 360° and has not made any
pin contacts
(ii) the resultant pin contacts do not meet the minimum contact require-
ment

The pseudo code for the controller implementation is as shown below:

/* Get current rotary angle and the maximum */
/* allowable rotary angle from the IB       */
get_from_IB(rotary_angle, Rangle),
get_from_IB(rotary_max, MaxAngle),
/* Check that the current angle has not exceeded */
/* the max.                                      */
if Rangle < MaxAngle
then /* continue with current agent */
    choose(Agent from AgentList),
    /* determine whether resultant grasp is good */
    get_from_IB(edge_contacts, EdgeList),
    get_from_IB(face_contacts, FaceList),
    length(of EdgeList is NumEs),
    length(of FaceList is NumFs),
    if and(NumEs >= 2, NumFs >= 1)
    then /* have succeeded */
        End = true,
    else /* not good enough, keep going */
        End = fail,
    if(End,
       then
           <do nothing>,
       else
           /* Rotate rotary by Rstep amount and go */
           /* back to beginning                    */
           eval(Rangle+Rstep, NewRangle),
           bb_put(rotary_angle, NewRangle)),
else
    /* Delete highest rated agent from the agent */
    /* listing.                                  */
    if AgentList = []
    then /* must end */
        End = true
    else /* continue */
        End = fail
        extract(Agent from AgentList = NewAgentList),
        assert_on_IB(agent_list, NewAgentList),
        bb_put(rotary_angle, 0.0),
if End
then /* done */
    exit,
else /* continue */
    eval(Num+1, NewNum),
    get_from_IB(iter, NewNum),
    execute(controller)
Listing 29: Implementing the Controller
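The controller's overall loop can be sketched in Python (a re-expression of the logic, not the thesis' Prolog; the agent list, step size, and the stubbed `try_agent` environment are all illustrative assumptions):

```python
def run_controller(agents, try_agent, max_angle=360.0, step=2.0):
    """Sketch of the controller loop: `agents` is a list of
    (name, rating) pairs; try_agent(name, angle) returns the
    (edge_contacts, face_contacts) counts at that rotary angle.
    Succeeds when >= 2 edge and >= 1 face contacts are found;
    an unsuccessful agent is discarded after a full rotation."""
    # Highest-rated agent first, as the rating system dictates.
    queue = sorted(agents, key=lambda a: a[1], reverse=True)
    cycles = 0
    while queue:
        name, _ = queue[0]
        angle = 0.0
        while angle < max_angle:
            cycles += 1
            edges, faces = try_agent(name, angle)
            if edges >= 2 and faces >= 1:
                return name, angle, cycles        # contact criteria met
            angle += step
        queue.pop(0)           # discard the unsuccessful agent
    return None, None, cycles  # all agents tried, no grasp

# Toy environment: only disc "B" meets the criteria, at 40 degrees.
def toy(name, angle):
    return (3, 1) if name == "B" and angle >= 40 else (1, 0)

result = run_controller([("A", 0.8), ("B", 0.6), ("C", 0.7)], toy)
```

In this toy run agents A and C each burn a full rotation before B, the lowest-rated agent, finally succeeds, mirroring the A, C, B orderings seen in the experiments.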
Agents
The main difference between this implementation and the previous one lies in the
areas of specialty to which these agents are dedicated. Instead of having physical,
behavioural, and task agents, only physical agents are required in this implemen-
tation. The physical agents are dedicated to one rotary disc configuration. As a
result, there are a total of four agents: configA, configB, configC, configD.

The agent structure is the same as before, except that there is no need for the pre-
condition component. Should the need for a pre-condition arise at a later time,
then this can be easily added, see section 6.5.

The implementations of the agents are similar to each other, as the finger agents
were in the tip grasp achievement with a three fingered hand. The task of each
agent is to:
rotate rotary by a small amount
determine contact status of the rotary pins and the object face
assert the contact status on the IB

Here is the Prolog code for the implementation of agent configA:

configA:body :-
    /* Get object face information from IB */
    bb_get(object:shape, [Regularity,_]),
    bb_get(object:face, Face),
    bb_get(configA:pins, PinConfiguration),
    /* Determine which pins from PinConfiguration */
    /* made edge or face contact                  */
    contact(PinConfiguration, Face,
            [], EdgeContacts,
            [], FaceContacts),
    /* Assert output on IB; the EdgeContacts and */
    /* the FaceContacts lists may be empty if no */
    /* pins have made contact.                   */
    bb_put(edge_contacts, EdgeContacts),
    bb_put(face_contacts, FaceContacts).
Listing 30: configA Implementation
The pin configuration of each agent is stored as <agent_name>:pins on the IB
and has the following structure:

N is the number of pins on the rotary disc and each pin location is given as an (x,y)
coordinate, which corresponds to a location on the disc surface, i.e. no z-coordinate is
needed.
6.2.4 Rating System
Given an object, each pin-configuration-specific agent must rate its own confi-
dence in achieving the minimum contact requirement. This rating is based on four
influencing factors:
influencing factors:
(i) the number of pins in the rotary
(ii) the arrangement of the pins in the rotary
(iii) the shape of the object face (regular, semi-regular, or irregular)
(iv) the number of sides (odd or even) of the object face

Since the number of pins and the arrangement of the pins in the rotary disc is
fixed for each agent and for all objects, influencing factors (i) and (ii) are taken into
account in the default sub-rating of the agents. Table 13 shows the default sub-rat-
ings assigned to each of the agents. The motivation behind these sub-ratings is
that the fewer pins a rotary has, the better. Also, pin arrangements which exhibit a
variety of distances are rated better than regular-distance pin arrangements with
the same number of pins.

Table 13: Agent Default Sub-ratings

Agent     # Pins   Pin Arrangement   Assigned Default Sub-rating
Agent A   3        (semi-)regular    0.8
Agent B   4        (semi-)regular    0.6
Agent C   4        varied            0.7
Agent D   5        varied            0.5

The last two influencing factors, (iii) and (iv), depend on the object, and thus they
change every time the program is run. These factors therefore serve as the basis
of the opportunistic sub-rating of the agents. Note that since the shape of the
object does not change throughout the program, neither do the opportunistic
sub-ratings. The sub-ratings vary from agent to agent, as in Table 14.
The weights used for the two sub-ratings are 0.4 for the default sub-rating and 0.6
for the opportunistic sub-rating. Thus, slightly more emphasis is placed on the
opportunistic sub-rating than on the default one.
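Combining the two sub-ratings is a simple weighted sum. The sketch below (Python, not from the thesis; the default values are from Table 13, but the opportunistic values are purely illustrative) ranks the agents the way the controller would:

```python
def combined_rating(default_sub, opportunistic_sub,
                    w_default=0.4, w_opportunistic=0.6):
    """Weighted sum of the two sub-ratings, with slightly more
    weight on the opportunistic (object-dependent) one."""
    return w_default * default_sub + w_opportunistic * opportunistic_sub

# Default sub-ratings from Table 13; the opportunistic values
# here are illustrative only (the thesis assigns them per
# face shape and side parity).
defaults = {"A": 0.8, "B": 0.6, "C": 0.7, "D": 0.5}
opportunistic = {"A": 0.5, "B": 0.9, "C": 0.6, "D": 0.4}
ratings = {a: combined_rating(defaults[a], opportunistic[a])
           for a in defaults}
order = sorted(ratings, key=ratings.get, reverse=True)
```

With these illustrative numbers, agent B's strong opportunistic sub-rating outweighs its weak default one, pushing it to the front of the queue.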

Table 14: Agent Opportunistic Sub-ratings

Agent     Face Shape       # Sides   Opportunistic Sub-rating
Agent A   (semi-)regular   odd
          (semi-)regular   even
          irregular        odd
          irregular        even
Agent B   (semi-)regular   odd
          (semi-)regular   even
          irregular        odd
          irregular        even
Agent C   (semi-)regular   odd
          (semi-)regular   even
          irregular        odd
          irregular        even
Agent D   (semi-)regular   odd
          (semi-)regular   even
          irregular        odd
          irregular        even

6.3 Experimental Results


Having already shown how an EOS has been implemented to solve the task of
grasping an object with the parallel reconfigurable jaw gripper, this section illus-
trates the grasping of triangular, square, and pentagonal faces with the various
rotary disc configurations.

Triangular Faces
Three slightly different triangular faces have been tested. The number of cycles for
each face is as shown in Table 15. Note that 631 is the maximum number of cycles
of one program execution.

Table 15: Triangular Face Summary


Face Number   Success?   Number of Cycles   Discs Tried
1             No         631                A, C, B, D
2             Yes        340                A, C, B
3             Yes        343                A, C, B

[Plots of pin contacts for Disc A (cycle #19), Disc C (cycle #208),
Disc B, and Disc D (cycle #490)]
Figure 57: Triangular Face #1


Since the first triangular face was not grasped successfully, Figure 57 shows con-
tact situations with each disc configuration. None of the contacts contained at
least two edge contacts and one face contact, thus all disc configurations were
tried until none were left. Disc configuration C came close at cycle #208, but the
triangle was too large for it.

Figure 58 shows the results of grasping the second triangular face. This time, disc
configuration B was successful in grasping the object face, after configurations A
and C failed to do so. Three edge contacts and one face contact ensured that the
contact criteria was met.

[Plots of pin contacts for Disc A, Disc C, and Disc B (cycle #340)]

Figure 58: Triangular Face #2


The last triangular face is shown in Figure 59. Again, disc configuration B was
successful in grasping this object face, after configurations A and C tried to do so.
Taking a closer look at the edge contacts, it is obvious that although the contact
criteria were met, the two pins which have made edge contact have done so with
the same edge. Consequently, this resultant contact configuration does not con-
strain the object in the x-direction.
[Plots of pin contacts for Disc A, Disc C, and Disc B (cycle #343)]

Figure 59: Triangular Face #3


Square Faces
Three sizes of square faces have been tested with the four configurations of rotary
discs mentioned in section 6.2. Table 16 lists the summary of these experiments.
Note that although the order in which the agents have been allowed to participate
is the same as for triangular faces, the agents which have succeeded in grasping
the square faces are different.

Table 16: Square Face Summary


Face Number Success? Number of Cycles Discs Tried
1 Yes 33 A
2 Yes 184 A, C
3 No 631 A, C, B, D
The first square face is grasped successfully by disc configuration A after only 33
cycles, as shown in Figure 60.

[Plot of pin contacts for Disc A (cycle #33)]

Figure 60: Square Face #1


Figure 61 shows another successful grasp. This time, disc configuration C grasped
the second square face, which is slightly smaller than the first one. Disc configura-
tion A was not able to make edge contact with the square due to the square's
smaller size.

[Plots of pin contacts for Disc A (cycle #50) and Disc C (cycle #184)]

Figure 61: Square Face #2


The third square face is the smallest of the square faces and as a result, none of the
disc configurations were successful in meeting the contact criteria. All four disc
configurations were tried one by one until none were left.
[Plots of pin contacts for Disc A, Disc C (cycle #158),
Disc B (cycle #316), and Disc D (cycle #474)]
Figure 62: Square Face #3


Pentagonal Faces
The last type of face shape investigated is the pentagon. Again, three slightly dif-
ferent shapes have been tested and the summary of results is as shown in Table 17.

Table 17: Pentagon Faces Summary


Face Number Success? Number of Cycles Discs Tried

1 No 631 A, C, B, D
2 Yes 378 A, C, B
3 No 631 A, C, B, D

Unfortunately, the first pentagon face was not successfully grasped. All four disc
configurations were allowed to run until none were left. Figure 63 shows the
unsatisfactory contacts which were made with this face.
The second pentagon face was successfully grasped, as seen in Figure 64.
Although disc configurations A and C were unsuccessful at first, configuration B
prevailed after 378 cycles.

Figure 65 shows the unsuccessful attempt of the four disc configurations at grasp-
ing the third pentagon face.

[Plots of pin contacts for Disc A (cycle #9), Disc C (cycle #200),
Disc B (cycle #388), and Disc D (cycle #480)]

Figure 65: Pentagon Face #3

6.4 Discussion
The three object faces investigated in section 6.3 were chosen for their simplicity.
Varying the shape face slightly within each class showed that small changes in the
shape of the object affected the ability of any of the disc rotaries to meet the goal
contact criteria. Looking at the cases when the face failed to be grasped success-
fully, it is evident that many of these cases were close, although not close enough.

Disc configuration C came close to grasping the first triangular face, Figure 57, and
the third square face, Figure 62. Changing this configuration slightly could result
in successful grasps of these faces in the future.

Going through a test of M object faces and N disc configurations is a good way of
seeing what works and what does not. Those configurations which do not work,
can then be modified so as to be potentially useful in the next set of tests. In this
case, configuration D never succeeded in grasping any of the objects. Thus, given
the set of faces tested here, this configuration can be discarded.

As in the case of grasping curved objects with a three fingered hand, here too the
physical distribution of the pins on the rotary discs limits the ranges of objects
which a given pin configuration can grasp. Thus, a large number of discs is
required to ensure the successful grasp of a wide range of objects, or each pin con-
figuration would require a much greater number of pins.

Although the task of grasping an object face with the parallel reconfigurable jaw
end-effector is not as challenging as that of grasping with a three-fingered grasp,
the EOS still lent itself to this problem.

6.5 Extensions
The EOS architecture is very flexible and modular, thus facilitating the expansion
of this program as further functionality is incorporated.

6.5.1 Flexibility: Dual EOS System


Although the shape of the face was constrained to simple closed shapes with y-
axis symmetry, any simple closed face shape can be accommodated, as long as the
origin of the coordinate map is somewhere within the object, recall Figure 54. This
last constraint is necessary for the contact detection algorithm currently used.

One consequence of the lack of y-axis symmetry in the face is that each disc, the
left and right one, has to undergo its own evaluation of which rotary disc configu-
ration is best for the face in front of it. Furthermore, it is necessary to analyze the
grasp quality of these resultant pin contacts together. The manner in which this
can be accomplished is to use one EOS system for each disc and to add a higher
level controller which supervises the cooperation of these two systems, Figure 66.

In order to be able to accommodate any closed face shape, a different contact
detection algorithm has to be implemented. on_line, see section 6.2.2, is cur-
rently responsible for performing the contact detection task. Since the algorithm is
local to this function, changing the contact detection algorithm can be easily
accomplished without affecting the rest of the system.

HIGH-LEVEL CONTROLLER
- verify grasp

LEFT DISC EOS              RIGHT DISC EOS
- find rotary for          - find rotary for
  pin contacts               pin contacts

HIGH-LEVEL IB
- store status variables
Figure 66: Dual EOS System

6.5.2 Modularity
Currently the EOS for the Parallel Reconfigurable Jaw Gripper has only four
agents. However, the addition and/or deletion of agents is easily accomplished.
Adding an agent means that the new agent's body must be added to the program,
suitable sub-ratings must be decided on, and the agent's name must be added to
the list of agents, agents, on the IB. Deleting an existing agent involves only the
removal of that agent's name from the agents list. The code itself can be deleted
at a later time, if desired. Given the zero success rate of the agent responsible for
disc configuration D, the EOS modularity makes it easy to remove this agent or
modify it.

As discussed in section 6.2.4, the default and opportunistic sub-ratings were
assigned based on the consideration of four parameters. As a result, if there is a
desire to have the sub-ratings take other parameters into consideration, this can
be easily done by simply modifying the default sub-ratings and by changing the
rules for calculating the opportunistic sub-ratings.

The agents' pre-conditions were not used in this implementation; however, if
there is a need in the future to have this initial filtering system added, its installa-
tion is quite simple. It simply requires that each agent be assigned its own pre-
conditions. Then the controller must request the status of each agent's pre-condi-
tion and must ensure that the pre-conditions of an agent are satisfied before allow-
ing that agent to participate during the current cycle.
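Such a pre-condition filter could be sketched as follows (Python, purely illustrative; the predicate and board contents are hypothetical examples, not from the thesis):

```python
def eligible_agents(agents, preconditions, board):
    """Keep only the agents whose pre-condition predicate holds
    against the current information-board state; agents without
    a registered pre-condition always qualify."""
    return [a for a in agents
            if preconditions.get(a, lambda b: True)(board)]

# Hypothetical example: agent D only participates for faces
# with at least five sides.
pre = {"D": lambda b: b["num_sides"] >= 5}
board = {"num_sides": 4}
active = eligible_agents(["A", "B", "C", "D"], pre, board)
```

The controller would then run its normal rating-and-selection cycle over the filtered list instead of the full agent list.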
Chapter 7

Conclusions & Future Work

7.1 Conclusions
The Enhanced Opportunistic System (EOS) was successfully implemented as the
architecture for the grasping tasks of a dexterous end-effector and of a parallel
reconfigurable jaw end-effector. This agent-based architecture exhibited an oppor-
tunistic centralized control and was enhanced through the novel rating system
based on the Bayesian Formalism. This rating system is used to calculate the util-
ity of agents during the current cycle and to distribute the task of scheduling the
agents to the agents themselves.

The two EOS implementations discussed in this thesis have presented the corre-
sponding EOS agents as belonging to one of three categories: physical, behav-
ioural, and task agents. These categories do not span the space of all possible
agents, thus it is possible that other grasping problems call for other categories of
agents as well. Franklin and Graesser [8] have surveyed many agent-based archi-
tectures and have found that each researcher has implemented agent categories
which match his/her problem. Other agent categories which have been identified
are reasoning agents, communicative agents, and information agents. It is up to
the researcher to determine what agent categories are required to solve the prob-
lem at hand. Once the agent categories are identified, the researcher can then start
to identify decoupled entities within a category. The agents within a category
have similar tasks, but they are responsible for different parts of the system. For
example, in the EOS implementation of the dexterous end-effector, two types of
behaviour were required, the wrap grasp and the tip grasp. Since one behaviour is
decoupled from the other, each behaviour was assigned its own agent. Keeping
these two behaviours decoupled means the system modularity can be maintained.
There is no formula which the author is aware of which can predict the number of
agent categories and agents within each category; however, a methodology has
been identified: (i) identify the agent categories needed to address the problem
and then (ii) identify the agents within each category.

The dexterous end-effector analysis showed that the shape of curved objects such
as spheres and cylinders can be determined through haptic exploratory proce-
dures, such as EP1 and EP3. The data gathered from the object reconstruction data
can then be used to establish a stable tip-prehension grasp of the object. Studying
object shapes composed of combinations of spheres and cylinders, has shown that
once the primary feature has been identified and the secondary feature has been
located, the tip-prehension of the main feature can still be accomplished in spite of
the object's more complex shape.

The flexibility of the EOS to other grasping problems has been shown through its
use in the problem of grasping using the parallel reconfigurable jaw end-effector.
This simple implementation has shown that not only is the EOS suited to this task,
but that the resultant system is open to many areas of expansion.

7.2 Future Work


Using EP1 and EP3 as methods of reconstructing spheres, cylinders, and combi-
nations thereof can be further studied and applied to more complex object
shapes. A wide variety of curved objects could be constructed using constructive
solid geometry (CSG) [7] with spheres and cylinders as shape primitives. In CSG,
primitive shapes are combined through boolean operators to produce new,
more complex shapes.
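The boolean combination of primitives is easy to illustrate with point-membership tests. The sketch below (Python, my own illustration; shapes and dimensions are arbitrary) builds a capsule as the CSG union of a cylinder and a sphere:

```python
import math

def in_sphere(p, centre, r):
    """Point-membership test for a sphere primitive."""
    return math.dist(p, centre) <= r

def in_cylinder(p, z0, z1, r):
    """Membership test for a cylinder aligned with the z-axis
    between heights z0 and z1."""
    x, y, z = p
    return z0 <= z <= z1 and math.hypot(x, y) <= r

def in_capsule(p):
    """CSG union: a unit-radius cylinder capped with a sphere,
    i.e. a new shape built from two primitives with boolean OR."""
    return in_cylinder(p, 0.0, 2.0, 1.0) or in_sphere(p, (0, 0, 2.0), 1.0)

inside_cap = in_capsule((0.0, 0.0, 2.5))   # inside the spherical cap
outside = in_capsule((0.0, 0.0, 3.5))      # above the cap
```

Intersection and difference follow the same pattern with `and` and `and not`, which is how richer test objects for EP1/EP3 could be composed.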

EP1 and EP3 are the two haptic exploratory procedures used in this thesis, how-
ever, they are not the only haptic exploratory procedures which have been identi-
fied and classified. Humans use a wide variety of EPs in their environment
sensing [17] and there is no reason why a robot cannot do the same. The object
reconstruction method presented here addresses only the identification of an
object's shape and size; however, objects have many more properties. For exam-
ple, identifying the texture of a surface can say a lot about what that object is.

The rating system of the EOS has successfully done its job of scheduling agents in
the grasping environment. However, currently the rating system has no means of
improving itself based on past experiences; the challenges of grasping the same
type of object are encountered with every program iteration. Consequently, work
needs to be done in allowing the system to learn from its grasping experience.
This learning factor can be implemented as a third sub-rating of the rating system,
thus pooling the knowledge acquired by three sub-ratings:
                     3
rating(Agent_j) =    Σ   Weight_i ⋅ SubRating_i,j                  (24)
                    i=1

In this program the environment, i.e. the robotic hand and the objects to be
grasped, has been simulated, although these simulations have been isolated
from the rest of the system architecture. As a result, this EOS implementation
could easily be used with actual hardware. Assuming that the fingers of the actual
robotic hand can be controlled through the joints of the fingers, then the robotic
hand can use the same inputs as the modeled hand. In addition, the output of the
modeled hand can be duplicated with outputs from an actual robotic hand, given
that the robotic hand has tactile sensors at least at the fingertips, to provide con-
tact detection information. Contact with finger links can be done with either joint
torque sensors or tactile sensors along the link.

The EOS implementation of grasping with the parallel reconfigurable jaw gripper
has shown that choosing the appropriate disc configurations for a set of object
faces is not easy. The shape of a face and the number of edges it has are a good
starting point, but much work remains to be done on predicting the utility of a
disc pin configuration for a given object.
Appendix I

Modeling the Robotic Hand

The robotic hand modeled has three fingers, each consisting of three links. The
fingers are attached to the wrist as shown in Figure 67:

Figure 67: Frontal View of Robotic Hand (fingers 1 and 2 above the origin of
the wrist, finger 3 below)

Consequently, the origins of the fingers do not coincide with the origin of the
coordinate frame of the wrist. However, this is easily taken care of through a
simple transformation. For the purpose of deriving the inverse kinematic
equations for each finger, it is assumed that all the origins coincide.

Each finger can be modeled by three links and three joints, as shown in Figure
68.

Figure 68: Kinematic Model of a Finger (three links, joint angles θ1, θ2, θ3, and
the fingertip at the end of link 3)

The above configuration is for the bottom finger, finger 3, since joint angles θ2
and θ3 are between 0° and 180°. This is called the elbow down configuration.
Joint angles θ2 and θ3 of the top two fingers are between 180° and 360° (see
Figure 69), thus this configuration is called elbow up.

Figure 69: Finger Positions on Robotic Hand

Due to the difference in the elbow configuration between fingers 1/2 and finger
3, it is necessary to look at the finger kinematics from two points of view: elbow
up and elbow down.
1.1 Forward Kinematics

Figure 70 shows the coordinate frame of a finger and its corresponding joint
variables to be used for the forward kinematics of the robotic hand. Solving the
forward kinematic equations for this manipulator requires the solution of the
following three sets of coordinates with respect to the (xf, yf, zf) coordinate
frame of the finger: (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3). {l1, l2, l3} are the link
lengths of links 1, 2, and 3 respectively.

where (xf, yf, zf) = coordinate frame of the finger, (x3, y3, z3) = fingertip
coordinate, and (θ1, θ2, θ3) = joint angles.

Figure 70: Finger Coordinate Frame and Joint Position Variables

1.1.1 Finger 1 and 2: elbow up Configuration


The origin of the finger, with respect to the finger coordinate frame, is at (0, 0, 0),
thus it needs no calculation. The endpoint of the first link is located at (x1, y1, z1)
and can be determined as follows:

x1 = l1 × cos θ1
y1 = l1 × sin θ1 (I-1)
z1 = 0

The next coordinate, (x2, y2, z2), is the endpoint of link 2:

x2 = (l1 + l2 × cos(2π − θ2)) × cos θ1
y2 = (l1 + l2 × cos(2π − θ2)) × sin θ1 (I-2)
z2 = −l2 × sin(2π − θ2)

(x3, y3, z3) is the last coordinate and it is the fingertip location of the finger.

x3 = (l1 + l2 × cos(2π − θ2) + l3 × cos(4π − θ2 − θ3)) × cos θ1
y3 = (l1 + l2 × cos(2π − θ2) + l3 × cos(4π − θ2 − θ3)) × sin θ1 (I-3)
z3 = −l2 × sin(2π − θ2) − l3 × sin(4π − θ2 − θ3)

Allowing (x0, y0, z0) to be the origin of the finger in the global frame, the
fingertip location with respect to the global frame is:

x3,g = (l1 + l2 × cos(2π − θ2) + l3 × cos(4π − θ2 − θ3)) × cos θ1 + x0
y3,g = (l1 + l2 × cos(2π − θ2) + l3 × cos(4π − θ2 − θ3)) × sin θ1 + y0 (I-4)
z3,g = −l2 × sin(2π − θ2) − l3 × sin(4π − θ2 − θ3) + z0

1.1.2 Finger 3: elbow down Configuration


The elbow down configuration is virtually identical to the elbow up configura-
tion, except for a few small differences. The origin of the finger with respect to
its own finger coordinate frame is still at (0, 0, 0) and (x1, y1, z1) is unchanged,
but note the differences in the calculation of (x2, y2, z2) and (x3, y3, z3):

x2 = (l1 + l2 × cos θ2) × cos θ1
y2 = (l1 + l2 × cos θ2) × sin θ1 (I-5)
z2 = l2 × sin θ2

and,

x3 = (l1 + l2 × cos θ2 + l3 × cos(θ2 + θ3)) × cos θ1
y3 = (l1 + l2 × cos θ2 + l3 × cos(θ2 + θ3)) × sin θ1 (I-6)
z3 = l2 × sin θ2 + l3 × sin(θ2 + θ3)
Again, allowing (x0, y0, z0) to be the origin of finger 3 in the global frame, the
fingertip location with respect to the global frame is:

x3,g = (l1 + l2 × cos θ2 + l3 × cos(θ2 + θ3)) × cos θ1 + x0
y3,g = (l1 + l2 × cos θ2 + l3 × cos(θ2 + θ3)) × sin θ1 + y0 (I-7)
z3,g = l2 × sin θ2 + l3 × sin(θ2 + θ3) + z0
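Since cos(2π − θ2) = cos θ2 and −sin(2π − θ2) = sin θ2, the elbow up expressions reduce algebraically to the same formulas as the elbow down ones, so a single function covers both configurations. A minimal sketch of the fingertip equations (function and parameter names are assumptions, not from the thesis):

```python
import math

def fingertip(l1, l2, l3, t1, t2, t3, origin=(0.0, 0.0, 0.0)):
    """Fingertip position in the global frame, per the Appendix I equations.
    Joint angles are in radians; t2, t3 in (0, pi) correspond to elbow down,
    and in (pi, 2*pi) to elbow up -- the same formulas cover both cases."""
    r = l1 + l2 * math.cos(t2) + l3 * math.cos(t2 + t3)   # horizontal reach
    x = r * math.cos(t1) + origin[0]
    y = r * math.sin(t1) + origin[1]
    z = l2 * math.sin(t2) + l3 * math.sin(t2 + t3) + origin[2]
    return x, y, z
```

For a fully extended finger (all joint angles zero) the fingertip lands at distance l1 + l2 + l3 along the finger's x axis, which is a quick sanity check on the formulas.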

1.2 Inverse Kinematics

Given the hand configuration illustrated in Figure 67 to Figure 69, it is now
possible to solve for the inverse kinematics of each finger, using the coordinate
frame of Figure 71.

where (xf, yf, zf) = coordinate frame of the finger, (x3, y3, z3) = fingertip
coordinate, and (θ1, θ2, θ3) = joint angles.

Figure 71: Sample Finger Configuration with Coordinate Frame
1.2.1 Finger 1 and 2: elbow up Configuration

Calculating the first joint angle,

θ1 = atan2(y3, x3)

where −π/2 ≤ θ1 ≤ 0 for finger 1
and 0 ≤ θ1 ≤ π/2 for finger 2

In addition,

l2 × sin θ2 + l3 × sin(θ2 + θ3) = z3
and l2 × cos θ2 + l3 × cos(θ2 + θ3) = √(x3² + y3²) − l1

therefore,

θ2 = atan2(z3, √(x3² + y3²) − l1) − atan2(l3 × sin θ3, l2 + l3 × cos θ3)

where (π < θ2 < 2π), with θ3 as obtained below.

Using the Cosine Law, squaring and adding the two projection equations above
gives

(√(x3² + y3²) − l1)² + z3² = l2² + l3² + 2 × l2 × l3 × cos θ3

but cos(2π − θ3) = cos θ3, so the solution in (π, 2π) is the elbow up one.

Then the third joint angle, θ3, is as in equation (I-17):

θ3 = acos[(x3² + y3² + z3² − 2 × l1 × √(x3² + y3²) + l1² − l2² − l3²) / (2 × l2 × l3)] (I-17)

where (π < θ3 < 2π)

1.2.2 Finger 3: elbow down Configuration

As it turns out, the equations are almost identical to the ones for θ1, θ2, and
θ3 in section 1.2.1, but the angle constraints are different:

θ1 = atan2(y3, x3), where (−π/2 ≤ θ1 ≤ π/2) (I-18)

θ2 = atan2(z3, √(x3² + y3²) − l1) − atan2(l3 × sin θ3, l2 + l3 × cos θ3),
where (0 < θ2 < π) (I-19)

θ3 = acos[(x3² + y3² + z3² − 2 × l1 × √(x3² + y3²) + l1² − l2² − l3²) / (2 × l2 × l3)],
where (0 < θ3 < π) (I-20)
1.3 Finger Constraints


Any finger can reach any point within its own workspace. The workspace of a
finger is the space within which it can reach a required target point with its
fingertip. Consequently, the physical configuration of each finger dictates the
size of the workspace and hence the constraints imposed on the finger.

Given a target point P = (px, py, pz), it is important to first check that the point
is within the workspace of the finger. If the point is not within the finger's
workspace, then there is no need to go on with the calculation of the joint
angles.

1.3.1 Finger 1 and Finger 2


The following constraints are stated with respect to each finger's individual
coordinate frame.

Assuming that:
the coordinate frame of the finger is {xf, yf, zf}
the lengths of the finger links are l1, l2, and l3
the target point is P = (px, py, pz)

then, the first two constraints are:

px ≥ 0 and pz ≤ 0

Using the notation in Figure 72, in conjunction with the Cosine Law, the third
finger constraint can be calculated. The Cosine Law relates the values c, l2, l3,
and α (the triangle angle between l2 and l3) as follows:

c² = l2² + l3² − 2 × l2 × l3 × cos α (I-23)

Therefore,

c < √(l2² + l3²) for 0 ≤ α < π/2
c > √(l2² + l3²) for π/2 < α ≤ π
c = √(l2² + l3²) for α = π/2

Furthermore,

c = √(a² + b²), where a = √(px² + py²) − l1 and b = pz (I-24)

Combining equations (I-23) and (I-24), the third finger constraint is:

√((√(px² + py²) − l1)² + pz²) < √(l2² + l3²) for 0 ≤ α < π/2
√((√(px² + py²) − l1)² + pz²) > √(l2² + l3²) for π/2 < α ≤ π (I-25)
√((√(px² + py²) − l1)² + pz²) = √(l2² + l3²) for α = π/2
Figure 72: Determining Workspace Constraints

1.3.2 Finger 3
The first two constraints of finger 3 differ from those of finger 1 and finger 2:

px ≥ 0 and pz ≥ 0

The third is as in equation (I-25).
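The reachability test implied by these constraints can be approximated in code. This sketch checks only that the target lies within reach of links 2 and 3 from the end of link 1 (the annulus between |l2 − l3| and l2 + l3), plus the per-finger sign constraints; the function name and the sign convention assumed for fingers 1 and 2 are illustrative, not the thesis's exact constraint set.

```python
import math

def in_workspace(l1, l2, l3, px, py, pz, finger=3):
    """Simplified workspace test: the fingertip target must be on the
    correct side of the palm for the given finger (assumed convention)
    and within reach of the two distal links from the end of link 1."""
    if finger == 3:
        if px < 0 or pz < 0:          # finger 3: elbow down side
            return False
    else:
        if px < 0 or pz > 0:          # fingers 1, 2: elbow up side (assumed)
            return False
    c = math.hypot(math.hypot(px, py) - l1, pz)   # distance from joint 2
    return abs(l2 - l3) <= c <= l2 + l3
```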


Appendix II

The Bayesian Formalism

Given a set of correlated events {A, B1, B2, ..., Bn}, the probability of event A,
denoted as P(A), can be calculated from the probability of the simultaneous occur-
rence of events A and Bi, P(A, Bi) [29]:

P(A) = Σ (i = 1 to n) P(A, Bi) (II-1)

Then, using Bayes' Rule, P(A, Bi) can be calculated as in equation (II-2):

P(A, Bi) = P(A | Bi) × P(Bi) (II-2)

where P(A | Bi) is the conditional probability that A will happen given Bi. Two
requirements are imposed on equations (II-1) and (II-2):

Σ (i = 1 to n) P(Bi) = 1 and 0 ≤ P(·) ≤ 1 (II-3)

where P(·) is any probability.

Combining equations (II-1) and (II-2), P(A) can be calculated given P(A | Bi) and
P(Bi) as in equation (II-4):

P(A) = Σ (i = 1 to n) P(A | Bi) × P(Bi) (II-4)

For example let:


A = event of rain
B1 = event of today being Monday
B2 = event of today being Tuesday
B3 = event of today being Wednesday
B4 = event of today being Thursday
B5 = event of today being Friday
B6 = event of today being Saturday
B7 = event of today being Sunday

let P(A | Bi) be as follows:

P(A | B1) = 0.10    P(A | B5) =
P(A | B2) = 0.20    P(A | B6) =
P(A | B3) = 0.40    P(A | B7) =
P(A | B4) = 0.50

and P(B1) = P(B2) = P(B3) = P(B4) = P(B5) = P(B6) = P(B7) = 1/7

The probability of A, rain, can be calculated by summing the products of the prob-
ability of rain on a given day and the probability of that day:

Thus, the probability of rain on any given day is 0.54.
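The worked example can be reproduced in a few lines via equation (II-4). The conditional probabilities for Friday through Sunday did not survive in the original table, so the last three values below are assumed for illustration only, chosen so that the total matches the thesis's stated result of 0.54.

```python
# Total probability, equation (II-4): P(A) = sum_i P(A | Bi) * P(Bi)
# Mon..Thu values are from the thesis; Fri..Sun values are assumed here.
p_rain_given_day = [0.10, 0.20, 0.40, 0.50, 0.90, 0.85, 0.83]  # Mon..Sun
p_day = [1.0 / 7.0] * 7                                        # uniform prior

p_rain = sum(pa * pb for pa, pb in zip(p_rain_given_day, p_day))
print(round(p_rain, 2))   # 0.54
```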


Appendix III

EP1 Background

The haptic exploration investigated in this thesis is that of a rolling finger on the
surface of the object, which was defined as EP1 by Charlebois, Gupta, and Payan-
deh [3].

EP1 is executed by slightly rolling the robot finger in the neighbourhood of the
contact point on the object and is done by rolling the finger along the object in a
cross pattern in two directions {u-direction, v-direction}.

The rolling must be done at a known and constant angular velocity around a fixed
axis in the instantaneous contact frame. This curvature estimation method is
based on the following equation:

ṗ = M⁻¹ × (K1 + K2)⁻¹ × ([−ωy, ωx]ᵀ − K2 × [vx, vy]ᵀ)

where,
p = contact point on the probe in [u, v] coordinates
M = fingertip metric
K1 = curvature form of the fingertip (known)
K2 = curvature form of the object in contact with the fingertip
[ωx, ωy] are the angular velocities of the fingertip's contact frame w.r.t. the
object's contact frame around the x and y axes
[vx, vy] are the linear velocities of the fingertip's contact frame w.r.t. the
object's contact frame in the x and y directions ([vx, vy, vz] = [0, 0, 0] without
slippage)
K2 can be solved for, and the diagonal elements of K2 give the normal curvatures in
the u and v directions.

The type of information which can be retrieved about an object with EP1 is the
surface curvature, i.e. the radius (r) of the object at the point of contact.
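A numerical sketch of the estimation step: assuming the no-slip form of the contact equation above with an identity fingertip metric, two independent rolling measurements suffice to solve a 2 × 2 linear system for K1 + K2 and hence recover the object's curvature form. All matrices and the spherical-object setup below are simulated assumptions, not thesis data.

```python
import numpy as np

# Assumed no-slip rolling model: p_dot = M^{-1} (K1 + K2)^{-1} w,
# with w = [-wy, wx]^T. Rolling about the x axis and then the y axis
# gives two (w, p_dot) pairs, enough to solve for K1 + K2.

r_finger, r_object = 1.0, 2.0          # assumed radii (spherical surfaces)
M = np.eye(2)                          # fingertip metric (assumed identity)
K1 = np.eye(2) / r_finger              # known fingertip curvature form
K2_true = np.eye(2) / r_object         # object curvature form (to recover)

# Simulated measurements: columns are w and p_dot for the two rolls
W = np.column_stack(([0.0, 1.0], [-1.0, 0.0]))
P_dot = np.linalg.inv(M) @ np.linalg.inv(K1 + K2_true) @ W

# Recovery: (K1 + K2) @ (M @ P_dot) = W  =>  solve for the sum, subtract K1
K_sum = W @ np.linalg.inv(M @ P_dot)
K2_est = K_sum - K1
radius = 1.0 / K2_est[0, 0]            # normal curvature -> object radius
```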
Bibliography

[1] S.P. Ananthanarayanan, D. Gershon, A.A. Goldenberg, and J. Mylopoulos,
"Introducing Robotic 'Common Sense' in Real Time Dextrous Manipulation,"
IEEE International Conference on Robotics and Automation, 1992, pp. 2776 - 2781.

[2] M. Charlebois, Exploring the Shape of Objects with Curved Surfaces using
Tactile Sensing, M.A.Sc. Thesis, Simon Fraser University, Burnaby, B.C.,
December, 1996.

[3] M. Charlebois, K. Gupta, and S. Payandeh, "Curvature Based Shape Estima-
tion Using Tactile Sensing," IEEE International Conference on Robotics and
Automation, April 1996, pp. 3502 - 3507.

[4] N. Chen, R. Rink, and H. Zhang, "Local Object Shape From Tactile Sensing,"
IEEE International Conference on Robotics and Automation, 1996, pp. 3496 - 3501.

[5] M.R. Cutkosky, Robotic Grasping and Fine Manipulation, Kluwer Aca-
demic Publishers, 1985, pp. 87 - 109.

[6] L.D. Erman, F. Hayes-Roth, V.R. Lesser, and D.R. Reddy, "The Hearsay-II
Speech-Understanding System: Integrating Knowledge to Resolve Uncer-
tainty," in Engelmore and Morgan (editors), Blackboard Systems, Addison-
Wesley Publishing Company, 1988, pp. 61 - 64.

[7] J.D. Foley, A. van Dam, S.K. Feiner, and J.F. Hughes, Computer Graphics:
Principles and Practice, Addison-Wesley Publishing Company, 1990, pp. 557 - 558.

[8] S. Franklin and A. Graesser, "Is It an Agent, or Just a Program?: A Taxon-
omy for Autonomous Agents," Proceedings of the Third International Work-
shop on Agent Theories, Architectures, and Languages, 1996.
[9] J.Y. Halpern, Y. Moses, and M.Y. Vardi, "Algorithmic Knowledge," Proceed-
ings of the Fifth Conference on Theoretical Aspects of Reasoning About
Knowledge (TARK), 1994, pp. 255 - 266.

[10] B. Hayes-Roth, "The Blackboard Architecture: A General Framework for
Problem Solving," Technical Report HPP-83-30, Stanford University, 1983.

[11] B. Hayes-Roth, "Opportunistic Control of Action in Intelligent Agents,"
IEEE Transactions on Systems, Man, and Cybernetics, 1992, pp. 1575 - 1587.

[12] M. Hong and S. Payandeh, "Novel Design of a Class of Robust and Dexter-
ous End-Effectors/Fixtures for Agile Assembly," IEEE International Confer-
ence on Systems, Man, and Cybernetics, vol. 2, 1996, pp. 1393 - 1398.

[13] M. Hong and S. Payandeh, "Design and Planning of a Novel Modular End-
Effector for Agile Assembly," IEEE International Conference on Robotics and
Automation, 1997, pp. 1529 - 1535.

[14] Z. Ji and B. Roth, "Direct Computation of Grasping Force for Three-Finger
Tip-Prehension Grasps," Journal of Mechanisms, Transmissions, and Automa-
tion in Design, December 1988, pp. 405 - 413.

[15] M. Kaneko, Y. Hino, and T. Tsuji, "On Three Phases for Achieving Envelop-
ing Grasps," IEEE International Conference on Robotics and Automation,
1997, pp. 385 - 390.

[16] M. Kaneko and K. Honkawa, "Contact Point and Force Sensing for Inner
Link Based Grasps," IEEE International Conference on Robotics and Automa-
tion, 1994, pp. 2809 - 2814.

[17] R.L. Klatzky and S.J. Lederman, "Stages of manual exploration in haptic
object identification," Perception & Psychophysics, 52 (6), 1992, pp. 661 - 670.

[18] R. Liscano, A. Manz, E.R. Stuck, R.E. Fayek, and J-Y. Tigli, "Using a Black-
board to Integrate Multiple Activities and Achieve Strategic Reasoning for
Mobile-Robot Navigation," IEEE Expert, April 1995, pp. 24 - 36.

[19] K. Nagata, T. Keino, and T. Omata, "Acquisition of an Object Model by
Manipulation with a Multifingered Hand," IEEE/RSJ International Conference
on Intelligent Robots and Systems, 1996, pp. 1045 - 1051.

[20] A. Newell, "Some Problems of Basic Organization in Problem-Solving Pro-
grams," Self-Organizing Systems, 1962, pp. 393 - 423.

[21] V.D. Nguyen, "Constructing Force-Closure Grasps," International Journal
of Robotics Research, vol. 7, no. 3, 1988, pp. 3 - 16.

[22] V.D. Nguyen, "The Synthesis of Stable Force-Closure Grasps," Technical
Report AI-TR-905, MIT Artificial Intelligence Laboratory, July 1986.

[23] M. Occello and M.C. Thomas, "Systèmes Multi-Agents Temps Réel: Un
Modèle d'organisation basé sur le Concept de Blackboard," Revue d'Intelli-
gence Artificielle, pp. 1 - 25.

[24] A.M. Okamura, M.L. Turner, and M.R. Cutkosky, "Haptic Exploration of
Objects with Rolling and Sliding," IEEE International Conference on Robotics
and Automation, 1997, pp. 2485 - 2490.

[25] T. Omata and T. Sekiyama, "Transition from Enveloping to Fingertip
Grasp: A Way of Reorientation by a Multifingered Hand," IEEE International
Conference on Robotics and Automation, 1997, pp. 1004 - 1005.

[26] A. Ordean and S. Payandeh, "Design and Analysis of an Enhanced Oppor-
tunistic System for Grasping Through Evolution," Third ECPD International
Conference on Advanced Robotics, Intelligent Automation and Active Sys-
tems, 1997, pp. 239 - 244.

[27] L. Overgaard, B.J. Nelson, and P.K. Khosla, "A Multi-Agent Framework
For Grasping Using Visual Servoing and Collision Avoidance," IEEE Interna-
tional Conference on Robotics and Automation, April 1996, pp. 2456 - 2461.

[28] S. Payandeh and A.A. Goldenberg, "Knowledge-Based Approach to
Grasping," IEEE Conference of the Industrial Electronics Society, 1988.

[29] J. Pearl, Probabilistic Reasoning in Intelligent Systems, Morgan Kaufmann
Publishers, 1988.

[30] E. Piat and D. Meizel, "Proposal of a Probabilistic Believes Fusion Frame-
work Application to Range Data Fusion," IEEE/RSJ International Conference
on Intelligent Robots and Systems, 1997, pp. 1415 - 1422.
[31] N.S. Pollard, "The Grasping Problem: Toward Task-Level Programming for
an Articulated Hand," Technical Report AI-TR 1214, MIT Artificial Intelligence
Laboratory, May 1990.

[32] K.S. Roberts, "Robot Active Touch Exploration: Constraints and Strate-
gies," IEEE International Conference on Robotics and Automation, 1990, pp.
980 - 985.

[33] M.A. Rodrigues, Y.F. Li, M.H. Lee, J.J. Rowland, and C. King, "Robotic
Grasping of Complex Objects Without Full Geometrical Knowledge of the
Shape," IEEE International Conference on Robotics and Automation, 1995, pp.
737 - 742.

[34] M. Seitz and J. Kraft, "Some Approaches to Context Based Grasp Planning
for a Multi-fingered Gripper," IEEE/RSJ International Conference on Intelli-
gent Robots and Systems, September 1994, pp. 360 - 365.

[35] K.B. Shimoga, "Robot Grasp Synthesis Algorithms: A Survey," The Interna-
tional Journal of Robotics Research, vol. 15, no. 3, June, 1996, pp. 230 - 266.

[36] S.A. Stansfield, "Knowledge-Based Robotic Grasping," IEEE International
Conference on Robotics and Automation, vol. 2, 1990, pp. 1270 - 1275.

[37] A. Tailor, "MXA - A Blackboard Expert System Shell," in Engelmore and
Morgan (editors), Blackboard Systems, Addison-Wesley Publishing Company,
1988, pp. 315 - 332.

[38] R. Tomovic, G.A. Bekey, and W.J. Karplus, "A Strategy for Grasp Synthesis
with Multifingered Robot Hands," IEEE International Conference on Robotics
and Automation, 1987, pp. 83 - 89.

[39] G. Wohlke, "The Karlsruhe Dextrous Hand: Grasp Planning, Program-
ming, and Real-Time Control," IEEE/RSJ International Conference on Intelli-
gent Robots and Systems, 1994, pp. 352 - 359.

[40] T. Yoshikawa, "Passive and Active Closures by Constraining Mechanisms,"
IEEE International Conference on Robotics and Automation, April 1996, pp.
1477 - 1484.
