
Robotics and Autonomous Systems 54 (2006) 1026–1038

www.elsevier.com/locate/robot

Automatic visual guidance of a forklift engaging a pallet


Michael Seelinger a,∗ , John-David Yoder b
a Yoder Software, Inc., 715 W. Michigan Ave., Urbana, IL 61801, United States
b Department of Mechanical Engineering, Ohio Northern University, Ada, OH 45810, United States

Received 4 June 2004; received in revised form 11 September 2005; accepted 2 October 2005
Available online 1 August 2006

Abstract

This paper presents the development of a prototype vision-guided forklift system for the automatic engagement of pallets. The system is
controlled using the visual guidance method of mobile camera-space manipulation, which is capable of achieving a high level of precision in
positioning and orienting mobile manipulator robots without relying on camera calibration. The paper contains development of the method, the
development of a prototype forklift as well as experimental results in actual pallet engagement tasks. The technology could be added to AGV
systems enabling them to engage arbitrarily located pallets. It also could be added to standard forklifts as an operator assist capability.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Mobile manipulation; Visual guidance; Machine vision

1. Introduction

This paper presents the development of a prototype system for the vision-guided automatic engagement of arbitrarily-positioned standard pallets by a computer-controlled forklift. The method presented here enables a robotic forklift vehicle to engage pallets based on their actual current location by using feedback from vision sensors that are part of the robotic forklift. The development of this technology could advance the current state of the art in material handling in two distinct ways.

First, the technology could be added to commercially available AGV (automatically guided vehicle) material handling systems. Typically these systems use floor-embedded wires or laser beacons located throughout their areas of operation to navigate the AGVs through the warehouse and to align themselves for pallet engagement. While most AGV systems are used for horizontal transportation of material, leaving the task of storing pallets in racks up to humanly driven standard forklifts, there are some AGV systems capable of stowing and retrieving pallets from rack locations. In order to engage a specific pallet, the pallet's vertical position (or rack height) as well as its position in the reference frame of the map must be known to the AGV system. AGVs are effective in engaging pallets that have been placed in the racks by the AGVs themselves because the AGVs are capable of placing the pallets to within 1 cm of the desired location [1]. AGV pallet engagement assumes the pallet is positioned to within 1 cm of the nominal desired location. If a pallet is placed in the rack by a human operating a standard forklift, there is no assurance that the pallet will be located precisely enough for an AGV system to engage it. Other factors can also cost an AGV the precision necessary for pallet engagement, such as disruption of a laser beacon signal, loss of communication with the guiding wires, or degradation of the system's calibration. (Note: background material on AGV systems can be found in [1] and/or online at [2].)

The use of the vision-guided method for pallet engagement presented here would provide AGV systems the flexibility to perform their final positioning and engagement based on the pallet's actual location relative to the AGV. This would enable human operators to place pallets amidst the previously AGV-only racks, affording much more flexibility and interaction between AGV material handling units and human operators on standard forklifts. Also, the ability to engage pallets based on their actual locations would enable AGV systems to operate in less structured environments. For example, rather than only working in carefully laid-out warehouses, AGVs could unload pallets automatically off tractor-trailers, currently a very labor intensive task.

∗ Corresponding author. Fax: +1 217 344 2987.
E-mail address: mseelinger@yodersoftware.com (M. Seelinger).

0921-8890/$ - see front matter © 2006 Elsevier B.V. All rights reserved.
doi:10.1016/j.robot.2005.10.009

Fig. 1. Crown SC 3-wheeled electric forklift prior to and after retrofit.

The second way that the vision-guided pallet engagement method presented here could advance the current state of the art would be to add it to standard, manually-guided forklifts. The pallet engagement process can be difficult for forklift drivers due to the fact that their line of sight is obstructed by the mast of the forklift itself. This is especially true when loading/unloading very high racks. The vision guidance control method presented here could be added to standard forklifts to automate the pallet-engagement portion of the forklift material handling operation. The driver would have the responsibility of navigating through the warehouse or other building, avoiding obstacles and getting the forklift into view of the desired pallet. Then, the operator could switch into 'automatic-engagement mode', which would enable the system to engage the pallet automatically. Such a system could reduce the amount of product damage that occurs in forklift accidents involving pallet engagement and transportation of product. Potentially, it could also improve safety, as it would allow the forklift to engage pallets safely, avoiding the possibility of knocking products off pallets that are located very high off the ground.

The vision-guided method for forklift pallet engagement presented in this paper is called 'Mobile Camera-Space Manipulation' or simply 'MCSM'. It was developed originally for the visual guidance of a planetary exploration rover equipped with a robotic arm [3] and has been adapted and further developed to control the forklift prototype system presented here and shown in Fig. 1. While the method requires that at least two vision sensors be attached to the forklift system, it does not require the system to maintain any sort of strict calibration of the cameras relative to the system, nor of the cameras relative to each other. Such calibrated relationships would be difficult to maintain in the harsh climate typically encountered by industrial forklifts. Likewise, it would be difficult to establish and maintain these calibrated relationships on many forklift systems due to the low-precision (relative to typical industrial robotics) nature of the mechanics of these models. MCSM uses vision-based estimation along with the nominal kinematic model of the forklift and a simple camera model to achieve a high level of positioning precision in a robust fashion. This precision and robustness has been demonstrated by experimental results of the forklift prototype, which will be presented in Section 6.

Following this introduction, the paper is divided into five sections. Section 2 gives background for the reader to understand the current state of the art in vision-guided pallet engagement by forklift vehicles as well as some other vision-guided mobile manipulator systems. Section 3 gives some necessary background information on standard CSM, which is a basis for developing MCSM. Section 4 presents the prototype forklift used in the development and testing of the method. Section 5 presents how the method of MCSM is developed to control the forklift; such topics as target definition, trajectory generation and tracking, and vision-tracking will be discussed. The experimental results are given in Section 6. Section 7 contains the conclusions.

2. Background

MCSM is a method for controlling mobile manipulators via vision sensors that are carried with the mobile system. The term mobile manipulator refers to a system consisting of a manipulator with holonomic degrees of freedom (DOFs) mounted on a mobile base which has non-holonomic DOFs. (See [4] for a description of the difference between holonomic and non-holonomic constraints.) A forklift is an example of a mobile manipulator. The forks of the forklift can be thought of as a manipulator. Typical forklifts, including the one discussed in this paper, have three holonomic fork DOFs: vertical, sideshift, and tilt angle. The mobile base of the forklift has two non-holonomic DOFs of control: the angle of the steering wheel as well as the angle of the drive wheel(s). These can be used to position the forklift in three DOFs: the (X, Y) location of the forklift in the plane parallel to the ground as well as its angle of orientation. Other examples of mobile manipulators include exploration rovers equipped with robotic arms and construction equipment such as backhoes or front loaders.

MCSM is an extension of standard Camera-Space Manipulation (CSM), a control strategy that has proven through years of testing to provide highly accurate control of holonomic systems using fixed cameras [5–7]. Standard CSM has even been used to control a mobile manipulator, a miniature 'simulated' forklift [8,9].
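The two non-holonomic base DOFs described above can be illustrated with a minimal rear-steer tricycle kinematic sketch. This model is not taken from the paper; it assumes the wheelbase d_a = 52.25 in. reported in Section 4, ignores steering play and wheel slip, and uses a sign convention of our choosing.

```python
import math

DA = 52.25  # rear steering wheel to front axle, inches (Section 4)

def base_step(x, y, phi, v, gamma, dt):
    """Advance the planar pose (x, y, phi) of the front-axle midpoint.

    v     : forward speed of the front-axle midpoint (in/s)
    gamma : steering angle of the rear wheel (rad)
    dt    : integration time step (s)

    Illustrative tricycle sketch only; the paper's controller works in
    camera space and never exposes this exact update.
    """
    x += v * math.cos(phi) * dt
    y += v * math.sin(phi) * dt
    # Yaw rate of a tricycle: v * tan(gamma) / wheelbase.
    phi += (v / DA) * math.tan(gamma) * dt
    return x, y, phi
```

With gamma = 0 the base drives straight; a constant nonzero gamma traces a circular arc of radius d_a / tan(gamma), which is why the two drive-wheel angles plus the steering angle suffice to position the base in its three planar DOFs.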

However, standard CSM is ill-suited to controlling mobile manipulators, since it is limited by the need to control the mobile system using stationary cameras, which severely limits the workspace of the mobile system. MCSM overcomes this limitation of standard CSM since it allows the cameras to move with the mobile manipulator.

Pagès et al. present a computer vision system for automatic forklift pallet engagement [10]. Their method incorporates the use of a single calibrated camera. Their work focuses more on the pallet recognition method of the vision system rather than on the overall visual control of the forklift system itself. Currently, their system is configured to engage pallets that are located on the ground level only. The trajectory generation method presented by Pagès is primitive: it incorporates a scheme of turn, drive straight, turn, drive straight. While this method of trajectory generation does allow the forklift to engage the pallet, it is slow and impractical, as the system must frequently stop and start. Pagès presents some experimental results; however, as with Miller [9], the vehicle used was not an actual forklift but a small mobile robot with forks attached to the manipulator intended to simulate the motion of an actual forklift. The system presented in this paper differs in several respects. MCSM requires the use of at least two cameras; however, it does not require camera calibration. The strategy for pallet recognition employed with MCSM differs substantially from Pagès' system. Also, the MCSM system is not limited to engaging pallets that are located only on the floor. Unlike the system of Pagès et al., the trajectory generation and tracking method used with MCSM enables our system to engage a pallet much the same way a human operator would drive a forklift, without the need for stopping and starting.

The only other vision-guided forklift system that the authors are aware of is the Automated Material Transport System, which is under development at the National Robotics Engineering Consortium [11]. The development of this system seems to rely heavily on computer vision both for the navigation throughout the warehouse as well as the engagement of pallets. Unfortunately, there is no public information available that describes the methods employed for pallet engagement.

Several manufacturers provide forklift-style AGV systems [12–15]. Product information available through company websites focuses on load capacity, fork reach, repeatability, and, in the case of warehousing applications, the aisle width in which the AGV can operate. Currently AGV companies do not offer vision systems as part of their control package for the automatic engagement of as-located pallets.

Other examples of vision-guided mobile manipulators in the literature include systems proposed by Pissard-Gibollet and Rives, and Swain and Devy [16,17]. The goal of their system is to navigate the mobile robot using a vision sensor held by a manipulator mounted on the mobile base. They do not address the task of actually engaging an object with the manipulator mounted on the mobile platform. MacKenzie and Arkin describe a vision-guided mobile manipulator for drum sampling [18]. This system servos the mobile base into position using a camera mounted on the manipulator's waist. Then the manipulator engages a drum via a visual servoing technique guided by a separate vision sensor with an eye-in-hand configuration. MCSM differs from MacKenzie and Arkin's approach in several respects. For instance, MCSM uses the same vision sensors for navigating the mobile platform as well as positioning the manipulator. Also, MCSM's visual control approach is based on an open-loop method rather than the closed-loop visual servoing technique. The advantages of the open-loop approach over closed-loop visual servoing techniques are discussed in [19]. Tsakiris, Rives, and Samson present a vision-guided mobile manipulator also based on visual servoing techniques [20]. An eye-in-hand vision system configuration guides the mobile robot and its onboard multiple-DOF manipulator. The goal of their research is to present methods for achieving stable pose control of both the non-holonomic base as well as the manipulator-mounted camera, which they achieve by the addition of holonomic DOFs for positioning the camera. Aside from the differences in open-loop versus closed-loop control, their system uses only one camera, mounted in the eye-in-hand configuration, whereas MCSM systems use at least two cameras, neither of which is mounted on the manipulator.

3. MCSM as an extension of standard CSM

Standard CSM plays an integral role as a subsystem of the MCSM forklift system. While a more complete description of standard CSM is given in [5,6], it is important to discuss certain basic CSM principles in order to understand the MCSM forklift system. The first part of this section develops CSM as it is used to control purely holonomic systems. The second part of this section describes how standard CSM has been used to control a 'simulated' forklift mobile manipulator.

3.1. Standard holonomic CSM

In typical implementations of standard CSM for controlling holonomic manipulators, CSM uses widely separated cameras located remotely from the robot they control. CSM works by estimating a relationship, in the reference frame of each camera, between the appearance of image-plane visual features located on the manipulator and the internal joint configuration of the robot. This relationship, f, is based on the orthographic camera model [21] and is described with a set of view parameters given by C = [C_1, C_2, ..., C_6]^T:

x_c = f_x(C, Θ) = (C_1^2 + C_2^2 − C_3^2 − C_4^2) X(Θ) + 2(C_2 C_3 + C_1 C_4) Y(Θ) + 2(C_2 C_4 − C_1 C_3) Z(Θ) + C_5
y_c = f_y(C, Θ) = 2(C_2 C_3 − C_1 C_4) X(Θ) + (C_1^2 − C_2^2 + C_3^2 − C_4^2) Y(Θ) + 2(C_3 C_4 + C_1 C_2) Z(Θ) + C_6    (1)

where (x_c, y_c) is the position in the reference frame of one of the cameras and Θ is the pose of the manipulator (see Fig. 2 for definition of coordinate systems).
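Eq. (1) is straightforward to evaluate. The sketch below assumes the standard quaternion-based orthographic form (the signs of the squared terms are partially illegible in the scanned equation), and the function name is ours, not the paper's:

```python
def camera_space(C, X, Y, Z):
    """Orthographic camera-space projection of Eq. (1).

    C is the 6-vector of view parameters [C1..C6]; (X, Y, Z) is a point
    on the manipulator expressed in the robot base frame (each a
    function of the pose Theta).  Returns the predicted camera-space
    coordinates (xc, yc).
    """
    C1, C2, C3, C4, C5, C6 = C
    xc = ((C1**2 + C2**2 - C3**2 - C4**2) * X
          + 2 * (C2 * C3 + C1 * C4) * Y
          + 2 * (C2 * C4 - C1 * C3) * Z + C5)
    yc = (2 * (C2 * C3 - C1 * C4) * X
          + (C1**2 - C2**2 + C3**2 - C4**2) * Y
          + 2 * (C3 * C4 + C1 * C2) * Z + C6)
    return xc, yc
```

The first four parameters play the role of an (unnormalized) quaternion whose magnitude absorbs the orthographic scale factor, while C5 and C6 are the image-plane offsets.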

The position of a point on the manipulator is measured relative to the (X, Y, Z) coordinate system, which is fixed to the base of the manipulator, as can be seen in Fig. 2. It is dependent upon the nominal kinematic model of the robot and its corresponding pose, Θ. The view parameters are estimated based on sample pairs of robot internal joint configuration and camera-space location of manipulator features. The minimization of

Γ = Σ_{k=1}^{n_s} [ (x_c^k − f_x(C, Θ_k))^2 + (y_c^k − f_y(C, Θ_k))^2 ] W_k    (2)

over all C provides the necessary conditions to solve for the six view parameters for each camera. In Eq. (2), n_s denotes the total number of joint-space/camera-space samples. The W_k of Eq. (2) is a relative weighting factor that enables the system to place a higher relative weight on samples acquired near a particular region of interest. When all samples are weighted equally, for instance in the initialization of the view parameters, W_k = 1. When the manipulator approaches its target, incoming samples might be assigned a weight of W_k = 10 or W_k = 20 or some other weight higher than samples acquired more distant from the current pose. This skews the view parameter relationship towards the more heavily weighted data so that the system is more accurate in this heavily weighted region of joint and camera space.

Fig. 2. Coordinate system definitions for robot and cameras.

While CSM uses the orthographic camera model and the nominal kinematic model of the robot, it still can be stated truthfully that CSM, and therefore MCSM as well, is a calibration-free method. The reasons for this derive from the use of estimation and the fact that the system automatically skews the relationships described by Eq. (1) such that they work well in a local region of operation. In other words, Eq. (1) could describe a calibrated relationship if the view parameters were established and held fixed. However, these view parameters change frequently, either through updating with new visual information or simply by skewing the information used to estimate them, such that the relationship described by Eq. (1) holds in a localized region of joint-space and camera-space operation. Due to the fact that the measurement and positioning are carried out in the same reference frames (those of the participating cameras), CSM compensates for the inadequacies of the orthographic camera model as well as inaccuracies in the kinematic model of the robot. Even if the kinematics of the system were to be altered during a given operation (for instance, if a joint is deflected by the load it carries), CSM can compensate by simply acquiring more joint-space/camera-space samples. Experimental results demonstrating this ability are presented in [3].

Additionally, a process called 'flattening' has been developed to increase the precision of CSM/MCSM systems by correcting for the inadequacies of the orthographic camera model. Flattening uses orthographic camera model parameters and a rough estimate of the distance between the robot reference frame, (X, Y, Z) in Fig. 2, and the camera reference frames, (X_c1, Y_c1, Z_c1) and (X_c2, Y_c2, Z_c2) in Fig. 2, to obtain identical results as would have been achieved by using the more accurate 'pin-hole' camera model (see [7] for more detail).

3.2. Non-holonomic CSM

A form of standard CSM called 'non-holonomic CSM' has been used to control a small mobile manipulator, a 'simulated forklift', that is capable of engaging a small pallet [8,9]. MCSM was developed, in part, to overcome some of the restrictions and disadvantages of using this form of CSM to control mobile manipulators. CSM requires a set of cameras that are stationary relative to the mobile manipulator system. This severely limits the workspace of a mobile system and makes the use of CSM impractical for the task of pallet engagement by an industrial forklift. CSM's need for stationary cameras creates other complications, such as reduced camera resolution as the system moves further from the cameras, the possibility of the system itself obstructing the view of the camera(s), and maintaining communication between the remote cameras and the mobile vehicle. The relationship between the physical, 3D position of manipulator features and their corresponding camera-space locations as described in Eq. (1) now depends on both the holonomic DOFs of the manipulator and the non-holonomic DOFs of the mobile base.

3.3. Introduction to MCSM

MCSM retains the strengths of CSM while removing the restrictions and disadvantages of using CSM to control mobile manipulators. MCSM allows the cameras to be mounted to the mobile platform. With MCSM, the relationship between the physical, 3D position of manipulator features and their corresponding camera-space locations as described in Eq. (1) is purely holonomic, whereas with standard CSM the relationship depends on both the holonomic and non-holonomic DOFs. There are several advantages to this. First, the view parameters can be found without numerical integration of differential relationships. This simplifies the view parameter estimation procedure and the trajectory updating procedure.
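The minimization of Eq. (2) can be carried out with any nonlinear least-squares solver; the paper does not say which solver the authors use. The following is a sketch using SciPy, with the per-sample weights W_k entering as sqrt(W_k) factors on the residuals so that the solver minimizes exactly the weighted sum of squared errors:

```python
import numpy as np
from scipy.optimize import least_squares

def f_xy(C, P):
    """Eq. (1): predicted camera-space coordinates for 3D points P (n x 3)."""
    C1, C2, C3, C4, C5, C6 = C
    X, Y, Z = P[:, 0], P[:, 1], P[:, 2]
    xc = ((C1**2 + C2**2 - C3**2 - C4**2) * X
          + 2 * (C2 * C3 + C1 * C4) * Y
          + 2 * (C2 * C4 - C1 * C3) * Z + C5)
    yc = (2 * (C2 * C3 - C1 * C4) * X
          + (C1**2 - C2**2 + C3**2 - C4**2) * Y
          + 2 * (C3 * C4 + C1 * C2) * Z + C6)
    return xc, yc

def estimate_view_params(P, xc_obs, yc_obs, W, C0):
    """Minimize Gamma of Eq. (2) over C.

    P       : n_s x 3 array of manipulator-feature positions (from the
              nominal kinematic model at each sampled pose).
    xc_obs, yc_obs : observed camera-space coordinates of those features.
    W       : per-sample weights W_k; C0 : initial guess for C.
    A sketch only; the paper's own estimator is not published.
    """
    def residuals(C):
        xc, yc = f_xy(C, P)
        w = np.sqrt(W)  # weights multiply the *squared* errors in Eq. (2)
        return np.concatenate([w * (xc_obs - xc), w * (yc_obs - yc)])
    return least_squares(residuals, C0).x
```

Because the first four parameters enter quadratically, C and -C produce identical projections, so the recovered parameters should be compared through their predictions rather than element by element.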

It also reduces the computational cost. The separation of the non-holonomic information from the holonomic information in the estimation model increases the accuracy of the view parameter estimates. Due to problems such as wheel slip and numerical estimation errors, information from the non-holonomic DOFs tends to be much less accurate and less reliable than information from the holonomic DOFs. With standard CSM, the accuracy of the view parameter estimates, and therefore the overall precision of the system, is dependent upon the relatively inaccurate non-holonomic information. In contrast, the overall precision of MCSM systems is not affected by any non-holonomic inaccuracies.

In MCSM systems the camera locations are not known precisely relative to the rest of the system nor to each other. The system performs equally well over a large range of camera locations, even if the camera position and orientation change substantially. As visual data is acquired, the system adapts to the current location of each camera. In contrast, calibrated vision systems for robot guidance require that the position and orientation of the cameras be known relative to each other as well as relative to the robot (see for instance [22,23] for examples of calibrated vision systems). As the calibration of the system degrades, the positioning precision of the system degrades substantially. External factors such as vibration, thermal gradients, or physical damage to the system can easily cause a system to fall out of calibration. It would be difficult if not impossible to maintain calibration if a calibrated vision system were to be used to control a typical industrial forklift.

4. Forklift prototype

This section gives a brief description of the forklift prototype used for developing and testing MCSM. A used Crown SC series three-wheeled electric lift truck with a load capacity of 3200 lbs and a vertical reach of 24 ft was acquired for use as the prototype. In order to use MCSM to control the forklift, cameras had to be added to the system and the forklift had to be modified to enable computer control of all of its actuators. Fig. 1 shows the forklift prior to and after retrofit.

A platform for housing the computer and controls was built on top of the forklift's cage. The computer that runs the system has a 450 MHz Celeron processor and runs Windows NT 4.0. An Acroloop 8000 8-axis motion control card with break-out box controls the motion of all of the actuators, reads all of the analog feedback devices, and controls several input/output channels. The Sony XC-75 cameras with wide-angle lenses are controlled using two Imagenation PX-610 frame grabbers, each having a resolution of 640×480 pixels. Five IDC (Industrial Devices Corporation) linear actuators are used to control various functions of the forklift. Three of the five are used to move the forklift's hydraulic valves that control motion of the forks in the vertical, sideshift, and tilt angle directions. One linear actuator is used to engage the accelerator, the other to engage the brake. These five linear actuators are powered with DC power supplies and Copley amplifiers. A Kollmorgen motor is used to control the angle of the steering wheel. In addition to these actuators, there are three analog feedback devices that are used by the system. Two of these devices are Penny + Giles Controls Inc. linear potentiometers, which give feedback regarding the sideshift position of the forks as well as the current tilt angle of the forks. The third analog feedback device is a UniMeasure, Inc. linear position transducer. This device gives the system feedback regarding the vertical position of the forks. Optical encoders are mounted via rolling contact on each of the two drive wheels. These give the system feedback on exactly how the drive wheels have moved. Assuming no slip, this information can be used in a numerical integration scheme to determine the path that the forklift has actually traversed.

Computer control of the forklift is divided into three categories: control of the fork position, control of the forklift speed, and control of the forklift steering. Motion control programs incorporating PID control laws were written for the Acroloop controller in its own language, AcroBasic. Through experimentation it was determined that the computer system is able to control the position of the forks to within 1/3 in. This precision is a function of this particular forklift prototype system and not a limitation of MCSM or of MCSM implemented on forklifts in general. (MCSM was used in [3] to achieve seven times more precise positioning of a mobile manipulator than the 1/3 in. reported here.)

Also through experimentation, it was determined that there was a significant amount of 'play' in the steering system. While a nominal relationship exists between the position of the steering motor and the position of the steering wheel (there is no direct feedback of the wheel angle), it was determined that there was up to 10° of play between the nominal target position for the steering angle and the effective steering angle. It is possible to use information from the optical encoders mounted to the drive wheels to find the effective steering angle after a motion has begun. Using this information, a correction routine was added to the Acroloop steering program in an effort to improve the accuracy of the steering of the system. As the forklift moves, it monitors the effective steering angle, compares it to the desired steering angle, and issues corrections to the steering motor. This procedure is described in more detail in Section 5.5.

It is recognized that this prototype would be inadequate for actual industrial use. For instance, the wiring is exposed, which would be very dangerous under normal industrial operating conditions. Also, the system is currently limited in speed. This is due primarily to the inadequacy of the hardware chosen for the prototype. While inadequate for actual industrial use in its current state, the forklift prototype has served its purpose in demonstrating the efficacy of the vision-guided method presented in this paper.

Schematic views of the forklift prototype are given in Fig. 3. The three holonomic DOFs of the forks are characterized by θ_v for the vertical direction, θ_s for the sideshift motion, and θ_t for the angle of tilt. The angular positions of the two drive wheels are θ_1 and θ_2 respectively. The steering angle of the rear wheel of the forklift is represented as γ. The angles θ_t, θ_1, θ_2, and γ are measured in radians, and the quantities θ_v and θ_s are measured in inches. The distance from the rear steering wheel to the front axle is denoted as d_a, where d_a = 52.25 in.
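One plausible way to recover the effective steering angle from the two drive-wheel encoder rates, as the correction routine described above must do, follows from the instantaneous-center-of-rotation geometry of a rear-steer tricycle: the difference in drive-wheel speeds divided by their sum equals d_b tan(γ)/d_a. The derivation and sign convention here are ours; the actual AcroBasic routine is not published.

```python
import math

DA, DB, R = 52.25, 17.45, 6.66  # geometry from Section 4 (inches)

def effective_steering(omega1, omega2):
    """Estimate the effective steering angle gamma (rad) from the two
    drive-wheel angular rates omega1, omega2 (rad/s), assuming no slip.

    For a rear-steer tricycle the instantaneous center of rotation lies
    on the front-axle line at distance da/tan(gamma) from the axle
    midpoint, so the wheel rim speeds satisfy
        (v1 - v2) / (v1 + v2) = db * tan(gamma) / da.
    Which wheel is "1" fixes the sign of gamma; that choice is ours.
    """
    v1, v2 = R * omega1, R * omega2  # rim speeds of the drive wheels
    if abs(v1 + v2) < 1e-9:
        raise ValueError("vehicle is not moving; gamma is unobservable")
    return math.atan((DA / DB) * (v1 - v2) / (v1 + v2))
```

A correction loop would then compare this estimate against the commanded angle and nudge the Kollmorgen steering motor until the difference falls inside the observed play band.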

Fig. 3. Top and side views of forklift prototype schematic.

The half-length between the two drive wheels is denoted as d_b, where d_b = 17.45 in. The distance from the front axle to the end of the forks is denoted as d_f, where d_f = 56.0 in. The radius of the drive wheels is denoted as R, where R = 6.66 in. It is important to note that the (X_t, Y_t, Z_t) coordinate system, as shown in Fig. 3, is located midway between the forks and travels with the forks. The (X_w, Y_w, Z_w) coordinate system is fixed midway between the two front wheels. It is possible to transform any point from the (X_t, Y_t, Z_t) coordinate system to the (X_w, Y_w, Z_w) coordinate system using the forward kinematics:

X_w(Θ) = (cos θ_t − 0.01 sin θ_t) X_t − (0.01 cos θ_t + sin θ_t) Z_t + 59.41 cos θ_t + (6.28 − θ_v) sin θ_t − 3.41
Y_w(Θ) = Y_t + θ_s    (3)
Z_w(Θ) = (sin θ_t + 0.01 cos θ_t) X_t + (cos θ_t − 0.01 sin θ_t) Z_t + 59.41 sin θ_t + (θ_v − 6.28) cos θ_t + 0.8

where Θ = (θ_v, θ_s, θ_t). The functions X_w(Θ), Y_w(Θ), and Z_w(Θ) correspond to the position vector functions X(Θ), Y(Θ), and Z(Θ) used in Eq. (1). It should be noted that several parameters describing the physical layout of the system have been combined into numerical constants in Eq. (3) for simplicity.

5. MCSM implementation on forklift prototype

The task for which MCSM was implemented on the forklift prototype was a simple engagement of an as-located pallet. The strategy for engaging the pallet with the MCSM algorithm is broken down as follows. The first task is the visual identification of the pallet; two methods for visual identification are discussed. Once the pallet is identified, camera-space targets are generated. These targets are used to resolve the three fork DOFs necessary to engage the pallet. Also, the targets are used to create a trajectory for the forklift to follow. As the forklift moves towards the target, additional visual samples are acquired. The targets, and subsequently the target fork pose and trajectory, are updated with new information as it becomes available from the acquisition and processing of new images.
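The forward kinematics of Eq. (3) translate directly into code. In the sketch below only the function name is our invention; the numerical constants are the paper's.

```python
import math

def fork_to_wheel_frame(theta_v, theta_s, theta_t, Xt, Yt, Zt):
    """Forward kinematics of Eq. (3): map a point from the fork frame
    (Xt, Yt, Zt) into the wheel frame (Xw, Yw, Zw).

    theta_v and theta_s are in inches; theta_t is in radians.  As noted
    in the paper, the constants fold together several physical layout
    parameters of the prototype.
    """
    c, s = math.cos(theta_t), math.sin(theta_t)
    Xw = ((c - 0.01 * s) * Xt - (0.01 * c + s) * Zt
          + 59.41 * c + (6.28 - theta_v) * s - 3.41)
    Yw = Yt + theta_s
    Zw = ((s + 0.01 * c) * Xt + (c - 0.01 * s) * Zt
          + 59.41 * s + (theta_v - 6.28) * c + 0.8)
    return Xw, Yw, Zw
```

With zero tilt the transform reduces to a pure translation: X_w = X_t − 0.01 Z_t + 56.0, Y_w = Y_t + θ_s, and Z_w = Z_t + θ_v − 5.48, which matches the fork-length constant d_f = 56.0 in.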

of concentric black and white circles. As shown in Fig. 4


three such fiducials are placed on each pallet: a white-centered
fiducial in the middle and two black-centered fiducials on either
side of the pallet. The forklift system is able to identify the
pallet if it can locate the white-centered fiducial and at least
one of the black-centered fiducials in both of its cameras.
While it is a bit of a concession to have fiducials on pallets as
opposed to using the pallet’s natural features, it should be noted
that this is not necessarily an unreasonable nor prohibitively
expensive proposition. Many warehousing operations purchase
custom made pallets to meet their particular needs. It would be
relatively simple to make fiducials part of custom-made pallets.
YSI has discussed this possibility with a pallet manufacturer
who indicated that adding fiducials similar to those employed
in our experiments to custom pallets would be quite simple and
would not drive up the cost of the pallets significantly.
The authors developed a simple method for pallet recog-
nition using only natural features for the purpose of demon-
strating that MCSM is not limited by a need for fiducials
Fig. 4. Pallet with fiducials placed on it. being placed on the pallets. (Another natural feature recogni-
tion method was proposed by Pagès et al. [10].) Our method
processing of new images. Also, the system creates a new
nominal trajectory when it detects that it has failed to track uses a standard library for edge detection [24]. Fig. 5 shows a
the ideal trajectory with sufficient accuracy to ensure pallet picture of the pallet itself and next to it the edges that the edge
engagement. detection routine found. The system first looks for the edge seg-
ments that are of a certain minimum length. An example of this
5.1. Pallet identification can be seen in the left image of Fig. 6 where the white lines in-
dicate edge segments found. Through a series of matching tests
The most effective and robust method of identification of it tries to combine segments that are actually part of the same
the pallet is to place features (also called fiducials) on the continuous line (see middle image of Fig. 6). Finally, the system
pallets themselves that simplify the visual identification of identifies the lower edge of the pallet (see right image of Fig. 6).
the pallet. The fiducials used for this project take the shape This is done based on four conditions: (1) the minimum length
Fig. 5. Photo of pallet and edge detection run on pallet photo.

Fig. 6. Image sequence for identifying lower pallet edge.
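The segment-matching and lower-edge tests described in this subsection can be sketched as follows. This is a toy illustration, not the authors' implementation: segments are assumed to be endpoint pairs in image coordinates, and the function names and thresholds are illustrative (the edge detector the paper actually relies on is the SUSAN library [24]).

```python
import math

def _slope(seg):
    """Orientation of a segment ((x1, y1), (x2, y2)) in radians."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1)

def merge_collinear(segments, slope_tol=0.05, gap_tol=10.0):
    """Greedily join segments that appear to lie on the same continuous
    line: slopes agree within slope_tol and the nearest endpoints are
    within gap_tol pixels (the 'series of matching tests' step)."""
    segs = [list(s) for s in segments]
    merged = True
    while merged:
        merged = False
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                a, b = segs[i], segs[j]
                if abs(_slope(a) - _slope(b)) > slope_tol:
                    continue
                if min(math.dist(p, q) for p in a for q in b) > gap_tol:
                    continue
                pts = sorted(a + b)          # keep the extreme endpoints
                segs[i] = [pts[0], pts[-1]]
                del segs[j]
                merged = True
                break
            if merged:
                break
    return [tuple(s) for s in segs]

def find_lower_edge(segments, min_len=80.0, slope_tol=0.05, len_tol=0.25):
    """The four conditions: (1) minimum length, plus an upper edge of
    (2) similar slope and (3) similar length that is (4) directly above
    (smaller image y, since image y grows downward)."""
    long_segs = [s for s in segments if math.dist(*s) >= min_len]   # (1)
    for low in long_segs:
        for up in long_segs:
            if up is low:
                continue
            if abs(_slope(up) - _slope(low)) > slope_tol:           # (2)
                continue
            if abs(math.dist(*up) - math.dist(*low)) > len_tol * math.dist(*low):
                continue                                            # (3)
            if up[0][1] < low[0][1] and up[1][1] < low[1][1]:       # (4)
                return low
    return None
```

For example, two horizontal segments stacked 40 pixels apart pass all four tests, and the one with the larger image y is reported as the lower pallet edge.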
of the line itself, and the presence of another upper edge of (2)
approximately the same slope and (3) the same length, (4) located
directly above the lower pallet edge. Fig. 7 shows a sketch of the
lower pallet edge, which begins at corner 1 and runs to corner 2.

While this algorithm is simplistic, it did serve the purpose
of demonstrating MCSM's ability to identify the pallet without
relying on artificial fiducials. In order to use the MCSM method
without fiducials in an industrial setting, this natural features
pallet recognition routine will have to be made more robust.

Fig. 7. Sketch of pallet with key points located via natural features detection.

5.2. Target definition

Once the pallet has been identified in the images taken by
the forklift's two cameras, the next step is to create targets for
engagement. Fig. 8 shows a schematic of the forklift and a
target trajectory for engaging the pallet. There are two pieces
of information about the pallet that are necessary for trajectory
generation: the pallet mid-point and the pallet vector (both can
be seen on the right side of Fig. 8). Regardless of the method
employed for pallet identification, it is necessary to find the
location of the pallet mid-point and the pallet vector in the
(X_w, Y_w, Z_w) coordinate system.

At this point in the process it is assumed that the system
has already established view parameters for each camera using
information obtained from joint-space/camera-space sample
pairs (see Section 3 for details). The pallet mid-point has
been identified in both camera spaces and is represented as
(x_c1m, y_c1m) for the first camera and (x_c2m, y_c2m) for the
second. Using the view parameters and the relationships
described in Eq. (1), it is possible to estimate the pallet mid-point
location in the (X_w, Y_w, Z_w) reference frame. This is
accomplished by minimizing

Γ_2 = Σ_{j=1}^{n_c} [ (x^j_cm − f_x(C_j, X_wm, Y_wm, Z_wm))²
      + (y^j_cm − f_y(C_j, X_wm, Y_wm, Z_wm))² ] W_j    (4)

over all (X_wm, Y_wm, Z_wm). The variable n_c represents the
number of cameras, which in the case of the current forklift
system is two. It should be noted that MCSM is not limited to
two cameras for operation; it can incorporate information from
as many cameras as are present in the system. Similar to W_k
of Eq. (2), W_j is a relative weighting factor. In practice, W_j is
always set to 1. However, it could be varied if more than two
cameras were present or if it was known that information from
one or the other camera was not as accurate and thus should be
de-weighted.

For the fiducial-assisted pallet identification process, the
pallet vector is found as follows. Consider the case that both the
white-centered fiducial and the black-centered fiducial located
to its left are identified in the pallet identification stage. The
location of the white-centered fiducial is the same as the pallet
mid-point, (X_wm, Y_wm, Z_wm). The location of the left, black-
centered fiducial is found using the same process as the pallet
mid-point, and is represented as (X_wbl, Y_wbl, Z_wbl). The pallet
vector, P̂_v, is then found by:

d_P̂v = sqrt( (X_wbl − X_wm)² + (Y_wbl − Y_wm)² + (Z_wbl − Z_wm)² )
P̂_v = ((X_wbl − X_wm)/d_P̂v) î + ((Y_wbl − Y_wm)/d_P̂v) ĵ
      + ((Z_wbl − Z_wm)/d_P̂v) k̂.    (5)

For the case of the natural features pallet identification
process, the pallet mid-point is found as follows. Using a
similar form of Eq. (4) it is possible to find the location
of the two corner points in the (X_w, Y_w, Z_w) reference
frame: (X_wcp1, Y_wcp1, Z_wcp1) and (X_wcp2, Y_wcp2, Z_wcp2). Then,
the pallet mid-point, (X_wm, Y_wm, Z_wm), is found by:

X_wm = (X_wcp1 + X_wcp2)/2
Y_wm = (Y_wcp1 + Y_wcp2)/2    (6)
Z_wm = (Z_wcp1 + Z_wcp2)/2 + 3.0.

It should be noted that the Z component of the pallet mid-point
is located approximately 3 in. above the lower pallet edge.

The pallet vector is found in a similar fashion as before:

d_P̂v = sqrt( (X_wcp1 − X_wcp2)² + (Y_wcp1 − Y_wcp2)² + (Z_wcp1 − Z_wcp2)² )
P̂_v = ((X_wcp1 − X_wcp2)/d_P̂v) î + ((Y_wcp1 − Y_wcp2)/d_P̂v) ĵ
      + ((Z_wcp1 − Z_wcp2)/d_P̂v) k̂.    (7)
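Minimizing Eq. (4) is a small nonlinear least-squares problem in the three unknowns (X_wm, Y_wm, Z_wm). The sketch below substitutes an idealized pinhole camera for the paper's view-parameter model f_x, f_y of Eq. (1), which is not reproduced in this section, and hand-rolls a Gauss-Newton loop with all W_j = 1; every name here is illustrative, not the authors' implementation.

```python
import numpy as np

def project(C, X):
    """Stand-in pinhole model playing the role of (f_x, f_y) in Eq. (4);
    C = (rotation, translation, focal). This is NOT the paper's
    view-parameter model of Eq. (1) -- the solver structure is the point."""
    R, t, f = C
    p = R @ np.asarray(X, float) + t
    return np.array([f * p[0] / p[2], f * p[1] / p[2]])

def estimate_point(cams, obs, X0, iters=25, tol=1e-10):
    """Minimize Gamma_2 of Eq. (4) over (X_wm, Y_wm, Z_wm) by
    Gauss-Newton with a finite-difference Jacobian."""
    X = np.asarray(X0, float)
    for _ in range(iters):
        # stacked residuals over all cameras
        r = np.concatenate([o - project(C, X) for C, o in zip(cams, obs)])
        J = np.empty((r.size, 3))
        for k in range(3):
            dX = X.copy()
            dX[k] += 1e-6
            rk = np.concatenate([o - project(C, dX)
                                 for C, o in zip(cams, obs)])
            J[:, k] = (rk - r) / 1e-6
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        X = X + step
        if np.linalg.norm(step) < tol:
            break
    return X
```

With two cameras the stacked residual has four components for three unknowns, mirroring the over-determined form of Eq. (4); more cameras simply lengthen the residual vector.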
Fig. 8. Forklift trajectory for pallet engagement.
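Eqs. (5)–(7) translate directly into code. A minimal sketch, assuming 3D points are given in the (X_w, Y_w, Z_w) frame in inches; the default lift of 3.0 is the roughly 3 in. rise from the lower pallet edge to the mid-point noted after Eq. (6).

```python
import numpy as np

def pallet_vector(p_from, p_to):
    """Unit pallet vector of Eqs. (5)/(7): the normalized difference of
    two identified points (mid-point to fiducial, or corner to corner)."""
    d = np.asarray(p_to, float) - np.asarray(p_from, float)
    return d / np.linalg.norm(d)

def midpoint_from_corners(cp1, cp2, lift=3.0):
    """Eq. (6): average the two lower-edge corner points and raise the
    Z component by ~3 in. to the height of the fork openings."""
    m = 0.5 * (np.asarray(cp1, float) + np.asarray(cp2, float))
    m[2] += lift
    return m
```

For a 40 in. lower edge running along X_w, the mid-point lands 20 in. along the edge and 3 in. above it, and the pallet vector is the unit vector along the edge.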



Fig. 9. Error in tracking the ideal polynomial trajectory.
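The quintic planner of Section 5.4 below reduces to a linear solve: the three initial conditions fix a_0, a_1 and a_2 in closed form, and the three final conditions are linear in a_3, a_4, a_5. A sketch, assuming the sign convention tan γ = −d_a Y''/(1 + Y'²) for Eq. (9) as reconstructed here; the argument values and units in the usage below are illustrative.

```python
import numpy as np

def quintic_coeffs(gamma_i, x_f, y_f, phi_f, d_a):
    """Coefficients a0..a5 of Eq. (8) from the six conditions of
    Section 5.4, with x_f = X_wm - d_f, y_f = Y_wm and phi_f from
    Eq. (12). Sign convention for Eq. (9) assumed as stated above."""
    a0 = 0.0                               # Y_w(0) = 0
    a1 = 0.0                               # phi(0) = 0  =>  Y_w'(0) = 0
    a2 = -np.tan(gamma_i) / (2.0 * d_a)    # gamma(0) = gamma_i, with Y_w'=0
    # final position, final slope, and zero final curvature are linear
    # in the remaining coefficients a3, a4, a5
    A = np.array([[x_f**3,     x_f**4,      x_f**5],
                  [3 * x_f**2, 4 * x_f**3,  5 * x_f**4],
                  [6 * x_f,    12 * x_f**2, 20 * x_f**3]])
    b = np.array([y_f - a2 * x_f**2,            # Y_w(x_f) = y_f
                  np.tan(phi_f) - 2 * a2 * x_f,  # Y_w'(x_f) = tan(phi_f)
                  -2 * a2])                      # Y_w''(x_f) = 0
    a3, a4, a5 = np.linalg.solve(A, b)
    return np.array([a0, a1, a2, a3, a4, a5])
```

Dropping the first and third rows of the linear system gives the third order planner mentioned at the end of Section 5.4, which removes conditions four and six.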

5.3. Resolution of fork DOFs

This subsection will review the steps necessary for
determining a suitable pose for the three fork DOFs that will
enable the forklift to engage the pallet. First, it is assumed that
the pallet is positioned on a plane parallel to the ground. This
is a reasonable assumption, as most warehouse floors as well
as pallet racks are relatively level. Thus, the tilt angle of the
forks is set to θ_t = 0. Next, the vertical position of the forks is
resolved by simply necessitating that the forks be aligned with
the pallet mid-point. Thus, θ_v = Z_wm. The sideshift position of
the forks must also be resolved. At the beginning of a trajectory,
this position is held at θ_s = 0. In theory, the steering and
drive wheels should suffice to line the forks up with the proper
orientation. However, once the forklift gets very close to
engagement of the pallet, it is possible to use the sideshift DOF
of the forks to correct for any misalignment that might have
occurred in the trajectory tracking portion of the process. In this
case, the sideshift position of the forks is set to θ_s = Y_wm.
It should be noted that the sideshift axis of the forks can move
through a range of about seven inches.

5.4. Trajectory generation

The method for trajectory generation is similar to the
approach developed in [25]. The system makes use of a fifth
order polynomial to generate a suitable trajectory for the forklift
to follow that will position and align the forklift properly for
pallet engagement:

Y_w = a_0 + a_1 X_w + a_2 X_w² + a_3 X_w³ + a_4 X_w⁴ + a_5 X_w⁵.    (8)

The angles φ and γ (as shown in Fig. 8) are related to the
polynomial by:

tan φ = dY_w/dX_w,
tan γ = −d_a (d²Y_w/dX_w²) / (1 + (dY_w/dX_w)²).    (9)

The following six conditions are used to solve for
the parameters of the polynomial function. The first three
conditions are simply the initial conditions that when X_w = 0:

Y_w = 0,    φ = 0,    γ = γ_i.    (10)

The initial steering angle, γ_i, is not necessarily equal to zero.
This is due to the fact that new trajectories are planned as
the forklift moves towards the pallet and acquires more visual
information. At the beginning of a trajectory it is likely
that the steering angle is not zero, but some known quantity,
labeled γ_i. The final position and orientation of the forklift
necessary to engage the pallet satisfy two more conditions.
The fourth condition is based on the final position: when
X_w = X_wm − d_f,

Y_w = Y_wm    (11)

where d_f is shown in Fig. 3. The fifth condition is the final
orientation. It is written as: when X_w = X_wm − d_f,

φ = φ_f = d_s cos⁻¹( (Y_wbl − Y_wm) / d_P̂v ).    (12)

The parameter d_s is simply used to get the correct sign for φ_f.
If (X_wbl − X_wm)/d_P̂v > 0 then d_s = −1, otherwise d_s = 1. Note that
when the forklift is in the final position the Y_a direction shown
in Fig. 9 is perfectly aligned with the pallet vector shown in
Fig. 8.

The sixth condition is met by stipulating that the forklift
must make the final approach to the pallet in almost a straight
line. Fig. 8 shows a trajectory for the forklift in which the last
segment of the trajectory is an almost straight-line approach.
This reduces the amount of turning that the forklift must carry
out at the end of the trajectory, which increases the system's
engagement precision. This condition is written as:

d²Y_w/dX_w² |_{X_w = X_wm − d_f} = 0.    (13)

Due to the restrictive conditions associated with the fifth
order polynomial planner, the forklift sometimes has to undergo
sharp steering changes in small spaces. It has been determined
through experimentation that as the forklift gets closer to the
pallet, it is possible to use a third order polynomial planner
instead of the fifth order one. The lower order polynomial
planner usually reduces the amount of abrupt and large changes
in the steering angle. The two restrictions that are removed in
the case of the third order polynomial planner are conditions
four and six, which refer to Eqs. (11) and (13) respectively. Due
to the removal of condition four, the forklift may not end up
aligned perfectly in the Y_w direction, though the orientation is
correct. In this case, the sideshift DOF of the fork may be used
to correct this misalignment.

5.5. Visual updating and trajectory correction

As the forklift moves towards the pallet, the system will
update its trajectory several times. These updates may be due to
incoming visual information or to developing a path-tracking
error over the specified tolerance. The equations of motion of
the forklift, along with samples from the drive wheel encoders,
are used to keep track of the actual motion that the forklift
has undergone. This is carried out as follows, based on the
development in [26].

First, the parameters α and u are calculated using
knowledge of the angular position of the two front wheels, θ_1
and θ_2:

α = (θ_1 + θ_2)/2,    u = (Δθ_2 − Δθ_1)/(Δθ_2 + Δθ_1).    (14)

The equations of motion, based on the non-holonomic
constraints, can be written as:

dX_a/dα = R cos φ,    dY_a/dα = R sin φ,    dφ_a/dα = uR/d_b    (15)

where the angle of orientation of the forklift, φ, is shown in
Fig. 8 and the wheel radius, R, and the parameter, d_b, are shown
in Fig. 3. The actual position and orientation, (X_a, Y_a, φ_a), are
measured relative to the (X_w, Y_w, Z_w) coordinate system with
the caveat that until the trajectory is updated the origin of
(X_w, Y_w, Z_w) remains fixed even though the actual forklift
system is in motion.

As the forklift moves forward a small distance, the current
position of the forklift is approximated using a simple
numerical integration of these equations of motion:

X_a^{i+1} = X_a^i + Δα R cos φ_a^i
Y_a^{i+1} = Y_a^i + Δα R sin φ_a^i    (16)

φ_a^{i+1} = φ_a^i + Δα u R / d_b    (17)

where Δα = (Δθ_1 + Δθ_2)/2.

Using the actual position and orientation of the forklift,
it is possible to determine an error in the tracking of the
ideal polynomial trajectory. There are two error components
used: the error normal to the polynomial, E_n, and the error in
orientation, E_φ. Fig. 9 shows the front axle of the forklift in
reference to the ideal polynomial trajectory. The actual position
based on Eqs. (16) and (17) is denoted as (X_a, Y_a, φ_a). The
X-component of the point at which a line drawn along the
axle intersects the ideal polynomial is denoted as X_p. This
intersection point can be approximated with a simple iterative
procedure requiring only three iterations. This geometric basis
for computing error is used rather than a more typical time-
based strategy for the reasons discussed in [27]. Once X_p is
known, the value of Y_p is found with Eq. (8). Then, the error
normal to the path is calculated as:

E_n = sqrt( (X_a − X_p)² + (Y_a − Y_p)² ).    (18)

The ideal orientation of the forklift at the point (X_p, Y_p) is
denoted φ_p, and is computed using Eq. (9). The error in
orientation is simply:

E_φ = φ_a − φ_p.    (19)

There are three indicators that will cause the system to
update its trajectory. The first is simply distance traveled. If
the actual distance traveled, X_a, grows beyond a set amount
(typically 20 in.), the system will begin a trajectory update.
If the error normal to the polynomial, E_n, increases above its
tolerance (typically 3 in.) then the system will also update. If
the orientation error, E_φ, grows beyond its tolerance (typically
5°) the system will update its trajectory.

To create an updated trajectory plan, the system first attempts
to identify the pallet using its vision sensors. If it can,
the system updates the trajectory based on this new pallet
information. In some instances, in order to have the forklift
approach the pallet with the proper orientation, the original
trajectory calls for the forklift to turn away from the pallet at
the outset of the trajectory. During periods when the forklift is
turned away from the pallet it is not possible to perform a visual
update. When this occurs the system uses the history of actual
forklift motion to create a new trajectory plan, which corrects
for the errors incurred.

The forklift continues updating the trajectory in this manner
until it reaches the final position with the proper alignment.
Then, the forklift moves straight ahead a fixed distance based
on the length of its forks to engage the pallet.

It should be noted that there are two other factors that are
used to increase the precision of the system. The first factor
is inherent to the method of MCSM. When the forks move
towards the final pose, the pose necessary to engage the pallet,
additional video samples may become available. These samples
can be used with Eq. (2) to update the view parameters, skewing
the relationship of Eq. (1) to be more accurate in the local region
of joint and camera space. With updated view parameters,
the system updates its estimates of the location of the pallet
mid-point and the pallet vector. With the updated information
about the pallet it is sometimes necessary to update the target
trajectory.

The second factor used to increase the precision of the
system has to do with the steering angle of this particular
forklift system, γ. Through experimentation it has been
determined that there is approximately ten degrees of 'play'
in the steering angle. With this large amount of 'play', it
is possible that the commanded steering angle may differ
substantially from the effective steering angle. This discrepancy
will cause E_n and E_φ to grow quite quickly. With information
from the actual history of wheel rotations, θ_1 and θ_2, it is
possible to estimate the current effective steering angle by:

tan γ = (d_a/d_b) (θ̇_1 − θ̇_2)/(θ̇_1 + θ̇_2).    (20)
Fig. 10. Forklift engaging a pallet.

If this quantity differs substantially from the currently
commanded steering angle, then the system issues a small
correction in an attempt to close the difference between
the effective and commanded steering angles. It should be
emphasized that this discrepancy is not an inherent quality of
MCSM, but a mechanical limitation of the current prototype
forklift used for testing.

6. Experimental results

Two types of tests were run for pallet engagement: one
with the fiducials on the pallet and the other using only the
natural features of the pallet. Fig. 10 shows several pictures
of the forklift performing a typical pallet engaging experiment
with the fiducials on the pallet. For the purposes of the pallet
engagement task, success was measured by achieving pallet
engagement. For a more detailed analysis of the precision of
MCSM systems see [3].

For the case of using the fiducials on the pallets, a series
of 100 test runs was carried out. For these tests the initial
forklift position required that the pallet be in view of both of
the forklift cameras. The initial location of the pallet relative
to the forklift varied from roughly 6 ft to 12 ft in the X_w
direction (see Fig. 3), −3 ft to 3 ft in the Y_w direction, and 0
to 4 ft in the Z_w direction. The initial angle of the pallet in
the X_w–Y_w plane relative to the forklift varied from roughly
−20° to 20°. The forklift successfully engaged the pallet in
98 of the 100 trials. In the two unsuccessful tests, the system
automatically detected that it could not find the pallet and
stopped itself to wait for assistance. In roughly 20% of the tests
the forklift was unable to engage the pallet on its first attempt.
Once the system determined that it was too close to the pallet
to correct its final position and/or orientation, it backed up
automatically to give itself sufficient space to carry out a new
trajectory. The primary source of error which causes the system
to have to back up is the 'play' in the steering angle.

A series of tests was carried out using the natural features
pallet identification routine instead of the fiducials on the pallet
to demonstrate that MCSM is not limited to using artificial
features for visual identification and engagement of pallets.
However, it should be noted that in order for the natural
features routine to identify the desired pallet successfully, the
environment surrounding the pallet was cleared of other pallets
or objects that might have 'confused' the algorithm. In these
tests the system successfully engaged the pallet in 11 of 13
trials. While the natural features algorithm will have to be made
much more robust to be used in an actual industrial setting,
these tests demonstrate that the system is capable of engaging
pallets using only their natural features.

7. Conclusion

We have presented the development of a prototype vision-
guided forklift system for the automatic engagement of pallets.
The system is controlled using the visual guidance method
of mobile camera-space manipulation (MCSM), which was
developed originally for use with planetary exploration rovers.
MCSM is capable of achieving a high level of precision in
positioning and orienting mobile manipulator robots such as
rovers or, in the case of this paper, a modified, computer-
controlled forklift. It achieves this precision without relying
on camera calibration. MCSM is a significant advancement
over standard camera-space manipulation (CSM). It enables
the mobile system to carry its cameras onboard, whereas CSM
required the cameras to be stationary. Aside from removing
the limitation in workspace area caused by the requirement of
stationary cameras – a limitation that makes CSM impractical
for the control of industrial forklift systems – the ability of
MCSM systems to carry the cameras onboard solves other
potential problems such as reduced camera resolution, visual
obstruction of the target by the system itself, and the need
to maintain reliable and robust communication between the
cameras and the mobile system. Also, MCSM's estimation
model is not adversely affected by the relatively inaccurate
information from the non-holonomic DOFs, as is the case with
CSM systems.

The development of this technology for forklift-type systems
could advance the current state of the art in material handling
in two distinct ways. First, the technology could be added to
commercially available AGV material handling systems. This
would enable AGV systems to engage pallets based on their
actual location as opposed to the current practice of engaging
pallets on the assumption that they are located within 1 cm of
the previously recorded position. This would also give AGV
systems the capability of performing tasks such as tractor-
trailer unloading, where it is impossible to know the position
of the pallets a priori. The technology developed here could
also be added to standard industrial forklifts, creating a 'semi-
autonomous' forklift. A forklift operator would still have the
task of navigating the forklift through the warehouse and
bringing the pallet in view of the forklift's cameras. Then,
MCSM would perform the final positioning and engagement of
the pallet. This technology potentially could reduce the amount
of product damage that occurs in forklift accidents involving
pallet engagement and transportation of product.

This paper describes an actual prototype forklift system
and two types of pallet engagement experiments. With
the assistance of fiducials placed on pallets for visual
recognition, the prototype system was capable of automatic
pallet engagement in 98% of the experiments conducted.
A rough algorithm was developed to enable the system to
automatically detect and engage pallets without the need for
artificial features.

The technology presented in this paper will be developed
further in several directions. First, in an effort to commercialize
the technology, the authors plan to continue their work in
making the system more robust by increasing the reliability
as close to 100% as possible. The authors also hope to adapt
the method and system for high-reach applications. Work is
currently being undertaken to develop a framework for the
cooperation of multiple mobile manipulators, at least one of
which will be controlled using the MCSM method. The initial
development of the cooperative control includes testing the
forklift system presented in this paper cooperating with a small,
high-precision rover.

Acknowledgments

The authors would like to thank NASA-JPL for support
of the development of this technology through SBIR Phase I
contract NAS8-98170 and Phase II contract NAS3-99131,
particularly our technical monitor, Eric Baumgartner, and our
SBIR coordinator, Patricia MacGuire. The authors also thank
Steven Skaar from the University of Notre Dame, Bob Emery,
Hal Ulrich, Loren Shaum of Automation Solutions, Inc., and
Sanjiv Singh of Carnegie Mellon for their assistance in the
successful completion of the Phase II contract.

References

[1] G.A. Castleberry, The AGV Handbook, Braun-Brumfield, Inc., 1991.
[2] MHIA Elessons on Automatic Guided Vehicle Systems,
    http://www.mhia.org/psc/PSC_Products_GuidedVehicle_elessons.cfm.
[3] M. Seelinger, J.-D. Yoder, E.T. Baumgartner, S.B. Skaar, High-precision
    visual control of mobile manipulators, IEEE Transactions on Robotics
    and Automation 18 (6) (2002) 957–965.
[4] H. Seraji, Configuration control of rover-mounted manipulators, in:
    Proceedings of IEEE Intl. Conf. on Robotics and Automation, 1995,
    pp. 2261–2266.
[5] S.B. Skaar, W.H. Brockman, R. Hanson, Camera space manipulation,
    International Journal of Robotics Research 6 (4) (1987) 20–32.
[6] S.B. Skaar, W.H. Brockman, W.S. Jang, Three dimensional camera space
    manipulation, International Journal of Robotics Research 9 (4) (1990)
    22–39.
[7] E.J. Gonzalez-Galvan, S.B. Skaar, U.A. Korde, W.Z. Chen, Application
    of a precision enhancing measure in 3-d rigid-body positioning using
    camera-space manipulation, International Journal of Robotics Research
    16 (2) (1997) 240–257.
[8] S.B. Skaar, I. Yalda-Mooshabad, W.H. Brockman, Nonholonomic
    camera-space manipulation, IEEE Transactions on Robotics and
    Automation 8 (4) (1992) 464–479.
[9] R.K. Miller, D.G. Stewart, H. Brockman, S.B. Skaar, A camera space
    control system for an automated forklift, IEEE Transactions on Robotics
    and Automation 10 (5) (1994) 710–716.
[10] J. Pagès, X. Armangué, J. Salvi, J. Freixenet, J. Martí, A computer vision
    system for autonomous forklift vehicles in industrial environments, in:
    Proc. of the 9th Mediterranean Conference on Control and Automation,
    MEDS'2001, 2001.
[11] A. Kelly, Automated Material Transport System,
    http://www.rec.ri.cmu.edu/projects/amts/amts.shtml, 2003.
[12] AGV Products Homepage, http://www.agvp.com/.
[13] Egemin Automation Homepage, http://www.egeminusa.com/.
[14] FMC Technologies Homepage, http://www.fmcsgvs.com/.
[15] Jervis B. Webb Company Homepage, http://www.jervisbwebb.com/.
[16] R. Pissard-Gibollet, P. Rives, Applying visual servoing techniques to
    control a mobile hand–eye system, in: Proc. IEEE Intl. Conf. on Robotics
    and Automation, 1995, pp. 166–171.
[17] R. Swain, M. Devy, Motion control using visual servoing and potential
    fields for a rover-mounted manipulator, in: Proc. IEEE Intl. Conf. on
    Robotics and Automation, 1999, pp. 2249–2254.
[18] D.C. MacKenzie, R.C. Arkin, Behavior-based mobile manipulation for
    drum sampling, in: Proc. IEEE Intl. Conf. on Robotics and Automation,
    1996, pp. 2389–2395.
[19] M. Seelinger, S.B. Skaar, M. Robinson, An alternative approach for
    image-plane control of robots, in: D.J. Kriegman, G.D. Hager, A.S. Morse
    (Eds.), The Confluence of Vision and Control, Lecture Notes in Control
    and Information Sciences, Springer, London, 1998, pp. 41–65.
[20] D. Tsakiris, P. Rives, C. Samson, Extending visual servoing techniques
    to nonholonomic mobile robots, in: D.J. Kriegman, G.D. Hager,
    A.S. Morse (Eds.), The Confluence of Vision and Control, Lecture Notes
    in Control and Information Sciences, Springer, London, 1998,
    pp. 106–117.
[21] B. Horn, Robot Vision, MIT Press, Cambridge, 1986.
[22] R. Tsai, An efficient and accurate camera calibration technique for 3d
    machine vision, in: Proc. of IEEE Conf. on Computer Vision and Pattern
    Recognition, 1986, pp. 364–374.
[23] H. Zhuang, K. Wang, Z. Roth, Simultaneous calibration of a robot and
    hand-mounted camera, IEEE Transactions on Robotics and Automation
    11 (5) (1995) 649–660.
[24] S.M. Smith, J.M. Brady, SUSAN — a new approach to low level image
    processing, International Journal of Computer Vision 23 (1) (1997)
    45–78.
[25] D. Shin, S. Singh, J. Lee, Explicit path tracking by autonomous vehicles,
    Robotica 10 (1992) 539–554.
[26] E. Baumgartner, S. Skaar, An autonomous vision-based mobile robot,
    IEEE Transactions on Automatic Control 39 (3) (1994) 493–502.
[27] J.-D. Yoder, E. Baumgartner, S. Skaar, Reference path description for
    an autonomous wheelchair, in: Proc. IEEE Intl. Conf. on Robotics and
    Automation, 1994, pp. 2012–2017.

Michael Seelinger is a research engineer and vice president of Yoder
Software, Inc. He is also a visiting instructor in the Department of Mechanical
and Industrial Engineering at the University of Illinois Urbana-Champaign.
He is the principal investigator of the NASA-JPL Phase II SBIR contract that
supported the research and development of the MCSM technology presented
in this paper. In 1994 he graduated from the University of Notre Dame with a
B.S. in mechanical engineering. He received his M.S. and Ph.D. in mechanical
engineering also from Notre Dame in 1996 and 1999 respectively. His research
interests include the visual control of both holonomic and non-holonomic robot
systems.

John-David Yoder is an assistant professor at Ohio Northern University. He
is also the president of Yoder Software and was the principal investigator of
the NASA-JPL Phase I SBIR contract which began development of the MCSM
technology. He graduated with a B.S. in mechanical engineering from the
University of Notre Dame in 1991. He received his M.S. from Notre Dame in
1994 and the Ph.D. in 1996, both in mechanical engineering.