Abstract— This paper presents a method allowing a quadrocopter with a rigidly attached racket to hit a ball towards a target. An algorithm is developed to generate an open loop trajectory guiding the vehicle to a predicted impact point – the prediction is done by integrating forward the current position and velocity estimates from a Kalman filter. By examining the ball and vehicle trajectories before and after impact, the system estimates the ball's drag coefficient, the racket's coefficient of restitution and an aiming bias. These estimates are then fed back into the system's aiming algorithm to improve future performance. The algorithms are implemented for three different experiments: a single quadrocopter returning balls thrown by a human; two quadrocopters co-operatively juggling a ball back-and-forth; and a single quadrocopter attempting to juggle a ball on its own. Performance is demonstrated in the Flying Machine Arena at the ETH Zurich.

I. INTRODUCTION

In this paper we describe a system that allows a quadrocopter to hit a ping-pong ball towards a target using an attached racket. This enables a single quadrocopter to juggle a ball, multiple quadrocopters to hit a ball back-and-forth, or a human and quadrocopter to play together. The term "juggling" here is used in the same sense as in soccer, where one tries to keep the ball in the air for as long as possible. A useful way of visualising the problem setup is to think of a "flying racket" (see Fig. 1) hitting a ball.

Hitting a ball is a visually engaging problem, with which everyone is acquainted, and any casual bystander can immediately judge how successful a system is. This problem involves various aspects: deciding when and where to hit the ball, and to what target; analysing the dynamics of the ball flight, ball/racket impact and the dynamics of the quadrocopter; generating a trajectory moving the quadrocopter to a state which hits the ball as desired, while respecting the dynamics of the quadrocopter; and estimating the state of the ball accurately enough to allow for useful predictions.

Robotic juggling and ball sports are popular research topics, and are seen as challenging dexterous tasks. Examples include robotic table tennis, from simplified ping-pong [1] to teaching a robotic arm to return a table tennis ball [2]. Other interesting cases are human/robot volleyball [3], robot basketball [4] and the RoboCup robotic soccer championship [5]. An example of juggling per se can be found in [6], with another interesting case being "blind" juggling [7].

Due to their agility and mechanical simplicity, quadrocopters have become a popular subject of research. A few examples of challenging tasks executed by quadrocopters are: dancing [8]; balancing an inverted pendulum [9]; aggressive manoeuvres such as flight through windows and perching [10]; and cooperative load-carrying [11].

The problem of hitting a ball also yields opportunities for exploring learning and adaptation: presented here are strategies to identify the drag properties of the ball, the racket's coefficient of restitution, and a strategy to compensate for aiming errors, allowing the system to improve its performance over time. This provides a significant boost in performance, but only represents an initial step at system-wide learning. We believe that this system, and systems like it, provide strong motivation to experiment with automatic learning in semi-constrained, dynamic environments.

The paper is organised as follows: in Section II we derive equations to model quadrocopter flight, ball flight and ball/racket impact. In Section III we present algorithms to estimate the ball state and predict the ball's trajectory, estimate the racket's coefficient of restitution and estimate an aiming bias. Then an algorithm to generate a trajectory for the quadrocopter is given in Section IV, followed by a discussion on the system architecture and experimental setup in Section V. Results from experiments are presented in Section VI. We attempt to explain why the system fails on occasion in Section VII and conclude in Section VIII.

II. DYNAMICS

We model the quadrocopter with three inputs (refer to Fig. 2): the angular accelerations q̇ and ṗ, taken respectively about the vehicle's x and y axes, and the mass-normalised collective thrust, f. The thrust points along the racket normal, n. The attitude of the quadrocopter is expressed using the z-y-x Euler angles, rotating from the inertial frame to the [...]

The authors are with the Institute for Dynamic Systems and Control, ETH Zurich, Sonneggstrasse 3, 8092 Zurich, Switzerland. {mullerm, sergeil, rdandrea}@ethz.ch

Fig. 1. A quadrocopter with attached badminton racket head. Three retro-reflective markers are attached to the vehicle, with which the vehicle pose can be determined (here, two are partially obscured by the racket). The ball, shown lying on the racket, is also wrapped in retro-reflective tape to be visible to the motion capture system.
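Ball-state prediction throughout the paper relies on numerically integrating the ball model (6) forward in time. The following is a minimal sketch of such a predictor, assuming a point-mass ball under gravity and quadratic drag (consistent with the energy relation (18)); the drag value and step sizes are illustrative, not taken from the paper.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity [m/s^2]

def ball_derivative(vel, k_d):
    """Ball acceleration: gravity plus quadratic drag opposing the motion."""
    return G - k_d * np.linalg.norm(vel) * vel

def predict_ball(pos, vel, k_d, dt=0.001, t_end=1.0):
    """Forward-integrate the ball state with classical RK4 steps."""
    p, v = np.array(pos, float), np.array(vel, float)
    for _ in range(round(t_end / dt)):
        k1v = ball_derivative(v, k_d)
        k2v = ball_derivative(v + 0.5 * dt * k1v, k_d)
        k3v = ball_derivative(v + 0.5 * dt * k2v, k_d)
        k4v = ball_derivative(v + dt * k3v, k_d)
        # position derivatives are the intermediate velocity estimates
        k1p, k2p, k3p, k4p = v, v + 0.5 * dt * k1v, v + 0.5 * dt * k2v, v + dt * k3v
        p = p + dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
        v = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return p, v
```

With k_d = 0 this reduces to ballistic flight; the same routine can be stepped until the ball crosses a given impact height to predict the impact time and state.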
and velocity for integration, r(t) and ṙ(t), respectively. By using the estimate at time tk as initial value, r and ṙ can be evaluated by numerical integration, so that

(r(tk), ṙ(tk)) := x̂b[k]   (17)

and r(t), ṙ(t) satisfy (6).

The state prediction for the Kalman filter is then simply r(tk + τk) and ṙ(tk + τk). For the variance propagation, and for the measurement update of both the state and the variances, the usual linear Kalman filter equations are used.

Taking the time derivative of the ball's specific mechanical energy Eb = ½‖ṡb‖² + gᵀsb, yields the specific power extracted from the system by the drag force,

Ėb = −KD ‖ṡb‖³.   (18)

Using the ball state estimate x̂b[k] from the ball state filter to calculate the ball's speed v̄[k], the above can be approximated as a measurement K̃D[k],

v̄[k] = √( x̂b[k]ᵀ [0₃ₓ₃ 0₃ₓ₃; 0₃ₓ₃ I₃ₓ₃] x̂b[k] )   (19)

Ē[k] = ½ (v̄[k])² + [gᵀ 0₁ₓ₃] x̂b[k]   (20)

K̃D[k] = − (Ē[k] − Ē[k−1]) / ( τk (½(v̄[k] + v̄[k−1]))³ ),   (21)

where Ē[k] is the estimate of the ball's mechanical energy at time tk, and we use a two-step average of the ball's speed.

These measurements can now be used to form an estimate of the drag value (K̂D[k]) using a recursive least squares (RLS) estimator [14]. We define PD[k] = var(K̂D[k]) as the estimate variance, and RD[k] as the measurement noise variance (from a nominal variance RD,0, weighted by a two-step average of the ball's speed). We also introduce an intermediary gain CD[k].

RD[k] = RD,0 / (½(v̄[k] + v̄[k−1]))³   (22)

CD[k] = PD[k−1] / (PD[k−1] + RD[k])   (23)

K̂D[k] = K̂D[k−1] + CD[k] (K̃D[k] − K̂D[k−1])   (24)

PD[k] = PD[k−1] RD[k] / (PD[k−1] + RD[k])   (25)

We disallow measurements below some minimum height to ensure we only use measurements taken when the ball is in flight (i.e. not being handled, or hit by the racket). To protect against numerical issues, we disallow measurements when the ball's speed approaches zero.

For longer term predictions of the ball state, we numerically integrate forward the ball state using (6). This is used to predict the time of impact and the ball state at impact, as well as during aiming, to evaluate possible post-impact ball velocities. For such predictions an accurate estimate of the drag coefficient is crucial.

B. Racket coefficient of restitution

By examining the ball and vehicle state estimates before and after an impact, we can estimate the racket's coefficient of restitution using (8). These estimates can be combined to form an estimate β̂, again using an RLS estimator, similar to (22)-(25), but with the measurement noise variance taken as constant.

We only allow coefficient of restitution measurements lying in the range [0.5, 1]. The lower bound allows us to identify "failed" impacts, for example if the ball hits the racket frame, or the propellers. The value of 0.5 was chosen by examining experimental data.

C. Aiming bias

We notice that with the preceding, the system still shows an aiming bias, defined as the difference between the target position and where the ball crossed the target height. The system attempts to identify where it should aim so that the ball hits the target point, by using two separate RLS estimators, one each for the x and y components of the aiming bias. This allows us to compensate for systematic errors, such as a misaligned racket.

IV. TRAJECTORY GENERATION

Here we derive an open-loop trajectory to move the quadrocopter from some initial state to an impact state, in the time remaining until impact. We define impact as when the ball drops through some user-defined impact height. From the ball estimator, we have an estimate of time remaining until impact (T), and the ball's velocity at this time.

To calculate the required ball velocity after impact, we note that the post-impact trajectory has to start at the impact point, pass through some maximum height, and reach the aiming point. These requirements define a unique ball trajectory. We note that, by (6), the ball always moves in a vertical plane, implying that we need only solve for horizontal and vertical components of the post-impact ball velocity (vh and vv, respectively).

We can solve for these using a two-dimensional gradient descent search, minimising the cost function C(vh, vv), composed of a height error and a lateral distance error. We denote the achieved maximum height by hm(vh, vv), and the desired maximum ball height by hm,d. Furthermore, we use the distance l(vh, vv) at which the ball passes through the target height, and the desired distance ld.

C(vh, vv) = (hm(vh, vv) − hm,d)² + (l(vh, vv) − ld)²   (26)

The functions hm(vh, vv) and l(vh, vv) are evaluated by examining the trajectories resulting from numerical integration of (6).

Using the ball's pre- and post-impact velocities, we can solve for the required racket state at impact using (8), yielding

ndes(T) = −(ṡb⁻ − ṡb⁺) / ‖ṡb⁻ − ṡb⁺‖   (27)
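The impact conditions (27), together with the normal-speed relation (28) that follows, can be sketched as below; function and variable names are ours, and β is the racket's coefficient of restitution.

```python
import numpy as np

def racket_impact_state(v_minus, v_plus, beta):
    """Desired racket normal (27) and racket speed along that normal (28),
    from the predicted pre-impact and desired post-impact ball velocities."""
    v_minus, v_plus = np.asarray(v_minus, float), np.asarray(v_plus, float)
    delta = v_minus - v_plus
    n_des = -delta / np.linalg.norm(delta)                      # eq. (27)
    v_perp = (beta * v_minus + v_plus) @ n_des / (1.0 + beta)   # eq. (28)
    return n_des, v_perp
```

For a purely vertical bounce (ball arriving at −5 m/s, to leave at +5 m/s, β = 0.8) this gives an upward normal and a small upward racket speed, consistent with the one-dimensional restitution law.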
V⊥,des := (ṡr(T))ᵀ n = (1/(1 + β)) (β ṡb⁻ + ṡb⁺)ᵀ n,   (28)

where V⊥,des is the desired racket speed in the direction of the desired racket normal (ndes) at impact. Because of how we mount the racket (see Fig. 1), it follows that the racket state is simply that of the quadrocopter, displaced in the body z-axis by some offset.

There are a total of six constraints we need to meet at impact. The desired pitch and roll angles can be derived from the desired racket normal at impact (ndes(T)) using (1). We also need to achieve the normal speed V⊥,des, and the quadrocopter has to be at the impact point, at time T.

Using (2) to describe the quadrocopter dynamics under the small angle assumption, we have three inputs: the mass-normalised thrust f, and the angular accelerations q̇ and ṗ. Noting that we need to solve for six equations, we assume affine inputs of the form

f(t) = Af t + Bf   (29)

q̇(t) = Aθ t + Bθ   (30)

ṗ(t) = Aφ t + Bφ.   (31)

This form is inspired by the analysis of a linear one-dimensional system where the ball and racket are constrained along the z axis, with the single control input being the racket's acceleration f. An affine input minimises the mechanical energy expended by the control effort. (29)-(31) are simple enough to yield a closed-form solution, while providing the necessary degrees of freedom. By substituting these into (2), and integrating, we can write the quadrocopter's position and velocity as polynomials in time. We substitute for the six requirements at impact, and substitute the integration constants with the initial conditions, to yield six equations in six unknowns (the input coefficients). This system of equations can be reduced to a single fourth order equation in one unknown, which can be solved for in closed form (the full derivation is made available online on the first author's website).

For a real polynomial of order four, we can have either no, two, or four real solutions. In case of multiple real solutions, we note that we wish to avoid large inputs to not violate the small-angle assumption, and select a solution minimising: [...]

Because of the simple relationships between the related affine constants, small values for Af, Aθ and Aφ imply small Bf, Bθ and Bφ, respectively (refer to the online derivation).

We have now solved for open-loop inputs which move the system from some initial state to the state needed to hit the ball, in the time until impact. This trajectory will satisfy the quadrocopter equation of motion under the small-angle assumption, but it does not take actuator saturation into account.

This open-loop trajectory is sent to a near-hover feedback controller, similar to that described in [15]. The controller runs on position feedback, and we use the calculated desired velocity, acceleration and body rates as feed-forward terms, to generate the four individual motor commands.

V. EXPERIMENTAL SETUP

The preceding were coded in C++, and implemented in the Flying Machine Arena (FMA) at the ETH Zurich. The algorithm is shown schematically in Fig. 3, and can be broken into the following components:
• The ball estimator is responsible for finding the ball in space, estimating its state, and then predicting when and where impact will occur.
• The trajectory generator calculates the desired quadrocopter state at impact, and generates a set of inputs to move the vehicle to that state at impact time.
• Impact identification allows the system to use information from previous impacts to improve the system performance.
• The generated trajectory, and commands, are now sent to the feedback controller.

[Fig. 3: system schematic – the trajectory generator feeds the feedback controller in a continuous inner loop; on each impact, post-impact learning updates the coefficient of restitution and aiming estimates]

It is important to note here that the trajectory generator is invoked in two places – when a new trajectory is initialised, and as information about the impact becomes available. At initialisation, an initial state for the trajectory is fixed, which remains unchanged until the next impact. As the impact prediction improves, the desired end state of the quadrocopter is updated, and the equations are solved to generate a trajectory from the fixed initial condition, to the updated impact condition.

The resulting trajectory works well when moving over small lateral distances, but larger lateral distances imply larger angular deviations, violating our assumption of small θ and φ. Therefore, we use two slightly different implementations of the trajectory generator, differing in how the initial state is chosen. In the "continuous" case, we use the quadrocopter's state at the start of the manoeuvre,
meaning that nominally the commands to the vehicle are continuous. This mode works best when moving over small lateral distances. The second, "discontinuous", mode is used when we know that we have more time until intercept, and works better when moving over larger distances. In this case we generate a nominal trajectory, starting some pre-defined [...] convenient.

[Figure: drag estimator output – histogram of measurements (Count) and the evolution of the estimate K̂D [m⁻¹]]

VI. EXPERIMENTAL RESULTS

[...]

2) Cooperative juggling: two quadrocopters attempt to hit a ball to one another – each quadrocopter has as target the other's starting position.

Fig. 5. Racket estimator output while a vehicle returns throws, showing coefficient of restitution measurements and the distribution of impacts on the racket face; and how the estimate evolved in time.
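The restitution estimator behind Fig. 5 reuses the RLS update of (22)-(25), but with a constant measurement noise variance and measurements gated to [0.5, 1] as described in Section III. A minimal sketch (class name and the numeric initial values are illustrative):

```python
class RestitutionRLS:
    """Scalar recursive least squares estimate of the racket's
    coefficient of restitution, with constant measurement noise."""

    def __init__(self, beta0=0.8, p0=0.1, r=0.01):
        self.beta = beta0  # current estimate of beta
        self.p = p0        # estimate variance
        self.r = r         # constant measurement noise variance

    def update(self, beta_meas):
        """Incorporate one per-impact measurement; discard out-of-range ones."""
        if not 0.5 <= beta_meas <= 1.0:
            return self.beta  # "failed" impact (frame or propeller strike)
        c = self.p / (self.p + self.r)                # gain, cf. (23)
        self.beta += c * (beta_meas - self.beta)      # estimate update, cf. (24)
        self.p = self.p * self.r / (self.p + self.r)  # variance update, cf. (25)
        return self.beta
```

Because the noise variance is constant, each accepted impact shrinks the estimate variance and the estimate settles towards the measurement average.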
[Figure: aiming performance without and with parameter identification – start, target, aiming and achieved ball positions in the x-y plane, for a vehicle returning throws]

[Figure: a cooperative rally – x, y and z trajectories of the ball and both vehicles against time, marking the throw, the returns, and where the ball was dropped]

[...] deviation.

B. Cooperative juggling

The system can also be run with two quadrocopters playing with one another, set up such that each quadrocopter starts at its opponent's target. In all other respects the scenario is the same as when a human is throwing the ball for a quadrocopter to return.

In Fig. 7 a rally between two quadrocopters is shown, where the ball is kept in the air for 17 consecutive hits. This was after sufficient time had passed for the estimates to settle.
Fig. 9. Histograms showing the system performance when two vehicles hit a ball back-and-forth (left) and when a single vehicle juggles on its own (right). Interesting to note is the bi-modal nature of the distribution when two vehicles play together – where the ball is either dropped early on, or the system manages to sustain a longer rally.

[...] Fig. 8, with a detail of the vehicle's z trajectory in Fig. 10, also showing the motor commands. On Fig. 8 we notice that the system cannot maintain the desired impact point at x = y = 0.

In Fig. 9 a histogram of a single vehicle's juggling performance is shown. The data shows 27 juggling "rallies", lasting an average of 3 hits. The longest juggling rally ever achieved by the system lasted 14 hits, but was again not recorded.

VII. FAILURE ANALYSIS

We identify three main causes for the system failing to hit the ball: input (motor) saturation, unpredictable bouncing and tracking errors.

A. Input saturation

The trajectory generator does not take input saturation into account, and the generated trajectory might be infeasible. For example, the commanded initial downwards acceleration is often in excess of g, which is unachievable on this system, since each propeller can only produce positive thrust and some thrust is needed to maintain vehicle attitude.

In Fig. 10 we can see the effect of motor saturation. Taking the thrust produced by a propeller as proportional to its rotational speed squared, we expect a speed command which looks like a parabola turned on its side. During the latter stages of the trajectory, we can see this shape, but it is initially saturated. Some motor commands are at the minimum, and the remainder show much larger commands than expected. These commands can be understood as the feedback controller regulating the vehicle's attitude. Since the vehicle's motors can only produce upwards thrust, this has the undesired consequence of producing a net upward force and reducing the vehicle's potential for downwards acceleration.

Fig. 10. Motor saturation during solo juggling. On top is shown the system performance when following a commanded trajectory (shown for height only), and below the corresponding motor speed commands, each motor a different colour. The commands are internal, but are linearly related to motor speed. Clearly visible is that after generating the new trajectory, the motor commands are too large, leading to insufficient downwards acceleration. For an affine acceleration, we expect monotonically increasing motor commands.

B. Unpredictable impacts

Similar to how we measure the racket's coefficient of restitution in Section III, we can measure the orientation of the racket's normal for each impact. We notice that this normal deviates from the expected vehicle z-axis – this deviation will cause the ball's bounce to be in a slightly different direction from the expected, in turn leading to aiming errors. A histogram of such deviation measurements is shown in Fig. 11, showing measurements made during the solo juggling experiment of Section VI.

To quantify the effects of such a deflection, we analyse a single quadrocopter during solo juggling, maintaining a maximum ball height hmax above the impact point. For simplicity, we assume zero drag. From the conservation of mechanical energy, such a ball will return to the impact height at ṡb⁻ = (0, 0, −√(2g hmax)). To return the ball to hmax, we want ṡb⁺ = −ṡb⁻, and noting that the nominal racket normal is n = (0, 0, 1), we can calculate the required racket speed using (8) as:

ṡr = ((β − 1)/(β + 1)) ṡb⁻.   (34)

If the true racket normal (ndefl) now deviates from the expected by an angle γ (for convenience taken as a rotation about the y axis), we can calculate the actual post-impact ball velocity (ṡb,defl⁺), again using (8):

ndefl = (sin γ, 0, cos γ)   (35)

ṡb,defl⁺ = √(2g hmax) (sin 2γ, 0, 2cos²γ − 1)   (36)

Fig. 11. A histogram showing the angle between the measured normal, and that expected, for a quadrocopter juggling a ball on its own. On the right is a detail showing the uneven surface of the racket, and the shape of the ball. The deflection was measured, and the shown horizontal deflection derived therefrom – refer to the text for details.
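The deflected bounce of (35)-(36) can be checked numerically; a short sketch, under the same zero-drag assumption, recovers the horizontal offset at the impact height from a ballistic return:

```python
import math

g = 9.81  # gravitational acceleration [m/s^2]

def horizontal_offset(gamma, h_max):
    """Horizontal distance travelled before the ball returns to the
    impact height, for a racket normal tilted by gamma [rad]."""
    speed = math.sqrt(2.0 * g * h_max)               # impact speed from h_max
    vx = speed * math.sin(2.0 * gamma)               # eq. (36), x component
    vz = speed * (2.0 * math.cos(gamma) ** 2 - 1.0)  # eq. (36), z component
    t_flight = 2.0 * vz / g                          # time back to impact height
    return vx * t_flight
```

At γ = 5° and hmax = 2 m this evaluates to approximately 1.37 m, consistent with the roughly 1.4 m horizontal error discussed in the text.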
We can now calculate the horizontal distance the ball moves before returning to the impact height (∆x), yielding:

∆x = 4 hmax sin 2γ (2cos²γ − 1)   (37)

This equation was used to generate the lower abscissa label in Fig. 11, for a vehicle juggling a ball to a maximum height hmax = 2 m. We can see a 5° deviation leads to a horizontal error of 1.4 m, which would likely be too large for the vehicle to successfully intercept again.

C. Tracking errors

Due to measurement errors and external disturbances, we expect that the vehicle will not achieve a commanded state perfectly. Generally, this is difficult to quantify, but in Fig. 12 the position errors are shown for a vehicle hovering. Here we can see that the vehicle's position error has a mean of 28 mm – from experience we know that we need to hit a ball within about 50 mm of the racket centre.

The tracking errors during aggressive flight are much more difficult to analyse, but can be expected to be at least as large as the errors during hover.

Fig. 12. Total position errors while commanding the vehicle to hover at a fixed location. The blue line shows the distance that the quadrocopter is from the commanded point and the red shows the mean error over this time. The racket has a usable radius of approximately 50 mm, shown by the green line.

VIII. CONCLUSION

In this paper we have presented a system for a quadrocopter to hit a ball towards a target. This was done by analysing simple models of the ball flight, racket/ball interaction and quadrocopter flight. A Kalman filter was implemented to estimate the ball state, which is needed to predict the impact conditions. Using the impact conditions, the desired quadrocopter state at impact can be calculated, which we combine with affine inputs to move the quadrocopter from an arbitrary initial state to the desired state at impact, under the assumption that the angles remain small.

Strategies were implemented which allow the system to estimate the ball's drag coefficient and the racket's coefficient of restitution, and learn an aiming bias. The combination has been shown to improve the system's performance hitting a ball at a target.

The algorithm was implemented for three different experiments: a single vehicle returning a ball thrown by a human, two vehicles hitting a ball back-and-forth, and a single vehicle attempting to juggle on its own. The performance of the system in each case has been shown, with the first being used to demonstrate the effects of the parameter identification. The juggling performance for two vehicles cooperating was shown to be much better than that of one vehicle on its own, mostly because the vehicles have more time to respond, and start each trajectory in a more favourable position.

The system offers various possibilities for improvement. One can imagine the hitting action encoded as a motion primitive, described by a simple set of parameters which the system can learn to improve so that the resulting motion is closer to the desired (similar to the flips of [16]). Furthermore, a game can be created, like the robot ping-pong of [1], where different strategies (or even completely different vehicles) can be compared in a competitive environment.

REFERENCES

[1] R. L. Andersson, A Robot Ping-Pong Player. The MIT Press, Cambridge, Massachusetts, 1988.
[2] K. Mülling, J. Kober, and J. Peters, "A biomimetic approach to robot table tennis," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010.
[3] H. Nakai, Y. Taniguchi, M. Uenohara, T. Yoshimi, H. Ogawa, F. Ozaki, J. Oaki, H. Sato, Y. Asari, K. Maeda, H. Banba, T. Okada, K. Tatsuno, E. Tanaka, O. Yamaguchi, and M. Tachimori, "A volleyball playing robot," in IEEE International Conference on Robotics and Automation, 1998.
[4] G. Bätz, M. Sobotka, D. Wollherr, and M. Buss, "Robot basketball: Ball dribbling – a modified juggling task," in Advances in Robotics Research, T. Kröger and F. M. Wahl, Eds. Springer Berlin Heidelberg, 2009, pp. 323–334.
[5] H. Kitano, M. Asada, I. Noda, and H. Matsubara, "RoboCup: Robot world cup," IEEE Robotics & Automation Magazine, vol. 5, pp. 30–36, 1998.
[6] M. Bühler, D. E. Koditschek, and P. J. Kindlmann, "A simple juggling robot: Theory and experimentation," in Proceedings of the First International Symposium on Experimental Robotics, 1990.
[7] P. Reist and R. D'Andrea, "Bouncing an unconstrained ball in three dimensions with a blind juggling robot," in IEEE International Conference on Robotics and Automation (ICRA), 2009.
[8] A. Schöllig, F. Augugliaro, S. Lupashin, and R. D'Andrea, "Synchronizing the motion of a quadrocopter to music," in IEEE International Conference on Robotics and Automation (ICRA), 2010.
[9] M. Hehn and R. D'Andrea, "A flying inverted pendulum," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2011.
[10] D. Mellinger, N. Michael, and V. Kumar, "Trajectory generation and control for precise aggressive maneuvers with quadrotors," in International Symposium on Experimental Robotics, 2010.
[11] D. Mellinger, M. Shomin, N. Michael, and V. Kumar, "Cooperative grasping and transport using multiple quadrotors," in Proceedings of the International Symposium on Distributed Autonomous Robotic Systems, Nov 2010.
[12] J. Nonomura, A. Nakashima, and Y. Hayakawa, "Analysis of effects of rebounds and aerodynamics for trajectory of table tennis ball," in Proceedings of SICE Annual Conference, 2010.
[13] D. P. Bertsekas, Dynamic Programming and Optimal Control, Vol. I. Athena Scientific, 2005.
[14] D. Simon, Optimal State Estimation. John Wiley & Sons, 2006.
[15] S. Bouabdallah, A. Noth, and R. Siegwart, "PID vs LQ control techniques applied to an indoor micro quadrotor," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 3, 2004, pp. 2451–2456.
[16] S. Lupashin, A. Schöllig, M. Sherback, and R. D'Andrea, "A simple learning strategy for high-speed quadrocopter multi-flips," in IEEE International Conference on Robotics and Automation (ICRA), 2010, pp. 1642–1648.
[17] B. W. McCormick, Aerodynamics, Aeronautics and Flight Mechanics. John Wiley & Sons, 1995.