
Velocity Control for a Quad-Rotor UAV Fly-By-Camera Interface

Andrew E. Neff, DongBin Lee, Vilas K. Chitrakaran, Darren M. Dawson and Timothy C. Burg
Clemson University
aneff@ieee.org

Abstract

A quad-rotor unmanned aerial vehicle (UAV) and a two-degree-of-freedom (DOF) camera unit are combined to achieve a fully-actuated fly-by-camera positioning system. The flight control interface allows the user to command motions in the camera frame of reference – a natural perspective for surveillance and inspection tasks. A nonlinear velocity controller, derived using Lyapunov stability arguments, produces simultaneous complementary motion of the quad-rotor vehicle and the camera positioning unit. The controller achieves Global Uniform Ultimate Boundedness (GUUB) of all velocity error terms.

1. Introduction

The quad-rotor unmanned aerial vehicle (UAV) shown in Figure 2 can be used for civilian and military surveillance and inspection tasks. The six-degree-of-freedom rigid-body craft is positioned by changing the relative speeds of the four rotors. These speed differences produce torques about the roll, pitch, and yaw axes in addition to the thrust produced as the sum of the four rotating blades. Because the system is inherently under-actuated – there are only four control inputs to directly control the six degrees of freedom – performing surveillance and inspection tasks is challenging.

The inherent problem of attaching a fixed camera to the quad-rotor is illustrated in Figure 1(a,b,c), where it can be seen that the motion of the camera is directly tied to the motion of the UAV. That is, it is not possible to simultaneously specify the attitude of the camera and the attitude of the aircraft. In contrast, the movable camera depicted in Figure 1(d,e,f) maintains the same orientation regardless of the UAV attitude. The UAV and camera motions must be coordinated in order to keep the camera pointed in a particular direction. The traditional approach has been to have a pilot position the aircraft about a target and have a camera operator position the camera, with the subtask of compensating for motion of the aircraft. A new approach to this control problem was presented in [1], where the UAV and the camera positioning unit are considered to be a single robotic unit. From this perspective, a controller can be developed which simultaneously controls both the UAV and the camera positioning unit in a complementary fashion. Here, this control approach is exploited to provide a new perspective for piloting the UAV. This perspective, which shall be referred to as the fly-by-camera perspective, presents a new interface to the pilot: the pilot commands motion from the perspective of the on-board camera – it is as though the pilot is riding on the tip of the camera and commanding movement of the camera, like a six-DOF flying camera. This is subtly different from the traditional remote-control approach, wherein the pilot processes the camera view and then commands an aircraft motion to create a desired motion of the camera view.

In this paper, a quad-rotor UAV model is coupled to a two-DOF camera kinematic model to create a fully actuated camera frame. Based on this composite model, a nonlinear controller is developed using a backstepping control design approach. A simulation of the proposed controller provides a proof of concept.

Figure 1. In (a,b,c) the camera is fixed to the UAV and motion of the UAV changes the camera orientation. In (d,e,f) the movable camera mount can compensate for UAV movements to keep the camera positioned at a desired orientation.

Figure 2. DraganFlyer X-Pro (with on-board IMU sensor, RF link, and camera).

2. Model

2.1. Quad-Rotor Dynamics

As discussed above, the quad-rotor UAV is an inherently underactuated system. While the angular torques are fully actuated, the translational force is only actuated in the z-direction. The forces and torques are expressed as

  F_f^F = [0 0 u_1]^T ∈ ℝ³,   F_t^F = [u_2 u_3 u_4]^T ∈ ℝ³   (1)

where F_f^F(t) is the translational force vector and F_t^F(t) is the torque vector. Figure 3 shows the three coordinate frames used to develop the kinematic and dynamic models. The rigid-body motion of the quad-rotor UAV is described by [3]

  ẋ_IF^I = R_F^I v_IF^F   (2)
  m v̇_IF^F = −m S(ω_IF^F) v_IF^F + N_1(·) + m g R_I^F e_3 + F_f^F   (3)
  Ṙ_F^I = R_F^I S(ω_IF^F)   (4)

where ẋ_IF^I(t) ∈ ℝ³ is the time derivative of the position of the UAV frame, v_IF^F(t) ∈ ℝ³ is the translational velocity of the UAV, ω_IF^F(t) ∈ ℝ³ is the angular velocity of the UAV, which is calculated directly without modeling the angular dynamics, and R_F^I(t) ∈ SO(3) is the rotation matrix that transforms vectors from the UAV frame to the inertial frame. m ∈ ℝ is the mass of the UAV, and M ∈ ℝ^{3×3} is the constant moment-of-inertia matrix of the UAV. S(·) ∈ ℝ^{3×3} represents a skew-symmetric matrix defined as [4]

  S(ω) = [  0    −ω_3    ω_2 ;
           ω_3     0    −ω_1 ;
          −ω_2    ω_1     0  ],   ω = [ω_1 ω_2 ω_3]^T ∈ ℝ³.   (5)

N_1(x_IF^I, R_F^I, v_IF^I, ω_IF^I, t) ∈ ℝ³ and N_2(x_IF^I, R_F^I, v_IF^I, ω_IF^I, t) ∈ ℝ³ are the unmodeled, nonlinear terms in the translational and rotational dynamics, respectively. Gravity is assumed to be known and is shown separately in (3) using g.

2.2. Quad-Rotor Kinematics

Many of the equations, such as (2)-(4), require either R_F^I(t) or R_I^F(t). While (4) expresses how to obtain R_F^I(t), it involves integrating an SO(3) matrix, which will not yield another SO(3) matrix due to numerical integration errors. However, the integration can instead be performed on the angles in (roll, pitch, yaw) form. A Jacobian is required in order to satisfy

  ω_IF^F = J_F Θ̇_F^I   (6)

which can then be used to solve for

  Θ_F^I = ∫₀ᵗ J_F^{−1} ω_IF^F dτ   (7)

where Θ_F^I(t) ∈ ℝ³ represents the roll, pitch, and yaw angles between the UAV frame and the inertial frame. The Jacobian matrix used, J_F^{−1}(Θ_F^I), is defined as [2]

  J_F^{−1} = [ 1   sinψ tanθ    cosψ tanθ ;
               0     cosψ         −sinψ   ;
               0   sinψ/cosθ   cosψ/cosθ ],   Θ_F^I = [ψ θ φ]^T.   (8)

As long as θ is not near ±π/2, this yaw, pitch, roll representation will not reach a singularity. To convert between R_F^I(t) and Θ_F^I(t),

  R_F^I = [ cosφ cosθ   −sinφ cosψ + cosφ sinθ sinψ    sinφ sinψ + cosφ sinθ cosψ ;
            sinφ cosθ    cosφ cosψ + sinφ sinθ sinψ   −cosφ sinψ + sinφ sinθ cosψ ;
              −sinθ              cosθ sinψ                     cosθ cosψ          ]   (9)

is used [4].

2.3. Camera Kinematics

As stated, the quad-rotor can thrust in the z-direction, but it cannot thrust in the x- or y-directions. Since the quad-rotor helicopter is underactuated in two of its translational velocities, a two-actuator camera is added to achieve six-degree-of-freedom (DOF) control in the camera frame. A tilt-roll camera is added to the front of the helicopter, as seen in Figure 3. With the new camera frame there are now three rotations and three translations, a total of six DOF, to actuate. To control any of the DOF, either the camera must move, the UAV must move, or both.

Figure 3. Frames of reference used in the model: the inertial frame (I) with axes (x_i, y_i, z_i), the UAV frame (F) with axes (x_f, y_f, z_f), and the camera frame (C) with axes (x_c, y_c, z_c), where z_c is the optical axis. R_F^I relates F to I, and R_C^F relates C to F.

2.4. Tilt-Roll Camera on Front of UAV

The rotation matrix between the UAV frame and the camera frame seen in Figure 3 is

  R_C^F = [ −sinθ_t cosθ_r    sinθ_t sinθ_r    cosθ_t ;
                sinθ_r            cosθ_r          0    ;
            −cosθ_t cosθ_r    cosθ_t sinθ_r   −sinθ_t ].   (10)

Since only two of the angles vary, the Jacobian can be redefined as

  J_C(front) = [ 0    cosθ_t ;
                 1      0    ;
                 0   −sinθ_t ]   (11)

which can then be used to solve for

  ω_FC^F = J_C(front) θ̇_C,   θ_C = [θ_t θ_r]^T ∈ ℝ²   (12)

which facilitates the calculation of the camera angular rates.
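The attitude bookkeeping in (4)-(8) can be sketched numerically. The snippet below is an illustrative sketch, not the authors' code; the function names are ours, and the angle ordering follows the paper's Θ = [ψ, θ, φ]^T convention. It builds the skew operator S(·) of (5) and propagates Euler angles with the rate Jacobian of (8) as in (7), which avoids the SO(3) drift incurred by directly integrating (4).

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix S(w) of eq. (5), so that S(a) @ b == a x b."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def euler_rate_jacobian_inv(psi, theta):
    """J_F^{-1} of eq. (8): maps body angular velocity to Euler-angle rates.
    Valid away from the theta = +/- pi/2 singularity."""
    return np.array([[1.0, np.sin(psi) * np.tan(theta), np.cos(psi) * np.tan(theta)],
                     [0.0, np.cos(psi), -np.sin(psi)],
                     [0.0, np.sin(psi) / np.cos(theta), np.cos(psi) / np.cos(theta)]])

def integrate_attitude(Theta, w_body, dt):
    """One Euler step of eq. (7): integrate the angles, not the SO(3) matrix."""
    return Theta + euler_rate_jacobian_inv(Theta[0], Theta[1]) @ w_body * dt

# sanity check of the skew-operator property used throughout the paper
a, b = np.array([1.0, 2.0, 3.0]), np.array([0.5, -1.0, 2.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))
```

Integrating the three angles and rebuilding the rotation matrix through (9) keeps the attitude representation orthonormal at every step, at the cost of the singularity noted after (8).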
3. Control Development

3.1. Translational error formulation

The control system development begins by defining the velocity error between the desired camera velocity, v_ICd^C(t) ∈ ℝ³, and the actual velocity in the camera frame,

  e_v ≜ v_IC^C − v_ICd^C.   (13)

Using the velocity error, an auxiliary signal is defined as

  r_v ≜ e_v + R_F^C δ,   (14)

where δ will be used to couple the velocity error with the angular rates. The quad-rotor can only thrust in the z-direction, so δ takes the form δ = [0, 0, δ_3]^T. Substituting (13) into (14) and expanding v_IC^C(t) produces

  r_v = v_FC^C + v_IF^C − v_ICd^C + R_F^C δ.   (15)

Since the camera is rigidly attached to the UAV, there is zero translational velocity between the camera and the UAV, i.e., v_FC^C = 0. Since v_IF^F(t) is measured with respect to the UAV frame, a transformation to the camera frame yields

  r_v = R_F^C v_IF^F − v_ICd^C + R_F^C δ.   (16)

The time derivative of (16) is

  ṙ_v = Ṙ_F^C v_IF^F + R_F^C v̇_IF^F − v̇_ICd^C + Ṙ_F^C δ.   (17)

Similar to (4), Ṙ_F^C(t) can be expressed as

  Ṙ_F^C = −R_F^C S(ω_FC^F).   (18)

Substituting (3) for v̇_IF^F(t) and (18) for Ṙ_F^C(t) into (17) yields

  ṙ_v = −R_F^C S(ω_FC^F + ω_IF^F) v_IF^F + (1/m) R_F^C N_1(·) + g R_I^C e_3
        + (1/m) R_F^C F_f^F − v̇_ICd^C − R_F^C S(ω_FC^F) δ.   (19)

By adding and subtracting R_F^C(t) S(ω_IF^F(t)) δ, (19) becomes

  ṙ_v = −R_F^C S(ω_IC^F) v_IF^F + (1/m) R_F^C N_1(·) + g R_I^C e_3 + (1/m) R_F^C F_f^F
        − v̇_ICd^C − R_F^C S(ω_IC^F) δ + R_F^C S(ω_IF^F) δ.   (20)

Some skew-symmetric properties include [4]

  S(ω_IC^F) = R_C^F S(ω_IC^C) R_F^C   (21)

and

  S(a) b = −b × a.   (22)

Using (21) in (20), the fact that R_F^C R_C^F = I_{3×3}, and adding and subtracting S(ω_IC^C(t)) v_ICd^C(t) to the result, we obtain

  ṙ_v = −S(ω_IC^C) [R_F^C v_IF^F + R_F^C δ − v_ICd^C]
        + (1/m) R_F^C N_1(·) + g R_I^C e_3 + (1/m) R_F^C F_f^F − v̇_ICd^C
        + R_F^C S(ω_IF^F) δ − S(ω_IC^C) v_ICd^C.   (23)

Let the nonlinear term be defined as

  N_11 = (1/m) R_F^C N_1(·)   (24)

and the gravity term be defined as

  G_11 = g R_I^C e_3.   (25)

Using (22) on S(ω_IF^F(t)) δ in (23), and recognizing the bracketed term in (23) as r_v from (16) and (14), we obtain

  ṙ_v = −S(ω_IC^C) r_v + N_11 + G_11 − [S(ω_IC^C) v_ICd^C + v̇_ICd^C]
        + [(1/m) R_F^C F_f^F − R_F^C S(δ) ω_IF^F].   (26)

Substituting F_f^F(t) from (1) and S(δ) from (5) into the last term of (26) gives

  (1/m) R_F^C F_f^F − R_F^C S(δ) ω_IF^F
      = R_F^C ( [0 0 u_1/m]^T − [  0    −δ_3    δ_2 ;
                                  δ_3     0    −δ_1 ;
                                 −δ_2    δ_1     0  ] ω_IF^F )   (27)

which makes (26)

  ṙ_v = −S(ω_IC^C) r_v + N_11 + G_11 − [S(ω_IC^C) v_ICd^C + v̇_ICd^C]
        + R_F^C [  0      0     δ_3   −δ_2 ;
                   0    −δ_3     0     δ_1 ;
                  1/m    δ_2   −δ_1     0  ] [u_1 ; ω_IF^F].   (28)

3.2. Angular error formulation

The auxiliary signal definition in (16) and the subsequent open-loop system (28) capture the behavior of the translational velocity errors. A similar error term for the rotational velocity is now motivated. The rotational velocity error is defined as

  e_ω ≜ ω_IC^C − ω_ICd^C   (29)

where ω_IC^C(t) is the rotational velocity of the camera and ω_ICd^C(t) is the desired rotational velocity. Expanding ω_IC^C(t) produces

  e_ω = R_F^C (ω_IF^F + ω_FC^F) − ω_ICd^C.   (30)

Using (12), (30) can be rewritten as

  e_ω = R_F^C ω_IF^F + R_F^C J_C θ̇_C − ω_ICd^C.   (31)

3.3. Open-loop error dynamics

In preparation for the control design, it is useful to combine ṙ_v(t) from (28) and e_ω(t) from (31) into a single vector to obtain the open-loop error system
  [ ṙ_v ]   [ −S(ω_IC^C) r_v ]   [ −S(ω_IC^C) v_ICd^C − v̇_ICd^C ]   [ N_11 + G_11 ]
  [ e_ω ] = [     O_{3×1}    ] + [          −ω_ICd^C             ] + [   O_{3×1}   ] + B U   (32)

where U = [u_1 ; ω_IF^F ; θ̇_C] ∈ ℝ⁶ and B(t) ∈ ℝ^{6×6} collects the rotations and the input coupling,

  B = [ R_F^C    O_{3×3} ;      [  0      0     δ_3   −δ_2 | O_{3×2} ;
        O_{3×3}   R_F^C  ]  ·      0    −δ_3     0     δ_1 | O_{3×2} ;
                                  1/m    δ_2   −δ_1     0  | O_{3×2} ;
                                 O_{3×1}      I_{3×3}      |   J_C   ].

U(t) contains the six control inputs available to the controller; however, it enters through the matrix B(t). Instead of calculating U(t) directly, a new control signal Ū(t) is defined as

  Ū ≜ B U   (33)

where

  Ū = [ Ū_1 ; Ū_2 ] ∈ ℝ⁶,   Ū_1 ∈ ℝ³,  Ū_2 ∈ ℝ³.   (34)

With this new Ū(t) vector, (32) becomes

  [ ṙ_v ]   [ −S(ω_IC^C) r_v ]   [ −S(ω_IC^C) v_ICd^C − v̇_ICd^C ]   [ N_11 + G_11 ]   [ Ū_1 ]
  [ e_ω ] = [     O_{3×1}    ] + [          −ω_ICd^C             ] + [   O_{3×1}   ] + [ Ū_2 ]   (35)

in which Ū_1(t) and Ū_2(t) can be designed to control r_v(t) and e_ω(t).

3.4. Control Design and Stability Analysis

The first control input Ū_1(t) is designed using the non-negative scalar function V_1(t), formed from the auxiliary signal defined in (16),

  V_1 = ½ r_v^T r_v.   (36)

The time derivative of V_1(t) is found to be

  V̇_1 = r_v^T ṙ_v.   (37)

Substituting (35) into (37) yields

  V̇_1 = r_v^T [ Ū_1 − S(ω_IC^C) v_ICd^C − v̇_ICd^C + N_11 + G_11 − S(ω_IC^C) r_v ].   (38)

Ū_1(t) is designed to cancel out as many terms as possible. Leaving only v̇_ICd^C(t) and S(ω_IC^C(t)) r_v(t) untouched,

  Ū_1 = −k_r e_v − r_v ζ_1²(·)/ε_1 + S(ω_IC^C) v_ICd^C − G_11,   ζ_1 = ζ_1(‖v_IF^F‖).   (39)

ζ_1(·) is used to compensate for the unmodeled nonlinear terms. These can include effects such as air resistance, which is a function of velocity. In order to guarantee that ζ_1(·) can compensate for N_11(·), it must satisfy the inequality

  ζ_1(‖v_IF^F(t)‖) ≥ ‖N_11(·)‖.   (40)

To design Ū_2(t), use the non-negative scalar function

  V_2 = ½ e_ω^T e_ω.   (41)

The time derivative of V_2(t) is found to be

  V̇_2 = e_ω^T ė_ω.   (42)

To obtain ė_ω(t), take the time derivative of e_ω(t) from (35),

  ė_ω = (d/dt)Ū_2 − ω̇_ICd^C.   (43)

Substituting (43) into (42) for ė_ω(t) gives

  V̇_2 = e_ω^T ( (d/dt)Ū_2 − ω̇_ICd^C ).   (44)

In order to force V̇_2(t) to be negative, (d/dt)Ū_2 is chosen to cancel ω̇_ICd^C(t), with an extra term added to guarantee that V̇_2(t) is negative:

  (d/dt)Ū_2 = ω̇_ICd^C − k_i e_ω.   (45)

Integrating (45) gives the implementable form

  Ū_2 = −k_i ∫₀ᵗ e_ω dτ + ω_ICd^C.   (46)

Theorem 1: The control laws (39) and (46) guarantee that the translational velocity error (13) and the angular velocity error (29) are Globally Uniformly Ultimately Bounded (GUUB) in the sense that

  ‖e_v‖ ≤ α_11 e^{−α_12 t} + α_13   (47)
  ‖e_ω‖ ≤ α_21 e^{−α_22 t} + α_23.   (48)

Proof: To show that e_v(t) and e_ω(t) are stable, a Lyapunov candidate is defined by the addition of V_1(t) in (36) and V_2(t) in (41) to obtain

  V = ½ r_v^T r_v + ½ e_ω^T e_ω   (49)

which is positive definite. The next step is showing that the derivative has two parts: a negative definite part and a positive part that can be bounded by an arbitrarily small bounding constant. Start by taking the derivative of (49) to obtain

  V̇ = r_v^T ṙ_v + e_ω^T ė_ω.   (50)

Using ṙ_v(t) from (35) and (43) for ė_ω(t) and substituting into (50) gives

  V̇ = r_v^T [ Ū_1 − v̇_ICd^C + N_11 + G_11 − S(ω_IC^C) r_v − S(ω_IC^C) v_ICd^C ]
      + e_ω^T ( (d/dt)Ū_2 − ω̇_ICd^C ).   (51)

Taking Ū_1(t) from (39) and (d/dt)Ū_2 from (45) and substituting them into (51) yields

  V̇ = r_v^T [ −k_r r_v + k_r R_F^C δ − r_v ζ_1²(·)/ε_1 − v̇_ICd^C + N_11 − S(ω_IC^C) r_v
              + (G_11 − G_11) + (S(ω_IC^C) v_ICd^C − S(ω_IC^C) v_ICd^C) ]
      + e_ω^T ( −k_i e_ω + ω̇_ICd^C − ω̇_ICd^C ).   (52)

To show that the Lyapunov function derivative is negative definite, vector norms are applied and (40) is used to replace N_11(·) with ζ_1(·), yielding

  V̇ ≤ −k_r ‖r_v‖² + k_r ‖δ‖ ‖r_v‖ − ‖r_v‖² ζ_1²(·)/ε_1 + ‖v̇_ICd^C‖ ‖r_v‖
      + ζ_1(·) ‖r_v‖ − k_i ‖e_ω‖²,   (53)

where the term r_v^T S(ω_IC^C) r_v vanishes by skew-symmetry.
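In implementation, the two control laws reduce to simple vector computations. The fragment below is a hedged sketch of Ū_1 from (39) and Ū_2 from (46); the function and signal names, gain values, and the bounding function ζ_1 are placeholders of ours, not the authors' QMotor implementation.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix S(w) of eq. (5)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def u1_bar(e_v, r_v, v_IF_F, v_ICd_C, w_IC_C, R_I_C, k_r, eps1, zeta1, g=9.81):
    """Translational law, eq. (39):
    U1 = -k_r e_v - r_v zeta1^2/eps1 + S(w_IC^C) v_ICd^C - G11,  G11 = g R_I^C e3."""
    G11 = g * R_I_C @ np.array([0.0, 0.0, 1.0])          # gravity term, eq. (25)
    z = zeta1(np.linalg.norm(v_IF_F))                     # bound on N11, eq. (40)
    return -k_r * e_v - r_v * z**2 / eps1 + skew(w_IC_C) @ v_ICd_C - G11

def u2_bar(e_w_integral, w_ICd_C, k_i):
    """Rotational law, eq. (46): U2 = -k_i * integral(e_w) + w_ICd^C."""
    return -k_i * e_w_integral + w_ICd_C
```

In a simulation loop the integral in (46) would be accumulated as `e_w_integral += e_w * dt`, and ζ_1 would be chosen to dominate the unmodeled terms per (40), e.g. a drag-like bound such as `zeta1 = lambda s: c0 + c1 * s`.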
While the first two terms of the derivative in (53) are negative definite, the rest are not. The ζ_1 terms in (53) can be bounded such that

  ζ_1(‖v_IF^F‖) ‖r_v‖ − ‖r_v‖² ζ_1²(·)/ε_1 ≤ ε_1   (54)

because they are in the form of nonlinear damping according to Lemma A.10 in [5]. Notice that ε_1 was first introduced in (39) and is completely adjustable in the control input. The middle terms of (53) can be bounded such that

  ε_2 ≥ sup_{∀t} ( ‖v̇_ICd^C‖ + k_r ‖δ‖ )   (55)

because all of these terms are constants. Since ε_2 is multiplied by ‖r_v‖, it will have to be bounded also. To do this, consider the expression

  0 ≤ ( √λ_1 ‖r_v‖ − (1/√λ_1) ε_2 )²,   λ_1 ∈ ℝ⁺.   (56)

This can be expanded to say

  ‖r_v‖ ε_2 ≤ ½ ( λ_1 ‖r_v‖² + ε_2²/λ_1 ).   (57)

This finally bounds (53) to make

  V̇ ≤ −( k_r − λ_1/2 ) ‖r_v‖² − k_i ‖e_ω‖² + ε_1 + ε_2²/(2λ_1).   (58)

In order to keep the ‖r_v‖² term negative definite, it follows that

  k_r ≥ λ_1/2.   (59)

Define a constant λ_2 such that

  λ_2 = min( k_r − λ_1/2, k_i ),   (60)

the smaller of the two coefficients, so that a larger upper bound on (58) can be created using λ_2:

  V̇ ≤ −λ_2 ( ‖r_v‖² + ‖e_ω‖² ) + ε_1 + ε_2²/(2λ_1) = −2λ_2 V + ε_1 + ε_2²/(2λ_1).   (61)

Since λ_1 can be arbitrarily picked, choosing a larger k_r gain allows a larger λ_1, which makes the ε_2 term arbitrarily small. The same is true for ε_1, since it is also arbitrarily picked. Solving the differential inequality in (61) finally yields

  V ≤ V(0) e^{−2λ_2 t} + (2ε_1λ_1 + ε_2²)/(4λ_1λ_2),   (62)

which is in the form of (47) and (48). Note that λ_2 in (62) affects both the rate at which the exponential term approaches zero and the value of the bounding constant. Since λ_2 is determined by the gains through (60), increasing the gains makes the bound arbitrarily small and lets the Lyapunov function approach zero arbitrarily fast. ∎

Remark 1: It has been shown that ‖e_ω(t)‖ is GUUB according to the Lyapunov proof for V(t) as stated in Theorem 1. The proof also shows that ‖r_v(t)‖ is GUUB. According to (14), r_v(t) is e_v(t) plus an SO(3) rotation matrix times a constant vector, making e_v(t) GUUB. Both desired velocities, v_ICd^C(t) and ω_ICd^C(t), are bounded by design; therefore v_IC^C(t) and ω_IC^C(t) are bounded. The details of the signal chasing are covered in Appendix A.

4. Simulation

To simulate the controller, a two-computer simulation system is used. On a QNX Real-Time Operating System (RTOS) computer, a compiled QMotor program is executed to simulate the rigid-body dynamics in equations (2)-(4) and the control feedback in equations (39) and (46). The control inputs are formulated based on sensor velocity readings that come from the dynamics calculations.

A second computer, running Windows XP and FlightGear, communicates with the first computer via UDP. FlightGear is used to generate a virtual-reality camera view, and it can show the position and orientation of the camera and UAV, as seen in Figure 4, where the UAV is tilted but the camera view remains level.

Figure 4. FlightGear images (left: camera view; right: external view).

Table 1. DraganFlyer X-Pro Parameters
  Mass                   2.721 kg
  Max Thrust             35.586 N
  Max Pitch/Roll Torque  4.067 Nm
  Max Yaw Torque         2.034 Nm

The parameters used for the quad-rotor are stated in Table 1 and were measured on the DraganFlyer X-Pro.

5. Results

The simulation shows that the controller works, using the tilt-roll camera to augment the quad-rotor positioning capabilities.

As shown in the first 2 seconds of Figures 5 and 6, when the UAV is just hovering it stands still with no errors. To examine how the control reacts, a simple experiment is conducted by commanding the UAV to move left and right. The experiment is conducted with δ = 100, k_r = 1, and no k_i because there is no angular error in the simulation. To look at how the velocities and positions of the UAV react, the desired velocity profile seen in Figure 5 is applied to the y coordinate of the camera frame only. The desired velocities x_d and z_d remain zero and ω_ICd^C = 0. The actual velocities, seen in Figure 5, approach the desired values in a few seconds. It can also be seen that the x velocity changes when the camera moves left and right. This is because gravity is in the x direction of the camera frame, and sudden changes cause the quad-rotor to lift or fall a little. However, the controller does respond and stabilizes the x velocity to zero.
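The left-right experiment above can be mimicked with a drastically simplified one-axis analogue: a unit mass with direct force actuation and proportional velocity feedback. This is a toy stand-in of ours for the paper's coupled UAV/camera loop, with placeholder gains and a placeholder square-wave profile; it only illustrates how the commanded velocity is tracked within a few time constants, qualitatively like Figure 5.

```python
import numpy as np

def simulate(k_r=1.0, dt=0.01, t_end=50.0):
    """Toy one-axis velocity-tracking loop (not the paper's full model)."""
    t = np.arange(0.0, t_end, dt)
    # square-wave desired y-velocity, commanded after an initial 2 s hover
    v_d = np.where(t < 2.0, 0.0, 10.0 * np.sign(np.sin(2 * np.pi * t / 20.0)))
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        u = -k_r * (v[i - 1] - v_d[i - 1])   # velocity-error feedback
        v[i] = v[i - 1] + u * dt             # unit-mass Euler integration
    return t, v_d, v

t, v_d, v = simulate()
```

With k_r = 1 the closed loop has a one-second time constant, so each commanded step is essentially reached well before the next reversal, mirroring the few-second convergence described above.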
Figure 5. Translational velocities of the UAV: desired y_d and actual camera-frame velocities (δ = 100, k_r = 1).

Figure 6. Rotations of the UAV and camera about the camera z-axis (δ = 100, k_r = 1).

Figure 6 shows the rotation of the quad-rotor and the final rotation of the camera relative to the inertial frame. In order to move left and right, the UAV must tilt to achieve the desired trajectory. The graph shows the UAV tilting over 0.5 radians (29°), while the camera's final rotation only reaches about 0.15 radians (8.6°). The feel from the camera is much smoother than if there were no correction.

However, two problems result from this system. The first is that the camera also tilts a little in the opposite direction: when the UAV turns left 0.38 radians, the camera actually turns right 0.46 radians, resulting in the final 0.08 radian rotation. This gives viewers a counterintuitive feeling, because they are familiar with there being no correction. The second problem is that the final camera rotation does not return to zero, as seen at 50 seconds; the rotation settles slightly high. Both of these problems are due to the use of velocity-only feedback. Without specific angle measurements, it is impossible to overcome the drift incurred by integrating velocities over time. The drift error in the translational velocities is less detrimental, because the position error is not significant.

6. Conclusion

This paper shows the development of a nonlinear controller for use on a quad-rotor helicopter and a two-DOF camera to successfully create a fully actuated fly-by-camera frame system. The closed-loop velocity errors are shown to be Globally Uniformly Ultimately Bounded (GUUB) using only velocity information for feedback. The fly-by-camera system allows the camera and UAV to be controlled by the user in a more intuitive manner. Simulation verifies that the controller works based on the rigid-body model and that the fly-by-camera system is easy for anyone to fly.

References

[1] V.K. Chitrakaran, D.M. Dawson, J. Chen, and M. Feemster, "Vision Assisted Autonomous Landing of an Unmanned Aerial Vehicle," Proceedings of the IEEE Conf. on Decision and Control, Seville, Spain, December 2005, pp. 1465-1470.
[2] T.I. Fossen, Marine Control Systems: Guidance, Navigation, and Control of Ships, Rigs, and Underwater Vehicles, Marine Cybernetics, Trondheim, Norway, 2002.
[3] T. Hamel, R. Mahony, R. Lozano, and J. Ostrowski, "Dynamic Modelling and Configuration Stabilization for an X-4 Flyer," Proceedings of the IFAC World Congress, Barcelona, Spain, July 2002.
[4] M.W. Spong and M. Vidyasagar, Robot Dynamics and Control, John Wiley and Sons, 1989.
[5] M.S. de Queiroz, D.M. Dawson, S.P. Nagarkatti, and F. Zhang, Lyapunov-Based Control of Mechanical Systems, Birkhäuser, Boston, MA, 2000.

A. Appendix

According to Theorem 1 and the stability analysis from (50) to (62), V(t) in (62) is bounded. If the controller gain in (59) is selected to satisfy (58), then e_v(t), e_ω(t), and in turn r_v(t) are bounded. All desired trajectories v_ICd^C(t) and ω_ICd^C(t) are assumed to be bounded, and the transformation matrices R(Θ) and J^{−1}(Θ) ∈ L_∞ under the assumption that θ ≠ ±π/2, based on the definitions in (8) and (9). We can therefore conclude that v_IC^C(t) in (13) and ω_IC^C(t) in (29) are bounded. Thus v_IC^C(t) ∈ L_∞ from (16), which results in ẋ_IF^I(t) ∈ L_∞ in (2), and ω_IF^F(t) and ω_FC^F(t) are bounded by (30). Owing to ω_FC^F(t) ∈ L_∞, S(ω_FC^F(t)) is also bounded, and then Ṙ_F^C(t) in (18) and θ̇_C(t) in (12) are bounded. Due to ω_IF^F(t) ∈ L_∞, Θ̇_F^I(t) ∈ L_∞ in (6). N_11(·) and v̇_ICd^C(t) are upper bounded from (40) and (55). Consequently, the control input U(t) ∈ L_∞, since the signals in (39), (46), and (33) ∈ L_∞. Hence ṙ_v(t) ∈ L_∞ in (28) and v̇_IF^F(t) ∈ L_∞ in the modeling equation (3). Therefore, we can conclude that all signals in the velocity control of the fly-by-camera system are bounded.
