Computers and Mathematics with Applications 41 (2001) 1063-1076
www.elsevier.nl/locate/camwa

Randomization of Transfer Functions

G. Jumarie
1. INTRODUCTION
The literature on control systems disturbed by random noises is centred on disturbances in the form of Gaussian white noises, and there are two main reasons for that. First, as a first approximation, this noise is a suitable model for a large family of disturbances encountered in real systems, and second, we have at hand many theoretical results for this kind of noise. Nevertheless, with the advent of computer vision, in the very near future we shall have at hand control systems driven by dynamic vision devices, and it appears that, in this case, other white noises are involved in such systems and thus have to be considered in a complete analysis. This is exactly our concern in the following.
Our purpose is two-fold. First, we shall show that, in computer vision, as a result of the
basic definition of images in terms of pixels, fractional white noise of order 4 is quite relevant
in problems related to noisy observations. And second, we shall examine the incidence of this
This paper was written while the author, who is from the French part of the West Indies (Guadeloupe Island), was living in Canada. All the help and the support came from outside Canada.
0898-1221/01/$ - see front matter. © 2001 Elsevier Science Ltd. All rights reserved. Typeset by AMS-TeX.
PII: S0898-1221(00)00340-0
feature in control feedback systems via dynamic vision. But before, let us introduce the general
framework of the problem.
The motion of a rigid body in R³ can be described by a point in the Euclidean group of transformations, g(t) = (R(t), T(t)) = (rotation, translation), which acts on points of R³ via the equation

X(t + 1) = R(t)(X(t) − T(t)).    (1.1)
We measure the projection of each feature point X^i = (X_1^i, X_2^i, X_3^i), i = 1, ..., N and define the coordinates x^i ∈ R² of the generic point of the image plane as

x^i = (X_1^i / X_3^i, X_2^i / X_3^i).    (1.2)
In problems involving observation noises, it is customary to assume that x^i is measured up to some noise,

y^i(t) = x^i(t) + w^i(t),    (1.3)

where w^i(t) is a Gaussian white noise with zero mean and given covariance matrix. For further details, see, for instance, [1-7].
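The observation model (1.1)-(1.3) can be sketched numerically. In the following snippet, the rotation angle, translation, feature point, and noise level are illustrative values chosen for the example, not quantities taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(X):
    """Perspective projection of a 3-D point onto the image plane, as in eq. (1.2)."""
    return X[:2] / X[2]

# One rigid-body step X(t+1) = R(t)(X(t) - T(t)) for a single feature point.
theta = 0.05                        # small rotation about the z-axis (illustrative)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.1, 0.0, 0.0])       # illustrative translation

X = np.array([1.0, 2.0, 5.0])       # feature point in front of the camera (X_3 > 0)
X_next = R @ (X - T)

# Noisy observation y = x + w, with w ~ N(0, sigma^2 I), as in eq. (1.3).
sigma = 0.01
y = project(X_next) + sigma * rng.standard_normal(2)
```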
Our suggestion herein is that a meaningful alternative to the Gaussian white noise is the fractional Gaussian white noise of order 4, and we shall examine this point and some of its consequences in the following.
The paper is organized as follows. As a preliminary introduction, we shall give a brief background on the definition of complex-valued fractional Brownian motion of order n, C-(fBm)_n, via random walk in the complex plane. Then, we shall explain why the C-(fBm)_n is involved, in quite a natural way, in computer vision, and we shall exhibit some properties of complex-valued fractional Gaussian white noise of order 4. Given this prerequisite, we shall apply the central limit theorem to define the integral of fractional Gaussian white noise, and then we shall use this result to randomize transfer functions of linear control systems. The application to linear feedback systems will be outlined, as well as stochastic optimal control of order n.
PRELIMINARY REMARK. The standard Brownian motion (of order 2) is defined as the limit of the random walk on the real axis, and the latter involves a random variable X which takes on the values +1 and −1 with the respective probabilities 1/2. Basically, X is defined on the square roots of unity.
In order to extend the model to an order n, we need a random variable of which the first (n − 1) moments, including the moments of even orders, are zero. In quite a natural way, we are led to consider a complex random variable Z which runs on the complex roots of unity.
Let ω_k(n), k = 0, 1, ..., n − 1 denote the n complex roots of unity: ω_k(n) = exp{2ikπ/n}, i² = −1. Assume that at each instant 0, 1, 2, ..., the random variable R(n) ∈ C takes on the values ω_0(n), ω_1(n), ..., ω_{n−1}(n) with the respective probabilities p_0, p_1, ..., p_{n−1}, p_0 + p_1 + ··· + p_{n−1} = 1. Given a randomly selected value ω_j(n) for R(n), we consider the step length Δz, z ∈ C, Δz := Δx + iΔy (the symbol := means that the left side is defined by the right one). For each time instant k, we define the random variable w_k, with p + q = 1.
The random variable R_k(n)w_k defines a random walk in the complex plane, and assuming that we start at the origin z_0 = 0, the position of the moving point z = x + iy at the instant j is given by the expression

z_j = Σ_{k=0}^{j−1} R_k(n) w_k.    (2.2)
In the following, we shall more especially consider the case when p_0 = p_1 = ··· = p_{n−1} = 1/n and p = q = 1/2, and we then write simply R_k(n), w_k, and z_j. If ν denotes the variable of Fourier's transform, then the characteristic function of z_j is given by the expression

φ_{z_j}(ν) = (1 + ((iν)^n / n!)(Δz)^n + o(|Δz|^{2n}))^j.    (2.5)
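The walk above can be simulated directly. The sketch below is ours: the step length is scaled as Δz = j^{−1/n} so that (Δz)^n · j stays of order 1, and the probabilities p = q = 1/2 are folded into the step. The exactly vanishing moments of a single step on the roots of unity are what drive the limit law.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 4             # order of the walk (the order used later for pixel noise)
j = 1000          # number of steps
trials = 2000
dz = j ** (-1.0 / n)    # step length chosen so that (dz)**n * j = 1

# R_k(n): independent draws, uniform on the n-th roots of unity exp(2*pi*i*k/n)
k = rng.integers(0, n, size=(trials, j))
R = np.exp(2 * np.pi * 1j * k / n)

# z_j = dz * sum_k R_k(n), a realisation of the walk of eq. (2.2)
z = dz * R.sum(axis=1)

# The first n-1 moments of one step vanish exactly, the n-th equals 1.
roots = np.exp(2 * np.pi * 1j * np.arange(n) / n)
step_moments = [np.mean(roots ** m) for m in range(1, n + 1)]
```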
PROPOSITION 2.1. The limit of the complex random walk z_j defined by equation (2.4) is a stochastic process of which the probability density p(z, t) is a solution of the heat equation of order n

∂p(z, t)/∂t = ((−1)^n σ^n / n!) ∂^n p(z, t)/∂z^n,    (2.7)

with the initial condition p(z, 0) = δ(z).    (2.8)
One has the equalities

∫_{R²} (∂p(z, t)/∂t) e^{iνz} dz = ∂φ_z(ν, t)/∂t,    (2.13)

∫_{R²} (∂^n p(z, t)/∂z^n) e^{iνz} dz = (−1)^n (iν)^n φ_z(ν, t).    (2.14)

In the special case when p(z, t) is defined by expression (2.9), the equalities (2.11) and (2.12) can be combined in the form
An alternative to this definition, which may be useful for identifying fractional C-valued noises in practical problems, is the following one.
DEFINITION 2.1. Rademacher's random variable: it is the random variable R(n) which takes on the values ω_k(n), k = 0, 1, 2, ..., n − 1 with the uniform probability 1/n.
Its moments are given by

E{R^j(n)} = δ_{j,n},    j = 1, ..., n,

where δ_{j,n} is the Kronecker delta; clearly, δ_{j,n} = 0 when j ≠ n, while δ_{n,n} = 1.
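This Kronecker-delta moment property can be checked exactly, since the mean of e^{2πijk/n} over k = 0, ..., n − 1 vanishes unless n divides j:

```python
import numpy as np

def rademacher_moment(n, j):
    """E{R^j(n)} for R(n) uniform on the n complex n-th roots of unity."""
    k = np.arange(n)
    roots = np.exp(2 * np.pi * 1j * k / n)
    return np.mean(roots ** j)

# First n-1 moments vanish and the n-th equals 1 (the delta_{j,n} property).
table = {(n, j): rademacher_moment(n, j)
         for n in (2, 3, 4, 5) for j in range(1, n + 1)}
```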
DEFINITION 2.3. The continuous stochastic process b(t, n), t ≥ 0, n = 2, 3, 4, 5, is a Brownian motion of order n if it satisfies the following conditions:
(i) b(0, n) = 0 almost surely,
(ii) b(t, n) has stationary independent increments,
(iii) for every t > 0, the differential db(t, n) is given by the expression

db(t, n) = R(t, n) |w(t)| (dt)^{1/n},

where R(t, n) is a Rademacher white noise of order n, and w(t) is a Gaussian white noise with zero mean and variance σ²(t). It is assumed that R(t, n) and w(t) are mutually independent.
In the sequel, we shall introduce the coefficient Q(n) defined as

Q(n) := n! / (2^{n/2} (n/2)!),    n = 2k,    (2.18)

Q(n) := 2^{k+1} k! / √(2π),    n = 2k + 1.    (2.19)

The following results can be easily obtained.
In this form, w(t, n) appears to be the absolute value of a Gaussian white noise which rotates on the finite grid defined by the complex roots {ω_k(n)} of unity. For further details, see [8].
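With the reading of (2.18)-(2.19) given above, Q(n) is simply the absolute moment E{|w|^n} of a standard Gaussian variable, for which the closed form 2^{n/2} Γ((n + 1)/2)/√π is known. A quick cross-check of the two expressions:

```python
import math

def Q(n):
    """Coefficient Q(n) of eqs. (2.18)-(2.19) as read here:
    n!/(2^(n/2) (n/2)!) for n = 2k, and 2^(k+1) k!/sqrt(2*pi) for n = 2k+1."""
    if n % 2 == 0:
        k = n // 2
        return math.factorial(n) / (2 ** k * math.factorial(k))
    k = (n - 1) // 2
    return 2 ** (k + 1) * math.factorial(k) / math.sqrt(2 * math.pi)

def abs_gaussian_moment(n):
    """Exact E{|w|^n} for w ~ N(0, 1): 2^(n/2) Gamma((n+1)/2)/sqrt(pi)."""
    return 2 ** (n / 2) * math.gamma((n + 1) / 2) / math.sqrt(math.pi)
```

For instance, Q(4) = 4!/(4 · 2!) = 3, which is the familiar fourth moment of a standard Gaussian variable.

```python
check = all(abs(Q(n) - abs_gaussian_moment(n)) < 1e-9 for n in range(2, 7))
```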
SUPPORT TO THIS CONJECTURE. Assume that the image is defined in the (x₁, x₂)-plane and let

z := x₁ + ix₂.    (3.2)

The displacement of objects can be described by equation (1.1), which we rewrite here for convenience:

z(t + 1) = R(t)(z(t) − T(t)),    (3.3)
where T(t) and R(t) define a translation and a rotation, respectively. In a continuous-time modelling, we shall write

ż(t) = Ω(t) z(t) + Ṫ(t),    (3.4)

again with the observation

z'(t) = z(t) + w(t).    (3.5)
Here, in our computer vision problem, z(t) is the coordinate of a pixel of the image, and one may safely assume that z'(t) is a neighbouring pixel of z(t): this is the modelling which probably involves the smallest amount of assumptions.
In other words, w(t) can be thought of as a C-valued random variable which takes on the values Δρ exp{2ikπ/4}, k = 0, 1, 2, 3 with the uniform probability 1/4, which amounts to replacing equation (3.5) by

z'(t) = z(t) + w(t, 4),    (3.6)

where w(t, 4) is the random walk of order 4.
The continuous version of this observation equation would be

dz'(t) = z(t) dt + db(t, 4).    (3.7)
Consider the 3 x 3 pixel matrix (B', B, A'), (C, M, A), (C', D, D') = (first row), (second row), (third row), in which M is the centre of the square (B', A', D', C').
When M is observed with some measurement errors, then in quite a natural way, it will be identified (or confused) with one of its neighbouring points. At first glance, this observed point might be an element either of the set {A, B, C, D} or of the set {A', B', C', D'}. With the notation of equation (2.4), a modelling of this error term Δz_j at the discrete time j can be written in the form

Δz_j = β_j R_j(4) w_j + √2 (1 − β_j) R_j(4) w_j e^{iπ/4},    (3.8)

where β_j is a Bernoulli random variable (β_j = 1 or 0 with the respective probabilities p and q, p + q = 1).
The presence of the Bernoulli variable introduces some difficulties of a theoretical nature, but simplification can occur in some special cases of interest.
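A sketch of this pixel-error model, under the assumption that nearest neighbours lie at distance Δρ and diagonal neighbours at √2 Δρ with a π/4 phase (our reading of the model; the probability p and step length below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def pixel_error(p=0.5, drho=1.0, size=10000):
    """Samples of the error term Delta z_j: with probability p the observed
    point is one of the 4 nearest neighbours {A, B, C, D}, otherwise one of
    the diagonal neighbours {A', B', C', D'} at distance sqrt(2)*drho."""
    beta = rng.random(size) < p            # Bernoulli variable beta_j
    k = rng.integers(0, 4, size)           # Rademacher draw R_j(4)
    R4 = np.exp(1j * np.pi * k / 2)        # 4th roots of unity
    nearest = drho * R4
    diagonal = np.sqrt(2) * drho * R4 * np.exp(1j * np.pi / 4)
    return np.where(beta, nearest, diagonal)

dz = pixel_error()
```

Every sample lands either on the axis-aligned grid (modulus Δρ) or on the diagonal grid (modulus √2 Δρ), as the two sets of neighbouring pixels require.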
(i) If we restrict ourselves to the nearest points {A, B, C, D}, one then has the equality β_j ≡ 1 (p = 1).
(ii) If both the nearest and the diagonal points are retained, the characteristic function of the limiting process reads

φ_b(ν, t) = exp{((iν)⁴ / 4!)(1 + √2 e^{iπ/4}) σ⁴ t}.    (3.11)
(iii) A third approximate model consists in working as if all the points A, A', B, B', C, C', D, D' were on the same circle, in which case one will write
Despite the fact that the maximum entropy principle is by now well known, we recall it here for the convenience of the reader.
MAXIMUM ENTROPY PRINCIPLE. Suppose we know that a system has a set of possible states x_i with unknown probabilities p(x_i), but we know that the latter satisfy some given constraints.
The same principle applies to a continuous probability density; for the pair Z := X + iY, one has the entropy

H(X, Y) := −∫_{R²} p(z) ln p(z) dx dy,

where the notation H(X, Y) emphasizes that we are dealing with the entropy of the pair (X, Y), subject to the constraint

∫_{R²} |z|^{2n} p(z) dx dy = 2σ^{2n},    (4.4)

where the coefficient 2 is introduced to take account of the fact that we are dealing with the pair (X, Y).
Applying the MEP with the Lagrange parameter λ and the so-called partition function Q(λ) (we do not use the standard notation Z(λ) since here Z denotes the variable Z := X + iY) yields

p_{2n}(z) = e^{−λ|z|^{2n}} / Q(λ),    (4.6)

with

λ = (2nσ^{2n})^{−1},    (4.7)

Q(λ) := πλ^{−1/n} Γ(1 + 1/n),    (4.8)

or, more explicitly,

p_{2n}(z) = exp{−|z|^{2n} / (2nσ^{2n})} / (π(2n)^{1/n} σ² Γ(1 + 1/n)).    (4.9)
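The partition function can be cross-checked numerically: in polar coordinates the integral ∫_{R²} exp(−λ|z|^{2n}) dx dy reduces to 2π ∫₀^∞ r exp(−λr^{2n}) dr, which the sketch below compares with the closed form πλ^{−1/n} Γ(1 + 1/n) (here λ and n are set to the values of Lemma 4.1 below, with σ = 1):

```python
import math
import numpy as np

def partition_function(lam, n, rmax=10.0, steps=200000):
    """Numerical value of Q(lambda) = int_{R^2} exp(-lam |z|^{2n}) dx dy,
    computed as 2*pi * int_0^rmax r exp(-lam r^{2n}) dr by the trapezoid rule."""
    r = np.linspace(0.0, rmax, steps)
    f = r * np.exp(-lam * r ** (2 * n))
    return 2 * math.pi * float(np.sum((f[1:] + f[:-1]) / 2) * (r[1] - r[0]))

lam, n = 0.25, 2    # lambda = 1/(2 n sigma^{2n}) with sigma = 1, n = 2
Q_num = partition_function(lam, n)
Q_closed = math.pi * lam ** (-1.0 / n) * math.gamma(1 + 1.0 / n)
```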
In the following, we shall more especially consider the special case n = 2, and we have the
following result.
LEMMA 4.1. Assume that the random variable Z = X + iY is defined by the probability density

p₄(z) = (σ²π^{3/2})^{−1} exp{−|z|⁴ / (4σ⁴)};    (4.10)

then the moment E{Z⁴} is given by equation (4.12).
(ii) Next, according to the various signs of the product XY in the plane, without any calculus, one has the equality

E{XY} = 0.    (4.17)
The point of importance for our purpose is that the C-(fBm)_n is stable in Lévy's sense; loosely speaking, the sum of independent C-(fBm)_n is also a C-(fBm)_n. This can be detailed as follows.
(i) Assume that b₁(t, n) and b₂(t, n) are two independent C-(fBm)_n with the respective characteristic functions φ₁(ν, t) = exp{((iν)^n / n!) σ₁^n t} and φ₂(ν, t) = exp{((iν)^n / n!) σ₂^n t}; then the characteristic function of b₁(t, n) + b₂(t, n) is

φ(ν, t) = φ₁(ν, t) φ₂(ν, t) = exp{((iν)^n / n!) σ^n t},    (4.19)

with

σ^n = σ₁^n + σ₂^n.    (4.20)

(iv) Refer to the C-(fBm)_n's b₁(t, n) and b₂(t, n) in (i); then the weighted combination p₁ b₁(t, n) + p₂ b₂(t, n), where p₁ and p₂ are positive real parameters, is also a C-(fBm)_n with the characteristic function

φ(ν, t) = exp{((iν)^n / n!)(p₁^n σ₁^n + p₂^n σ₂^n) t}.    (4.23)

This last result is a direct consequence of (i), (ii), and (iii) all together.
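The additivity σ^n = σ₁^n + σ₂^n behind (4.19)-(4.20) can be verified exactly on the discrete grid variables: for independent Z_i = s_i R_i(n), with R_i uniform on the nth roots of unity, all cross terms in the binomial expansion of E{(Z₁ + Z₂)^n} vanish because the first n − 1 moments of each R_i are zero. The discrete stand-in below is ours, not the paper's notation.

```python
from itertools import product

import numpy as np

def nth_moment_of_sum(n, s1, s2):
    """Exact E{(Z1 + Z2)^n} for independent Z_i = s_i * R_i(n), with R_i(n)
    uniform on the n-th roots of unity; equals s1**n + s2**n."""
    roots = np.exp(2 * np.pi * 1j * np.arange(n) / n)
    vals = [(s1 * r1 + s2 * r2) ** n for r1, r2 in product(roots, roots)]
    return np.mean(vals)
```

For n = 4, s₁ = 1, s₂ = 2 the exact answer is 1⁴ + 2⁴ = 17, illustrating that nth moments, not variances, are the additive quantities here.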
LEMMA 5.1. Let z(t) denote a C-valued fractional stochastic process with independent increments, of which the nth moment is defined by σ^n(t), and let f(t) denote a given nonrandom complex-valued function. Then, the integral

I := ∫_a^b f(t) z(t) dt    (5.3)

can be defined in Itô's sense, and it is a random variable of which the characteristic function is

φ_I(ν) = exp{((iν)^n / n!) ∫_a^b f^n(τ) σ^n(τ) dτ}.    (5.4)
PROOF.
STEP (i). The lemma holds when z(t) is in the form

z(t) = z'(t) w(t, n),    (5.5)

where w(t, n) is a fractional Gaussian white noise of order n with the moment σ^n(t), and z'(t) is a stochastic process independent of w(t, n).
(i) Indeed, using Maruyama's notation, we shall rewrite (5.3) in the following form, which is more consistent with Itô's framework, that is,

I = ∫_a^b f(t) z'(t) w(t, n) (dt)^{1/n}.    (5.6)

(ii) This Itô integral of order n is the limit, in the nth-moment mean sense,

I = lim_{N→∞} (Δt)^{1/n} Σ_{j=0}^{N−1} f(t_j) z'(t_j) [w(t_{j+1}) − w(t_j)].
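For n = 2 the Itô sum above reduces to the classical integral I = ∫₀¹ f(t) dB(t), whose variance ∫₀¹ f²(t) dt is the n = 2 instance of the characteristic function (5.4). A Monte Carlo sketch with the illustrative choice f(t) = e^{−t} (our example, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

steps, paths = 500, 20000
dt = 1.0 / steps
t = np.arange(steps) * dt
f = np.exp(-t)                                          # sample nonrandom f(t)
dB = np.sqrt(dt) * rng.standard_normal((paths, steps))  # Brownian increments
I = (f * dB).sum(axis=1)                                # Riemann-Ito sums

var_mc = I.var()
var_exact = (1.0 - np.exp(-2.0)) / 2.0                  # \int_0^1 e^{-2t} dt
```

The empirical variance of the discretized sums approaches ∫₀¹ e^{−2t} dt as the step size shrinks and the number of paths grows.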
STEP (ii). Assume now that z(t) is not given in the form of equation (5.5); then we shall determine a stochastic process z'(t) such that the representation (5.5) holds. As a result, if one refers to the modelling of w(t, n) as a rotating Gaussian white noise on the grid defined by {ω_k(n)}, one has the equalities

|z(t)| = |z'(t)| |w(t)|,    (5.14)

θ(t) = θ'(t) + θ_w(t),    (5.15)
In the frequency analysis of linear feedback systems, one of the basic tools (at least the classical one) is the so-called Laplace transform defined by the expression

Z(s) := ∫_0^∞ e^{−st} z(t) dt    (5.16)

     = ∫_0^∞ e^{−iωt} z(t) dt,    s = iω.    (5.17)
When z(t) is a fractional stochastic process with independent increments, then according to Lemma 5.1, Z(s) itself is a fractional random variable with the nth moment defined by equation (5.4). In the special case when z(t) is the fractional Gaussian white noise w(t, n) with the nth moment σ^n(t), then its Laplace transform W(s, n) is a fractional Gaussian variable of order n with the nth moment

E{W^n(s, n)} = ∫_0^∞ e^{−nst} σ^n(t) dt.    (5.18)
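Equation (5.18) can be checked in the classical case n = 2 with constant σ, where W(s, 2) = ∫₀^∞ e^{−st} dB(t) should satisfy E{W²} = σ²/(2s). The truncation horizon and discretization below are illustrative choices for the simulation:

```python
import numpy as np

rng = np.random.default_rng(4)

s, sigma = 1.0, 1.0
T, steps, paths = 8.0, 1000, 10000   # truncation of the infinite horizon
dt = T / steps
t = np.arange(steps) * dt
dB = sigma * np.sqrt(dt) * rng.standard_normal((paths, steps))
W = (np.exp(-s * t) * dB).sum(axis=1)   # discretized Laplace transform of dB

second_moment = (W ** 2).mean()         # compare with sigma**2/(2*s)
```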
Substituting the expression db(t, n) = R(t, n)|w(t)|(dt)^{1/n} of equation (6.1) yields

W(s, n) = ∫_0^∞ e^{−st} R(t, n) |w(t, n)| (dt)^{1/n},    (6.2)

so that, as a result, W(s, n) can be thought of as if it were the Laplace transform of a rotating Gaussian white noise.
Assume that we are dealing with the nonrandom C-valued signal z(t) and that the latter is disturbed by a pixel noise described by a fractional Gaussian white noise w(t, n), in such a manner that the actual signal is not z(t) but rather z(t) + w(t, n). Taking Laplace transforms, the observed transform is Z(s) + W(s, n), where W(s, n) is the fractional noise defined by equation (6.5), and the initial stochastic problem in time is so converted into a stochastic problem with the variable s.
In the next section, we shall show how this simple remark provides a statistical approach to the analysis of linear systems described in terms of transfer functions.
Consider a feedback system in which W(s, n) is the Laplace transform of a given disturbing fractional Gaussian white noise of order n, w(t, n); R(s) is the Laplace transform of the input; and C(s) is the Laplace transform of the output.
C(s) and R(s) are related by equation (7.1), and in the absence of noise, it is well known that the stability of the system is determined by the characteristic equation

1 + G(s)H(s) = 0,    (7.4)

which can be studied either by the corresponding root locus or by the Nyquist locus (see, for instance, [13]). Rearranging equation (7.1) provides

C(s)/R(s) = G(s) / (1 + G(s)(H(s) − W(s, n)G(s)) / (1 + W(s, n))).    (7.5)
where T has a given fixed value in the cost functional (8.2). Obviously, it is assumed that such a minimization makes sense; in other words, G will be explicitly a positive functional.
It is well known that, since there is no single optimal trajectory, there is no stochastic equivalent to the Euler-Lagrange equations, and that, consequently, it is necessary to work via the principle of optimality. The classical theory (see, for instance, [14]) can be extended as follows to the optimal control of stochastic fractional dynamics. We consider the stochastic value function defined by equation (8.3); therefore,

dV*/dt = γ + V_x* f₁ + (1/n!) V_x*^{(n)} f₂^n σ^n.    (8.6)

(iii) The stochastic principle of optimality is obtained by combining equations (8.4) and (8.6).
9. CONCLUDING REMARKS
For convenience (from the theoretical standpoint), we are used to representing noisy observations in control systems by means of Gaussian white noises. Nevertheless, in the forthcoming trend of control systems involving computer vision, it appears that this modelling is not quite satisfactory, and that, as a consequence of the definition of a computerized image in terms of pixels, a modelling via complex-valued fractional Gaussian white noise would be much more suitable.
This feature forces us to revisit all the basic problems of control theory, and in the present preliminary study, we restricted ourselves to the classical approach via transfer functions of linear systems. But instead of examining how the usual approach via spectral density can be generalized or merely adapted to this case, we rather chose another point of view, which basically considers Laplace transforms of white stochastic processes as random variables. In this way, one can develop a statistical approach to the analysis of linear control systems defined in terms of transfer functions.
The point of importance is the presence of a new family of fractional Gaussian white noises in image processing.
REFERENCES
1. B. Espiau, F. Chaumette and P. Rives, A new approach to visual servoing in robotics, IEEE Trans. on Robotics and Autom. 8, 313-326 (1992).
2. O. Faugeras, Three Dimensional Vision, a Geometric Viewpoint, MIT Press, (1993).
3. D.B. Gennery, Visual tracking of known 3-dimensional objects, Int. J. Computer Vision 7 (3), 243-270 (1992).
4. B. Horn, Robot Vision, MIT Press, (1986).
5. H.C. Longuet-Higgins, A computer algorithm for reconstructing a scene from two projections, Nature 293, 133-135 (1981).
6. S. Maybank, Theory of reconstruction from image motion, In Information Sciences, Vol. 28, Springer-Verlag, (1992).
7. S. Soatto, R. Frezza and P. Perona, Recursive motion estimation on the essential manifold, In Proc. 3rd Europ. Conf. Computer Vision (Edited by J.-O. Eklundh), LNCS Series, Vol. 800-801, pp. 61-72, Springer-Verlag, Stockholm, (1994).
8. G. Jumarie, A new approach to complex-valued fractional Brownian motion via rotating white noise, Chaos, Solitons and Fractals 9 (6), 881-893 (1998).
9. J. Uffink, Can the maximum entropy principle be explained as a consistency requirement?, Studies in the History and Philosophy of Modern Physics 26, 223-261 (1995).
10. J. Uffink, The constraint rule of the maximum entropy principle, Studies in the History and Philosophy of Modern Physics 27, 47-79 (1996).
11. G. Jumarie, A new approach to fractional Brownian motion of order n via random walk in the complex plane, Chaos, Solitons and Fractals 10 (7), 1193-1212 (1999).
12. G. Jumarie, Fractional Brownian motion with complex variance via random walk in the complex plane and applications, Chaos, Solitons and Fractals 11 (7), 1099-1111 (2000).
13. R.C. Dorf, Modern Control Systems, Addison-Wesley, New York, (1989).
14. R.F. Stengel, Stochastic Optimal Control, Wiley, New York, (1986).