
PERGAMON Computers and Mathematics with Applications 41 (2001) 1063-1076
www.elsevier.nl/locate/camwa

Randomization of Transfer Functions in
Control Systems via Computer Vision
with Pixel Noises of Order n

G. JUMARIE
Department of Mathematics, University of Quebec at Montreal
P.O. Box 8888, Downtown Station
Montreal, Quebec, H3C 3P8, Canada
jumarie.guy@uqam.ca

(Received March 1999; accepted April 1999)

Abstract--Complex-valued fractional Brownian motion of order n can be defined either as rotating Gaussian white noise on the net defined by the nth roots of the unity, or as the limit of a random walk in the complex plane. After a brief background, one shows that this stochastic process is quite relevant in computer vision, as a result of the definition of an image in terms of pixels. It follows that in control systems involving numerical vision, one will have to consider disturbances in the form of fractional white noise of order n with independent increments, and the present paper deals with this problem. By using the central limit theorem, it is possible to consider the Laplace transform of such a process as a fractional Gaussian variable of order n of which the nth moment is the integral of the nth moment of the white noise. One can then use this result to carry on a statistical analysis of linear feedback systems subject to disturbing fractional white noises (or pixel noises) in control via computer vision. Stochastic optimal control of order n is outlined. © 2001 Elsevier Science Ltd. All rights reserved.

Keywords--Feedback systems, Computer vision, Dynamic vision, Fractional noises, Transfer function.

1. INTRODUCTION

The literature on control systems disturbed by random noises is centred on disturbances
in the form of Gaussian white noises, and there are two main reasons for that. First, as a first
approximation, this noise suitably describes a large family of disturbances which are
encountered in real systems, and second, we have at hand many theoretical results for this kind
of noise. Nevertheless, with the advent of computer vision, in the very near future we shall have
at hand control systems driven by dynamic vision devices, and it appears that in this case, there
are other white noises which are involved in such systems, and thus, have to be considered in a
complete analysis. This is exactly our concern in the following.
Our purpose is two-fold. First, we shall show that, in computer vision, as a result of the
basic definition of images in terms of pixels, fractional white noise of order 4 is quite relevant
in problems related to noisy observations. And second, we shall examine the incidence of this

This paper was written while the author, who is from the French part of the West Indies (Guadeloupe Island),
was living in Canada. All the help and the support came from outside Canada.

0898-1221/01/$ - see front matter © 2001 Elsevier Science Ltd. All rights reserved. Typeset by AMS-TeX
PII: S0898-1221(00)00340-0

feature in control feedback systems via dynamic vision. But before, let us introduce the general
framework of the problem.
The motion of a rigid body in R^3 can be described by a point in the Euclidean group of
transformations, g(t) = (R(t), T(t)) = (rotation, translation), which acts on points of R^3 via the
equation

    X(t + 1) = R(t)(X(t) - T(t)).    (1.1)

We measure the projection of each feature point X^i = (X_1^i, X_2^i, X_3^i), i = 1, ..., N, and define
the coordinates x^i ∈ R^2 of the generic point of the image plane as

    x^i = (X_1^i / X_3^i, X_2^i / X_3^i).    (1.2)

In problems involving observation noises, one usually assumes that x^i is measured up to some
noise,

    y^i(t) = x^i(t) + w^i(t),    (1.3)

where w^i(t) is a Gaussian white noise with zero mean and given covariance matrix. For further
details, see, for instance, [1-7].
Our suggestion herein is that a meaningful alternative to the Gaussian white noise is the frac-
tional Gaussian white noise of order 4, and we shall examine this point and some of its conse-
quences in the following.
The paper is organized as follows. As a preliminary introduction, we shall give a brief back-
ground on the definition of complex-valued fractional Brownian motion of order n, C-(fBm)_n,
via random walk in the complex plane. Then, we shall explain why the C-(fBm)_n is
involved, in quite a natural way, in computer vision, and we shall put in evidence some properties
of complex-valued fractional Gaussian white noise of order 4. Given this prerequisite, we shall
apply the central limit theorem to define the integral of fractional Gaussian white noise, and then
we shall use this result to randomize transfer functions of linear control systems. The application
to linear feedback systems will be outlined, as well as stochastic optimal control of order n.

2. FRACTIONAL BROWNIAN MOTION OF ORDER n

2.1. Random Walk in the Complex Plane

PRELIMINARY REMARK. The standard Brownian motion (of order 2) is defined as the limit of
the random walk on the real axis, and the latter involves a random variable X which takes on
the values +1 and -1 with the respective probabilities 1/2. Basically, X is defined on the roots
of order 2 of the unity.
In order to extend the model to an order n, we need a random variable of which the first (n - 1)
moments, including the moments of even orders, are zero. In quite a natural way, we are led to
consider a complex random variable Z which runs on the complex roots of the unity.

Derivation of the model

Let w_k(n), k = 0, 1, ..., n - 1 denote the n complex roots of the unity: w_k(n) = exp{2ikπ/n},
i^2 = -1. Assume that at each instant 0, 1, 2, ..., the random variable R_j(n) ∈ C takes on the
values w_0(n), w_1(n), ..., w_{n-1}(n) with the respective probabilities p_0, p_1, ..., p_{n-1},
p_0 + p_1 + ... + p_{n-1} = 1. Given the randomly selected value w_j(n) for R_j(n), we consider the
step length Δz, z ∈ C, Δz := Δx + iΔy (the symbol := means that the left side is defined by the
right one). For each time instant k, we define the random variable

    w̃_k = +Δz, with pr{w̃_k = +Δz} = p,
        = -Δz, with pr{w̃_k = -Δz} = q,    (2.1)

with p + q = 1.

The random variable R_k(n)w̃_k defines a random walk in the complex plane, and assuming
that we start at the origin z_0 = 0, the position of the moving point z = x + iy at the instant j
is given by the expression

    z_j = Σ_{k=0}^{j-1} R_k(n) w̃_k,    (2.2)

which is merely the solution of the difference equation

    z_{j+1} = z_j + R_j(n) w̃_j.    (2.3)

In the following, we shall more especially consider the case when p_0 = p_1 = ... = p_{n-1} = 1/n
and p = q = 1/2, to then write R_j, w_j, and z_j for short, and

    z_{j+1} = z_j + R_j w_j.    (2.4)
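The recursion (2.4) is easy to simulate. The following sketch (the helper names are ours, not the paper's) draws R_j uniformly on the nth roots of the unity and w_j = ±Δz with probability 1/2; note that for n = 4 every increment s = R_j w_j satisfies s^4 = (Δz)^4 identically, while its first three empirical moments average out to zero, which is exactly the property the construction is designed to produce.

```python
import cmath
import random

def roots_of_unity(n):
    """The grid w_k(n) = exp(2*i*pi*k/n), k = 0, ..., n - 1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

def step(n, dz, rng):
    """One increment R_j * w_j: R_j uniform on the nth roots of the unity,
    w_j = +dz or -dz, each with probability 1/2 (the case p = q = 1/2)."""
    R = roots_of_unity(n)[rng.randrange(n)]
    w = dz if rng.random() < 0.5 else -dz
    return R * w

def random_walk(n, steps, dz=1.0, seed=0):
    """Path z_0 = 0, z_{j+1} = z_j + R_j w_j of equation (2.4)."""
    rng = random.Random(seed)
    z, path = 0j, [0j]
    for _ in range(steps):
        z += step(n, dz, rng)
        path.append(z)
    return path
```

For n = 2 this reduces to the ordinary random walk on the real axis; increasing n drives the first n - 1 moments of the increments to zero, which is the defining feature used in Section 2.2.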

If v denotes the variable of Fourier's transform, then the characteristic function of z_j is given
by the expression

    φ_{z_j}(v) := E{e^{ivz_j}} = [ (1/2n) Σ_{k=0}^{n-1} ( exp{iv w_k Δz} + exp{-iv w_k Δz} ) ]^j,    (2.5)

and one can then state the following lemma.

LEMMA 2.1. For small |Δz|, one has the equivalence

    φ_{z_j}(v) = ( 1 + (1/n!)(iv)^n (Δz)^n + o(|Δz|^{2n}) )^j,    (2.6)

where o(·) denotes Landau's symbol.

PROOF. One expands the exponentials in Taylor's series and one then uses the properties of the
complex roots of the unity.

2.2. Fractional Brownian Motion via Complex Random Walk

One can now state the following result.

PROPOSITION 2.1. The limit of the complex random walk z_j defined by equation (2.4) is a
stochastic process of which the probability density p(z, t) is a solution of the heat equation
of order n

    ∂p(z,t)/∂t = (-1)^n (σ^n/n!) ∂^n p(z,t)/∂z^n,    (2.7)

where σ denotes a positive constant.


PROOF. (i) We have to take the limit of (2.6) as |Δz| ↓ 0, and to this end, we shall use the
standard approach. We define j = t/Δt to write

    φ_{z_j}(v) = ( 1 + (1/n!)(iv)^n (Δz)^n + o(|Δz|^{2n}) )^{t/Δt}.    (2.8)

In order to have a sensible result, we shall require that

    (Δz)^n / Δt → σ^n,    as Δz, Δt ↓ 0,

where σ is a complex-valued parameter, say

    σ := σ_x + iσ_y    (2.9)
    = |σ| exp{iθ},    (2.10)

to obtain the limit φ_{z_j}(v) → φ_z(v), with

    φ_z(v) = exp{ (1/n!)(iv)^n σ^n t }.    (2.11)

(ii) We then have the density

    p(z,t) = (1/2π) ∫_R e^{-ivz} exp{ (1/n!)(iv)^n σ^n t } dv,    (2.12)

which satisfies the heat equation of order n (2.7).


(iii) An alternative to the proof of equation (2.7) is as follows. We notice that, according to
the definition φ_z(v,t) = E{exp{ivz}} of the characteristic function, one has the equalities

    ∫_R (∂p(z,t)/∂t) e^{ivz} dz = ∂φ_z(v,t)/∂t,    (2.13)

    ∫_R (∂^n p(z,t)/∂z^n) e^{ivz} dz = (-1)^n (iv)^n φ_z(v,t).    (2.14)

In the special case when φ_z(v,t) is defined by expression (2.11), the equalities (2.13) and (2.14)
can be combined in the form

    ∫_R [ ∂p(z,t)/∂t - (-1)^n (σ^n/n!) ∂^n p(z,t)/∂z^n ] e^{ivz} dz = 0,    (2.15)

which holds for every real-valued v; therefore, equation (2.7).

2.3. Modelling via Rotating Gaussian White Noise

An alternative to this definition, which may be useful to identify fractional C-valued noises in
practical problems, is the following one.

DEFINITION 2.1. Rademacher's random variable: it is the random variable R(n) which takes on
the values w_k(n), k = 0, 1, 2, ..., n - 1 with the uniform probability 1/n.

DEFINITION 2.2. A Rademacher's white noise R(t, n) of order n is a complex-valued stochastic
process which satisfies the following condition: at each instant t, R(t, n) is a Rademacher's r.v.
of order n, and which moreover is such that

    E{R^j(t,n) R^j(τ,n)} = δ_{j,n} δ(t - τ),    (2.16)

where δ_{j,n} is the Kronecker delta; clearly, δ_{j,n} = 0 when j ≠ n while δ_{n,n} = 1.

DEFINITION 2.3. The continuous stochastic process b(t,n), t ≥ 0, n = 2, 3, 4, 5, ..., is a Brownian
motion of order n defined in R^{1/n} if it satisfies the following conditions:
(i) b(0, n) = 0 almost surely,
(ii) b(t, n) has stationary independent increments,
(iii) for every t > 0, the differential db(t, n) is given by the expression

    db(t,n) = R(t,n) |w(t)| (dt)^{1/n},    n = 2, 3, 4, 5, ...,    (2.17)



where R(t, n) is a Rademacher white noise of order n, and w(t) is a Gaussian white noise with
zero mean and variance σ^2(t). It is assumed that R(t, n) and w(t) are mutually independent.
In the sequel, we shall introduce the coefficient Q(n) defined as

    Q(n) := n! / ( 2^{n/2} (n/2)! ),    n = 2k,    (2.18)

    Q(n) := 2^{k+1} k! / (2π)^{1/2},    n = 2k + 1.    (2.19)

The following results can be easily obtained.

LEMMA 2.2. (Parallel to Lemma 4.1.) One has the equalities

    E{db^j(t,n)} = 0,    j = 1, ..., n - 1,    (2.20)
    E{db^n(t,n)} = Q(n) σ^n(t) dt.    (2.21)

In this form, w(t, n) appears to be the absolute value of a Gaussian white noise which rotates
on the finite grid defined by the complex roots {w_k(n)} of the unity. For further details, see [8].
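Definition 2.3 and the lemma above can be checked numerically. In the sketch below (our own helper names), Q(n) is the nth absolute moment E{|w|^n}/σ^n of a Gaussian variable, which is what (2.18),(2.19) express; sampling increments db = R(t,n)|w(t)|(dt)^{1/n} then reproduces (2.20),(2.21) empirically.

```python
import cmath
import math
import random

def q_coeff(n):
    """Q(n) = E{|w|^n} / sigma^n for Gaussian w, equations (2.18),(2.19):
    n!/(2^{n/2}(n/2)!) for n = 2k, and 2^{k+1} k!/sqrt(2 pi) for n = 2k+1."""
    if n % 2 == 0:
        return math.factorial(n) / (2 ** (n // 2) * math.factorial(n // 2))
    k = (n - 1) // 2
    return 2 ** (k + 1) * math.factorial(k) / math.sqrt(2 * math.pi)

def db_increment(n, dt, sigma, rng):
    """One increment db(t,n) = R(t,n) |w(t)| (dt)^{1/n} of equation (2.17)."""
    R = cmath.exp(2j * cmath.pi * rng.randrange(n) / n)  # Rademacher r.v. of order n
    return R * abs(rng.gauss(0.0, sigma)) * dt ** (1.0 / n)
```

With n = 4 and dt = 1, the empirical mean of db^4 approaches Q(4)σ^4 = 3σ^4, while db, db^2, and db^3 average out to zero, as the lemma above states.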

3. COMPLEX WHITE NOISE IN IMAGE PROCESSING


In the present section, we shall show that the fractional Gaussian white noise so defined is quite
relevant in image processing.

3.1. F’ractals in Dynamic Vision


Our main assumption is the following one.

CONJECTURE. As a consequence of the basic definition of an image in terms of pixels, a meaning-
ful alternative to the Gaussian white noise w(t) in equation (1.3) is the fractional complex-valued
Gaussian white noise of order 4, w(t, 4), defined by the expression

    db(t, 4) =: w(t, 4) (dt)^{1/4},    (3.1)

or, what amounts to the same,

    b(t, 4) = ∫_0^t w(τ, 4) (dτ)^{1/4}.    (3.2)

SUPPORT TO THIS CONJECTURE. Assume that the image is defined in the (x_1, x_2)-plane and let
z := x_1 + ix_2. The displacement of objects can be described by equation (1.1) which we rewrite
here for convenience:

    z(t + 1) = R(t)(z(t) - T(t)),    (3.3)
where T(t) and R(t) define a translation and a rotation, respectively. In a continuous time
modelling, we shall write

    ż(t) = Ṙ(t) z(t) + Ṫ(t),    (3.4)

again with the observation

    z'(t) = z(t) + w_z(t).    (3.5)

Here, in our computer vision problem, z(t) is the coordinate of a pixel of the image, and one
may safely assume that z' is a neighbouring pixel of z(t): this is the modelling which probably
involves the smallest amount of assumptions.
In other words, w_z(t) can be thought of as a C-valued random variable which takes on the values
Δρ exp{2ikπ/4}, k = 0, 1, 2, 3 with the uniform probability 1/4, which amounts to replacing
equation (3.5) by

    z'(t) = z(t) + w(t, 4),    (3.6)

where w(t, 4) is the random walk of order 4.
The continuous version of this observation equation would be

    dz'(t) = z(t) dt + db(t, 4).    (3.7)

3.2. Further Remarks and Comments on the Significance of the Model

Consider the 3 × 3 pixel matrix (B', B, A'), (C, M, A), (C', D, D') = (first row), (second row),
(third row), in which M is the centre of the square (B', A', D', C').
When M is observed with some measurement errors, then in quite a natural way, it will be
identified (or confused) with one of its neighbouring points. At first glance, this observed point
might be an element either of the set {A, B, C, D} or of the set {A', B', C', D'}. With the notation
of equation (2.4), a modelling of this error term Δz_j at the discrete time j can be written in the
form

    Δz_j = β_j R_j(4) w_j + √2 (1 - β_j) R_j(4) w_j e^{iπ/4},    (3.8)

where β_j is a Bernoulli random variable (β_j = 1 or 0 with the respective probabilities p and q,
p + q = 1).
The presence of the Bernoulli variable contributes some difficulties of a theoretical nature, but
simplification can occur in some special cases of interest.

(i) If we restrict ourselves to the nearest points {A, B, C, D}, one then has the equality

    Δz_j = R_j(4) w_j,    (3.9)

which yields the Gaussian white noise of order 4, w(t, 4).


(ii) Assume now that one has the equality p = q = 1/2. In this case, one can replace (3.8) by
the equation

    Δz_j = λ ( 1 + √2 e^{iπ/4} ) R_j(4) w_j,    (3.10)

where λ is a real-valued parameter, whereby we obtain a C-(fBm)_4 defined by the charac-
teristic function

    φ_b(v,t) = exp{ ((iv)^4/4!) λ^4 ( 1 + √2 e^{iπ/4} )^4 σ^4 t }.    (3.11)

(iii) A third approximate model consists in working as if all the points A, A', B, B', C, C' were
on the same circle, in which case one will write

    Δz_j = R_j(8) w_j    (3.12)

to obtain a Gaussian white noise of order 8.


(iv) A simplified version of (3.8) is

    Δz_j = λ_1 R_j(4) w_j + λ_2 √2 e^{iπ/4} R_j(4) w_j,    (3.13)

where λ_1 and λ_2 denote two real-valued weighting coefficients.
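The tractable special cases (3.9), (3.12), and (3.13) can be packaged as samplers; the model names "edge", "ring", "mixed" and the helpers below are ours, introduced only for illustration.

```python
import cmath
import math
import random

def rademacher(n, rng):
    """A draw of R(n): uniform on the nth roots of the unity."""
    return cmath.exp(2j * cmath.pi * rng.randrange(n) / n)

def pixel_error(model, rng, w=1.0, lam1=1.0, lam2=1.0):
    """One pixel-identification error dz_j for the models of Section 3.2."""
    if model == "edge":            # eq. (3.9): nearest points {A, B, C, D}
        return rademacher(4, rng) * w
    if model == "ring":            # eq. (3.12): all eight neighbours on one circle
        return rademacher(8, rng) * w
    if model == "mixed":           # eq. (3.13): same R_j(4) and w_j in both terms
        r = rademacher(4, rng) * w
        return lam1 * r + lam2 * math.sqrt(2) * cmath.exp(1j * cmath.pi / 4) * r
    raise ValueError(model)
```

An "edge" error lies on the circle of radius |w| with dz^4 = w^4; a "ring" error satisfies dz^8 = w^8; with λ_1 = λ_2 = 1, the "mixed" error has modulus |1 + √2 e^{iπ/4}| |w| = √5 |w|.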

4. MORE ON COMPLEX BROWNIAN MOTION OF ORDER 4


In the following, we shall use the maximum entropy principle to get some useful results on the
probability density of w(t, 4).

4.1. Background on the Maximum Entropy Principle

Despite the fact that the maximum entropy principle is by now well known, we bear it in mind
for the convenience of the reader.

MAXIMUM ENTROPY PRINCIPLE. Suppose we know that a system has a set of possible states x_i
with unknown probabilities p(x_i), but we know that the latter satisfy some given constraints,

either values of certain mathematical expectations Σ_i p(x_i) f_k(x_i), k = 1, 2, ..., or bounds on
these values. Assume that we need to define the best estimate p* of p, given what we know.
The maximum entropy principle states that, of all the distributions which satisfy the con-
straints, we should select that one which maximizes the entropy

    H(X) := - Σ_i p(x_i) ln p(x_i).    (4.1)

The same principle applies to a continuous probability density p(x) with the entropy

    H(X) := - ∫_{R^n} p(x) ln p(x) dx,    x ∈ R^n.    (4.2)
We shall not herein discuss the controversial issues raised by this principle (see,
for instance, [9,10]); we shall merely notice that its soundness can be considered as being
supported by the number of results it yields in physics and practical studies.
In the applications to thermodynamics, we shall merely assume that the MEP is a picture of
the second principle, and provides the state of the thermodynamic equilibrium.

4.2. Probability Density in the Complex Plane


We consider the stochastic two-dimensional system with the state vector (X, Y), where X
and Y are two random variables, and we introduce the complex state z = x + iy, which is used
to define the probability density p(z): p(z) is a nonnegative real-valued function of which the
probabilistic meaning is provided by the equality

    pr{(X,Y) ∈ Ω} = ∫_Ω p(z) dx dy.    (4.3)

With this probability density, one can meaningfully define the mathematical expectation of
any function g(x, y) and the informational (Shannon) entropy

    H_z(X,Y) := - ∫_{R^2} p(z) ln p(z) dx dy,    (4.4)

where the notation H_z(X, Y) emphasizes that we are dealing with the entropy of the pair (X, Y)
but expressed via z = x + iy.

4.3. Application of the Maximum Entropy Principle


We consider the maximization of the entropy H_z(X, Y), see equation (4.4), with the macrostate
constraint (n = 1, 2, ...; σ > 0)

    ∫_{R^2} |z|^{2n} p(z) dx dy = 2σ^{2n},    (4.5)

where the coefficient 2 is introduced to take account of the fact that we are dealing with the
pair (X, Y).
Applying the MEP with the Lagrange parameter λ and the so-called partition function Q(λ)
(we do not use the standard notation Z(λ) since here Z denotes the variable Z := X + iY) yields

    p_{2n}(z) = e^{-λ|z|^{2n}} / Q(λ),    (4.6)

with

    λ = (2nσ^{2n})^{-1},    (4.7)
    Q(λ) := 2π λ^{-1/n} Γ(1 + 1/n),    (4.8)

or more explicitly

    p_{2n}(z) = exp{ -|z|^{2n} / (2nσ^{2n}) } / ( 2π (2n)^{1/n} σ^2 Γ(1 + 1/n) ).    (4.9)

4.4. Some Properties of p_4(z)

In the following, we shall more especially consider the special case n = 2, and we have the
following result.

LEMMA 4.1. Assume that the random variable Z = X + iY is defined by the probability density

    p_4(z) = (2σ^2 π^{3/2})^{-1} exp{ -|z|^4 / (4σ^4) },    (4.10)

then one has the mathematical expectations

    E{Z} = E{Z^2} = E{Z^3} = 0,    (4.11)
    E{|Z|^4} = 2σ^4.    (4.12)

PROOF. (i) A preliminary calculation yields

    E{X} = E{Y} = 0,    (4.13)
    E{X^2} = E{Y^2} = σ^2/√π,    (4.14)
    E{X^3} = E{Y^3} = 0,    (4.15)
    E{X^4} = E{Y^4} = (3/4) σ^4.    (4.16)

(ii) Next, according to the various signs of the product XY in the plane, without any calculus,
one has the equality

    E{XY} = 0,    (4.17)

which shows, incidentally, that X and Y are uncorrelated.

In other words, one can rephrase the result as follows: applying the maximum entropy principle
to the probability density p(z) with the constraints (4.11) and (4.12) yields p_4(z) in (4.10).
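Lemma 4.1 can be verified by two-dimensional quadrature: normalizing the kernel of (4.10) numerically (which sidesteps the normalization constant altogether), one recovers E{X^4} = (3/4)σ^4 and E{|Z|^4} = 2σ^4 for that density. The function names below are ours.

```python
import math

def p4_unnormalized(x, y, sigma=1.0):
    """Kernel of the density (4.10): exp{-|z|^4 / (4 sigma^4)}, z = x + iy."""
    return math.exp(-((x * x + y * y) ** 2) / (4.0 * sigma ** 4))

def p4_moments(sigma=1.0, half_width=4.0, m=300):
    """Midpoint-rule estimates of E{X^4} and E{|Z|^4} under p4,
    normalized numerically over the square [-half_width, half_width]^2."""
    h = 2.0 * half_width / m
    total = mx4 = mz4 = 0.0
    for i in range(m):
        x = -half_width + (i + 0.5) * h
        for j in range(m):
            y = -half_width + (j + 0.5) * h
            w = p4_unnormalized(x, y, sigma)
            total += w
            mx4 += x ** 4 * w
            mz4 += (x * x + y * y) ** 2 * w
    return mx4 / total, mz4 / total
```

For σ = 1 the call returns approximately (0.75, 2.0), in agreement with (4.12) and (4.16); the density decays so fast that a half-width of 4 already captures the integrals to machine accuracy.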

Stability in Levy’s sense of C-(fBm),

The point of importance for our purpose is that the C-(mm), is stable in Levy’s sense; loosely
speaking, the sum of independent C-(mm), is also a C-(mm),. This can be detailed as follows.
(i) Assume that bi(t, n) and bz(t, n) are two independent C-(fBm), with the respective char-
acteristic functions &(V) = exp{((iv)n/n!)ayt} and f&(w, t) = exp{((iv)n/n!)a?jt}, then
the characteristic function of bl(t, n) + bz(t, n) is

@bl+bz(vt) = eXP y(cT;+fY;)t} (4.18)

=exp{yo:t}, (4.19)

with
a,n=(T;+fJ;. (4.20)

(ii) Assume that b(t,n) is a C-(mm), with the characteristic function

&(w,t) = eXp{ @$?t},


then /3(t, n) := exp{ie}b(t, n) is also a C-(mm), with the parameter on.

This property is a direct consequence of the equality |e^{iθ} b(t, n)| = |b(t, n)|.

(iii) Assume again that b(t,n) is the C-(fBm)_n of (ii), and let λ denote a given real-valued
parameter; then λb(t, n) is also a C-(fBm)_n with the characteristic function

    φ_{λb}(v,t) = exp{ ((iv)^n/n!) λ^n σ^n t }.    (4.21)

(iv) Refer to the C-(fBm)_n's b_1(t, n) and b_2(t, n) in (i); then the weighted combination

    b_w(t,n) = ρ_1 e^{iθ_1} b_1(t,n) + ρ_2 e^{iθ_2} b_2(t,n),    (4.22)

where ρ_1 and ρ_2 are positive real parameters, is also a C-(fBm)_n with the characteristic
function

    φ_{b_w}(v,t) = exp{ ((iv)^n/n!) (ρ_1^n σ_1^n + ρ_2^n σ_2^n) t }.    (4.23)

This last result is a direct consequence of (i), (ii), and (iii) all together.

5. RANDOM LAPLACE TRANSFORM OF FRACTIONAL NOISES

5.1. Preliminary Result

LEMMA 5.1. Let z(t) denote a C-valued fractional stochastic process with independent incre-
ments and such that

    E{z^k(t)} = 0,    k = 1, 2, ..., n - 1,    (5.1)
    E{z^n(t) z^n(τ)} = σ^n(t) δ(t - τ),    (5.2)

and let f(t) denote a given nonrandom complex-valued function. Then, the integral

    I := ∫_a^b f(t) z(t) dt    (5.3)

can be defined in Itô's sense, and it is a random variable of which the characteristic function is

    φ_I(v) = exp{ ((iv)^n/n!) ∫_a^b f^n(τ) σ^n(τ) dτ }.    (5.4)

PROOF.
STEP (i). The lemma holds when z(t) is in the form

    z(t) = z'(t) w(t, n),    (5.5)

where w(t, n) is a fractional Gaussian white noise of order n with the nth moment σ^n(t), and z'(t) is
a stochastic process independent of w(t, n).
(i) Indeed, using Maruyama's notation, we shall rewrite (5.3) in the following form which is
more consistent with Itô's framework, that is,

    I = ∫_a^b f(t) z'(t) w(t, n) (dt)^{1/n}.    (5.6)

(ii) This Itô integral of order n is the limit, in the nth moment mean sense,

    I = lim_{Δt↓0} (Δt)^{1/n} Σ_{j=0}^{N-1} f(t_j) z'(t_j) [ w(t_{j+1}) - w(t_j) ].

The sequence {.z’(tj)[ul(tj+i)--w(tj)]} is a sequence of independent fractional random variables


of order n, and according to the central limit theorem of order n (see [11,12]) and equation (4.22),
I is a fractional Gaussian variable with the nth moment
n-1
E(P) = nfi=lm~fn(~j)on(tj) At. (5.7)
3=0

STEP (ii). Assume now that z(t) is not given in the form of equation (5.5); then we shall determine
a stochastic process z'(t) such that

    z(t) ≅ z'(t) w(t, n),    (5.8)

and then we shall apply the result of Step (i).

The existence of z'(t) can be obtained as follows.
First, one remarks that, according to condition (5.1) on the first n moments of z(t), one
necessarily has the equality

    z(t) = a_k w_k(n),    k = 1, 2, ..., n - 1,    a_k ∈ R.    (5.9)

As a result, if one refers to the modelling of w(t,n) as a rotating Gaussian white noise on the
grid defined by {w_k(n)}, one has the equalities

    z(t) = z'(t) w(t, n),    (5.10)
    z'(t) = ρ'(t) exp{iθ'(t)},    (5.11)

where ρ'(t) and θ'(t) are random functions. Define

    z(t) = ρ(t) exp{iθ(t)},    (5.12)
    w(t, n) = ρ_w(t) exp{iθ_w(t)}.    (5.13)

One then has the equalities

    ρ(t) = ρ'(t) ρ_w(t),    (5.14)
    θ(t) = θ'(t) + θ_w(t),    (5.15)

whereby one can determine the probability distributions of ρ'(t) and θ'(t).

5.2. Application to Random Laplace Transform of Fractional Processes

In the frequency analysis of linear feedback systems, one of the basic tools (at least the classical
one) is the so-called Laplace transform defined by the expression

    Z(s) := ∫_0^∞ e^{-st} z(t) dt    (5.16)
    = ∫_0^∞ e^{-iωt} z(t) dt,    s = iω.    (5.17)

When z(t) is a fractional stochastic process with independent increments, then according to
Lemma 5.1, Z(s) itself is a fractional random variable with the nth moment defined by equa-
tion (5.4). In the special case when z(t) is the fractional Gaussian white noise w(t,n) with the
nth moment σ^n(t), then its Laplace transform W(s, n) is a fractional Gaussian variable of order n
with the nth moment

    E{W^n(s, n)} = ∫_0^∞ e^{-nst} σ^n(t) dt.    (5.18)

6. MORE ABOUT LAPLACE TRANSFORM OF FRACTIONAL WHITE NOISES

6.1. Modelling via Rotating Laplace Transform of Gaussian White Noises

According to equation (2.17), let us write w(t, n) in the form

    w(t, n) (dt)^{1/n} = R(t, n) |w(t)| (dt)^{1/n},    (6.1)

to have

    W(s,n) = ∫_0^∞ e^{-st} R(t, n) |w(t)| (dt)^{1/n}.    (6.2)

The corresponding moments are

    E{W^k(s,n)} = ∫_0^∞ E{R^k(t,n)} E{ e^{-kst} |w(t)|^k } dt,

therefore,

    E{W^k(s,n)} = 0,    k = 1, 2, ..., n - 1,    (6.3)
    E{W^n(s,n)} = ∫_0^∞ e^{-nst} E{|w(t)|^n} dt = Q(n) ∫_0^∞ e^{-nst} σ^n(t) dt.    (6.4)

A Laplace transform W(s, n) which satisfies conditions (6.3) and (6.4) is

    W(s,n) := R(n) ∫_0^∞ e^{-st} |w(t)| dt,    (6.5)

in such a manner that, as a result, W(s, n) can be thought of as if it were the Laplace transform

    W'(s,n) := ∫_0^∞ e^{-st} |w(t)| dt,    (6.6)

which rotates on the finite grid defined by {w_k(n)}.
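The moment formulas (6.3),(6.4) lend themselves to a Monte Carlo check on a discretized version of (6.2). In the sketch below (our own names, a constant σ, and a midpoint time grid, none of which are from the paper), with n = 4, s = 1, σ = 1, and a truncated horizon T = 3, the empirical fourth moment approaches Q(4)σ^4(1 - e^{-4sT})/(4s) ≈ 0.75, while the first three moments average out to zero.

```python
import cmath
import math
import random

def laplace_of_pixel_noise(n, s, sigma, T, dt, rng):
    """One realization of W(s,n) from eq. (6.2), discretized as
    sum_j e^{-s t_j} R(t_j, n) |w(t_j)| (dt)^{1/n} on a midpoint grid over [0, T]."""
    W = 0j
    for j in range(int(T / dt)):
        t = (j + 0.5) * dt
        R = cmath.exp(2j * cmath.pi * rng.randrange(n) / n)  # Rademacher r.v.
        W += cmath.exp(-s * t) * R * abs(rng.gauss(0.0, sigma)) * dt ** (1.0 / n)
    return W

def moment(samples, k):
    """Empirical k-th moment E{W^k}."""
    return sum(w ** k for w in samples) / len(samples)
```

Because E{R^k} = 0 for k = 1, ..., n - 1, all cross terms in W^n drop out in expectation and only the "diagonal" terms survive, which is exactly how (6.4) arises from (6.2).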

6.2. Random Systems in the Complex Plane

Assume that we are dealing with the nonrandom C-valued signal z(t) and that the latter is
disturbed by a pixel noise described by a fractional Gaussian white noise w(t, n), in such a manner
that the actual signal is not z(t) but rather is

    z̃(t) := z(t) + w(t, n).    (6.7)

We then have the corresponding Laplace transform

    Z̃(s) = Z(s) + W(s, n),    (6.8)

where W(s, n) is the fractional noise defined by equation (6.5), and the initial stochastic problem
in time is so converted into a stochastic problem with the variable s.
In the next section, we shall show how this simple remark provides a statistical approach to
the analysis of linear systems described in terms of transfer function.

7. STATISTICAL ANALYSIS OF TRANSFER FUNCTIONS


As an illustrative example of our purpose, consider the linear feedback system which, in terms
of transfer functions, is described by the equations

    C(s) = G(s) [ W(s, n) R(s) + E(s) ],    (7.1)
    E(s) = R(s) - H(s) C(s),    (7.2)

where W(s, n) is the Laplace transform of a given disturbing fractional Gaussian white noise of
order n, w(t, n), R(s) is the Laplace transform of the input, and C(s) is the Laplace transform of
the output.
C(s) and R(s) are related by the equation

    C(s)/R(s) = G(s)/(1 + G(s)H(s)) + G(s)W(s, n)/(1 + G(s)H(s)),    (7.3)

and in the absence of noise, it is well known that the stability of the system is determined by the
characteristic equation

    1 + G(s)H(s) = 0,    (7.4)

which can be studied either by the corresponding root locus or the Nyquist locus, for instance
(see, for instance, [13]). Rearranging equation (7.3) provides

    C(s)/R(s) = G(s) / ( 1 + G(s) ( H(s) - W(s,n) G^{-1}(s) ) / (1 + W(s,n)) );    (7.5)

in other words, everything happens as if H(s) were changed to (H - WG^{-1})/(1 + W) by the
presence of W(s, n).
The stability of the system is then defined by the random characteristic equation

    1 + G (H - WG^{-1}) (1 + W)^{-1} = 0,    (7.6)

where W(s, n) is a fractional Gaussian variable.
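A minimal numerical illustration of equation (7.3): since W(s, n) has vanishing moments up to order n - 1, the output C(s) = GR/(1+GH) + GW/(1+GH) is itself a fractional Gaussian variable of order n whose mean is the classical noise-free response and whose nth moment is scaled by (G/(1+GH))^n. The transfer functions below are hypothetical examples of ours, not taken from the paper.

```python
def closed_loop_stats(G, H, R, w_nth_moment, s, n=4):
    """Mean and nth moment of the output C(s) per eq. (7.3):
    C = T R + T W with T = G/(1 + G H), so
    E{C} = T R and the nth moment of the noise part is T^n E{W^n}."""
    T = G(s) / (1.0 + G(s) * H(s))
    return T * R(s), T ** n * w_nth_moment

# Hypothetical first-order plant with unit feedback and a step input.
G = lambda s: 1.0 / (s + 1.0)   # plant (example of ours)
H = lambda s: 1.0               # feedback (example of ours)
R = lambda s: 1.0 / s           # step input
s0 = 1.0
# E{W^4(s,4)} = sigma^4/(4s) for constant sigma = 1, from eq. (5.18)
mean_C, fourth_moment_C = closed_loop_stats(G, H, R, 1.0 / (4.0 * s0), s0)
```

Here T(1) = (1/2)/(3/2) = 1/3, so the mean response is 1/3 and the fourth moment of the noise contribution is (1/3)^4 · (1/4).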


In the next section, for the sake of completeness, we shall outline how the complex-valued
Gaussian white noise of order n modifies the classical approach to stochastic optimal control.

8. STOCHASTIC OPTIMAL CONTROL OF ORDER n


In the following, z(t) := x(t) + iy(t) will denote the time dependent state of the system in the
complex plane, and u(t) will refer to the complex-valued control.
With this notation, the dynamical system under consideration is defined by Itô's stochastic
differential of order n,

    dz(t) = f_1(z, u, t) dt + f_n(z, u, t) db(t, n),    z(0) = z_0.    (8.1)

f_1(·) and f_n(·) are two functions which are suitably defined in order that z(t) exists (loosely
speaking, these conditions are similar to those of the usual Itô stochastic differential equation
of order 2), and b(t, n) is a C-(fBm)_n with the characteristic function exp{((iv)^n/n!) σ^n(t) t}.
The purpose is to determine the (optimal) control u*(t) which minimizes the cost function

    J(u) := E{ ∫_0^T g(z(t), u(t), t) dt },    (8.2)

where T has a given fixed value. Obviously, it is assumed that such a minimization makes sense;
in other words, g will be explicitly a positive functional.

It is well known that, since there is no single optimal trajectory, there is no stochastic equivalent
to the Euler-Lagrange equations, and that consequently, it is necessary to work via the principle
of optimality. The classical theory (see, for instance, [14]) can be extended as follows to the
optimal control of stochastic fractional dynamics. We consider the stochastic value function

    V*(t) := min_u E{ ∫_t^T g(z(τ), u(τ), τ) dτ },    (8.3)

and we calculate its total time derivative in two different ways.

(i) First, one has

    dV*(t)/dt = -g(z*(t), u*(t), t).    (8.4)

The mathematical expectation E{·} is dropped since z(t) and u(t) can be known without error
at time t.
(ii) Second, we use the Taylor lemma of order n [11] to obtain dV*. We express dV* in series
expansion, and retaining the terms up to order n, the incremental change in V* can be written as
follows:

    dV* = E{ V_t* dt + Σ_{j=1}^n (1/j!) (∂^j V*/∂z^j) (dz)^j }
        = E{ V_t* dt + Σ_{j=1}^n (1/j!) (V*)^{(j)} [ f_1 dt + f_n db(t, n) ]^j }.    (8.5)

Functions of z(t) equal their own expectations, and E{db^j(t, n)} = 0, j = 1, ..., n - 1. Dividing by
dt and retaining the terms of order dt, we get

    dV* = V_t* dt + V_z* f_1 dt + (1/n!) (V*)^{(n)} f_n^n σ^n dt,

therefore,

    dV*/dt = V_t* + V_z* f_1 + (1/n!) (V*)^{(n)} f_n^n σ^n.    (8.6)

(iii) The stochastic principle of optimality is obtained by combining equations (8.4) and (8.6),
clearly,

    0 = V_t* + min_u [ g(z*(t), u(t), t) + V_z* f_1(z*(t), u(t), t) + (1/n!) (V*)^{(n)} f_n^n(z*(t), t) σ^n(t) ].    (8.7)

9. CONCLUDING REMARKS

For convenience (from the theoretical standpoint), we are used to representing noisy observations
in control systems by means of Gaussian white noises. Nevertheless, in the forthcoming
generation of control systems involving computer vision, it appears that this modelling is not quite
satisfactory, and that, as a consequence of the definition of a computerized image in terms of
pixels, a modelling via complex-valued fractional Gaussian white noise would be much more
suitable.
This feature forces us to revisit all the basic problems of control theory, and in the present
preliminary study, we restricted ourselves to the classical approach via transfer
functions of linear systems. But instead of examining how the usual approach via spectral density
can be generalized or merely adapted to this case, we rather chose another point of view which
basically considers Laplace transforms of white stochastic processes as random variables. In this
way, one can develop a statistical approach to the analysis of linear control systems defined in
terms of transfer functions.
The point of importance is the presence of a new family of fractional Gaussian white noises in
image processing.

REFERENCES

1. B. Espiau, F. Chaumette and P. Rives, A new approach to visual servoing in robotics, IEEE Trans. on Robotics and Automation 8, 313-326 (1992).
2. O. Faugeras, Three Dimensional Vision, a Geometric Viewpoint, MIT Press, (1993).
3. D.B. Gennery, Visual tracking of known 3-dimensional objects, Int. J. Computer Vision 7 (3), 243-270 (1992).
4. B. Horn, Robot Vision, MIT Press, (1986).
5. H.C. Longuet-Higgins, A computer algorithm for reconstructing a scene from two projections, Nature 293, 133-135 (1981).
6. S. Maybank, Theory of Reconstruction from Image Motion, Information Sciences, Vol. 28, Springer-Verlag, (1992).
7. S. Soatto, R. Frezza and P. Perona, Recursive motion estimation on the essential manifold, In Proc. 3rd Europ. Conf. Computer Vision, (Edited by J.-O. Eklundh), LNCS Series, Vol. 800-801, pp. 61-72, Springer-Verlag, Stockholm, (1994).
8. G. Jumarie, A new approach to complex-valued fractional Brownian motion via rotating white noise, Chaos, Solitons and Fractals 9 (6), 881-893 (1998).
9. J. Uffink, Can the maximum entropy principle be explained as a consistency requirement?, Studies in the History and Philosophy of Modern Physics 26, 223-261 (1995).
10. J. Uffink, The constraint rule of the maximum entropy principle, Studies in the History and Philosophy of Modern Physics 27, 47-79 (1996).
11. G. Jumarie, A new approach to fractional Brownian motion of order n via random walk in the complex plane, Chaos, Solitons and Fractals 10 (7), 1193-1212 (1999).
12. G. Jumarie, Fractional Brownian motion with complex variance via random walk in the complex plane and applications, Chaos, Solitons and Fractals 11 (7), 1099-1111 (2000).
13. R.C. Dorf, Modern Control Systems, Addison-Wesley, New York, (1989).
14. R.F. Stengel, Stochastic Optimal Control, Wiley, New York, (1986).
