## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

**environment based on transferable belief model
**

Wang Yaonan

a

, Yang Yimin

a,⇑

, Yuan Xiaofang

a

, Zuo Yi

a,b

, Zhou Yuanli

a

, Yin Feng

a

, Tan Lei

a

a

College of Electrical and Information Engineering, Hunan University, Changsha, Hunan 410082, PR China

b

Key Laboratory of Regenerative Energy, Electric-Technology of Hunan Province, Changsha University of Science and Technology, Changsha, Hunan 410004,

PR China

a r t i c l e i n f o

Article history:

Received 2 February 2010

Received in revised form 19 March 2011

Accepted 12 May 2011

Available online 20 May 2011

Keywords:

Navigation

Path planning

Mobile robot

Robust tracking control

Transferable belief model

Dynamic environment

a b s t r a c t

This paper investigates the possibility of using transferable belief model (TBM) as a prom-

ising alternative for the problem of path planning of nonholonomic mobile robot equipped

with ultrasonic sensors in an unknown dynamic environment, where the workspace is

cluttered with static obstacles and moving obstacles. The concept of the transferable belief

model is introduced and used to design a fusion of ultrasonic sensor data. A new strategy

for path planning of mobile robot is proposed based on transferable belief model. The

robot’s path is controlled using proposed navigation strategy that depends on navigation

parameters which is modiﬁed by TBM pignistic belief value. These parameters are tuned

in real time to adjust the path of the robot. A major advantage of the proposed method

is that, with detection of the robot’s trapped state by ultrasonic sensor, the navigation

law can determine which obstacle is dynamic or static without any previous knowledge,

and then select the relevant obstacles for corresponding robot avoidance motion. Simula-

tion is used to illustrate collision detection and path planning.

Ó 2011 Elsevier Ltd. All rights reserved.

1. Introduction

During the past few years, autonomous navigation of

nonholonomic systems such as nonholonomic mobile

robot has received wide attention and is a topic of great

research interest. The navigation systems including map

building and path planning implies that the robot is

capable of reacting to static obstacles and unpredictable

dynamic object that may impede the successful exaction

of a task. To achieve this level of robustness, many litera-

tures that deals with path planning is rapidly growing.

The work of Rueb and Wong [1], Habib and Yuta [2],

Mataric [3], Rimon and Koditschek [4] and Borenstein

and Koren [5] are among the earliest attempts to solve

the problem of path planning. Various classical approaches

designed originally are extended in order to be applicable

in real applications. Probabilistic roadmap methods are

used in [6–9]. Potential ﬁeld method is suggested in

[10–12]. These methods perform well in static environ-

ments. However, this does not automatically imply good

performance in dynamic environment. Additionally, these

methods have limited performance when obstacles are al-

lowed to move in the workspace.

Recently, other kinds of research were proposed [13–

21], which extended the various approaches to dynamic

environment. For example, dynamic potential ﬁeld method

[13–15], kinematics methods to solve the problem of colli-

sion detection [16], sensor-based path-planning methods

[17–20]. Sensor-based path-planning methods are widely

used to navigation in dynamic environments. The robot

calculates and estimates the motion of the moving obsta-

cles based on its sensory system. There also exist methods

with other algorithms such as genetic algorithms [21], fuz-

zy system [22], intelligent algorithms [23,24] to solve this

problem. These methods have the ability of wheeled mo-

bile robots to navigate safely and avoid moving obstacle

0263-2241/$ - see front matter Ó 2011 Elsevier Ltd. All rights reserved.

doi:10.1016/j.measurement.2011.05.010

⇑

Corresponding author.

E-mail address: Yangyi_min@126.com (Y. Yimin).

Measurement 44 (2011) 1389–1405

Contents lists available at ScienceDirect

Measurement

j our nal homepage: www. el sevi er. com/ l ocat e/ measur ement

in dynamic environment. However, most of the proposed

approaches for mobile robots in the literature did not con-

sider autonomous navigation in an environment which in-

cludes both static obstacles and dynamic obstacles.

Furthermore, in the nature of the real world, any prior

knowledge about the environment is, in general, incom-

plete, uncertain, or completely unknown. These methods

assume that, each moving object’s velocity and direction

is exactly known, but this prior knowledge is not easy to

gain.

The transferable belief model (TBM) is a model for the

quantiﬁed representation of epistemic uncertainty and

which can be an agent, an intelligent sensor, etc., and pro-

vides a highly ﬂexible model to manage the uncertainty

encountered in the multi-sensor data fusion problems.

Application of the transferable belief model (TBM) to many

areas has been presented in [25–29] including classiﬁca-

tion and target identiﬁcation during recent times. And we

feel it appealing when using navigation of mobile robots.

This work investigates the possibility of using transfer-

able belief model (TBM) as a promising alternative for nav-

igation system of nonholomonic mobile robot. First, a

neural robust tracking controller, comprising adaptive

wavelet neural network controller (AWNN), is used to

achieve the tracking control of the mobile manipulator un-

der some uncertainties without any knowledge of those ro-

bot parameters. Then, based on transferable belief model

(TBM) to design a fusion of ultrasonic sensor data, a new

approach for path planning is proposed. The local map,

represented as an occupancy grid, with the time update

is proven to be suitable for real-time applications. Attrac-

tive and repulsion potential function is modiﬁed by TBM

pignistic belief value. Taking the velocity of the obstacles

and static object into account, the suggested method can

determine which obstacle is dynamic or static without

any previous knowledge of moving obstacles, then select-

ing the relevant obstacles for corresponding robot avoid-

ance motion.

The rest of the paper is organized as follows: Section 2

shows the mathematical representation of mobile robot

with two actuated wheels. Section 3 discusses the nonlin-

ear kinematics-WNN back stepping controller as applied to

the tracking problem. Section 4 proposed a new method

for map building based on sonic rangeﬁnder. A path-plan-

ning approach is discussed in Section 5. Simulation and

experiment are shown in Section 6.

2. Model of a mobile robot with two actuated wheels

The kinematic model of an ideal mobile robot is widely

used in the mobile robot (MR) control [30–37]. The MR

with two driven wheels shown in Fig. 1 is a typical exam-

ple of nonholonomic mechanical systems. OXY is the refer-

ence coordinate system; O

r

X

r

Y

r

is the coordinate system

ﬁxed to the mobile robot; O

r

the middle between the right

and left driving wheels, is the origin of the mobile robot; d

is the distance from O

r

to P; 2b is the distance between the

two driving wheels and r is the radius of the wheel. In the

2-D Cartesian space, the pose of the robot is represented by

q ¼ ðx; y; hÞ

T

ð1Þ

where (x, y)

T

is the coordinate of O

r

in the reference coordi-

nate system, and the heading direction h is taken counter-

clockwise from the OX-axis.

The motion model including kinematics and dynamics

of the nonholonomic mobile robot system can be described

by Hou et al. [34]. It is assumed that the wheels of the ro-

bot do not slide. This is expressed by the nonholonomic

constraint.

_

x sinh À

_

y cos h ¼ 0 ð2Þ

All kinematic equality constraints are assumed to be

independent of time and to be expressed as follows:

AðqÞ

_

q ¼ 0 ð3Þ

By appropriate procedures and deﬁnitions, the robot

dynamics can be transformed as [30,34]

M

_

v

w

þVv

w

þF þs

ed

¼ s ð4Þ

where

M ¼

r

2

4b

2

ðmb

2

þI

0

Þ þI

w

r

2

4b

2

ðmb

2

ÀI

0

Þ

r

2

4b

2

ðmb

2

ÀI

0

Þ

r

2

4b

2

ðmb

2

þI

0

Þ þI

w

_

_

_

_

V ¼

0

r

2

2b

2

m

c

d

_

h

À

r

2

4b

2

m

c

d

_

h 0

_

_

_

_

_

_

_

_

F ¼ J

T

F

s

ed

¼ J

T

s

ed

B ¼

1 0

0 1

_ _

where m = m

c

+ 2m

w

, I

0

= m

c

d

2

+ 2m

w

b

2

+ I

c

+ 2I

m

; m

c

and

m

w

are the masses of the robot body and wheel with actu-

ator, respectively; I

c

, I

w

and I

m

are the moments of inertia of

the robot body about the vertical axis through p, the wheel

with a motor about the wheel axis, and the wheel with a

motor about the wheel diameter, respectively.

y

x

r

X

r

Y

r

O

P

d

2r

θ

2b

left wheel

right wheel

O

Fig. 1. Nonholonomic mobile robot.

1390 W. Yaonan et al. / Measurement 44 (2011) 1389–1405

Since the wheels of the robot are driven by actuators, it

is necessary to take the dynamics of the wheel actuators

into account. The motor models attained by neglecting

the voltage on the inductances are:

s

r

¼ k

a

ðv

r

Àk

b

x

r

Þ=R

a

; s

l

¼ k

a

ðv

l

Àk

b

x

l

Þ=R

a

ð5Þ

where v

r

and v

l

are the input voltages applied to the right

and left motors; k

b

is equal to the voltage constant multi-

plied by the gear ratio; R

a

is the electric resistance con-

stant; s

r

and s

l

are the right and left motor torques

multiplied by the gear ratio; and k

a

is the torque constant

multiplied by the gear ratio. The dynamic equations of the

motor-wheels are:

s

r

¼

k

a

v

r

R

a

À

k

a

k

b

x

r

R

a

; s

l

¼

k

a

v

l

R

À

k

a

k

b

x

l

R

a

ð6Þ

By using (4)–(6), the mobile robot model including the

robot kinematics, robot dynamics, and wheel actuator

dynamics can be written as:

M

_

v

w

þVv

w

þF þs

ed

¼

k

a

v

r

R

a

À

k

a

k

b

x

r

R

a

ð7Þ

w

H

¼ XV ð8Þ

where V, X is selected as

X ¼

1

r

R

r

1

r

À

R

r

_ _

ð9Þ

V ¼

v

w

_ _

ð10Þ

3. Nonlinear kinematic-WNN backstepping controller

In area of research of trajectory tracking in mobile robot,

based on whether the system is described by a kinematic

model or a dynamic model, the tracking-control problem

is classiﬁedas either a kinematic or a dynamic tracking-con-

trol problem. Using kinematic or dynamic models of non-

holonomic mobile robots, various approaches [21,32–37]

consider that wheel torques are control inputs though in

reality wheels are driven by actuators and therefore using

actuator input voltages as control inputs is more realistic.

To this effect, actuator dynamics is combined with the mo-

bile robot dynamics. In this section, a neural robust tracking

controller is discussedbrieﬂy to achieve the tracking control

under some uncertainties without any knowledge of those

robot parameters from our recent work [36]. More detail,

proof and simulation can be also found in [36].

Recall the robot dynamics (10)

M

_

v

w

þVv

w

þF þs

ed

¼

k

a

v

r

R

a

À

k

a

k

b

x

r

R

a

ð11Þ

The controller for s is chosen as

s ¼

^

f þK

4

e

c

þc ð12Þ

where K

4

is a positive deﬁnite diagonal gain matrix, and

^

f is

an estimation of f(x) that is deﬁned by

f ðxÞ ¼ MðqÞ

_

v

c

þV

m

ðq;

_

qÞv

c

þFðvÞ ð13Þ

where x ¼ ½v

T

v

T

c

_ v

T

c

, so error can be deﬁned as

e

c

¼ v

c

Àv ð14Þ

Using (11)–(14), the error dynamics stable the input term

is chosen as

M

_

e

c

¼ ÀV

m

e

c

ÀK

4

e

c

þ

~

f þ

s

d

þ

k

a

k

b

R

a

BXv

c

À

k

a

R

a

Bu

Àc ð15Þ

The function in braces in (15) can be approximated by

an AWNN, such that

M

_

v

c

þV

m

v

c

þF ¼

´

Wwðx

i

; c; mÞ ð16Þ

where the term

´

Wwðx

i

; c; mÞ represents an adaptive

approximation WNN. The structure for the tracking control

system is presented in Fig. 2. In this ﬁgure, no knowledge

Fig. 2. Structure of the wavelet neural network-based robust control system.

W. Yaonan et al. / Measurement 44 (2011) 1389–1405 1391

of the dynamics of the cart is assumed. The function of

WANN is to reconstruct the dynamic (16) by learning it

on-line.

Assumption 3.1. The approximation errors and distur-

bances are bounded, i.e., the speciﬁed constants D

e

and D

r

satisfy ke

f

k 6 D

e

and k s

d

k 6 D

r

, respectively.

For case of notation, the approximation error

~

f can be

rewritten as

~

f ¼ W

ÃT

¯

W þ

W

T

´

W þe

f

ð17Þ

where

~

W ¼ W

Ã

À

´

W and

~

w ¼ w

Ã

À

^

w. The controller is

designed to achieve good tracking and stability results

with the connecting weights, dilation, and translation

parameters of the WNN tuned on line. To achieve this,

the linearization technique is used to transform the

nonlinear output of WNN into partially linear from so

that the Lyapunov theorem extension can be applied.

The expansion of

~

w in Taylor series is obtained as

follows:

~

w ¼

~

w

11

~

w

12

.

.

.

~

w

pq

_

¸

¸

¸

¸

¸

¸

_

_

¸

¸

¸

¸

¸

¸

_

¼

@w

11

@m

@w

11

@m

.

.

.

@w

11

@m

_

¸

¸

¸

¸

¸

¸

¸

_

_

¸

¸

¸

¸

¸

¸

¸

_

T

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

m¼^ m

ðm

Ã

À ^ mÞ þ

@w

11

@m

@w

11

@m

.

.

.

@w

11

@m

_

¸

¸

¸

¸

¸

¸

¸

_

_

¸

¸

¸

¸

¸

¸

¸

_

T

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

c¼^c

ðc

Ã

À^cÞ þH

ð18Þ

H is a vector of higher-order terms and assume to be

bounded by a positive constant. Substituting (18) into

(17), it is revealed that

~

f þ

s

d

¼

´

W

T

~

w þ

~

W

T

~

w þ

~

W

T

^

w þe

f

þ

s

d

¼

´

W

T

ð

^

w ÀA

T

^

mÀB

T

^

cÞ þ

´

W

T

ðA

T

~

mþB

T

~

cÞ þr ð19Þ

where r ¼ W

ÃT

ðA

T

m

Ã

þB

T

c

Ã

þHÞ À

´

W

T

ðA

T

m

Ã

þB

T

c

Ã

Þ þe

f

þ

s

d

By substituting (19) into (15), the closed-loop system

dynamics can be rewritten as

M

_

e

c

¼ À K

4

þV

m

_ _

e

c

þ

k

a

k

b

R

a

BXv

c

À

k

a

R

a

Bu

þ

~

W

T

^

w ÀA

T

^

mÀB

T

^

c

_ _

þ

´

W

T

A

T

~

mþB

T

~

c

_ _

þr Àc

ð20Þ

Moreover, assume the following inequality:

krk þ

@X

2

m

4

¼ W

ÃT

A

T

m

Ã

þB

T

c

Ã

þH

_ _

À

´

W

T

A

T

m

Ã

þB

T

c

Ã

_ _ _

_

_

þe

f

þ s

d

_

_

þ

@X

2

m

4

6

@X

2

m

4

þ W

ÃT

A

T

m

Ã

þB

T

c

Ã

þH

_ _ _

_

_

þe

f

þ s

d

_

_

þ

´

W

Ã

_

_

_

_

_

_ Á A

T

m

Ã

þB

T

c

Ã

_

_

_

_

_

_ 6 k

p

p

where @ is a positive constant; p

T

¼ ½1k

´

Wk; k

p

¼

½k

p1

k

p2

; k

p1

P kW

ÃT

ðA

T

m

Ã

þ B

T

c

Ã

; þHÞ þ e

f

þ s

d

k; k

p2

P

kðA

T

m

Ã

þB

T

c

Ã

Þk, i.e., k

p

is an uncertainty bound.

The robust term c is designed as

c ¼

^

k

p

p Á sgnðe

c

Þ ð21Þ

Deﬁnition 1. considering (21) for nonholonomic mobile

robot, if using controller (12), the closed loop error

dynamics given by (20) is locally asymptotically stable

with the approximation network parameter tuning laws

given by.

_

´

W ¼ g

1

ð

^

w ÀA

T

^

mÀB

T

^

cÞe

c

Àg

1

@ke

c

k

´

W ð22Þ

_

^

m ¼ g

2

´

WAe

c

Àg

2

@ke

c

k

^

m ð23Þ

_

^

c ¼ g

3

´

WBe

c

Àg

3

@k@k

^

c ð24Þ

_

´

K ¼ g

4

ke

c

kp ð25Þ

where g

1

, g

2

, g

3

, g

4

are positive constants;

^

k

p

is an on line

estimated value of the uncertain bound k

p

.

4. Map building

An occupancy grid is essentially a data structure that

indicates the certainty that a speciﬁc part of space is occu-

pied by an obstacle, which is widely used in map building

in mobile robot [17–21]. It represents an environment as a

two-dimensional array. Each element of the array corre-

sponds to a speciﬁc square on the surface of the actual

world, and its value shows the certainty that there is some

obstacle there. When new information about the world is

received, the array is adjusted on the basis of the nature

of the information.

Here, the proposed map-building process utilizes Trans-

ferable Belief Model (TBM). The sonar sensor readings are

interpreted by this theory and used to modify the map

using transferable belief model rule. Whenever the robot

moves, it catches new information about the environment

and updates the old map. Because of using this uncertain

theory to build an occupancy map of the whole environ-

ment, it represent important point of view what we must

to consider: any position in the updated map of the whole

environment do not exist absolute exactness for the reason

that any obstacle can not be measured absolutely right-

ness. So every discrete region of the path position may be

in two states: belief degree E, O and uncertain degree H.

For this purpose, the map building system to process the

readings in order to assess, as accurately as possible, which

cells are occupied by obstacles (partially certain) and

which cells are (partially) empty and thus suitable for ro-

bot navigation.

4.1. Review theory of transferable belief model

In this section, we brieﬂy regroup some basic of the be-

lief function theory as explained in the transferable belief

model ( TBM). More details can be found in [25–29].

A.1 (Frame of discernment). The frame of discernment is

a ﬁnite set of mutually exclusive elements, denoted

X hereafter. Beware that the frame of discernment

is not necessarily exhaustive.

1392 W. Yaonan et al. / Measurement 44 (2011) 1389–1405

A.2 (Basic belief assignment). A basic belief assignment

(bba) is a mapping m

X

from 2

X

to [0, 1] that satisﬁes

A#X

m

X

ðAÞ ¼ 1. The basic belief mass (bbm) m(A),

A # X, is the value taken by the bba at A.

A.3 (Categorical belief function). A categorical belief

function on Xfocused on A

Ã

# X, is a belief function

which related bba m

X

satisﬁes:

m

X

¼

1 if A ¼ A

Ã

0 otherwise

_

ð26Þ

When all bbas are categorical, the TBM becomes equivalent

to classical propositional logic. Two limiting cases of cate-

gorical bbas have received special names.

A.4 (Normalizing or nonnormalizing) The basic belief

mass m(A) represents the part of belief exactly com-

mitted to the subset A of Xgiven a piece of evidence,

or equivalently to the fact that all we know is that A

holds. Normalizing a bba means requiring that

E(H) = 0 and that the sum of all bbms givens to

the non-empty subsets is 1. This means closing the

world. When a bba means requiring that m(H) > 0

and we call this open world. In TBM, we do not

require m(H) = 0 as in Shafer’s work.

A.5 ( Related function) the degree of belief bel(A) is

deﬁned as: bel(A): 2

X

?[0, 1] with, for all A # X

belðAÞ ¼

h–B#A

mðBÞ ð27Þ

The degree of plausibility pl(A) is deﬁned as: pl: 2

X

?[0, 1]

with, for all A # X

plðAÞ ¼

B#H;B\A¼h

mðBÞ ¼ belðHÞ ÀbelðAÞ ð28Þ

The commonality function q is deﬁned as: q: 2

X

?[0, 1]

with, for all A # X

qðAÞ ¼

A#B;B#X

mðBÞ ð29Þ

The function q,bel,pl are always in one to one correspon-

dence. More details and proofs of the relationship among

functions above can be found in [26,27].

A.6 (The conjunctive rule). Given two bbas m

X

1

; m

X

2

from

different sensor respectively, the bba that results

from their conjunctive combination deﬁned by

m½E

1

\ m½E

2

ðAÞ ¼

**B;C #H;B\C¼A
**

m½E

1

ðBÞ m½E

2

ðCÞ;

8A#H ð30Þ

A.7 (The pignistic transformation for decision). The

pignistic transformation maps bbas to so called pig-

nistic probability functions. The pignistic transfor-

mation of m

X

is given by

BetP

X

ðAÞ ¼

B#X

jA \ Bjm

X

ðBÞ

jBjð1 Àm

X

ðhÞÞ

; 8A 2 X ð31Þ

where jAj is the number of elements of Xin A. This solution

is a classical probability measure from which expected

utilities can be computed in order to take optimal deci-

sions. Some of its detail and justiﬁcations can be found in

[25,29]

4.2. sensor modeling and measurements interpretation

The Polaroid Ultrasonic Rangeﬁnder is used for map

building. This is a very common device that can detect dis-

tances in the range of 0.47–5 m with 1% accuracy. In [21],

the sensor model converts the range information into

probability values. The model in Figs. 3 and 4 is given by

Eqs. (32)–(37).

In region I, where R À e < r < R + e

x ¼

bÀ@

b

_ _

2

þ

eÀjRÀrj

e

_ _

2

2

ð32Þ

EðOjxÞ ¼

0 À1 < x < 0

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

0 6 x 6 1

_

ð33Þ

EðEjxÞ ¼ 0:8 ÀEðOjxÞ ð34Þ

In region II, where R

min

< r < R À e

x ¼

bÀ@

b

_ _

2

þ

RÀeÀr

RÀe

_ _

2

2

ð35Þ

EðOjxÞ ¼

0 À1 < x < 0

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

0 6 x 6 1

_

ð36Þ

EðEjxÞ ¼ 0:8 ÀEðOjxÞ ð37Þ

R is the range response form the ultrasonic sensor, and

(r, @) is the coordinate of a point inside the sonar cone. e is

the range error, and it distributes the evidence in Region I.

b is the half open beam angle of the sonar cone.

4.3. The fusion of data from sonar

The sonar data interpreted by Transferable Belief Model

of evidence are collected and updated into a map using the

same theory of evidence. In our approach, the basic

30

o

270

300

330

0

30

60

90

Fig. 3. Typical beam pattern of Polaroid ultrasonic sensor.

W. Yaonan et al. / Measurement 44 (2011) 1389–1405 1393

probability assignment corresponding to a range reading r

is obtained directly using Eqs. (32)–(37). The next example

illustrates the usage of transferable belief model for map-

building process.

Example 1. Let the robot be located in cell (15, 10) in the

beginning of the navigation process. The condition is

shown in Table 1 and Figs. 5 and 6. The new basic

probability assignments for cells in the map become (the

ﬁrst lies in region I and the second lies in region II):

(Static obstacles)

x

S

0

0

¼

15À3

15

ð Þ

2

þ

10À2À7

10À2

ð Þ

2

2

¼ 0:1878 x

S

1

0

¼

15À3

15

ð Þ

2

þ

10À2À7

10À2

ð Þ

2

2

¼ 0:1878

E

S

0

0

ðxÞ ¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1 ¼ 0:5341 E

S

0

0

ðxÞ ¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:5341

E

S

0

0

ðOÞ ¼ 0:8 ÀEðxÞ

S

0

0

¼ 0:2659 E

S

0

0

ðOÞ ¼ 0:8 ÀEðxÞ

S

0

0

¼ 0:2659

EðHÞ ¼ 0:2 EðHÞ ¼ 0:2

and

x

S

0

1

¼

15À6

15

ð Þ

2

þ

2Àj10À9j

2

ð Þ

2

2

¼ 0:3050 x

S

1

1

¼

15À0

15

ð Þ

2

þ

2Àj10À9j

2

ð Þ

2

2

¼ 0:6250

E

S

0

1

ðxÞ ¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:5851 E

S

1

1

ðxÞ ¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:7809

E

S

0

1

ðOÞ ¼ 0:8 ÀEðxÞ

S

1

1

¼ 0:2149 E

S

1

1

ðOÞ ¼ 0:8 ÀEðxÞ

S

1

1

¼ 0:0191

EðHÞ ¼ 0:2 EðHÞ ¼ 0:2

(Dynamic obstacles)

x

S

2

4

¼

15À3

15

ð Þ

2

þ

10À2À5

10À2

ð Þ

2

2

¼ 0:3903 x

S

2

5

¼

15À3

15

ð Þ

2

þ

2Àj10À11j

2

ð Þ

2

2

¼ 0:4450

E

S

2

4

ðxÞ ¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:6322 E

S

2

4

ðxÞ ¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:6625

E

S

0

1

ðOÞ ¼ 0:8 ÀEðxÞ

S

1

1

¼ 0:1678 E

S

0

1

ðOÞ ¼ 0:8 ÀEðxÞ

S

1

1

¼ 0:1375

EðHÞ ¼ 0:2 EðHÞ ¼ 0:2

In the next moment, the robot is moving in a direction de-

picted by the broken line and sonars scan the environment

again. This situation is given in Fig. 6. S

0

, S

1

and S

2

detect

some possible obstacles on @

S

1

0

¼ 6; r

S

0

0

¼ 4; r

S

1

0

¼ 5;

r

S

0

1

¼ 6; r

S

1

1

¼ 6; r

S

2

1

¼ 7; r

S

1

2

¼ 9; r

S

2

2

¼ 9; @

S

0

0

¼ 3; @

S

0

1

¼

6; @

S

1

1

¼ 0; @

S

2

1

¼ 3; @

S

1

2

¼ 3; @

S

2

2

¼ 6; r

S

2

6

¼ 3; @

S

2

6

¼ 6:

x

S

0

0

¼

15À3

15

ð Þ

2

þ

10À2À4

10À2

ð Þ

2

2

¼ 0:3100 x

S

1

0

¼

15À6

15

ð Þ

2

þ

10À2À5

10À2

ð Þ

2

2

¼ 0:2503

EðxÞ

S

0

0

¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:5877 EðxÞ

S

1

0

¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:5590

EðOÞ

S

0

0

¼ 0:8 ÀEðxÞ

S

0

0

¼ 0:2123 EðOÞ

S

1

0

¼ 0:8 ÀEðxÞ

S

0

0

¼ 0:2410

EðHÞ ¼ 0:2 EðHÞ ¼ 0:2

x

S

0

1

¼

15À6

15

ð Þ

2

þ

10À2À6

10À2

ð Þ

2

2

¼ 0:2112 x

S

1

1

¼

15À0

15

ð Þ

2

þ

10À2À6

10À2

ð Þ

2

2

¼ 0:6250

EðxÞ

S

0

1

¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:5427 EðxÞ

S

1

1

¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:7809

EðOÞ

S

0

1

¼ 0:8 ÀEðxÞ

S

0

0

¼ 0:2573 EðOÞ

S

1

1

¼ 0:8 ÀEðxÞ

S

0

0

¼ 0:0191

EðHÞ ¼ 0:2 EðHÞ ¼ 0:2

y

ε

ε

r

β

x

α

R

Π

angle Pr ofile for the

α

l

−15

15

Pr ofile for the range

l

R ε − R

R ε +

min

R

Fig. 4. The proﬁle of the ultrasonic sensor model.

Table 1

Condition of this example.

Robot position x

r

= 15, y

r

= 10

Dynamic

obstacles

position

and its

moving

direction

X

4x

= 14, X

4y

= 10 moving direction

in a positive X-axis direction.

X

5x

= 15, X

5y

= 7 moving direction

in a positive X-axis direction.

Static

obstacles

position

X

0x

¼ 8; X

0y

¼ 8

X

1x

¼ 10; X

1y

¼ 5

Distances r

and angles

@ detected

by different

sonar

X

0

: r

S

0

0

¼ 7; @

S

0

0

¼ 3; r

S

1

0

¼ 7; @

S

1

0

¼ 3

X

1

: r

S

0

1

¼ 9; @

S

0

1

¼ 6; r

S1

1

¼ 9; @

S

1

1

¼ 0

X

4

: r

S

2

4

¼ 5; @

S

2

4

¼ 3

X

5

: r

S

2

5

¼ 11; @

S

2

5

¼ 3

1394 W. Yaonan et al. / Measurement 44 (2011) 1389–1405

x

S

2

1

¼

15À3

15

ð Þ

2

þ

10À2À7

10À2

ð Þ

2

2

¼ 0:3278 x

S

1

2

¼

15À3

15

ð Þ

2

þ

2À 10À9 j j

10À2

ð Þ

2

2

¼ 0:3278

EðxÞ

S

2

1

¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:5970 EðxÞ

S

2

1

¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:5970

EðOÞ

S

2

1

¼ 0:8 ÀEðxÞ

S

0

0

¼ 0:2030 EðOÞ

S

2

1

¼ 0:8 ÀEðxÞ

S

0

0

¼ 0:2030

EðHÞ ¼ 0:2 EðHÞ ¼ 0:2

(Dynamic obstacle)

x

S

2

2

¼

15À6

15

ð Þ

2

þ

2Àj10À9j

10À2

ð Þ

2

2

¼ 0:1878 x

S

2

6

¼

15À6

15

ð Þ

2

þ

10À2À3

10À2

ð Þ

2

2

¼ 0:3753

EðxÞ

S

2

2

¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:5341 EðxÞ

S

2

6

¼

1

2

Â

2Âx

2

1þx

2

_ _

þ1

_ _

¼ 0:6235

EðOÞ

S

2

2

¼ 0:8 ÀEðxÞ

S

0

0

¼ 0:2659 EðOÞ

S

2

6

¼ 0:8 ÀEðxÞ

S

2

6

¼ 0:1765

EðHÞ ¼ 0:2 EðHÞ ¼ 0:2

Fig. 5. Initial position.

Fig. 6. Occupancy determination based on sonar measurements in t = 2 s.

W. Yaonan et al. / Measurement 44 (2011) 1389–1405 1395

Evidence about occupancies of cells including dynamic ob-

ject are combination using transferable belief model,

which is shown in Table 2.

From the new results it can be concluded that the evi-

dence about the occupancy of cells is increased. This pro-

cess continues until the target is reached.

5. Path planning

In this section, we adopt a new path planning strategy,

in which the robot replans the path as new observations

are acquired. What’s more, the navigation recognition

law that can determine which obstacle is dynamic or static

without any previous knowledge is discussed in this

section.

5.1. dynamic object recognition law

Condition 1. Consider the pignistic value and basic prob-

ability assignments given by transferable belief model

based on update information. At ttime, suppose the

number of the sensor is i; interval time is 1, Event A is

moving obstacle if satisfying that:

(1) m

S

1

ÈÁÁÁÈS

i

1È2ÁÁÁÈt

½AðEÞ þm

S

1

ÈÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ ! 1

(2) m

S

1

ÈÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ P0:5

Proof. Let H = {h

1

, h

2

, . . . , h

n

} denote a set of n hypothesis.

Given the likelihoods l(h

i

jx) for every h

i

2 H Let X denote

the set of possible values this observation can take.

m½xðAÞ ¼

h

i

2A

lðh

i

jxÞ

h

i

2

A

ð1 Àlðh

i

jxÞÞ ð38Þ

pl½xðAÞ ¼ 1 À

h

i

2

A

ð1 Àlðh

i

jxÞÞ ð39Þ

BetPðAÞ ¼

A#X

m

X

ðAÞ

Að1 Àm

X

ðHÞÞ

; 8A 2 X ð40Þ

Suppose the sensor measurement is A 2 Xandits likelihoods

are m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðOÞ ¼ a; m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðEÞ ¼ b, with (38)

m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ ¼ ð1 ÀaÞð1 ÀbÞ ð41Þ

From (40),

Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½Aðh

1

Þ ¼

að1 ÀbÞ þ

nÀ1

i¼1

C

i

nÀi

ð1 ÀbÞ

nÀiÀ1

=ði þ1Þ

1 Àð1 ÀaÞð1 ÀbÞ

¼

a

nÀ1

i¼0

C

i

nÀi

b

i

ð1 ÀbÞ

nÀiÀ1

=ði þ1Þ

1 Àð1 ÀaÞð1 ÀbÞ

ð42Þ

The sum can be simpliﬁed to from Delmotte [28]:

nÀ1

i¼0

C

i

nÀi

b

i

ð1 ÀbÞ

nÀiÀ1

=ði þ1Þ ¼

1 Àð1 ÀbÞ

n

nb

ð43Þ

Thus:

Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½Aðh

1

Þ ¼

a

nb

1 Àð1 ÀbÞ

n

1 Àð1 ÀaÞð1 ÀbÞ

nÀ1

ð44Þ

Let n = 2 for the reason these are two hypothesis including

occupancy (O) and empty (E):

Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðOÞ ¼

a

2b

1 Àð1 ÀbÞ

2

1 Àð1 ÀaÞð1 ÀbÞ

¼

a

2b

1 Àð1 À2b þb

2

Þ

1 Àð1 Àb Àa þabÞ

¼

a

2b

2b Àb

2

b þa Àab

¼

a

2b

2b Àb

2

b þa Àab

¼

að2 ÀbÞ

2ð1 ÀmðHÞÞ

ð45Þ

If Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðOÞ ( Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ, it means event A of

occupancy is unbelievable, thus if "e and e is a small con-

stant, and

Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðOÞ

Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðhÞ

< e we can belief that event A is

occupancy is unlikelihood.

Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðOÞ

Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ

¼

að2ÀbÞ

2ð1Àm

X

½x

1

; . . . ; x

i

ðHÞÞ

_

m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ

1Àm

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ

¼

að2ÀbÞ

2m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ

ð46Þ

If

Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðOÞ

Bet

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ

< e, we get:

m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðOÞ ! 0 and m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðEÞ

! 1 and m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ P0:5

Table 2

Probabilitic result from transferable belief model.

H H

(unknown)

Occupancy Empty

m

S0

1

ðT

0

Þ

0.3362 0.3921 0.1293

m

S1

1

ðT

0

Þ

0.3362 0.3921 0.1293

m

S0ÈS1

1

ðT

0

Þ

0.6642 0.2561 0.0505

m

S0

1

ðT

1

Þ

0.3248 0.4629 0.0875

m

S0

2

ðT

1

Þ

0.3347 0.4243 0.1063

m

S0ÈS1

1

ðT

1

Þ

0.5409 0.4550 0.0022

m

S2

1

ðT

4

Þ

0.3007 0.5261 0.0671

m

S2

1

ðT

5

Þ

0.2911 0.5714 0.0464

m

S0

2

ðT

0

Þ

0.3248 0.4629 0.0875

m

S1

2

ðT

0

Þ

0.3320 0.4243 0.1063

m

S0ÈS1

2

ðT

0

Þ

0.5371 0.3117 0.0344

m

S0ÈS1

1È2

ðT

0

Þ

0.5281 0.3874 0.0476

m

S0

2

ðT

1

Þ

0.3396 0.4031 0.1177

m

S1

2

ðT

1

Þ

0.2149 0.7660 0.0042

m

S2

2

ðT

1

Þ

0.3212 0.4758 0.0818

m

S0ÈS1ÈS2

2

ðT

1

Þ

0.5478 0.2516 0.0004

m

S0ÈS1ÈS2

1È2

ðT

1

Þ

0.5273 0.3582 0.0682

m

S1

2

ðT

2

Þ

0.4212 0.4758 0.0818

m

S2

2

ðT

2

Þ

0.3420 0.3921 0.1239

m

S1ÈS2

2

ðT

2

Þ

0.6444 0.3016 0.0368

m

S2

2

ðT

6

Þ

0.3086 0.5149 0.0661

m

S2

1È2

ðT

6

Þ Èm EðHÞ

S2

6

_ _

1

(dyanmic obstacle)

0.8444 0.1203 0.0309

mðT

S2

4

Þ

1

Èm EðHÞ

S2

4

_ _

2

(dynamic obstacle)

0.8442 0.1222 0.0292

mðT

S2

5

Þ

1

Èm EðHÞ

S2

5

_ _

2

(dyanmic obstacle)

0.8436 0.1289 0.0239

1396 W. Yaonan et al. / Measurement 44 (2011) 1389–1405

Because m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ þ m

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

½AðEÞ þ m

S

1

ÈS

2

ÁÁÁÈ

1È2ÁÁÁÈt

S

i

½AðOÞ ¼ 1, we get the condition:

(1) m

S

1

ÈÁÁÁÈS

i

1È2ÁÁÁÈt

½AðEÞ þm

S

1

ÈÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ ! 1

(2) m

S

1

ÈÁÁÁÈS

i

1È2ÁÁÁÈt

½AðHÞ P0:5If satisfying the condition, we

believe this obstacle is dynamic. And the velocity

of this dynamic obstacle is deﬁned:

v

obs

ðtÞ ¼

ﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃ ﬃ

ðx

BetP

S

1

ÈS

2

ÁÁÁÈS

i

t

ðOÞ

Àx

BetP

S

1

ÈS

2

ÁÁÁÈS

i

tÀT

ðOÞ

Þ

2

þðy

BetP

S

1

ÈS

2

ÁÁÁÈS

i

t

ðOÞ

Ày

BetP

S

1

ÈS

2

ÁÁÁÈS

i

tÀT

ðOÞ

Þ

2

_ _

T

ð47Þ

where T is interval.

5.2. potential ﬁeld method based on TBM

The potential ﬁeld method has been studied extensively

for autonomous mobile robot path planning in the past

decade. An attractive potential which drives the mobile ro-

bot to its destination can be described by Ge and Cui

[10,13]. In this paper, we proposed a new method to

modify this potential ﬁeld method based on TBM in com-

plex dynamic environment under static and dynamic

obstacle.

Let ½x; y

BetP

X

½Tt ðOÞ

and [x

r

,y

r

] is the position both from

dynamic obstacle and robot. v

obs

ðtÞ ¼

ﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃ

ðx

BetP

S

1

ÈS

2

ÁÁÁÈS

i

t

ðOÞ

Àx

BetP

S

1

ÈS

2

ÁÁÁÈS

i

tÀT

ðOÞ

Þ

2

þðy

BetP

S

1

ÈS

2

ÁÁÁÈS

i

t

ðOÞ

Ày

BetP

S

1

ÈS

2

ÁÁÁÈS

i

tÀT

ðOÞ

Þ

2

_ _

T,

v

RO

(t) = [v(t) À v

obs

(t)]

T

n

RO

, at t time,

Fatt ðq; vÞ ¼

BetP

S1ÈS2ÁÁÁÈS

i

1È2ÁÁÁÈt

ðOÞm@qkq

tar

ðtÞ ÀqðtÞk

2

nRT if not satisfy condition:1

BetP

S1ÈS2ÁÁÁÈS

i

t

ðOÞm@

q

kq

tar

ðtÞ ÀqðtÞk

2

n

RT

þBetP

S1ÈS2ÁÁÁÈS

i

t

ðOÞn@

v

kv

tar

ðtÞ ÀvðtÞk

2

n

VRT

if satisfy condition:1

_

¸

¸

_

¸

¸

_

ð48Þ

where q(t) and q

tar

(t) denote the positions of the robot and

the target at time t, respectively; q = [xyz]

T

in a 3-dimen-

sional space or q = [xy]

T

in a 2-dimensional space; v(t)

and v

tar

(t) denote the velocities of the robot and the target

at time t, respectively; kq

tar

(t) À q(t)k is the Euclidean dis-

tance between the robot and the target at time t;

kv

tar

(t) À v(t)k is the magnitude of the relative velocity be-

tween the target and the robot at time t; @

q

and @

v

are sca-

lar positive parameters; nand mare positive constants;

BetP

X

(O) is pignsitic decision value that sensor detect the

occupancy belief value both for static and dynamic respec-

tively; n

RT

being the unit vector pointing from the robot to

the target and n

VRT

being the unit vector denoting the rela-

tive velocity direction of the target with respect to the

robot.

Also a repulsive potential can be written as:

F

rep

ðq; vÞ ¼

0; if q

s

ðq; q

obs

Þ Àq

m

ðv

RO

Þ Pq

0

orv

RO

60

or satisfy condition:1

F

rep1

þF

rep2

; if 0 <q

s

ðq; q

obs

Þ Àq

m

ðv

RO

Þ <q

0

and v

RO

>0

or satisfy condition:1

not defined; if v

RO

>0 and q

s

ðq; q

obs

Þ <q

m

ðv

RO

Þ

or satisfy condition:1

1

2

BetP

S

1

ÈS

2

ÁÁÁÈS

i

1È2ÁÁÁÈt

ðOÞg

S

1

qðq;q

obs

Þ

À

1

q

0

_ _

2

if not satisfy condition:1

_

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

_

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

¸

_

ð49Þ

where

F

repv1

¼

ÀgBetP

X

ðO

v

Þ

ðq

s

ðq; q

obs

Þ Àq

m

ðv

RO

ÞÞ

2

1 þ

v

RO

a

max

_ _

n

RO

F

repv2

¼

BetP

X

ðO

v

Þgv

RO

v

RO?

q

s

ðq; q

obs

Þa

max

ðq

s

ðq; q

obs

Þ Àq

m

ðv

RO

ÞÞ

2

n

RO?

q

m

ðv

RO

Þ ¼

v

2

RO

ðtÞ

2a

max

BetP

X

ðO

v

Þ ¼

O#X

m

X

ðO

v

Þ

jO

v

jð1 Àm

X

ðHÞÞ

; 8A#X

With q

0

is a positive constant describing the inﬂuence

range of the obstacle; q

s

(q(t), q

obs

(t)) is denoted the short-

est distance between the robot and the body of the obsta-

cle; v

RO

(t) is the relative velocity between the robot and the

obstacle in the direction from the robot to the obstacle; n

RO

is a unit vector pointing from the robot to the obstacle; a

and g is a positive constant; BetP

X

(O) and BetP

X

(O

v

) are

pignsitic decision value that sensor detect the occupancy

belief value both for static and dynamic respectively;

m

X

(H) is unknown value from pignistic transformation.

From Eqs. (48) and (49), we obtain the potential total

virtual force which drives the reference point of the mobile

robot to the destination, described by:

F

total

¼ F

att

þF

rep

ð50Þ

Assuming that the target moves outward or synchronously

with the obstacle, the robot is obstructed by the obstacle

and cannot reach the target. Refs. [13–17] indicate in highly

dynamic environment, waiting method is often adopted

where both the target and the obstacle are moving.

5.3. Local minimum problems

Like other local path planning approaches, the proposed

navigation algorithm has a local minimum problem. To de-

tect the outbreak of local minimum, we adopt the method

in [5] (see Fig. 7), which compares the robot-to-target

direction, h

t

, with the actual instantaneous direction of tra-

vel, h

0

. If the robot’s direction of travel is more than a cer-

tain angle (90° in most cases) off the target point, that is,

when

jh

t

Àh

0

j > h

s

ð51Þ

where h

s

is a trap warning angle, and typically 90°. We re-

gard it is very likely about to get trapped.

Fig. 7. Incorporating a virtual target on local-minimum alert.

W. Yaonan et al. / Measurement 44 (2011) 1389–1405 1397

Consider a common situation as shown in Fig. 8, in

which the robot needs to detour around a long wall to

reach the goal. At position A, the robot is heading to the

goal position and hence h

s

= 0°.

But at position B, the robot changes its direction by

repulsive force from detected wall and h

s

P90°. In such

a situation, the original target point is replaced with the

virtual target point. The virtual target point is placed by

following equations:

x

v

¼ x

i

þm cosðh

i

Àb

vt

Þ

y

v

¼ y

i

þm sinðh

i

Àb

vt

Þ ð52Þ

h

v

¼ h

0

Àb

vt

where m is the distance that the robot needs to travel to

reach the virtual goal. Also

v ¼ L ÂBetP

X

ðOÞ þR ð53Þ

where L is the distance between the robot and the obstacle

near the virtual target detected by the longest-range sen-

sor in the corresponding direction. BetP

X

(O) is a pignsitic

decision value that sensor detect the occupancy belief va-

lue in h

v

that direction. R is an offset which depends on

the size of the robot. b

vt

is a certain angler, and typically

45°.

Based on (51)–(53), the robot escapes from this trap-

ping point and moves to point C. When the robot escapes

from the condition above or when no obstacle is detected,

the original target point is restores, and the robot contin-

ues to proceed to the target point.

6. Simulation and experiment results

To show the usefulness of the proposed approach, a ser-

ies of simulations have been conducted using an arbitrarily

constructed environment including obstacles.

6.1. Simulation results

In these simulations, the positions of all the obstacles

including moving obstacles in the workspace are unknown

to the robot. The robot is aware of its start and ﬁnish posi-

tions only. Simulation results obtained in three working

scenarios are presented in Figs. 9–13. The robot has been

modeled as a small circle, and imaginary sensors (sixteen

in number) are placed in the form of the arc along the cir-

cumference of the robot. The minimum distance obtained

within the cone of each sensor is considered as the dis-

tance of the obstacle from the sensor which detects any

obstacle. And any distance information detected from so-

nar is adding Gaussian White Noise. The ﬁrst to fourth

experiments are conducted to verify the proposed method

in the MATLAB environment, and the ﬁfth experiment is

carried out from Mobilesim environment.

Experiment 1: Fig. 9 represents the obstacle avoidance

path of the mobile robot in static case. Table 3 shows the

condition settings of this experiment. As seen from the vir-

tual plane shown in Fig. 9a, with the decrease of distance

between Obs6 and robot, BetP

X

(O) became much large

which lead the repulsive force much large. Ultimately, ro-

bot changes its path by deviating to the right. The complete

process is shown in Fig. 9.

Experiment 2: This example illustrates the collision

avoidance process in dynamic environment. The scenario

Fig. 8. Anti-deadlock mechanism.

Fig. 9. Collision avoidance (static case).

1398 W. Yaonan et al. / Measurement 44 (2011) 1389–1405

shown in Fig. 10 has some similarity with the scenarios of

example 1 but adding two moving obstacles. Fig. 10 thor-

oughly plots the obstacle avoidance path of the mobile ro-

bot in discrete time while it is maneuvered in a complex

environment (including dynamic and static obstacle)

where two rectangular shapes of obstacles are moving in

Fig. 10. Collision avoidance (dynamic case).

W. Yaonan et al. / Measurement 44 (2011) 1389–1405 1399

different directions and different velocity, meanwhile, ﬁve

static obstacles also set in this environment. Table 4 shows

the condition settings of this experiment.

Fig. 10 represents the scenario and the virtual plane in

different time intervals: (a and b) 0 6 t 6 5s; (c and d)

0 6 t 6 10s; (e and f) 0 6 t 6 15s; (g and h) 0 6 t 6 37s. As

seen from Fig. 10(c and d), the robot is in a collision course

with both ObsD1, Obs1 and Obs2. However, it is more ur-

gent to avoid collision with ObsD1, and the robot slows

down to accomplish this. The robot avoids collision with

Obs2 as shown in Fig. 10e and f. Fig. 10e and f show the ro-

bot neglect this moving obstacle and can not change its

path for the reason there is no collision risk with ObsD2.

The complete process is shown in Fig. 10g and h.

Experiment 3: For local minimum problem avoidance, a

rigorous testing is carried out in cluttered maze consisting

of concave obstacles and dynamic obstacles (Fig. 11). In the

following Figs. 11 and 12, h represents the static obstacle

and represents the dynamic obstacle. Point a, b, c, . . . rep-

resent several trapping points. represents the real goal

Fig. 11. Collision avoidance (local minimum case).

1400 W. Yaonan et al. / Measurement 44 (2011) 1389–1405

Fig. 11 (continued)

Fig. 12. Collision avoidance (mazes case).

W. Yaonan et al. / Measurement 44 (2011) 1389–1405 1401

and represents the current virtual goal. ?represents the

potential total virtual force which drives the reference

point of the mobile robot to the current position. Table 5

shows the dynamic obstacles sets of this experiment.

Fig. 11 represents the scenario and the virtual plane in

different time intervals: (b and c) 0 6 t 6 10s; (d and e)

0 6 t 6 30s; (f and g) 0 6 t 6 40s. As seen from Fig. 11b

and c, the robot does not detect the obsD1 until it reaches

point A at which time it turns to the right-the side with the

lowest collision possibility. Since the obsD1 is now to

change its moving direction and move away from the ro-

bot, the robot continues to track the virtual goal in a coun-

terclockwise direction until it reaches the virtual goal at

point B. Since the goal is now to the left of the robot, it con-

tinues to track the current virtual goal (point C) at which

time the robot detects obsD2. But in Fig. 11b, it shows

the robot neglects this moving obstacle and can not change

its path for the reason that there is no collision risk with

ObsD2. In Fig. 11d and e, there are several local minimum

points including point D, E, F. At these points where the ro-

bot’s direction of travel is more than a certain angle

(jh

t

À h

0

j > 90°) off the target point, the robot turns its

direction and tracks current virtual goal. Then the robot

continues to see the wall on its left until it arrives at point

H and I after which it goes towards a real target point. The

complete process is shown in Fig. 11f and g and the robot

moves counterclockwise around obstacles through point J

and K to the ﬁnal goal.

Fig. 13. Collision avoidance based on Mobilesim environment (dynamic case).

Table 3

Experimental conditions of obstacle avoidance (static case).

Initial robot position x

r

= 0[m], y

r

= 0[m]

Goal position Goal

x

= 30[m], Goal

y

= 20.5[m],

Obstacle position Obs1

x

= 8[m], Obs1

y

= 10[m]

Obs2

x

= 13[m], Obs2

y

= 10[m]

Obs3

x

= 18[m], Obs3

y

= 20[m]

Obs4

x

= 20[m], Obs4

y

= 15[m]

Obs5

x

= 24[m], Obs5

y

= 20[m]

Obs6

x

= 9[m] ?12[m], Obs6

y

= 10[m]

Processing time 30 s

1402 W. Yaonan et al. / Measurement 44 (2011) 1389–1405

Experiment 4: In this experiment, the more complex

test is performed to present the effectiveness of the pro-

posed approach in cluttered environment with obstacle

loops and mazes. Similar as experiment 3, h represents

the static obstacle and represents the dynamic obstacle.

Fig. 12a, c and e thoroughly plot the obstacle avoidance

path of the mobile robot in discrete time while it is maneu-

vered in a dynamic environment where many dynamic

obstacles are moving in different direction. Whereas,

Fig. 12b, d and f give some detail information about how

the robot navigation system behaves at each discrete time.

The proposed algorithm performs well in a cluttered dy-

namic environment, as shown in Fig. 12a and b. The pro-

cesses for wall following, passing through a narrow door

and escaping from local minimum point using our algo-

rithm are shown in Fig. 12c–f.

Experiment 5: This example is shown in Fig. 13. This

illustrates the collision avoidance process by change both

seed and orientation when facing with multi-moving

obstacle. Fig. 13a represents the scenario, while Fig. 13b–

e indicate the virtual plane in different time intervals: (b)

0 6 t 6 1.2; (c) 0 6 t 6 5.1; (d) 0 6 t 6 6.3; (e) 0 6 t 6 8.1.

As seen from the virtual plane shown in Fig. 13b, there is

a collision risk with D

1

in the time 0 6 t 6 1.2. The robot

changes its path by deviating to the right, as shown in

Fig. 13b. In Fig. 13c and d, robot changes its orientation

by avoiding collision with D

1

and D

2

using proposed

method.

6.2. Experiment with a pioneer robot

The method presented in this paper has been tested on

a Pioneer robot. The robot and the experiment setup are

shown in Fig. 14. The laptop on top on the robot is in

charge of all computation: motion control, planning, SLAM

and so on. The navigation is carried out in real-time and we

only use sonar to detect obstacle.

In order to simulate dynamic obstacle, two football mo-

bile robots (MT-Robot) are used based on remote control.

The orientation of the two MT-Robot is shown in Fig. 15.

The track line of Pioneer robot is indicated approximately

with blue color in this ﬁgure. The room is a clean environ-

ment with long wall obstacles and measures approxi-

mately 8.6 m by 6.5 m. The objects were placed in the

room and their positions are point out as shown in

Fig. 15. The position of the robot is recorded at regular time

intervals. During the test executions, all programs were

run under the Windows NT operating system, directly on

the main processor of the robot, a Pentium 2.3G. A test case

is presented in Fig. 15.

In this experiment, the goal position is aligned to the

front of the mobile robot which is obstructed by obstacles

wall and two MT-Robots. The mobile robot starts to move

towards according to the real goal. As it moves, it detects

obstacles in front and in the left from its ultrasonic sensors.

The navigation law makes a correct decision by indicating

the right direction to the mobile robot until it reaches the

virtual goal. Then the mobile robot keeps moving and de-

tects dynamic obstacles in right direction. The robot makes

the correct decision and turns to the left avoiding the de-

tected obstacles successfully and then reaches the goal.

Although feature detectors using sonar is not very

accurate especially in complex environment and real

Table 4

Experimental conditions of obstacle avoidance (dynamic case).

Initial robot position x

r

= 0[m], y

r

= 0[m]

Goal position Goal

x

= 29[m], Goal

y

= 25[m],

Initial dynamic obstacle

position and its moving

direction and velocity

ObsD1

x

= 10[m], ObsD1

y

= 0[m]

moving direction and velocity

moving in a positive Y-axis

direction at 0.88(m/s)

ObsD2

x

= 12[m], ObsD1

y

= 15[m]

moving direction and velocity

moving in a positive X-axis

direction at 0.19(m/s).

Static obstacle position

Obs1

x

¼ 8½m; Obs1

y

¼ 10½m

Obs2

x

¼ 12½m; Obs2

y

¼ 10½m

Obs3

x

¼ 18½m; Obs3

y

¼ 20½m

Obs4

x

¼ 20½m; Obs4

y

¼ 15½m

Obs5

x

¼ 24½m; Obs5

y

¼ 20½m

Processing time 37 s

Fig. 14. Pioneer robot.

Table 5

Experimental conditions of dynamic obstacles.

Initial robot position x

r

= 0[m], y

r

= 0[m]

Goal position Goal

x

= 29[m], Goal

y

= 25[m],

Dynamic obstacle position and

its moving direction and

velocity

ObsD 1

x

= 0[m] M6[m],

ObsD1

y

= 5[m], moving back and

forth along X-axis and its velocity

at 1.20(m/s)

ObsD2

x

= 10[m],

ObsD1

y

= 5[m] M10[m], moving

back and forth along Y-axis and its

velocity at 1.00(m/s)

ObsD3

x

= 15[m] M22[m],

ObsD3

y

= 10[m], moving back and

forth along X-axis and its velocity

at 0.70(m/s)

ObsD4 = (0, 20) M(5, 25), moving

back and forth and its velocity at

0.71(m/s)

Processing time 40 s

W. Yaonan et al. / Measurement 44 (2011) 1389–1405 1403

performance, in this experiment, it is not very robust as

compared with simulation experiment in Matlab or

Mobilesim (Process time became long, waiting motion

happened frequently), it also can achieve the goal based

on this navigation method. Furthermore, the performance

of the simulation and the real robot is different. The simu-

lation has a faster cycle time for the iterations of the pro-

posed method. This allows the robot to follow the

smoother path. The reactions of the robot in the simulation

are also different from the real robot. For instance, once a

speed command is given to the real robot, the set point is

obtained over a period of time. In the simulation, the speed

set point is almost immediately achieved.

From these results, it shows that based on our collision

avoidance algorithm, the mobile robot can ﬁnd a safe and

smooth path for attaining the target position autono-

mously whether in a static or dynamic environment when

adding highly noise to the sensor.

7. Conclusion

In this paper, a new mobile robot navigation strategy

for nonholomonic mobile robot in dynamic environment

was designed and fully tested in this work based on trans-

ferable belief model. The aim was to let the robot ﬁnd a

collision-free trajectory between the starting conﬁguration

and the goal conﬁguration in a dynamic environment con-

taining some obstacle (including static and moving object).

For this purpose, a navigation strategy which consists of

sonar data interpretation and fusion, map building, and

path planning is designed. The sonar data fusion and dy-

namic object distinguish law were discussed using trans-

ferable belief model. Based on proposed judging law, a

path planning algorithm is modiﬁed based on potential

ﬁeld method. Simulation and experiment results validate

the effectiveness of the proposed method. Though mobile

robot is discussed, the method is generally applicable to

other type of problem, as well as to pattern recognition,

objective classiﬁcation.

Acknowledgements

The authors thank the anonymous editor and reviewer

for their invaluable suggestions, which has been incorpo-

rated to improve the quality of this paper dramatically.

This work was supported by National Natural Science

Foundation of China (60775047, 60673084), National High

Technology Research and Development Program of China

(863 Program: 2008AA04Z214), National Technology Sup-

port Project (2008BAF36B01) and Research Foundation of

Hunan Provincial Education Department (10C0356).

References

[1] K.D. Rueb, A.K.C. Wong, Structuring free space as a hypergrarph for

roving robot path planning and navigation, IEEE Transactions on

Pattern Analysis and Machine Intelligence 9 (2) (1987) 263–273.

[2] M.K. Habib, S. Yuta, Efﬁcient online path planning algorithm and

navigation for a mobile robot, International Journal of Electronics 69

(2) (1990) 187–210.

[3] M.J. Mataric, Integration of representation into goal-riven behavior-

based robots, IEEE Transactions on Robotics and Automation 8 (3)

(1992) 304–312.

[4] E. Rimon, D.E. Koditschek, Exact robot navigation using artiﬁcial

potential function, IEEE Transactions on Robotics and Automation 8

(5) (1992) 501–518.

[5] J. Borensein, Y. Koren, Real-time obstacle avoidance for fast mobile

robots, IEEE Transactions on Systems Man and Cybernetics 19 (5)

(1989) 1179–1187.

[6] S. Thrun, W. Burgard, D. Fox, A probabilistic approach to concurrent

mapping and localization for mobile robots, Machine Learning 31

(1–3) (1998) 29–53.

[7] P. Svestka, M.H. Overmars, Motion planning for carlike robots using a

probabilistic learning approach, International Journal of Robotics

Research 16 (2) (1997) 119–143.

[8] S. Thrun, Probabilistic algorithms in robotics, Ai Magazine 21 (4)

(2000) 93–109.

[9] C.M. Clark, Probabilistic road map sampling strategies for multi-

robot motion planning, Robotics and Autonomous Systems 53 (3–4)

(2005) 244–264.

[10] S.S. Ge, Y.J. Cui, New potential functions for mobile robot path

planning, IEEE Transactions on Robotics and Automation 16 (5)

(2000) 615–620.

[11] K.P. Valavanis, T. Hebert, R. Kolluru, et al., Mobile robot navigation in

2-D dynamic environments using an electrostatic potential ﬁeld,

IEEE Transactions on Systems Man and Cybernetics Part A –Systems

and Humans 30 (2) (2000) 187–196.

Fig. 15. Mobile robot and experiment.

1404 W. Yaonan et al. / Measurement 44 (2011) 1389–1405

[12] N.C. Tsourveloudis, K.P. Valavanis, T. Hebert, Autonomous vehicle

navigation utilizing electrostatic potential ﬁelds and fuzzy logic, IEEE

Transactions on Robotics and Automation 17 (4) (2001) 490–497.

[13] S.S. Ge, Y.J. Cui, Dynamic motion planning for mobile robots using

potential ﬁeld method, Autonomous Robots 13 (3) (2002) 207–222.

[14] J. Ren, K.A. McIsaac, R.V. Patel, Modiﬁed Newton’s method applied to

potential ﬁeld based navigation for nonholonomic robots in dynamic

environments, Robotica 26 (2008) 285–294.

[15] L. Yin, Y.X. Yin, C.J. Lin, A new potential ﬁeld method for mobile robot

path planning in the dynamic environments, Asian Journal of Control

11 (2) (2009) 214–225.

[16] F. Belkhouche, B. Belkhouche, Kinematics-based characterization of

the collision course, International Journal of Robotics & Automation

23 (2) (2008) 127–136.

[17] Y.C. Chang, Y. Yamamoto, On-line path planning strategy integrated

with collision and dead-lock avoidance schemes for wheeled mobile

robot in indoor environments, Industrial Robot – An International

Journal 35 (5) (2008) 421–434.

[18] A.O. Djekoune, K. Achour, R. Toumi, A sensor based navigation

algorithm for a mobile robot using the DVFF approach, International

Journal of Advanced Robotic Systems 6 (2) (2009) 97–108.

[19] J. Minguez, L. Montano, Sensor-based robot motion generation in

unknown, dynamic and troublesome scenarios, Robotics and

Autonomous Systems 52 (4) (2005) 290–311.

[20] L. Moreno, E. Dapena, Path quality measures for sensor-based

motion planning, Robotics and Autonomous Systems 44 (2) (2003)

131–150.

[21] J. Velagic, B. Lacevic, B. Perunicic, A 3-level autonomous mobile

robot navigation system designed by using reasoning/search

approaches, Robotics and Autonomous Systems 54 (12) (2006)

989–1004.

[22] X.Y. Yang, M. Moallem, R.V. Patel, A layered goal-oriented fuzzy

motion planning strategy for mobile robot navigation, IEEE

Transactions on Systems Man and Cybernetics Part B –Cybernetics

35 (6) (2005) 1214–1224.

[23] T. Kondo, Evolutionary design and behavior analysis of

neuromodulatory neural networks for mobile robots control,

Applied Soft Computing 7 (1) (2007) 189–202.

[24] J.A. Fernandez-Leon, G.G. Acosta, M.A. Mayosky, Behavioral control

through evolutionary neurocontrollers for autonomous mobile robot

navigation, Robotics andAutonomous Systems 57(4) (2009) 411–419.

[25] P. Smets, Analyzing the combination of conﬂicting belief functions,

Information Fusion 8 (4) (2007) 387–412.

[26] F. Delmotte, P. Smets, Target identiﬁcation based on the transferable

belief model interpretation of Dempster–Shafer model, IEEE

Transactions on Systems Man and Cybernetics Part A – Systems

and Humans 34 (4) (2004) 457–471.

[27] T. Denoeux, P. Smets, Classiﬁcation using belief functions:

Relationship between case-based and model-based approaches,

IEEE Transactions on Systems Man and Cybernetics Part A –

Systems and Humans 36 (6) (2006) 1395–1406.

[28] F. Delmotte, Comparison of the performances of decision aimed

algorithms with Bayesian and beliefs basis, International Journal of

Intelligent Systems 16 (8) (2001) 963–981.

[29] P. Smets, Application of the transferable belief model to diagnostic

problems, International Journal of Intelligent Systems 13 (3) (1998)

127–157.

[30] R. Fierro, F.L. Lewis, Control of a nonholonomic mobile robot:

Backstepping kinematics into dynamics, Journal of Robotic Systems

14 (3) (1997) 149–163.

[31] Q.J. Zhang, J. Shippen, B. Jones, Robust backstepping and neural

network control of a low-quality nonholonomic mobile robot,

International Journal of Machine Tools & Manufacture 39 (7)

(1999) 1117–1134.

[32] S.X. Yang, M. Meng, Real-time ﬁne motion control of robot

manipulators with unknown dynamics, Dynamics of Continuous

Discrete and Impulsive Systems-Series B – Applications &

Algorithms 8 (3) (2001) 339–358.

[33] G. Antonelli, S. Chiaverini, G. Fusco, A fuzzy-logic-based approach for

mobile robot path tracking, IEEE Transactions on Fuzzy Systems 15

(2) (2007) 211–221.

[34] Z.G. Hou, A.M. Zou, L. Cheng, et al., Adaptive control of an electrically

driven nonholonomic mobile robot via backstepping and Fuzzy

approach, IEEE Transactions on Control Systems Technology 17 (4)

(2009) 803–815.

[35] W. Sun, Y.N. Wang, A robust robotic tracking controller based on

neural network, International Journal of Robotics & Automation 20

(3) (2005) 199–204.

[36] Y. Zuo, Y.N. Wang, X.Z. Liu, et al., Neural network robust control for a

nonholonomic mobile robot including actuator dynamics,

International Journal of innovative Computing, Information and

Control 6 (8) (2010) 3437–3449.

[37] Y.N. Wang, W. Sun, Y.Q. Xiang, et al., Neural network-based robust

tracking control for robots, Intelligent Automation and Soft

Computing 15 (2) (2009) 211–222.

Wang Yaonan received the B.S. degree in

computer engineering from East China Sci-

ence and Technology University (ECSTU),

Shanghai, China, in 1982 and the M.S., and

Ph.D. degrees in electrical engineering from

Hunan University, Changsha, China, in 1990

and 1994, respectively.

From 1994 to 1995, he was a Postdoctoral

Research Fellow with the Normal University

of Defence Technology. From 1981 to 1994, he

worked with ECSTU. From 1998 to 2000, he

was a senior Humboldt Fellow in Germany,

and from 2001 to 2004, he was a visiting professor with the University of

Bremen, Bremen, Germany. He has been a Professor at Hunan University

since 1995. His research interests are intelligent control and information

processing, robot control and navigation, image processing, and industrial

process control.

Yang Yimin received the B.E.E., and M.S.E.E.

degrees in 2005 and 2009, respectively, from

Xiangtan University and Hunan University,

Hunan, China. Now, He is currently working

toward the Ph.D. degree in Hunan University.

His research interests include robot control

and navigation, intelligent information pro-

cessing, and artiﬁcial neural networks.

Yuan Xiaofang received the B.S., M.S., and

Ph.D. degrees in electrical engineering from

Hunan University, Changsha, China, in 2001,

2006 and 2008, respectively.

He is currently a lecturer with the College of

Electrical and Information engineering, Hunan

University. His research interests include

intelligent control theory and applications,

kernel methods, and artiﬁcial neural networks.

Zuo Yi received the Ph.D. degree in Control

Science and Engineering from Hunan Univer-

sity, Changsha, China, in 2009. He is a visiting

scholar in University of Waterloo from 2008

to 2009. His scientiﬁc interests include neural

networks and robotic robust control.

W. Yaonan et al. / Measurement 44 (2011) 1389–1405 1405

mc and mw are the masses of the robot body and wheel with actuator. is used to achieve the tracking control of the mobile manipulator under some uncertainties without any knowledge of those robot parameters. with the time update is proven to be suitable for real-time applications. Ic. based on transferable belief model (TBM) to design a fusion of ultrasonic sensor data. Section 3 discusses the nonlinear kinematics-WNN back stepping controller as applied to the tracking problem. The transferable belief model (TBM) is a model for the quantiﬁed representation of epistemic uncertainty and which can be an agent. It is assumed that the wheels of the robot do not slide. incomplete. the robot dynamics can be transformed as [30. any prior knowledge about the environment is. the suggested method can determine which obstacle is dynamic or static without any previous knowledge of moving obstacles. a neural robust tracking controller. . then selecting the relevant obstacles for corresponding robot avoidance motion. Then. The local map. 2. OrXrYr is the coordinate system ﬁxed to the mobile robot. Application of the transferable belief model (TBM) to many areas has been presented in [25–29] including classiﬁcation and target identiﬁcation during recent times. The MR with two driven wheels shown in Fig. the wheel with a motor about the wheel axis. 1 is a typical example of nonholonomic mechanical systems. but this prior knowledge is not easy to gain. most of the proposed approaches for mobile robots in the literature did not consider autonomous navigation in an environment which includes both static obstacles and dynamic obstacles. 1. each moving object’s velocity and direction is exactly known. First. represented as an occupancy grid.. Yaonan et al. _ _ x sin h À y cos h ¼ 0 ð2Þ All kinematic equality constraints are assumed to be independent of time and to be expressed as follows: _ AðqÞq ¼ 0 ð3Þ By appropriate procedures and deﬁnitions. The rest of the paper is organized as follows: Section 2 shows the mathematical representation of mobile robot with two actuated wheels. This is expressed by the nonholonomic constraint. where (x. Taking the velocity of the obstacles and static object into account. Attractive and repulsion potential function is modiﬁed by TBM pignistic belief value.34] Mv_w þ V v w þ F þ sed ¼ s where ð4Þ 1 A 0 M¼ r2 2 @ 4b r2 4b2 ðmb þ I0 Þ þ Iw ðmb À I0 Þ 2 2 r2 4b2 r2 4b2 ðmb À I0 Þ ðmb þ I0 Þ þ Iw 2 2 1 r2 _ mc dh 0 2b2 C B C V ¼ B À r22 mc dh 0 _ A @ 4b 0 F ¼ JT F sed ¼ JT sed B¼ 1 0 0 1 q ¼ ðx.1390 W. etc. Section 4 proposed a new method for map building based on sonic rangeﬁnder. Model of a mobile robot with two actuated wheels The kinematic model of an ideal mobile robot is widely used in the mobile robot (MR) control [30–37]. in general. Furthermore. [34]. an intelligent sensor. I0 = mcd2 + 2mwb2 + Ic + 2Im. OXY is the reference coordinate system. y)T is the coordinate of Or in the reference coordinate system. hÞT ð1Þ where m = mc + 2mw. However. or completely unknown. Iw and Im are the moments of inertia of the robot body about the vertical axis through p. This work investigates the possibility of using transferable belief model (TBM) as a promising alternative for navigation system of nonholomonic mobile robot. is the origin of the mobile robot. In the 2-D Cartesian space. Simulation and experiment are shown in Section 6. and provides a highly ﬂexible model to manage the uncertainty encountered in the multi-sensor data fusion problems. d is the distance from Or to P. the pose of the robot is represented by Yr left wheel 2r y Xr P 2b Or d θ right wheel O x Fig. in the nature of the real world. respectively. respectively. comprising adaptive wavelet neural network controller (AWNN). Nonholonomic mobile robot. The motion model including kinematics and dynamics of the nonholonomic mobile robot system can be described by Hou et al. / Measurement 44 (2011) 1389–1405 in dynamic environment. uncertain. y. and the heading direction h is taken counterclockwise from the OX-axis. Or the middle between the right and left driving wheels. And we feel it appealing when using navigation of mobile robots. These methods assume that. A path-planning approach is discussed in Section 5. and the wheel with a motor about the wheel diameter. 2b is the distance between the two driving wheels and r is the radius of the wheel. a new approach for path planning is proposed.

actuator dynamics is combined with the mobile robot dynamics. mÞ ð16Þ c where the term W wðxi . 2. and ka is the torque constant multiplied by the gear ratio. robot dynamics. In this section. such that c M v_c þ V m v c þ F ¼ W wðxi . c. Ra is the electric resistance constant. Structure of the wavelet neural network-based robust control system. based on whether the system is described by a kinematic model or a dynamic model. Ra Ra sl ka v l ka kb xl ¼ À R Ra M v_w þ V v w þ F þ sed ¼ ð6Þ ka v r ka kb xr À Ra Ra ð11Þ The controller for s is chosen as By using (4)–(6). To this effect. proof and simulation can be also found in [36]. Using kinematic or dynamic models of nonholonomic mobile robots. the tracking-control problem is classiﬁed as either a kinematic or a dynamic tracking-con- The function in braces in (15) can be approximated by an AWNN. The dynamic equations of the motor-wheels are: trol problem. so error can be deﬁned as c c ð13Þ where V. the error dynamics stable the input term is chosen as V¼ v w _ M ec ¼ ÀV m ec À K 4 ec þ ~ þ sd þ f ð10Þ Àc ka kb ka BX v c À Bu Ra Ra ð15Þ 3. Recall the robot dynamics (10) sr k a v r k a k b xr ¼ À . X is selected as ec ¼ v c À v ð9Þ ð14Þ " X¼ 1 r 1 r R r # ÀR r ! Using (11)–(14). . Nonlinear kinematic-WNN backstepping controller In area of research of trajectory tracking in mobile robot. 2. The structure for the tracking control system is presented in Fig. / Measurement 44 (2011) 1389–1405 1391 Since the wheels of the robot are driven by actuators. no knowledge Fig. and wheel actuator dynamics can be written as: s ¼ ^ þ K 4 ec þ c f ð12Þ M v_w þ V v w þ F þ sed ¼ wH ¼ XV ka v r ka kb xr À Ra Ra where K4 is a positive deﬁnite diagonal gain matrix. More detail. qÞv c þ Fðv Þ _ where x ¼ ½v T v T v T . The motor models attained by neglecting the voltage on the inductances are: sr ¼ ka ðv r À kb xr Þ=Ra . a neural robust tracking controller is discussed brieﬂy to achieve the tracking control under some uncertainties without any knowledge of those robot parameters from our recent work [36]. kb is equal to the voltage constant multiplied by the gear ratio. Yaonan et al. various approaches [21. sl ¼ ka ðv l À kb xl Þ=Ra ð5Þ where vr and vl are the input voltages applied to the right and left motors. and ^ is f an estimation of f(x) that is deﬁned by ð7Þ ð8Þ _ f ðxÞ ¼ MðqÞv_c þ V m ðq. c.W. it is necessary to take the dynamics of the wheel actuators into account. sr and sl are the right and left motor torques multiplied by the gear ratio.32–37] consider that wheel torques are control inputs though in reality wheels are driven by actuators and therefore using actuator input voltages as control inputs is more realistic. the mobile robot model including the robot kinematics. In this ﬁgure. mÞ represents an adaptive approximation WNN.

7 6 . Here. A. the closed loop error dynamics given by (20) is locally asymptotically stable with the approximation network parameter tuning laws given by. g2. the proposed map-building process utilizes Transferable Belief Model (TBM). kp is an on line estimated value of the uncertain bound kp. The sonar sensor readings are interpreted by this theory and used to modify the map using transferable belief model rule.1 (Frame of discernment). g3. Yaonan et al. The approximation errors and disturbances are bounded. it catches new information about the environment and updates the old map.. 5 4 5 @w11 @w11 ~ wpq @m @m ^ m¼m c¼^ c _ c ^ ¼ g3 W Bec À g3 @k@k^ c c _ b K ¼ g4 kec kp ^ where g1. considering (21) for nonholonomic mobile robot. and translation parameters of the WNN tuned on line. kp is an uncertainty bound. The frame of discernment is a ﬁnite set of mutually exclusive elements. the array is adjusted on the basis of the nature of the information. the linearization technique is used to transform the nonlinear output of WNN into partially linear from so that the Lyapunov theorem extension can be applied.1. þHÞ þ ef þ sd k. Review theory of transferable belief model In this section. respectively. c where @ is a positive constant. 5 4 . which cells are occupied by obstacles (partially certain) and which cells are (partially) empty and thus suitable for robot navigation. 4. For this purpose. i. the speciﬁed constants De and Dr satisfy kefk 6 De and ksd k 6 Dr .. More details can be found in [25–29].1. Each element of the array corresponds to a speciﬁc square on the surface of the actual world. Beware that the frame of discernment is not necessarily exhaustive. assume the following inequality: krk þ @ X2 T T m c ¼ W ÃT A mÃ þ BT cÃ þ H À W T A mÃ þ BT cÃ 4 @ X2 @ X2 ÃT T Ã m m 6 þ W A m þ BT cÃ þ H þ ef þ sd þ 4 4 c T þ ef þ sd þ W Ã Á A mÃ þ BT cÃ 6 kp p 4. 7 6 . 6 . O and uncertain degree H. The robust term c is designed as ^ c ¼ kp p Á sgnðec Þ ð21Þ . the map building system to process the readings in order to assess. It represents an environment as a two-dimensional array. and its value shows the certainty that there is some obstacle there.e. The controller is designed to achieve good tracking and stability results with the connecting weights.1392 W. denoted X hereafter. Whenever the robot moves.e. kp2 P kðAT mÃ þ BT cÃ Þk. dilation. Map building An occupancy grid is essentially a data structure that indicates the certainty that a speciﬁc part of space is occupied by an obstacle. Because of using this uncertain theory to build an occupancy map of the whole environment. For case of notation. Substituting (18) into (17). 7 6 . To achieve this. as accurately as possible. kp1 P kW ÃT ðAT mÃ þ BT cÃ . if using controller (12). When new information about the world is received. kp ¼ ½kp1 kp2 . / Measurement 44 (2011) 1389–1405 of the dynamics of the cart is assumed. it is revealed that ~ þ s ¼ W Tw þ WTw þ WTw þ e þ s ~ ~ ~ ^ d f d c ~ f c ^ c ^ ~ ¼ W T ðw À AT m À BT ^Þ þ W T ðAT m þ BT ~Þ þ r c c ÃT T Ã T Ã ð19Þ c where r ¼ W ðA m þ B c þ HÞ À W T ðAT mÃ þ BT cÃ Þ þ ef þ sd By substituting (19) into (15). 7 . pT ¼ ½1k W k. g4 are positive constants. _ c c ^ ^ W ¼ g1 ðw À AT m À BT ^Þec À g1 @kec k W c _ c ^ ^ m ¼ g2 W Aec À g2 @kec km ð22Þ ð23Þ ð24Þ ð25Þ ~¼W WþW Wþe e b f f ÃT fT ð17Þ ~ c ~ ^ where W ¼ W Ã À W and w ¼ wÃ À w. So every discrete region of the path position may be in two states: belief degree E. we brieﬂy regroup some basic of the belief function theory as explained in the transferable belief model ( TBM). i. The function of WANN is to reconstruct the dynamic (16) by learning it on-line. 7 6 . 7 4 . Assumption 3. it represent important point of view what we must to consider: any position in the updated map of the whole environment do not exist absolute exactness for the reason that any obstacle can not be measured absolutely rightness. the approximation error ~ can be f rewritten as Deﬁnition 1. which is widely used in map building in mobile robot [17–21]. ð18Þ H is a vector of higher-order terms and assume to be bounded by a positive constant. ~ The expansion of w in Taylor series is obtained as follows: 2 @w 3T 2 ~ 3 2 @w11 3T 11 w11 @m @m 6 7 7 6 @w 7 6 6 @w11 7 ~ 6 w12 7 6 11 7 6 @m 7 7 6 @m 7 6 6 7 7 ~ 7¼6 ^ ðmÃ À mÞ þ 6 c w¼6 6 7 7 ðcÃ À ^Þ þ H 6 . the closed-loop system dynamics can be rewritten as À Á ka kb ka _ Mec ¼ À K 4 þ V m ec þ BX v c À Bu Ra Ra c ~ ^ ^ ~ c c þ W T w À AT m À BT ^ þ W T AT m þ BT ~ þ r À c ð20Þ Moreover.

for all A # X R is the range response form the ultrasonic sensor. Yaonan et al. mX from 1 2 different sensor respectively. jBjð1 À mX ðhÞÞ B#X 8A 2 X ð31Þ 270 Fig.2 (Basic belief assignment). where R À e < r < R + e mX ¼ & 1 if A ¼ A 0 otherwise Ã ð26Þ When all bbas are categorical. Normalizing a bba means requiring that E(H) = 0 and that the sum of all bbms givens to the non-empty subsets is 1. sensor modeling and measurements interpretation The Polaroid Ultrasonic Rangeﬁnder is used for map building. for all A # X EðOjxÞ ¼ 0 1 2 Â 2Âx2 1þx2 ð36Þ ð37Þ plðAÞ ¼ X B # H. 3.47–5 m with 1% accuracy. In region I. 3 and 4 is given by Eqs. When a bba means requiring that m(H) > 0 and we call this open world.3 (Categorical belief function). and (r. is the value taken by the bba at A. The pignistic transformation of mX is given by 60 BetP X ðAÞ ¼ X jA \ BjmX ðBÞ . (32)–(37). Typical beam pattern of Polaroid ultrasonic sensor. 4. and it distributes the evidence in Region I. 1] that satisﬁes P X A # X m ðAÞ ¼ 1. A. b is the half open beam angle of the sonar cone. 90 . A # X. the sensor model converts the range information into probability values. This solution is a classical probability measure from which expected utilities can be computed in order to take optimal decisions. 1] with. In our approach.W. Two limiting cases of categorical bbas have received special names. / Measurement 44 (2011) 1389–1405 1393 A.B\A¼h mðBÞ ¼ belðHÞ À belðAÞ ð28Þ EðEjxÞ ¼ 0:8 À EðOjxÞ The commonality function q is deﬁned as: q: 2X ? [0. This means closing the world. is a belief function which related bba mX satisﬁes: where jAj is the number of elements of X in A. ð30Þ 300 330 30 8A # H A. In [21]. The basic belief mass (bbm) m(A).4 (Normalizing or nonnormalizing) The basic belief mass m(A) represents the part of belief exactly committed to the subset A of X given a piece of evidence. the basic qðAÞ ¼ X A # B. More details and proofs of the relationship among functions above can be found in [26.2. the TBM becomes equivalent to classical propositional logic. The model in Figs.bel. e is the range error. 1] with. Some of its detail and justiﬁcations can be found in [25. 1] with. @) is the coordinate of a point inside the sonar cone. or equivalently to the fact that all we know is that A holds. A categorical belief function on X focused on AÃ # X. This is a very common device that can detect distances in the range of 0. we do not require m(H) = 0 as in Shafer’s work.5 ( Related function) the degree of belief bel(A) is deﬁned as: bel(A): 2X ? [0. the bba that results from their conjunctive combination deﬁned by 30o m½E1 \ m½E2 ðAÞ ¼ X B.C # H. Given two bbas mX .27].6 (The conjunctive rule).3. The fusion of data from sonar The sonar data interpreted by Transferable Belief Model of evidence are collected and updated into a map using the same theory of evidence. A.pl are always in one to one correspondence.B # X mðBÞ ð29Þ The function q.7 (The pignistic transformation for decision). where Rmin < r < R À e x¼ belðAÞ ¼ X h–B # A bÀ@ b 2 þ 2 ÀRÀeÀrÁ2 RÀe mðBÞ ð27Þ ð35Þ À1 < x < 0 þ1 06x61 ( X The degree of plausibility pl(A) is deﬁned as: pl: 2 ? [0. A. The pignistic transformation maps bbas to so called pignistic probability functions. A basic belief assignment (bba) is a mapping mX from 2X to [0. In TBM.29] 4. for all A # X x¼ bÀ@ b 2 þ eÀjRÀrj e 2 ð32Þ À1 < x < 0 06x61 2 ( 0 1 2 EðOjxÞ ¼ Â 2Âx2 1þx2 þ1 ð33Þ ð34Þ EðEjxÞ ¼ 0:8 À EðOjxÞ In region II.B\C¼A 0 m½E1 ðBÞ m½E2 ðCÞ. A.

@ 1 ¼ 0. S0. X4y = 10 moving direction in a positive X-axis direction. Yaonan et al. 4. The new basic probability assignments for cells in the map become (the ﬁrst lies in region I and the second lies in region II): (Static obstacles) xS0 ¼ 0 ð15À3Þ þð10À2À7Þ ð15À3Þ þð10À2À7Þ 15 10À2 ¼ 0:1878 xS1 ¼ 15 2 10À2 ¼ 0:1878 0 2 2 2 ES0 ðxÞ ¼ 1 Â 2Âx2 þ 1 ¼ 0:5341 ES0 ðxÞ ¼ 1 Â 2Âx2 þ 1 ¼ 0:5341 0 0 2 2 1þx 1þx ES0 ðOÞ ¼ 0:8 À EðxÞS0 ¼ 0:2659 0 0 EðHÞ ¼ 0:2 2 2 2 2 ES0 ðOÞ ¼ 0:8 À EðxÞS0 ¼ 0:2659 0 0 EðHÞ ¼ 0:2 and ð15À6Þ þð2Àj10À9jÞ ð15À0Þ þð2Àj10À9jÞ 15 2 2 ¼ 0:3050 xS1 ¼ 15 ¼ 0:6250 1 2 2 S0 S1 1 2Âx2 1 2Âx2 E1 ðxÞ ¼ 2 Â 1þx2 þ 1 ¼ 0:5851 E1 ðxÞ ¼ 2 Â 1þx2 þ 1 ¼ 0:7809 xS0 ¼ 1 ES0 ðOÞ ¼ 0:8 À EðxÞS1 ¼ 0:2149 1 1 EðHÞ ¼ 0:2 ES1 ðOÞ ¼ 0:8 À EðxÞS1 ¼ 0:0191 1 1 EðHÞ ¼ 0:2 2 2 2 2 Table 1 Condition of this example. X 0y ¼ 8 X 1x ¼ 10. Static obstacles position X 0x ¼ 8. r S1 1 1 5. the robot is moving in a direction depicted by the broken line and sonars scan the environment again. Example 1. @ 2 ¼ 6. (32)–(37). @ S2 ¼ 3 4 11. @ S2 ¼ 3 5 ¼ 7. r S1 ¼ 6. rS2 ¼ 9. r S0 ¼ 4. This situation is given in Fig. The proﬁle of the ultrasonic sensor model. X 1y ¼ 5 In the next moment. r 6 ¼ 3. r S2 ¼ 7. @ S0 ¼ 1 1 1 2 2 0 1 S1 S2 S1 S2 S2 S2 6. r S1 0 0 9. / Measurement 44 (2011) 1389–1405 Pr ofile for the range l Rmin y r R −ε R R +ε Pr ofile for the angle α 15 α β R Π x l ε ε −15 Fig. 5 and 6. yr = 10 X4x = 14. @ S1 0 @ S1 1 ¼3 ¼0 EðOÞS0 ¼ 0:8 À EðxÞS0 ¼ 0:2123 0 0 EðHÞ ¼ 0:2 2 2 xS0 ¼ 1 ð15À6Þ þð10À2À6Þ ð15À0Þ þð10À2À6Þ 15 10À2 ¼ 0:2112 xS1 ¼ 15 2 10À2 ¼ 0:6250 1 2 2 2 EðxÞS0 ¼ 1 Â 2Âx2 þ 1 ¼ 0:5427 EðxÞS1 ¼ 1 Â 2Âx2 þ 1 ¼ 0:7809 1 1 2 2 1þx 1þx EðOÞS1 ¼ 0:8 À EðxÞS0 ¼ 0:0191 1 0 EðHÞ ¼ 0:2 EðOÞS0 ¼ 0:8 À EðxÞS0 ¼ 0:2573 1 0 EðHÞ ¼ 0:2 . rS1 ¼ 5. @ S0 ¼ 3. (Dynamic obstacles) xS2 ¼ 4 ð15À3Þ þð10À2À5Þ ð15À3Þ þð2Àj10À11jÞ 15 10À2 2 ¼ 0:3903 xS2 ¼ 15 ¼ 0:4450 5 2 2 2 2 ES2 ðxÞ ¼ 1 Â 2Âx2 þ 1 ¼ 0:6322 ES2 ðxÞ ¼ 1 Â 2Âx2 þ 1 ¼ 0:6625 4 4 2 2 1þx 1þx ES0 ðOÞ ¼ 0:8 À EðxÞS1 ¼ 0:1375 1 1 EðHÞ ¼ 0:2 2 2 2 2 ES0 ðOÞ ¼ 0:8 À EðxÞS1 ¼ 0:1678 1 1 EðHÞ ¼ 0:2 X5x = 15. Let the robot be located in cell (15. ¼ 9. r S1 ¼ 9. @ 6 ¼ 6: xS0 ¼ 0 ð15À3Þ þð10À2À4Þ ð15À6Þ þð10À2À5Þ 15 10À2 ¼ 0:3100 xS1 ¼ 15 2 10À2 ¼ 0:2503 0 2 2 2 EðxÞS0 ¼ 1 Â 2Âx2 þ 1 ¼ 0:5877 EðxÞS1 ¼ 1 Â 2Âx2 þ 1 ¼ 0:5590 0 0 2 2 1þx 1þx EðOÞS1 ¼ 0:8 À EðxÞS0 ¼ 0:2410 0 0 EðHÞ ¼ 0:2 2 2 2 2 2 2 Distances r and angles @ detected by different sonar X0 : X1 : X4 : X5 : r S0 0 r S0 1 r S2 4 r S2 5 ¼ ¼ ¼ ¼ 7. 6. probability assignment corresponding to a range reading r is obtained directly using Eqs. The condition is shown in Table 1 and Figs. @ 2 ¼ 3. 10) in the beginning of the navigation process. 0 0 0 r S0 ¼ 6. X5y = 7 moving direction in a positive X-axis direction. @ S0 ¼ 6.1394 W. @ S0 ¼ 3. The next example illustrates the usage of transferable belief model for mapbuilding process. @ 1 ¼ 3. Robot position Dynamic obstacles position and its moving direction xr = 15. S1 and S2 detect some possible obstacles on @ S1 ¼ 6.

Yaonan et al. xS2 ¼ 1 EðxÞS2 1 ð15À3Þ þð10À2À7Þ 15 10À2 2 j10À9 ð15À3Þ þð2À10À2 jÞ ¼ 0:3278 xS1 ¼ 15 ¼ 0:3278 2 2 2 2 S2 ¼ 1 Â 2Âx2 þ 1 ¼ 0:5970 EðxÞ1 ¼ 1 Â 2Âx2 þ 1 ¼ 0:5970 2 2 1þx 1þx 2 2 2 2 (Dynamic obstacle) ð15À6Þ þð2Àj10À9jÞ ð15À6Þ þð10À2À3Þ 15 10À2 ¼ 0:1878 xS2 ¼ 15 2 10À2 ¼ 0:3753 6 2 2 2 S2 S2 EðxÞ2 ¼ 1 Â 2Âx2 þ 1 ¼ 0:5341 EðxÞ6 ¼ 1 Â 2Âx2 þ 1 ¼ 0:6235 2 2 1þx 1þx xS2 ¼ 2 EðOÞS2 ¼ 0:8 À EðxÞS0 ¼ 0:2659 2 0 EðHÞ ¼ 0:2 EðOÞS2 ¼ 0:8 À EðxÞS2 ¼ 0:1765 6 6 EðHÞ ¼ 0:2 2 2 2 2 EðOÞS2 ¼ 0:8 À EðxÞS0 ¼ 0:2030 1 0 EðHÞ ¼ 0:2 EðOÞS2 ¼ 0:8 À EðxÞS0 ¼ 0:2030 1 0 EðHÞ ¼ 0:2 .W. Occupancy determination based on sonar measurements in t = 2 s. 6. Fig. / Measurement 44 (2011) 1389–1405 1395 Fig. 5. Initial position.

0875 0. H mS0 ðT 0 Þ 1 mS1 ðT 0 Þ 1 mS0 ÈS1 ðT 0 Þ 1 mS0 ðT 1 Þ 1 mS0 ðT 1 Þ 2 mS0 ÈS1 ðT 1 Þ 1 mS2 ðT 4 Þ 1 mS2 ðT 5 Þ 1 mS0 ðT 0 Þ 2 mS1 ðT 0 Þ 2 mS0 ÈS1 ðT 0 Þ 2 mS0 ÈS1 ðT 0 Þ 1È2 mS0 ðT 1 Þ 2 mS1 ðT 1 Þ 2 mS2 ðT 1 Þ 2 mS0 ÈS1 ÈS2 ðT 1 Þ 2 mS0 ÈS1 ÈS2 ðT 1 Þ 1È2 mS1 ðT 2 Þ 2 mS2 ðT 2 Þ 2 mS1 ÈS2 ðT 2 Þ 2 mS2 ðT 6 Þ È m EðHÞS2 6 1È2 (dyanmic obstacle) mðT S2 Þ1 È m EðHÞS2 4 4 2 W.1293 0.4243 0.1203 Empty 0. and 1 2 Bet1È2ÁÁÁÈt S ÈS ÁÁÁÈSi ½AðOÞ S ÈS ÁÁÁÈSi 1 2 Bet 1È2ÁÁÁÈt ½AðhÞ < e we can belief that event A is occupancy is unlikelihood.0042 0. Path planning In this section. . Consider the pignistic value and basic probability assignments given by transferable belief model based on update information. Event A is moving obstacle if satisfying that: 1 1 (1) m1È2ÁÁÁÈti ½AðEÞ þ m1È2ÁÁÁÈti ½AðHÞ ! 1 1 (2) m1È2ÁÁÁÈti ½AðHÞ P 0:5 S ÈS ÁÁÁÈS a 1 À ð1 À bÞ2 2b 1 À ð1 À aÞð1 À bÞ 2 a 1 À ð1 À 2b þ b Þ ¼ 2b 1 À ð1 À b À a þ abÞ 2 a 2b À b ¼ 2b b þ a À ab 2 a 2b À b ¼ 2b b þ a À ab að2 À bÞ ¼ 2ð1 À mðHÞÞ S ÈS ÁÁÁÈS ð45Þ 1 2 1 2 If Bet1È2ÁÁÁÈt i ½AðOÞ ( Bet1È2ÁÁÁÈt i ½AðHÞ. .0875 0.0661 0.3362 0.1177 0.0239 Thus: 1 2 Bet1È2ÁÁÁÈt i ½Aðh1 Þ ¼ (dynamic obstacle) mðT S2 Þ1 È m EðHÞS2 5 5 2 S ÈS ÁÁÁÈS a 1 À ð1 À bÞn nb 1 À ð1 À aÞð1 À bÞnÀ1 ð44Þ (dyanmic obstacle) Let n = 2 for the reason these are two hypothesis including occupancy (O) and empty (E): 1 2 Bet1È2ÁÁÁÈt i ½AðOÞ ¼ Evidence about occupancies of cells including dynamic object are combination using transferable belief model.3362 0.4031 0.2516 0.3921 0.3582 0.3420 0.0818 0.4758 0.3007 0. This process continues until the target is reached.2561 0.0022 0. 1 2 Bet 1È2ÁÁÁÈt i ½AðOÞ S ÈS ÁÁÁÈS S1 ÈS2 ÁÁÁÈS Bet 1È2ÁÁÁÈt i ½Að HÞ ¼ að2 À bÞ 2ð1 À mX ½x1 .8442 0.6642 0.1289 0.6444 0.1063 0.3921 0. it means event A of occupancy is unbelievable. .3874 0.0671 0. / Measurement 44 (2011) 1389–1405 H (unknown) 0. 5.5281 0. thus if "e and e is a small con- S ÈS ÁÁÁÈS stant.5714 0. suppose the number of the sensor is i.2911 0.7660 0.0309 Proof.0292 0.1293 0.3320 0.1222 0.0682 0.0368 0.3248 0.8444 1 Occupancy 0.3921 0. 8A 2 X Að1 À mX ðHÞÞ ð40Þ Suppose the sensor measurement is A 2 X and its likelihoods S1 ÈS2 ÁÁÁÈS S1 ÈS2 ÁÁÁÈS are m1È2ÁÁÁÈt i ½AðOÞ ¼ a.4243 0.1063 0. m½xðAÞ ¼ Y hi 2A lðhi jxÞ Y Y hi 2A ð1 À lðhi jxÞÞ ð38Þ ð39Þ pl½xðAÞ ¼ 1 À X A#X ð1 À lðhi jxÞÞ hi 2A BetPðAÞ ¼ mX ðAÞ .4758 0. in which the robot replans the path as new observations are acquired.2149 0.3212 0.0464 0..4212 0. interval time is 1. with (38) 1 2 m1È2ÁÁÁÈt i ½AðHÞ ¼ ð1 À aÞð1 À bÞ S ÈS ÁÁÁÈS ð41Þ From (40).8436 0. we get: 1 2 ! 0 and m1È2ÁÁÁÈt i ½AðEÞ 1 2 ! 1 and m1È2ÁÁÁÈt i ½AðHÞ P 0:5 S ÈÁÁÁÈS S ÈÁÁÁÈS S ÈÁÁÁÈS S1 ÈS2 ÁÁÁÈS m1È2ÁÁÁÈt i ½AðOÞ S ÈS ÁÁÁÈS S ÈS ÁÁÁÈS .0505 0. dynamic object recognition law Condition 1. . From the new results it can be concluded that the evidence about the occupancy of cells is increased.5273 0.4629 0.1396 Table 2 Probabilitic result from transferable belief model. which is shown in Table 2.3248 0.5371 0. At ttime. we adopt a new path planning strategy. m1È2ÁÁÁÈt i ½AðEÞ ¼ b.5149 0.4550 0. hn} denote a set of n hypothesis.3086 0. h2.0476 0.3016 0.0004 0. Given the likelihoods l(hijx) for every hi 2 H Let X denote the set of possible values this observation can take. .5261 0.0344 0. Yaonan et al.3347 0.3117 0. the navigation recognition law that can determine which obstacle is dynamic or static without any previous knowledge is discussed in this section. What’s more. . .0818 0. xi ðHÞÞ 0 mS1 ÈS2 ÁÁÁÈSi ½AðHÞ að2 À bÞ 1È2ÁÁÁÈt ¼ S1 ÈS2 ÁÁÁÈS S1 ÈS2 ÁÁÁÈS 1 À m1È2ÁÁÁÈt i ½AðHÞ 2m1È2ÁÁÁÈt i ½AðHÞ ð46Þ If S1 ÈS2 ÁÁÁÈS Bet1È2ÁÁÁÈt i ½AðOÞ S1 ÈS2 ÁÁÁÈS Bet1È2ÁÁÁÈt i ½AðHÞ < e.4629 0.5478 0.5409 0. 1 2 Bet1È2ÁÁÁÈt i ½Aðh1 Þ ¼ S ÈS ÁÁÁÈS að1 À bÞ þ ¼ P i a nÀ1 C inÀi b ð1 À bÞnÀiÀ1 =ði þ 1Þ i¼0 1 À ð1 À aÞð1 À bÞ C inÀi ð1 À bÞnÀiÀ1 =ði þ 1Þ 1 À ð1 À aÞð1 À bÞ i¼1 PnÀ1 ð42Þ The sum can be simpliﬁed to from Delmotte [28]: nÀ1 X i¼0 mS2 ðT 6 Þ 2 C inÀi b ð1 À bÞnÀiÀ1 =ði þ 1Þ ¼ i 1 À ð1 À bÞn nb ð43Þ 0.3396 0. 5. Let H = {h1.1239 0.1.

q Þ À q 1È2ÁÁÁÈt 2 obs 0 jht À h0 j > hs ð51Þ where hs is a trap warning angle. 5. In this paper. h0. Yaonan et al. described by: rﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃ0 2 2 ðxBetPS1 ÈS2 ÁÁÁÈSi ðOÞ À xBetPS1 ÈS2 ÁÁÁÈSi ðOÞ Þ þ ðyBetPS1 ÈS2 ÁÁÁÈSi ðOÞ À yBetPS1 ÈS2 ÁÁÁÈSi ðOÞ Þ T. q = [xyz]T in a 3-dimensional space or q = [xy]T in a 2-dimensional space.3. we obtain the potential total virtual force which drives the reference point of the mobile robot to the destination. qobs Þamax ðqs ðq. qobs Þ À qm ðv RO Þ P q0 or v RO 6 0 > > > > > > or satisfy condition:1 > > > > > >F > rep1 þ F rep2 . q Þ < q ðv Þ > RO RO > s obs m > > > > > > or satisfy condition:1 > > > > > 2 > >1 : BetP S1 ÈS2 ÁÁÁÈSi ðOÞg 1 1 if not satisfy condition:1 S qðq. we proposed a new method to modify this potential ﬁeld method based on TBM in complex dynamic environment under static and dynamic obstacle. (48) and (49). respectively. respectively. / Measurement 44 (2011) 1389–1405 1 2 1 2 Because m1È2ÁÁÁÈt i ½AðHÞ þ m1È2ÁÁÁÈt i ½AðEÞ þ mS1 ÈS2 ÁÁÁÈ 1È2ÁÁÁÈt Si ½AðOÞ ¼ 1. kqtar(t) À q(t)k is the Euclidean distance between the robot and the target at time t. We regard it is very likely about to get trapped. qobs Þ À qm ðv RO ÞÞ2 ÀgBetP X ðOv Þ BetP X ðOv Þgv RO v RO? qs ðq. with the actual instantaneous direction of travel. qobs(t)) is denoted the shortest distance between the robot and the body of the obstacle. and typically 90°.yr] is the position both from dynamic obstacle and robot. which compares the robot-to-target direction.W. if 0 < qs ðq. 8 > BetPS1 ÈS2 ÁÁÁÈSi ðOÞm@ q kqtar ðtÞ À qðtÞk2 nRT > 1È2ÁÁÁÈt < F att ðq. that is. Let ½x. kvtar(t) À v(t)k is the magnitude of the relative velocity between the target and the robot at time t. the robot is obstructed by the obstacle and cannot reach the target.2. Local minimum problems Like other local path planning approaches. jOv jð1 À mX ðHÞÞ ð47Þ 2amax X BetP ðOv Þ ¼ O#X 8A # X where T is interval. qs(q(t). vRO(t) is the relative velocity between the robot and the obstacle in the direction from the robot to the obstacle. a and g is a positive constant. 5. if v > 0 and q ðq. If the robot’s direction of travel is more than a certain angle (90° in most cases) off the target point. nand mare positive constants. waiting method is often adopted where both the target and the obstacle are moving. 7). v obs ðtÞ ¼ t tÀT t tÀT With q0 is a positive constant describing the inﬂuence range of the obstacle. qobs Þ À qm ðv RO Þ < q0 and v RO > 0 > > > > < or satisfy condition:1 F rep ðq. Also a repulsive potential can be written as: 8 > 0. if qs ðq. Refs. we adopt the method in [5] (see Fig. And the velocity of this dynamic obstacle is deﬁned: S ÈÁÁÁÈS S ÈÁÁÁÈS S ÈÁÁÁÈS F repv 1 ¼ F repv 2 ¼ v RO nRO 1þ amax ðqs ðq. the proposed navigation algorithm has a local minimum problem. ð49Þ Fig. we get the condition: 1397 S ÈS ÁÁÁÈS S ÈS ÁÁÁÈS where 1 1 (1) m1È2ÁÁÁÈti ½AðEÞ þ m1È2ÁÁÁÈti ½AðHÞ ! 1 1 (2) m1È2ÁÁÁÈti ½AðHÞ P 0:5If satisfying the condition. @ q and @ v are scalar positive parameters. From Eqs.13]. An attractive potential which drives the mobile robot to its destination can be described by Ge and Cui [10. 7. at t time. F total ¼ F att þ F rep ð50Þ vRO(t) = [v(t) À vobs(t)]TnRO. nRO is a unit vector pointing from the robot to the obstacle. BetPX(O) and BetPX(Ov) are pignsitic decision value that sensor detect the occupancy belief value both for static and dynamic respectively. ht. v(t) and vtar(t) denote the velocities of the robot and the target at time t. Incorporating a virtual target on local-minimum alert. nRT being the unit vector pointing from the robot to the target and nVRTbeing the unit vector denoting the relative velocity direction of the target with respect to the robot. [13–17] indicate in highly dynamic environment. potential ﬁeld method based on TBM The potential ﬁeld method has been studied extensively for autonomous mobile robot path planning in the past decade. we believe this obstacle is dynamic. when where q(t) and qtar(t) denote the positions of the robot and the target at time t. To detect the outbreak of local minimum. BetPX(O) is pignsitic decision value that sensor detect the occupancy belief value both for static and dynamic respectively. v Þ ¼ > > > not defined. qobs Þ À qm ðv RO ÞÞ2 nRO? v obs ðtÞ ¼ ﬃ rﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃﬃ0 ðxBetPS1 ÈS2 ÁÁÁÈSi ðOÞ À xBetPS1 ÈS2 ÁÁÁÈSi ðOÞ Þ2 þ ðyBetPS1 ÈS2 ÁÁÁÈSi ðOÞ À yBetPS1 ÈS2 ÁÁÁÈSi ðOÞ Þ2 T t tÀT t tÀT qm ðv RO Þ ¼ X v 2 ðtÞ RO mX ðOv Þ . yBetPX ½T t ðOÞ and [xr. v Þ ¼ BetPS1 ÈS2 ÁÁÁÈSi ðOÞm@ q kqtar ðtÞ À qðtÞk2 nRT t > > : þBetPS1 ÈS2 ÁÁÁÈSi ðOÞn@ v kv tar ðtÞ À v ðtÞk2 nVRT t if not satisfy condition:1 if satisfy condition:1 ð48Þ Assuming that the target moves outward or synchronously with the obstacle. mX(H) is unknown value from pignistic transformation. .

in which the robot needs to detour around a long wall to reach the goal. Simulation and experiment results To show the usefulness of the proposed approach. Yaonan et al. the original target point is restores. Collision avoidance (static case). and typically 45°. and the robot continues to proceed to the target point. Simulation results In these simulations. bvt is a certain angler. The robot has been modeled as a small circle. the original target point is replaced with the virtual target point. 8. BetPX(O) is a pignsitic Fig. 9a. 6. The virtual target point is placed by following equations: decision value that sensor detect the occupancy belief value in hv that direction. R is an offset which depends on the size of the robot. the robot is heading to the goal position and hence hs = 0°. When the robot escapes from the condition above or when no obstacle is detected. 9. the positions of all the obstacles including moving obstacles in the workspace are unknown to the robot. As seen from the virtual plane shown in Fig.1. The minimum distance obtained within the cone of each sensor is considered as the distance of the obstacle from the sensor which detects any obstacle. 8. / Measurement 44 (2011) 1389–1405 Consider a common situation as shown in Fig. 9 represents the obstacle avoidance path of the mobile robot in static case. 9–13. And any distance information detected from sonar is adding Gaussian White Noise. But at position B. with the decrease of distance between Obs6 and robot. Also v ¼ L Â BetPX ðOÞ þ R ð53Þ where L is the distance between the robot and the obstacle near the virtual target detected by the longest-range sensor in the corresponding direction. 9. The robot is aware of its start and ﬁnish positions only. Experiment 1: Fig. Simulation results obtained in three working scenarios are presented in Figs.1398 W. and imaginary sensors (sixteen in number) are placed in the form of the arc along the circumference of the robot. BetPX(O) became much large which lead the repulsive force much large. Experiment 2: This example illustrates the collision avoidance process in dynamic environment. At position A. and the ﬁfth experiment is carried out from Mobilesim environment. The scenario xv ¼ xi þ m cosðhi À bv t Þ yv ¼ yi þ m sinðhi À bv t Þ hv ¼ h0 À bv t ð52Þ where m is the distance that the robot needs to travel to reach the virtual goal. the robot escapes from this trapping point and moves to point C. Anti-deadlock mechanism. Based on (51)–(53). In such a situation. a series of simulations have been conducted using an arbitrarily constructed environment including obstacles. 6. Table 3 shows the condition settings of this experiment. robot changes its path by deviating to the right. The complete process is shown in Fig. Ultimately. . Fig. the robot changes its direction by repulsive force from detected wall and hs P 90°. The ﬁrst to fourth experiments are conducted to verify the proposed method in the MATLAB environment.

Fig. 10 has some similarity with the scenarios of example 1 but adding two moving obstacles. 10 thoroughly plots the obstacle avoidance path of the mobile ro- bot in discrete time while it is maneuvered in a complex environment (including dynamic and static obstacle) where two rectangular shapes of obstacles are moving in . Yaonan et al. shown in Fig.W. 10. / Measurement 44 (2011) 1389–1405 1399 Fig. Collision avoidance (dynamic case).

b. it is more urgent to avoid collision with ObsD1. Collision avoidance (local minimum case). different directions and different velocity. 10e and f show the robot neglect this moving obstacle and can not change its path for the reason there is no collision risk with ObsD2. The complete process is shown in Fig. Table 4 shows the condition settings of this experiment. (g and h) 0 6 t 6 37s. Point a. c. Experiment 3: For local minimum problem avoidance. Obs1 and Obs2. represents the real goal . . the robot is in a collision course with both ObsD1. h represents the static obstacle and represents the dynamic obstacle. Yaonan et al. Fig. The robot avoids collision with Obs2 as shown in Fig. As seen from Fig. 10 represents the scenario and the virtual plane in different time intervals: (a and b) 0 6 t 6 5s. .1400 W. 10e and f. However. 11). a rigorous testing is carried out in cluttered maze consisting of concave obstacles and dynamic obstacles (Fig. In the following Figs. (c and d) 0 6 t 6 10s. 10(c and d). Fig. meanwhile. 11. and the robot slows down to accomplish this. (e and f) 0 6 t 6 15s. 10g and h. . represent several trapping points. ﬁve static obstacles also set in this environment. / Measurement 44 (2011) 1389–1405 Fig. 11 and 12.

Yaonan et al. Collision avoidance (mazes case). 12.W. 11 (continued) Fig. / Measurement 44 (2011) 1389–1405 1401 Fig. .

Obs4y = 15[m] Obs5x = 24[m]. Collision avoidance based on Mobilesim environment (dynamic case). But in Fig. Since the obsD1 is now to change its moving direction and move away from the robot. the robot turns its direction and tracks current virtual goal. Then the robot continues to see the wall on its left until it arrives at point H and I after which it goes towards a real target point. 11 represents the scenario and the virtual plane in different time intervals: (b and c) 0 6 t 6 10s. 13. (d and e) 0 6 t 6 30s. E. 11b. 11f and g and the robot moves counterclockwise around obstacles through point J and K to the ﬁnal goal. 11b and c. ? represents the potential total virtual force which drives the reference point of the mobile robot to the current position. Goaly = 20. Since the goal is now to the left of the robot. 11d and e. . Obs5y = 20[m] Obs6x = 9[m] ? 12[m]. it continues to track the current virtual goal (point C) at which time the robot detects obsD2. the robot continues to track the virtual goal in a counterclockwise direction until it reaches the virtual goal at point B. Table 3 Experimental conditions of obstacle avoidance (static case). Obs6y = 10[m] 30 s Processing time and represents the current virtual goal. Yaonan et al. Initial robot position Goal position Obstacle position xr = 0[m]. it shows the robot neglects this moving obstacle and can not change its path for the reason that there is no collision risk with ObsD2.1402 W. Obs2y = 10[m] Obs3x = 18[m]. As seen from Fig. At these points where the robot’s direction of travel is more than a certain angle (jht À h0j > 90°) off the target point. there are several local minimum points including point D. In Fig. the robot does not detect the obsD1 until it reaches point A at which time it turns to the right-the side with the lowest collision possibility. (f and g) 0 6 t 6 40s. Table 5 shows the dynamic obstacles sets of this experiment. Obs1x = 8[m]. F. The complete process is shown in Fig. / Measurement 44 (2011) 1389–1405 Fig. yr = 0[m] Goalx = 30[m]. Fig. Obs3y = 20[m] Obs4x = 20[m].5[m]. Obs1y = 10[m] Obs2x = 13[m].

12c–f. 13. ObsD1x = 10[m]. The processes for wall following. ObsD1y = 0[m] moving direction and velocity moving in a positive Y-axis direction at 0.71(m/s) 40 s Processing time Experiment 4: In this experiment.19(m/s). ObsD1y = 5[m]. The position of the robot is recorded at regular time intervals. The robot changes its path by deviating to the right. A test case is presented in Fig. Obs1y ¼ 10½m Obs2x ¼ 12½m. 14.2.2. 14. 13c and d. Obs5y ¼ 20½m Processing time 37 s Table 5 Experimental conditions of dynamic obstacles. Obs4y ¼ 15½m Obs5x ¼ 24½m. moving back and forth along X-axis and its velocity at 0. The navigation law makes a correct decision by indicating the right direction to the mobile robot until it reaches the virtual goal. there is a collision risk with D1 in the time 0 6 t 6 1. As it moves. moving back and forth along Y-axis and its velocity at 1. In Fig. Goaly = 25[m]. SLAM and so on. ObsD1y = 5[m] M 10[m]. as shown in Fig.5 m. yr = 0[m] Goalx = 29[m]. 13a represents the scenario. Fig. 20) M (5. Experiment 5: This example is shown in Fig. ObsD 1x = 0[m] M 6[m]. Although feature detectors using sonar is not very accurate especially in complex environment and real Fig. The laptop on top on the robot is in charge of all computation: motion control.70(m/s) ObsD4 = (0. The objects were placed in the room and their positions are point out as shown in Fig. (d) 0 6 t 6 6.W. Experiment with a pioneer robot Static obstacle position Obs1x ¼ 8½m. 12b. In this experiment. Fig. all programs were run under the Windows NT operating system. Initial robot position Goal position Initial dynamic obstacle position and its moving direction and velocity xr = 0[m]. Then the mobile robot keeps moving and detects dynamic obstacles in right direction. The proposed algorithm performs well in a cluttered dynamic environment. 1403 seed and orientation when facing with multi-moving obstacle. The robot makes the correct decision and turns to the left avoiding the detected obstacles successfully and then reaches the goal. This illustrates the collision avoidance process by change both The method presented in this paper has been tested on a Pioneer robot. The orientation of the two MT-Robot is shown in Fig. 6. 13b. (c) 0 6 t 6 5. The navigation is carried out in real-time and we only use sonar to detect obstacle. .3. Initial robot position Goal position Dynamic obstacle position and its moving direction and velocity xr = 0[m]. h represents the static obstacle and represents the dynamic obstacle. The track line of Pioneer robot is indicated approximately with blue color in this ﬁgure. ObsD1y = 15[m] moving direction and velocity moving in a positive X-axis direction at 0. In order to simulate dynamic obstacle. c and e thoroughly plot the obstacle avoidance path of the mobile robot in discrete time while it is maneuvered in a dynamic environment where many dynamic obstacles are moving in different direction. 13b. moving back and forth along X-axis and its velocity at 1. robot changes its orientation by avoiding collision with D1 and D2 using proposed method. Whereas.1. planning. 12a.6 m by 6. while Fig. ObsD3y = 10[m]. 25). The mobile robot starts to move towards according to the real goal. (e) 0 6 t 6 8. Pioneer robot. Fig. 15. 13b– e indicate the virtual plane in different time intervals: (b) 0 6 t 6 1. yr = 0[m] Goalx = 29[m]. it detects obstacles in front and in the left from its ultrasonic sensors. The room is a clean environment with long wall obstacles and measures approximately 8. Obs2y ¼ 10½m Obs3x ¼ 18½m. as shown in Fig. Similar as experiment 3. During the test executions. a Pentium 2.00(m/s) ObsD3x = 15[m] M 22[m]. moving back and forth and its velocity at 0. As seen from the virtual plane shown in Fig. the goal position is aligned to the front of the mobile robot which is obstructed by obstacles wall and two MT-Robots. the more complex test is performed to present the effectiveness of the proposed approach in cluttered environment with obstacle loops and mazes. 12a and b. Goaly = 25[m]. 15. 15. directly on the main processor of the robot. d and f give some detail information about how the robot navigation system behaves at each discrete time.20(m/s) ObsD2x = 10[m].1.88(m/s) ObsD2x = 12[m].2. two football mobile robots (MT-Robot) are used based on remote control. Obs3y ¼ 20½m Obs4x ¼ 20½m. / Measurement 44 (2011) 1389–1405 Table 4 Experimental conditions of obstacle avoidance (dynamic case). Yaonan et al.3G. passing through a narrow door and escaping from local minimum point using our algorithm are shown in Fig. The robot and the experiment setup are shown in Fig.

IEEE Transactions on Systems Man and Cybernetics Part A –Systems and Humans 30 (2) (2000) 187–196. Burgard. From these results. et al. Cui. Svestka. waiting motion happened frequently). IEEE Transactions on Pattern Analysis and Machine Intelligence 9 (2) (1987) 263–273. The aim was to let the robot ﬁnd a collision-free trajectory between the starting conﬁguration and the goal conﬁguration in a dynamic environment containing some obstacle (including static and moving object). it is not very robust as compared with simulation experiment in Matlab or Mobilesim (Process time became long. Clark. This allows the robot to follow the smoother path. Though mobile robot is discussed. M. Mataric. [10] S. 7. a navigation strategy which consists of sonar data interpretation and fusion. Kolluru. Efﬁcient online path planning algorithm and navigation for a mobile robot. Furthermore. performance. International Journal of Electronics 69 (2) (1990) 187–210. D. Integration of representation into goal-riven behaviorbased robots. Probabilistic road map sampling strategies for multirobot motion planning.H. The reactions of the robot in the simulation are also different from the real robot. Thrun. This work was supported by National Natural Science Foundation of China (60775047. Probabilistic algorithms in robotics. Based on proposed judging law.S. Robotics and Autonomous Systems 53 (3–4) (2005) 244–264. / Measurement 44 (2011) 1389–1405 Fig. For instance. Fox. a new mobile robot navigation strategy for nonholomonic mobile robot in dynamic environment was designed and fully tested in this work based on transferable belief model. Overmars. in this experiment. it shows that based on our collision avoidance algorithm. the method is generally applicable to other type of problem. [2] M. Valavanis.J. Y. 15. Koren.J.1404 W. References [1] K. [4] E. the performance of the simulation and the real robot is different. Mobile robot navigation in 2-D dynamic environments using an electrostatic potential ﬁeld. Koditschek. it also can achieve the goal based on this navigation method.. International Journal of Robotics Research 16 (2) (1997) 119–143. Rueb. [7] P. Exact robot navigation using artiﬁcial potential function. the set point is obtained over a period of time. Ai Magazine 21 (4) (2000) 93–109. Yaonan et al.E. For this purpose.C. map building. IEEE Transactions on Systems Man and Cybernetics 19 (5) (1989) 1179–1187. [9] C. once a speed command is given to the real robot. [11] K.K. Acknowledgements The authors thank the anonymous editor and reviewer for their invaluable suggestions. Conclusion In this paper. Hebert. Machine Learning 31 (1–3) (1998) 29–53. Habib. and path planning is designed. Real-time obstacle avoidance for fast mobile robots. Mobile robot and experiment. IEEE Transactions on Robotics and Automation 8 (5) (1992) 501–518. Thrun. as well as to pattern recognition. Motion planning for carlike robots using a probabilistic learning approach. S. In the simulation. objective classiﬁcation. [5] J. Borensein. The sonar data fusion and dynamic object distinguish law were discussed using transferable belief model. a path planning algorithm is modiﬁed based on potential ﬁeld method. which has been incorporated to improve the quality of this paper dramatically. W. Ge. the speed set point is almost immediately achieved. IEEE Transactions on Robotics and Automation 16 (5) (2000) 615–620. A. [3] M. Yuta. Simulation and experiment results validate the effectiveness of the proposed method. Y.K. National High Technology Research and Development Program of China (863 Program: 2008AA04Z214).P.M. Rimon. The simulation has a faster cycle time for the iterations of the proposed method. the mobile robot can ﬁnd a safe and smooth path for attaining the target position autonomously whether in a static or dynamic environment when adding highly noise to the sensor. R. Structuring free space as a hypergrarph for roving robot path planning and navigation. T. [8] S. [6] S. Wong. 60673084). IEEE Transactions on Robotics and Automation 8 (3) (1992) 304–312.D. New potential functions for mobile robot path planning. . A probabilistic approach to concurrent mapping and localization for mobile robots. National Technology Support Project (2008BAF36B01) and Research Foundation of Hunan Provincial Education Department (10C0356). D.

respectively. He is currently working toward the Ph. Valavanis.C. J. His research interests are intelligent control and information processing. Yang. [15] L. Minguez. Sensor-based robot motion generation in unknown. Shippen. Adaptive control of an electrically driven nonholonomic mobile robot via backstepping and Fuzzy approach. Tsourveloudis. and Ph. K. W.S. Dynamics of Continuous Discrete and Impulsive Systems-Series B – Applications & Algorithms 8 (3) (2001) 339–358. Smets. Smets. 2006 and 2008. Ge. Information and Control 6 (8) (2010) 3437–3449. [22] X.V. from Xiangtan University and Hunan University. Hebert. A sensor based navigation algorithm for a mobile robot using the DVFF approach.D. Jones.A. G. Robotics and Autonomous Systems 54 (12) (2006) 989–1004. Wang.M. Dapena. Neural network-based robust tracking control for robots. Changsha. Comparison of the performances of decision aimed algorithms with Bayesian and beliefs basis. International Journal of Machine Tools & Manufacture 39 (7) (1999) 1117–1134.. Velagic. International Journal of Intelligent Systems 16 (8) (2001) 963–981. K. R. Germany. in 2009. Y. B. respectively. Djekoune. Sun. He has been a Professor at Hunan University since 1995. Zou. degree in Control Science and Engineering from Hunan University. Patel. Hou. Toumi.S. Patel. image processing.. [32] S. Application of the transferable belief model to diagnostic problems. Mayosky. 1405 [36] Y. Denoeux. Moallem. Fierro. dynamic and troublesome scenarios. C. and M.D. Xiang. P.J.E. Control of a nonholonomic mobile robot: Backstepping kinematics into dynamics. Achour. . Yamamoto. he worked with ECSTU. China. robot control and navigation. R. / Measurement 44 (2011) 1389–1405 [12] N.N. Sun. and from 2001 to 2004. International Journal of Intelligent Systems 13 (3) (1998) 127–157.S. Information Fusion 8 (4) (2007) 387–412.X. Dynamic motion planning for mobile robots using potential ﬁeld method. P. Y.. From 1981 to 1994. degrees in electrical engineering from Hunan University. [31] Q. Behavioral control through evolutionary neurocontrollers for autonomous mobile robot navigation. IEEE Transactions on Systems Man and Cybernetics Part A – Systems and Humans 36 (6) (2006) 1395–1406. B. B. Cui.. [28] F. Journal of Robotic Systems 14 (3) (1997) 149–163.P. Now. Applied Soft Computing 7 (1) (2007) 189–202. intelligent information processing. Robotics and Autonomous Systems 44 (2) (2003) 131–150.. in 2001. Meng. L. T. Yin. A fuzzy-logic-based approach for mobile robot path tracking.D. et al. Kinematics-based characterization of the collision course. Smets. China.E. [35] W. kernel methods. in 1982 and the M. and artiﬁcial neural networks. Modiﬁed Newton’s method applied to potential ﬁeld based navigation for nonholonomic robots in dynamic environments. Changsha. A new potential ﬁeld method for mobile robot path planning in the dynamic environments. China.N.Y.S. Evolutionary design and behavior analysis of neuromodulatory neural networks for mobile robots control. On-line path planning strategy integrated with collision and dead-lock avoidance schemes for wheeled mobile robot in indoor environments.Z. Lewis. Delmotte.V. Perunicic. [20] L. [23] T. [18] A. Hunan. Y. Autonomous vehicle navigation utilizing electrostatic potential ﬁelds and fuzzy logic. he was a visiting professor with the University of Bremen. Zhang. [29] P. Yang Yimin received the B. B.Q. Wang.L. M. He is a visiting scholar in University of Waterloo from 2008 to 2009. Y. K.D. M. Robotics and Autonomous Systems 52 (4) (2005) 290–311. Path quality measures for sensor-based motion planning.C. Y. Analyzing the combination of conﬂicting belief functions. Bremen. [33] G. International Journal of Robotics & Automation 20 (3) (2005) 199–204. [24] J. Delmotte.A. Chiaverini. [13] S. Liu.O. Zuo.S. Yuan Xiaofang received the B. [25] P. respectively.S. [34] Z. Acosta. Classiﬁcation using belief functions: Relationship between case-based and model-based approaches. Neural network robust control for a nonholonomic mobile robot including actuator dynamics.A. A robust robotic tracking controller based on neural network. Shanghai. A layered goal-oriented fuzzy motion planning strategy for mobile robot navigation.. et al. His research interests include intelligent control theory and applications. Wang. A 3-level autonomous mobile robot navigation system designed by using reasoning/search approaches. From 1998 to 2000.G. degree in computer engineering from East China Science and Technology University (ECSTU). degree in Hunan University. Belkhouche. Yin. Ren. Fusco. Smets. G.W. [19] J. Montano. Autonomous Robots 13 (3) (2002) 207–222. Hunan University. [21] J. Target identiﬁcation based on the transferable belief model interpretation of Dempster–Shafer model.J. S. IEEE Transactions on Systems Man and Cybernetics Part B –Cybernetics 35 (6) (2005) 1214–1224. Kondo. Yang.X. Cheng. and Ph. IEEE Transactions on Fuzzy Systems 15 (2) (2007) 211–221. A. IEEE Transactions on Robotics and Automation 17 (4) (2001) 490–497. Y. Yaonan et al. His research interests include robot control and navigation. F. China. [27] T. Robotics and Autonomous Systems 57 (4) (2009) 411–419. [14] J. Wang Yaonan received the B. Real-time ﬁne motion control of robot manipulators with unknown dynamics.N. IEEE Transactions on Systems Man and Cybernetics Part A – Systems and Humans 34 (4) (2004) 457–471. degrees in 2005 and 2009. L. Industrial Robot – An International Journal 35 (5) (2008) 421–434. Chang. Lacevic. Belkhouche. M. he was a Postdoctoral Research Fellow with the Normal University of Defence Technology. [16] F. International Journal of innovative Computing. R. Lin. He is currently a lecturer with the College of Electrical and Information engineering. International Journal of Advanced Robotic Systems 6 (2) (2009) 97–108. degrees in electrical engineering from Hunan University. in 1990 and 1994. and industrial process control. E. Intelligent Automation and Soft Computing 15 (2) (2009) 211–222. Asian Journal of Control 11 (2) (2009) 214–225. IEEE Transactions on Control Systems Technology 17 (4) (2009) 803–815. Robust backstepping and neural network control of a low-quality nonholonomic mobile robot.G. Antonelli. X. [30] R. [37] Y. Moreno. Zuo Yi received the Ph.E. Fernandez-Leon. he was a senior Humboldt Fellow in Germany.J. His scientiﬁc interests include neural networks and robotic robust control. Robotica 26 (2008) 285–294. International Journal of Robotics & Automation 23 (2) (2008) 127–136. [17] Y. From 1994 to 1995. Changsha. M. China. [26] F. McIsaac. and artiﬁcial neural networks.E.. et al.

- Elizabeth Wailes Letter of Rec Fall09 v2
- 05228071
- Mobile Robotics
- Flexlink ADMV Robotic Solution
- Shepherding via Deformable Shapes
- The Agents
- IntroductionCoopAutoMobiRobo
- S 420i Serie
- submit_paper_03
- genre proposal uwrt 1104
- Komputer
- Obstacle Avoidance
- Robot
- Alexan Product List Part 2
- Teacher is the Person Who Shapes the Future of Any Human Being
- Intro to Rob
- instructional software lesson idea
- Radio Control - Upgrade to Squarebot 3.0
- John Neel Resume2012
- Guidelines
- 5710K&5720T Manual.pdf
- Robotics- Thinking, Sensing and Acting
- Robotic Arm Lab
- Machinery Selection-Books21
- 2014 Ieee Embedded Based Biomedical Project Titles
- Crowd Simulation
- Robotic Assembly Processes as a Driver in Architectural Design
- 8 - Semester ME 408 Theory Course Plan
- Duc Nguyen Drexel Resume
- Gar

- DNS-320 A1 Manual v2
- God Does Not Play Dice with Social Sentiments
- Online Job Portal
- Final Exam Prep for 622
- Virtual Classroom System(SRS)
- Cloud Computing Abstract
- Synopsis on Web Design
- Aaron Greenspan lawsuit
- ITT Technical Institute Curricula
- Introduction
- TP vb..net
- Learn SAS Programming
- Smirnoff Vodka Brand Analysis
- Vehicle License Plate Segmentation
- Getting MEAN With Mongo, Express, Angular, And Node
- Sample Statement of Work for Blog Design Services v1.0
- Network Society Project
- EDI Basics and Architecture.pdf
- Zabbix Manual v1.4
- Validation Users Guide - VAM 3.0.11
- ISSUE OF JURISDICTION UNDER THE INFORMATION TECHNOLOGY ACT’ 2000
- (IN)SECURE Magazine issue 18
- Smart Machines
- Cyber Threat Posed by North Korea and China to South Korea and US Forces Korea
- NIB Internship Report
- upgrade 11.2.0.3 real Application Cluster to 11.2.0.4
- FACT SHEET
- syslog plugin for cacti
- How to Write an Acceptable Use Policy
- How to Build the Future

- Too Different for Comfort
- tmpA258.tmp
- Prey 2 Document 1
- tmp3C9F
- The MLaaTR Sketchbook by the artists from My Life as a Teenage Robot
- Obstacle Detection and Path Crossover Using Fire Bird V Robot
- As 3984-1991 Manipulating Industrial Robots - Performance Criteria and Related Test Methods
- Out of the Loop
- tmp2F22
- tmpAB8B.tmp
- tmp5AD7
- TNH 91107
- tmp51FA.tmp
- Biped Walk and Talk Robot
- Automated Child Rescue Robot Using Embedded System
- The Daily Tar Heel for October 4, 2013
- tmpD54B
- Bomb Detection Robot with Pickup and Drop Facility
- Design and manufacture of mechanical arm and analysis
- tmpC96
- tmp5ADF.tmp
- Intelligent Shortest Path Tracking Arm 7 Based Robot
- Design and Development of Sonar Based Autonomous Robot for Localization and Mapping for Potentially Unsafe Areas
- White Line Follower Using Fire Bird V Robot
- Tmp 9681
- tmp5056.tmp
- Invisible Threats, by Gabriella Blum
- Cyborg Glove ExcerptNick and Tesla’s Super Cyborg Gadget Glove [Excerpt]
- tmp655B.tmp
- Ministry of Disturbance by Piper, H. Beam, 1904-1964

Sign up to vote on this title

UsefulNot usefulClose Dialog## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

Close Dialog## This title now requires a credit

Use one of your book credits to continue reading from where you left off, or restart the preview.

Loading