
Robotics and Autonomous Systems 89 (2017) 95–109


Optimal path planning and execution for mobile robots using genetic algorithm and adaptive fuzzy-logic control

Azzeddine Bakdi a,b, Abdelfetah Hentout a,*, Hakim Boutami a,b, Abderraouf Maoudj a, Ouarda Hachour b, Brahim Bouzouia a

a Centre de Développement des Technologies Avancées (CDTA), Division Productique et Robotique (DPR), BP 17, Baba Hassen, Algiers 16303, Algeria
b University M'hamed Bougara de Boumerdès (UMBB), Institute of Electrical and Electronic Engineering (IGEE), Boulevard de l'Indépendance, Boumerdès 35000, Algeria

highlights

• Exact representation of the mobile robot and of the obstacles' dimensions, locations and shapes.
• The objective function leads to optimal solutions in terms of (i) minimum path length, (ii) maximum safety, and (iii) shortest trajectory execution time.
• PCHIP path smoothing reduces accelerations and increases the average velocity of the robot for faster trajectory execution in a high-resolution map representation.
• The AFL controller allows the robot to track the path and minimize the errors during trajectory execution.
• The GA-PCHIP approach with Kinect cameras allows indoor mobile robots to safely navigate inside highly constrained environments.
• Ensures optimal maneuvering in a limited workspace with obstacles.
• Faster than the conventional methods found in the literature.

article info

Article history:
Received 17 May 2016
Received in revised form 14 November 2016
Accepted 20 December 2016
Available online 26 December 2016

Keywords:
Mobile robot
Off-line optimal path planning
Path execution
Genetic algorithm
Adaptive fuzzy-logic control
Two-Kinect cameras system

abstract

This paper presents preliminary results of the application of a two-Kinect cameras system on a two-wheeled indoor mobile robot for off-line optimal path planning and execution. In our approach, the robot makes use of the depth information delivered by the vision system to accurately model its surrounding environment through image processing techniques. In addition, a Genetic Algorithm is implemented to generate a collision-free optimal path linking an initial configuration of the mobile robot (Source) to a final configuration (Target). After that, a Piecewise Cubic Hermite Interpolating Polynomial is used to smooth the generated optimal path. Finally, an Adaptive Fuzzy-Logic controller is designed to keep the mobile robot on the desired smoothed path (by transmitting the appropriate right and left velocities using wireless communication). In parallel, sensor fusion (odometry sensors and Kinect sensors) is used to estimate the current position and orientation of the robot using a Kalman filter. The validation of the proposed solution is carried out using the differentially-driven mobile robot, RobuTER, to successfully achieve safe motion (without colliding with obstacles) in an indoor environment.

© 2016 Elsevier B.V. All rights reserved.

1. Introduction

Path generation and execution is one of the most important tasks for autonomous mobile robotics. Indeed, the robot has to perform several interrelated activities [1] including (i) task planning (generation of operation plans) [2,3], (ii) environment modeling and multi-sensory fusion [4], (iii) path planning [5], (iv) localization of the robot inside its environment [6], and (v) path execution and tracking [7].

Task planning is the process of generating a sequence of operations derived from the task assigned to the robot so that it can perform it [8]. Depending on the task complexity, the planning can be carried out by a single-planner or a multi-planner system [9].

In the area of environment modeling and multi-sensory fusion, the robot can obtain and interpret information through its sensors for building maps. This includes, for example, image-based sensors (stereovision systems, etc.) [10-13] and time-of-flight-based sensors (laser-range finders, ultrasonic sensors, etc.) [14-17].

* Corresponding author.
E-mail addresses: bkdaznsun@gmail.com (A. Bakdi), ahentout@cdta.dz (A. Hentout), Hakim08boutami@gmail.com (H. Boutami), amaoudj@cdta.dz (A. Maoudj), hachour_ouarda@yahoo.fr (O. Hachour), bbouzouia@cdta.dz (B. Bouzouia).

http://dx.doi.org/10.1016/j.robot.2016.12.008

These last few years, depth cameras have been revolutionizing robot perception as a substitute for the classical robotic sensors. Indeed, Kinect cameras are able to provide good-quality depth information of the surrounding environment of the robot (usually unstructured, unknown or partially known), which makes these sensors significantly attractive; they have been adopted by researchers all over the world [18-20].

The other issue is related to path planning, which mainly tries to identify a sequence of configurations that will cause the robot to move from an initial configuration (Source) toward a final configuration (Target) [21,22]. Besides collision avoidance with possible obstacles, the robot must often satisfy other requirements or optimize certain performance criteria [23] such as execution time, path length, energy, etc.

Existing work on path planning can be classified into off-line and on-line planning approaches [24]. Off-line planning approaches generate the entire path to the Target before motion begins. These approaches use complete information about the workspace, where an optimization criterion can be used for searching the optimal collision-free trajectory. They are most useful for repeatable tasks in static environments where optimality is essential (industrial applications, etc.). In on-line planning approaches, by contrast, the trajectory to the Target is calculated incrementally during motion. These methods use incomplete information about the environment and build the trajectory step by step; this enables fast collision detection and trajectory planning [25]. These approaches are required in applications where obstacles are detected during motion, where the computation time required for a global solution delays the task execution, or simply as an alternative to a computationally expensive off-line search.

Hossain and Ferdous [26] explored the application of Bacterial Foraging Optimization (BFO) for mobile robot path planning, in simulation. The obstacles are wrapped by circles, and the robot by a square (i.e., low accuracy); the elapsed time depends on the number of obstacles: 15 to 160 s are required to generate a safe path, better than the basic BFO and Particle Swarm Optimization (PSO) algorithms [27]. Hao and Agrawal [28] provided a framework for planning and control of formations of multiple Unmanned Ground Vehicles (UGVs) to traverse between goal points. However, the environment is modeled at low resolution (15 × 30) through fixed-cell decomposition. Paths are planned using the A* search algorithm, and a Lyapunov-based controller is used to keep the vehicles on their planned trajectories.

There exists a challenge of complex interaction between the robot and its dynamic environment [26,28-31]. The authors in [32] addressed this challenge by using an identification system through NARMAX modeling to identify the desired robot behavior and generate control code by task demonstration. Cabreira et al. [33,34] used a Genetic Algorithm (GA) with a very limited resolution due to the cell decomposition of their simulated environment. Moreover, neither the exact robot geometry and location nor the obstacles' locations and shapes were taken into account while dealing with such a problem. Das and colleagues [35] proposed an Improved Gravitational Search Algorithm (IGSA) to optimize the paths of a multi-robot system in a dynamic environment represented at moderate resolution. However, the proposed algorithm requires about 55 s for optimal path generation in simulation (without real robot execution).

Besides the aforementioned problems, the robot must use a robust localization strategy inside its environment to precisely estimate its location (position and orientation) within the spatial map during the planned path execution [36]. Accurate localization is a key to successful path execution by mobile robots. In this area, landmark-based methods have been widely used for localization and path planning [37-39] for indoor mobile robots, Automated Guided Vehicles (AGVs), and Unmanned Aerial Vehicles (UAVs). Beinhofer et al. [40] considered the problem of optimally placing artificial landmarks for mobile robots to execute navigation tasks using the minimum number of landmarks. Another major problem is the visibility constraint of static landmarks in an environment with obstacles; this problem was studied by Hayet et al. in [41]. In Markov localization, the robot belief state is usually represented as separate probability assignments for every possible robot location in its map. Kalman Filter (KF) localization is an optimal and efficient sensor fusion technique; it represents the robot belief state using a single, well-defined Gaussian probability density function [42].

Finally, path execution consists of controlling and directing a mobile robot to a specified destination with respect to imposed performance criteria. Here, the robot needs strategies to complete the execution and tracking of the generated path [43]. Thus, motor actions must be determined and adapted to environmental changes. For this aim, the robot uses a control system to provide suitable input velocities to drive the robot toward the desired location (Target).

In this paper, we deal with the off-line path planning problem for mobile robots. First, visual perception using a two-Kinect cameras system is performed to model the surrounding environment through image processing techniques. Furthermore, a GA is implemented to generate an optimal collision-free path, which is smoothed using a Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) [44]. In addition, an Adaptive Fuzzy-Logic (AFL) controller is designed to keep the differentially-driven mobile robot, RobuTER, on the desired path (by transmitting the appropriate right and left velocities). In parallel, sensor fusion is used to estimate the robot location (position and orientation) using a KF.

The remainder of this paper is organized as follows. Section 2 describes the experimental robotic system. Section 3 briefly summarizes the proposed path planning and execution approach. Section 4 describes the visual perception of the robot environment and the image processing phases. Section 5 presents the path planning and smoothing phase using GA and PCHIP. Section 6 details the sensor-fusion localization of the mobile robot considering both odometry and Kinect sensors. Section 7 describes the AFL controller developed to track the generated optimal path. Section 8 presents and discusses the main experimental results. Section 9 concludes the paper and presents some future works.

2. Experimental robotic system

The considered experimental robotic system, available at our laboratory, is shown in Fig. 1. It consists of two distant sites with a wireless connection [45]:

• Operator site (local site): an off-board host PC is used to send orders to the robot using wireless communication through a TCP/IP connection.
• Remote site: it consists of a rectangular non-holonomic differentially-driven mobile manipulator robot, RobuTER/ULM, controlled by an on-board industrial PC.

In this work, we only utilize the RobuTER mobile base; the ULM manipulator is not considered.

3. Global overview of the proposed path planning approach

Our goal is to plan and carry out a collision-free optimal smoothed path moving an autonomous mobile robot from Source(x, y, θ)Init to Target(x, y, θ)Fin, using a two-Kinect cameras system and soft computing techniques (Genetic Algorithm and fuzzy logic).

The overall diagram of the proposed approach is summarized in Fig. 2. This approach consists of five successive phases: (i) visual perception of the robot environment, (ii) image processing, (iii) optimal path planning and smoothing, (iv) sensor-fusion localization of the robot and, finally, (v) optimal path tracking and execution. The last two phases are carried out in parallel. All these phases are detailed in the following sections.

Fig. 1. Architecture of the experimental robotic system.

Fig. 2. Overall diagram of the proposed approach for path planning and obstacle avoidance.

Fig. 3. Experimental robotic test bed.

4. Visual perception of the robot environment and image processing

The visual perception of the robot environment is carried out using a two Microsoft Kinect cameras system (Version 1). This type of Kinect has two sensors: a color sensor and a depth sensor. The image stream contains color data in RGB format with a resolution of 640 × 480 at 30 frames per second. The depth stream returns depth information in pixels, with the same resolution and rate, with a valid range of [500 mm, 4000 mm]. For more information, please refer to [46].

4.1. Visual perception of the robot environment

As shown in Fig. 3, two Kinect cameras are fixed on the roof of the experimental workroom; each Kinect is placed at a height of 3500 mm. They are placed along the same axis so as to visualize both the environment (ground, obstacles, etc.) and the mobile robot (RobuTER). The Kinect cameras are connected to the off-board host PC (operator site) to acquire and process images of the scene.

The diagram of this first phase is shown in Fig. 4(a) (RGB image acquisition) and Fig. 4(b) (depth image acquisition).

The resolution of the pictures delivered by the two-Kinect cameras system is 640 × 960 pixels. It covers an area of approximately 4058 mm × 6087 mm with a horizontal accuracy of 1 pixel ≈ 6.34 mm; the vertical accuracy (depth) is 1 mm.

The result of this first phase (Kinect-sensor acquisition of the environment) is an almost continuous description of the environment, which is well suited to the next phases. Fig. 5(a) shows the RGB images acquired by the first and second Kinect cameras, respectively; Fig. 5(b) shows the depth images given by Kinect 1 and Kinect 2, respectively (for the considered environment of the robot).

It must be mentioned here that the RGB image is only used by the operator to indicate the imposed Target location on the screen with the mouse.

(a) RGB images acquisition.
(b) Depth images acquisition.

Fig. 4. Diagrams of RGB and depth images acquisition.

(a) RGB images captured by Kinect 1 and Kinect 2.
(b) Depth images captured by Kinect 1 and Kinect 2.

Fig. 5. Continuous description of the environment where the robot evolves.

4.2. Image processing

The objective of this second phase is to obtain a binary safe map where the robot can evolve without colliding with possible obstacles inside its environment. The various operations performed are described by the diagram of Fig. 6.

We first reduce the noise of the acquired depth image (Fig. 5(b)) to obtain a non-noisy image using a rotationally symmetric 2D Gaussian low-pass filter [47] of size 13 × 13 with a standard deviation of 1.4 (Fig. 7(a)). Next, the Canny edge detector [48] is applied to detect the edges, where two thresholds are applied using a hysteresis model (Fig. 7(b)): the higher threshold = 35%, the lower threshold = 5%, and the standard deviation = √2. Moreover, using a distributed threshold, an environment model can be obtained in the form of a binary (logical) map that can be used by the robot (Fig. 7(c)); the height threshold to extract obstacles is 100 mm.

However, to successfully move without collisions, and due to the uncertainties of the mobile robot sensors, a safety region is added to the binary map using a disk mask (Fig. 7(d)); the radius of the added safety region is 20 pixels (20 pixels ≈ 20 × 6.34 mm/pixel ≈ 126.8 mm). Finally, using two thresholds, we obtain a safe map representing three areas (Fig. 7(e)): (i) a forbidden area (obstacles), (ii) a dangerous area (near the obstacles) and (iii) a safe area (far away from the obstacles), respectively.

5. Collision-free optimal path planning using Genetic Algorithm (GA)

The safe map generated in the previous phase (Fig. 7(e)) constitutes the input, among others (Source, Target, etc.), of this third phase. The latter deals with optimal path planning connecting Source to Target using GA, and path smoothing using PCHIP; it is described in the diagram of Fig. 8.

5.1. Coding the problem

The GA is initialized based on the binary safe map obtained from the previous phases, the estimated initial robot position (Source) and the final goal (Target) introduced by the operator. Each generation has its own population POP that consists of a fixed number C of chromosomes; each represents a possible path from Source to Target in the form of a variable number S of segments joined together. Each chromosome stores the S + 1 coordinates (x, y) of the segment ends, including Source, Target, and the S − 1 points in between. POP therefore consists of a multidimensional array: two coordinates (x, y) by S + 1 points by C paths.

Table 1 represents one generation of C chromosomes with S segments each, linking Source to Target.

Every two successive points define a directed segment that is part of the path linking Source to Target; thus each chromosome represents one path. A sample chromosome i from a generation is represented in Fig. 9.
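To make this encoding concrete, here is a minimal Python sketch of such a population array; the dimensions, seed, and Source/Target coordinates are hypothetical values of ours, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

C = 40                     # chromosomes (paths) per generation (assumed value)
S = 18                     # segments per path, hence S + 1 points
X_MAX, Y_MAX = 960, 640    # map size in pixels (assumed)

source = np.array([10.0, 300.0])    # (x, y) of Source (hypothetical)
target = np.array([900.0, 200.0])   # (x, y) of Target (hypothetical)

# POP is a C x (S + 1) x 2 array: C paths, each a list of S + 1 (x, y) points.
pop = rng.uniform(low=1, high=(X_MAX, Y_MAX), size=(C, S + 1, 2))
pop[:, 0, :] = source     # every path starts at Source...
pop[:, -1, :] = target    # ...and ends at Target
```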
A. Bakdi et al. / Robotics and Autonomous Systems 89 (2017) 95–109 99

Table 1
A generation of C chromosomes with S segments.

Fig. 6. Diagram of building the safe map and calculation of the three areas.

(a) Gaussian filtered depth image (Kinect 1 + Kinect 2).
(b) Canny edges image (Kinect 1 + Kinect 2).
(c) Binary map (Kinect 1 + Kinect 2).

Fig. 7. Environment processing and obtained result images.
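As a hedged illustration of the processing chain of Section 4.2 (Gaussian smoothing, Canny edges, height thresholding, and safety dilation, as in Figs. 6 and 7), here is a minimal OpenCV sketch. The kernel size, sigma, thresholds and disk radius mirror the values quoted above; the floor estimate and the fusion of edge and height cues are our own assumptions, since the paper's implementation is not published:

```python
import cv2
import numpy as np

def build_safe_map(depth_mm: np.ndarray,
                   obstacle_height_mm: float = 100.0,
                   safety_radius_px: int = 20) -> np.ndarray:
    """Turn a fused ceiling-camera depth image (mm) into a 3-level map:
    0 = forbidden (obstacle), 1 = dangerous (near obstacle), 2 = safe."""
    # 1) Rotationally symmetric 13x13 Gaussian low-pass filter, sigma = 1.4.
    smooth = cv2.GaussianBlur(depth_mm.astype(np.float32), (13, 13), 1.4)

    # 2) Canny edges with hysteresis (35% / 5% thresholds, rescaled to 8 bits).
    img8 = cv2.normalize(smooth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img8, int(0.05 * 255), int(0.35 * 255))

    # 3) Binary obstacle map: pixels rising >= 100 mm above the floor plane
    #    (median-based floor estimate is our assumption), fused with edges.
    floor = np.median(smooth)
    obstacles = ((floor - smooth) >= obstacle_height_mm) | (edges > 0)

    # 4) Dilate obstacles with a disk mask to add the safety margin.
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                     (2 * safety_radius_px + 1,) * 2)
    dangerous = cv2.dilate(obstacles.astype(np.uint8), disk).astype(bool)

    safe_map = np.full(depth_mm.shape, 2, dtype=np.uint8)  # safe everywhere...
    safe_map[dangerous] = 1                                # ...except near...
    safe_map[obstacles] = 0                                # ...and on obstacles
    return safe_map
```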

5.2. Generating an initial population

This is achieved by setting the parameters (x, y) of each point in each chromosome to random numbers. In generating the population, we have to use common sense when setting limits on the chosen random numbers, so that (x, y) remains inside the map. Moreover, choosing x or y as a linearly spaced distribution leads to faster convergence toward the best solution.

The initial population is a combination of three sets. Set 1 consists of random points in the map. Set 2 and Set 3 are not totally random; they are generated along X or Y. Two different cases can be distinguished:

1. |X_Target − X_Source| > |Y_Target − Y_Source|

For Set 2 and Set 3, X is uniformly distributed with fixed step ∆X = (X_Target − X_Source)/S as follows (where XX_0 = X_Source, XX_S = X_Target, YY_0 = Y_Source, YY_S = Y_Target):

• XX = [XX_0, XX_1, ..., XX_{S−1}, XX_S], where XX_i = XX_{i−1} + ∆X, i = 1, 2, ..., S.
• YY = [YY_0, YY_1, ..., YY_{S−1}, YY_S], where YY_i = (XX_i − X_Source)·(Y_Target − Y_Source)/(X_Target − X_Source) + Y_Source, i = 1, 2, ..., S.

Set 2 is the set of segments near the straight line joining Source and Target: Y_i = YY_i + r_i, where r_i is a random number in the interval [−r, r]; we took r = Y_max/6.

Set 3 has the same X coordinates as Set 2; however, Y_i is random in the interval [1, Y_max], where Y_max represents the maximum allowed Y coordinate of the segment points.

(d) Disk filtered image (Kinect 1 + Kinect 2).

(e) Obtained safe map (Kinect 1 + Kinect 2).

Fig. 7. (continued)

Fig. 8. Global diagram of the considered GA-PCHIP for optimal path planning and smoothing.

Table 2
Three examples of chromosomes.

Table 2 gives examples of chromosomes of the three considered sets.

2. |X_Target − X_Source| < |Y_Target − Y_Source|

X and Y are reversed for Set 2 and Set 3; the sets are generated this time along the Y axis.

Fig. 10 illustrates three examples of chromosomes of the initial random population. In this work, we have considered the following division of the sets (a code sketch of this construction follows the list):

• Set 1 provides 25% of the chromosomes of the initial population.
• Set 2 provides 25% of the chromosomes of the initial population.
• Set 3 provides 50% of the chromosomes of the initial population.
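A minimal sketch of the Set 2/Set 3 construction for case 1 (|X_Target − X_Source| > |Y_Target − Y_Source|), following the formulas above; the function and variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def initial_sets(source, target, S, y_max):
    """Set 2: points jittered around the Source-Target line (r = y_max/6).
    Set 3: same X samples, but fully random Y in [1, y_max]."""
    xs, ys = source
    xt, yt = target
    dx = (xt - xs) / S
    XX = xs + dx * np.arange(S + 1)                  # uniform X samples
    YY = (XX - xs) * (yt - ys) / (xt - xs) + ys      # Y on the straight line

    r = y_max / 6.0
    set2 = np.column_stack([XX, YY + rng.uniform(-r, r, S + 1)])
    set3 = np.column_stack([XX, rng.uniform(1, y_max, S + 1)])
    for chrom in (set2, set3):                       # endpoints stay fixed
        chrom[0] = source
        chrom[-1] = target
    return set2, set3

set2, set3 = initial_sets((10.0, 300.0), (900.0, 200.0), S=18, y_max=640)
```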

Fig. 9. A sample chromosome i from a generation linking Source to Target.

Fig. 10. Three examples of chromosomes of initial random population.

(a) Mobile robot geometry.
(b) Overlapping the mobile robot with the binary map.

Fig. 11. Calculation of the total error ξ of a chromosome.

Table 3
Examples of two chromosomes A and B.

Table 4
Crossing over chromosomes A and B at random position R.

5.3. Testing the fitness of the strings

Once we have our population of strings, we need to see how well each of them performs, i.e., to find their fitness J. It is evaluated based on the following function (Eq. (1)):

J = KL·ι + KE·ξ + KT·η    (1)

• KL, KE, KT: weighting factors for the length (ι), the error (ξ), and the total deviation (η), respectively. In our case, we took an equilibrium selection: since the length is at the level of hundreds, the error at the level of thousands, and the total deviation at the level of tens, we selected KL = 3 unit/pixel, KE = [1 + 0.3·n] unit/pixel² (where n is the generation number) and KT = 10 unit/radian. KE increases at each generation so that, after several generations have passed, the GA focuses on ξ more than on ι or η. In the selection, it is clear that the best paths are those with the lower J.

• ι: the total length of the chromosome (sum of its S segment lengths). ι, calculated by (2), is used to ensure the optimal (shortest) path in terms of Euclidean distance:

ι = Σ_{k=0 (Source)}^{S−1} √((x_{k+1} − x_k)² + (y_{k+1} − y_k)²)    (2)

  ◦ (x_k, y_k), (x_{k+1}, y_{k+1}): the coordinates of the points k and k + 1 of the kth segment of the path, k = 0 (Source), 1, 2, ..., S (Target).

• ξ: the total error of the chromosome, contributed by its segments. The error of each segment is calculated as the area of the intersection between the robot and the obstacles when it moves along this segment, taking the robot geometry into account (Fig. 11(a)). To move along a segment from A(x_A, y_A) to B(x_B, y_B), we took as a measure of the error ξ the area of intersection between the obstacles and the rectangle around the segment (representing the area swept by the robot). This is done by overlapping the rectangle with the binary map and summing up the number of intersection points, for each segment, in each chromosome, and for each generation (Fig. 11(b)).

• η: the accumulated change in the robot orientation to reach its goal (Target) through this path. The total deviation η is the sum of the deviations at the S + 1 points, contributed by the S segments, the initial orientation, and the final orientation. It is given by (3):

η = Σ_{k=0 (Source)}^{S (Target)} |θ_{k+1} − θ_k|    (3)

  ◦ θ_k: the angle of the kth segment of the path, k = 1, 2, ..., S; θ_0 is the real initial robot orientation (Source); θ_{S+1} is the final desired robot orientation (Target).

In this work, the deviation at Source (between segment 1 and the initial orientation), |θ_1 − θ_Init|, is amplified 10 times so that the robot can start easily (the deviation at Source must be 0). The same is done for Target; the robot must stop at the correct orientation. Thus, the real considered η is calculated by (4):

η = 10·(|θ_1 − θ_Init| + |θ_Target − θ_{S−1}|) + Σ_{k=1}^{S−1} |θ_{k+1} − θ_k|    (4)
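The three fitness terms of Eqs. (1)-(4) translate almost directly into code. A minimal sketch (variable names are ours; the error term ξ is passed in as the obstacle-overlap count described above rather than recomputed here):

```python
import numpy as np

def fitness(path, theta_init, theta_fin, obstacle_count, n_gen,
            KL=3.0, KT=10.0):
    """J = KL*length + KE*error + KT*deviation, Eq. (1); KE grows with
    the generation number n_gen so that late generations favour safety."""
    KE = 1.0 + 0.3 * n_gen

    diffs = np.diff(path, axis=0)                     # segment vectors
    length = np.sqrt((diffs ** 2).sum(axis=1)).sum()  # Eq. (2)

    angles = np.arctan2(diffs[:, 1], diffs[:, 0])     # theta_k per segment
    eta = (10.0 * (abs(angles[0] - theta_init)        # amplified endpoints
                   + abs(theta_fin - angles[-1]))
           + np.abs(np.diff(angles)).sum())           # Eq. (4)

    return KL * length + KE * obstacle_count + KT * eta
```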
5.4. Selecting breeding pairs

The Nb_Best_C best chromosomes are kept; they represent Set A (see the next subsection for the other sets). These chromosomes have the lowest fitness; they are randomly paired up into Nb_Best_C/2 pairs. Their chance of ending up in the new population is proportional to their fitness J.

5.5. Crossover

After taking the fittest strings, we need to let them breed. First, two strings are paired up. Then, a point R on the strings is chosen at random. Next, everything after that point is crossed over, so that everything in the first string, after the point, ends up in the second string and vice-versa. To clarify this, let us consider the example of two chromosomes A and B, as shown in Table 3.

Crossing over chromosomes A and B at a random position R gives the two chromosomes AB and BA shown in Table 4.

Since Source and Target are the same for both parents, a crossover there would result in the same chromosomes, just exchanged; consequently, crossover cannot be done at R = 0 (between Source and the next point) or at R = S − 1 (between Target and the previous point). Crossover is thus done at any random position R where 1 ≤ R ≤ S − 2.
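The crossover rule (1 ≤ R ≤ S − 2, with both endpoints fixed) can be sketched as follows; this is our own minimal rendering, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def crossover(A: np.ndarray, B: np.ndarray):
    """Swap the tails of two parent paths (each of shape (S + 1, 2)) at a
    random interior cut R, keeping Source (index 0) and Target (index S)."""
    S = len(A) - 1
    R = rng.integers(1, S - 1)            # random R with 1 <= R <= S - 2
    AB = np.vstack([A[: R + 1], B[R + 1:]])
    BA = np.vstack([B[: R + 1], A[R + 1:]])
    return AB, BA
```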

5.6. Mutation

After breeding, the next step is to mutate the strings. We tried many mutations; some of them make the GA diverge, some of them lead it to a non-optimal solution, etc. We observed the results and the development of the GA several times; we added, modified, and organized the mutations to solve each problem.

The set of chromosomes resulting from crossover undergoes the following mutations (see Fig. 12):

• M1: exchanges two segment ends; it is very useful to correct the path.
• M2: picks a random point and moves it to a random position.
• M3: minimizes ∆θ_i to reduce the zigzag effects.
• M4: deletes one segment.
• M5: moves a point to reduce the error.
• M6: moves a segment to reduce the error.
• M7: similar to M6, but the segment is attached to Source or Target.
• M8: moves a point toward the previous or next point.

Fig. 12. Considered mutations leading to the optimal solution.

All the mutations manipulate (X, Y) coordinates depending on the scenario. M1 and M2 go together; if M1 cannot be applied (no loop), M2 is applied to yield one new set (Set B). M3 and M4 are combined together; if one segment has a large θ_k, then M3 will reduce it. Otherwise, M4 will delete a segment (almost a line) to merge it into one segment. This operation is performed twice to give two sets (Set C and Set D): pick a point and apply M3 or M4, then pick another point and do the same.

The remaining mutations are concerned with error reduction. M5, M7, and M8 are done by moving a single point; if a point (segment end) is entirely inside an obstacle, M5 will get it out, and then the segments attached to Source and Target are checked. If an error exists there, only one point can be moved by M7 to correct the error associated with that segment; the error associated with a segment can also be reduced by M6, which moves two (02) points. Those operations produce two sets (Set E and Set F) by executing them twice. If no error exists, then mutations M1, M2, M3 and M4 are applied additional times to complete the generation. Basically, some mutations are useful at the beginning of the GA; others work better at the end, for refinement.

5.7. Stopping the GA

In the current version of our implementation, the GA considers the found path optimal if it is safe first (ξ = 0), the best path has not improved for five (05) generations, and all mutations and crossovers lead to worse paths. The GA stops in the following cases:

• At the first level, when the user picked a wrong destination, such as on an obstacle or near to it.
• At the second level, if the GA does not find a feasible collision-free path after 100 iterations, it considers that no path exists between Source and Target.
• At the third level, the GA never stops while the error is greater than zero (ξ ≠ 0), because this would lead the robot to a collision with an obstacle.
• If a safe path is obtained, the GA does not run forever; it stops when the best path is the same for 05 successive iterations, i.e. the same best path with the same fitness. This means that the GA is no longer improving the path (this is the best one).
• If the path keeps improving only slightly for too long (100 iterations, for example), converging toward the optimal path, the GA stops and considers the nearly optimal path as the best one.

5.8. Path smoothing using PCHIP

The GA stops when it finds the optimal path, which consists of a set of connected segments that needs to be smoothed. For this aim, PCHIP [44] is applied to the resulting path.

PCHIP interpolation is like cubic spline interpolation, but it preserves the shape. In this case, taking discrete data (the segment ends), PCHIP generates 3rd-order polynomials (curved lines) instead of straight lines (1st order); these polynomials are continuous and smooth the path. The polynomials are found by solving the equations of the end points, continuity (derivatives) and minimal error (shape preservation): for each 3rd-order polynomial, four unknowns and four equations.

Take the optimal path in the form of (X, Y) vectors, as in Table 1 (S + 1 points). The path is a zigzag line; we generate a new vector xi of about 1000 points from Source to Target. yi = PCHIP(X, Y, xi) returns a vector yi containing elements corresponding to the elements of xi and determined by piecewise cubic interpolation within the vectors X and Y. The resulting yi versus xi gives the smoothed path.
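The smoothing step maps directly onto SciPy's shape-preserving interpolator. A minimal sketch; note that, unlike the X-based description above, we parameterize by cumulative chord length (our own generalization), which also covers paths whose X coordinates are not monotonic:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def smooth_path(points: np.ndarray, n: int = 1000) -> np.ndarray:
    """Shape-preserving smoothing of a GA path ((S + 1) x 2 array of
    segment ends), returning n densely sampled points."""
    seg = np.diff(points, axis=0)
    t = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    ti = np.linspace(0.0, t[-1], n)          # ~1000 samples, as in the paper
    fx = PchipInterpolator(t, points[:, 0])  # x(t), shape-preserving cubic
    fy = PchipInterpolator(t, points[:, 1])  # y(t)
    return np.column_stack([fx(ti), fy(ti)])
```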
6. Mobile robot localization

Sensor-fusion localization is implemented to better estimate the mobile robot position and orientation while moving toward Target. For this aim, we consider both the robot encoders and Kinect Hue-Saturation-Value (HSV) localization.

The differentially-driven mobile robot, RobuTER, is assumed to move on the plane. Consequently, its kinematic model can be described, in the absolute reference frame RA, by three parameters (xB, yB) and θB, which represent the Cartesian coordinates and the orientation angle of the mobile base local frame RB, respectively. More details are given in the following subsections.

Fig. 13. Odometry mobile robot localization and Instantaneous Center of Curvature.

6.1. Odometry localization

During its motion, the mobile robot odometry is calculated in real time as shown in [49]. Indeed, we established a relationship between RA and RB to specify the actual robot location p. To describe the robot motion in terms of component motions, in (6) we map the motion ṗA along the axes of RA to the motion ṗB along the axes of RB through the transformation matrix R(θ) and its inverse R⁻¹(θ) given by (5):

R(θ) = [cos θ  sin θ  0; −sin θ  cos θ  0; 0  0  1],  R⁻¹(θ) = [cos θ  −sin θ  0; sin θ  cos θ  0; 0  0  1]    (5)

ṗB = R(θ)·ṗA,  ṗA = R⁻¹(θ)·ṗB.    (6)

The differentially-driven mobile robot has two driving wheels, each with diameter r. Q is centered between these two wheels; each wheel is at a distance l from Q. Given r, l and g, the Instantaneous Center of Curvature (ICC), shown in Fig. 13, is given by (7):

ICC = [x − g·sin θ, y + g·cos θ].    (7)

The right velocity VR = ω(g + l) and the left velocity VL = ω(g − l), where g = l·(VR + VL)/(VR − VL), are mapped into the robot angular velocity ω = (VR − VL)/(2l) and the linear velocity V = g·ω = (VR + VL)/2. Thus, the component motions are given by (8):

[ẋA, ẏA, θ̇A]ᵀ = [V·cos θ, V·sin θ, ω]ᵀ = [((VR + VL)/2)·cos θ, ((VR + VL)/2)·sin θ, (VR − VL)/(2l)]ᵀ.    (8)

Assuming that Ẋ(t) ≈ ∆X/∆T, the curve between two positions is approximated by a straight-line segment with constant angle θ + ∆θ/2. With ∆SR and ∆SL the distances traveled by the right and left wheels, respectively, the robot position at the (k + 1)th instant is calculated by (9):

pA(k + 1) = pA(k) + ∆pA = [xA, yA, θA]ᵀ + [((∆SR + ∆SL)/2)·cos(θ + ∆θ/2), ((∆SR + ∆SL)/2)·sin(θ + ∆θ/2), (∆SR − ∆SL)/(2l)]ᵀ.    (9)
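Eq. (9) as a short sketch (∆SR and ∆SL come from the wheel encoders; l is the half-axle length):

```python
import numpy as np

def odometry_step(pose, dSR, dSL, l):
    """Eq. (9): advance pose = (x, y, theta) given the distances traveled
    by the right and left wheels since the last encoder reading."""
    x, y, theta = pose
    ds = (dSR + dSL) / 2.0            # distance traveled by the midpoint Q
    dtheta = (dSR - dSL) / (2.0 * l)  # heading change
    return np.array([x + ds * np.cos(theta + dtheta / 2.0),
                     y + ds * np.sin(theta + dtheta / 2.0),
                     theta + dtheta])
```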
6.2. Kinect HSV-based localization

The mobile robot location is tracked using the color and depth images acquired by the two-Kinect cameras system, based on a single-color extraction method (Fig. 14(a)), as follows [50]:

1. Selection of a small red (or another color) area on the robot, which is unique in the scene.
2. Conversion of the RGB image to the HSV color format.
3. Calculation of the histograms of the H, S and V components.
4. Derivation of the lower and upper limits of the highest concentration for each component.
5. Use of these limits as thresholds to extract the preselected colored area in the images when the robot moves.

The RGB color image is converted to the HSV color image format. As H varies from 0 to 1, the resulting color varies from red through yellow, green, cyan, blue and magenta, and returns to red. When S is 0, colors are unsaturated (i.e., shades of gray). When S is 1, colors are fully saturated (i.e., they contain no white component). As V varies from 0 to 1, the brightness increases [51].

Color content is characterized using the histograms of its HSV components; for each component, the histogram is constructed by counting the number of pixels having the same value (Fig. 14(b)). These histograms are used to determine H0, H1, S0, S1, V0 and V1, the lower and upper limits for the H, S and V components, respectively. With the predetermined depth of the mobile robot, D0 and D1 can also be used as the lower and upper limits for the depth. These limits are then applied to threshold the next images transferred to HSV format, to extract the pixels corresponding to the tracked area; we take the mean of the coordinates of the obtained pixels as the current location of the mobile robot delivered by the Kinect cameras.

The red color selected in the environment has (H < 0.1 or 0.93 < H) and (0.4 < S < 0.8) and (0.41 < V < 0.62); as shown in the histogram of Fig. 14(b), these represent H0, H1, S0, S1, V0, V1, the limits of each component, respectively. Moreover, we know that the depth of the robot is (2500 < D < 3000), which corresponds to the limits D0 and D1. If the only red color (having the same HSV) is on the robot, then this necessarily corresponds to the current location of the mobile robot.
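A minimal OpenCV version of the HSV-plus-depth thresholding just described; the numeric limits come from the text, rescaled to OpenCV's 0-179 (H) and 0-255 (S, V) ranges, while the helper name is ours:

```python
import cv2
import numpy as np

def track_robot(bgr: np.ndarray, depth_mm: np.ndarray):
    """Return the (x, y) pixel location of the red marker on the robot,
    or None if no pixel passes the limits of Section 6.2."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # H in [0,179]; S,V in [0,255]
    h, s, v = cv2.split(hsv)
    red_hue = (h < 0.10 * 180) | (h > 0.93 * 180)        # H < 0.1 or H > 0.93
    mask = (red_hue
            & (s > 0.40 * 255) & (s < 0.80 * 255)        # 0.40 < S < 0.80
            & (v > 0.41 * 255) & (v < 0.62 * 255)        # 0.41 < V < 0.62
            & (depth_mm > 2500) & (depth_mm < 3000))     # D0 < D < D1
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())  # mean of the matching pixels
```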
6.3. Sensor fusion using Kalman filter

For a better estimation of the actual location (position and orientation) of the mobile robot inside its environment, we combine the information provided by two types of sensors (odometry sensors and the two Kinect sensors). Thus, we approximate the error characteristics of these sensors with uni-modal zero-mean Gaussian noise [52]. Furthermore, we assume that there is no robot motion between two measurements (odometry and Kinect measurements).

The odometry system gives a measurement p1 with mean value p̂1 = p1 and variance σ1²; the two-Kinect system gives a measurement p2 with p̂2 = p2 and variance σ2². Based on weighted least squares, the best localization estimate p̂, minimizing the error EP given by (10), is found to be (11):

EP = (1/σ1²)·(p̂ − p1)² + (1/σ2²)·(p̂ − p2)²    (10)

p̂ = p1 + (σ1²/(σ1² + σ2²))·(p2 − p1),  σ̂² = (σ1²·σ2²)/(σ1² + σ2²).    (11)

(a) Features extraction by thresholding the H, S, V and depth components.
(b) H, S and V histograms of the selected red color.

Fig. 14. Kinect HSV-based localization of the mobile robot. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

7. Adaptive fuzzy-logic (AFL) controller

To carry out and track the optimal path generated in the previous steps and move the mobile robot toward Target, we have developed the AFL controller given in Fig. 15. The choice of an AFL controller can be motivated as follows:

• The adaptation is driven by the instantaneous errors EA(t) (in RA) and EB(t) (in RB), which keep updating v toward vm and ω toward ωm, to correctly execute the path (vm and ωm are the desired values of the conceptual intermediate controls v and ω).
• The AFL controller makes the robot update its velocities as quickly as possible, so that the error always converges to 0.

Let us consider the path Pd(t) (given in RA); the actual position of the mobile robot is p(t) (given in RA), and the robot moves with velocities v(t) and ω(t).

Let us consider, in RA, P(t) = [x(t), y(t), θ(t)]ᵀ the state vector representing the actual current position and orientation, and Pd(t) = [xd(t), yd(t), θd(t)]ᵀ the desired path to be tracked; the error vector is E(t) = [E1, E2, E3]ᵀ = [xd(t) − x(t), yd(t) − y(t), θd(t) − θ(t)]ᵀ. Using Eqs. (5) and (6), we can write the error in RB as in (12) and its derivative as in (13):

EB = [Ex, Ey, Eθ]ᵀ = [cos θ  sin θ  0; −sin θ  cos θ  0; 0  0  1]·[E1, E2, E3]ᵀ    (12)

ĖB = ∂EB/∂t = [Ėx, Ėy, Ėθ]ᵀ = [vd·cos Eθ − v + Ey·ω, vd·sin Eθ − Ex·ω, ωd − ω]ᵀ.    (13)

We select v and ω such that EB → [0, 0, 0]ᵀ asymptotically. If vm and ωm are the desired values of the conceptual intermediate controls v and ω, the velocity errors are Ev = v − vm and Eω = ω − ωm.

The adaptation mechanism is based on Lyapunov stability [53]; the following Lyapunov candidate function is selected as indicated in (14), and using (12) and (13), its derivative V̇ = ∂V/∂t is calculated by (15):

V(EB) = (Ex² + Ey²)/2 + (1 − cos Eθ)    (14)

V̇ = Ex·Ėx + Ey·Ėy + sin(Eθ)·Ėθ = Ex·(−vm + vd·cos Eθ) + sin Eθ·(Ey·vd + ωd − ωm) − Ex·Ev − Eω·sin Eθ.    (15)

Selecting vm and ωm such that:

vm = vd·cos Eθ + k1·Ex   (k1 > 0)    (16)

ωm = ωd + vd·Ey + k2·sin Eθ   (k2 > 0),    (17)

V̇ = −k1·Ex² − k2·sin² Eθ − Ex·Ev − Eω·sin Eθ must be negative definite. The first two terms of V̇ are negative; so, for asymptotic stability (Ex → 0, Ey → 0, Eθ → 0) and parameter convergence, we have to make Ev → 0 and Eω → 0.

The AFL controller takes as inputs the errors Ev, Eω, and their derivatives Ėv and Ėω. Then, the controller maps them into the required changes in the motor speeds v (Table 5) and ω (Table 6) using Mamdani rules.

The four inputs and two outputs are chosen to have nine membership functions, where the suffixes NVB, NB, NM, NS, Z, PS, PM, PB, and PVB stand for Negative Very Big, Negative Big, Negative Medium, Negative Small, Zero, Positive Small, Positive Medium, Positive Big, and Positive Very Big, respectively.

The four inputs correspond to the errors Ev, Ėv, Eω and Ėω as follows:

• Ev: evNVB, evNB, evNM, evNS, evZ, evPS, evPM, evPB, evPVB.
• Ėv: evdNVB, evdNB, evdNM, evdNS, evdZ, evdPS, evdPM, evdPB, evdPVB.
• Eω: ewNVB, ewNB, ewNM, ewNS, ewZ, ewPS, ewPM, ewPB, ewPVB.
• Ėω: ewdNVB, ewdNB, ewdNM, ewdNS, ewdZ, ewdPS, ewdPM, ewdPB, ewdPVB.

The outputs are the required changes of the mobile robot linear velocity uv and angular velocity uω; they are as follows:

• uv: uvNVB, uvNB, uvNM, uvNS, uvZ, uvPS, uvPM, uvPB, uvPVB.
• uω: uwNVB, uwNB, uwNM, uwNS, uwZ, uwPS, uwPM, uwPB, uwPVB.

The membership functions for each of the four inputs are as follows (Fig. 16(a)): for Ev, 100% corresponds to 0.2 m/s (200 mm/s); for Ėv, 100% corresponds to 0.4 m/s² (400 mm/s²); for Eω, 100% corresponds to 3 rad/s; and finally, for Ėω, 100% corresponds to 5 rad/s². The membership functions for each of the two outputs are given as follows (Fig. 16(b)): for uv, 100% corresponds to 0.1 m/s (100 mm/s); and for uω, 100% corresponds to 1 rad/s.
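The full controller uses the two 9 × 9 Mamdani tables below (Tables 5 and 6). As a hedged, reduced sketch of the same inference mechanism, here is a three-label version (N, Z, P instead of nine labels) with Gaussian membership functions, min conjunction, and centroid-of-singletons defuzzification; all of these simplifications are ours:

```python
import numpy as np

def gauss(x, c, sigma=0.35):
    """Gaussian membership function centred at c (normalized units)."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

CENTRES = {"N": -1.0, "Z": 0.0, "P": 1.0}
# Rule table: output label indexed by (label of Ev, label of dEv/dt);
# anti-symmetric like Table 5 (negative errors -> positive correction).
RULES = {("N", "N"): "P", ("N", "Z"): "P", ("N", "P"): "Z",
         ("Z", "N"): "P", ("Z", "Z"): "Z", ("Z", "P"): "N",
         ("P", "N"): "Z", ("P", "Z"): "N", ("P", "P"): "N"}

def mamdani_uv(ev, evd):
    """Mamdani inference: ev, evd normalized to [-1, 1] (100% = 0.2 m/s
    and 0.4 m/s^2 respectively, per Fig. 16); returns uv in [-1, 1]."""
    num = den = 0.0
    for (le, ld), lo in RULES.items():
        w = min(gauss(ev, CENTRES[le]), gauss(evd, CENTRES[ld]))
        num += w * CENTRES[lo]
        den += w
    return num / den

print(mamdani_uv(0.3, -0.1))  # small positive Ev -> reduce v slightly
```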

Fig. 15. Adaptive fuzzy-logic controller developed to track the generated path.

Table 5
If–then fuzzy rules for controlling linear velocity of the robot.
Ev , E˙v evdNVB evdNB evdNM evdNS evdZ evdPS evdPM evdPB evdPVB
evNVB uvPVB uvPVB uvPVB uvPVB uvPVB uvPB uvPM uvPS uvZ
evNB uvPVB uvPVB uvPVB uvPVB uvPB uvPM uvPS uvZ uvNS
evNM uvPVB uvPVB uvPVB uvPB uvPM uvPS uvZ uvNS uvNM
evNS uvPVB uvPVB uvPB uvPM uvPS uvZ uvNS uvNM uvNB
evZ uvPVB uvPB uvPM uvPS uvZ uvNS uvNM uvNB uvNVB
evPS uvPB uvPM uvPS uvZ uvNS uvNM uvNB uvNVB uvNVB
evPM uvPM uvPS uvZ uvNS uvNM uvNB uvNVB uvNVB uvNVB
evPB uvPS uvZ uvNS uvNM uvNB uvNVB uvNVB uvNVB uvNVB
evPVB uvZ uvNS uvNM uvNB uvNVB uvNVB uvNVB uvNVB uvNVB

Table 6
If–then fuzzy rules for controlling the angular velocity of the robot.
Eω , Ėω eωdNVB eωdNB eωdNM eωdNS eωdZ eωdPS eωdPM eωdPB eωdPVB
eωNVB uωPVB uωPVB uωPVB uωPVB uωPVB uωPB uωPM uωPS uω Z
eωNB uωPVB uωPVB uωPVB uωPVB uωPB uωPM uωPS uω Z uωNS
eωNM uωPVB uωPVB uωPVB uωPB uωPM uωPS uω Z uωNS uωNM
eωNS uωPVB uωPVB uωPB uωPM uωPS uω Z uωNS uωNM uωNB
eω Z uωPVB uωPB uωPM uωPS uωZ uωNS uωNM uωNB uωNVB
eωPS uωPB uωPM uωPS uω Z uωNS uωNM uωNB uωNVB uωNVB
eωPM uωPM uωPS uωZ uωNS uωNM uωNB uωNVB uωNVB uωNVB
eωPB uωPS uω Z uωNS uωNM uωNB uωNVB uωNVB uωNVB uωNVB
eωPVB uω Z uωNS uωNM uωNB uωNVB uωNVB uωNVB uωNVB uωNVB

8. Experimental results

The validation of the proposed GA-PCHIP planning approach is carried out through our own established scenario. The differentially-driven mobile robot, RobuTER, has to move from Source to Target inside a cluttered workspace. The appropriate right (VR) and left (VL) velocities are all calculated, before motion begins, on the operator host PC. To send the required velocities to the robot, we created a C++ program using the WinSock API. A wireless communication is established (between the operator-site host PC and the on-board PC of the robot) to send VR and VL, and to receive the odometry readings, which are used for feedback control of the robot.
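The paper's operator-site program is C++/WinSock; an equivalent minimal sketch in Python follows, where the address, port and binary wire format are our own assumptions for illustration:

```python
import socket
import struct

def send_velocities(sock: socket.socket, v_right: float, v_left: float) -> None:
    """Send one (VR, VL) command in mm/s over the TCP link to the robot.
    The frame of two little-endian floats is our assumed wire format."""
    sock.sendall(struct.pack("<2f", v_right, v_left))

# Usage ("ROBOT_IP" and 5000 are placeholders for the on-board PC address):
# with socket.create_connection(("ROBOT_IP", 5000)) as sock:
#     send_velocities(sock, 150.0, 150.0)  # drive straight at 150 mm/s
```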
8.1. Description of the validation scenario

For this experiment, the mobile robot evolves in the environment described in Fig. 5. The first two snapshots (Fig. 5(a)) represent the images captured by Kinect 1 and Kinect 2, respectively. Fig. 7(e) represents the safe binary map built from the concatenation of the two pictures shown in Fig. 5(b). The mobile robot RobuTER, with dimensions of 1200 mm × 680 mm, has to move from Source(x, y)Init = (74 mm, 1953 mm) toward Target(x, y)Fin = (5501 mm, 1308 mm) with a maximum linear speed of 150 mm/s. It must be noted here that all the distances are given in millimeters (mm) and the angles in radians (rad) in RA. Unfortunately, as can be seen, the initial and final orientations of the mobile base (θBInit and θBFin) are not taken into consideration in this validation scenario.

8.2. Obtained results

As shown in Fig. 17(a), the GA best path consists of 18 segments; it is obtained by the 50th generation. The path is generated in less than 7 s; its total length is ι = 6020 mm. In addition, the final GA-PCHIP smoothed path is shown in Fig. 17(b).

Fig. 18(a) gives the variation of the right wheel, left wheel and linear velocities of the mobile robot. Fig. 18(b) shows the variation of the instantaneous path tracking errors of the robot (Ex, Ey, Eθ) with respect to the imposed position and orientation at each instant.

8.3. Discussion of obtained results

From Fig. 18(a), the total execution time of the path for the experimental example is about 41.5 s. As can be seen, the maximum linear speed of the robot is 150 mm/s: acceleration from 0 s to 7 s, stable speed from 7 s to 39 s and, finally, deceleration from 39 s to 41.5 s. In addition, Fig. 18(b) indicates that the errors do not exactly converge to 0, but they are very close to 0, as expected. The maximum error is about 5 mm. The errors are cumulative; they are mostly due to turning locations.

(a) Gaussian inputs membership functions.
(b) Gaussian outputs membership functions.

Fig. 16. Inputs and outputs membership functions.

(a) Variation of the robot velocity for path execution: right wheel, left wheel and linear velocities.
(b) Variation of the path tracking errors (Ex, Ey and Eθ).

Fig. 18. Results obtained for the considered experimental validation scenario.

Fig. 17. Obtained optimal smoothed path linking Source to Target.

At the first turn of the robot (at about t = 5 s) in its path (Fig. 17), the robot velocities change considerably (Fig. 18(a)). Due to that, the error in orientation (Eθ) rapidly increases, as shown in Fig. 18(b); it is then compensated back to 0, and the trajectory of the robot is corrected. Most importantly, the orientation is kept accurate; the robot does not deviate too much (the maximum error Eθ is less than 13 mrad) and, most of the time, Eθ = 0.

In general, over all the executions we performed (for the previous experimental example), the generation of the best GA path took between 3 and 8 s, depending on the complexity of the map and the requested optimality level. In addition, we noticed that around the first 40 generations could lead to a collision-free path that is not exactly optimal. This mainly depends on the number of obstacles and their placement inside the robot environment.

Table 7 summarizes the average calculation time over 10 different runs of GA-PCHIP optimal path planning and smoothing (without path execution) for different workspaces, different dimensions and numbers of obstacles, and various Source-Target positions.

The fusion of the data collected by the odometry and the Kinect sensors allows the determination of the robot location with a maximum error of less than 7 mm in either the x or the y direction. This is mainly due to the resolution of the used Kinect cameras and that of the mobile robot encoders.

Table 7
Average calculation times for path generation of 10 runs with different Source and Target.

Resolution (accuracy, area size)     Number of obstacles   Time of generation (s)
                                                           Feasible safe path   Optimal path
640 × 480 (01 Kinect camera)         05                    5.09                 5.6
                                     10                    5.59                 6.3
                                     15                    6.46                 7.5
640 × 960 (02 Kinect cameras)        10                    6.33                 7.42
                                     15                    8.03                 8.74
                                     20                    8.47                 9.06
960 × 1280 (04 Kinect cameras)       15                    9.62                 10.3
                                     20                    10.6                 10.56
                                     25                    10.96                11.26

The true errors are mostly due to the reference frames of the Kinect cameras; they are not correctly concatenated. Indeed, an opening angle exists between them; it is impossible to physically correct this angle and rectify these errors. Nevertheless, the solution could be to take this opening angle into consideration mathematically, and use a correct concatenation.

GA is widely used for optimization problems and particularly for mobile robots. The traditional use of GA and other optimization methods is to search for the next position of the robot in an exact cell-decomposition environment description, which results in a limited set of possibilities. Two problems are associated with this approach. On the one hand, a very poor environment model is considered, such as in [33] and [34], which results in a meaningless robot location (position and orientation) and shape (the robot cannot be exactly represented in the map), with a bad obstacle representation (shape and location). On the other hand, improving the environment model, at moderate resolution, increases the search space of the algorithm, which results in a very long execution time, even in simulation. For example, 15 s to 160 s are required for generating a safe path (not optimal) using BFO or PSO as in [27], and about 55 s for optimal IGSA path planning as in [35].

However, with our accurate environment description at a high resolution, obtained from the Kinect cameras system, the GA is used in a different and efficient way:

• To ensure fast execution, the GA does not search for the next cell occupied by the robot; instead, it is based on a limited, variable number of segments which change in the map to ensure an optimal path with high accuracy.
• We adopt accurate representations of the environment, the obstacles and the mobile robot.
• We proposed an objective function with a set of mutations that leads to optimal solutions in terms of path length, safety and trajectory execution time.
• This method allows the consideration of exact robot dimensions, position, and orientation, so it allows the robot to navigate in a highly constrained workspace.
• For real-time application, this approach ensures optimality, accurate results, high safety, and fast path generation.

8.4. Extension of the proposed approach to on-line path planning in dynamic environment

In our work, GA-PCHIP is used for global path planning in the whole area observed by the two-Kinect cameras system, with a static environment. However, as the proposed GA-PCHIP path planning algorithm is faster than other methods in the literature, it can easily be extended to non-static environments, with an expanded operational workspace for the mobile robot. For this aim, an array of Kinect sensors can easily be used as a vision and modeling system to cover the whole area. In addition, if the vision system is configured to work at fast speed (15 frames per second, for example), it can rapidly detect changes in the environment. In such a case, the mobile robot stops, its current position is considered as the new initial configuration (new Source), and a new path is generated by the proposed approach to reach the imposed Target while taking into account the newly intruded obstacles and the changes in the environment.

8.5. Main contributions

The main contributions of this work can be outlined as follows:

• Instead of on-line path planning, we focused on off-line path planning for mobile robots with exact environment modeling, global optimal path planning, and correct path execution.
• Instead of considering a dynamic environment, our approach was applied to a highly-constrained environment, which is the case for large-size mobile robots evolving inside workspaces where accurate representation and modeling are required, and optimal conditions must be satisfied.
• A Kinect cameras system with artificial intelligence techniques is used for indoor mobile robot applications inside a highly-constrained environment (limited maneuvering area with obstacles).
• The use of the Kinect cameras system with the proposed processing techniques gives highly accurate results in modeling the constrained environment at a high resolution, which is almost continuous; thus, the proposed approach ensures an exact representation of the mobile robot and the obstacles, considering their shapes and locations.
• GA is used for global path planning by representing a continuous path as a set of connected segments. Indeed, the proposed objective function and the set of mutations lead to optimal solutions in terms of (i) minimum path length, (ii) maximum safety, and (iii) shortest trajectory execution time.
• The proposed approach allows the consideration of exact robot dimensions, location and orientation (robot geometrical properties). It ensures optimal maneuvering in a limited workspace with obstacles. Moreover, it is faster than the conventional methods found in the literature.
• The use of PCHIP path smoothing reduces accelerations and increases the average velocity of the mobile robot for faster trajectory execution in a high-resolution map representation.
• The use of feedback sensor measurements and sensor fusion allows the robot to track the path and minimize the errors during trajectory execution.
• The proposed approach allows mobile robots which have large sizes and limited speeds (industrial, etc.) to safely navigate inside a highly-constrained environment.

9. Conclusion [12] P. Beeson, J. Modayil, B. Kuipers, The mapping problem: mobile robot map-
building in the hybrid spatial semantic hierarchy, Int. J. Robot. Res. 29 (4)
(2010) 428–459.
This article presented the implementation of an off-line Kinect [13] I.H. Li, W.Y. Wang, Y.H. Chien, N.H. Fang, Autonomous ramp detection and
based optimal collision-free path planning for a differentially- climbing systems for tracked robot using Kinect sensor, Int. J. Fuzzy Syst. 15 (4)
driven indoor mobile robot The robot successfully achieved several (2013) 452–459.
tasks including (i) perception and modeling of its surrounding [14] Y.L. Ip, A.B. Rad, K.M. Chow, Y.K. Wong, Segment-based map building using
enhanced adaptive fuzzy clustering algorithm for mobile robot applications, J.
environment, (ii) finding its best collision-free smoothed path, (iii)
Intell. Robot. Syst. 35 (3) (2002) 221–245.
determination of its position and orientation instantaneously, and [15] X. Li, X. Huang, M. Wang, Robot map building from sonar sensors and DSmT,
(iv) execution of the generated path. Int. J. Inf. Security 20 (1) (2006) 104–121.
Initially, image acquisition and processing techniques were im- [16] K. Konolige, Improved occupancy grids for map building, Auton. Robots 4 (4)
plemented, allowing the mobile robot to exactly model its environ- (1997) 351–367.
ment by constructing binary safe map. Then, a Genetic Algorithm [17] S.J. Lee, D.W. Cho, W.K. Chung, Y. Lee, J.H. Lim, C.U. Kang, W.S. Yun, Evaluation
of features through grid association for building a sonar map, in: The 2006 IEEE
is implemented to generate the most suitable path starting from
International Conference on Robotics and Automation, ICRA2006, May, 2006,
a number of random paths and ending up with the optimal one, pp. 2615–2620.
ensuring that the robot will go from its initial position (Source) to [18] J. Cunha, E. Pedrosa, C. Cruz, A.J.R. Neves, N. Lau, Using a Depth Camera for
its goal position (Target) safely and rapidly. Next, Piecewise Cubic Indoor Robot Localization and Navigation, RGB.-D RSS workshop, Los Angeles,
Hermite Interpolating Polynomial is used to smooth the best found California, USA, 2011.
path. After that and since the execution of the obtained path is [19] A.A. Ramey, V. Gonzalez-Pacheco, M.A. Salichs, Integration of a low-cost RGB.-
D sensor in a social robot for gesture recognition, in: The 6th International
primordial, an adaptive fuzzy-logic controller based on Lyapunov
Conference on Human Robot Interaction, HRI 2011, Lausanne, Switzerland,
global asymptotic stability was designed to be used by the mobile 6–9 March, 2011, pp. 229–230.
robot to ensure perfect path tracking. Finally, we implemented a [20] ‘‘Willow garage - ROS 3d Contest’’, May, 2010, http://www.ros.org/wiki/
sensor fusion localization process (to estimate the robot position openni/Contests/ROS%203D/.
and orientation) while the robot tracks its path using both odome- [21] F. Fahimi, Autonomous Robots Modeling, Path Planning, and Control, Springer,
try and Kinect HSV localization; hence, a better position estimation 2009.
[22] Spyros G. Tzafestas, Introduction to Mobile Robot Control, Elsevier Inc., 2014.
is obtained.
[23] F.J. Abu-Dakka, Trajectory Planning for Industrial Robots Using Genetic Al-
Future works aim to extend the developed approach to deal gorithms, Ph.D. Thesis, Department of Mechanical and Material Engineering,
with path planning problem in larger workspaces. An array of Universitat Politècnica de València, 2011.
Kinect cameras can be installed to cover the whole environment. [24] P.G. Zavlangas, S.G. Tzafestas, Industrial robot navigation and obstacle avoid-
After that, the proposed approach will be validated in more com- ance employing fuzzy logic, J. Intell. Robot. Syst. 27 (1–2) (2000) 85–97.
[25] R. Menasri, A. Nakib, B. Daachi, H. Oulhadj, P. Siarry, A trajectory planning of
plex and dynamic environments including non-static obstacles.
redundant manipulators based on bilevel optimization, Appl. Math. Comput.
Finally, for further validation, future perspective consists of ex- 250 (2015) 934–947.
tending and testing our approach to deal with on-line path plan- [26] Md A. Hossain, I. Ferdous, Autonomous robot path planning in dynamic en-
ning problems for mobile robots (which certainly are the focus of vironment using a new optimization technique inspired by bacterial foraging
any real-world application). technique, Robot. Auton. Syst. 64 (2015) 137–141.
References

[1] A. Hentout, B. Bouzouia, R. Toumi, Multi-agent missions planning for mobile manipulators: Implementation on RobuTER/ULM, in: The 2010 IEEE International Conference on Robotics and Biomimetics, ROBIO2010, Tianjin, China, 14–18 December, 2010, pp. 1037–1042.
[2] C. Yu, Q. Qiu, X. Chen, A hybrid two-dimensional path planning model based on frothing construction algorithm and local fast marching method, Comput. Electr. Eng. 39 (1) (2013) 475–487.
[3] J. Wen, H. Xing, Multi-agent based distributed control system for an intelligent robot, in: The IEEE International Conference on Services Computing, SCC'04, China, 15–18 September, 2004, pp. 633–637.
[4] I.H. Li, Ch.-Ch. Hsu, Sh.-Sh. Lin, Map building of unknown environment based on fuzzy sensor fusion of ultrasonic ranging data, Int. J. Fuzzy Syst. 16 (3) (2014).
[5] I. Akli, B. Bouzouia, N. Achour, Motion analysis of a mobile manipulator executing pick-up tasks, Comput. Electr. Eng. 43 (1) (2015) 257–269.
[6] K. Yamazaki, M. Tomono, T. Tsubouchi, Pose planning for a mobile manipulator based on joint motions for posture adjustment to end effector error, Adv. Robot. 22 (4) (2008) 411–431.
[7] Y. Hargas, A. Mokrane, A. Hentout, O. Hachour, B. Bouzouia, Mobile manipulator path planning based on artificial potential field: Application on RobuTER/ULM, in: The 4th International Conference on Electrical Engineering, ICEE 2015, Boumerdès, 13–15 December, 2015, pp. 1–6.
[8] S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, second ed., Prentice Hall, 2003.
[9] A. Konar, L.C. Jain, Distributed planning and multi-agent coordination of robots, in: A Distributed Approach to Machine Intelligence (Advanced Information and Knowledge Processing), Springer, London, 2005.
[10] M. Harker, P. O'Leary, M.S. Kharicha, S. Eck, Calibration, measurement and error analysis of optical temperature measurement via laser induced fluorescence, in: The Instrumentation and Measurement Technology Conference, 2007, pp. 1–6.
[11] K. Arimura, N. Hagita, Feature space design for image recognition with image screening, in: The 13th International Conference on Pattern Recognition, 1996, pp. 261–265.
[18] … Indoor Robot Localization and Navigation, RGB-D RSS Workshop, Los Angeles, California, USA, 2011.
[19] A.A. Ramey, V. Gonzalez-Pacheco, M.A. Salichs, Integration of a low-cost RGB-D sensor in a social robot for gesture recognition, in: The 6th International Conference on Human Robot Interaction, HRI 2011, Lausanne, Switzerland, 6–9 March, 2011, pp. 229–230.
[20] Willow Garage - ROS 3D Contest, May 2010, http://www.ros.org/wiki/openni/Contests/ROS%203D/.
[21] F. Fahimi, Autonomous Robots: Modeling, Path Planning, and Control, Springer, 2009.
[22] S.G. Tzafestas, Introduction to Mobile Robot Control, Elsevier Inc., 2014.
[23] F.J. Abu-Dakka, Trajectory Planning for Industrial Robots Using Genetic Algorithms, Ph.D. Thesis, Department of Mechanical and Material Engineering, Universitat Politècnica de València, 2011.
[24] P.G. Zavlangas, S.G. Tzafestas, Industrial robot navigation and obstacle avoidance employing fuzzy logic, J. Intell. Robot. Syst. 27 (1–2) (2000) 85–97.
[25] R. Menasri, A. Nakib, B. Daachi, H. Oulhadj, P. Siarry, A trajectory planning of redundant manipulators based on bilevel optimization, Appl. Math. Comput. 250 (2015) 934–947.
[26] Md A. Hossain, I. Ferdous, Autonomous robot path planning in dynamic environment using a new optimization technique inspired by bacterial foraging technique, Robot. Auton. Syst. 64 (2015) 137–141.
[27] A. Dhariwal, G.S. Sukhatme, A.A.G. Requicha, Bacterium-inspired robots for environmental monitoring, in: The International Conference on Robotics and Automation, ICRA2004, USA, 2004, pp. 1436–1443.
[28] Y. Hao, S.K. Agrawal, Planning and control of UGV formations in a dynamic environment: A practical framework with experiments, Robot. Auton. Syst. 51 (2005) 101–110.
[29] B. Kovacs, G. Szayer, F. Tajti, M. Burdelis, P. Korondi, A novel potential field method for path planning of mobile robots by adapting animal motion attributes, Robot. Auton. Syst. 82 (2016) 24–34.
[30] A.V. Savkin, C. Wang, Seeking a path through the crowd: Robot navigation in unknown dynamic environments with moving obstacles based on an integrated environment representation, Robot. Auton. Syst. 62 (5) (2014) 1568–1580.
[31] A. Sipahioglu, A. Yazici, O. Parlaktuna, U. Gurel, Real-time tour construction for a mobile robot in a dynamic environment, Robot. Auton. Syst. 56 (2008) 289–295.
[32] B. Gardiner, S.A. Coleman, T.M. McGinnity, H. He, Robot control code generation by task demonstration in a dynamic environment, Robot. Auton. Syst. 60 (2012) 1508–1519.
[33] T.M. Cabreira, G.P. Dimuro, M.S. de Aguiar, An evolutionary learning approach for robot path planning with fuzzy obstacle detection and avoidance in a multi-agent environment, in: The 3rd Brazilian Workshop on Social Simulation, 2012.
[34] T.M. Cabreira, M.S. de Aguiar, G.P. Dimuro, An extended evolutionary learning approach for multiple robot path planning in a multi-agent environment, in: The IEEE Congress on Evolutionary Computation, Cancún, México, 20–23 June, 2013.
[35] P.K. Das, H.S. Behera, P.K. Jena, B.K. Panigrahi, Multi-robot path planning in a dynamic environment using improved gravitational search algorithm, J. Electr. Syst. Inf. Technol. 3 (2) (2016) 295–313.
[36] G. Cook, Mobile Robots: Navigation, Control and Remote Sensing, IEEE Press, 2011.
[37] L. Babel, Flight path planning for unmanned aerial vehicles with landmark-based visual navigation, Robot. Auton. Syst. 62 (2014) 142–150.
[38] I. Loevsky, I. Shimshoni, Reliable and efficient landmark-based localization for mobile robots, Robot. Auton. Syst. 58 (2010) 520–528.
[39] C.-J. Wu, W.-H. Tsai, Location estimation for indoor autonomous vehicle navigation by omni-directional vision using circular landmarks on ceilings, Robot. Auton. Syst. 57 (2009) 546–555.
[40] M. Beinhofer, J. Müller, W. Burgard, Effective landmark placement for accurate and reliable mobile robot navigation, Robot. Auton. Syst. 61 (2013) 1060–1069.
[41] J.-B. Hayet, H. Carlos, C. Esteves, R. Murrieta-Cid, Motion planning for maintaining landmarks visibility with a differential drive robot, Robot. Auton. Syst. 62 (2014) 456–473.
[42] J.-S. Gutmann, Markov-Kalman localization for mobile robots, in: The 16th International Conference on Pattern Recognition, ICPR 2002, Quebec, Canada, 11–15 August, 2002.
[43] C. Urrea, J. Muñoz, Path tracking of mobile robot in crops, J. Intell. Robot. Syst. 80 (2) (2015) 193–205.
[44] J.M. Hyman, Accurate monotonicity preserving cubic interpolation, SIAM J. Sci. Stat. Comput. 4 (4) (1983) 645–654.
[45] A. Hentout, M.R. Benbouali, I. Akli, B. Bouzouia, L. Melkou, A telerobotic human/robot interface for mobile manipulators: A study of human operator performance, in: The 2013 International Conference on Control, Decision and Information Technologies, CoDIT'13, Hammamet, Tunisia, 6–8 May, 2013, pp. 641–646.
[46] A. Sgorbissa, D. Verda, Structure-based object representation and classification in mobile robotics through a Microsoft Kinect, Robot. Auton. Syst. 61 (12) (2013) 1665–1679.
[47] Ch. Nuthong, Th. Charoenpong, Lane detection using smoothing spline, in: The IEEE 3rd International Congress on Image and Signal Processing, CISP2010, China, 16–18 October, 2010, pp. 989–993.
[48] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8 (6) (1986) 679–698.
[49] A. Hentout, B. Bouzouia, R. Toumi, Z. Toukal, Agent-based coordinated control of mobile manipulators: Study on hardware system of a mobile manipulator, in: The International Conference on Systems and Information Processing, ICSIP'09, Guelma, Algeria, 02–04 May, 2009.
[50] J.R. Smith, S.F. Chang, Single color extraction and image query, in: The International Conference on Image Processing, ICIP-95, Vol. 3, Washington, USA, 23–26 October, 1995, pp. 528–531.
[51] S. Sural, G. Qian, S. Pramanik, Segmentation and histogram generation using the HSV color space for image retrieval, in: The 2002 International Conference on Image Processing, ICIP2002, New York, USA, 22–25 September, 2002, pp. 589–592.
[52] E. Babaians, N.K. Korghond, A. Ahmadi, M. Karimi, S.S. Ghidary, Skeleton and visual tracking fusion for human following task of service robots, in: The 2015 3rd RSI International Conference on Robotics and Mechatronics, ICROM2015, Tehran, Iran, 7–9 October, 2015, pp. 761–766.
[53] H.K. Khalil, Nonlinear Systems, second ed., Prentice-Hall, 1996, ISBN 0-13-228024-8.
Azzeddine Bakdi received his Master's degree in Control in 2015 from the "Institut de Génie Electrique et Electronique (IGEE)" of the "University of M'hamed Bougara Boumerdes" (Algeria). He is currently a Ph.D. student at the same university (UMBB). His research interests include robotics, path planning, soft computing and image processing.

Abdelfetah Hentout is affiliated with the SRP (Systèmes Robotisés de Production) research team at CDTA (Centre de Développement des Technologies Avancées). He obtained his Ph.D. degree in Robotics in 2012 from the "University of Sciences and Technology Houari Boumediene" (Algeria). His research mainly focuses on multi-agent systems, control architectures, mobile manipulation, multi-robot systems, optimal path planning, telerobotics and cyber-physical systems.

Hakim Boutami received his Master's degree in Control in 2015 from the "Institut de Génie Electrique et Electronique (IGEE)" of the "University of M'hamed Bougara Boumerdes" (Algeria). His fields of study include control systems engineering, system identification and modeling, and industrial automation. He is currently a Field Engineer at "High Tech Systems Company".

Abderraouf Maoudj received his Master's degree in Control and Systems Analysis in 2012 from the University of Jijel (Algeria). He is currently a Ph.D. student at the "University of Sciences and Technology Houari Boumediene" (Algeria). He is also affiliated as a research engineer with the SRP (Systèmes Robotisés de Production) research team at CDTA (Centre de Développement des Technologies Avancées). His research interests include multi-robot systems, control architectures, robotized flexible manufacturing systems, optimal path planning, and flexible job-shop planning and scheduling.

Ouarda Hachour is a teacher at the "Institut de Génie Electrique et Electronique (IGEE)" of the "University of M'hamed Bougara Boumerdes" (Algeria). Her main interests are in the fields of electrical and control systems engineering, robotics, intelligent systems and adaptive control theory.

Brahim Bouzouia is a Director of Research at CDTA (Centre de Développement des Technologies Avancées). He is currently the leader of the SRP (Systèmes Robotisés de Production) research team. He obtained a Ph.D. degree in robotics and automation in 1989 from Paul Sabatier University and the LAAS/CNRS Laboratory (France). His research interests include multi-agent systems, flexible manufacturing systems, smart factories, multi-robot systems, mobile manipulation and teleoperation.