
Faculty of Engineering

Department of Electrical & Computer Engineering

EE4306 – Distributed Autonomous Robotic Systems

2009/2010 Semester 2 Project

Group 7

Li Ruixiong U076182R

Mohamed Isa Bin Mohamed Nasser U076183A


EE4306 – Distributed Autonomous Robotic Systems – 2009/2010 Semester 2 Project

Table of Contents

1. Introduction
2. Virtual Environment
   2.1 Overview
   2.2 Implementation
3. Overtaking of moving obstacle
   3.1 Sensors
   3.2 Lane Positioning
   3.3 Control
   3.4 Routine
4. Road Following
   4.1 Considerations in Choosing the Algorithm
   4.2 Line Extraction
   4.3 Regression
   4.4 Control for Line Tracking
   4.5 Centering Control
   4.6 Parallel Control
   4.7 Weighting the Inputs
   4.8 Results
   4.9 Further Work for Road Following
      4.9.1 Supervised Tuning
      4.9.2 Unsupervised Tuning
5. Integration: Advanced Overtaking Circuit
6. Conclusion


1. Introduction

In this project, we were exposed to the use of Microsoft Robotics Studio as a platform for creating a virtual environment consisting of multiple mobile robots. The tasks involved include designing the virtual environment, path planning, and the algorithmic approaches needed for the mobile robots to complete certain tasks.

These tasks, as outlined in the project requirements, consist of two primary objectives: the first is the overtaking of a moving obstacle, while the second involves path planning using waypoints. Once both tasks are able to function independently, they are integrated to form the basis of this project. Additional components are then added on top of this to form the bonus component, where we explore solutions to more complex scenarios.

In this project we also attempt to use C# instead of the conventional VPL approach. This adds flexibility to the program, as interfacing with other programs like MATLAB becomes possible.

Even the C# implementation can take various approaches: user-edited services, interfacing with the manifest editor, or running purely on C# using the classes from MRDS's simulation engine. Our team used the second approach, where the environment is still designed using the manifest editor and our C# program interfaces with the drives and sensors specified within that manifest, just as VPL would.

The use of C# allows control logic to be implemented easily; the main difficulty, however, is getting the initial interfacing working, as documentation on this approach is scarce. The interfacing is done by first specifying the location of the manifest, and then using proxy services to connect to the actual services within the manifest. This connection of proxy services to actual services is performed during initialisation for all motors and sensors within the manifest.


2. Virtual Environment

2.1 Overview

FIGURE 1: OVERVIEW OF VIRTUAL ENVIRONMENT USED

The track used in this project adopts a shape similar to that of most stadium tracks. It is made up of a single-lane carriageway consisting of two straight sections and two curved sections. Overtaking is only possible along the straight sections; this can be inferred from real-life scenarios, where roads along bends are almost always marked with a single line that prohibits overtaking due to safety concerns.

Bends are usually potential areas for accidents, and it is not uncommon to find accident-prone zones along such bends. Thus, in this project we attempt to simulate such a scenario: all robots entering the accident-prone area have to slow down and observe a lower speed limit.


2.2 Implementation

In our first attempt at designing the track, we made a 3D model using “Blender” and then converted the output file into a “.bos” file supported by MRDS. The mesh loaded perfectly when we tried it in the manifest editor; however, we experienced difficulties loading the manifest through our C# program.

FIGURE 2 : 3D OBJECT USING BLENDER

FIGURE 3 : CURVY TRACK USING RHINO

In order to overcome this problem, we implemented a workaround. Instead of loading the 3D mesh, we modify the texture of the ground using the 2D top view of our track. Finally, we size that entity to scale so that it matches the proportions of the robot we use. The end result is a visual layout of the track, which is sufficient for our track-following algorithm.


3. Overtaking of moving obstacle

3.1 Sensors

For the implementation of the overtaking routine, we predominantly use the Laser Range Finder (LRF) for sensing. The LRF provides distance measurements through a 180-degree sweep with a 0.5-degree resolution. This not only allows us to detect whether obstacles are present, but also their relative distances.

A limitation of the LRF, however, is its demand on processor speed. Initial testing using the LRF on a laptop returned invalid readings when the robot moved at high speed. This was later found to be due to inadequate processing power, and a workaround was to test with a slower real-time scale in the physics of the environment, or on a more powerful computer.

As it is difficult to get a full picture from a single LRF value, different angular ranges were used. We split the LRF sweep into segments as shown in the figure below. While not all of the segments are used by the algorithm of the overtaking routine, they give the robot the ability to read its surroundings and allow us to visually depict its state in our user interface.

FIGURE 4: SEGMENTS USED IN LRF

Here, the minimum value within each segment is stored, providing an approximation of the general direction and distance of the nearest obstacle within each segment of the arc.
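The project reads the LRF from C#; as an illustrative Python sketch, taking the per-segment minimum over a 180-degree, 0.5-degree-resolution sweep could look like the following (the segment boundaries used below are assumptions, not the project's actual values):

```python
def segment_minima(readings, segments):
    """Minimum distance in each angular segment of one LRF sweep.

    readings: distances, one per 0.5-degree step over 180 degrees (361 values)
    segments: (start_deg, end_deg) pairs; the boundaries here are assumptions
    """
    minima = []
    for start, end in segments:
        lo, hi = int(start * 2), int(end * 2)  # 0.5-degree steps: 2 indices per degree
        minima.append(min(readings[lo:hi + 1]))
    return minima
```

For example, with three equal segments, an obstacle at the 90-degree mark only lowers the middle segment's minimum, which is what lets the robot localise the nearest obstacle by direction.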


The choice of 17 degrees for the middle segment is also not without reason. As the width of a single lane is experimentally found to be 1.5 m in the virtual environment, we can consider the typical case where this reading will be used. As our lane positioning algorithm (described in the next section) functions very well, we can safely assume the active robot will be somewhere near the centre of the lane. The threshold used to activate the overtaking is set to 5 m. Thus, with the above information, we can design the middle segment to function as desired.
FIGURE 5 : TRAJECTORY FOR CENTRE SEGMENT (robot offset 0.75 m from the lane centre, 5 m trigger distance, 1.5 m lane width)

In the worst case, the robot will be somewhere along the edge of the lane, and by simple trigonometry we can compute the angle to be tan^-1(0.75/5) ≈ 8.53 degrees. This is the angle on either side of the midpoint, so the total angle for that segment is 8.53 x 2 ≈ 17 degrees.
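This arithmetic can be checked numerically with the values given in the text (0.75 m worst-case offset, 5 m trigger distance):

```python
import math

half_lane = 0.75   # worst-case lateral offset from the lane centre (m)
threshold = 5.0    # overtaking trigger distance (m)

# Half-angle subtended at the robot by the worst-case offset at the trigger distance.
half_angle = math.degrees(math.atan(half_lane / threshold))  # ~8.53 degrees
segment_width = 2 * half_angle                               # ~17 degrees total
```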

3.2 Lane Positioning

Lane positioning for the active robot along the straight path is done using the LRF and the compass reading. However, for bends and for the passive robots, we use vision alone, showcasing two distinct approaches to this problem.

In order to ensure the robot positions itself in the middle of the lane, two values are needed: the orientation and the distance. The orientation of the robot is handled by simple proportional control, using data from the compass to readjust the robot to the desired angle. The distance reading is obtained by detecting the pavement using the LRF. A threshold of 0.1 m from the desired midpoint is chosen, and simple heuristics and corrective actions are implemented to ensure the robot operates within this desired region. This method has been tested to be stable at high speed and even with a low update rate.
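The project implements this in C#; a minimal Python sketch of the idea, combining proportional heading correction with corrective action outside the 0.1 m deadband, might look like this (the gains and the sign convention are assumptions):

```python
def lane_correction(heading_err_deg, lateral_err_m,
                    k_heading=0.01, k_lateral=0.5, deadband=0.1):
    """Steering command from the compass heading error and the LRF-derived
    lateral error to the lane midpoint. Gains and sign convention assumed."""
    steer = k_heading * heading_err_deg      # proportional heading correction
    if abs(lateral_err_m) > deadband:        # act only outside the 0.1 m threshold
        steer += k_lateral * lateral_err_m
    return steer
```

The deadband keeps the robot from chattering about the centreline: small lateral errors produce no distance correction at all, so only the compass term is active most of the time.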
FIGURE 6 : LANE POSITIONING THRESHOLD

3.3 Control

We adopt a variation of conventional proportional control for the control signals to the motor drives, as explained below.

A common mistake is the computation of the angle difference dA. For instance, when the angles are 359 and 1 degrees, we want the magnitude of dA to be 2 and not 358. This can be done using simple condition checks, as shown below.


// Signed heading error between the current compass reading and the desired angle.
dA = carlist[robotNo]._curCompass[0] - _overtakeDesiredAngle;
// Wrap into [-180, 180) so that e.g. 359 vs 1 gives -2, not 358.
if (dA < -180.0) dA += 360.0;
if (dA >= 180.0) dA -= 360.0;

Also, for our control we penalise the linear speed using the difference between the desired and current angles, to obtain a varying speed. For instance, for every degree of angle difference, we drop the linear constant by about 3%, which indirectly affects the maximum linear speed achievable at that instant. A cap on the speed deduction and a minimum constant speed are also in place, to ensure the robot still moves at a minimum speed when the angle difference is very large. This can be implemented with the following code.

// kD is initially set to a nominal value, e.g. kD = 0.75.

double dAcontrol = Math.Abs(dA) * 12.5;      // scale the angle error
if (dAcontrol > 180.0f) dAcontrol = 180.0f;  // saturate the penalty
// Blend 40% of the previous kD with 60% of the penalised value.
kD = 0.4*kD + 0.6 * (1 - (dAcontrol/180.0f)) * kD;

Therefore, if a large correction in angle is required, the speed is lower, and when the difference in angle is small, the robot runs at full potential.
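Transcribing the C# snippet above into Python (with kD starting at 0.75, as in the comment) makes the behaviour easy to inspect: the penalty saturates once the scaled error reaches 180, leaving kD at 40% of its previous value.

```python
def penalise(kD, dA):
    """One update of the linear-speed constant kD given angle error dA (degrees).
    Mirrors the C# snippet: scale the error by 12.5, cap at 180, then blend
    40% of the previous kD with 60% of the penalised value."""
    dAcontrol = min(abs(dA) * 12.5, 180.0)
    return 0.4 * kD + 0.6 * (1 - dAcontrol / 180.0) * kD
```

A zero error leaves kD unchanged, a 1-degree error trims it by a few percent per update (the effect compounds since kD feeds back into the next update), and a 20-degree error or larger saturates the penalty at 0.4 kD.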

3.4 Routine

The overtaking routine is implemented using states and its flowchart is as shown below.

FIGURE 7 : FLOWCHART FOR OVERTAKING ROUTINE


Moving Straight
The active robot is in the moving-straight state. It detects an obstacle within a predefined distance using the front segment of the LRF; for this project, the distance threshold is set to 5 m.

Steer Right
The active robot enters the next state and steers right. It however still detects an obstacle in front of it.

Returning
The active robot enters the returning state. It slows down and at the same time steers back towards its original lane.


Safe Following Distance
Upon returning to its original lane, the active robot realigns itself, maintaining a safe following distance. The safe following distance is set to 2.5 m, which is half the overtaking distance threshold.

Straight -> Steer Right
The active robot loops back to the straight state, and since it detects the robot in front within the distance threshold, it transits into the steer-right state.

Realigns
Upon reaching the next lane, the active robot realigns itself to the lane.

Speeds Up
The active robot speeds up. During this state, an additional speed boost is given to allow the robot to reach the maximum speed of 1.


Steer Left
Once the robot detects that the segment on its original lane is clear, it transits into the steer-left state to return to its original lane.

Straight
The active robot returns to its original lane and completes the overtaking. Upon detection of the green arrow, it transits into the track-following state, which follows the road path based on vision alone. More details of this are covered in the later chapters.


4. Road Following

4.1 Considerations in Choosing the Algorithm

The useful features in the road image are the line markings. There are a few considerations in choosing the right line-following algorithm.

The first consideration is how ideal the world is assumed to be. If the colours of the line markings are not assumed to be unique, line extraction algorithms like the Hough transform, preceded by conditioning algorithms like the Canny filter, have to be run.

The second consideration is the balance between computational requirements and the number of updates needed to keep the robot from steering off. This is not a trivial issue: if every update can guarantee path following for a longer period of time, fewer updates are needed, and hence the algorithm can take a longer time to run. However, extracting curved paths from the road does not translate directly into the actual movements required, since the lines are obtained in perspective view.

It also has to be considered that a single CPU performs these computations for multiple agents. This is not true in real life, where each agent would have its own CPU.

4.2 Line Extraction

Due to the computational requirement of running multiple agents, simple colour extraction is chosen; the road line colours are assumed to be unique. The picture is parsed for a range of colours for the yellow lines and the white line. The next problem to tackle is differentiating between the two yellow lines, as shown in Figure 8.
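Assuming unique, known RGB ranges for the line colours, the extraction step can be sketched as follows (the exact thresholds used in the project are not stated, so the bounds below are illustrative assumptions):

```python
def extract_pixels(image, lo, hi):
    """Return (x, y) positions of pixels whose RGB lies inside [lo, hi].

    image: rows of (r, g, b) tuples; lo, hi: inclusive per-channel bounds,
    e.g. yellow roughly (200, 200, 0)-(255, 255, 80) (an assumption).
    """
    points = []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
                points.append((x, y))
    return points
```

Running this once per colour range yields the yellow and white pixel sets that the regression step below operates on.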


FIGURE 8: COLOUR EXTRACTION OF YELLOW AND WHITE PIXELS

4.3 Regression

In order to separate the two yellow lines, the pixels obtained from the colour extraction of the white line that separates them are taken as points. A straight-line fit of this line is obtained through polynomial regression of order 1.

This equation is then used to separate the remaining yellow pixels into two groups, one for the right yellow line and the other for the left. Two straight-line fits of these groups are then obtained. The whole process yields six parameters defining the equations of three straight lines, which approximate the road lines over a short distance.
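A sketch of this step in Python (the project does it in C#): an order-1 least-squares fit of the white pixels, then a split of the yellow pixels by which side of the fitted line they fall on. Since the lines run roughly vertically in the image, x is regressed on y here; that choice of parameterisation is an assumption.

```python
def fit_line(pts):
    """Order-1 polynomial regression x = m*y + c over (x, y) pixel points."""
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    syy = sum(p[1] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    denom = n * syy - sy * sy
    m = 0.0 if denom == 0 else (n * sxy - sx * sy) / denom
    c = (sx - m * sy) / n
    return m, c

def split_yellow(white_pts, yellow_pts):
    """Split yellow pixels into left/right groups about the white-line fit."""
    m, c = fit_line(white_pts)
    left = [p for p in yellow_pts if p[0] < m * p[1] + c]
    right = [p for p in yellow_pts if p[0] >= m * p[1] + c]
    return left, right
```

The two groups are then each passed back through `fit_line`, giving the six parameters mentioned above.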

4.4 Control for Line Tracking

The inspiration for controlling the differential-drive robot comes from Braitenberg vehicles, where simple sensory inputs are connected directly to the actuators to produce abstract behaviours. Although this robot does not qualify as a Braitenberg vehicle, due to its rich sensors and the relatively complex image processing applied to the sensory inputs, the method whereby these inputs are used directly to weight the left and right wheel speeds, without any trajectory planning, is similar to the Braitenberg approach. One can also see this control as a single-layer Perceptron.


4.5 Centering Control

The indication that the agent is in the middle of the track is the intersection of the immediate lane lines with the left and right borders of the image captured by the webcam. These intersections should be at the same vertical position, and of a fixed value if the lane width is constant.
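As a sketch, assuming each lane line has been fitted as x = m*y + c (with image rows as y, as in the regression sketch), inputs 1A and 1B can be taken as the deviation of each border intersection from the reference row recorded when the robot is centred:

```python
def centering_inputs(left_line, right_line, width, ref_y):
    """Inputs 1A/1B: deviation of each lane line's image-border intersection
    from the reference row ref_y. Lines are (m, c) with x = m*y + c; this
    representation, and non-degenerate slopes (m != 0), are assumptions."""
    mL, cL = left_line
    mR, cR = right_line
    y_left = (0 - cL) / mL           # left lane line meets the left border (x = 0)
    y_right = (width - 1 - cR) / mR  # right lane line meets the right border
    return y_left - ref_y, y_right - ref_y
```

When the robot is centred, both deviations are zero; a lateral offset raises one intersection and lowers the other, giving the controller a signed pair of inputs.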

FIGURE 9 : INPUTS FOR CENTERING THE ROBOT (Input 1A; Input 1B = 0)

4.6 Parallel Control

The indication that the agent is moving parallel to the road can be deduced from the vanishing point of the line markings. The vanishing point is the point where parallel lines appear to intersect in the projection plane.

The vanishing point for a set of parallel lines running straight ahead of the camera is recorded experimentally from a webcam snapshot. The vertical differences between this point and the intersections of the lane lines with the vertical midline of the image (drawn as the white line in Figure 9) are used as the second set of inputs.

FIGURE 10 : INPUTS FOR KEEPING THE ROBOT PARALLEL (Input 2A; Input 2B = 0)


4.7 Weighting the Inputs

Viewing the control as a single-layer Perceptron, the inputs are scaled and limited with a sigmoid instead of weighted by scaling alone. Hence there are two parameters to tune for each edge: the scaling factor k and the limiting value n. The reason for this is, firstly, to make sure “spikes” from erroneous line extraction do not propagate to the output; secondly, it is found that such an asymptotic profile produces a smoother trajectory.

Each input 1A (i1), 1B (i2), 2A (i3) and 2B (i4) passes through a weighting function of the form w(i) = n / (1 + e^(-tk)); the activation function a(i) takes the same sigmoid form.

FIGURE 11 : SINGLE LAYER PERCEPTRON USED FOR CONTROL
The activation function used for the final output is also a sigmoid, for the same reason. This output is subtracted from the set straight-line velocity. The straight-line velocity defines the velocity used on perfectly straight tracks, where no turning adjustments are required.
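This weighting scheme can be sketched as follows (the parameter values are assumptions; the project's tuned values are not given):

```python
import math

def sigmoid_weight(t, k, n):
    """Scaled, limited sigmoid n / (1 + e^(-t*k)): the output saturates at n,
    so 'spikes' from erroneous line extraction cannot dominate."""
    return n / (1 + math.exp(-t * k))

def turning_output(inputs, ks, ns, k_out, n_out):
    """Single-layer perceptron: sigmoid-weight each of the four inputs, sum,
    then apply a sigmoid activation. The result is what gets subtracted
    from the straight-line velocity."""
    total = sum(sigmoid_weight(t, k, n) for t, k, n in zip(inputs, ks, ns))
    return sigmoid_weight(total, k_out, n_out)
```

Note one property of this form: at t = 0 the sigmoid outputs n/2 rather than 0, so any constant offset has to be absorbed during tuning of the straight-line velocity.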

4.8 Results

With some manual tuning, very good results have been observed. The robot is able to move at a straight-line velocity of 1, as shown in Figure 12, and to negotiate many curves in succession, as shown in Figure 13.


FIGURE 12 : MAXIMUM STRAIGHT LINE SPEED FIGURE 13 : ROAD WITH MULTIPLE BENDS

Some oscillation can be observed; however, it can be reduced with better tuning and higher processor speeds.

4.9 Further Work for Road Following

These weights are vital to the performance of the controller. Tuning manually is possible and has been done for this project; however, supervised and unsupervised tuning methods should be explored.

4.9.1 Supervised Tuning

A perfect sequence of image captures and the corresponding output values should be recorded. This can then be used to tune the weights using backpropagation. The parameter n should be tuned in this manner since it scales the actual output from the neuron. The parameter k should be tuned manually, based on the range of values of the corresponding input, such that it does not saturate the sigmoid all the time.

4.9.2 Unsupervised Tuning

Unsupervised learning is also possible. Code has been written so that when the robot cannot detect any lines, it keeps rotating, searching for them; at that moment, the robot can be programmed to return to the lanes. Unsupervised learning can be set up such that the start/end point is marked, enabling the robot to identify that it has travelled one lap.

The number of oscillations (which can be deduced from the wheel speeds) and the number of times the robot goes off track (which can be deduced from losing track of the lines) are minimised using a cost function. The weights are changed every lap using search algorithms.
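A minimal sketch of such a per-lap tuning loop (the cost weights and the greedy search step are assumptions; the text only specifies that a cost function and a search algorithm are used):

```python
def lap_cost(oscillations, off_track, w_osc=1.0, w_off=10.0):
    """Cost for one lap: oscillations (deduced from wheel speeds) plus a
    heavier penalty for losing the lines. The weights are assumptions."""
    return w_osc * oscillations + w_off * off_track

def tune_step(weights, index, delta, cost_before, run_lap):
    """One greedy search step: perturb one weight, keep the change only if
    the lap cost improves. run_lap(weights) -> (oscillations, off_track)."""
    trial = list(weights)
    trial[index] += delta
    cost_after = lap_cost(*run_lap(trial))
    if cost_after < cost_before:
        return trial, cost_after
    return list(weights), cost_before
```

Repeating `tune_step` across weights and laps gives a simple hill-climbing search; richer search algorithms could be substituted without changing the cost function.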


5. Integration: Advanced Overtaking Circuit

FIGURE 14 : INTEGRATION (active and passive robots)

After the required tasks are capable of operating independently, integration together with new features is performed. The robustness of our algorithms allows the robots to be placed at arbitrary positions. However, for ease of explanation we adopt the initial positions shown in Figure 14, with red being the active robot and blue being the passive robot.

The active robot is armed with overtaking and intrinsic speed-varying capabilities, while the passive robots traverse the map using vision-based road following. Both, however, have to observe the general speed limit imposed on certain areas of the track.

Upon commencement of the program proper, the outer-lane robots traverse the track in a clockwise manner and the inner robot in a counter-clockwise manner, in line with the road markings.


In the initial phase, the robots perform overtaking, taking oncoming traffic into consideration, similar to the process explained in Chapter 3.4.

FIGURE 15 : ROAD MARKINGS FOR STRAIGHTS AND BENDS

The green and purple road markings on the ground set a flag for the active robot; this process also depends on whether the active robot is travelling on the inner or outer lane. This flag lets the active robot know whether the overtaking routine should take precedence over the generic vision-based road-following algorithm.

Typically, on a straight path the corresponding markers set this flag, giving priority to the overtaking routine, while on a curved bend the respective marker resets it to false. In the event that the active robot is already in the process of overtaking when the flag disables overtaking, steps have been implemented to ensure a smooth transition to end the overtaking, or to return to lane, before handing over to the vision-based road following.

FIGURE 16 : ROAD MARKING ALONG STRETCH OF ACCIDENT PRONE AREA

Another unique feature in this project is the accident-prone stretch. Here, road markings have been demarcated to notify the drivers/robots to slow down and observe a lower speed limit due to the high accident rate, something easily related to real life. Upon detecting the blue slow marker on the road, a flag within the robot is set and the weights within the generic vision-based road following are adjusted to reflect a slower speed. This reduction in speed can be observed not only through the motion of the robot but also from the visual display of our user interface, as shown in Figure 17.

FIGURE 17 : ROBOT 2 AT ACCIDENT PRONE AREA

FIGURE 18 : ROBOT 0 AT STRAIGHT SECTION

We also note that the speed used throughout this project is relatively fast: only along the accident-prone stretch do we force a lower speed limit to simulate situational awareness, while maximum speed is reached during overtaking. What we achieve as a result is a robust, versatile, fast-paced set of movements that is capable of handling various situations.


6. Conclusion

Through this project, we gained a greater understanding of the MRDS environment as well as the working mechanics behind mobile robots. What we have achieved is a practical, working algorithm that uses assumptions similar to those we find in daily life. We have also mimicked the British-style lane markings found in Singapore to keep the simulation realistic.

The one consistent feature for road following that exists on all roads is a set of lines, or a “visible path”, defining the road, which has to be extracted through vision. From the lines ahead, a certain control input is required to follow the short segment.

This is much like how humans drive cars; we have learnt to identify lanes and the steering required to negotiate a certain path ahead. Laser range finders and other sensors can complement the vision data by providing timely depth information for specific manoeuvres, in this case overtaking. This fusion mimics stereo vision in humans, which would have been too slow to implement in this project.

We note that while the computation for our program might take a while due to hardware limitations, we are actually accessing and computing for four robots at every instant. If we were to implement actual physical autonomous robots, every robot would probably have its own processor, and this computational limitation would become less of an issue.
