$\bar{v}_x(x, y)$ and $\bar{v}_y(x, y)$, where $\bar{v}_x(x, y)$ is a weighted average of $v_x$ calculated in a neighborhood around the pixel at location $(x, y)$. Using this notation the above equation system may be written
$$\left(I_x^2 + \alpha^2\right) v_x + I_x I_y v_y = \alpha^2 \bar{v}_x - I_x I_t \tag{11}$$

$$I_x I_y v_x + \left(I_y^2 + \alpha^2\right) v_y = \alpha^2 \bar{v}_y - I_y I_t \tag{12}$$
which is linear in $v_x$ and $v_y$ and may be solved for each pixel in the image. However, since the solution depends on the neighboring values of the flow field, it must be recomputed once the neighbors have been updated. The following iterative scheme is derived:
$$v_x^{k+1} = \bar{v}_x^{\,k} - \frac{I_x\left(I_x \bar{v}_x^{\,k} + I_y \bar{v}_y^{\,k} + I_t\right)}{\alpha^2 + I_x^2 + I_y^2} \tag{13}$$

$$v_y^{k+1} = \bar{v}_y^{\,k} - \frac{I_y\left(I_x \bar{v}_x^{\,k} + I_y \bar{v}_y^{\,k} + I_t\right)}{\alpha^2 + I_x^2 + I_y^2} \tag{14}$$
where the superscript $k+1$ denotes the next iterate to be calculated and $k$ the last calculated result. This is in essence the Jacobi method applied to the large, sparse system that arises when solving for all pixels simultaneously.
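In code, one sweep of this Jacobi-style update is straightforward. The sketch below assumes NumPy/SciPy and precomputed derivative arrays `Ix`, `Iy`, `It`; the 3x3 averaging kernel is one common choice for the weighted neighborhood mean $\bar{v}$ and is an assumption, not something the text prescribes.

```python
import numpy as np
from scipy.signal import convolve2d

def horn_schunck(Ix, Iy, It, alpha=1.0, n_iters=100):
    """Iterate the update of Eqs. (13)-(14) on dense derivative fields."""
    vx = np.zeros_like(Ix, dtype=float)
    vy = np.zeros_like(Iy, dtype=float)
    # Weighted-average kernel (center excluded), a common choice for v-bar.
    kernel = np.array([[1., 2., 1.],
                       [2., 0., 2.],
                       [1., 2., 1.]]) / 12.0
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iters):
        vx_bar = convolve2d(vx, kernel, mode='same', boundary='symm')
        vy_bar = convolve2d(vy, kernel, mode='same', boundary='symm')
        # Shared numerator term of Eqs. (13) and (14).
        common = (Ix * vx_bar + Iy * vy_bar + It) / denom
        vx = vx_bar - Ix * common
        vy = vy_bar - Iy * common
    return vx, vy
```

Because each sweep uses only the previous iterate's neighborhood averages, the update parallelizes over pixels, which is exactly the Jacobi behavior noted above.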
3 Time To Contact ($T_c$)
Let the translation velocity of the environment be $W = (W_x, W_y, W_z)$ and the angular velocity be $\Omega = (\Omega_x, \Omega_y, \Omega_z)$; then the motion of a scene point $R$ is defined as

$$\dot{R} = -W - \Omega \times R. \tag{15}$$
In what follows, $r = (x, y)$ denotes image plane coordinates while $R = (X, Y, Z)$ denotes real-world coordinates.
In other words, the above equation can be written as

$$\begin{bmatrix} \dot{X} \\ \dot{Y} \\ \dot{Z} \end{bmatrix} = -\begin{bmatrix} W_x \\ W_y \\ W_z \end{bmatrix} - \begin{bmatrix} \Omega_y Z - \Omega_z Y \\ \Omega_z X - \Omega_x Z \\ \Omega_x Y - \Omega_y X \end{bmatrix} \tag{16}$$
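As a quick numerical check of Eqs. (15)-(16), here is a minimal sketch (assuming NumPy; the function name and example values are purely illustrative):

```python
import numpy as np

def point_velocity(R, W, Omega):
    """Eq. (15): velocity of a scene point R = (X, Y, Z) relative to the
    camera, given translation W and angular velocity Omega."""
    return -np.asarray(W, dtype=float) - np.cross(Omega, R)

# Pure rotation about the optical axis: a point at X = 1 acquires a
# Y-velocity of -0.5, matching the cross-product column of Eq. (16).
print(point_velocity(R=(1.0, 0.0, 2.0), W=(0, 0, 0), Omega=(0, 0, 0.5)))
```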
Assuming the image plane lies at $f = 1$, we have $x = X/Z$ and $y = Y/Z$; differentiating gives
$$\dot{x} = \frac{\dot{X} Z - \dot{Z} X}{Z^2} \tag{17}$$

$$\dot{y} = \frac{\dot{Y} Z - \dot{Z} Y}{Z^2} \tag{18}$$
Substituting $\dot{X}$, $\dot{Y}$, and $\dot{Z}$ from Eq. (16) into Eqs. (17) and (18), one gets the following equations for the $x$ and $y$ components of the image velocity:
$$\begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = \frac{1}{Z}\begin{bmatrix} -1 & 0 & x \\ 0 & -1 & y \end{bmatrix}\begin{bmatrix} W_x \\ W_y \\ W_z \end{bmatrix} + \begin{bmatrix} xy & -(1+x^2) & y \\ 1+y^2 & -xy & -x \end{bmatrix}\begin{bmatrix} \Omega_x \\ \Omega_y \\ \Omega_z \end{bmatrix} \tag{19}$$
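Eq. (19) is easy to evaluate pointwise. The following sketch (plain Python; names and example values are illustrative) computes the image velocity at a pixel from the motion parameters:

```python
def flow_from_motion(x, y, Z, W, Omega):
    """Evaluate Eq. (19): image velocity (Vx, Vy) at image point (x, y)
    for translation W = (Wx, Wy, Wz), rotation Omega = (ox, oy, oz),
    depth Z, and focal length f = 1."""
    Wx, Wy, Wz = W
    ox, oy, oz = Omega
    Vx = (-Wx + x * Wz) / Z + x * y * ox - (1 + x**2) * oy + y * oz
    Vy = (-Wy + y * Wz) / Z + (1 + y**2) * ox - x * y * oy - x * oz
    return Vx, Vy

# Pure translation along the optical axis: the flow points radially
# away from the image center, scaled by 1/Z.
print(flow_from_motion(x=0.1, y=0.2, Z=5.0, W=(0.0, 0.0, 1.0),
                       Omega=(0.0, 0.0, 0.0)))
```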
The divergence of the flow field is computed to estimate the time to contact ($T_c$) [6], [7]. Thus the equations for the components of optical flow due to general camera motion (arbitrary translation and rotation) in a stationary environment can be written as
$$V_x = \frac{1}{Z}\left(-T_x + x T_z\right) + xy\,\Omega_x - \left(1 + x^2\right)\Omega_y + y\,\Omega_z \tag{20}$$

$$V_y = \frac{1}{Z}\left(-T_y + y T_z\right) + \left(1 + y^2\right)\Omega_x - xy\,\Omega_y - x\,\Omega_z, \tag{21}$$
where $Z$ is the depth of the object in the environment relative to the camera, and $(T_x, T_y, T_z)$ and $(\Omega_x, \Omega_y, \Omega_z)$ are the translational and rotational motion of the environment relative to the camera. The divergence of the optical flow field [parametrized by image coordinates $(x, y)$] is defined by
$$\nabla \cdot (V_x, V_y) = \frac{\partial V_x}{\partial x} + \frac{\partial V_y}{\partial y}. \tag{22}$$
Also,

$$\frac{\partial V_x}{\partial x} = \sigma_x\left(-T_x + x T_z\right) + \sigma T_z + y\,\Omega_x - 2x\,\Omega_y \tag{23}$$

$$\frac{\partial V_y}{\partial y} = \sigma_y\left(-T_y + y T_z\right) + \sigma T_z + 2y\,\Omega_x - x\,\Omega_y, \tag{24}$$
where $\sigma = 1/Z$ and $\sigma_x$, $\sigma_y$ are its partial derivatives. Evaluating at $(x, y) = (0, 0)$ gives
$$\nabla \cdot (V_x, V_y) = -\sigma_x T_x + 2\sigma T_z - \sigma_y T_y, \tag{25}$$
or, when the gradient of the imaged surface is perpendicular to the transverse velocity (so that $\sigma_x T_x + \sigma_y T_y = 0$),

$$\nabla \cdot (V_x, V_y) = 2\sigma T_z. \tag{26}$$

The time to contact is therefore

$$T_c = \frac{Z}{T_z} = \frac{2}{\nabla \cdot (V_x, V_y)}. \tag{27}$$
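Numerically, $T_c$ can be estimated from a dense flow field with finite differences, as in this sketch (assuming NumPy; the central averaging window is an arbitrary robustness choice, not part of the derivation):

```python
import numpy as np

def time_to_contact(Vx, Vy, spacing=1.0, half_win=5):
    """Estimate Tc = 2 / div(Vx, Vy) (Eq. 27) from flow arrays on a grid."""
    dVx_dx = np.gradient(Vx, spacing, axis=1)   # dVx/dx of Eq. (22)
    dVy_dy = np.gradient(Vy, spacing, axis=0)   # dVy/dy of Eq. (22)
    div = dVx_dx + dVy_dy
    # Average the divergence in a small window at the image center,
    # since the derivation above evaluates it at (x, y) = (0, 0).
    h, w = div.shape
    center = div[h//2 - half_win:h//2 + half_win,
                 w//2 - half_win:w//2 + half_win].mean()
    return 2.0 / center                          # Eq. (27)
```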
4 Collision Detection and Steering Computation
4.1 Object Detection
To detect objects, the motion information is first extracted from the image sequence using temporal derivatives. From the camera, sequential images (video frames) 1, 5, 9, and 13 are digitized and stored, and the temporal derivatives are computed as in Eqs. (28)-(29) below.
Vision data alone is not adequate for detecting objects: when the image difference is taken for the optical flow computation (as shown in Fig. 4.1.2), slow-moving objects yield only very small optical flow values, so an ultrasonic sensor is used to find them. Thus both vision and ultrasonic data are used to detect objects.
The main task is to avoid obstacles while achieving the mobility goals. For detection purposes the robot divides obstacles into two categories:
1. Static Objects
2. Moving Objects
4.1.1 Static Objects
To detect static objects, a pair of ultrasonic sensors is used: each sensor continuously emits ultrasonic waves at around 40 kHz, and when this pulse beam strikes an object it is reflected back to the sensor. By measuring the to-and-fro (round-trip) time of the ultrasonic pulse, the obstacle can be detected along with its time to contact ($T_c$), as sketched below.
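A minimal sketch of this ranging computation (assuming a sound speed of roughly 343 m/s in air; how the closing speed enters $T_c$ is our assumption, since the text leaves it implicit):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)

def ultrasonic_range(round_trip_time_s):
    """Distance from the to-and-fro time of the 40 kHz pulse."""
    return SPEED_OF_SOUND * round_trip_time_s / 2.0

def ultrasonic_tc(distance_m, closing_speed_mps):
    """Time to contact from range and closing speed (e.g., from odometry)."""
    return distance_m / closing_speed_mps

# Example: a 12 ms echo puts the obstacle at about 2.06 m, i.e. a
# Tc of roughly 4.1 s at a 0.5 m/s approach speed.
d = ultrasonic_range(0.012)
print(d, ultrasonic_tc(d, closing_speed_mps=0.5))
```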
4.1.2 Moving Objects
To find moving objects, mainly the vision data is used. The first step is to capture four frames [F1, F5, F9, F13] from the given video stream (Fig. 4.1.2); then the temporal derivatives $I_0$ and $I_1$ are calculated as

$$I_0 = F_5 - F_4 \tag{28}$$

$$I_1 = F_{10} - F_7 \tag{29}$$
The next step is to threshold the optical flow values. Thresholding removes unwanted responses: in most scenarios the vast majority of the scene is stationary, with only a small percentage of the objects in motion, so by computing a grid of motion vectors across the scene the moving objects can be found simply by thresholding the optical flow values.
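A minimal sketch of this step (assuming NumPy and 8-bit grayscale frames; the threshold value is ad hoc and would be tuned experimentally):

```python
import numpy as np

def moving_object_mask(frame_a, frame_b, threshold=25):
    """Temporal derivative of two grayscale frames (cf. Eqs. 28-29),
    thresholded to keep only pixels with significant motion."""
    # Widen the dtype so the subtraction cannot wrap around at 0/255.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold   # boolean mask of candidate moving pixels

# Usage with the sampled frames of the text:
# mask0 = moving_object_mask(F5, F4)    # corresponds to I0
# mask1 = moving_object_mask(F10, F7)   # corresponds to I1
```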
Fig. 4.1.2: Sketch of the temporal-derivative pipeline: camera → input video stream (frames F1-F10) → optical flow computation → thresholding → optical flow components $(V_x, V_y)$.
4.2 Steering Computation
Ideally, the robot uses the vision data to identify the direction in which it has to travel. The robot's behavioral goal is simply to move forward, steering away from obstacles; it steers smoothly to the new desired heading under saturated visual feedback control. Obstacle avoidance is attempted by choosing a desired heading angle $\theta_q$. To find the desired heading, a simple lookup table is used:

$$\theta_q = k\theta, \tag{30}$$

where $\theta$ is obtained with the aid of the lookup table and $k$ is a constant which can be calibrated experimentally. Thus, when there is an obstacle, the robot turns by $\theta_q$ and travels a distance $d$ (where the distance is directly proportional to the time to contact). After traveling the distance $d$, the robot turns again by $-\theta_q$ and then continues on its path. This process is shown in the state diagram below (Fig. 4.2).
Fig. 4.2: Body control automaton. After initialization, the optical flow values yield $T_c$; if $T_c \geq$ threshold the robot continues on its current path, while if $T_c <$ threshold it reduces velocity, computes the heading angle and the travel distance for the detected obstacle, turns, travels the given distance, and reduces velocity or stops as required.
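The avoidance logic of Fig. 4.2 can be summarized in a few lines. This is a sketch only; the constant names, the threshold, and the distance gain are placeholders to be calibrated experimentally, just as Eq. (30)'s $k$ is:

```python
def steering_command(theta, tc, k=1.0, tc_threshold=2.0, d_gain=0.5):
    """Decide the next maneuver from the lookup-table angle theta and
    the estimated time to contact tc, following Fig. 4.2."""
    if tc >= tc_threshold:
        return {"action": "continue"}        # no imminent obstacle
    theta_q = k * theta                      # desired heading, Eq. (30)
    d = d_gain * tc                          # travel distance proportional to Tc
    return {"action": "avoid",
            "turn": theta_q,                 # turn away from the obstacle
            "distance": d,                   # travel d on the new heading
            "turn_back": -theta_q}           # then resume the original path
```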
5 EXPERIMENTAL RESULTS
5.1 System Description
In our experiment we use a custom-made mobile robot which provides odometric measurements and a digital video stream (a webcam is fixed on the front of the robot). The odometry data and the video stream need to be synchronized, otherwise the detection will be incorrect. To give an overview of our experimental results for object detection, we use a video in which cars are moving on an express highway.
5.2 Results
Figures 5.2.1 to 5.2.7 show the video sequence with the contours of the detected objects superimposed. The horizontal line in each image corresponds to the horizon. Some experimental results for obstacle detection are displayed below; the frames were captured at a rate of 15 frames per second, and the bounding boxes mark the nearby obstacles.
Figs. 5.2.1-5.2.7: Video frames with the contours of the detected obstacles superimposed.
5.3 Comments
The model described here has not yet been tested completely as a whole system: the different modules of the robot, obstacle detection (using vision data) and obstacle avoidance (using ultrasonic data), are not fully synchronized on the robot. As independent systems, however, these modules give good results (as shown above).
6 FUTURE WORK
In future work our main priority is to synchronize the vision data with the ultrasonic data so that the robot can maneuver for long periods; we will also explore new models of motion estimation. Improvements can be made in motion estimation with the use of optical remote sensing technology (e.g., a LIDAR sensor for 3D imaging). We are currently researching this aspect and expect to present these results in future articles.
7 Conclusion
As with all such systems dealing with higher-level robotic intelligence, the performance can never be expected to be completely foolproof. The best that one can do is to devise appropriate automatic error detection and correction strategies. To briefly discuss the failure modes of our system: the vision-based collision avoidance capability obviously depends on the visual contrast between obstacles and their surroundings, and whenever this contrast is lacking an obstacle is not detected properly.
We proposed here a way of detecting obstacles in a mobile robot environment by motion estimation from an image sequence. The originality of this method is that we detect obstacles based on the motion in the image sequences. First, we extract the optical flow of the environment; then we separate the moving obstacles and the static obstacles from the environment by trying to fit the optical flow model to the observed video stream.
References
[1] H. Lum and J. A. Reagan, "Interactive Highway Safety Design Model: Accident Predictive Module." http://www.fhwa.dot.gov/publications/publicroads/95winter/p95wi14.cfm
[2] S. Thrun, "What we're driving at." http://googleblog.blogspot.com.au/2010/10/what-were-driving-at.html
[3] D. Raviv and M. Herman, "Visual Servoing from 2-D Image Cues," in Active Perception, Y. Aloimonos, ed., Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 191-226, 1993.
[4] O. Marques, Practical Image and Video Processing Using MATLAB, IEEE Press, Chapter 22, p. 562.
[5] B. K. P. Horn and B. G. Schunck, "Determining Optical Flow," Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
[6] "Real-Time Obstacle Avoidance Using Central Flow Divergence and Peripheral Flow," IEEE Transactions on Robotics and Automation, vol. 14, no. 1, February 1998.
[7] G. S. Young, T. H. Hong, M. Herman, and J. C. S. Yang, "New visual invariants for obstacle detection using optical flow induced from general motion," in Proc. IEEE Workshop on Applications of Computer Vision, Palm Springs, CA, Nov. 30-Dec. 2, 1992, pp. 100-109.
[8] E. Dickmanns, "The development of machine vision for road vehicles in the last decade," 2002.
[9] J. Borenstein and Y. Koren, "Histogramic in-motion mapping for mobile robot obstacle avoidance," IEEE Trans. Robot. Automat., vol. 7, pp. 535-539, Aug. 1991.
[10] "Automated Highway Systems," Chapter 2. http://scholar.lib.vt.edu/theses/available/etd-5414132139711101/unrestricted/ch2.pdf