Fig. 1. Top: the Nanokhod; bottom: one of the cameras of the stereo rig
1.2 Utilization

A typical utilization scenario will deploy the imaging head as soon as possible after the landing of the planetary system. Because of the strain on the parts during launch and landing, the imaging head needs to be recalibrated. To accomplish this, it takes images of the terrain that are sent to earth, where the calibration is performed using these images. From the same images a 3-D reconstruction of the terrain is then computed. Since the cameras have a limited field of view (23° × 23°), the entire environment is not recorded at once but is segmented into rings according to the tilt angle, and each ring is divided into segments according to the pan angle of the imaging head (Fig. 2). The outermost boundary of the recorded terrain lies 20 m from the camera. For each of the segments, a stereo-image pair is recorded and sent down. The values of the actual pan and tilt angles can be read out from the encoders of the pan-tilt motors and sent down together with the corresponding images.

The 3-D reconstruction of the planetary terrain, which is obtained on earth, is then used by a team of scientists to indicate possible interesting sites for the Nanokhod to visit and to extract and analyze samples from. A path-planning procedure is applied to compute a route for the Nanokhod to follow, which minimizes a cost combining different factors, such as the risk of tipping over or the risk of losing visual contact. After a thorough verification through simulation, the route is up-linked to the lander. The control system on the lander observes the Nanokhod via stereo images of the light-emitting diodes (LEDs) on the Nanokhod, which are taken by the imaging head. Differencing the images with the LEDs turned on with background images yields the coordinates of the LEDs in the stereo images and, hence, the pose of the Nanokhod on the terrain. This allows the system to control the rover such that it executes the uploaded tasks [14]. The results of the analysis of the samples, together with the telemetry of the rover, are sent back to the ground station, where the latter is visualized in a 3-D simulation environment.

In the following sections, the imaging-head calibration process, the reconstruction of the planetary terrain in 3-D, and the path planning are explained in more detail. Results of a system test in a planetary testbed at the European Space Research and Technology Center (ESTEC) are presented, and some possible further developments are discussed.

2 Calibration

Every planetary mission is a high-risk operation. During launch and landing, the lander and its contents are subject to extreme forces. The mechanical properties of the imaging head are likely to have been affected by mechanical and thermal forces. For high-accuracy equipment, such as the imaging head, a small change in these mechanical properties results in a large degradation of the results, unless the new properties can be estimated. The cameras themselves are built so that the intrinsic parameters during the mission can be assumed to be identical to the parameters obtained through calibration on ground. If the camera housing were not so rigidly built, the camera intrinsics would be likely to change during launch or landing. Algorithms exist that can retrieve these intrinsic parameters from images too [10,16].

Traditional calibration algorithms rely on known calibration objects with well-defined optical characteristics in the scene. By taking pictures of these artificial objects, the pose of the cameras can be computed, yielding the extrinsic (mechanical) calibration of the stereo rig. There are two reasons why this scheme is not suited to our case, in which the imaging head is deployed on a distant planet. First, there is the problem of where to place the calibration objects. We need to be absolutely sure of the pose of these objects for the calibration to have any meaningful result. It is, of course, impossible to add objects to the terrain, so we have to consider placing calibration "markers" on the lander itself. A typical lander consists of a "cocoon" that opens after landing, comparable to an opening flower. The markers could be applied to the opening "petals." However, we are never sure of the exact position of these petals, which makes the markers much harder to use. Even if accurate markers were available on the lander, a second problem would arise: during the design of the imaging head, robustness was a very important issue; for that reason, the number of moving items was minimized. Therefore, a fixed-focus lens was chosen. Since the accuracy of the stereo matching decreases with the square of the distance, the cameras are focused on the far range to gain as much accuracy in the far regions as possible. As a consequence, the images of near regions are blurred. Since the markers would be on the
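The ring-and-segment tiling of the terrain described in Sect. 1.2 can be sketched as follows. Only the 23° × 23° field of view and the 20 m outer boundary are given in the paper; the tilt range and the overlap between neighbouring views used here are assumptions for illustration.

```python
import math

FOV_DEG = 23.0                     # camera field of view (23 deg x 23 deg, Sect. 1.2)
OVERLAP_DEG = 3.0                  # assumed overlap between neighbouring views
TILT_MIN, TILT_MAX = -60.0, -5.0   # assumed tilt range covering the terrain

def tiling(fov=FOV_DEG, overlap=OVERLAP_DEG):
    """Return (tilt, pan) angles at which stereo pairs are recorded:
    the tilt range is cut into rings, and each ring is cut into pan segments."""
    step = fov - overlap                          # effective angular step
    n_rings = math.ceil((TILT_MAX - TILT_MIN) / step)
    n_segs = math.ceil(360.0 / step)              # full 360-degree pan ring
    return [(TILT_MIN + i * step, j * 360.0 / n_segs)
            for i in range(n_rings) for j in range(n_segs)]
```

With these assumed values the head would record 3 rings of 18 segments each; the actual mission values depend on the deployment geometry.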
M. Vergauwen et al.: A stereo-vision system for support of planetary surface exploration 7
…and 2591 corners were matched to calculate the relative transformation. We could compare the result with the ground-truth value of the relative transformation. For comparison, the rotational part of the relative transformation was represented as a rotation around an axis (l, m, n) with a certain angle θ. The ground truth was

[Figure: transformed and original images of stereo image pair 1]
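The axis–angle representation used for this comparison can be obtained from a 3 × 3 rotation matrix with a standard conversion (this is a generic sketch, not code from the paper):

```python
import math

def axis_angle(R):
    """Convert a 3x3 rotation matrix R (list of rows) to an axis (l, m, n)
    and an angle theta, the representation used to compare the estimated
    and ground-truth relative rotations. Assumes a proper rotation with
    theta strictly between 0 and pi (the sin(theta) term is nonzero)."""
    trace = R[0][0] + R[1][1] + R[2][2]
    # Clamp to [-1, 1] to guard against rounding errors in the trace.
    theta = math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))
    s = 2.0 * math.sin(theta)
    axis = ((R[2][1] - R[1][2]) / s,
            (R[0][2] - R[2][0]) / s,
            (R[1][0] - R[0][1]) / s)
    return axis, theta
```

The angle θ alone is a convenient scalar error measure between two rotations when applied to the relative rotation between them.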
4 Path planning
Another component of the system came into play: the simulator. This component allowed the operator to simulate low-level commands on the Nanokhod and the imaging head in a virtual-reality environment. A simple gravitation model was used to predict the pose of the Nanokhod after a motion. An overview of the GUI of this component is shown in Fig. 15. The GUI is divided into four windows. The upper two show what the left and right cameras of the stereo head were seeing. The bottom-right window is a 3-D interaction window, and the bottom-left

6 Future development

For the ROBUST project, calibration of the imaging head is a critical issue. Because of known specifics of the imaging head, the calibration algorithms can be targeted to this particular setup and take all known information on the mechanics and intrinsics of the system into account. One can imagine situations where such information is not available. While the algorithms described in this paper can then no longer be used, it is still possible to retrieve some calibration and 3-D reconstruction.

Fig. 14. A textured and a nontextured view of the reconstructed model of the ESTEC planetary testbed

6.1 Uncalibrated 3-D reconstruction

How structure and motion can be retrieved from an uncalibrated image sequence is described in [12]. The 3-D modeling task is decomposed into a number of steps. First, successive images of the sequence are matched, and the epipolar geometry is computed using the same approach as described in Sect. 2.1.1. The projection matrices of all views are computed by triangulation and pose-estimation algorithms. The ambiguity of the reconstruction is then reduced from pure projective to metric (Euclidean up to scale) by self-calibration algorithms (see also [10]). Finally, textured triangular mesh models of the scene are constructed.
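The triangulation step in this pipeline can be illustrated with standard linear (DLT) triangulation of a single point from two projection matrices; this is a generic sketch of the technique, not the paper's implementation, and the matrices in the usage below are made-up examples.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (x, y) image coordinates
    of the same point in each view. Each observation contributes two
    linear constraints x * (P[2] @ X) - P[0] @ X = 0 (and likewise for y);
    the homogeneous point X is the null vector of the stacked system."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = last right singular vector
    return X[:3] / X[3]        # dehomogenize (fails for points at infinity)
```

In a projective reconstruction the recovered points are only defined up to the same projective ambiguity as the projection matrices; the self-calibration step mentioned above upgrades both to metric.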
6.2 Experiment

The technique described could be of use during a planetary exploration mission. The most important instrument of the payload cab of the rover is probably a camera. This camera will take images of samples and rocks on the terrain, but it could also be used to perform close-range reconstruction of the terrain, helping the rover to "dock" precisely onto the desired position. The camera would take images of the area of interest during the approach. These images could then be used as input for the reconstruction algorithm described above to generate a 3-D reconstruction. The resolution of this reconstruction would be far superior to that of the DEM obtained from the imaging head.

During testing of the ROBUST system on the planetary testbed at ESTEC, a preliminary test was performed. Images of some rocks on the testbed were taken by hand with a semi-professional digital camera. The images were processed by the 3-D reconstruction system. The resulting 3-D reconstruction of the rock is shown in Fig. 16. The results are very good and show that this strategy could be an interesting extension of the ROBUST system.

Fig. 16. Results of the reconstruction process of a rock on the planetary testbed. The upper left view shows one of the original images of the sequence. The upper right view displays reconstructed points and cameras. In the lower left view, a dense depth map is shown. The lower right view shows an arbitrary view of the 3-D reconstruction

7 Conclusion

In this paper, we have proposed an approach for calibration and 3-D measurement from planetary-terrain images that allows for an important simplification of the design of the imaging system of the lander. The components described in this paper are part of an end-to-end system that can reconstruct an unknown planetary terrain and guide a rover autonomously on the planetary surface. The system has succeeded in a first test in a planetary testbed at ESTEC.

References

1. Cox I, Hingorani S, Rao S (1996) A maximum likelihood stereo algorithm. Comput Vision Image Underst 63(3)
2. Fischler M, Bolles R (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun Assoc Comput Mach 24:381–395
3. Faugeras O (1992) What can be seen in three dimensions with an uncalibrated stereo rig. In: Computer Vision (ECCV'92). Lecture notes in computer science, vol 588. Springer, Berlin Heidelberg New York, pp 563–578
4. Harris C, Stephens M (1988) A combined corner and edge detector. In: 4th Alvey Vision Conference, pp 147–151
5. Knight J, Reid I (2000) Self-calibration of a stereo rig in a planar scene by data combination. In: Proceedings of the International Conference on Pattern Recognition (ICPR 2000), Barcelona. IEEE Press, Piscataway, NJ, pp 411–414
6. Koch R (1997) Automatische Oberflächenmodellierung starrer dreidimensionaler Objekte aus stereoskopischen Rundum-Ansichten. PhD thesis, University of Hannover, Germany, 1996; also published as Fortschritt-Berichte VDI, Reihe 10, Nr 499. VDI, Düsseldorf
7. Latombe JC (1991) Robot motion planning. International series in engineering and computer science, vol 124. Kluwer, Dordrecht
8. Maybank S (1992) Theory of reconstruction from image motion. Springer, Berlin Heidelberg New York
9. Peleg S, Ben-Ezra M (1999) Stereo panorama with a single camera. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE Press, NY, pp 395–401
10. Pollefeys M, Koch R, Van Gool L (1999) Self-calibration and metric reconstruction in spite of varying and unknown internal camera parameters. Int J Comput Vision 32:7–25
11. Pollefeys M, Koch R, Van Gool L (1999) A simple and efficient rectification method for general motion. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV'99), Corfu, Greece, September. IEEE Computer Society Press, Kerkyra, Greece, pp 496–501
12. Pollefeys M, Koch R, Vergauwen M, Van Gool L (1998) Metric 3-D surface reconstruction from uncalibrated image sequences. In: Proceedings of the SMILE Workshop (post-ECCV'98). Lecture notes in computer science, vol 1506. Springer, Berlin Heidelberg New York, pp 138–153
13. Rieder R, Wanke H, Hoerner Hv (1995) Nanokhod, a miniature deployment device with instrumentation for chemical, mineralogical and geological analysis of planetary surfaces, for use in connection with fixed planetary surface stations. Lunar Planet Sci XXVI:1261–1262
14. Steinmetz B, Arbter K, Brunner B, Landzettel K (2001) Autonomous vision-based navigation of the Nanokhod rover. In: Proceedings of the International Symposium on Artificial Intelligence and Robotics and Automation in Space (iSAIRAS 2001), Montreal, June. CSA, Hubert, Canada
15. Loop C, Zhang Z (1999) Computing rectifying homographies for stereo vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'99), Fort Collins, Colo, June. IEEE Press, NY
16. Zisserman A, Beardsley P, Reid I (1995) Metric calibration of a stereo rig. In: Proceedings of the IEEE Workshop on Representation of Visual Scenes, Cambridge, Mass, pp 93–100

Maarten Vergauwen was born in 1974 and received a degree in electrical engineering from the Katholieke Universiteit Leuven in Belgium in 1997. Since then he has been a PhD student at the Katholieke Universiteit Leuven. He has been involved in several projects, among which the ESA projects VIABLE and ROBUST.

Luc Van Gool is professor for computer vision at the K.U.Leuven in Belgium and the ETH in Zurich, Switzerland. He leads a research team at both universities. He has authored over 100 papers on computer vision. He has been a PC member of several major conferences and is on the editorial boards of several international journals. His main interests include 3-D modeling, animation, tracking, gesture analysis, recognition, and texture. He received a David Marr Prize, a Tech-Art Prize, and an EITC Prize from the EC. He was the coordinator of several European projects. He is a co-founder of the company Eyetronics.