
Integrated Position Estimation Using Aerial Image Sequences
Dong-Gyu Sim, Rae-Hong Park, Senior Member, IEEE, Rin-Chul Kim, Member, IEEE,
Sang Uk Lee, Senior Member, IEEE, and Ihn-Cheol Kim

Abstract—This paper presents an integrated system for navigation parameter estimation using sequential aerial images, where
navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed
integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation
recursively computes the current position of an aircraft by accumulating relative displacement estimates extracted from two successive
aerial images. Simple accumulation of parameter values decreases the reliability of the extracted parameter estimates as an aircraft
goes on navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position
error generated in relative position estimation. Absolute position estimation algorithms by image matching and digital elevation model
(DEM) matching are presented. In image matching, a robust-oriented Hausdorff measure (ROHM) is employed, whereas in
DEM matching the algorithm using multiple image pairs is used. Experiments with four real aerial image sequences show the
effectiveness of the proposed integrated position estimation algorithm.

Index Terms—Navigation, aerial image, image matching, digital elevation model (DEM), recovered elevation map (REM), relative
position estimation, absolute position estimation, robust-oriented Hausdorff measure.

1 INTRODUCTION

NAVIGATION guides a vehicle to a spot where one wants to go. For autonomous navigation, it is important to extract accurate navigation parameters such as the absolute position and the velocity of an aircraft, where the absolute position, represented with respect to the global coordinates of the earth, corresponds to the position component of the exterior orientation used in the photogrammetry field [1]. Using these parameters, we can adjust the velocity and the direction to the destination.

Many approaches to the estimation of accurate navigation parameters have been presented [2], [3], [4], [5], [6]. Various systems such as terrain contour matching (TERCOM), the inertial navigation system (INS), and the global positioning system (GPS) are well-known. TERCOM navigates by matching and measuring the altitude of terrain with special radar. It cannot accurately estimate its own position in plain regions where the change in elevation is small, and it can be put out of control by external signals. The INS, with an accelerometer and a gyroscope, has been widely used for navigation. It does not require any constraint on the flying trajectory and it cannot be affected by any external signal, but its estimation error tends to increase as an aircraft goes on flying. Also, the GPS has been known as an effective system that can easily estimate a position with a small error; however, it can also be disturbed by an external signal.

This paper investigates a practical position estimation system for an aircraft using sequential aerial images. Because an aerial image sequence is used as the input to our video-based navigation system, the system has the advantage that it is neither detected by enemies nor guided by external signals, compared with other active approaches. Also, it can be easily attached to an aircraft, without any special apparatus to compensate for the attitude change between successive frames. The four test aerial image sequences used in this paper were acquired from a camera fixed on a helicopter or a light airplane, in which the optical axis of the camera varies according to the aircraft attitude.

Many computer vision and image processing techniques have been proposed for video-based navigation systems. However, they are not practical and are not focused on the construction of the overall system. An online vehicle motion estimation method using visual information was proposed, based on Kalman filter estimation with an expensive down-looking camera [4]. Another algorithm with a down-looking camera was presented [5]. A passive navigation algorithm based on optical flow estimation and optimization [7] was also proposed to estimate the instantaneous velocity of camera motion (rotation and translation) parameters. It could be unstable with a high computational load because it estimates not only translation but also rotation parameters. On the other hand, 3D reconstruction approaches have been proposed for matching between the digital elevation model (DEM) and the recovered elevation map (REM) [8], [9], [10].

. D.-G. Sim and R.-H. Park are with the Department of Electronic Engineering, Sogang University, C.P.O. Box 1142, Seoul 100-661, Korea. E-mail: dgsim@good.to and rhpark@ccs.sogang.ac.kr.
. R.-C. Kim is with the School of Electrical and Computer Engineering, University of Seoul, 90 Jeonnong, Tongdaemoon-ku, Seoul 130-743, Korea. E-mail: rin@uoscc.uos.ac.kr.
. S.U. Lee is with the School of Electrical Engineering, Seoul National University, San 56-1, Shilim-Dong, Gwanak-Gu, Seoul 151-742, Korea. E-mail: sanguk@sting.snu.ac.kr.
. I.-C. Kim is with the Agency for Defense Development, C.P.O. Box 35, Daejon, Korea.

Manuscript received 1 Mar. 2000; revised 27 Oct. 2000; accepted 5 July 2001. Recommended for acceptance by R. Kumar.
For information on obtaining reprints of this article, please send e-mail to: tpami@computer.org, and reference IEEECS Log Number 111608.

Fig. 1. Block diagram of the proposed integrated position estimation system consisting of relative and absolute position estimation parts (I_n: nth current input aerial image; φ_n, ω_n, κ_n, and h_n: roll, pitch, yaw, and altitude parameters of an aircraft at the nth frame, respectively; P_n: nth estimated current position).

However, they required a high computational load and were not applied to various video sequences. Many image matching algorithms [3], [4], [5], [11] have been proposed; however, it is difficult to apply them directly to practical navigation systems because of their high computational complexity and their use of a down-looking camera. The gray-level-based image matching algorithm employs geodetical calibration that requires an additional computational load [11], which can be applied to absolute position estimation for autonomous navigation. The hybrid estimation of navigation parameters has been proposed to cope with these problems [12]. In order to implement a cheap and accurate visual navigation system that can be easily attached to any aircraft, we propose an integrated navigation parameter estimation system.

This paper presents an integrated system for position estimation using sequential aerial images. The main objective is to develop an effective vision-based position estimation algorithm for real-time implementation. The proposed system is composed of two parts: relative position estimation and absolute position estimation. The former is based on the stereo modeling of two successive frames, whereas the latter is accomplished by DEM matching or by image matching with Indian remote sensing (IRS) or high-resolution images. IRS images are acquired from the satellite launched by the Department of Space, India, in 1995.

The rest of the paper is structured as follows: Section 2 presents the proposed integrated system for position estimation. Sections 3 and 4 describe the relative position estimation and absolute position estimation algorithms, respectively. Section 5 gives some experimental results and discussions. Finally, Section 6 summarizes conclusions.

2 INTEGRATED POSITION ESTIMATION

Fig. 1 shows the overall block diagram of the proposed position estimation system, which consists of relative and absolute position estimation parts, where I_n is the current input aerial image, with φ_n, ω_n, κ_n, and h_n denoting the roll, pitch, yaw, and altitude parameters of an aircraft, respectively. Note that, in this paper, the subscript n denotes the nth frame. P_rel,n represents the relative position. In the relative position estimation part, the displacement B_n of the camera between two successive frames is computed based on stereo modeling, and the current position P_n is obtained by accumulating the relative displacement estimates computed from successive frame pairs. P_abs signifies the absolute position, with P_DEM, P_IRS, and P_high denoting the positions obtained by the absolute position estimation algorithms using the DEM, IRS images, and high-resolution images, respectively. Absolute position estimation is introduced to reduce the error generated and accumulated in relative position estimation.

The integrated system is controlled by a switching scheme, in which absolute position estimation is incorporated with a planned trajectory of autonomous navigation. Absolute position estimation is performed using images or DEM information. Absolute position estimation by image matching uses IRS or high-resolution images as reference images. The term "high-resolution" is used to indicate that the resolution of the aerial images is higher than that of the IRS images. The ground resolution of IRS images is 5 m/pixel, while that of the aerial images is about 1-3 m/pixel. The absolute positioning system with IRS images is effective for estimating the current position in regions containing artificial structures such as roads and buildings. Note that because US Geological Survey (USGS) digital orthophoto quadrangles (DOQs) have about 1 m/pixel resolution, they could be used as high-resolution images. If these images are available, the image matching algorithm can obtain the current position accurately, based on the robust-oriented Hausdorff measure (ROHM) [13].

The current position can also be estimated by matching the stored DEM with the REM that is reconstructed from multiple successive image pairs [8], [9], [10]. The proposed absolute position estimation algorithm using the DEM information consists of two stages: recovering the sampled elevations from multiple aerial image pairs and matching the relative REM with the relative DEM. The relative REM and DEM are defined with respect to the elevations at global feature points of the REM and DEM, respectively. Thus, the proposed algorithm can accurately estimate the absolute position using a wide recovered area and is much faster than conventional algorithms [8], [9]. Each part of the overall system is presented in the following sections.

3 RELATIVE POSITION ESTIMATION

Relative position estimation sequentially computes the current position by accumulating the displacement of the camera, estimated with respect to the previous position. Fig. 2a shows the block diagram of the proposed relative position estimation algorithm, in which Z represents a one-frame time delay and MP'_n denotes a feature point. The relative position estimation method recursively computes the relative displacement B_n between two positions (P_{n-1} and P_n) and updates the position of the aircraft by accumulating the relative displacement B_n.

Since the camera is tightly attached to the aircraft in the proposed algorithm, it is necessary to compensate for the attitude change of the camera between two successive frames. Thus, the previous input image I_{n-1} is transformed to I'_{n-1} for matching with the current input image I_n, according to the attitude of the aircraft acquired from a gyroscope. I_n and I_{n-1} have different attitudes, so I_{n-1} must be remapped. I'_{n-1} is obtained by ray tracing from every point in I'_{n-1} to its corresponding point in I_{n-1}. Matching pairs are detected by the block matching algorithm (BMA) employing the normalized correlation coefficient (NCC) measure. Two-level hierarchical matching is used to reduce the computational load. The hierarchical matching is done using a Gaussian pyramid. The higher level is reduced by a factor of two from the original image. The correspondence is detected with the NCC at the higher level. Then, the correspondence is refined by searching within a small range (-1 to +1 pixels) on the original image plane. The final matching point MP_n with subpixel accuracy is obtained by refining the coarse displacement with the fine displacement estimated by the optical flow method [14], [15], [16]. The subpixel update displacement is obtained by regression of the optical flow constraint equations over every pixel in the matching window [14], [16].

Fig. 2b shows the displacement estimation based on stereo matching, in which the proposed relative position estimation method extracts the displacement B_n of the aircraft using the feature point MP_n of the current input image I_n and the feature point MP'_n of the previous image I_{n-1}. B_n can be expressed as T·V_n, where T denotes the sampling interval and V_n signifies the velocity of the aircraft. (X_N, Y_N, Z_N) signifies the navigation coordinates, in which X_N and Y_N are expressed in terms of the longitude and latitude, respectively, and Z_N denotes the altitude with respect to sea level. f is the focal length of the camera, and M_n = (M_{n,x}, M_{n,y}, M_{n,z}) is a 3D DEM position corresponding to the feature point. R'_n (R_n) denotes the vector from P_{n-1} (P_n) to the 3D DEM position M_n, and r'_n (r_n) signifies its normalized vector, obtained by converting the feature (matching) point in the image space into the real navigation coordinates with the attitude parameters acquired by the gyroscope. Note that the vector R'_n (R_n) passes through the feature point MP'_n in the previous frame (matching point MP_n in the current frame I_n).

First, R'_n and R_n must be calculated to obtain B_n. R'_n is given by R'_n = (M_{n,x} - P_{n-1,x}, M_{n,y} - P_{n-1,y}, M_{n,z} - P_{n-1,z}), where P_{n-1} is given and M_n is obtained by ray tracing with P_{n-1} and r'_n. Then, R_n = (R_{n,x}, R_{n,y}, R_{n,z}) is computed using

$$\frac{R_{n,x}}{r_{n,x}} = \frac{R_{n,y}}{r_{n,y}} = \frac{R_{n,z}}{r_{n,z}} = \frac{M_{n,z} - P_{n,z}}{r_{n,z}},$$

where r_n = (r_{n,x}, r_{n,y}, r_{n,z}) and the altitude P_{n,z} with respect to sea level are given by a gyroscope and an altimeter, respectively. The displacement B_n is defined by

$$B_n(MP'_n, MP_n, X_{n-1}, Y_{n-1}, S_{n-1,n}) = R'_n - R_n,$$

where S_{n-1,n} represents the parameter vector consisting of the roll, pitch, yaw, and altimeter parameters of the two consecutive images I_{n-1} and I_n. Then, we can estimate the current position P_n of the aircraft by adding the estimated displacement B_n to the previous position P_{n-1}. Also, the velocity V_n of the aircraft is calculated by dividing the displacement B_n by the sampling interval T of the image sequence, i.e., V_n = B_n/T. In our system, the sampling interval T is set to 1 second to guarantee that the overlapping area between two successive frames is about half the whole image size. The proposed relative positioning algorithm was implemented in real time on a TMS320C80 [17].
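To make the geometry concrete, the following minimal Python sketch computes B_n from the two viewing rays and the altimeter reading, following the equations above. The flat-plane ray-tracing helper is a hypothetical stand-in for the DEM-based ray tracing used by the system, and the rays are assumed to be already rotated into navigation coordinates.

```python
import numpy as np

def ray_point_at_height(origin, direction, z):
    """Intersect the ray origin + t*direction with the horizontal plane Z = z.
    A hypothetical stand-in for DEM-based ray tracing (flat terrain assumed)."""
    t = (z - origin[2]) / direction[2]
    return origin + t * direction

def relative_displacement(p_prev, r_prev, r_curr, p_curr_z, terrain_z):
    """Estimate B_n = R'_n - R_n from two viewing rays of the same ground point.

    p_prev    : previous camera position P_{n-1} in navigation coordinates
    r_prev    : ray r'_n through the feature point MP'_n
    r_curr    : ray r_n through the matching point MP_n
    p_curr_z  : current altitude P_{n,z} from the altimeter
    terrain_z : elevation of the ground point (from the DEM)
    """
    # M_n: ground point hit by the previous ray (ray tracing with P_{n-1}, r'_n).
    m = ray_point_at_height(p_prev, r_prev, terrain_z)
    r_prime = m - p_prev                                  # R'_n
    # R_n is parallel to r_n, with known vertical component M_{n,z} - P_{n,z}.
    r_n = r_curr * (m[2] - p_curr_z) / r_curr[2]          # R_n
    return r_prime - r_n                                  # B_n = R'_n - R_n

# Usage: with a 1 s sampling interval, the velocity is V_n = B_n / T.
p_prev = np.array([0.0, 0.0, 1000.0])
b = relative_displacement(p_prev,
                          np.array([0.10, 0.20, -0.97]),  # r'_n (downward ray)
                          np.array([0.05, 0.10, -0.99]),  # r_n
                          p_curr_z=1002.0, terrain_z=150.0)
v = b / 1.0  # V_n = B_n / T
```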

Fig. 2. Relative position estimation: (a) Block diagram of relative position estimation based on stereo matching (I_n: nth current input aerial image; φ_n, ω_n, κ_n, and h_n: roll, pitch, yaw, and altitude parameters of an aircraft at the nth frame, respectively; P_rel,n: nth position estimated by relative position estimation). (b) Displacement estimation based on stereo matching (P_n: nth position; B_n: nth displacement; M_n: nth ground point corresponding to feature points MP'_n and MP_n).
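The two-level NCC matching described above can be sketched as follows. This is a minimal illustration assuming grayscale NumPy images; the half-resolution subsampling is a crude stand-in for a Gaussian-pyramid level, and the window and search sizes are placeholders (the values used in the experiments are given in Section 5).

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized correlation coefficient between two equal-sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def block_match(img, tmpl, pos, radius):
    """Exhaustive NCC matching of tmpl around top-left position pos in img."""
    th, tw = tmpl.shape
    best, best_pos = -2.0, pos
    for y in range(pos[0] - radius, pos[0] + radius + 1):
        for x in range(pos[1] - radius, pos[1] + radius + 1):
            if y < 0 or x < 0 or y + th > img.shape[0] or x + tw > img.shape[1]:
                continue  # skip candidates falling outside the image
            score = ncc(img[y:y + th, x:x + tw], tmpl)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

def hierarchical_match(img, tmpl, pos, radius):
    """Two-level coarse-to-fine matching: coarse NCC search at half
    resolution, then a -1 ~ +1 pixel refinement at full resolution."""
    cy, cx = block_match(img[::2, ::2], tmpl[::2, ::2],
                         (pos[0] // 2, pos[1] // 2), max(1, radius // 2))
    return block_match(img, tmpl, (2 * cy, 2 * cx), 1)
```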

Only a single correspondence is used per frame to estimate the displacement. The displacement could be estimated by regression with multiple correspondences; however, no improvement was empirically observed, because the possibility of mismatching increases as the number of correspondences increases. Additionally, because some correspondences have large errors, linear estimation based on a Gaussian noise assumption could yield worse results due to outliers. The error in navigation parameter estimation accumulated by relative position estimation increases with time, so the position has to be compensated by absolute position estimation.

In relative position estimation, the position error is roughly proportional to the error in altitude, so a small altitude error is desirable for good performance. Change in roll values is the most dominant error source among the attitude parameters, because the acquired image can be distorted by a small change in roll measurements. In practice, the roll parameter of an aircraft changes easily because of wind, affecting the accuracy of relative position estimation.

4 ABSOLUTE POSITION ESTIMATION

We propose an absolute position estimation algorithm that employs two approaches: matching using reference images and matching using DEM information. In the case of image matching [13], if the position obtained by relative position estimation is located within the effective range (e.g., 400 m) from a prespecified reference position, the absolute position estimation method using high-resolution aerial images or

IRS images is activated. For absolute position estimation using DEM information, the effective range is set to 200 m.

Generally, navigation systems have been developed under the assumption that the aircraft flies along a predetermined trajectory. Our system employs two different kinds of reference images: high-resolution aerial images and IRS images. If high-resolution images along the trajectory are available as reference images, the absolute position can be estimated from them [18]. Otherwise, the absolute position is estimated from the IRS images.

The absolute position is estimated using the ROHM [13] with the reference images (high-resolution or IRS images) when the aerial input image contains distinct geometric (artificial) structures such as roads, buildings, stadiums, and interchanges. For mountain regions, it is difficult to use feature-based image matching. Thus, change of terrain elevation is used for absolute position estimation if the regions show large or abrupt changes in elevation. In the case of absolute position estimation using DEM data, we employ an estimation method in which multiple image pairs are used to construct a wide REM and the absolute position is estimated using the REM and DEM information.

4.1 Absolute Position Estimation Based on the ROHM

The position error of relative position estimation is compensated by absolute position estimation based on the ROHM, where an input image is matched with stored reference images. If the reference image contains distinct artificial structures, the absolute position can be accurately estimated. The ROHM is introduced by replacing the distance measure with the accumulation of matching points, in which model parameters are estimated by accumulation operations in the parameter space, as used in the Hough transform (HT) [13]. The ROHM algorithm is robust to severe noise embedded in an input image. The gradient information at two corresponding points in the aerial input and reference images is used to validate the correct matching. Edge images and distance transform (DT) images are constructed, from which the ROHM is computed. Fig. 3a shows the block diagram of the ROHM computation, in which gradient images, edge images, and DT images [19], [20], [21], [22] are used. Note that the DT image is the distance map of an edge image, containing at each pixel the distance to the nearest edge point.

First, the gradient image A_G (B_G) is obtained by applying the Sobel edge operator to the gray-level image A (B), and the edge image A_E (B_E) is constructed by thresholding the gradient image A_G (B_G). Note that the edge points in A_E (B_E) yield the edge point set A_S (B_S). Each pixel value of the gradient image is stored in vector form. For example, the gradient value at point a in image A is expressed as A_G(a) = (G_{a,x}, G_{a,y}), where G_{a,x} (G_{a,y}) represents the derivative at point a with respect to x (y). The gradient magnitude is thresholded to generate the edge image, and the orientation information is used to eliminate incorrect correspondences. Here, the threshold for edge images is experimentally set to 100.

The DT maps A_D and B_D are constructed from the edge images A_E and B_E, respectively, using Borgefors' two-pass algorithm [19]. The two-pass algorithm generates the DT maps by applying two filters along opposite directions. Note that the computational complexity is O(N_c × N_r), where N_c and N_r denote the numbers of column and row pixels, respectively. The processing time of the HD algorithm can be reduced using the DT maps constructed by this time-efficient two-pass algorithm with a low computational load.

The directed ROHM h_ROHM(A_G, A_E, B_D) (h_ROHM(B_G, B_E, A_D)) is computed with the gradient image A_G (B_G), the edge image A_E (B_E), and the DT image B_D (A_D). It is computed by accumulating a weighted distance value at each edge pixel, where the weight is defined by the dot product of the gradients of A_G and B_G computed at two matching points.

The final ROHM H_ROHM is determined by

$$H_{ROHM} = \min\bigl(h_{ROHM}(A_G, A_E, B_D),\; h_{ROHM}(B_G, B_E, A_D)\bigr),$$

where h_ROHM(A_G, A_E, B_D) and h_ROHM(B_G, B_E, A_D) denote the directed ROHMs. The directed ROHM h_ROHM(A_G, A_E, B_D) is defined as

$$h_{ROHM}(A_G, A_E, B_D) = \sum_{a \in A_S} \mathbf{d}_{A_G}(a) \cdot \mathbf{d}_{B_G}(a)\,\rho_T(B_D(a)) = \sum_{a \in A_S} s(a)\,\rho_T(B_D(a)),$$

where the orientation vector d_{A_G}(a) (d_{B_G}(a)) represents a unit gradient vector of the gray-level image A (B) at position a, s(a) = d_{A_G}(a) · d_{B_G}(a) denotes the dot product of the two gradient vectors obtained from images A and B, and B_D(a) = min_{b∈B_S} ||a - b|| is the distance map value [19] of image B at position a. The weight based on the orientation difference is used to avoid inconsistent correspondences with different orientations. The symmetric proximity function ρ_T(x), shown in Fig. 3b, is employed. Because the proposed algorithm accumulates the transformed distance values of all edge pixels, with the weight specified by the gradient orientation information, it counts the weighted number of matching points as the HT does. As a result, the ROHM algorithm is robust to noisy data. Furthermore, because the orientation information is used, a point in the input aerial image whose orientation differs from that of its matching point in the reference image can be effectively removed. Fig. 3c illustrates the robustness and usefulness of the orientation information, where open and closed circles represent edges in the input and reference images, respectively. The similarity of the left example is the largest: two input points correspond to two points having the same directions, even though one point, considered as an outlier, finds no correspondence. In the center example, all three points have corresponding points with different directions. The similarity of the right example is larger than that of the left one with respect to linear estimation, whereas the left example is better than the right one in the context of robust statistics.
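For illustration, the two-pass distance transform mentioned above can be sketched as follows, in the spirit of Borgefors' chamfer algorithm [19]; the common 3-4 chamfer weights are an assumption, since the exact mask is not specified here.

```python
import numpy as np

def chamfer_dt(edge, inf=10**6):
    """Two-pass 3-4 chamfer distance transform of a binary edge image.
    edge[y, x] is True on edge pixels; returns approximate distances
    (in pixel units) to the nearest edge point, visiting each pixel
    once per pass, i.e., O(Nc * Nr)."""
    h, w = edge.shape
    d = np.where(edge, 0, inf).astype(np.int64)
    # Forward pass: top-left to bottom-right.
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 3)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x - 1] + 4)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y - 1, x + 1] + 4)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 3)
    # Backward pass: bottom-right to top-left.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 3)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y + 1, x - 1] + 4)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x + 1] + 4)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 3)
    return d / 3.0  # scale chamfer units back to approximate pixels
```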
6 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 24, NO. 1, JANUARY 2002

Fig. 3. ROHM for absolute position estimation. (a) Block diagram of the ROHM using the reference image. (b) Proximity function used in the ROHM.
(c) Correspondence examples using the orientation information.

The absolute position is detected by finding the position having the maximum proximity of the ROHM value between the input frame and the reference image. By sliding the reference image with respect to the input video frame, the matching point is detected using the ROHM value, and the absolute position of the input frame is calculated from the known position of the reference image. It is assumed that the attitude and altitude parameters of the reference image are given for matching of the reference image and the input video frame.
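A compact sketch of the directed ROHM and the sliding search follows. It assumes precomputed unit-gradient fields, edge point lists, and a DT map (e.g., from the chamfer transform above); it shows only one direction of the measure (the full ROHM takes the minimum over both directions, as defined above), and the triangular proximity function with its threshold is a placeholder for the ρ_T of Fig. 3b.

```python
import numpy as np

def rho(d, t=5.0):
    """Placeholder proximity function: large when d is small, zero beyond t."""
    return max(0.0, 1.0 - abs(d) / t)

def directed_rohm(grad_a, edges_a, grad_b, dt_b, offset):
    """Directed ROHM h(A_G, A_E, B_D) for image A shifted by offset over B.

    grad_a, grad_b : unit gradient vector fields, shape (H, W, 2)
    edges_a        : list of (y, x) edge points of A (the set A_S)
    dt_b           : distance transform B_D of B's edge image"""
    score, (dy, dx) = 0.0, offset
    h, w = dt_b.shape
    for (y, x) in edges_a:
        yb, xb = y + dy, x + dx
        if 0 <= yb < h and 0 <= xb < w:
            s = float(np.dot(grad_a[y, x], grad_b[yb, xb]))  # orientation weight
            score += s * rho(dt_b[yb, xb])
    return score

def search_position(grad_a, edges_a, grad_b, dt_b, search=20):
    """Slide A over B and return the offset maximizing the directed ROHM."""
    candidates = [(dy, dx) for dy in range(-search, search + 1)
                           for dx in range(-search, search + 1)]
    return max(candidates,
               key=lambda off: directed_rohm(grad_a, edges_a, grad_b, dt_b, off))
```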

Fig. 4. Sampled feature points (FP^j_{n-k-1}) and their corresponding matching points (MP^j_{n-k}) for elevation reconstruction based on the relative elevation at each point, with respect to that reconstructed from the global feature point (FP_{n-k-1}) and its corresponding matching point (MP_{n-k}), where sampled feature points are selected on a line, 1 ≤ j ≤ J.

Pixel-based matching is performed between the reference image and the input video images. Thus, the error is determined by the ground resolution of the images. As mentioned before, the ground resolution depends on the altitude of the aircraft. Usually, the ground resolution of high-resolution images ranges from 0.5 to 3 m/pixel. For IRS satellite images, it is 5 m/pixel. In pixel-based image matching, the maximum error corresponds to half a pixel. Then, the final position estimation error on the ground is also determined by the attitude and altitude parameters of the camera, as in relative position estimation.

4.2 Absolute Position Estimation Using DEM Matching

In the previous section, the image matching algorithm was employed to match artificial structures in scenes. However, it is not effective for mountain areas where there is no artificial object having distinct gray-level changes. In such a case, matching based on elevations is effective, so DEM-based absolute position estimation is proposed.

The proposed absolute position estimation algorithm using DEM matching localizes the aircraft based on the matching between the relative REM and the relative DEM [10]. The relative REM and DEM are obtained with respect to the REM and DEM at a global feature point, MP_n, respectively. The global feature point is the point at which the variance of the gray level is maximum, as used in relative position estimation. In this algorithm, a wide REM is reconstructed by consecutively combining multiple sets of sampled REMs acquired along 1D lines, each of which is obtained over the region overlapped by two successive images. Note that two consecutive frames are enough to obtain the wide REM by combining the 1D elevations, under the assumption that the aircraft flies with the prespecified altitude and velocity. Then, the position of the aircraft is estimated by matching the relative REM with the relative DEM. The proposed algorithm consists of two stages: recovering the elevations at sample points and matching the relative REM with the relative DEM. In the first stage, the elevations at sample feature points along a line are calculated based on the stereo matching method, with point correspondences established between two consecutive aerial images. In the second stage, a matching point is found by searching for the location in the DEM whose local terrain is most similar to the reconstructed REM, where relative REMs and DEMs are employed for establishing correspondences.

First, the global feature point that yields the maximum local variance is extracted in the previous frame, where a 31 × 31 local window is used for variance computation. It is used as a reference point over the whole frame in computing the relative REMs and DEMs. Feature point correspondence is established by block-based matching using the NCC measure.

In Fig. 4, FP_{n-k-1} denotes the global feature point having the maximum variance at the (n-k-1)th (previous) frame I_{n-k-1} in an aerial image sequence. MP_{n-k} represents the matching point in the (n-k)th (current) frame I_{n-k}, corresponding to the global feature point FP_{n-k-1}. P_{n-k-1} and P_{n-k} denote the 3D positions of the optical centers of the camera attached to the aircraft in the previous and current frames, respectively. The difference between these two positions is denoted by B_{n-k}. Similarly, FP^j_{n-k-1}, 1 ≤ j ≤ J, is the jth sample feature point in the previous frame, where J denotes the total number of sample feature points employed. The sample feature points FP^j_{n-k-1} are selected along a fixed row with equal intervals. In the experiments, they are selected along the 80th row, 25 pixels apart from each other. It is assumed that the size of the area overlapped by two successive images is greater than half the image size. The fixed line is arbitrarily chosen in the upper half region of the image, where the aircraft is assumed to fly upwards in the image plane.

TABLE 1
Image Characteristics of the Four Test Image Sequences

MP^j_{n-k} in the current frame is the corresponding feature point detected by the NCC measure. Note that these matching points are detected for (N-1) image pairs (k = N-2 to 0).

A sampled REM is recovered with the consecutive sets of feature points and matching points, i.e., the REM at M^j_{n-k} is calculated by the stereo matching method [8], [9] with FP^j_{n-k-1} and MP^j_{n-k}, as well as the attitude parameters φ, ω, and κ. From the triangle P_{n-k-1} P_{n-k} M_{n-k} in Fig. 4,

$$B_{n-k} = P_{n-k} - P_{n-k-1} = (M_{n-k} - P_{n-k-1}) - (M_{n-k} - P_{n-k}) = p\,(FP_{n-k-1} - P_{n-k-1}) - q\,(MP_{n-k} - P_{n-k})$$

is given, where p and q can be computed from P_{n-k-1}, P_{n-k}, FP_{n-k-1}, and MP_{n-k}. The relative REM Rel_REM^j_{n-k} at M^j_{n-k} is defined by subtracting the REM at M_{n-k} from that at M^j_{n-k}, where the REM at M_{n-k} is estimated from the global feature point FP_{n-k-1} and its matching point MP_{n-k}. Similarly, the relative DEM Rel_DEM^j_{n-k} is defined.

In the second stage, the proposed localization algorithm searches the area, with a 5 pixel × 5 pixel interval, to find the matching point in the DEM. Assuming that the resolution of the DEM is about 2 m/pixel, the real search interval is approximately equal to 10 m × 10 m. The absolute position is estimated by finding the position having the minimum cost in matching the relative REM and the relative DEM, where an M-estimator [13] is used for computing the cost. Note that the M-estimator is one of the robust estimators, defined by replacing the square operation of least squares with a symmetric cost function that limits large errors; the Huber metric is used as the symmetric cost function. The initial position is given by relative position estimation, and the absolute position is obtained by searching around the initial position. The proposed DEM matching algorithm is described in detail in [10].

Here, M_{n-k} = (M_{n-k,x}, M_{n-k,y}, M_{n-k,z}) signifies the ground point corresponding to the global feature point FP_{n-k-1}, with M_{n-k,z} corresponding to the REM at M_{n-k}. Similarly, M^j_{n-k} = (M^j_{n-k,x}, M^j_{n-k,y}, M^j_{n-k,z}) is the ground point corresponding to the sampled feature point FP^j_{n-k-1} in the previous frame and its matching point MP^j_{n-k} in the current frame, and (x1 - x0) × (y1 - y0) denotes the size of the search area in the DEM. In the experiments, DEM(x, y) represents the elevation at position (x, y). DEM_{n-k} (DEM^j_{n-k}) signifies the elevation at the global feature point FP_{n-k-1} (the jth sampled feature point FP^j_{n-k-1}). Rel_DEM^j_{n-k} (Rel_REM^j_{n-k}) signifies the relative DEM (REM) at M^j_{n-k} with respect to that at M_{n-k}. The error in elevation estimation is assumed to be the same for all feature points in a fixed frame. Thus, relative elevations instead of absolute elevations are used in computing the matching distance, for magnitude-invariant matching.

For DEM matching, the error is largely dependent on the resolution of the REM and DEM. As addressed before, the resolution of the DEM is approximately 2 m/pixel and the sampling interval is five pixels to match the DEM and REM. Additionally, the resolution of the REM depends on the altitude of the aircraft, with the minimum accuracy of estimation being about 10 m.
5 EXPERIMENTAL RESULTS AND DISCUSSIONS

Experiments with four sets of real sequential aerial images were conducted to show the effectiveness of the proposed algorithm. The first aerial image sequence (test sequence I) was taken by a camera attached to a helicopter over Daejon, Korea. The second, third, and fourth aerial image sequences (test sequences II, III, and IV) were taken over Kongju, Korea, by a camera attached to a light airplane. The test sequences I, II, III, and IV consist of about 1,400, 1,770, 1,360, and 1,570 frames (sampled at one frame/second), with total flight trajectories of about 54, 27, 126, and 124 km, respectively. The trajectories are composed of a number of curved orbits and straight paths. The test sequence I was acquired by a β-cam video camera with a field of view of 42.4° × 54.7°, whereas the test sequences II, III, and IV were obtained by a Hi-8 mm video camera with fields of view of 5.4° × 2.0°, 38.4° × 25.7°, and 18.0° × 22.5°, respectively. Attitude and altitude parameters are obtained from a gyroscope and an altimeter, respectively. This equipment was mounted with the camera and synchronized so as to correctly measure the parameters for each frame. Characteristics of the four test image sequences are summarized in Table 1.

Fig. 5. Sample test images. (a) Test sequence I (320 × 240). (b) Test sequence II (240 × 320). (c) Test sequence III (240 × 320). (d) Test sequence IV (320 × 240). (e) IRS (256 × 256).

Note that the ground resolution (m/pixel) is given by 2h·tan(θ_r/2)/N_c, where h, θ_r, and N_c are the altitude of the camera, the field of view along the row direction of the input frame, and the number of columns, respectively, assuming that the camera is down-looking. Usually, the ground resolution ranges approximately from 0.5 to 2 m/pixel. We use universal transverse Mercator (UTM) coordinates as the global coordinates; Kongju and Daejon are located in zone 52. The window size (W_x × W_y) is set to 31 × 15 for NCC-based block matching. The search area (U_x × U_y) is set to 175 × 113 at the first frame and reduced to 60 × 80 for the subsequent frames by making use of the previously estimated displacement.

Additionally, a numerical error analysis of the proposed system is given. It is difficult to analyze the system mathematically because it is nonlinear. Thus, the robustness and sensitivity of the proposed system are shown through experiments.
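A quick numeric check of the ground-resolution formula (a sketch; pairing the 42.4° row field of view of test sequence I with its 320-pixel dimension, and converting the 2,540 ft altitude of Section 5.1 to meters, are assumptions made here for illustration):

```python
import math

def ground_resolution(h_m, fov_deg, n_pixels):
    """Ground resolution (m/pixel) of a down-looking camera:
    2 * h * tan(fov/2) / N."""
    return 2.0 * h_m * math.tan(math.radians(fov_deg) / 2.0) / n_pixels

# Test sequence I: 2,540 ft ~ 774 m altitude, 42.4 deg over 320 columns.
print(ground_resolution(2540 * 0.3048, 42.4, 320))  # ~1.9 m/pixel
```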

Fig. 6. Attitude parameters of the four test sequences as a function of the frame index. (a) Roll. (b) Pitch. (c) Yaw.

5.1 Estimated Trajectories for Four Aerial Image Sequences

Figs. 5a, 5b, 5c, and 5d show sample aerial images of the four test image sequences I, II, III, and IV, respectively. All of them were quantized to eight bits. Fig. 5a (320 × 240) was acquired at 2,540 feet on the helicopter. In parentheses, the image size is represented by N_c × N_r, where N_c and N_r denote the numbers of column and row pixels, respectively. Figs. 5b and 5c (240 × 320) were acquired at 5,000 and 6,000 feet on the light airplane, respectively. Fig. 5d

Fig. 7. Estimation result (test sequence I). (S: Satellite image matching; H: High-resolution image matching; D: DEM matching). (a) Estimated trajectory with relative and integrated position estimations. (b) Estimation error.

(320 × 240) was obtained at 6,000 feet on the light airplane. Fig. 5e shows the IRS image (256 × 256), which was quantized to six bits with 5 m/pixel ground resolution, covering the same region as the aerial image shown in Fig. 5a. Even though both images show the same region containing a number of apartments, they look different because they were acquired with different attitude parameters by different image sensors. While the resolution of an input frame is almost the same as that of high-resolution reference images such as Figs. 5a, 5b, 5c, and 5d, the satellite image looks different from the input video frame because the resolution and image characteristics are different. Thus, image compensation is necessary for effective matching of the two images.

Each image in Fig. 5 shows the image at the position at which absolute position estimation was performed. Figs. 5a, 5b, 5c, and 5d correspond to the reference images at the positions whose symbol indices are given by S2 (in Fig. 7), H2 (in Fig. 8), D1 (in Fig. 9), and H3 (in Fig. 10), respectively. Fig. 5e is used as a reference image at the position whose symbol index is given by S2 (in Fig. 7).

Fig. 8. Estimation result (test sequence II). (S: Satellite image matching; H: High-resolution image matching; D: DEM matching). (a) Estimated trajectory with relative and integrated position estimations. (b) Estimation error.

Positions at which absolute position estimation is performed are marked with symbols consisting of two characters. The first character represents the type of absolute position estimation method: S, H, and D for matching using the IRS satellite image, the high-resolution image, and DEM information, respectively. The second, numeric character denotes the sequential index. Because it is assumed that the trajectory of the aircraft is planned before flying, the spots at which absolute position estimation is performed are determined in advance. Image matching is employed for the images that contain distinct artificial structures. In particular, the satellite image is used for regions where a high-resolution reference image is not available. DEM matching is used for mountain regions showing a large rate of change of elevation.

Figs. 6a, 6b, and 6c show the roll, pitch, and yaw parameters as a function of the frame index for the four test sequences, respectively. The outline of the trajectory is determined by the yaw and roll parameters, and the height of the aircraft is affected by the pitch parameter. The variance of the roll values becomes large under strong wind. As shown in Fig. 6a, the roll angle of the test sequences III and IV shows a large variation because of strong wind.

Figs. 7a, 8a, 9a, and 10a show the experimental results of relative and absolute position estimation with the test sequences I, II, III, and IV, respectively. In the trajectory representation, thick and thin lines represent the real (correct) trajectory and the trajectory estimated by the proposed integrated system, respectively. The dotted line denotes the trajectory obtained by manually identifying the position of the aircraft on a 5000:1 map, in which some

Fig. 9. Estimation result (test sequence III). (S: Satellite image matching; H: High-resolution image matching; D: DEM matching). (a) Estimated trajectory with relative and integrated position estimations. (b) Estimation error.

inevitable error is observed. It is very difficult to establish the accuracy analytically; however, the error in the image space is less than ten pixels, which corresponds to approximately 30 m in the worst case. Note that only the positions having distinct features are manually estimated for accuracy. Because the relative position estimation method is based on a recursive approach, the estimated error increases with the frame number. The average position error, defined as

Fig. 10. Estimation result (test sequence IV). (S: Satellite image matching; H: High-resolution image matching; D: DEM matching). (a) Estimated trajectory with relative and integrated position estimations. (b) Estimation error.

$$\frac{1}{K}\sum_{k=1}^{K}\sqrt{(X_k - X'_k)^2 + (Y_k - Y'_k)^2}$$

is used as a performance measure, where (X'_k, Y'_k) and (X_k, Y_k) represent the positions estimated manually and by relative/integrated position estimation, respectively, and K denotes the total number of manually estimated frames.
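In code, this measure is simply the mean Euclidean distance between the estimated and manually identified positions (a minimal sketch, assuming the trajectories are given as NumPy arrays of UTM coordinates in meters):

```python
import numpy as np

def average_position_error(est_xy, ref_xy):
    """Mean Euclidean distance (m) between estimated positions (X_k, Y_k)
    and manually identified reference positions (X'_k, Y'_k)."""
    diff = np.asarray(est_xy, float) - np.asarray(ref_xy, float)
    return float(np.mean(np.hypot(diff[:, 0], diff[:, 1])))
```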

For the test image sequence I, the average position error by relative position estimation alone is 291 m. The average position error by the proposed system with integrated position estimation is 175 m. The maximum errors by relative and integrated position estimation are 1,211 and 593 m, respectively. The experiments show that the estimation error is greatly reduced by absolute position estimation. Note that absolute position estimation pulls the trajectory, deviated by error accumulation in relative position estimation, back to the real trajectory. It is confirmed that absolute position estimation is effective. Fig. 7b shows that absolute position estimation reduces the position error and that absolute position estimation using DEM information provides good performance in mountain areas in terms of the position error.

Fig. 8b shows the experimental results of the relative and integrated position estimation algorithms with the test sequence II. Because the relative position estimation method is based on a recursive approach, the estimated error increases with time, similar to the case of the test sequence I. The average position error by relative position estimation alone is 538 m, whereas that by the proposed system with integrated position estimation is 105 m. The maximum errors by relative and integrated position estimation are 1,040 and 367 m, respectively. The estimation error is greatly reduced by absolute position estimation. Because this test sequence was acquired on a relatively straight flying trajectory, absolute position estimation is very effective. In the test sequence II, the angle of view is small (see Table 1). As a result, the ground coverage of the image is too small to match the relative DEM with the relative REM. Thus, the image matching algorithm is used as the absolute position estimation method. Experiments with the test sequence II also show that the proposed system yields a reasonable performance by effectively combining both relative and absolute position estimation methods.

Fig. 9b shows the experimental results of the relative and integrated position estimation methods with the test sequence III. The average position error by relative position estimation alone is 456 m, whereas that by the proposed system with integrated position estimation is 309 m. The maximum errors by relative and integrated position estimation are 1,136 and 977 m, respectively. The estimation error is reduced by absolute position estimation. Even though the total flight distance is long compared with that of the test sequence II, the estimation error is small because this test image sequence was acquired with relatively stable attitude parameters.

Fig. 10b shows the experimental results of the relative and integrated position estimation methods with the test sequence IV. The average position error by relative position estimation alone is 757 m, whereas that by the proposed system with integrated position estimation is 209 m. The maximum errors by relative and integrated position estimation are 1,521 and 1,495 m, respectively. The estimation error is greatly reduced by absolute position estimation. Experiments with the test sequence IV also show that the proposed system greatly reduces the error accumulated in relative position estimation. When this test sequence was acquired, the attitude of the aircraft was very unstable because of strong wind, which gives large fluctuations in the roll parameter values. As a result, the estimation error for this test sequence is larger than that of the test sequence III.

Experimental results with the four real test sequences show that relative position estimation yields reasonable estimates. However, the estimation error tends to increase as the aircraft navigates, which can be reduced by absolute position estimation. High-resolution image matching is more effective than the other algorithms because the resolution of the reference images is high. IRS image matching is needed for the spots where high-resolution images are not available. DEM matching is suitable for mountain regions with significant elevation change.

5.2 Numerical Error Analysis

The sensitivity of the proposed system to error is shown through experiments. There are two kinds of error sources: internal and external. The internal error is due to incorrect correspondences, whereas the external error includes measurement error and inaccuracy of the reference information. The sensitivity to relative position estimation and measurement errors is shown experimentally with the test sequence I.

The derivatives of the displacement with respect to the correspondence are given by

$$\frac{\partial B_{n,x}}{\partial MP_{n,x}} = (M_{n,z} - P_{n,z})\,\frac{(R_{11}R_{32} - R_{12}R_{31})MP_{n,y} + (R_{11}R_{33} - R_{13}R_{31})f}{(R_{31}MP_{n,x} + R_{32}MP_{n,y} + R_{33}f)^2}$$

$$\frac{\partial B_{n,x}}{\partial MP_{n,y}} = (M_{n,z} - P_{n,z})\,\frac{(R_{12}R_{31} - R_{11}R_{32})MP_{n,x} + (R_{12}R_{33} - R_{13}R_{32})f}{(R_{31}MP_{n,x} + R_{32}MP_{n,y} + R_{33}f)^2}$$

$$\frac{\partial B_{n,y}}{\partial MP_{n,x}} = (M_{n,z} - P_{n,z})\,\frac{(R_{21}R_{32} - R_{31}R_{22})MP_{n,y} + (R_{21}R_{33} - R_{31}R_{23})f}{(R_{31}MP_{n,x} + R_{32}MP_{n,y} + R_{33}f)^2}$$

$$\frac{\partial B_{n,y}}{\partial MP_{n,y}} = (M_{n,z} - P_{n,z})\,\frac{(R_{22}R_{31} - R_{32}R_{21})MP_{n,x} + (R_{22}R_{33} - R_{32}R_{23})f}{(R_{31}MP_{n,x} + R_{32}MP_{n,y} + R_{33}f)^2},$$

where the R_{ij} are the entries of the rotation matrix determined by the attitude parameters. From these derivatives, the sensitivities of the displacement along the x and y directions are defined by

$$S_x = E(S_{n,x}) = E\left[\frac{\sqrt{\left(\frac{\partial B_{n,x}}{\partial MP_{n,x}}\,\Delta MP_{n,x}\right)^2 + \left(\frac{\partial B_{n,y}}{\partial MP_{n,x}}\,\Delta MP_{n,x}\right)^2}}{\sqrt{B_{n,x}^2 + B_{n,y}^2}}\right],$$

$$S_y = E(S_{n,y}) = E\left[\frac{\sqrt{\left(\frac{\partial B_{n,x}}{\partial MP_{n,y}}\,\Delta MP_{n,y}\right)^2 + \left(\frac{\partial B_{n,y}}{\partial MP_{n,y}}\,\Delta MP_{n,y}\right)^2}}{\sqrt{B_{n,x}^2 + B_{n,y}^2}}\right],$$

where E(·) represents expectation. The sensitivities S_x and S_y along the x and y directions, computed over the entire set of frames, are experimentally obtained as 3.4 and 1.6 percent, respectively, assuming that the correspondence error is half a pixel. On the other hand, the error caused by a large mismatch is approximately given by 2h·tan(θ_r/2)·d/N_c, where h, θ_r, d, and N_c represent the altitude of the camera, the field of view along the row direction of the input frame, the disparity error, and the number of columns, respectively, assuming that the camera is down-looking.

Fig. 11. Error sensitivity of the proposed system. (a) Average relative error of B_x as a function of the standard deviation of the Gaussian noise added to the attitude parameters. (b) Average relative error of B_y as a function of the standard deviation of the Gaussian noise added to the attitude parameters. (c) Average relative errors in the altitude parameter. (d) Estimation error as a function of the frame index with noisy attitude parameters contaminated by additive Gaussian noise (standard deviation = 1 degree). (e) Estimation error as a function of the frame index with noisy attitude parameters contaminated by additive Gaussian noise (standard deviation = 5 degrees). (f) Estimation error as a function of the frame index with the noisy altitude parameter contaminated by additive Gaussian noise (standard deviation = 2 and 6 percent of the altitude value).
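The mismatch-error figures quoted below follow from the same formula (a sketch; pairing the 54.7° field of view with 320 columns is an assumption chosen here to reproduce the quoted 3.23 m/pixel):

```python
import math

def mismatch_error(h_m, fov_deg, n_cols, disparity_px):
    """Ground error (m) caused by a disparity error of d pixels:
    2 * h * tan(fov/2) * d / N_c."""
    return (2.0 * h_m * math.tan(math.radians(fov_deg) / 2.0)
            * disparity_px / n_cols)

print(mismatch_error(1000.0, 54.7, 320, 1))   # ~3.23 m for a 1-pixel error
print(mismatch_error(1000.0, 54.7, 320, 10))  # ~32.3 m for a 10-pixel error
```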

The error is about 3.23 m/pixel when h is 1,000 m. As a result, the error caused by a large mismatch (1 to 10 pixels) ranges from 3.23 m to 32.3 m. Such a large estimation error should be compensated by absolute position estimation.

Average relative errors of the displacements B_x and B_y as a function of the standard deviation of the Gaussian noise added to the attitude parameters are shown in Figs. 11a and 11b, respectively. Also, those as a function of the standard deviation of the Gaussian noise added to the altitude parameter are shown in Fig. 11c. Assuming that the displacements calculated with the acquired attitude parameters are correct, the average relative errors are obtained with noisy measurements. The displacement error depends most strongly on the accuracy of the roll parameter values.

Figs. 11d and 11e show the average position error as a function of the frame index for the noisy attitude parameters contaminated by additive Gaussian noise with the standard deviation equal to 1 and 5 degrees, respectively, under the assumption that the trajectory estimated by relative position estimation is correct. The roll parameter affects the estimation the most. Fig. 11f shows the average position error as a function of the frame index for the noisy altitude parameter contaminated by additive Gaussian noise with the standard deviation equal to 2 and 6 percent of the altitude value, which shows that the proposed system reasonably tolerates up to 5 percent error.

6 CONCLUSIONS

This paper proposes an integrated location estimation algorithm using aerial image sequences. The proposed integrated system consists of relative and absolute positioning parts. The relative position estimation algorithm yields good estimates on the whole over short navigation. As the aircraft navigates, the estimation error accumulates. To reduce the estimation error, absolute positioning algorithms based on image and DEM matching are employed. Experiments with real aerial image sequences show the effectiveness of the proposed integrated position estimation system, in terms of the average and maximum errors. Further research will focus on a real-time implementation of the proposed system on the TMS320C80 digital signal processor.

APPENDIX
LIST OF ACRONYMS

. BMA: Block matching algorithm
. DEM: Digital elevation model
. DT: Distance transform
. GPS: Global positioning system
. HT: Hough transform
. INS: Inertial navigation system
. IRS: Indian remote sensing
. NCC: Normalized correlation coefficient
. REM: Recovered elevation map
. ROHM: Robust oriented Hausdorff measure
. TERCOM: Terrain contour matching
. USGS DOQs: US Geological Survey digital orthophoto quadrangles
. UTM: Universal transverse Mercator

ACKNOWLEDGMENTS

This work was supported in part by the Agency for Defense Development and by ACRC (Automatic Control Research Center), Seoul National University.

REFERENCES

[1] P.R. Wolf, Elements of Photogrammetry. Tokyo, Japan: McGraw-Hill Kogakusha, 1974.
[2] C.-F. Lin, Advanced Control Systems Design. Englewood Cliffs, N.J.: Prentice-Hall, 1994.
[3] S.J. Merhav and Y. Bresler, "On-line Vehicle Motion Estimation from Visual Terrain Information Part I: Recursive Image Registration," IEEE Trans. Aerospace and Electronic Systems, vol. 22, no. 5, pp. 583-587, Sept. 1986.
[4] Y. Bresler and S.J. Merhav, "On-line Vehicle Motion Estimation from Visual Terrain Information Part II: Ground Velocity and Position Estimation," IEEE Trans. Aerospace and Electronic Systems, vol. 22, no. 5, pp. 588-603, Sept. 1986.
[5] Q. Zheng and R. Chellappa, "A Computational Vision Approach to Image Registration," IEEE Trans. Image Processing, vol. 2, no. 3, pp. 311-326, July 1993.
[6] J.P. Golden, "Terrain Contour Matching (TERCOM): A Cruise Missile Guidance Aid," Proc. Int'l Soc. for Optical Eng. (SPIE) Image Processing for Missile Guidance, vol. 238, pp. 10-18, July/Aug. 1980.
[7] B.K.P. Horn, Robot Vision. Cambridge, Mass.: MIT Press, 1986.
[8] J.J. Rodríguez and J.K. Aggarwal, "Matching Aerial Images to 3-D Terrain Maps," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 12, pp. 1138-1149, Dec. 1990.
[9] M.S. Kang, R.-H. Park, and K.-H. Lee, "Recovering an Elevation Map by Stereo Modeling of the Aerial Image Sequence," Optical Eng., vol. 33, no. 11, pp. 3793-3802, Nov. 1994.
[10] D.-G. Sim and R.-H. Park, "Localization Based on the Gradient Information for DEM Matching," Proc. MVA '98: IAPR Workshop Machine Vision Applications, pp. 266-269, Nov. 1998.
[11] R. Kumar, H.S. Sawhney, J.C. Asmuth, A. Pope, and S. Hsu, "Registration of Video to Geo-Referenced Imagery," Proc. 14th Int'l Conf. Pattern Recognition, pp. 1393-1400, Aug. 1998.
[12] D.-G. Sim, S.-Y. Jeong, D.-H. Lee, R.-H. Park, R.-C. Kim, S.U. Lee, and I.-C. Kim, "Hybrid Estimation of Navigation Parameters from Aerial Image Sequence," IEEE Trans. Image Processing, vol. 8, no. 3, pp. 429-435, Mar. 1999.
[13] D.-G. Sim and R.-H. Park, "Two-Dimensional Object Alignment Based on the Robust Oriented Hausdorff Similarity Measure," IEEE Trans. Image Processing, vol. 10, no. 3, pp. 475-483, Mar. 2001.
[14] J.L. Barron, D.J. Fleet, and S.S. Beauchemin, "Performance of Optical Flow Techniques," Int'l J. Computer Vision, vol. 12, no. 1, pp. 43-77, Feb. 1994.
[15] B.K.P. Horn and B.G. Schunck, "Determining Optical Flow," Artificial Intelligence, vol. 17, pp. 185-203, Aug. 1981.
[16] S.S. Beauchemin and J.L. Barron, "The Computation of Optical Flow," ACM Computing Surveys, vol. 27, no. 3, pp. 433-467, Sept. 1995.
[17] J.-H. Park, K.-D. Hwang, S.B. Pan, R.-C. Kim, R.-H. Park, S.U. Lee, and I.-C. Kim, "Implementation of the Navigation Parameter Extraction from the Aerial Image Sequence on TMS320C80 DSP Board," Proc. Eighth Int'l Conf. Signal Processing Applications & Technology, pp. 1562-1566, Sept. 1997.
[18] M.G. Lee, J.-H. Park, S.B. Pan, R.-C. Kim, K.S. Kim, R.-H. Park, S.U. Lee, and I.-C. Kim, "Implementation of the Image-Based Absolute Position Compensation Algorithm in the Navigation Parameter Extraction System," Proc. Ninth Int'l Conf. Signal Processing Applications & Technology, pp. 1603-1607, Sept. 1998.
[19] G. Borgefors, "Hierarchical Chamfer Matching: A Parametric Edge Matching Algorithm," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 10, no. 6, pp. 849-865, Nov. 1988.
[20] D.P. Huttenlocher, G.A. Klanderman, and W.J. Rucklidge, "Comparing Images Using the Hausdorff Distance," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 9, pp. 850-863, Sept. 1993.

[21] M.-P. Dubuisson and A.K. Jain, "A Modified Hausdorff Distance for Object Matching," Proc. 12th Int'l Conf. Pattern Recognition, pp. 566-568, Oct. 1994.
[22] D.-G. Sim, O.-K. Kwon, and R.-H. Park, "Object Matching Algorithm Using Robust Hausdorff Distance Measures," IEEE Trans. Image Processing, vol. 8, no. 3, pp. 425-429, Mar. 1999.

Dong-Gyu Sim received the BS and MS degrees in electronics engineering from Sogang University, Seoul, Korea, in 1993 and 1995, respectively. He also received the PhD degree at the image processing laboratory of the same university, in 1999. He was at Hyundai Electronics Co., Ltd. from 1999 to 2000, where he was involved in MPEG-7 standardization. He was a senior research engineer at Varo Vision Co., Ltd., working on MPEG-4 applications. Currently, he works at Sogang University in the Department of Electronic Engineering. His current research interests are image processing, computer vision, pattern recognition, and MPEG-4/7.

Rae-Hong Park (S '76-M '84-SM '99) received the BS and MS degrees in electronics engineering from Seoul National University, Seoul, Korea, in 1976 and 1979, respectively, and the MS and PhD degrees in electrical engineering from Stanford University, Stanford, California, in 1981 and 1984, respectively. He joined the faculty of the Department of Electronic Engineering at Sogang University, Seoul, Korea, in 1984, where he is currently a professor. In 1990, he spent his sabbatical year at the Computer Vision Laboratory of the Center for Automation Research, University of Maryland, College Park, as a visiting associate professor. He received a postdoctoral fellowship from the Korea Science and Engineering Foundation (KOSEF) in 1990. He received the Academic Award in 1987 from the Korea Institute of Telematics and Electronics (KITE) and the Haedong Paper Award in 2000 from the Institute of Electronics Engineers of Korea (IEEK). Also, he received the First Sogang Academic Award in 1997 and the Professor Achievement Excellence Award in 1999, from Sogang University. He served as editor for the KITE Journal of Electronics Engineering in 1995-1996. His current research interests are computer vision, pattern recognition, and video communication. He is a senior member of the IEEE.

Rin-Chul Kim (M '95) received the BS, MS, and PhD degrees, all in control and instrumentation engineering, from Seoul National University, in 1985, 1987, and 1992, respectively. From 1992 to 1994, he was with Daewoo Electronics Co., Ltd., and worked on the development of HDTV and DBS systems. From 1994 to 1999, he was an assistant professor with the School of Information and Computer Engineering at Hansung University, Seoul. In 1999, he joined the School of Electrical and Computer Engineering at the University of Seoul as an assistant professor. His current research interests include image processing, target tracking, image data compression, and VLSI signal processing. He is a member of the IEEE.

Sang Uk Lee (S '75-M '80-SM '99) received the BS degree from Seoul National University, Seoul, Korea, in 1973, the MS degree from Iowa State University, Ames, in 1976, and the PhD degree from the University of Southern California, Los Angeles, in 1980, all in electrical engineering. In 1980-1981, he was with the General Electric Company, Lynchburg, Virginia, working on the development of digital mobile radio. In 1981-1983, he was a member of technical staff at the M/A-COM Research Center, Rockville, Maryland. In 1983, he joined the Department of Control and Instrumentation Engineering at Seoul National University as an assistant professor, where he is now a professor in the School of Electrical Engineering. Currently, Dr. Lee is also affiliated with the Automation and Systems Research Institute and the Institute of New Media and Communications at Seoul National University. His current research interests are in the areas of image and video signal processing, digital communication, and computer vision. He served as editor-in-chief for the Transactions of the Korean Institute of Communication Science from 1994 to 1996. He is a member of the editorial board of the Journal of Visual Communication and Image Representation and an associate editor for IEEE Transactions on Circuits and Systems for Video Technology. He is a member of Phi Kappa Phi. He is a senior member of the IEEE.

Ihn-Cheol Kim received the BS and MS degrees in electronics engineering from Dankook University, Seoul, Korea, in 1985 and 1987, respectively. He is a senior research engineer at the Agency for Defense Development of Korea, where he is working in the areas of computer vision and image understanding.
