## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

ANALYTICAL PHOTOGRAMMETRY

LECTURE NOTES

Robert Burtch

Professor

August 2000

SURE 440 – Analytical Photogrammetry Lecture Notes Page i

TABLE OF CONTENT

TOPIC Page

Coordinate Transformations 1

Basic Principles 1

General Affine Transformation 3

Orthogonal Affine Transformation 5

Isogonal Affine Transformation 5

Example of an Isogonal Affine Transformation 6

Rigid Body Transformation 8

Polynomial Transformations 8

Projective Transformation 9

Transformations in Three Dimensions 9

Corrections to Photo Coordinates 12

Analytical Photogrammetry Instrumentation 12

Ground Targets 13

Abbe’s Comparator Principle 13

Basic Analytical Photogrammetric Theory 14

Interior Orientation 14

Film Deformation 15

Lens Distortion 17

Seidel Aberration Distortion 17

Decentering Distortion 18

Atmospheric Refraction 21

Earth Curvature 24

Example 27

Projective Equations 34

Introduction 34

Direction Cosines 35

Sequential Rotations 38

Derivation of the Gimbal Angles 39

Linearization of the Collinearity Equation 46

Numerical Resection and Orientation 51

Introduction 51

Case I 52

Example Single Photo Resection – Case I 55

Case II 60

Case III 63

SURE 440 – Analytical Photogrammetry Lecture Notes Page ii

Principles of Airborne GPS 67

Introduction 67

Advantages of Airborne GPS 68

Error Sources 69

Camera Calibration 71

GPS Signal Measurements 72

Flight Planning for Airborne GPS 72

Antenna Placement 73

Determining the Exposure Station Coordinates 75

Determination of Integer Ambiguity 80

GPS-Aided Navigation 81

Processing Airborne GPS Observations 82

Strip Airborne GPS 92

Combined INS and GPS Surveying 94

Texas DOT Accuracy Assessment Project 95

Economics of Airborne-GPS 97

References 101

Coordinate Transformations Page 1

COORDINATE TRANSFORMATIONS

Basic Principles

A coordinate transformation is a mathematical process whereby coordinate values expressed

in one system are converted (transformed) into coordinate values for a second coordinate

system. This does not change the physical location of the point. An example is when the

field surveyor sets up an arbitrary coordinate system, such as orienting the axes along two

perpendicular roads. Later, the office may want to place this survey onto the state plane

coordinate system. This can be done by a simple transformation. The geometry is shown in

figure 1.

Figure 1. Geometry of simple linear transformation in two dimensions.

In figure 1, a point P can be expressed in a U, V coordinate system as U

P

and V

P

. Likewise,

in the X, Y coordinate system, the point is defined as X

P

and Y

P

. Assume for the time that

both systems share the same origin. Then the X-axis is oriented at some angle α from the U-

axis. The same applies for the Y and V axes. The X-coordinate of the point can be shown to

be

ep de X

P

+ ·

From triangle feP:

α

· ⇒ · α

cos

fP

eP

eP

fP

cos

The X

P

coordinate can be written as

Coordinate Transformations Page 2

eP de X

P

+ ·

But,

α

· ∴ · α

α · ∴ · α

cos

U

eP

eP

U

cos

tan Y de

Y

de

tan

P P

P

P

Then,

α − α ·

+ α · α

α

+

α

α

·

α

+ α ·

sin Y cos X U

U sin Y cos X

cos

U

cos

sin Y

cos

U

tan Y X

P P P

P P P

P P

P

P P

In a similar fashion, the V

P

coordinate transformation can also be developed.

Pb ab V

P

+ ·

But,

( ) α − · ⇒

−

· α

α · ⇒ · α

sin bc X ab

bc X

ab

sin

tan Y bc

Y

bc

tan

P

P

P

P

Therefore, ab becomes

( )

α

α

− α ·

α α − ·

cos

sin

Y sin X

sin tan Y X ab

2

P P

P P

and Pb can be shown to be

α

· ⇒ · α

cos

Y

Pb

Pb

Y

cos

P P

Coordinate Transformations Page 3

The V

P

coordinate then becomes

α + α ·

,

_

¸

¸

α

α

−

α

+ α ·

α

+

α

α

− α ·

cos Y sin X

cos

sin

cos

1

Y sin X

cos

Y

cos

sin

Y sin X V

P P

2

P P

P

2

P P P

In these derivations, the angle of rotation (α) is a rotation to the right. It can easily be shown

that the conversion from U, V to X, Y will take on a similar form. Since the angle would be

in the opposite direction, one can insert a negative value for α and then arrive at the

following form of the transformation (recognizing that the sin (-α) = -sin α):

α + α − ·

α + α ·

cos V sin U Y

sin V cos U X

P P P

P P P

This can be shown in matrix form as:

1

]

1

¸

1

]

1

¸

α α −

α α

·

1

]

1

¸

P

P

P

P

V

U

cos sin

sin cos

Y

X

Next, we will take these transformation equations and expand them into different forms all

related to what is called the affine transformation.

General Affine Transformation

The general affine transformation is normally shown as

Which provides a unique solution if 0

b a

b a

2 2

1 1

≠ .

( ) 1

c y b x a ' y

c y b x a ' x

2 2 2

1 1 1

¹

¹

¹

;

¹

+ + ·

+ + ·

Coordinate Transformations Page 4

This is a two-dimensional linear transformation that is used in photogrammetry for the

following:

a) Transforms comparator coordinates to photo coordinates and is used for correcting

film distortion.

b) Connecting stereo models.

c) Transforms model coordinates to survey coordinates.

The property of the affine transformation is that it will carry parallel lines into parallel lines.

In other words, two lines that are parallel to each other prior to the transformation will

remain parallel after the transformation. It will not preserve orthogonality.

Figure 2. Physical interpretation of the affine transformation.

Figure 2 shows the physical interpretation involved in this transformation. The x and y axes

represent the original axis system while x’, y’ represent the newly transformed coordinate

system. The transformation can then be written as

where: ∆x’, ∆y’ are the translation elements in moving from the center of the original

coordinate system to the center of the transformed coordinate system,

C

x

, C

y

are scale factors in the x and y direction,

α is the angle of rotation, and

ε is the angle of non-orthogonality between the axes of the transformed

coordinate system.

Note that there are six parameters in this transformation (C

x

, C

y

, α, ε, ∆x’, and ∆y’). When

comparing equations (1) and (2), one can see the following.

( ) ( )

( ) ( ) ( ) ( )

( ) 2

' y cos y C sin x C ' y

' x sin y C cos x C ' x

y x

y x

¹

¹

¹

;

¹

∆ + ε + α + ε + α − ·

∆ + α + α ·

Coordinate Transformations Page 5

Orthogonal Affine Transformation

To the general affine case, one can impose the condition of orthogonality, i.e., ε → 0. This

results in a five parameter transformation (C

x

, C

y

, α, ∆x’, and ∆y’). This transformation is

useful when one takes into account the differences in the magnitude of film shrinkage in the

length of the film versus its width. The transformation is shown as:

Note that this transformation is non-linear.

Isogonal Affine Transformation

To the general case of the affine transformation one can impose two conditions. These

conditions are orthogonality (ε → 0) and uniform scale (C = C

x

= C

y

). The isogonal affine

transformation is also called the Helmert transformation, similarity transformation, Euclidean

transformation, and conformal transformation. It is shown as:

If one recalls those equalities expressed in equation (3), we can see that

Therefore, equation (5) can be expressed as

( )

( ) ( ) 3

' y c ' x c

cos C b sin C b

sin C a cos C a

2 1

y 2 y 1

x 2 x 1

¹

¹

¹

;

¹

∆ · ∆ ·

β + α · α ·

ε + α − · α ·

( ) 4

' y cos y C sin x C ' y

' x sin y C cos x C ' x

y x

y x

¹

¹

¹

;

¹

∆ + α + α − ·

∆ + α + α ·

( ) 5

' y cos y C sin x C ' y

' x sin y C cos x C ' x

¹

¹

¹

;

¹

∆ + α + α − ·

∆ + α + α ·

2 1

2 1

a b sin C

b a cos C

· − · α −

· · α

Coordinate Transformations Page 6

2 1 1

1 1 1

c y a x b ' y

c y b x a ' x

+ + − ·

+ + ·

or, as normally shown as:

In this form (6), the transformation is linear. The back solution can be shown as:

Example of an Isogonal Affine Transformation

The following are the

measured comparator

values:

x

UL

= 70.057 mm

y

UL

= -40.014 mm

x

LR

= 80.067 mm

y

LR

= -50.026 mm

x

PT

= 76.0985 mm

y

PT

= -41.9810 mm

The “true” photo

coordinates of the reseau are:

x’

UL

= 70.107 mm

y’

UL

= -39.843 mm

x’

LR

= 80.133 mm

y’

LR

= -49.820 mm

Recall that the transformation formulas can be written as:

( ) 6

d ay bx ' y

c by ax ' x

¹

¹

¹

;

¹

+ + − ·

+ + ·

( ) ( )

( ) ( )

( ) 7

b a

d ' y a c ' x b

y

b a

d ' y b c ' x a

x

2 2

2 2

¹

¹

¹

¹

¹

;

¹

+

− + −

·

+

− − −

·

Coordinate Transformations Page 7

The unknowns are a, b, c, and d while the measured values are x

UL

, y

UL

, x

LR

, and y

LR

. The

“true” values are x’

UL

, y’

UL

, x’

LR

, and y’

LR

. Differentiating the transformation formulas with

respect to the parameters would follow as:

0

d

' x

1

c

' x

y

b

' x

x

a

' x

UL UL

UL

UL

UL

UL

·

∂

∂

·

∂

∂

·

∂

∂

·

∂

∂

The design matrix (B) is shown as

The discrepancy vector (f) and the vector containing the parameters (∆ ∆∆ ∆) are shown as:

The normal equation is found using the following relationship where N is the normal

coefficient matrix:

N = B

t

B

d x b y a ' y

c y b x a ' x

d x b y a ' y

c y b x a ' x

LR LR LR

LR LR LR

UL UL UL

UL UL UL

+ − ·

+ + ·

+ − ·

+ + ·

( )

( )

( )

( )

1

1

1

1

]

1

¸

− −

−

− −

−

·

1

1

1

1

]

1

¸

−

−

·

1

1

1

1

1

1

1

1

1

]

1

¸

∂

∂

∂

∂

∂

∂

∂

∂

·

1 0 067 . 80 026 . 50

0 1 026 . 50 067 . 80

1 0 057 . 70 014 . 40

0 1 014 . 40 057 . 70

1 0 x y

0 1 y x

1 0 x y

0 1 y x

d , c , b , a

' y

d , c , b , a

' x

d , c , b , a

' y

d , c , b , a

' x

B

LR LR

LR LR

UL UL

UL UL

LR

LR

UL

UL

1

1

1

1

]

1

¸

· ∆

1

1

1

1

]

1

¸

−

−

·

1

1

1

1

]

1

¸

·

d

c

b

a

820 . 49

133 . 80

843 . 39

107 . 70

' y

' x

' y

' x

f

LR

LR

UL

UL

Coordinate Transformations Page 8

The inverse of the normal coefficient matrix is shown as:

The constant vector (t) and the solution (∆ ∆∆ ∆) are computed as:

The transformed coordinates of the point are then computed as:

Rigid Body Transformation

To the general affine transformation, one further set of conditions can be imposed. This

includes orthogonality and not scale (C

x

= C

y

= 1). In this case the transformation can be

shown with only three parameters (α, ∆x’, and ∆y’) as:

1

1

1

1

]

1

¸

− −

−

·

2

0 2

124 . 150 040 . 90 429 . 15422

040 . 90 124 . 150 0 429 . 15422

N

1

1

1

1

]

1

¸

−

·

−

942775 . 76

0 942775 . 76

748971 . 0 449211 . 0 009978 . 0

449211 . 0 748971 . 0 0 009978 . 0

N

1

1

1

1

1

]

1

¸

·

1

1

1

1

]

1

¸

−

−

· · ∆

1

1

1

1

]

1

¸

−

−

· ·

−

d

c

b

a

045424 . 0

014579 . 0

002547 . 0

999051 . 0

t N

663 . 89

240 . 150

776 . 33

068 . 1514

f B t

1 T

( )( ) ( )( )

( )( ) ( )( ) ( )

793 . 41

045424 . 0 9810 . 41 999051 . 0 0985 . 76 002547 . 0

d y a x b ' y

mm 148 . 76

014579 . 0 9810 . 41 002547 . 0 0985 . 76 999051 . 0

c y b x a ' x

PT PT PT

PT PT PT

− ·

− + − + − − ·

+ + − ·

·

+ − − + ·

+ + ·

Coordinate Transformations Page 9

( ) 8

' y cos y sin x ' y

' x sin y cos x ' x

¹

¹

¹

;

¹

∆ + α + α − ·

∆ + α + α ·

Polynomial Transformations

A polynomial can also be used to perform a transformation. This is given as

m

m

+ + + + + + ·

+ + + + + + ·

xy b y b x b y bb x b b ' y

xy a y a x a y a x a a ' x

5

2

4

2

3 2 1 0

5

2

4

2

3 2 1 0

An alternative from Mikhail [Ghosh, 1979] can also be used.

( ) ( )

( ) ( ) l

l

+ + − + + − ·

+ + − + + + ·

xy 2 A y x A y A x A B ' y

xy 2 A y x A y A x A A ' x

3

2 2

4 1 2 0

4

2 2

3 2 1 0

Projective Transformation

The projective equations are frequently used in photogrammetry. Shown here, without

derivation, is the form of the 2-D projective transformation.

1 y d x d

b y b x b

' y

1 y d x d

a y a x a

' x

2 1

3 2 1

2 1

3 2 1

+ +

+ +

·

+ +

+ +

·

Transformations in Three Dimensions

The developments that have already been presented represent the transformation in 2-D

space. Surveying measurements are increasingly being performed in a 3-D mode. The

approach is basically the same as above, except for the addition of one more axis about

which the transformation takes place. Discussion on the use of the projective equations will

be given in a later section.

Coordinate Transformations Page 10

Instead of using the projective equations, polynomials may be used to perform the 3-D

transformation. Ghosh [1979] gives the general form of this type of transformation.

m

m

m

+ + + + +

+ + + + + + + + ·

+ + + + +

+ + + + + + + + ·

+ + + + +

+ + + + + + + + ·

2

12

2

11

2

10 9

8 7

2

6

2

5

2

4 3 2 1 0

2

12

2

11

2

10 9

8 7

2

6

2

5

2

4 3 2 1 0

2

12

2

11

2

10 9

8 7

2

6

2

5

2

4 3 2 1 0

xz c y x c xy c zx c

yz c xy c z c y c x c z c y c x c c ' z

xz b y x b xy b zx b

yz b xy b z b y b x b z b y b x b b ' y

xz a y x a xy a zx a

yz a xy a z a y a x a z a y a x a a ' x

This transformation is not conformal therefore it should only be used where the rotation

angles are very small.

Mikhail presents another form of the 3-D polynomial, which is conformal in the three planes.

This is given as [Ghosh, 1979}:

( )

( )

( )

+ + + + + − − + + − + ·

+ + + + − + − + + + − ·

+ + + + − − + + + + ·

0 zx A 2 yz A 2 z y x A z A y A x A C ' z

xy A 2 0 yz A 2 z y x A z A y A x A B ' y

xy A 2 zx aA 0 z y x A z A y A x A A ' x

5 6

2 2 2

7 1 4 3 0

5 7

2 2 2

6 4 1 2 0

6 7

2 2 2

5 3 2 1 0

The 0’s here indicate that the coefficients for the terms, yz in x’, zx in y’, and xy in z’ are

zero.

A polynomial projective transformation can be shown, without derivation, as [Ghosh, 1979]:

1 z d y d x d

c z c y c x c

' z

1 z d y d x d

b z b y b x b

' y

1 z d y d x d

a z a y a x a

' x

3 2 1

4 3 2 1

3 2 1

4 3 2 1

3 2 1

4 3 2 1

+ + +

+ + +

·

+ + +

+ + +

·

+ + +

+ + +

·

Coordinate Transformations Page 11

A solution is possible provided that

0

d d d d

c c c c

b b b b

a a a a

4 3 2 1

4 3 2 1

4 3 2 1

4 3 2 1

≠

Corrections to Photo Coordinates Page 12

CORRECTIONS TO PHOTO COORDINATES

Analytical Photogrammetry Instrumentation

Analytical photogrammetry is performed on specialized instruments that have a very high

cost due to the fact that there is a limited market. With the onset of digital photogrammetry,

the instrumentation is cheaper (being the computer) but the software still remains expensive

for this specialized applications.

The design characteristics of analytical instrumentation include [Merchant, 1979]:

• High accuracy

• High reliability

• High measuring efficiency

• Low first cost

• Low cost of maintenance

In addition, operational efficiency becomes an important consideration. This factor involves the

necessary training required for the operator of the equipment. If the instrument requires an

individual with a basic theoretical background in photogrammetry along with experience, then

there will be a limited pool from which one can draw their operators. Operational efficiency

also involves on the comfort of the operator when operating the equipment. One of the

advantages of digital photogrammetry is that it has the capability, at least theoretically, to

completely automate the whole process and an individual with no basic understanding of

photogrammetric principles can do this.

There are various different kinds of photogrammetric instrumentation that can be used in

analytical photogrammetry. At the low end, precision analog, or semi-analytical (computer-

aided) stereoplotters can be used either in a monoscopic or stereoscopic mode. When used on a

stereoplotter, it is important to put all of the elements in their zero positions (ω’ = ω” = ϕ’ = ϕ”

= κ’ = κ” = by’ = by” = bz’ = bz” = 0) [Ghosh, 1979]. The base (bx), scale and Z-column

readings should be at some realistic value. Analytical plotters can also be used for analytical

photogrammetric measurements. These instruments are generally linked to analytical

photogrammetry software that helps the operator complete the photo measurements.

Comparators are designed specifically for precise photo measurements for analytical

photogrammetry. Comparators can be either monoscopic or stereoscopic. The photographs are

placed on the stages and all points that are imaged on the photo are measured. The last type of

instrument is the digital or softcopy plotter. Photos are scanned (or captured directly in a digital

form) and points are measured. With autocorrelation techniques the whole process of

aerotriangulation can be automated with the solution containing more points than can be done

manually.

Corrections to Photo Coordinates Page 13

To achieve the high accuracy demanded by many analytical photogrammetric applications, it is

important that the instrument upon which the measurements are made is well calibrated and

maintained. There are many systematic error sources associated with the comparator. They are

“a) Errors of the instrument system,

- scaling and periodic errors (of the x, y measuring systems involving scales,

spindles, coordinate counter, etc.);

- affinity errors (being the scale difference between x an y directions);

- errors of rectilinearity (bending) of the guide rails;

- lack of orthogonality between x and y axes (also known as ‘rectangularity

error’).

b) Backlash and tracking errors.

c) Dynamic errors (e.g., microscope velocity does not drop to zero at points to be

approached during the operation.

d) Errors of automation in the system,

- digital resolution (smallest incremental interval);

- errors due to deviation of the direction. This is because the control system

may not provide for the continuously variable scanning direction.”

[Ghosh, 1979, p.30]

One could determine the corrections to each of these error sources, although from a practical

perspective these errors are accounted for by transforming the photo measurements to the

“true” photo system, which is based on calibration.

Ground Targets

Ground targets can be one of three different types. Signalized points are targeted on the ground

prior to the flight. Several different target designs are used in photogrammetry. Detail points

are those well defined physical features that are imaged on the photography. These items can be

things like the intersection of roads (for small-scale mapping), intersections of sidewalks,

manholes, etc. The last type of control point is the artificial point that is added to the

photography after the film is processed. Using a point transfer instrument, such as the PUG by

Wild, points are marked on the emulsion of the film.

Abbe's Comparator Principle

Abbe's comparator principle states that the object that is to be measured and the measuring

instrument must be in contact or lie in the same plane. The design is based on the following

requirements:

"i) To exclusively base the measurement in all cases on a longitudinal graduation with

which the distance to be measured is directly compared; and

Corrections to Photo Coordinates Page 14

ii) To always design the measuring apparatus in such a way that the distance to be

measured will be the rectilinear extension of the graduation used as a scale." [Manual of

Photogrammetry, ASP, in Ghosh, 1979, p.7]

Basic Analytical Photogrammetric Theory

Analytical photogrammetry can be broken down into three fundamental categories: First Order

Theory, Second Order Theory and Third Order Theory. Fist Order Theory is the basic

collinearity concept where the light rays from the object space pass through the atmosphere and

the camera lens to the film in a straight line. Second Order Theory corrects for the most

significant errors that are not accounted for in First Order Theory. Those items that are normally

covered include lens distortion, atmospheric refraction, film deformation and earth curvature.

Third Order Theory consists of all the other sources of error in the imposition of the

collinearity condition, which are not included in Second Order Theory. These errors are usually

not accounted for except for special circumstances. They include platen unflatness, transient

thermal gradients across the lens cone, etc.

Interior Orientation

The first phase of analytical photogrammetric processing is the determination of the interior

orientation of the photography. The photogrammetric coordinate system is shown in figure 2.

The point, p, is imaged on the photograph with coordinates x

p

, yp, 0. The principal point is

determined through camera calibration and it generally is reported with respect to the center of

the photograph as defined by the intersection of opposite fiducial marks (indicated principal

point). It has coordinates x

o

, y

o

, 0. The perspective center is the location of the lens elements

Figure 3. Examples of Abbe's Comparator principle with simple measurement systems.

Corrections to Photo Coordinates Page 15

and it has coordinates x

o

, y

o

, f. The vector from the perspective center to the position on the

photo is given as

1

1

1

]

1

¸

−

−

−

·

f 0

y y

x x

a

o p

o p

**Interior orientation involves the determination of film deformation, lens distortion, atmospheric
**

refraction, and earth curvature. The purpose is to correct the image rays such that the line form

the object space to the image space is a straight line, thereby fulfilling the basic assumption used

in the collinearity condition.

Figure 4. Photographic coordinate system.

Film Deformation

When film is processed and used it is susceptible to dimensional change due to the tension

applied to the film as it is wound during both the picture taking and processing stages. In

addition, the introduction of water-based chemicals to the emulsion during processing and

the subsequent drying of the film may cause the emulsion to change dimensionally.

Therefore, these effects need to be compensated. The simplest approach is to use the

appropriate transformation model discussed in the previous section.

One of the problems with this approach is that it is possible that unmodelled distortion can still

be present when only four (or fewer) fiducial marks are employed. To overcome this problem,

reseau photography is commonly employed for applications requiring a higher degree of

accuracy. A reseau grid consists of a grid of targets that are fixed to the camera lens and imaged

Corrections to Photo Coordinates Page 16

on the film. One simple approach is to put a piece of glass in front of the film with the targets

etched on the surface. The reseau grid is calibrated so that the positions of the targets are

accurately known. By observing the reseau targets that surround the imaged points and using

one of the transformation models discussed earlier, the results should more accurately depict the

dimensional changes that occur due to film deformation. For example, the isogonal affine

model can be used. It will have the following form, taking into consideration the coordinates of

the principal point (x

o

, y

o

).

1

]

1

¸

−

1

]

1

¸

1

]

1

¸

α α −

α α

+

1

]

1

¸

∆

∆

·

1

]

1

¸

o

o

y

x

' y

' x

cos sin

sin cos

y

x

y

x

In its linear form it looks like:

1

]

1

¸

−

1

]

1

¸

+

1

]

1

¸

1

]

1

¸

−

·

1

]

1

¸

o

o

y

x

d

c

' y

' x

a b

b a

y

x

Using 4 fiducials, an 8-parameter projective transformation can be used. Its advantage is that

linear scale changes can be found in any direction. The correction for film deformation is

given as

o

2 1

3 2 1

o

2 1

3 2 1

y

1 ' y c ' x c

b ' y b ' x b

y

x

1 ' y c ' x c

a ' y a ' x a

x

−

+ +

+ +

·

−

+ +

+ +

·

Measurement of the four fiducials yields 8 observations. Therefore, this model provides a

unique solution.

Other approach to compensation of film deformation is to use a polynomial. One model, used

by the U.S. Coast and Geodetic Survey (now National Geodetic Survey) when four fiducials are

used is shown as:

xy b y b x b b y ' y y y

xy a y a x a a x ' x x x

3 2 1 0

3 2 1 0

+ + + + · − · ∆

+ + + + · − · ∆

This model can be expanded to an eight fiducial observational scheme as:

xy b y x b y b x b xy b y b x b b y ' y y y

xy a y x a y a x a xy a y a x a a x ' x x x

7

2

6

2

5

2

4 3 2 1 0

7

2

6

2

5

2

4 3 2 1 0

+ + + + + + + + · − · ∆

+ + + + + + + + · − · ∆

Corrections to Photo Coordinates Page 17

Lens Distortion

The effects of lens distortion are to move the image from its theoretically correct location to its

actual position. There are two components of lens distortion: radial distortion (Seidel

aberration) and decentering distortion. Radial lens distortion is caused from faulty grinding of

the lens. With today’s computer controlled lens manufacturing process, this distortion is almost

negligible at least to the accuracy of the camera calibration itself. Decentering distortion is

caused by faulty placement of the individual lens elements in the camera cone and other

manufacturing defects. The effects are small with today’s lens systems. The values for lens

distortion are determined from camera calibration. These values are generally reported by either

a table or in terms of a polynomial (see the example at the end of this section).

Seidel Aberration Distortion

Seidel has identified five lens aberrations. These include astigmatism, chromatic aberration (this

is sometimes broken into lateral and longitudinal chromatic aberration), spherical aberration,

coma, curvature of field, and distortion. An aberration is the "failure of an optical system to

bring all light rays received from a point object to a single image point or to a prescribed

geometric position" [ASPRS, 1980]. It is caused by the faulty grinding of the lens. Generally,

aberrations do not affect the geometry of the image but instead affect image quality. The

exception is Seidel's fifth aberration - distortion. Here the geometric position of the image point

is moved in image space and this change in position must be accounted for in analytical

photogrammetry. The effect of this distortion is radial from the principal point.

Conrady's intuitive development for handling this radial distortion is expressed in the following

polynomial form:

Figure 5. Radial Lens Distortion Geometry.

Corrections to Photo Coordinates Page 18

This is based on three general hypotheses:

“a. The axial ray passes the lens undeviated;

b. The distortion can be represented by a continuous function; and

c. The sense of the distortions should be positive for all outward displacement of the

image.” [Ghosh, 1979, p.88]

From Figure 3, recall that

r

2

= x

2

+ y

2

By similar triangles, the following relationship can be shown

y

y

x

x

r

r δ

·

δ

·

δ

the x and y Cartesian coordinate components of the effects of this distortion are thus found by:

The corrected photo coordinates can then be computed using the form:

DECENTERING DISTORTION

Decentering lens distortion is asymmetric about the principal point of autocollimation. When the

value is "one" then the radial line remains straight. This is called the axis of zero tangential

distortion (see figures 4 and 5).

+ + + + + · δ

9

4

7

3

5

2

3

1 0

r k r k r k r k r k r

( )

( )y r k r k k y

r

r

y

x r k r k k x

r

r

x

4

2

2

1 0

4

2

2

1 0

+ + + ·

δ

· δ

+ + + ·

δ

· δ

( )

( )y r k r k k 1 y

r

r

1 y y y

x r k r k k 1 x

r

r

1 x x x

4

2

2

1 0 c

4

2

2

1 0 c

− − − − ·

,

_

¸

¸ δ

− · δ − ·

− − − − ·

,

_

¸

¸ δ

− · δ − ·

Corrections to Photo Coordinates Page 19

Figure 6. Geometry of tangential distortion showing the tangential profile.

Figure 7. Effects of decentering distortion.

Duane Brown, using the developments by Washer, designed the corrections for the lens

distortion due to decentering. Brown called this the "Thin Prism Model" and it is shown as:

Corrections to Photo Coordinates Page 20

where: J

1

, J

2

are the coefficients of the profile function of the decentering distortion, and

ϕ

o

is the angle subtended by the axis of the maximum tangential distortion with the

photo x-axis.

The concept of the thin prism was found to be inadequate to fully describe the effects of

decentering distortion. Therefore, the Conrady-Brown model was developed to find the effects

of decentering on the x,y encoders:

A revised Conrady-Brown model made further refinements to the computation of decentering

distortion and this model is shown to be:

where:

1

3

4

1

2

3

o 1 2

o 1 1

J

J

P

J

J

P

cos J P

sin J P

·

·

ϕ ·

ϕ − ·

P’s define the tangential profile function. This is the tangential distortion along the axis of

maximum tangential distortion.

The corrected photo coordinates due to the effects of decentering distortion can then be found

by subtracting the errors computed in the previous equations. The corrected photo coordinates

become:

( )

( )

o o

4

2

2

1

o o

4

2

2

1

cos J cos r J r J y

sin J sin r J r J x

ϕ · ϕ + · δ

ϕ − · ϕ + − · δ

( )

( )

1

]

1

¸

ϕ

,

_

¸

¸

+ − ϕ + · δ

1

]

1

¸

ϕ − ϕ

,

_

¸

¸

− + · δ

o

2

2

o

2

4

2

2

1

o

2

o

2

2

4

2

2

1

cos

r

y 2

1 sin

r

y x 2

r J r J y

cos

r

y x 2

sin

r

x 2

1 r J r J x

( ) [ ][ ]

( ) [ ][ ] l

l

+ + + + + · δ

+ + + + + · δ

4

4

2

3

2 2

2 1

4

4

2

3 2

2 2

1

r P r P 1 y 2 r P y x P 2 y

r P r P 1 y x P 2 x 2 r P x

Corrections to Photo Coordinates Page 21

x

c

= x - δx

y

c

= y - δy

Atmospheric Refraction

Figure 8. Effects of atmospheric refraction on object space light ray.

Light rays bend due to refraction. The amount of refractions is a function of the refractive index

of the air along the path of that light ray. This index depends upon the temperature, pressure and

composition, including humidity, dust, carbon dioxide, etc. The light rays from the object space

to image space must pass through layers of differing density thereby bending that ray at various

layer boundaries along the path.

From Snell's Law we can express the law of refraction as

where: n = refractive index

dn = difference in refractive index between the two mediums

θ = angle of incidence, and

θ+dα = angle of refraction

Generalizing and simplifying yields

( ) ( ) α + θ · θ + d sin n sin dn n

i i i i

Corrections to Photo Coordinates Page 22

Integrating

where ln indicates the natural logarithm and the subscripts L is the camera station and P is the

ground point.

Generalizing

where K is the atmospheric refraction constant. For vertical photography, dθ can be expressed

with respect to r as

δr can also be expressed as a function of K using:

θ · α tan

n

dn

d

( )

o

P

o

p

o

P

n

n

n

n

n ln tan

n

dn

tan d ⋅ θ · θ · α · α

∫ ∫

α

α

θ ·

α

· θ tan K

2

d

( )

θ

+

· ∴

θ

,

_

¸

¸

+ ·

θ θ + ·

θ θ ·

θ ·

d

f

r f

dr

d

f

r

1 f

d tan 1 f

d sec f dr

tan f r

2 2

2

2

2

2

Corrections to Photo Coordinates Page 23

The radial component can also be expressed using a simplified power series:

+ + + ·

5

3

3

2 1

r k r k r k dr

where the k’s are constants. The Cartesian components of atmospheric refraction are

K is a constant determined from some model atmosphere. For example, the 1959 ARDC (Air

Rome Development Center) model developed from Bertram is shown as:

The atmospheric model developed by Saastamoinen for an altitude of up to eleven kilometers is

given by

For altitudes up to nine kilometers, this equation can be simplified as

,

_

¸

¸

+ · ∴

,

_

¸

¸ +

· θ

+

·

θ · θ

2

3

2 2 2 2

f

r

r K dr

f

r

K

f

r f

tan K

f

r f

dr

tan K d

y

f

r

1 K

r

r

y y

x

f

r

1 K

r

r

x x

2

2

2

2

,

_

¸

¸

+ ·

,

_

¸

¸ δ

· δ

,

_

¸

¸

+ ·

,

_

¸

¸ δ

· δ

6

2 2

10

H

h

250 h 6 h

h 2410

250 H 6 H

H 2410

K

−

⋅

1

]

1

¸

,

_

¸

¸

+ −

−

+ −

·

( ) ( ) ( ) [ ]

6 245 . 4 256 . 5 256 . 5

10 H 02257 . 0 1 0 . 277 H 02257 . 0 1 h 022576 . 0 1

H

1225

K

−

⋅

¹

;

¹

¹

'

¹

− − − − − ·

( ) ( ) [ ] { }

6

10 h H 2 02 . 0 1 h H 13 K

−

⋅ + − − ·

Corrections to Photo Coordinates Page 24

There are several other atmospheric models. Ghosh [1979] also identifies the US Standard

Atmosphere and the ICAO Standard atmosphere. He also states that, up to about 20 km,

these models are almost the same. Table 1 shows the amount of distortion using a focal

length of 153 mm and the ICAO Standard atmosphere [from Ghosh, 1979, p.95]. The

tabulated values, dr, are in micrometers.

EARTH CURVATURE

Earth curvature causes a displacement of a point due to the curvature of the earth. The point,

when projected onto a plane tangent to the ground nadir point, will occupy a position on that

plane at a distance of ∆H from the earth's surface. The image displacement, as shown in the

figure, is always radially inward towards the principal point. From the geometry, we can see

that

Flying

Height in m

For Radial Distance r of the Image Point from the Photo Center, in mm Coefficients

12 24 50 63 78 94 111 131 153 k

1

⋅10

-2

k

2

⋅10

-6

For Ground Elevation – 0 m above sea level

3000 0.4 0.9 1.9 2.6 3.4 4.5 5.9 7.9 10.7 3.4 1.53

6000 0.7 1.5 3.3 4.4 5.9 7.7 10.1 13.5 18.3 6.1 2.50

9000 0.9 1.9 4.2 5.7 7.5 9.9 13.0 17.3 23.4 7.7 2.23

For Ground Elevation – 500 m above sea level

3000 0.3 0.7 1.6 2.1 2.8 3.7 4.9 6.4 8.8 2.8 1.25

6000 0.7 1.3 3.0 4.0 5.3 6.9 9.1 12.2 15.4 5.4 2.3

9000 0.9 1.8 3.9 5.3 7.0 9.2 12.0 16.0 21.7 7.2 2.99

For Ground Elevation – 1000 m above sea level

3000 0.3 0.6 1.3 1.7 2.2 2.9 3.9 5.1 6.9 2.2 0.99

6000 0.6 1.2 2.7 3.6 4.8 6.3 8.2 10.9 14.5 4.8 2.08

9000 0.8 1.6 3.6 4.9 6.5 8.5 11.2 14.9 20.1 6.7 2.76

For Ground Elevation – 1500 m above sea level

3000 0.2 0.4 0.8 1.2 1.6 2.2 2.8 3.8 5.1 1.6 0.74

6000 0.5 1.1 2.4 3.2 4.2 5.5 7.3 9.7 13.1 4.2 1.87

9000 0.7 1.5 3.4 4.5 6.0 7.8 10.3 13.8 18.6 6.1 2.59

Table 1. Radial image distortion due to atmospheric refraction.

Corrections to Photo Coordinates Page 25

Figure 9. Earth Curvature Correction.

+ − ·

,

_

¸

¸

≈ θ ∴

≈ · θ

2

2

R 2

D

1

R

D

cos cos

R

D

R

' D

( )

R 2

D

H

R 2

D

1 1 R

cos 1 R

cos R R H

2

2

2

≈ ∆ ∴

,

_

¸

¸

− + − ·

θ − ·

θ − · ∆

From which we can write

But,

D

' H

f

dE ∆ ·

Corrections to Photo Coordinates Page 26

Therefore,

But

Yielding

Since H'/(2Rf

2

) is constant for any photograph

where:

2

f R 2

H

K ·

The effects of earth curvature are shown in the Table 2 with respect to the flying height (H)

and the radial distance from the nadir point [Ghosh, 1979; Doyle, 1981]. Looking at the

formula for earth curvature and the intuitive evaluation of the figure, one can see that the

effects will increase rapidly at higher flying heights and the farther one moves from the nadir

point.

H

' H

D

dE ∆ ≈

R H 2

fD

R 2

D

' H

D

' H

f

dE

2

3

2

·

·

r

f

' H

D ≈

2

3

f R 2

r ' H

dE ·

3

r K dE ·

Corrections to Photo Coordinates Page 27

EXAMPLE

A vertical aerial photograph is taken with an aerial camera having the following calibration data:

Calibrated focal length = 152.212 mm

Fiducial mark & principal point coordinates are shown in the next figure of the fiducial

marks.

The radial lens distortion is shown from the following diagram delineating the distortion curve.

Figure 10. Example showing calibration values for fiducials and principal point.

H in km R(mm)

0.5 1 2 4 6 8 10

10 0.0 0.0 0.0 0.0 0.0 0.0 0.0

20 0.0 0.0 0.1 0.1 0.2 0.2 0.3

40 0.1 0.2 0.4 0.9 1.3 1.8 2.2

60 0.4 0.8 1.5 3.0 4.5 6.0 7.6

80 0.9 1.8 3.6 7.2 10.8 14.3 17.9

100 1.8 3.5 7.0 14.0 21.0 28.0 35.0

120 3.1 6.0 12.1 24.2 36.3 48.4 60.5

140 4.9 9.6 19.2 38.4 57.6 76.8 96.0

160 7.1 14.3 28.6 57.2 85.7 114.3 142.9

Table 2. Amount of earth curvature (in mm) for vertical photography assuming a

focal length of 150 mm [from Ghosh, 1979, p.98].

Corrections to Photo Coordinates Page 28

Figure 11. Camera calibration graph of distortion using both polynomials and radial distortion.

The decentering lens distortion values are: J

1

= 8.10x10

-4

J

2

= -1.40x10

-8

Ν

o

= 108

o

00'

The flying height is 38,000' above mean sea level. The average height of the terrain is 400'

above mean sea level. The photograph is placed in the comparator and the following image

coordinates are measured:

point r

x

(mm) r

y

(mm)

1 28.202 13.032

2 240.341 16.260

3 237.068 228.432

4 24.980 225.160

Pt. p 228.640 36.426

Questions:

Radial Distance (mm) 20 40 60 80 100 120 140 160

Distortion (µm) +6 +9 +6 -1 -7 -9 -1 -13

Polynomial (µm) +5.3 +7.9 +5.2 +0.5 -7.1 -10.5 +0.6 +123

Table 2. Radial lens distortion for camera in the example.

Corrections to Photo Coordinates Page 29

1. What are the image coordinates of point p corrected for film deformation and reduced to

put the origin at the principal point? Use a 6-parameter general affine transformation and

compute the residuals.

2. What are the image coordinates of p corrected additionally for radial and decentering

lens distortion?

3. What are the image coordinate corrections at p for atmospheric refraction and earth

curvature?

4. What are the final corrected image coordinates of p?

SOLUTION

1. The observed photo coordinates are:

x = 228.640 mm

y = 36.426 mm

The design matrix (B) is:

The discrepancy vectors are:

The normal coefficient matrix inverse (N

-1

) is:

The parameters are:

a

1

= 0.99923 a

2

= -0.00428

1

1

1

1

]

1

¸

**0 . 1 160 . 225 980 . 24
**

0 . 1 432 . 228 068 . 237

0 . 1 260 . 16 341 . 240

0 . 1 031 . 13 202 . 28

1

1

1

1

]

1

¸

·

1

1

1

1

]

1

¸

·

622 . 226

974 . 228

973 . 16

648 . 14

F

984 . 23

928 . 235

257 . 238

274 . 26

F

2 1

1

1

1

]

1

¸

−

− −

·

−

9647056994 . 0

0026815783 . 0 0000222132 . 0

0029475275 . 0 0000000002 . 0 0000222209 . 0

N

1

Corrections to Photo Coordinates Page 30

b

1

= 0.00441 b

2

= 0.99917

c

1

= -1.96656 c

2

= 1.75196

The residuals are:

The transformed coordinates are:

x = 226.657 mm

y = 37.168 mm

The photo coordinates translated to the principal point become:

x = 226.657 mm - 131.104 mm

= 95.553 mm

y = 37.168 mm - 121.814 mm

= -84.646 mm

2. Lens distortions are computed as follows:

Siedel radial distortion in terms of their rectangular coordinate vales are:

1

1

1

1

]

1

¸

−

−

·

1

1

1

1

]

1

¸

−

−

·

00430 . 0

00430 . 0

00429 . 0

00430 . 0

V

00294 . 0

00294 . 0

00294 . 0

00294 . 0

V

2 1

( ) ( )

mm 653 . 127

646 . 84 553 . 95 y x r

2 2 2 2

·

− + · + ·

( ) ( )

mm 663 . 8

r 10 223 . 2 r 10 794 . 5 r 286 . 0 r

5 9 3 5

− ·

× + × − · ∆

− −

Corrections to Photo Coordinates Page 31

The decentering distortion using the revised Conrady-Brown model is shown as follows:

The coordinates corrected for decentering distortion then become:

3. Using the 1959 ARDC model:

( )

mm 652 . 84

653 . 127

00866 . 0

1 646 . 84

r

r

1 y y

mm 559 . 95

653 . 127

00866 . 0

1 553 . 95

r

r

1 x x

c

c

− ·

,

_

¸

¸ −

− − ·

,

_

¸

¸ ∆

− ·

·

,

_

¸

¸ −

− ·

,

_

¸

¸ ∆

− ·

000017 . 0

10 10 . 8

10 40 . 1

J

J

P

00025 . 0 108 cos 10 10 . 8 cos J P

00077 . 0 108 sin 10 10 . 8 sin J P

4

8

1

2

3

o 4

o 1 2

o 4

o 1 1

− ·

×

× −

· ·

− · × · ϕ ·

− · × − · ϕ − ·

−

−

−

−

( ) [ ][ ]

( ) ( ) ( )( )( ) [ ] ( )( ) [ ]

( ) [ ][ ]

( ) ( )( ) ( ) ( ) ( ) [ ] ( )( ) [ ]

mm 003 . 0

653 . 127 000017 . 0 1 655 . 84 2 653 . 127 00025 . 0 652 . 84 559 . 95 00077 . 0 2

r P 1 y 2 r P xy P 2 y

mm 016 . 0

653 . 127 0025 . 0 1 652 . 84 559 . 95 00025 . 0 2 559 . 95 2 653 . 127 00077 . 0

r P 1 xy P 2 x 2 r P x

2 2 2

2

3

2 2

2 1

2 2 2

2

2 2

2 2

1

·

− + − + − + − − ·

+ + + · δ

− ·

− + − − + + − ·

+ + + · δ

( )

mm 655 . 84

003 . 0 652 . 84 y y y

mm 576 . 95

016 . 0 559 . 95 x x x

c

c

− ·

− − · δ − ·

·

− − · δ − ·

Corrections to Photo Coordinates Page 32

The effects of earth curvature are presented as:

4. The corrected photo coordinated due to the effects of refraction are:

The coordinates corrected of earth curvature become:

km 12 . 0

m 1000

km 1

' 3937

m 1200

' 400

km 58 . 11

m 1000

km 1

' 3937

m 1200

' 000 , 38

·

,

_

¸

¸

,

_

¸

¸

·

,

_

¸

¸

,

_

¸

¸

( )

( )

0000887 . 0

10

250 58 . 11 6 58 . 11

58 . 11 2410

10

250 H 6 H

H 2410

K

6

2

6

2

·

×

1

]

1

¸

+ −

· ×

1

]

1

¸

+ −

·

− −

( )

mm 013 . 0

655 . 84

212 . 152

653 . 127

1 0000887 . 0 y

f

r

1 K y

mm 014 . 0

576 . 95

212 . 152

653 . 127

1 0000887 . 0 x

f

r

1 K x

2

2

2

2

2

2

2

2

− ·

−

,

_

¸

¸

+ ·

,

_

¸

¸

+ · δ

·

,

_

¸

¸

+ ·

,

_

¸

¸

+ · δ

( ) ( )

( ) ( )

mm 0807 . 0

000 , 906 , 20 212 . 152 2

400 000 , 38 653 . 127

R f 2

' H r

dE

2

2

2

3

·

−

· ·

( ) ( )

mm 642 . 84

013 . 0 655 . 84 y y y

mm 561 . 95

014 . 0 576 . 95 x x x

c

c

− ·

− − − · δ − ·

·

− · δ − ·

Corrections to Photo Coordinates Page 33

The final corrected photo coordinates are, thus,

x = 95.622 mm

y = -84.696 mm

( )

mm 696 . 84

653 . 127

0807 . 0

1 642 . 84

r

dE

1 y y

mm 622 . 95

653 . 127

0807 . 0

1 561 . 95

r

dE

1 x x

c

c

− ·

,

_

¸

¸

+ − ·

,

_

¸

¸

+ ·

·

,

_

¸

¸

+ ·

,

_

¸

¸

+ ·

Projective Equations Page 34

PROJECTIVE EQUATIONS

Introduction

In the first section we were introduced to coordinate transformations. The numerical

resection problem involves the transformation (rotation and translation) of the ground

coordinates to photo coordinates for comparison purposes in the least squares adjustment.

Before we begin this process, lets derive the rotation matrix that will be used to form the

collinearity condition.

In photogrammetry, the coordinates of the points imaged on the photograph are determined

through observations. The next procedure is to compare these photo coordinates with the

ground coordinates. On the photograph, the positive x-axis is taken in the direction of flight.

For any number of reasons, this will most probably never coincide with the ground X-axis.

The origin of the photographic coordinates is at the principal point which can be expressed as

1

1

1

]

1

¸

−

−

−

·

1

1

1

]

1

¸

f

y y

x x

' Z

' Y

' X

o

o

where: x, y are the photo coordinates of the imaged point with reference to the

intersection of the fiducial axes

x

o

, y

o

are the coordinates from the intersection of the fiducial axes to the principal

point

f is the focal length

Since the origin of the ground coordinates does not coincide with the origin of the

photographic coordinate system, a translation is necessary. We can write this as

1

1

1

]

1

¸

−

−

−

·

1

1

1

]

1

¸

L

L

L

1

1

1

Z Z

Y Y

X X

Z

Y

X

where: X, Y, Z are the ground coordinates of the point

X

L

, Y

L

, Z

L

are the ground coordinates of the ground nadir point

Thus, in the comparison, both ground coordinates and photo coordinates are referenced to the

same origin separated only by the flying height. Note that the ground nadir coordinates

would correspond to the principal point coordinates in X and Y if the photograph was truly

vertical.

Projective Equations Page 35

Direction Cosines

If we look at figure 1, we can see that point P has coordinates X

P

, Y

P

, Z

P

. The length of the

vector (distance) can be defined as

Figure 12. Vector OP in 3-D space.

[ ]

2

1

2

P

2

P

2

P

Z Y X OP + + ·

The direction of the vector can be written with respect to the 3 axes as:

OP

Z

cos

OP

Y

cos

OP

X

cos

P

P

P

· γ

· β

· α

These cosines are called the direction cosines of the vector from O to P. This concept can be

extended to any line in space. For example, figure 2 shows the line PQ. Here we can readily

see that the vector PQ can be defined as:

QP

Z Z

Y Y

X X

PQ

P Q

P Q

P Q

− ·

,

_

¸

¸

−

−

−

·

The length of the vector becomes

Projective Equations Page 36

( ) ( ) ( ) [ ]

2

1

2

P Q

2

P Q

2

P Q

Z Z Y Y X X PQ − + − + − ·

and the direction cosines are

Figure 13. Line vector PQ in space.

PQ

Z Z

cos

PQ

Y Y

cos

PQ

X X

cos

P Q

P Q

P Q

−

· γ

−

· β

−

· α

If we look at the unit vector as shown in figure 3, one can see that the vector from O to P can

be defined as

Projective Equations Page 37

k z j y i x OP + + ·

Figure 14. Unit vectors.

and the point P has coordinates (x, y, z)

T

.

Given a second set of coordinates axes (I, J, K), one can write similar relationships for the

same point P. Each coordinate

axes has an angular relationship to

each of the i, j, k coordinate axes.

For example, figure 4 shows the

relationship between J and i .

The angle between the axes is

defined as (xY). Since i has

similar angles to the other two

axes, one can write the unit vector

in terms of the direction cosines as:

( )

( )

( )1

1

1

]

1

¸

·

,

_

¸

¸

⋅

⋅

⋅

·

xZ cos

xY cos

xX cos

K i

J i

I i

i

Similarly, we have for j and k ,

( )

( )

( )

( )

( )

( )1

1

1

]

1

¸

·

1

1

1

]

1

¸

·

zZ cos

zY cos

zX cos

k

yZ cos

yY cos

yX cos

j

Figure 15. Rotation between Y and x axes.

Projective Equations Page 38

Then, the vector from O to P can be written as

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( ) 1

1

1

]

1

¸

·

1

1

1

]

1

¸

1

1

1

]

1

¸

·

1

1

1

]

1

¸

+

1

1

1

]

1

¸

+

1

1

1

]

1

¸

·

Z

Y

X

z

y

x

zZ cos

zY cos

zX cos

yZ cos

yY cos

yX cos

xZ cos

xY cos

xX cos

zZ cos

zY cos

zX cos

z

yZ cos

yY cos

yX cos

y

xZ cos

xY cos

xX cos

x OP

This can be written more generally as

x R X ·

To solve these unknowns using only three angles, 6 orthogonal conditions must be applied to

the rotation matrix, R. All vectors must have a length of 1 and any combination of the two

must be orthogonal [Novak, 1993]. Thus, designating R as three column vectors [R = (r

1

r

2

r

3

)], we have

0 r r r r r r

1 r r r r r r

1

T

3 3

T

2 2

T

1

3

T

3 2

t

2 1

T

1

· · ·

· · ·

Sequential Rotations

Applying three sequential rotations about three different axes forms the rotation matrix.

Doyle [1981] identifies a series of different combinations. These are shown in Table 1 and

they all presume a local space coordinate system.

Roll (ω) is a rotation about the x-axis where a positive rotation moves the +y-axis in the

direction of the +z-axis. Pitch (ϕ) is a rotation about the y-axis. When the +z-axis is moved

Combination Axes of Rotation

1) Roll (ω) – Pitch (ϕ) – Yaw (κ) x – y - z

2) Pitch (ϕ) – Roll (ω) – Yaw (κ) y – x - z

3) Heading (H) – Roll (ω) – Pitch (ϕ) z – x - y

4) Heading (H) – Pitch (ϕ) – Roll (ω) z – y - x

5) Azimuth (α) – Tilt (t) – Swing (s) z – x - z

6) Azimuth (α) – Elevation (h) – Swing (s) z – x - z

Table 1. Rotation combinations.

Projective Equations Page 39

towards the +x-axis then the rotation is positive. A rotation about the z-axis is called yaw (κ)

with a positive rotation occurring when the +x-axis is rotated towards the +y-axis. All of

these angles have a range from -180° to +180°. Heading (H) is a clockwise rotation about

the Z-axis from the +Y-axis to the +X-axis. Azimuth (α) is a clockwise rotation about the Z-

axis from the +Y-axis to the principal plane. Tilt (t) is a rotation about the x-axis and is

defined as the angle between the camera axis and the nadir or Z-axis. This rotation is

positive when the +x-axis is moved towards the +z-axis. Swing is a clockwise angle in the

plane of the photograph measured about the z-axis from the +y-axis to the nadir side of the

principal line. Heading, azimuth and swing have a range from 0° to 360° while the tilt angle

will vary between 0° to 180°. Finally, elevation (h) is a rotation in the vertical plane about

the x-axis from the X-Y plane to the camera axis. The rotation is positive when the camera

axis is above the X-Y plane.

The combinations (1) and (2) are frequently used in stereoplotters while (3) and (4) are

common in navigation. Professor Earl Church developed (5) in his photogrammetric

research whereas the ballistic cameras often used the 6

th

combination.

Derivation of the Gimbal Angles

For a physical interpretation of the rotation matrix written in terms of the directions cosines,

we can look at the planar rotations of the axes in sequence. In the first section we saw that

the coordinate transformation can be written in the following form:

1

]

1

¸

1

]

1

¸

α α −

α α

·

1

]

1

¸

P

P

P

P

V

U

cos sin

sin cos

Y

X

In the photogrammetric approach, we rotate the ground coordinates to a photo parallel

system. This involves three rotations: ω - primary, ϕ - secondary, and κ - tertiary. If we look

at the ω rotation about the X

1

axis, we should realize that the X-coordinate does not change

but the Y and Z coordinates do change (figure 5). Moreover, the new values for Y and Z are

not affected by the X-coordinate. Thus, one can write

Projective Equations Page 40

ω ⋅ + ω − ⋅ + ⋅ ·

ω ⋅ + ω ⋅ + ⋅ ·

⋅ + ⋅ + ·

cos Z ) sin ( Y 0 X Z

sin Z cos Y 0 X Y

0 Z 0 Y X X

1 1 1 2

1 1 1 2

1 1 1 2

or in matrix form

1

1

1

]

1

¸

1

1

1

]

1

¸

ω ω −

ω ω ·

1

1

1

]

1

¸

1

1

1

2

2

2

Z

Y

X

cos sin 0

sin cos 0

0 0 1

Z

Y

X

or more concisely,

C M C

2 ω

·

The next rotation is a ϕ- rotation about the once rotated Y

2

-axis. One can write

ϕ ⋅ + ⋅ + ϕ ⋅ ·

⋅ + + ⋅ ·

ϕ − ⋅ + ⋅ + ϕ ⋅ ·

cos Z 0 Y sin X Z

0 Z Y 0 X Y

) sin ( Z 0 Y cos X X

2 2 2 3

2 2 2 3

2 2 2 3

or in matrix form:

1

1

1

]

1

¸

1

1

1

]

1

¸

ϕ ϕ

ϕ − ϕ

·

1

1

1

]

1

¸

2

2

2

3

3

3

Z

Y

X

cos 0 sin

0 1 0

sin 0 cos

Z

Y

X

Figure 16. Rotation angles in photogrammetry.

Projective Equations Page 41

or more concisely,

2 3

C M C

ϕ

·

Finally, we have the κ-rotation about the twice-rotated Z

3

-axis (see figure 5). This becomes

( )

3 3 3

3 3 3

3 3 3

Z 0 Y 0 X ' Z

0 Z cos Y sin X ' Y

0 Z sin Y cos X ' X

+ ⋅ + ⋅ ·

⋅ + κ ⋅ + κ − ⋅ ·

⋅ + κ ⋅ + κ ⋅ ·

which in matrix form is

1

1

1

]

1

¸

1

1

1

]

1

¸

κ κ −

κ κ

·

1

1

1

]

1

¸

3

3

3

Z

Y

X

1 0 0

0 cos sin

0 sin cos

' Z

' Y

' X

or more concisely as

3

C M ' C

κ

·

Thus, the transformation from the survey parallel (X

1

, Y

1

, Z

1

) system is shown as

1

1

1

]

1

¸

·

1

1

1

]

1

¸

·

1

1

1

]

1

¸

ω ϕ κ

1

1

1

1

1

1

G

Z

Y

X

M M M

Z

Y

X

M

' Z

' Y

' X

Performing the multiplication, the elements of M

G

are shown as:

1

1

1

]

1

¸

ϕ ω ϕ ω − ϕ

κ ϕ ω + κ ω κ ϕ ω − κ ω κ ϕ −

κ ϕ ω − κ ω κ ϕ ω + κ ω κ ϕ

·

cos cos cos sin sin

sin sin cos cos sin sin sin sin cos cos sin cos

cos sin cos sin sin cos sin sin sin cos cos cos

M

G

If the rotation matrix is known, then the angles (κ, ϕ, ω) can be computed as [Doyle, 1981]

Projective Equations Page 42

11

21

31

33

32

m

m

tan

m sin

m

m

tan

−

· κ

· φ

−

· ω

If the so-called Church angles (t, s, α) are being used, then the rotation matrix can be derived

in a similar fashion. The values for M are:

1

1

1

]

1

¸

α − α −

− α − α − α − α

− α − α α − α −

·

t cos cos t sin sin t sin

s cos t sin s cos cos t cos sin s sin s cos sin t cos cos s sin

s sin t sin s sin cos t cos sin s cos s sin sin t cos cos s cos

M

If the rotation matrix is known then the Church angles can be found using the following

relationships [Doyle, 1981]:

23

13

2

23

2

13

2

32

2

31 33

32

31

m

m

s tan

m m m m t sin or m t cos

m

m

tan

·

+ · + · ·

· α

The collinearity concept means that the line form object space to the perspective center is the

same as the line from the perspective center to the image point (figure 6). The only

difference is a scale factor. Since the comparison is performed in image space, the object

space coordinates are rotated into a parallel coordinate system. This relationship can be

written as

A kM a ·

Recall that we wrote two basic equations relating the location of a point in the photo

coordinate system and ground nadir position.

1

1

1

]

1

¸

−

−

−

·

1

1

1

]

1

¸

1

1

1

]

1

¸

−

−

−

·

1

1

1

]

1

¸

L

L

L

1

1

1

o

o

Z Z

Y Y

X X

Z

Y

X

and

f

y y

x x

' Z

' Y

' X

Projective Equations Page 43

Figure 17. Collinearity condition.

Then,

1

1

1

]

1

¸

−

−

−

1

1

1

]

1

¸

·

1

1

1

]

1

¸

−

−

−

L

L

L

33 32 31

23 22 21

13 12 11

o

o

Z Z

Y Y

X X

m m m

m m m

m m m

k

f

y y

x x

where k is the scale factor. This equation takes the ground coordinates and translates them to

the ground nadir position. The rotation matrix (M

G

) takes those translated coordinates and

rotates them into a system that is parallel to the photograph. Finally, these coordinates are

scaled to the photograph. The result is the predicted photo coordinates of the ground points

given the exposure station coordinates (X

L

, Y

L

, Z

L

) and the tilt that exists in the photography

(κ, ϕ, ω). If we express this last equation algebraically, then we have

( ) ( ) ( ) [ ]

( ) ( ) ( ) [ ]

( ) ( ) ( ) [ ]

L 33 L 32 L 31

L 23 L 22 L 21 o

L 13 L 12 L 11 o

Z Z m Y Y m X X m k f

Z Z m Y Y m X X m k y y

Z Z m Y Y m X X m k x x

− + − + − · −

− + − + − · −

− + − + − · −

To eliminate the unknown scale factor, divide the first two equations by the third. Thus,

( ) ( ) ( )

( ) ( ) ( )

( ) ( ) ( )

( ) ( ) ( )

1

]

1

¸

− + − + −

− + − + −

− · −

1

]

1

¸

− + − + −

− + − + −

− · −

L 33 L 32 L 31

L 23 L 22 L 21

o

L 33 L 32 L 31

L 13 L 12 L 11

o

Z Z m Y Y m X X m

Z Z m Y Y m X X m

f y y

Z Z m Y Y m X X m

Z Z m Y Y m X X m

f x x

Projective Equations Page 44

Otto von Gruber first introduced this equation in 1930. This equation must satisfy two

conditions [Novak, 1993].

2

32

2

22

2

12

2

31

2

21

2

11

32 31 22 21 12 11

m m m m m m

0 m m m m m m

+ + · + +

· + +

If we look at the equation for M

G

above, lets see if the first condition is met.

( )

( )( )

( )

0

sin sin cos cos sin cos cos

sin cos sin sin cos cos sin cos cos

sin sin cos sin sin sin cos

cos sin cos cos sin cos sin cos cos sin cos cos

sin sin cos sin suin sin cos cos sin cos

cos sin sin sin cos cos cos m m m m m m

2 2

2

2

32 31 22 21 12 11

·

ω ϕ ϕ − ω κ κ ϕ −

κ + κ ω ϕ ϕ + ω κ κ ϕ ·

ω ϕ ϕ − ω κ ϕ ϕ +

ω κ κ ϕ − ω κ ϕ ϕ + ω κ κ ϕ ·

ω ϕ ϕ − κ ϕ ξ − κ ω κ ϕ − +

κ ϕ ω + κ ω κ ϕ · + +

Thus, the first condition is met. For the second constraint, lets first look at the left hand side

of the equation.

( ) 1 sin sin cos cos

sin sin cos cos cos m m m

2 2 2 2

2 2 2 2 2 2

31

2

21

2

11

· ϕ + κ + κ ϕ ·

ϕ + κ ϕ + κ ϕ · + +

The right side of the equation becomes

( ) [ ]

( ) 1 cos cos sin sin

sin cos cos sin cos sin sin

sin cos

sin sin sin cos cos sin cos sin cos sin

sin cos

sin sin sin sin cos sin cos sin 2 cos cos

sin cos sin sin cos sin cos sin 2 sin cos m m m

2 2 2 2

2 2 2 2 2 2 2

2 2

2 2 2 2 2 2 2 2 2 2

2 2

2 2 2 2

2 2 2 2 2 2

32

2

22

2

12

· ω + ϕ + ϕ ·

ω ϕ + ω + κ + κ ω ϕ ·

ω ϕ +

ω κ ϕ + ω κ + ω κ ϕ + ω κ ·

ω ϕ +

ω κ ϕ + ω ω κ κ ϕ − ω κ +

ω κ ϕ + ω ω κ κ ϕ + κ ω · + +

Thus, both sides of the equation equal are equal to one and to each other. Since (X – X

L

), (Y

– Y

L

) and (Z – Z

L

) are proportional to the direction cosines of A , these equations can also be

presented as [Doyle, 1981]:

Projective Equations Page 45

γ + β + α

γ + β + α

− · −

γ + β + α

γ + β + α

− · −

cos m cos m cos m

cos m cos m cos m

f y y

cos m cos m cos m

cos m cos m cos m

f x x

33 32 31

23 22 21

o

33 32 31

13 12 11

o

Here, cos α, cos β and cos γ are the direction cosines of A

The inverse relationship is

( )

( ) ( ) ( )

( ) ( ) ( )

( )

( ) ( ) ( )

( ) ( ) ( )

1

]

1

¸

− + − + −

− + − + −

− · −

1

]

1

¸

− + − + −

− + − + −

− · −

f m y y m x x m

f m y y m x x m

Z Z Y Y

f m y y m x x m

f m y y m x x m

Z Z X X

33 o 23 o 13

32 o 22 o 12

L L

33 o 23 o 13

31 o 21 o 11

L L

These equations are referred to as the collinearity equations.

It would be interesting to see how these equations stand up to the basic principles learned in

basic photogrammetry. Recall that for a truly vertical photograph that the scale at a point can

be written using

Y

y

X

x

h H

f

S · ·

−

·

Here we assumed that the principal point coincided with the indicated principal point and that

the X and Y ground coordinates were related to the origin, being at the nadir point with the

X-axis coinciding with the line from opposite fiducials in the flight direction.

If we look at the collinearity equations, the rotation matrix for a truly vertical photo would be

the identity matrix. Thus,

1

1

1

]

1

¸

·

1 0 0

0 1 0

0 0 1

M

Vert

Then, the projective equations become

( ) ( )

( ) ( )

( )

L

L o

L o

Z Z k f

Y Y k y y

X X k x x

− · −

− · −

− · −

Projective Equations Page 46

If we further assume that the principal point is located at the intersection of opposite fiducials

and if we substitute H for Z

L

and h for Z, then,

( )

( )

( ) h H k f

Y Y k y

X X k x

L

L

− ·

− ·

− ·

Dividing the first two equations by the third ad manipulating the equation yields the identical

scale relationships given in basic photogrammetry.

L L

Y Y

Y

h H

f

X X

x

−

·

−

·

−

LINEARIZATION OF THE COLLINEARITY EQUATION

The linearization of the collinearity equations are given in a number of different textbooks.

The developments presented here follow that outlined by Doyle [1981]. For simplicity, lets

define the projective equations in the following form.

( )

( ) 0

W

V

f y y F

0

W

U

f x x F

o 2

o 1

· + − ·

· + − ·

where U and V are the numerators in the projective equations given earlier and W is the

denominator. From adjustments, we know that the general form of the condition equations

can be written as

0 F B AV · + ∆ +

The deign matrix (B) is found by taking the partial derivative of the projective equations with

respect to the parameters. Thus, it will appear as:

1

1

1

1

1

]

1

¸

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

κ ∂

∂

ϕ ∂

∂

ω ∂

∂

∂

∂

∂

∂

∂

∂

κ ∂

∂

ϕ ∂

∂

ω ∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

·

i

2

i

2

i

2

i

1

i

1

i

1

2 2 2

L

2

L

2

L

2

1 1 1

L

1

L

1

L

1

2

o

2

o

2

1

o

1

o

1

Z

F

Y

F

X

F

Z

F

Y

F

X

F

F F F

Z

F

Y

F

X

F

F F F

Z

F

Y

F

X

F

f

F

y

F

x

F

f

F

y

F

x

F

B

The first section contains the partial derivatives with respect to the interior orientation, the

second group are the partials with respect to the exterior orientation, and the third group are

Projective Equations Page 47

the partials with respect to the ground coordinates. The partial derivatives of the interior

orientation (x

o

, y

o

, and f only) are very basic.

W

V

f

F

1

y

F

0

x

F

W

U

f

F

0

y

F

1

x

F

2

o

2

o

2

1

o

1

o

1

·

∂

∂

− ·

∂

∂

·

∂

∂

·

∂

∂

·

∂

∂

− ·

∂

∂

For the partial derivatives taken with respect to the exposure station coordinates, we will use

the following general differentiation formulas:

,

_

¸

¸

∂

∂

−

∂

∂

·

,

_

¸

¸

∂

∂

−

,

_

¸

¸

∂

∂

·

∂

∂

,

_

¸

¸

∂

∂

−

∂

∂

·

,

_

¸

¸

∂

∂

−

,

_

¸

¸

∂

∂

·

∂

∂

P

W

W

V

P

V

W

f

W

P

W

V

P

V

W

f

P

F

P

W

W

U

P

V

W

f

W

P

W

U

P

U

W

f

P

F

2

2

2

1

where P are the parameters. For the exposure station coordinates (X

L

, Y

L

, Z

L

), the partial

derivatives of the functions U, V, and W become:

33

L

32

L

31

L

23

L

22

L

21

L

13

L

12

L

11

L

m

Z

W

m

Y

W

m

X

W

m

Z

V

m

Y

V

m

X

V

m

Z

U

m

Y

U

m

X

U

− ·

∂

∂

− ·

∂

∂

− ·

∂

∂

− ·

∂

∂

− ·

∂

∂

− ·

∂

∂

− ·

∂

∂

− ·

∂

∂

− ·

∂

∂

Then the partial derivatives of the functions F

1

and F

2

can be shown to be

Projective Equations Page 48

,

_

¸

¸

+ − ·

∂

∂

,

_

¸

¸

+ − ·

∂

∂

,

_

¸

¸

+ − ·

∂

∂

,

_

¸

¸

+ − ·

∂

∂

,

_

¸

¸

+ − ·

∂

∂

,

_

¸

¸

+ − ·

∂

∂

33 23

L

2

33 13

L

1

32 22

L

2

32 12

L

1

31 21

L

2

31 11

L

1

m

W

V

m

W

f

Z

F

m

W

U

m

W

f

Z

F

m

W

V

m

W

f

Y

F

m

W

U

m

W

f

Y

F

m

W

V

m

W

f

X

F

m

W

U

m

W

f

X

F

Recall that the rotation matrix is given in the sequential form as

ω ϕ κ

· M M M M

G

then the partial derivatives of the orientation matrix with respect to the angles can be shown

to be

G

G

G

G

G

G

M

0 0 0

0 0 1

0 1 0

M M

M M

0 0 cos

0 0 sin

cos sin 0

M M

M

M

M

0 1 0

1 0 0

0 0 0

M

M

M M

M

1

1

1

]

1

¸

− ·

κ ∂

∂

·

κ ∂

∂

1

1

1

]

1

¸

ω

ω −

ω − ω

·

ϕ ∂

∂

·

ϕ ∂

∂

1

1

1

]

1

¸

−

·

ω ∂

∂

·

ω ∂

∂

ω ϕ

κ

ω

ϕ

κ

ω

ϕ κ

Then the partial derivatives of the functions U, V, and W taken with respect to the orientation

angles becomes

1

1

1

]

1

¸

−

−

−

·

1

1

1

]

1

¸

L i

L i

L i

G

Z Z

Y Y

X X

M

W

V

U

Yielding,

Projective Equations Page 49

1

1

1

1

1

1

]

1

¸

−

−

−

ω ∂

∂

·

1

1

1

1

1

1

1

1

]

1

¸

ω ∂

∂

ω ∂

∂

ω ∂

∂

L i

L i

L i

G

Z Z

Y Y

X X

M

W

V

U

1

1

1

1

1

1

]

1

¸

−

−

−

ϕ ∂

∂

·

1

1

1

1

1

1

1

1

1

]

1

¸

ϕ ∂

∂

ϕ ∂

∂

ϕ ∂

∂

L i

L i

L i

G

Z Z

Y Y

X X

M

W

V

U

1

1

1

1

1

1

]

1

¸

−

−

−

κ ∂

∂

·

1

1

1

1

1

1

1

1

]

1

¸

κ ∂

∂

κ ∂

∂

κ ∂

∂

L i

L i

L i

G

Z Z

Y Y

X X

M

W

V

U

Now one can evaluate the partial derivatives of F

1

and F

2

with respect to the orientation

angles.

,

_

¸

¸

κ ∂

∂

−

κ ∂

∂

·

κ ∂

∂

,

_

¸

¸

ϕ ∂

∂

−

ϕ ∂

∂

·

ϕ ∂

∂

,

_

¸

¸

ω ∂

∂

−

ω ∂

∂

·

ω ∂

∂

W

W

U U

W

f F

W

W

U U

W

f F

W

W

U U

W

f F

1

1

1

Projective Equations Page 50

,

_

¸

¸

κ ∂

∂

−

κ ∂

∂

·

κ ∂

∂

,

_

¸

¸

ϕ ∂

∂

−

ϕ ∂

∂

·

ϕ ∂

∂

,

_

¸

¸

ω ∂

∂

−

ω ∂

∂

·

ω ∂

∂

W

W

V V

W

f F

W

W

V V

W

f F

W

W

V V

W

f F

2

2

2

The partial derivatives of the functions F

1

and F

2

with respect to the survey points are shown

to be:

,

_

¸

¸

− ·

∂

∂

− ·

∂

∂

,

_

¸

¸

− ·

∂

∂

− ·

∂

∂

31 21

L

2

i

2

31 11

L

1

i

1

m

W

V

m

W

f

X

F

X

F

m

W

U

m

W

f

X

F

X

F

,

_

¸

¸

− ·

∂

∂

− ·

∂

∂

,

_

¸

¸

− ·

∂

∂

− ·

∂

∂

32 22

L

2

i

2

32 12

L

1

i

1

m

W

V

m

W

f

Y

F

Y

F

m

W

U

m

W

f

Y

F

Y

F

,

_

¸

¸

− ·

∂

∂

− ·

∂

∂

,

_

¸

¸

− ·

∂

∂

− ·

∂

∂

33 23

L

2

i

2

33 13

L

1

i

1

m

W

V

m

W

f

Z

F

Z

F

m

W

U

m

W

f

Z

F

Z

F

Numerical Resection and Orientation Page 51

NUMERICAL RESECTION AND ORIENTATION

Introduction

Numerical resection and orientation involves the determination of the coordinates of the

exposure station and the orientation of the photograph in space. Merchant [1973] has

identified four different cases, in order of increasing complexity.

Case I: Compute the elements of exterior orientation (κ, ϕ, ω, X

L

, Y

L

, and Z

L

)

by observing the photo coordinates (x

i

, y

i

) and treating the survey

control coordinates (X

i

, Y

i

, and Z

i

) as known.

Case II: This is an extension of Case I with the addition that the elements of

exterior orientation are also observed quantities. This can easily be

visualized by the use of the global positioning system (GPS) on-board

the aircraft.

Case III: This approach is an extension of Case II. Here the observations

include photo coordinates, exterior orientation, and survey coordinates

(to unknown points). The survey control (coordinates to known

points) is given. The solution is to find the adjusted exterior orientation

parameters and the survey coordinates.

Case IV: Case IV is a further refinement of Case III except that the elements of

interior orientation are observed in addition to the photo coordinates,

exterior orientation, and survey coordinates. The adjustment will

result in adjusted exterior and interior orientation and survey

coordinates.

The general notation for the mathematical model is given as

F = F(obs, X, Y) = 0

where: obs = the observed quantities, and

X, Y = the parameters for the condition function.

A Taylor’s Series evaluation is done to linearize the equation and this is shown as

( ) ( )

0 V

obs

F

X

F

F F

0

00

0

00

0

00

·

1

]

1

¸

∂

∂

+ ∆

1

]

1

¸

∂

∂

+ · (1)

Numerical Resection and Orientation Page 52

The subscript “0” indicates an observed parameter value whereas “00” means the current

estimate of the value. This series is evaluated by comparing the observations to the current

estimates of what those values need to be. Evaluation of this function results in the

observation equation.

or in a more general form:

where: V = the residuals on the observations,

∆ = the alteration vector to the parameters,

f = the discrepancy vector found by comparing the mathematical model

using the current estimate of the parameters with the observed values.

Case I

Case I is the simplest form of the space resection problem. The observed values are the

photo coordinates (x

i

, y

i

). The elements of interior orientation along with the survey

coordinate control are taken as error free. The observational variance-covariance matrix (3

oo

)

is estimated and the exterior orientation elements ((κ, ϕ, ω, X

L

, Y

L

, and Z

L

) and variance-

covariance matrix on the adjusted parameters (3

e

oo

) are computed. The math model employs

the central projective equations as the conditional function. It is shown in general form as:

where the central projective equations are, for x and y:

The observation equations are written as

0 AV B f F · + ∆ + ·

0 f B AV · + ∆ +

( )

( )

( )

1

]

1

¸

·

y F

x F

x F

( ) ( )

( ) ( ) 0

Z

Y

c y y y F

0

Z

X

c x x x F

o

o

·

∆

∆

− − ·

·

∆

∆

− − ·

0 f B AV

e e

· + ∆ +

Numerical Resection and Orientation Page 53

where:

( )

( )

( )

( )

( )

I

1 0

0 1

y , x

y F

y , x

x F

obs

F

A

j j

j

j j

j

j

j

j

·

1

]

1

¸

·

1

1

1

1

1

1

]

1

¸

∂

∂

∂

∂

·

1

1

]

1

¸

∂

∂

·

Thus, the general form for n points is

0 f B V

e e

· + ∆ + (2)

If the number of photo points is larger than three then a least squares adjustment is

performed. The function to be minimized is expressed as

where: λ = the Langrangian multiplier (vector of correlates), and

W = the weight matrix for the photo observations, which is defined as

The weight matrix is usually assumed to be a diagonal matrix derived from the a priori

estimates of the observational variance-covariance matrix. This is usually sufficient for a

two-axis comparator but the correlation cannot be neglected for polar comparators.

Differentiation of the function yields

( )

( )

( )

( )

( )1

1

1

1

1

1

]

1

¸

ω ϕ κ ∂

∂

ω ϕ κ ∂

∂

·

1

]

1

¸

∂

∂

·

L L L

j

L L L

j

j

j

e

Z , Y , X , , ,

y F

Z , Y , X , , ,

x F

Parameters

F

B

( )

( )

O

OO

j

j

j

j

y

x

j

y F

x F

f

v

v

V

1

]

1

¸

·

1

1

]

1

¸

·

[ ]

L L L

Z , Y , X , , , δ δ δ δω δϕ δκ · ∆

,

_

¸

¸

+ ∆ + λ − · f B V 2 WV V F

w e

T

∑ ·

−

obs

OO

1

W

Numerical Resection and Orientation Page 54

There are (4n + m) unknowns: 2n in V, 2n in λ, and m in

e

∆. Collecting the observation

equation and the differentiated function gives

0

f

0

0 V

0 B I

B 0 0

I 0 W

e

e

1

e

·

1

1

1

]

1

¸

+

1

1

1

]

1

¸

λ

∆

1

1

1

1

1

]

1

¸

,

_

¸

¸

−

(5)

Eliminating V and λ and substituting V from (3) into (2) yields

or

Substituting λ into (4) results in the normal equations

or

where: N = the normal coefficient matrix and

t = the constant vector

The adjusted parameters become

The adjusted parameters are found by adding the corrections to those parameters:

) 4 ( 0 B 2

F

) 3 ( 0 2 V W 2

V

F

T

e

e

· λ

,

_

¸

¸

− ·

∆ ∂

∂

· λ − ·

∂

∂

0 f B W

e e

1

· + ∆ + λ

−

Wf B W

e e

− ∆ − · λ

0 Wf B B W B

T

e e e

T

e

·

,

_

¸

¸

+ ∆

,

_

¸

¸

0 t N

e

· + ∆

t N

1

e

−

− · ∆

Numerical Resection and Orientation Page 55

In the least squares adjustment, the process is iterated until the alteration vector reaches some

predefined value. The process of updating the parameter values before undergoing another

adjustment is commonly referred to as the “Newton-Raphson” method.

The residuals are computed as follows:

Therefore

The unit variance is expressed as

with the variance-covariance matrix relating the adjusted parameters is

1 2

o

X

oo

N

e

−

σ ·

∑

Example Single Photo Resection – Case 1

Following is an example of a single photo resection and orientation, Case I problem. The

following data are entered into the program. Survey control is treated as error free, the photo

observations are measured quantities already corrected for atmospheric refraction, lens

distortion, and earth curvature. The exterior orientation is estimated. A weight matrix for the

photo observations was based on a standard error of 10µm.

Following is an example of the single photo resection and orientation.

e

oo a

X X ∆ + ·

0 f B V

e e

· + ∆ +

a

o

F V

0 f V

− ·

· +

6 n 2

WV V

T

2

o

−

· σ

Numerical Resection and Orientation Page 56

SINGLE PHOTO RESECTION AND ORIENTATION - CASE I

Photo Number 1

Photo observations:

Point No. x y

1 61.982 79.018

2 -73.147 78.240

3 -54.934 65.899

4 -26.046 -29.449

5 -34.893 -71.287

6 -23.980 -31.889

7 -11.783 88.922

8 -85.047 105.836

9 -26.468 -6.082

10 -12.523 79.026

11 27.972 85.027

12 12.094 -69.861

13 -80.458 -70.012

Survey Control Points:

Point No. X Y Z

1 44646.75000 111295.53700 273.86600

2 45527.20300 109932.63000 275.53100

3 45536.70500 110193.01300 275.10100

4 46322.43000 111086.31900 254.99000

5 46797.22300 111261.00100 263.21400

6 46334.26800 111122.89000 254.85000

7 45019.89000 110475.18200 262.84500

8 45328.04500 109650.87600 291.36500

9 46087.13500 110933.34300 255.65500

10 45126.21800 110531.17400 261.97300

11 44815.80000 110910.16300 288.32000

12 46489.27900 111729.17600 266.85200

13 47061.42300 110795.42700 268.63900

Exterior Orientation Elements (Estimated)

X

L

Y

L

Z

L

Kappa Phi Omega

45900.0000 111150.0000 2090.0000 2.1500 0.0000 0.0000

The initial values for the design matrix (B) and the discrepancy vector (f) are shown as:

Numerical Resection and Orientation Page 57

The initial values for the normal coefficient matrix (N) are:

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

]

1

¸

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

−

·

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

]

1

¸

−

− − − −

− −

− − − − −

− − −

− − − −

− −

− − −

−

− − − −

− −

− − −

− −

− − −

−

− − − −

−

− − − −

−

− − − −

− −

− − −

− −

− − −

− − −

− − − −

·

0875 . 5

6357 . 2

3097 . 2

4270 . 1

6046 . 2

1628 . 5

9897 . 2

6765 . 4

9310 . 2

9540 . 2

9669 . 3

4727 . 5

0857 . 3

8762 . 4

0136 . 3

4114 . 2

7489 . 3

7577 . 1

0504 . 3

4771 . 2

4395 . 3

5084 . 4

7212 . 3

8890 . 4

1034 . 2

6286 . 5

f

5647 . 70 6172 . 168 8223 . 77 0356 . 0 0457 . 0 0698 . 0

3670 . 142 8286 . 132 9245 . 64 0427 . 0 0698 . 0 0457 . 0

6635 . 104 0510 . 149 5210 . 13 0370 . 0 0456 . 0 0698 . 0

5123 . 131 8336 . 78 5513 . 67 0074 . 0 0698 . 0 0456 . 0

8692 . 94 9512 . 179 1348 . 33 0486 . 0 0462 . 0 0706 . 0

8061 . 122 1434 . 103 6316 . 87 0184 . 0 0706 . 0 0462 . 0

9679 . 110 9332 . 161 8465 . 7 0449 . 0 0455 . 0 0696 . 0

8732 . 129 8825 . 79 0157 . 82 0043 . 0 0696 . 0 0455 . 0

8317 . 82 5384 . 127 5140 . 23 0017 . 0 0454 . 0 0694 . 0

9943 . 129 6027 . 85 1510 . 3 0128 . 0 0694 . 0 0454 . 0

7222 . 174 1336 . 162 5743 . 79 0610 . 0 0463 . 0 0707 . 0

5405 . 193 8997 . 57 8029 . 109 0442 . 0 0707 . 0 0463 . 0

1848 . 117 5356 . 171 9068 . 6 0504 . 0 0455 . 0 0696 . 0

7678 . 129 8770 . 79 0077 . 92 0038 . 0 0696 . 0 0455 . 0

7773 . 82 0500 . 134 5686 . 21 0157 . 0 0453 . 0 0693 . 0

5356 . 127 3078 . 88 8754 . 28 0118 . 0 0693 . 0 0453 . 0

3077 . 87 3882 . 160 1354 . 33 0370 . 0 0456 . 0 0696 . 0

2036 . 125 4782 . 99 5381 . 67 0181 . 0 0696 . 0 0456 . 0

2877 . 82 2941 . 133 5689 . 23 0144 . 0 0453 . 0 0693 . 0

0349 . 128 6295 . 88 3986 . 26 0128 . 0 0693 . 0 0453 . 0

7656 . 119 0967 . 141 4256 . 50 0382 . 0 0458 . 0 0701 . 0

8062 . 153 1100 . 73 3384 . 69 0278 . 0 0701 . 0 0458 . 0

1935 . 138 0566 . 144 2580 . 68 0452 . 0 0459 . 0 0701 . 0

0129 . 173 1797 . 69 9612 . 81 0376 . 0 0701 . 0 0459 . 0

7032 . 76 1953 . 183 6106 . 67 0447 . 0 0458 . 0 0700 . 0

6350 . 132 8596 . 129 1204 . 81 0372 . 0 0700 . 0 0458 . 0

B

Numerical Resection and Orientation Page 58

The initial values for the constant vector (t) are compute a t = B

T

W and are shown as

The following data represent the values for the alteration vector for each iteration.

Iteration No. 1

Alteration Vector (Delta):

-8.15331

-3.94869

-0.15855

-0.02176

0.01941

0.00958

Iteration No. 2

Alteration Vector (Delta):

0.61417

0.72201

0.70268

-0.00013

0.00012

0.00022

1

1

1

1

1

1

1

1

]

1

¸

−

− −

−

− − −

− −

·

8000 . 4091640942

1900 . 479382745 9000 . 4273950234

9100 . 207855784 7400 . 641531976 1300 . 961373269

6170 . 535392 4928 . 216476 0000 . 0 8256 . 291

4125 . 1886194 6390 . 106967 2887 . 116276 2187 . 195 6894 . 905

6390 . 106967 9805 . 1942260 7816 . 353883 0311 . 65 0000 . 0 6894 . 905

N

1

1

1

1

1

1

1

1

]

1

¸

−

−

·

995 . 26185558

450 . 108535637

580 . 37790387

492 . 326

210 . 14980

527 . 51743

t

Numerical Resection and Orientation Page 59

Iteration No. 3

Alteration Vector (Delta):

0.0015

-0.0015

0.00033

0.00000

0.00000

0.00000

Iteration No. 4

Alteration Vector (Delta):

0.00000

0.00000

0.00000

0.00000

0.00000

0.00000

Exterior Orientation Elements (Adjusted)

X

L

Y

L

Z

L

Kappa Phi Omega

45892.4624 111146.7719 2090.5445 2.1281 0.0195 0.0098

Residuals on Photo Observations:

Point No. x y

1 -0.002 -0.009

2 0.004 0.007

3 -0.002 0.002

4 -0.001 -0.002

5 0.002 -0.004

6 -0.000 -0.000

7 0.006 0.011

8 0.006 0.001

9 -0.011 -0.000

10 -0.007 0.001

11 0.002 0.006

12 -0.001 0.007

13 0.004 -0.006

The A Posteriori Unit Variance is .3471294

Numerical Resection and Orientation Page 60

The Variance-Covariance Matrix of Adjusted Parameters is:

.0233948622 .0011026685 -.0020985439 -.0000016307 .0000104961 -.0000002099

.0011026685 .0154028192 -.0034834200 -.0000000932 .0000001678 -.0000075937

-.0020985439 -.0034834200 .0025329779 .0000001566 -.0000009114 .0000018958

-.0000016307 -.0000000932 .0000001566 .0000000005 -.0000000007 .0000000000

.0000104961 .0000001678 -.0000009114 -.0000000007 .0000000048 .0000000001

-.0000002099 -.0000075937 .0000018958 .0000000000 .0000000001 .0000000039

Case II

With Case II, we introduce direct observations on the parameters. A growing example of this

situation is the use of airborne GPS where the receiver is used to determine the exposure

station of the camera at the instant of exposure. Although this will only provide the exposure

station coordinates, integrated systems such as GPS with inertial navigation can yield the

rotational elements also. This new resection application adds a new math model to the

adjustment. This is, for all of the exterior orientation elements:

Since the observations have residuals, the adjusted parameters can only be estimated initially.

Thus,

L L

L L

L L

Z

oo

L Z

o

L

Y

oo

L Y

o

L

X

oo

L X

o

L

oo o

oo o

oo o

Z v Z

Y v Y

X v X

v

v

v

δ + · +

δ + · +

δ + · +

δ + ω · + ω

δ + ϕ · + ϕ

δ + κ · + κ

ω ω

ϕ ϕ

κ κ

( )

( )

( )

( )

( )

( ) 0 Z Z Z F

0 Y Y Y F

0 X X X F

0 F

0 F

0 F

a

L

o

L L

a

L

o

L L

a

L

o

L L

a o

a o

a o

· − ·

· − ·

· − ·

· ω − ω · ω

· ϕ − ϕ · ϕ

· κ − κ · κ

Numerical Resection and Orientation Page 61

Rearranging, we have

The observation equations are

Grouped with the observation equations developed for the photo coordinates, we have

or

where:

The function to be minimized is

where the weight matrix is shown to consist of

( )

( )

0 Z Z v

0 Y Y v

0 X X v

0 v

0 v

0 v

L L

L L

L L

Z

oo

L

o

L Z

Y

oo

L

o

L Y

X

oo

L

o

L X

oo o

oo o

oo o

· δ −

,

_

¸

¸

− +

· δ −

,

_

¸

¸

− +

· δ −

,

_

¸

¸

− +

· δ − ω − ω +

· δ −

,

_

¸

¸

ϕ − ϕ +

· δ − κ − κ +

ω ω

ϕ ϕ

κ κ

0 f V

e e e

· + ∆ −

0 f V

0 f B V

e e e

e e

· + ∆ −

· + ∆ +

0 f B V

e

· + ∆ +

1

]

1

¸

·

1

1

]

1

¸

−

·

1

]

1

¸

·

e

e

e

f

f

f

I

B

B

V

V

V

,

_

¸

¸

+ ∆ + λ − · f B V 2 V W V F

e

T

T

Numerical Resection and Orientation Page 62

1

]

1

¸

·

e

W 0

0 W

W

The normal equations are then written as

Which in an expanded form looks like

resulting in, after performing the multiplication

or generally shown as

On the first cycle in the adjustment the estimates of the parameters are the same as the

observed values

Therefore, the discrepancy vector becomes

Looking at the normal equations, one can see that as the weight matrix for the observed

exterior orientation goes to zero then the normal equation reduces to Case I. As before, the

solution is expressed as

The adjusted parameters are computed by adding the discrepancy vector to the current

estimate of the parameters.

( ) 0 f W B B W B

T

e

T

· + ∆

0

f

f

W 0

0 W

I B

I

B

W 0

0 W

I B

e e

e

T

e

e

e

e

T

·

1

]

1

¸

1

]

1

¸

1

]

1

¸

− + ∆

1

1

]

1

¸

−

1

]

1

¸

1

]

1

¸

−

0 f W Wf B W B W B

e e

e

T

e e e

e

T

·

,

_

¸

¸

− + ∆

,

_

¸

¸

+

0 t N

e

· + ∆

a

o

a

oo

X X ·

0 F f

O

OO

a e

· ·

t N

1

e

−

− · ∆

Numerical Resection and Orientation Page 63

The adjustment is iterated by making these adjusted parameters the current estimates. The

cycling continues until the solution reaches some acceptable level.

The residuals are then found by evaluating the function using the observed and final adjusted

values.

The unit variance is computed as

Finally, the a posteriori variance-covariance matrix is found by multiplying the unit variance

by N

-1

.

Case III

Case III is an extension of Case II in that we now introduce the spatial coordinates as

observed quantities thereby constraining the parameters. The math models are:

• For collinearity:

• For the exterior orientation:

e

oo a

X X ∆ + ·

o

a

F V − ·

6 26

V W V

T

2

o

−

· σ

2

2

o

x

oo

N

e

σ · ∑

( ) ( )

( ) ( ) 0

Z

Y

c y y y F

0

Z

X

c x x x F

o

o

·

∆

∆

− − ·

·

∆

∆

− − ·

( )

( )

( )

( )

( )

( ) 0 Z Z Z F

0 Y Y Y F

0 X X X F

0 F

0 F

0 F

a

L

o

L L

a

L

o

L L

a

L

o

L L

a o

a o

a o

· − ·

· − ·

· − ·

· ω − ω · ω

· ϕ − ϕ · ϕ

· κ − κ · κ

Numerical Resection and Orientation Page 64

• For the ground control:

The observation equations then become

Where the observational residuals on the exterior orientation

,

_

¸

¸

e

V , survey coordinates

,

_

¸

¸

s

V

and photo coordinates (V) are defined as:

The discrepancy vectors

,

_

¸

¸

f , f , f

s e

are computed by evaluating the functions using the current

estimates of the unknown parameters and the original observations.

The alterations to the current assumed value are shown as:

( )

( )

( ) 0 Z Z Z F

0 Y Y Y F

0 X X X F

a

j

o

j j

a

j

o

j j

a

j

o

j j

· − ·

· − ·

· − ·

0 f B B V

0 f V

0 f V

s s e e

s s s

e e e

· + ∆ + ∆ +

· + ∆ −

· + ∆ −

1

1

1

1

1

1

1

1

]

1

¸

·

1

1

1

1

1

1

1

1

]

1

¸

·

1

1

1

1

1

1

1

1

]

1

¸

·

ω

ϕ

κ

n

n

2

1

1

n

2

1

1

1

L

L

L

y

x

x

y

x

Z

X

Z

Y

X

s

Z

Y

X

e

x

v

v

v

v

V

v

v

v

v

v

V

v

v

v

v

v

v

V

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

1

1

1

1

1

1

1

1

1

]

1

¸

·

1

1

1

1

1

1

1

1

]

1

¸

ω

ϕ

κ

·

1

1

1

]

1

¸

·

n

n

n

J

1

1

s

L

L

L

e

j

j

Z F

Y F

X F

Z F

Y F

X F

f

Z F

Y F

X F

F

F

F

f

y F

x F

f

O

OO

o

oo

**Numerical Resection and Orientation Page 65
**

The design matrices,

e

Band

s

B, are presented as being

Collecting the observations

1

1

1

1

1

1

1

1

1

]

1

¸

δ

δ

δ

δ

δ

δ

· ∆

1

1

1

1

1

1

1

1

]

1

¸

δ

δ

δ

δω

δϕ

δκ

· ∆

n

n

n

1

1

1

S

L

L

L

e

Z

Y

X

Z

Y

X

Z

Y

X

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

( )

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

]

1

¸

∂

∂

∂

∂

∂

∂

∂

∂

∂

∂

·

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

1

]

1

¸

ω ϕ κ ∂

∂

ω ϕ κ ∂

∂

ω ϕ κ ∂

∂

ω ϕ κ ∂

∂

·

n j 1 1 1

n

n j 1 1 1

j

n j 1 1 1

1

n j 1 1 1

1

n j 1 1 1

1

s

L L L

n

L L L

n

L L L

1

L L L

1

e

Z , , Z , , Z , Y , X

Z F

Z , , Z , , Z , Y , X

Z F

Z , , Z , , Z , Y , X

Z F

Z , , Z , , Z , Y , X

Y F

Z , , Z , , Z , Y , X

X F

B

Z , Y , X , , ,

y F

Z , Y , X , , ,

x F

Z , Y , X , , ,

y F

Z , Y , X , , ,

x F

B

**Numerical Resection and Orientation Page 66
**

or

The function to be minimized is

This leads to the normal equations

where the weight matrix is assumed to be free of any correlation and takes the form:

The normal equation in the expanded form is

or as

The solution is

Then the process is cycled until an acceptable solution is obtained.

0

f

f

f

I 0

0 I

B B

V

V

V

s

e

s

e

s e

s

e

·

1

1

1

]

1

¸

+

1

1

]

1

¸

∆

∆

1

1

1

1

]

1

¸

−

− +

1

1

1

]

1

¸

0 f B V · + ∆ +

( ) f B V 2 V W V F

T

T

+ ∆ + λ − ·

( ) ( ) 0 f W B B W B

T T

· + ∆

1

1

1

]

1

¸

·

s

e

W

W

W

W

0

f W Wf B

f W Wf B

W B W B

B W B W B W B

s s

s

T

e e

e

T

s

e

s s

s

T

s

e

T

e e

e

T

·

1

1

]

1

¸

−

−

+

1

1

]

1

¸

∆

∆

1

1

]

1

¸

+

+

0 t N · + ∆

t N

1 −

− · ∆

Principles of Airborne GPS Page 67

PRINCIPLES OF AIRBORNE GPS

INTRODUCTION

The utilization of the global positioning system (GPS) in photogrammetric mapping began

almost from the inception of this technology. Initially, GPS offered a major improvement in

the control needed for mapping. It provided coordinate values that were of higher quality and

more reliable than those using conventional field surveying techniques. At the same time the

cost and labor required for that control were lower than conventional surveying. Experiences

from using GPS-control showed several improvements [Salsig and Grissim, 1995]:

a) There was a better fit between the control and the aerotriangulation results,

particularly for large-area projects.

b) Surveyors were not concerned with issues like intervisibility between control

points, therefore the photogrammetrist often received the control points in

locations advantageous to them instead of the location determined from the

execution of a conventional field survey.

c) Visibility of the ground control point to the aerial camera is always important.

Fortunately, those points that are “visible” using the GPS receivers are also free

of major obstructions that would prevent the image from appearing in the

photography. This led to a better recovery rate for the control.

Unfortunately, the window from which GPS observations could be made was not always at

the most desirable time of day. This changed as the satellite constellation began to reach its

current operational status. Also, with these increasing windows came the idea of placing a

GPS receiver within the mapping aircraft.

Airborne-GPS is now a practical and operational technology that can be used to enhance the

efficiency of photogrammetry, although Abdullah et al [2000] reports that only about 30% of

the photogrammetry companies are using this technology at this time. But, this does account

for about 40% of the projects undertaken by photogrammetric firms. This data is based on

antidotal information. Airborne GPS can be used for:

precise navigation during the photo flight

centered or pin-point photography

determination of the coordinates of the nodal point for aerial triangulation

To achieve the first two applications the user requires real-time differential GPS positioning

[Habib and Novak, 1994]. Because the accuracy of position for navigation and centered

photography ranges from one to five meters, C/A-code or P-code pseudorange is all that is

required. The important capability is the real-time processing. For aerotriangulation, a higher

accuracy is needed which means observing pseudorange and phase. Here, real-time

processing is not as important in terms of functionality.

Principles of Airborne GPS Page 68

Airborne GPS is used to measure the location of the camera at the instant of exposure. This

gives the photogrammetrist X

L

, Y

L

, and Z

L

. GPS can also be used to derive the orientation

angles by using multiple antennas. Unfortunately, the derived angular relationships only

have a precision of about 1’ of arc while photogrammetrists need to obtain these values to

better than 10” of arc.

To compute the position of the camera during the project, two dual frequency geodetic GPS

receivers are commonly employed. One is placed over a point whose location is known and

the other is mounted on the aircraft. Carrier phase data are collected by both receivers during

the flight with sampling rates generally at either 0.5 or 1 second. The integer ambiguity must

be taken into account and this will be discussed later. Generally, on-the-fly integer

ambiguity resolution techniques are employed.

ADVANTAGES OF AIRBORNE GPS

The main limitation of photogrammetry is the need to obtain ground control to fix the

exterior orientation elements. The necessity of obtaining ground control is costly and time-

consuming. In addition, there are many instances where the ability to gather control is not

feasible. Corbett and Short [1995] identify situations where this exists:

a) Time. Because phenomena change with time, it is possible that the subject of the

mapping has either changed or disappeared when that the control has been collected.

Another limitation occurs when the results of the mapping need to be completed in a

very short time period.

b) Location. The physical location of the survey site may restrict access because of

geography or the logistics to complete a field survey may be such to make the survey

prohibitive.

c) Safety. The phenomena of interest may be hazardous or the subject may be located in

an area that is dangerous for field surveys.

d) Cost. Tied to the other problems is that of cost. The necessity of obtaining control

under the conditions outlined above may make the cost of the project prohibitive

because control surveys are a labor-intensive activity. Even under normal conditions

the charge for procuring control is high and, if too much is needed, could negate the

economic advantages that photogrammetry offers. GPS gives the photogrammetrist

the opportunity to minimize (or even eliminate) the amount of ground control and still

maintain the accuracy needed for a mapping project. Lapine [nd] points out that

almost all of the National Oceanic and Atmospheric Administration (NOAA) aerial

mapping projects utilize airborne-GPS because they have found efficiencies due to a

reduction in the amount of ground control required for their mapping.

While airborne GPS can be used to circumvent the necessity of ground control, it offers the

photogrammetrist additional advantages. These include [Abdullah et al, 2000; Lucas, 1994]:

Principles of Airborne GPS Page 69

• It has a stabilizing effect on the geometry.

• The attainable accuracy meets most mapping standards.

• Substantial cost reduction for medium and large-scale projects are possible.

• There is an increase in productivity by decreasing the amount of ground control

necessary for a project.

• It reduces the hazards due to traffic, particularly for highway corridor mapping.

• Precise flight navigation and pin-point photography are possible with this

technology.

It is now possible, at least theoretically, to use GPS aerotriangulation without any ground

control. This requires [Lucas, 1996] a near perfect system, an unlikely scenario. Moreover, it

would be extremely prudent to have control, if for no other reason than to check the results.

While airborne GPS is operational, there are special considerations that must be accounted

for to ensure success for a project.

Airborne GPS is operational and being used for more mapping projects. There are some

concerns that need to be addressed for a successful project. These include [Abdullah et al,

2000]:

• Risk is greater if the project is not properly planned and executed.

• There is less ground control.

• As ground control gets smaller, datum transformation problems become more

important.

• There is some initial financial investment by the mapping organization.

• Requires non-traditional technical support.

ERROR SOURCES

The use of GPS in photogrammetry contains two sets of error sources and the introduction of

additional errors inherent in the integration of these two technologies. For precise work, these

errors need to be accounted for.

Photogrammetric errors include the following:

a) Errors associated with the placement of targets. The Texas Department of

Transportation has determined that an error of 1 cm can be expected in centering the

target over the point [Bains, 1995]. This is based on a 10 cm wide cross target. The

main problem is that the center of the target is not precisely defined.

b) Errors inherent in the pug device used to mark control on the diapositives. If the pug

is not properly adjusted then the point transfer may locate pass- and tie-points

erroneously. Regardless, the process of marking control introduces another source of

error into the photogrammetric process.

Principles of Airborne GPS Page 70

c) Camera calibration is crucial in determining the distortion parameters of the aerial

camera used in photogrammetry. Bains [1995] has found that the current USGS

calibration certificate does not provide the information needed for GPS assisted

photogrammetry. Merchant [1992] states that a system calibration is more important

with airborne GPS.

d) The camera shutter can contain large random variability as to the time the shutter is

open. Most of the time, this error source is not that important but if this irregularity

is too great, contrast within the image could be lost. The major problem with this

non-uniformity is when trying to synchronize the time of exposure to the epoch in

which the GPS signal collecting data.

Error sources for GPS are well identified. A loss or disruption of the GPS signal could cause

problems in resolving the integer ambiguities and could result in erroneous positioning of the

camera location thereby invalidating the project. The GPS error sources include:

a) Software problems can cause problems with a GPS mission, particularly in the

kinematic mode. Some software cannot resolve cycle slips in a robust fashion,

although newer on-the-fly ambiguity resolution software will help. There is also a

limitation on the accuracy of different receivers used in the kinematic surveys.

Geodetic quality receivers, with 1-2 cm relative accuracy, should be employed for

projects where high precision is required.

b) Datum problems. The GPS position is determined in the WGS 84 system whereas the

survey coordinates are in some local coordinate system or in NAD 27 coordinates

where there is no exact mathematical relationship between systems.

c) Signal interruption. This is critical if continuous tracking is necessary in order to

process the GPS signal. Interruption may occur during sharp banking turns through

the flight.

d) Geometry of the satellite constellation.

e) Receiver clock drift. Although this error is relatively small, this drift should be

accounted for in the processing of GPS observations.

f) Multipath. This is particularly problemsome on surfaces such as the fuselage or on the

wings. This error is due to reception of a reflected signal, which represents a delay in

the reception time.

Errors that can be found in the integration of GPS with the aerial camera and

photogrammetry are [Bains, 1995; Merchant, 1992; Lapine, nd]:

a) The configuration of airborne GPS implies that the two data collectors are not

physically in the same location. The GPS antenna must be located outside and on top

of the aircraft to receive the satellite signals. The aerial camera is located within the

Principles of Airborne GPS Page 71

aircraft and is situated on the bottom of the craft. The separation distance between

the antenna and camera (the nodal point) needs to be accurately determined. This

distance is found through a calibration process prior to the flight. This value can also

be introduced in the adjustment by constraining the solution or by treating it in the

stochastic process.

b) Prior to beginning a GPS photogrammetric mission, the height between the ground

control point and the antenna needs to be measured. Experience has found that there

can be variability in this height based on the quantity of fuel in the aircraft. This

problem occurs only when the airborne-GPS system is based on an initialization

process when solving for the integer ambiguities.

c) The camera shutter can cause problems as was identified above. The effect of this

error creates a time bias. Of concern is the ability to trip the shutter on demand. In

the worst case, Merchant [1992] points out that the delay from making the demand

for an exposure to the midpoint of the actual exposure could be several seconds. For

large-scale photography this could cause serious problems because of the turbulent

air in the lower atmosphere and the interpretation from the GPS signal to the

effective exposure time. Early experiments with the Wild RC10 with an external

pulse generator showed wide variability in time between maximum aperture and

shutter release [van der Vegt, 1989]. The values ranged from 10-100 msec.

Traveling at 100 m/sec, positional errors from 1-10 m could be expected.

d) Interpolation algorithm used to compute the position of the phase center of the

antenna. Since the instant of exposure does not coincide with the sampling time in

the GPS receiver, an interpolation of the position of the antenna at the instant of

exposure must be computed. Different algorithms have varying characteristics,

which could introduce error in the position. Related to this uncertainty is the

sampling rate used to capture the GPS signal. Too low of a rate will increase the

processing whereas too high of a rate will degrade the accuracy of the interpolation

model.

e) Radio frequency interference can cause problems, particularly onboard the airplane.

A receiver that can filter out this noise should be used. One example receiver is the

Trimble 4000 SSI with Super-Trak signal processing which has been used

successfully in airborne-GPS [Salsig and Grissim, 1995].

Camera Calibration

One of the weak links in airborne GPS involves the camera calibration. As was pointed out

earlier, the traditional camera calibration may not provide the information needed when GPS

is used to locate the exposure station. What should be considered is a system calibration

whereby the whole process is calibrated and exercised under normal operating conditions

[Lapine, 1991; Merchant, 1992]. Because of the complex nature of combining different

measurement systems within airborne GPS, two important drawbacks are identified with the

traditional component approach to camera calibration [Lapine, 1991]:

Principles of Airborne GPS Page 72

1. The environment is different. In the laboratory, calibration can be performed under

ideal and controlled conditions, situations that are not possible in practice. This leads

to different atmospheric conditions and variations in the noise found in photo

measurements.

2. The effect of correlation between the different components of the total system are

not considered.

Traditionally, survey control on the ground had the effect of compensating for residual

systematic errors in the photogrammetric process [Lapine, 1991; Merchant, 1992]. This is

due to the projective transformation where ground control is transformed into the photo

coordinate system. The exposure station coordinates are free parameters that are allowed to

“float” during the adjustment thereby enforcing the collinearity condition. With GPS-

observed exposure coordinates, the space position of the nodal point of the camera are fixed

and ground coordinates become extrapolated variables. Because of this, calibration of the

photogrammetric system under operating conditions becomes critical if high-level accuracy

is to be maintained.

GPS Signal Measurements

There are many different methods of measuring with GPS: static, fast static, and kinematic.

Static surveying requires leaving the antennas over the points for an hour or more. It is the

most accurate method of obtaining GPS surveying data. Fast static is a newer approach that

yields high accuracies while increasing the productivity since the roving antenna need only

be left over a point for 10-15 minutes. The high accuracies are possible because the receiver

will revisit each point after an elapsed time of about an hour. Of course, neither of these

situations are possible in airborne-GPS. Kinematic measures the position of a point at the

instant of the measurement. At the next epoch, the GPS antenna has moved, and continues to

move. Because of this measurement process, baseline accuracies determined from kinematic

GPS will be 1 cm t 2 ppm of the baseline distance from the base station to the receiver

[Curry and Schuckman, 1993].

Flight Planning for Airborne GPS

When planning for an airborne GPS project, special consideration must be taken into account

for the addition of the GPS receivers that will be used to record the location of the camera.

The first issue is the form of initialization of the receiver to fix the integer ambiguities. Next,

when planning the flight lines, the potential loss of lock on the satellites has to be accounted

for. Depending on the location of the airborne receiver, wide banking turns by the pilot may

result is a loss of the GPS signal. Banking angles of 25° or less are recommended which

results in longer flight lines [Abdullah et al, 2000].

The location of the base receiver must also be considered during the planning. Will it be at

the airport or near the job site? The longer the distance between the base receiver and the

rover on the plane the more uncertain will be the positioning results. It is assumed that the

Principles of Airborne GPS Page 73

relative positioning of the rover will be based upon similar atmospheric conditions. The

longer the distance, the less this assumption is valid. Deploying at the site requires

additional manpower to deploy the receiver and assurances that the person who is occupying

the base is collecting data when the rover is collecting the same data.

When planning, try to find those times when the satellite coverage consists of 6 or more

satellites with minimum change in coverage [Abdullah et al, 2000]. Also plan for a PDOP

that is less than 3 to ensure optimal geometry. Additionally, one might have to arrive at a

compromise between favorable sun angle and favorable satellite availability.

Make sure that the GPS receiver has enough memory to store the satellite data. This is

particularly true when a static initialization is performed and satellite data is collected from

the airport. There may also be some consideration on the amount of sidelap and overlap

when the camera is locked down during the flight. This will be important when a combined

GPS-INS system is used. Finally, a flight management system should be used to precalculate

the exposure station locations during the flight

The limitations attributed to the loss of lock on the satellite places additional demands on

proper planning. These problems can be alleviated to some degree if additional drift

parameters are used in the photogrammetric block adjustment.

Antenna Placement

To achieve acceptable results using airborne GPS, it is essential that the offset between the

GPS antenna and the perspective center of the camera be accurately known in the image

coordinate system (figure 1). The measurement of this offset distance is performed by

leveling the aircraft using jack above the wheels. Then, either conventional surveying or

close range photogrammetry can be used to determine the actual offset.

Principles of Airborne GPS Page 74

Figure 18. GPS Offset

For simplicity, the camera can be locked in place during the flight. This helps maintain the

geometric relationship of the offset vector. But, the effect is that tilt and crab in the aircraft

could result in a loss of coverage on the ground unless more sidelap were accounted for in

the planning. If the camera is to be leveled during the flight then the amount of movement

should be measured in order to achieve higher accuracy.

The location of the antenna on the aircraft should be carefully considered. Although any

point on the top side of the plane could be thought of as a candidate site, two locations can be

studied further because of their advantages over other sites. These are on the fuselage directly

above the camera and the tip of the vertical stabilizer.

The location on the fuselage over the camera has the advantage of aligning the phase center

along the optical axis of the camera thereby making the measurement of the offset as well as

the mathematical modeling easier [Curry and Schuckman, 1993]. Moreover, the crab angle

is hardly affected and the tilt corrections are negligible for large image scale [Abdullah et al,

2000]. The disadvantages are as follows. First, the fuselage location increases the probability

of multipath. Second, this location, coupled with the wing placement, may lead to a loss of

signal because of shadowing. Antenna shadowing is the blockage of the GPS signal, which

could occur during sharp banking turns. Finally, mounting on the fuselage may require

special modification of the aircraft by certified airplane mechanics.

Placing the antenna on the vertical stabilizer will require more work in determining the offset

vector between the antenna and the camera [Curry and Schuckman, 1993]. But once

determined, it should not have to be remeasured unless some changes would suggest a

remeasurement be undertaken. The advantages are that both multipath and shadowing are

less likely to occur. Moreover, the actual installation might be far simpler since many aircraft

already have a strobe light on the stabilizer, which could easily be adapted to accommodate

an antenna.

Principles of Airborne GPS Page 75

Determining the Exposure Station Coordinates

The GPS receiver is preset to sample data at a certain rate, i.e., 1 second intervals. This

sample time may not coincide with the actual exposure time. Therefore, it is necessary to

interpolate the position of the exposure station between GPS observations. An error in timing

will result in a change in the coordinates of the exposure station. For example, if a plane is

traveling at 200 km/hr (- 56 m/sec), then a one millisecond difference will result in 6 cm of

coordinate error.

With the rotary shutters used in aerial cameras the time between when the shutter release

signal is sent (see figure 2) to the mid-point of the exposure station varies [Jacobsen, 1991].

Therefore, a sensor must be installed to record the time of exposure. Then, through a

calibration process, the offset from the recorded time to the effective instant of exposure can

be determined and taken into account. Without calibration the photographer should not

change the exposure during the flight thereby maintaining a constant offset distance, which

can be accounted for in the processing. This, though, can only be done approximately.

Many of the cameras now in use for airborne-GPS will send a signal to the receiver when the

exposure was taken. The receiver then records the GPS time for this event marker within the

data. Merchant [1993] points out that some cameras can determine the mid-exposure pulse

time to 0.1 ms whereas some of the other cameras use a TTL pulse that can be calibrated to

accurately measure the mid-point of the exposure. Accuracies better than 1 msec have been

reported for time intervals by using a light sensitive device within the aerial camera [van der

Vegt, 1989]. This device will create an electrical pulse when the shutter is at the maximum

aperture.

Prior to determining the exposure station coordinates, the location of the phase center of the

antenna must be interpolated. Since the receiver clock contains a small drift of about 1

µs/sec., Lapine [nd] suggests that the position of the antenna be time shifted so that the

positions are equally spaced. Several different interpolation models can be employed to

determine the trajectory of the aircraft. Some of them include the linear model, polynomial

approach, spline function, and quadratic time-dependent polynomial. Some field results

found very little difference between these methods [Forlani and Pinto, 1994]. This may have

been because they used the GPS receiver PPS (pulse per second) signal to trip the shutter on

the aerial camera. This meant that the effective instant of exposure was very close to the GPS

time signal.

Principles of Airborne GPS Page 76

Figure 19. Shutter release diagram for rotary shutters [from Jacobsen, 1991].

One of the most simplest interpolation models is the linear approach. The assumption is

made that the change in trajectory from one epoch to another is linear. Thus, one can write a

simple ratio as:

where: i = time interval between GPS epochs

∆(X,Y,Z) = changes in GPS coordinated between two epochs

di = time difference when the exposure was made within an epoch, and

d(X,Y,Z) = changes in GPS coordinates to the exposure time.

The advantage of this model is its simplicity. On the other hand, it assumes that the change in

position is linear which may not be true. Sudden changes in direction are very common at

lower altitudes where large scale mapping missions are flown. For example, figure 3 shows a

sudden change in the Z-direction during the flight. Assuming a linear change, the location of

the receiver could be considerably different than the actual location during exposure. One

alternative would be to decrease the sample interval to, say, 0.5 seconds. This would reduce

the effect of the error but increase the number of observations taken and the time to process

those data.

( ) ( ) Z , Y , X d

di

Z , Y , X

i

·

∆

Principles of Airborne GPS Page 77

Figure 20. Effects of linear interpolation model when the aircraft experiences sudden

changes in its trajectory between PG epochs

Because of the non-linear nature of the aircraft motion, Jacobsen [1993] suggests that a least-

squares polynomial fitting algorithm be used to determine the space position of the

perspective center. By varying the degree of the polynomial and the number of neighbors to

be included in the interpolation process, a more realistic trajectory should be obtained. The

degree and number of points will depend on the time interval between GPS epochs. The

added advantage of this method is that if a cycle slip is experienced, it can be used to

estimate better the exposure station coordinates than a linear model.

A second order polynomial is used by Lapine [nd] to determine the position offset, velocity

and acceleration of the aircraft in all three axes. This is done by fitting a curve to a five epoch

period around the exposure time. The effect of this polynomial is to smooth the trajectory of

the aircraft over the five epochs. The following model is used:

Similar equation can be generated for Y and Z. Thus the three models look, in a general

form, like:

( ) ( )

( ) ( )

( ) ( )

( ) ( )

( ) ( )

2

3 5 X 3 5 X X 5

2

3 4 X 3 4 X X 4

2

3 3 X 3 3 X X 3

2

3 2 X 3 2 X X 2

2

3 1 X 3 1 X X 1

t t c t t b a X

t t c t t b a X

t t c t t b a X

t t c t t b a X

t t c t t b a X

− + − + ·

− + − + ·

− + − + ·

− + − + ·

− + − + ·

Principles of Airborne GPS Page 78

where: t = t

i

- t

3

and i = 1, 2, ..., 5

a = distance from the origin

b = velocity, and

c = twice the acceleration

From this the observation equations can be written as

The design or coefficient matrix is found by differentiating the model with respect to the

unknown parameters. All three models have the same coefficient matrix:

The observation vectors (f) are:

The normal equations can then be expressed as

where ∆ represent the parameters ( ∆ = [a b c]

T

). The solution becomes

2

Z Z Z

2

Y Y Y

2

X X X

t c t b a Z

t c t b a Y

t c t b a X

+ + ·

+ + ·

+ + ·

0 Z t c t b a v

0 Y t c t b a v

0 X t c t b a v

2

Z Z Z Z

2

Y Y Y Y

2

X X X X

· − + + ·

· − + + ·

· − + + ·

( ) ( )

( ) ( )

( ) ( )

( ) ( )

( ) ( )

1

1

1

1

1

1

]

1

¸

− − −

− − −

− − −

− − −

− − −

·

1

1

1

1

1

1

]

1

¸

− − − − −

− − − − −

− − − − −

− − − − −

− − − − −

·

2

2

2

2

2

2

3 5 3 5

2

3 4 3 4

2

3 3 3 3

2

3 2 3 2

2

3 1 3 1

t t 1

t t 1

t t 1

t t 1

t t 1

t t t t 1

t t t t 1

t t t t 1

t t t t 1

t t t t 1

B

1

1

1

1

1

1

]

1

¸

−

−

−

−

−

·

1

1

1

1

1

1

]

1

¸

−

−

−

−

−

·

1

1

1

1

1

1

]

1

¸

−

−

−

−

−

·

5

4

3

2

1

Z

5

4

3

2

1

Y

5

4

3

2

1

X

Z

Z

Z

Z

Z

f

Y

Y

Y

Y

Y

f

X

X

X

X

X

f

Z Z Z

Y Y Y

X X X

f B v

f B v

f B v

+ ∆ ·

+ ∆ ·

+ ∆ ·

Principles of Airborne GPS Page 79

where W is the weight matrix. Assuming a weight of 1, the weight matrix then becomes the

identity matrix and

For the X observed values, as an example,

The weighting scheme is important in the adjustment because an inappropriate choice of

weights may biased or unduly influence the results. Lapine looked at assigning equal weights

but this choice was rejected because the trajectory of the aircraft may be non-uniform. The

final weighting scheme used a binomial expansion technique whereby times further from the

central time epoch (t

3

) were weighted less than those closest to the middle. Using a variance

of 1.0 cm

2

for the central time epoch, the variance scheme looks like

( )

( )

( )

Z

T

1

T

Z

Y

T

1

T

Y

X

T

1

T

x

Wf B WB B

Wf B WB B

Wf B WB B

−

−

−

− · ∆

− · ∆

− · ∆

1

1

1

]

1

¸

· ·

4 3 2

3 2

2

T T

t 5 t 5 t 5

t 5 t 5 t 5

t 5 t 5 5

IB B WB B

1

1

1

]

1

¸

+ + + +

+ + + +

+ + + +

· ·

2

5

2

4

2

3

2

2

2

1

5 3 2 1

5 4 3 2 1

X

T

X

T

t X t X t X t X t X

t X t X t X t X t X

X X X X X

If B Wf B

1

1

1

1

1

1

]

1

¸

·

1

1

1

1

1

1

]

1

¸

⋅

⋅

⋅

⋅

⋅

2

2

2

2

2

2 2

2 2

2 2

2 2

2 2

cm 4

cm 4

cm 4

cm 4

cm 4

m 01 . 0 2

m 01 . 0 2

m 01 . 0 2

m 01 . 0 2

m 01 . 0 2

Principles of Airborne GPS Page 80

where the off-diagonal values are all zero (0). A basic assumption made in Lapine's study

was that the observations are independent therefore there is no covariance. Once the

coefficients are solved for, the position of the antenna phase center can be computed using

the following expressions

Determination of Integer Ambiguity

The important error concern in airborne-GPS is the determination of the integer ambiguity.

Unlike ground-based measurements, the whole photogrammetric mission could be lost if a

cycle slip occurs and the receiver cannot resolve the ambiguity problem. There are two

principal methods of solving for this integer ambiguity: static initialization over a know

reference point or using a dual-frequency receiver with on-the-fly ambiguity resolution

techniques [Habib and Novak, 1994].

Static initialization can be performed in two basic modes [Abdullah et al, 2000]. The first

method of resolving the integer ambiguities is to place the aircraft over a point on a baseline

with know coordinates. Only a few observations are required because the vector from the

reference receiver to the aircraft is known. The accuracy of the baseline must be better than

6-7 cm. The second approach is a static determination of the vector over a know baseline or

from the reference station to the antenna on the aircraft. The integer ambiguities are solved

for in a conventional static solution. This method may require a longer time period to

complete, varying from 5 minutes to one hour, due to the length of the vector, type of GPS

receiver, post-processing software, satellite geometry, and ionospheric stability. When static

initialization is performed it does require that the receiver on-board the aircraft maintain a

constant lock on at least 4 and preferable 5 GPS satellites.

Abdullah et al [2000] identify several weaknesses to static initialization:

• The methods add time to the project and are cumbersome to perform.

• GPS data collection begins at the airport during this initialization.

• Since the data are collected for so long, large amounts of data are collected and need

to be processed – about 7 Mbytes per hour.

• The receiver is susceptable to cycle slips or loss of lock.

• It is possible that the initial solution of the integers was incorrect thereby invalidating

the entire photo mission.

The use of on-the-fly (OTF) ambiguity integer resolution makes the process much easier.

The new GPS receiver and post-processing software are much more robust and easy to use

while the receiver is in flight. OTF requires P-code receivers where carrier phase data are

( ) ( )

( ) ( )

( ) ( )

2

3 exp Z 3 exp Z Z exp

2

3 exp Y 3 exp Y Y exp

2

3 exp X 3 exp X X exp

t t c t t b a Z

t t c t t b a Y

t t c t t b a X

− + − + ·

− + − + ·

− + − + ·

Principles of Airborne GPS Page 81

collected using both the L1 and L2 frequencies. The solution requires about 10-15 minutes

of measurements before entering the project area.

Component integration can also create problems. For example, a test conducted by the

National Land Survey, Sweden, experienced cycle slips when using the aircraft

communication transmitter [Jonsson and Jivall, 1990]. Receiving information was not a

problem, just transmissions. This test involved pre-flight initialization with the goal of

re-observation over the reference station at the end of the mission. This was not possible.

GPS-Aided Navigation

One of the exciting applications of airborne-GPS is its utilization of in flight navigation. The

ability to precisely locate the exposure station and activate the shutter at a predetermined

interval along the flight line is beneficial for centering the photography over a geographic

region, such as in quad-centered photography for orthophoto production.

An early test by the Swedish National Land Survey [Jonsson and Jivall, 1990] showed early

progress in this endeavor. The system configuration is shown in figure 4. Two personal

computers (PCs) where used in the early test - one for navigation and the other for

determination of the exposure time.

Figure 21. Configuration of navigation-mode GPS equipment [from Jonsson and Jivall,

1990].

The test consisted of orientation of the receiver on the plane prior to the mission over a

ground reference mark. This initialization is performed to solve for the integer ambiguity.

Principles of Airborne GPS Page 82

This method of fixing the ambiguity requires no loss of lock during the flight thus

necessitating long banking turns, which adds to the amount of data collected.

A flight plan was computed with the location of each exposure station identified. The PC

used for the navigation activated a pulse that was sent to the aerial camera to trip the shutter.

The test showed that this approach yielded about a 0.5 second delay. Thus, the exposure

station locations were 20-40 meters too late. An accuracy of about 6 meters was found at the

preselected position along the strip. When compared to the photogrammetrically derived

exposure station coordinates, the relative carrier phase measurements were within about 0.15

meters in agreement.

The Texas Department of Transportation (TDOT) had a different problem [Bains, 1992].

Using airborne-GPS gave TDOT the ability to reduce the amount of ground control for their

design mapping. With GPS one paneled control point was placed at the beginning of the

project and a second at the end. If the site was greater than 10 km in length then a third

paneled control point was placed near the center. For their low altitude flights (photo scale of

1 cm = 30 m), the desire was to control the side-lap to 50 m. Using real-time differential

GPS, accuracies of better than 10 m, at that time, were realistic. Using this 10 m error value,

this amount of error would only cause a variation in side-lap of 7%. TDOT uses 60% side-lap

for their large scale mapping.

For the high altitude mapping (photo scale of 1 cm = 300 m) and 30% side-lap, it was

determined that the "50 m was not really necessary. This 50 m value would cause a variation

of only about 2%.

PROCESSING AIRBORNE GPS OBSERVATIONS

The mathematical model utilized in analytical photogrammetry is the collinearity model,

which simply states that the line from object space through the lens cone to the negative

plane is a straight line. The functional representation of this model is shown as:

where: x

ij

, y

ij

are the observed photo coordinates, i, for photo j

x

o

, y

o

are the coordinates of the principal point

c is the camera constant

∆X

i

, ∆Y

i

, ∆Z

i

are the transformed ground coordinates

This mathematical model is often presented in the following form:

( ) ( )

( ) ( ) 0

Z

Y

c y y y F

0

Z

X

c x x x F

i

i

o ij

i

i

o ij

·

∆

∆

− − ·

·

∆

∆

− − ·

Principles of Airborne GPS Page 83

( ) ( ) ( )

( ) ( ) ( )

( ) ( ) ( )

( ) ( ) ( )

L i 33 L i 32 L i 31

L i 23 L i 22 L i 21

o y ij

L i 33 L i 32 L i 31

L i 13 L i 12 L i 11

o x ij

Z Z m Y Y m X X m

Z Z m Y Y m X X m

c y v y

Z Z m Y Y m X X m

Z Z m Y Y m X X m

c x v x

ij

ij

− + − + −

− + − + −

− · +

− + − + −

− + − + −

− · +

where: v

x

, v

y

are the residuals in x and y respectively for point i on photo j

X, Y, Z are the ground coordinates of point i

X

L

,Y

L

,Z

L

are the space rectangular coordinates of the exposure station for

photo j

m

11

... m

33

is the 3x3 rotation matrix that transforms the ground coordinates to

a photo parallel system.

The model implies that the difference between the observed photo coordinates, corrected for

the location of the principal point, should equal the predicted values of the photo coordinates

based upon the current estimates of the parameters. These parameters include the location of

the exposure station and the orientation of the photo at the instant of exposure. The former

values could be observed quantities from onboard GPS. These central projective equations

form the basis for the aerotriangulation.

It is common to treat observations as stochastic variables. This is done by expanding the

mathematical model. For example, Merchant [1973] gives the additional mathematical

model when observations are made on the exterior orientation elements as:

The mathematical model for observation on survey control can be similarly.

( )

( )

( )

( )

( )

( ) 0 Z Z Z F

0 Y Y Y F

0 X X X F

0 F

0 F

0 F

a

L

o

L L

a

L

o

L L

a

L

o

L L

a o

a o

a o

· − ·

· − ·

· − ·

· ω − ω · ω

· ϕ − ϕ · ϕ

· κ − κ · κ

Principles of Airborne GPS Page 84

Figure 22. Position ambiguity for a single photo resection [from Lucas, 1996, p.125].

Using GPS to determine the exposure station coordinates without ground control is not

applicable to all photogrammetric problems. Ground control is needed for a single photo

resection and orientation [Lucas, 1996]. If the exposure station coordinates are precisely

known then the only thing known is that the camera lies in some sphere with a radius equal to

the offset distance from the GPS antenna to the cameras nodal point, figure 5. The antenna is

located at the center of the circle. All positions on the sphere are theoretically possible but

from a practical viewpoint, one knows that the camera, being located below the aircraft and

pointing to the ground, is below the antenna. The antenna, naturally, is located on top of the

aircraft to receive the satellite signals.

Adding a second photo reduces some of the uncertainty. This is due to the additional

constraint of the collinearity condition that is placed on the rays from the control to the image

position. The collinearity theory will provide the relative orientation between the two photos

[Lucas, 1996]. Without ground control, the camera is then free to rotate about a line that

passes through the two antenna locations (see figure 6). Without ground control, or some

other mechanism to constrain the roll angle along the single strip, this situation could be

found throughout a single strip of photography.

Principles of Airborne GPS Page 85

Figure 23. Ambiguity of the camera position for a pair of aerial photos [from Lucas, 1996,

p.125].

While independent model triangulation continues to be employed in practice, the usual

iterative adjustment cannot be used with the recommended 4 corner control points [Jacobsen,

1993]. Moreover, the 7-parameter solution to independent model triangulation results in a

loss of accuracy in the solution.

Determining the coordinates of the exposure stations can be easily visualized in the following

model [Merchant, 1992]. Assume that the photo coordinate system (x,y,z) are aligned with

the coordinate system (U, V, W). Further, assume that the survey control (X, Y, Z) is

reported in the WGS 84 system. Then, it remains to transform the offset between the

receivers phase center and the nodal point of the aerial camera (DU, DV, DW) into the

corresponding survey coordinate system. This is shown as

where: DU, DV, DW are the offset distances

M

M

is the camera mount orientation

M

E

is the exterior orientation elements of the camera

1

1

1

]

1

¸

+

1

1

1

]

1

¸

·

1

1

1

]

1

¸

DW

DV

DU

M M

Z

Y

X

Z

Y

X

M E

A

a

a

L

L

L

Principles of Airborne GPS Page 86

The camera mount orientation is necessary to ensure that the camera is heading correctly

down the flight path.

In the normal acquisition of aerial photos, the camera is leveled prior to each exposure. This

is done so that the photography can be nearly vertical at the instant of exposure even though

the aircraft is experiencing pitch, roll and swing (crab or drift). When the coordinate offsets

between the antenna and camera were surveyed, the orientation angles on the mount are

leveled. A problem occurs if there is an offset between the location of the nodal point and the

gimbals’ rotational center on the mount. When the camera is rotated, the relationship

between the two points should be considered.

The simplest way to ensure that the relationship between the receiver and the camera are

consistent would be to forgo any rotation of the camera during the flight. With this rigid

relationship fixed, the antenna coordinates can be rotated into a parallel system with respect

to the ground by using the tilts experienced during the flight.

Alternatively, Lapine [nd] points out that the transformation of the offsets to the local

coordinate system can easily be performed using the standard gimbal form. In this situation,

pitch and swing angles between the aircraft and the camera are measured. Then, one can

simply algebraically sum the camera mount angles with the appropriate measured pitch and

swing angles. Here, κ and swing are added to form one rotational element and φ and pitch are

similarly combined. Since roll was not measured during the test, ω is treated independently.

Using the Wild RC-10 camera mount, Lapine found that the optical axis of the camera

coincided with the vertical axis of the mount. That meant that the combination of κ and

swing would not produce any eccentricity. Testing revealed that the gimbal center was

located approximately 27 mm from the nodal points. Thus, an eccentricity error could be

introduced. During the flight, a 1.5

o

maximum pitch angle between the aircraft and the

camera mount was found. Thus the error in neglecting this effect in the flight direction would

be

maximum pitch error = 0.027m * sin 1.5

o

= 0.0007 meters

Experiences from tests in Europe [Jacobsen, 1991] indicate that the GPS positions of the

projection centers differ from the coordinates obtained from a bundle adjustment. Moreover,

many of the data sets have shown a time dependent drift pattern in the GPS values. When this

systematic tendency is accounted for in the adjustment, excellent results are possible. For

relative positioning, 4 cm can be reached whereas 60 cm are possible using absolute

positioning.

A second approach to perform airborne GPS aerial triangulation is sometimes referred to as

the Stuttgart method. In this technique, certain physical conditions are assumed or accepted

[Ackermann, 1993]. First, it is accepted that loss of lock will occur. This means that low

banking angles onboard the aircraft will not be used as in those methods where a loss of lock

means a thwarted mission. Because loss of lock, it is also unnecessary to perform a stationary

observation prior to take-off to resolve the integer ambiguities. These ambiguities are solved

Principles of Airborne GPS Page 87

on-the-fly and can be determined for each strip if loss of lock occurs during the banking (or

at other times during the photo mission). Seldom will loss of lock happen along the strip

though. Second, it will be assumed that single frequency receivers will be used on the

aircraft. Finally, the ground or base receiver will probably be located at a great distance from

the photo mission.

The solution of the integer ambiguities is performed using C/A-code pseudorange

positioning. These positions can be affected by selective availability (SA). Because of this,

there will be bias in the solution. These drift errors, which can include other effects such as

datum effects, are systematic in nature and consist of a linear and time dependent component.

The block adjustment is used to solve for these biases.

Early test results added confusion to the drift error biases. In a test by the Survey Department

of Rijkswaterstaat, Netherlands, a systematic effect was not noticeable on all photo strips

[van der Vegt, 1989]. Evaluation of the results indicated that this was probably due to the

GPS processing of the cycle slips. The accuracy of the position in the differential mode is

predicated on the accuracy of solving these integer ambiguities at both the base receiver and

the rover. This test used a technique where the differences between the observed

pseudoranges and the phase measurements were averaged. The accuracy of this approach

will be dependent upon the accuracy of the measurements, the satellite geometry and how

many uncorrelated observations are used in the averaging approach.

If no loss of lock occurs during the photo mission, the aircraft trajectories will be continuous

and, therefore, only one set of drift parameters need to be carried in the bundle adjustment.

Unfortunately, banking turns could have an adverse effect by blocking the signal to some of

the satellites causing cycle slips. Høghlen [1993] states that an alternative to the strip-wise

application of the biases, the block may be able to be split into parts where the aircraft

trajectories are continuous thereby decreasing the number of unknown parameters within the

adjustment.

The advantage of modeling these drift parameters is that the ground receiver does not have to

be situated near the site. It could be 500 km or farther away [Ackermann, 1993]. This is

important because it can decrease the costs associated with photo missions. Logistical

concerns include not only the deployment of the aircraft but also the ground personnel on the

site to operate the base. When projects are located at great distances from the airplanes’ home

base, uncertainty in weather could mean field crews already on the site but the photo mission

canceled. It also is an asset to flight planning in that on-site GPS ground receivers will

require fixing the flight lines at least one day before the mission. During the flying season

this could be a problem [Jacobsen, 1994]. In Germany, the problem is solved because of the

existence of permanent reference stations throughout the country that could be occupied by

the ground receiver.

Using the mathematical model for additional stochastic observations within the adjustment as

outlined earlier [Merchant, 1973], a new set of observations can be written for the

perspective center coordinates as [Blankenberg, 1992].

Principles of Airborne GPS Page 88

where: (X

L

, Y

L

, Z

L

)

GPS

= the perspective center coordinates observed with GPS

v

X

, v

Y

, v

Z

= residuals on the observed principal center coordinates

X

L

, Y

L

, Z

L

= the adjusted perspective center coordinates used within the

bundle adjustment

As it was discussed earlier, the antenna does not occupy the same location as the camera

nodal point. The geometry is shown in figure 7. Relating the antenna offset to the ground is

dependent upon the rotation of the camera with respect to the aircraft and the orientation of

the aircraft to the ground. The bundle adjustment can be used to correct the camera offset if

the camera remains fixed to the aircraft during the photo mission. If this condition is met

then the orientation of the camera offset will only be dependent upon the orientation elements

(κ, ϕ, ω).

The new additional observation equations to the collinear model are given as [Ackermann,

1993; Høghlen, 1993]:

( )

j

Z

Y

X

j

Z

Y

X

A

PC

A

PC

A

PC

i

i

L

L

L

i

Z

Y

X

i

A

A

A

b

b

b

dt

a

a

a

z

y

x

, , R

Z

Y

X

v

v

v

Z

Y

X

GPS

GPS

GPS

1

1

1

]

1

¸

+

1

1

1

]

1

¸

+

1

1

1

]

1

¸

κ ω ϕ +

1

1

1

]

1

¸

·

1

1

1

]

1

¸

+

1

1

1

]

1

¸

where: (X

A

, Y

A

, Z

A

)

GPS

= ground coordinates of the GPS antenna for photo i

i

L

L

L

i

Z

Y

X

i

L

L

L

Z

Y

X

v

v

v

Z

Y

X

GPS

GPS

GPS

1

1

1

]

1

¸

·

1

1

1

]

1

¸

+

1

1

1

]

1

¸

**Figure 24. Geometry of the GPS antenna with respect to the aerial camera (zerox
**

copy, source unknown)

Principles of Airborne GPS Page 89

v

X

, v

Y

, v

Z

= residuals for the GPS antenna coordinates (X

A

, Y

A

, Z

A

)

GPS

for

photo i

X

L

, Y

L

, Z

L

= exposure station coordinates of photo i

x

A

PC

, y

A

PC

, z

A

PC

= eccentricity components to the GPS antenna

a

X

, a

Y

, a

Z

= GPS drift parameters for strip j representing the constant term

dt = difference between the exposure time for photo I and the time

at the start of strip j

b

X

, b

Y

, b

Z

= GPS drift parameters for strip j representing the linear time-

dependent terms

R(ϕ, ω, κ) = orthogonal rotation matrix.

It is recognized in analytical photogrammetry that adding parameters to the adjustment

weakens the solution. To strengthen the problem, one can introduce more ground control, but

this defeats one of the advantages of airborne GPS. Introducing the stepwise drift parameters

and using four ground control points located at the corners of the project, there are three

approaches to reducing the instability of the block [Ackermann, 1993]. These are shown in

figure 8 and are:

i) using both 60% end- and 60% side-lap

ii) using 60% end-lap and 20% side-lap and adding an additional vertical control point

at both ends of each strip, and

iii) using the conventional amount of overlap as indicated in (ii) and flying at least two

cross-strips of photography.

The block schemes shown in figure 8 are idealized depictions. The figure 8(i) scheme can

be used for airborne GPS when no drift parameters are employed in the block adjustment. It

is important that the receiver maintains lock during the flight which necessitates flat turns

between flights. Maintaining lock ensures that the phase history is recorded from take-off to

landing. Abdullah et al [2000] points out that this is the most accurate type of configuration

Figure 25. Idealized block schemes.

Principles of Airborne GPS Page 90

in a production environment. The same control scheme can also be used when block drift

parameters are used in the bundle adjustment.

If strip drift parameters are used then a control configuration as shown in figure 8(ii) should

be used. Here, drift parameters are developed for each flight line strip which requires

additional height control at the ends of each strip. The control configuration in figure 8(iii)

incorporates two cross strips of photography. This model increases the geometry and

provides a check against any gross errors in the ground control. But it does add to the cost of

the project because more photography is required to be taken and measured. For that reason,

it is not frequently utilized in a production environment.

More often the area is not rectangular but rather irregular. In this situation it is advisable to

add additional cross-strips or provide more ground control. Figure 9 is an example.

Figure 26. GPS block control configuration.

Theoretically, it is possible to perform the block adjustment without any ground control. This

can easily be visualized if one considers supplanting the ground control by control located at

Principles of Airborne GPS Page 91

the exposure stations. Nonetheless, it is prudent to include control on every job, if nothing

more than providing a check to the aerotriangulation. Using the four control point scheme as

just presented has the advantage of using the GPS position for interpolation only within the

strip.

As is known, conventional aerotriangulation requires ground control. As an example, for

planimetric mapping, control is required at an interval of approximately every seventh photo

on the edge of the block. Topographic mapping requires vertical control within the block at

about the same spacing. Using this background and simulated data, Lucas [1996] was able to

develop error ellipses from a bundle adjustment showing the accumulation of error along the

edges of the block (figure 10). The is commonly referred to as the edge effect and stems from

a weakened geometric configuration that exists because of a loss in redundancy. Under

normal circumstances, a point in the middle of a block should be visible on at least nine

photos. But on the edge, the photos are taken only from one side of view.

Figure 27. Error ellipses with ground points positioned by conventional aerotriangulation

adjustment of a photo block [Lucasm 1994].

Using the same simulated data, Lucas [1996] also showed the error ellipses one would expect

to find using 60% end- and side-lap photography along with airborne GPS and no control.

The results show that for planimetry, the results are similar. Larger error ellipses were found

at the control points but at every other point they were either smaller or nearly equivalent.

Elevation errors were much different within the two simulations. Using just aerotriangulation

without control, error ellipses grew larger towards the center of the block. Using kinematic

GPS, on the other hand, kept the error from getting larger. Compared with the original

simulation with vertical control within the block, each point had improvements, except the

Principles of Airborne GPS Page 92

control points that were fixed in the conventional adjustment. Lucas [1996] states that the

reason for the improvement lies in the fact that each exposure station is now a control point

and the distance between the control is less than one would find conventionally. It would not

be practical to have the same density of control as one would have in the air. These results

are based on simulations therefore reflect what is possible and not necessarily what one

would find in real data.

Accuracy considerations are important in determining the viability of using GPS

observations within a combined bundle adjustment. Results of projects conducted with

combined GPS bundle adjustment show that this approach is not only feasible but also

desirable. In conventional aerotriangulation, ground control points helped suppress the

effects of block deformation. GPS observed perspective center coordinates stabilize the

adjustment thus negating the necessity for extensive control. In fact, their main function now

becomes one of assisting in the datum transformation problem [Ackermann, 1993].

If the position of exposure station can be ascertained to an accuracy of 10 cm or better, then

the accuracy of the adjustment becomes primarily dependent upon the precision of the

measurement of the photo coordinates [Ackermann, 1993]. Designating the standard error of

the photo observations as σ

0

, the projected values expressed in ground units are O σ . Then as

long as O

GPS

σ ≤ σ , Ackermann indicates that the following rule could apply. The expected

horizontal accuracy (X, Y) will be approximately O 5 . 1 σ and the vertical accuracy (Z) would

be around O 0 . 2 σ . This assumes using the six drift parameters for each strip, four control

points and cross-strips.

Strip Airborne GPS

For route surveys, such as transportation systems, there is a problem with airborne GPS when

the GPS measurements are exclusively used to control the flight. Theoretically, a solution is

possible if the exposure stations are distributed along a block and are non-collinear. In the

case of strip photography, the exposure station coordinates will nearly lie on a line making it

an ill-conditioned or singular system. Therefore, some kind of control needs to be provided

on the ground to eliminate the weak solution that would otherwise exist. As an example,

Lucas [1996] shows the error ellipses one would expect with only ground control and then

with kinematic GPS. These are shown in figure 11 for horizontal values and figure 12 for

vertical control.

Merchant [1994] states that to solve this adjustment problem, existing ground control could

be utilized in the adjustment. Most transportation projects have monumented points

throughout the project and intervisible control should be reasonably expected.

A test was performed to evaluate the idea of using control for strip photography [Merchant,

1994]. A strip of three photos was taken from a Wild RC-20 aerial camera in a Cessna

Citation over the Transportation Research Center test site in Ohio. The aircraft was

pressurized and the flying height above the ground was approximately 1800 m. A Trimble

Principles of Airborne GPS Page 93

SSE receiver was used with a distance to the ground-based receiver being approximately 35

km. The photography was acquired with 60% end-lap. Corrections applied to the measured

photo coordinates included lens distortion compensation (both Seidel's aberration radial

distortion and decentering distortion using the Brown/Conrady model), atmospheric

refraction (also accounting for the refraction due to the pressurized cabin), and film

deformation (USC&GS 8-parameter model).

Figure 28

Figure 29.

The middle photo had 30 targeted image points. For this test, only one or two were used

while the remaining control values were withheld. The results are shown in the following

table. The full field method utilized all of the checkpoints within the photography. The

corridor method only used a narrow band of points along the route, which is typical of the

area of interest for many transportation departments [Merchant, 1994]. The results are

expressed in terms of the root mean square error (rmse) defined as the measure of variability

of the observed and "true" (or withheld) values for the checkpoints. The method is shown as:

Principles of Airborne GPS Page 94

where n is the number of test points.

rmse (meters)

Number of Test

Points

X

Y

Z

Using 2 targeted ground control

points

Full Field

28

0.079

0.057

0.087

Corridor

10

0.031

0.026

0.073

Using 1 targeted ground control

point

Full Field

29

0.084

0.050

0.086

Corridor

11

0.034

0.033

0.082

The results indicate that accuracies in elevation are better than 1:20,000 of the flying height,

which are comparable to results found from conventional block adjustments. It should also be

noted that pass points were targeted therefore errors that may occur due to the marking of

conjugate imagery is not present. Moreover, the adjustment also included calibration of the

system. Nonetheless, good results can be expected by using ground control to alleviate the ill

conditioning of the normal equations. A minimum of one point is needed with additional

points being used as a check.

Another approach, other than including additional control, would be to fly a cross strip

perpendicular to the strip of photography. This will have the effect of anchoring the strip

thereby preventing it from accumulating large amounts of error. If the strip was only a single

strip, then it is recommended that a cross strip be obtained at both ends of the strip [Lucas,

1996].

Combined INS and GPS Surveying

The combination of a combined inertial navigation system (INS) with GPS gives the

surveyor the ability to exploit the advantages of both systems. INS has a very high short-term

accuracy, which can be used to eliminate multipath effects and aid in the solution of the

ambiguity problem. The long-term accuracy of the GPS can be used to correct for the time-

( )

n

observed true

rmse

2

∑ −

·

Principles of Airborne GPS Page 95

dependent drift found within the inertial systems. Used together will give the surveyor not

only good relative accuracies but also good absolute accuracies as well. Moreover, within the

bundle adjustment, only the shift parameters need to be included within the adjustment model

[Jacobsen, 1993], thereby increasing, at least theoretically, the accuracy of the

aerotriangulation.

Texas DOT Accuracy Assessment Project

The Texas Department of Transportation undertook a project to assess the accuracy level that

is achievable using GPS and photogrammetry. Bains [1995] describes the project in length.

Three considerations were addressed in this project: system description, airborne GPS

kinematic processing and statistical analysis.

The system description can be summarized as follows:

The site selected was an abandoned U.S. Air Force base located near Bryan, Texas. This

site was selected because the targets could be permanently set and there would be

minimal obstructions due to traffic. Being an abandoned facility, expansion of the test

facility was possible. In addition, the facility could handle the King Air airplane.

Target design is important for the aerial triangulation. A 60 x 60 cm cross target with a

pin in the center was selected (based on a photo scale of 1:3000). The location of the

center of the target allowed for the precise centering of the ground receiver over the

point. In areas where there was no hard surface to paint the target, a prefabricated painter

wafer board target was employed.

All of the targets were measured using static GPS measurements. Each target was

observed at least once. Using 8 receivers, two occupied master control points while the

remaining six simultaneously observed the satellites over the photo control points. The

goal was to achieve Order B accuracy in 3-D of 1:1,000,000. In addition, differential

levels were run over all targets to test the accuracy of the GPS-derived heights.

The offset between the antenna and the camera was measured four times and the mean

values determined. Prior to the measurement, the aircraft was jacked up and leveled. The

aerial camera was then leveled and locked into place. The offset distances were then

measured.

The flight specifications were designed to optimize the accuracy of the test. They are:

Photo Scale: 1:3000

Flying Height: 500 meters

Flight Direction: North-South

Forward Overlap: 60% minimum

Side-lap: 60%

Number of Strips: 3

Principles of Airborne GPS Page 96

Exposures per Strip: 12

Focal Length: 152 mm

Format: 230 x 230 mm

Camera: Wild RC 20

Film Type: Kodak Panatomic 2412 Black/White

Sun Angle: 30

o

minimum

Cloud Cover: None

GDOP: #4

The mission began by measuring the height of the antenna when the aircraft was parked.

The ground receiver was turned on and a sample rate of 1 second was used. The rover

receiver in the aircraft was then turned on and tracked the satellites for five minutes with

the same one-second sampling rate. Then the aircraft took off and flew its mission.

The processing steps involved the kinematic solution of the GPS observations. The PNAV

software was used for on-the-fly ambiguity resolution. The software vendor recommended

that the processing be done both forward and backward for better accuracy but the test

indicated that, at least for this project, there was no increase in the accuracy when performing

that kind of processing.

The photogrammetry was processed using soft-copy photogrammetry. A 15Fm pixel size was

used. The aerial triangulation was then performed with the GAPP software using only four

ground control stations; two at the start and two at the end. The results were then statistically

processed using the SAS (Statistical Analysis System).

The results of this study showed that the accuracy achieved fell within specifications. In fact,

the GPS results were either equal to or better than the accuracy of conventional positioning

systems. The results also indicated that there was a need to have a reference point within the

site to aid in the transformation to State Plane Coordinates. As an example, Table 1 shows

the comparison between the GPS-derived control and the values from the ground truth. These

results show that airborne GPS can meet the accuracy specifications for photogrammetric

mapping.

Principles of Airborne GPS Page 97

Number of

Observations

Variable

Minimum

Maximum

Mean

Standard

Deviation

95

Easting

-0.100

0.057

-0.021

0.031

95

Northing

-0.075

0.089

-0.003

0.026

95

Elevation

-0.068

0.105

-0.008

0.027

Table 1. Comparison of airborne GPS assisted triangulation with ground truth on day 279,

1993 over a long strip [from Bains, 1995, p.40].

ECONOMICS OF AIRBORNE-GPS

While no studies have been conducted that describe the economic advantages of airborne-

GPS, some general findings are available [Ackermann, 1993]. Utilization of airborne-GPS

does increase the aerotriangulation costs by about 25% over the conventional approach. This

increase includes:

• flying additional cross-strips

• film

• GPS equipment

• GPS base observations

• processing the GPS data and computation of aircraft trajectories

• aerotriangulation

• point transfer and photo observations, and

• combined block adjustment

The real savings accrue in the control where the costs are 10% or less than those required

using conventional aerotriangulation. The overall net savings will be about 40% when

looking at the total project costs.

If higher order accuracy is required (Ackermann uses the example of cadastral

photogrammetry which needs 1-2 cm accuracy) then the savings will decrease because

additional ground control are necessary.

References Page 98

REFERENCES

Ackermann, F., 1993. "GPS for Photogrammetry", The Photogrammetric Journal of Finland,

13(20):7-15.

Bains, H.S., 1992. "Photogrammetric Surveying by GPS Navigation", Proceedings of the 6

th

International geodetic Symposium on Satellite Positioning, Vol. II, Columbus, OH, March

17-20, pp 731-738.

Bains, H.S., 1995. "Airborne GPS Performance on a Texas Project", ACSM/ASPRS Annual

Convention and Exposition Technical Papers, Vol. 2, February 27 - March 2, pp 31-42.

Corbett, S.J. and T.M. Short, 1995. "Development of an Airborne Positioning System",

Photogrammetric Record, 15(85):3-15.

Curry, S. and K. Schuckman, 1993. “Practical Guidelines for the Use of GPS

Photogrammetry”, ACSM/ASPRS Annual Convention and Exposition Technical Papers,

Vol. 3, New Orleans, LA, pp 79-88.

Forlani, G. and L. Pinto, 1994. "Experiences of Combined Block Adjustment with GPS

Data", International Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3,

Munich, Germany, September 5-9, pp 219-226.

Ghosh, S.K., 1979. Analytical Photogrammetry, Pergamon Press, New York, 203p.

Habib, A. and K. Novak, 1994. "GPS Controlled Aerial Triangulation of Single Flight

Lines", Proceedings of ASPRS/ACSM Annual Convention and Exposition, Vol 1, Reno, NV,

April 25-28, pp 225-235; also, International Archives of Photogrammetry and Remote

Sensing, Vol. 30, Part 2, Ottawa, Canada, June 6-10, pp 203-210.

Høghlen, A., 1993. "GPS-Supported Aerotriangulation in Finland - The Eura Block", The

Photogrammetric Journal of Finland, 13(2):68-77.

Jacobsen, K., 1991. "Trends in GPS Photogrammetry", Proceedings of ACSM-ASPRS

Annual Convention, Vol. 5, Baltimore, MD pp 208-217.

Jacobsen, K., 1993. “Correction of GPS Antenna Position for Combined Block Adjustment”,

ACSM/ASPRS Annual Convention and Exposition Technical Papes, Vol. 3, New Orleans,

LA pp 152-158.

Jacobsen, K., 1994. “Combined Block Adjustment with Precise Differential GPS-Data”,

International Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3, Munich,

Germany, September 5-9, pp 422-426.

Jonsson, B. and A. Jivall, 1990. “Experiences from Kinematic GPS Measurements”, Paper

presented at the Nordic Geodetic Commission 11

th

General Meeting, Copenhagen, 12p.

References Page 99

Lapine, L.A., 1991. “Analytical Calibration of the Airborne Photogrammetric System Using

A Priori Knowledge of the Exposure Station Obtained from Kinematic Global Positioning

System Techniques”, Department of Geodetic Science and Survey Report No. 411, The Ohio

State University, Columbus, OH, 188p.

Lapine, L.A., nd. "Airborne Kinematic GPS Positioning for Photogrammetry - The

Determination of the Camera Exposure Station", Xerox copy, source unknown.

Lucas, J.R., 1996. “Covariance Propagation in Kinematic GPS Photogrammetry” in Digital

Photogrammetry: An Addendum to the Manual of Photogrammetry, ASPRS, pp 124-129.

Merchant, D. C., 1973. “Elements of Photogrammetry, Part II – Computational

Photogrammetry, Department of Geodetic Science, Ohio State University, 75p.

Merchant, D.C., 1979. “Instrumentation for Analytical Photogrammetry”, Paper presented at

the IX Congresso Brasileiro de Cartografia, Brasil, February 4-9, 8p.

Merchant, D.C., 1992. "GPS-Controlled Aerial Photogrammetry", ASPRS/ACSM/RT92

Technical Papers, Col. 2, Washington, D.D., August 3-8, pp 76-85.

Merchant, D.C., 1994. "Airborne GPS-Photogrammetry for Transportation Systems",

Proceedings of ASPRS/ACSM Annual Convention and Exposition, Vol. 1, Reno, NV, April

25-28, pp 392-395.

Novak, K, 1993. Analytical Photogrammetry lecture notes, Department of Geodetic Science

and Surveying, The Ohio State University, 133p.

Salsig, G. and T. Grissim, 1995. “GPS in Aerial Mapping”, Proceedings of Trimble

Surveying and Mapping Users Conference, Santa Clara, CA, August 9-11, pp 48-53.

van der Vegt, 1989. “Differential GPS: Efficient Tool in Photogrammetry”, Surveying

Engineering, 115(3):285-296.

SURE 440 – Analytical Photogrammetry Lecture Notes

Page i

TABLE OF CONTENT

TOPIC Coordinate Transformations Basic Principles General Affine Transformation Orthogonal Affine Transformation Isogonal Affine Transformation Example of an Isogonal Affine Transformation Rigid Body Transformation Polynomial Transformations Projective Transformation Transformations in Three Dimensions Corrections to Photo Coordinates Analytical Photogrammetry Instrumentation Ground Targets Abbe’s Comparator Principle Basic Analytical Photogrammetric Theory Interior Orientation Film Deformation Lens Distortion Seidel Aberration Distortion Decentering Distortion Atmospheric Refraction Earth Curvature Example Projective Equations Introduction Direction Cosines Sequential Rotations Derivation of the Gimbal Angles Linearization of the Collinearity Equation Numerical Resection and Orientation Introduction Case I Example Single Photo Resection – Case I Case II Case III Page 1 1 3 5 5 6 8 8 9 9 12 12 13 13 14 14 15 17 17 18 21 24 27 34 34 35 38 39 46 51 51 52 55 60 63

SURE 440 – Analytical Photogrammetry Lecture Notes

Page ii

Principles of Airborne GPS Introduction Advantages of Airborne GPS Error Sources Camera Calibration GPS Signal Measurements Flight Planning for Airborne GPS Antenna Placement Determining the Exposure Station Coordinates Determination of Integer Ambiguity GPS-Aided Navigation Processing Airborne GPS Observations Strip Airborne GPS Combined INS and GPS Surveying Texas DOT Accuracy Assessment Project Economics of Airborne-GPS References

67 67 68 69 71 72 72 73 75 80 81 82 92 94 95 97 101

Coordinate Transformations

Page 1

**COORDINATE TRANSFORMATIONS Basic Principles
**

A coordinate transformation is a mathematical process whereby coordinate values expressed in one system are converted (transformed) into coordinate values for a second coordinate system. This does not change the physical location of the point. An example is when the field surveyor sets up an arbitrary coordinate system, such as orienting the axes along two perpendicular roads. Later, the office may want to place this survey onto the state plane coordinate system. This can be done by a simple transformation. The geometry is shown in figure 1.

Figure 1. Geometry of simple linear transformation in two dimensions.

In figure 1, a point P can be expressed in a U, V coordinate system as UP and VP. Likewise, in the X, Y coordinate system, the point is defined as XP and YP. Assume for the time that both systems share the same origin. Then the X-axis is oriented at some angle α from the Uaxis. The same applies for the Y and V axes. The X-coordinate of the point can be shown to be

X P = de + ep

**From triangle feP:
**

cos α = fP eP ⇒ eP = fP cos α

The XP coordinate can be written as

Coordinate Transformations

Page 2

X P = de + eP

But,

tan α =

de YP UP eP

∴ de = YP tan α

cos α =

∴ eP =

UP cos α

Then,

UP cos α Y sin α UP = P + cos α cos α X P cos α = YP sin α + U P X P = YP tan α + U P = X P cos α − YP sin α

**In a similar fashion, the VP coordinate transformation can also be developed.
**

VP = ab + Pb

But,

tan α = sin α = bc YP ab X P − bc ⇒ ⇒ bc = YP tan α ab = (X P − bc )sin α

Therefore, ab becomes ab = (X P − YP tan α )sin α = X P sin α − YP and Pb can be shown to be sin 2 α cos α

cos α =

YP Pb

⇒

Pb =

YP cos α

Coordinate Transformations

Page 3

The VP coordinate then becomes

VP = X P sin α − YP

Y sin 2 α + P cos α cos α 1 sin 2 α = X P sin α + YP cos α − cos α = X P sin α + YP cos α

In these derivations, the angle of rotation (α) is a rotation to the right. It can easily be shown that the conversion from U, V to X, Y will take on a similar form. Since the angle would be in the opposite direction, one can insert a negative value for α and then arrive at the following form of the transformation (recognizing that the sin (-α) = -sin α):

X P = U P cos α + VP sin α YP = − U P sin α + VP cos α This can be shown in matrix form as:

X P cos α sin α Y = − sin α cos α P U P V P

Next, we will take these transformation equations and expand them into different forms all related to what is called the affine transformation.

**General Affine Transformation
**

The general affine transformation is normally shown as

x ' = a1x + b1 y + c1 y' = a 2 x + b 2 y + c 2

(1)

Which provides a unique solution if

a1 a2

b1 ≠ 0. b2

c) Transforms model coordinates to survey coordinates. two lines that are parallel to each other prior to the transformation will remain parallel after the transformation. Cy α ε are the translation elements in moving from the center of the original coordinate system to the center of the transformed coordinate system. ε. In other words. Physical interpretation of the affine transformation. α. y’ represent the newly transformed coordinate system. and ∆y’). Figure 2 shows the physical interpretation involved in this transformation. Figure 2. Cy. ∆x’. The property of the affine transformation is that it will carry parallel lines into parallel lines. The transformation can then be written as x ' = C x (x )cos α + C y (y )sin α + ∆x ' y' = −C x (x )sin (α + ε ) + C y (y )cos(α + ε ) + ∆y' (2) where: ∆x’. and is the angle of non-orthogonality between the axes of the transformed coordinate system. is the angle of rotation.Coordinate Transformations Page 4 This is a two-dimensional linear transformation that is used in photogrammetry for the following: a) Transforms comparator coordinates to photo coordinates and is used for correcting film distortion. When comparing equations (1) and (2). b) Connecting stereo models. are scale factors in the x and y direction. The x and y axes represent the original axis system while x’. It will not preserve orthogonality. ∆y’ Cx. one can see the following. . Note that there are six parameters in this transformation (Cx.

one can impose the condition of orthogonality. This transformation is useful when one takes into account the differences in the magnitude of film shrinkage in the length of the film versus its width. similarity transformation. It is shown as: x ' = C x cos α + C y sin α + ∆x ' y' = −C x sin α + C y cos α + ∆y' (5) If one recalls those equalities expressed in equation (3). and ∆y’).Coordinate Transformations Page 5 a1 = C x cos α b1 = C y sin α c1 = ∆x ' b 2 = C y cos(α + β ) c 2 = ∆y' a 2 = −C x sin (α + ε ) (3) Orthogonal Affine Transformation To the general affine case. The isogonal affine transformation is also called the Helmert transformation. ε → 0. These conditions are orthogonality (ε → 0) and uniform scale (C = Cx = Cy). This results in a five parameter transformation (Cx. Isogonal Affine Transformation To the general case of the affine transformation one can impose two conditions. equation (5) can be expressed as . The transformation is shown as: x ' = C x x cos α + C y y sin α + ∆x ' y' = −C x x sin α + C y y cos α + ∆y' (4) Note that this transformation is non-linear. Euclidean transformation. α.. and conformal transformation. we can see that C cos α = a1 = b 2 − C sin α = −b1 = a 2 Therefore. i. Cy. ∆x’.e.

The back solution can be shown as: x= a (x ' − c ) − b (y' − d ) a 2 + b2 b (x ' − c ) + a (y' − d ) y= a 2 + b2 (7 ) Example of an Isogonal Affine Transformation The following are the measured comparator values: yUL = -40.067 mm yLR = -50.057 mm .107 mm y’UL = -39.026 mm xPT = 76.0985 mm yPT = -41. as normally shown as: x ' = ax + by + c y' = −bx + ay + d (6) In this form (6).820 mm Recall that the transformation formulas can be written as: xUL = 70.014 mm xLR = 80.133 mm y’LR = -49.843 mm x’LR = 80.9810 mm The “true” photo coordinates of the reseau are: x’UL = 70.Coordinate Transformations Page 6 x ' = a 1 x + b1 y + c1 y' = − b1 x + a 1 y + c 2 or. the transformation is linear.

and y’LR. b. b.057 0 1 − 40. and yLR.133 y'LR − 49.Coordinate Transformations Page 7 x 'UL = a x UL + b y UL + c y'UL = a y UL − b x UL + d x 'LR = a x LR + b y LR + c y'LR = a y LR − b x LR + d The unknowns are a. b. d ) y LR ∂ y'LR ∂ (a . c.067 0 1 − 50.057 0 1 − 50. d ) x ∂ y'UL UL ∂ (a . The “true” values are x’UL. c. d ) ∂x ' UL = y UL ∂b ∂x ' UL =0 ∂d y UL − x UL y LR − x LR 1 0 70.026 1 0 − 80. b.067 0 1 The discrepancy vector (f) and the vector containing the parameters (∆) are shown as: ∆ x 'UL 70. b.014 1 0 − 70. c.107 y' − 39. Differentiating the transformation formulas with respect to the parameters would follow as: ∂ x ' UL = x UL ∂a ∂ x ' UL =1 ∂c The design matrix (B) is shown as ∂ x 'UL ∂ (a .014 = 1 0 80. c. and d while the measured values are xUL.843 UL = f = x 'LR 80. xLR. yUL.026 − 40. d ) y UL B= = ∂ x 'LR x LR ∂ (a . x’LR.820 a b ∆= c d The normal equation is found using the following relationship where N is the normal coefficient matrix: N = BtB . c. y’UL.

In this case the transformation can be shown with only three parameters (α.663 0.002547 )(− 41. and ∆y’) as: .240 − 89.999051)(76.449211 0.014579 c − 0.748971 −1 N = 76.0985) + (0.748971 0. ∆x’.942775 The constant vector (t) and the solution (∆) are computed as: ∆ 1514.776 t = BT f = 150.429 − 90.429 15422.999051)(− 41.942775 0 76.040 15422.009978 0.9810) + 0.002547 )(76.040 − 150.0985) + (− 0.999051 a − 0.Coordinate Transformations Page 8 0 150.148 mm y'PT = −b x PT + a y PT + d = −(− 0.9810) + (− 0.045424) = −41. This includes orthogonality and not scale (Cx = Cy = 1).045424 d The transformed coordinates of the point are then computed as: x 'PT = a x PT + b y PT + c = (0.793 Rigid Body Transformation To the general affine transformation.009978 0.124 N= 2 0 2 The inverse of the normal coefficient matrix is shown as: 0 − 0.068 − 33.002547 b = ∆ = N −1t = 0.014579 = 76. one further set of conditions can be imposed.449211 0.124 − 90.

Surveying measurements are increasingly being performed in a 3-D mode. The approach is basically the same as above. is the form of the 2-D projective transformation. without derivation. 1979] can also be used. x' = a 1x + a 2 y + a 3 d1x + d 2 y + 1 b1 x + b 2 y + b 3 d1x + d 2 y + 1 y' = Transformations in Three Dimensions The developments that have already been presented represent the transformation in 2-D space. except for the addition of one more axis about which the transformation takes place.Coordinate Transformations Page 9 x ' = x cos α + y sin α + ∆x ' y' = − x sin α + y cos α + ∆y' (8) Polynomial Transformations A polynomial can also be used to perform a transformation. x ' = A 0 + A 1 x + A 2 y + A 3 x 2 − y 2 + A 4 ( 2xy ) + l y' = B 0 − A 2 x + A1 y + A 4 x 2 − y 2 + A 3 ( 2 xy ) + l ( ) ( ) Projective Transformation The projective equations are frequently used in photogrammetry. Shown here. This is given as x ' = a 0 + a 1 x + a 2 y + a 3 x 2 + a 4 y 2 + a 5 xy + m y' = b 0 + b1 x + bb 2 y + b 3 x 2 + b 4 y 2 + b 5 xy + m An alternative from Mikhail [Ghosh. Discussion on the use of the projective equations will be given in a later section. .

A polynomial projective transformation can be shown. without derivation. Ghosh [1979] gives the general form of this type of transformation.Coordinate Transformations Page 10 Instead of using the projective equations. which is conformal in the three planes. zx in y’. and xy in z’ are zero. This is given as [Ghosh. Mikhail presents another form of the 3-D polynomial. x ' = a 0 + a 1 x + a 2 y + a 3 z + a 4 x 2 + a 5 y 2 + a 6 z 2 + a 7 xy + a 8 yz + a 9 zx + a 10 xy 2 + a 11 x 2 y + a 12 xz 2 + m y' = b 0 + b1 x + b 2 y + b 3 z + b 4 x 2 + b 5 y 2 + b 6 z 2 + b 7 xy + b 8 yz + b 9 zx + b10 xy 2 + b11 x 2 y + b12 xz 2 + m z' = c 0 + c1 x + c 2 y + c 3 z + c 4 x 2 + c 5 y 2 + c 6 z 2 + c 7 xy + c 8 yz + c 9 zx + c10 xy 2 + c11 x 2 y + c12 xz 2 + m This transformation is not conformal therefore it should only be used where the rotation angles are very small. 1979]: x' = a 1x + a 2 y + a 3z + a 4 d1 x + d 2 y + d 3 z + 1 b1 x + b 2 y + b 3 z + b 4 d1x + d 2 y + d 3 z + 1 c1 x + c 2 y + c 3 z + c 4 d1x + d 2 y + d 3 z + 1 ( ) ( ) ( ) y' = z' = . 1979}: x ' = A 0 + A1 x + A 2 y + A 3 z + A 5 x 2 − y 2 − z 2 + 0 + aA 7 zx + 2A 6 xy + y' = B 0 − A 2 x + A1 y + A 4 z + A 6 − x 2 + y 2 − z 2 + 2A 7 yz + 0 + 2A 5 xy + z' = C 0 + A 3 x − A 4 y + A1 z + A 7 − x 2 − y 2 + z 2 + 2A 6 yz + 2A 5 zx + 0 + The 0’s here indicate that the coefficients for the terms. yz in x’. polynomials may be used to perform the 3-D transformation. as [Ghosh.

Coordinate Transformations Page 11 A solution is possible provided that a1 b1 c1 d1 a2 b2 c2 d2 a3 b3 c3 d3 a4 b4 ≠0 c4 d4 .

This factor involves the necessary training required for the operator of the equipment. Comparators are designed specifically for precise photo measurements for analytical photogrammetry. scale and Z-column readings should be at some realistic value. at least theoretically. Photos are scanned (or captured directly in a digital form) and points are measured.Corrections to Photo Coordinates Page 12 CORRECTIONS TO PHOTO COORDINATES Analytical Photogrammetry Instrumentation Analytical photogrammetry is performed on specialized instruments that have a very high cost due to the fact that there is a limited market. or semi-analytical (computeraided) stereoplotters can be used either in a monoscopic or stereoscopic mode. The base (bx). then there will be a limited pool from which one can draw their operators. to completely automate the whole process and an individual with no basic understanding of photogrammetric principles can do this. When used on a stereoplotter. If the instrument requires an individual with a basic theoretical background in photogrammetry along with experience. Comparators can be either monoscopic or stereoscopic. With autocorrelation techniques the whole process of aerotriangulation can be automated with the solution containing more points than can be done manually. it is important to put all of the elements in their zero positions (ω’ = ω” = ϕ’ = ϕ” = κ’ = κ” = by’ = by” = bz’ = bz” = 0) [Ghosh. 1979]: • • • • • High accuracy High reliability High measuring efficiency Low first cost Low cost of maintenance In addition. The design characteristics of analytical instrumentation include [Merchant. precision analog. With the onset of digital photogrammetry. the instrumentation is cheaper (being the computer) but the software still remains expensive for this specialized applications. 1979]. operational efficiency becomes an important consideration. Operational efficiency also involves on the comfort of the operator when operating the equipment. The photographs are placed on the stages and all points that are imaged on the photo are measured. . There are various different kinds of photogrammetric instrumentation that can be used in analytical photogrammetry. At the low end. These instruments are generally linked to analytical photogrammetry software that helps the operator complete the photo measurements. One of the advantages of digital photogrammetry is that it has the capability. Analytical plotters can also be used for analytical photogrammetric measurements. The last type of instrument is the digital or softcopy plotter.

. . p.). it is important that the instrument upon which the measurements are made is well calibrated and maintained. Using a point transfer instrument. which is based on calibration.errors of rectilinearity (bending) of the guide rails.g.scaling and periodic errors (of the x. manholes.lack of orthogonality between x and y axes (also known as ‘rectangularity error’). The last type of control point is the artificial point that is added to the photography after the film is processed. Several different target designs are used in photogrammetry. There are many systematic error sources associated with the comparator. points are marked on the emulsion of the film. Signalized points are targeted on the ground prior to the flight. although from a practical perspective these errors are accounted for by transforming the photo measurements to the “true” photo system. This is because the control system may not provide for the continuously variable scanning direction. The design is based on the following requirements: "i) To exclusively base the measurement in all cases on a longitudinal graduation with which the distance to be measured is directly compared. b) Backlash and tracking errors.errors due to deviation of the direction. such as the PUG by Wild.. .affinity errors (being the scale difference between x an y directions). microscope velocity does not drop to zero at points to be approached during the operation.” [Ghosh. coordinate counter. . These items can be things like the intersection of roads (for small-scale mapping). Ground Targets Ground targets can be one of three different types. etc. 1979. etc. d) Errors of automation in the system. intersections of sidewalks. Abbe's Comparator Principle Abbe's comparator principle states that the object that is to be measured and the measuring instrument must be in contact or lie in the same plane. c) Dynamic errors (e. y measuring systems involving scales. spindles.Corrections to Photo Coordinates Page 13 To achieve the high accuracy demanded by many analytical photogrammetric applications. . They are “a) Errors of the instrument system.digital resolution (smallest incremental interval). Detail points are those well defined physical features that are imaged on the photography. .30] One could determine the corrections to each of these error sources. and .

Corrections to Photo Coordinates Page 14 ii) To always design the measuring apparatus in such a way that the distance to be measured will be the rectilinear extension of the graduation used as a scale. Third Order Theory consists of all the other sources of error in the imposition of the collinearity condition. It has coordinates xo. 0. These errors are usually not accounted for except for special circumstances. Those items that are normally covered include lens distortion. The principal point is determined through camera calibration and it generally is reported with respect to the center of the photograph as defined by the intersection of opposite fiducial marks (indicated principal point). is imaged on the photograph with coordinates xp. The photogrammetric coordinate system is shown in figure 2. Examples of Abbe's Comparator principle with simple measurement systems. Second Order Theory and Third Order Theory. Figure 3. 1979.7] Basic Analytical Photogrammetric Theory Analytical photogrammetry can be broken down into three fundamental categories: First Order Theory. yp. Second Order Theory corrects for the most significant errors that are not accounted for in First Order Theory. ASP. p. p. transient thermal gradients across the lens cone. etc. Interior Orientation The first phase of analytical photogrammetric processing is the determination of the interior orientation of the photography." [Manual of Photogrammetry. The perspective center is the location of the lens elements . in Ghosh. They include platen unflatness. film deformation and earth curvature. Fist Order Theory is the basic collinearity concept where the light rays from the object space pass through the atmosphere and the camera lens to the film in a straight line. 0. The point. atmospheric refraction. which are not included in Second Order Theory. yo.

A reseau grid consists of a grid of targets that are fixed to the camera lens and imaged . Film Deformation When film is processed and used it is susceptible to dimensional change due to the tension applied to the film as it is wound during both the picture taking and processing stages. atmospheric refraction. yo. f. reseau photography is commonly employed for applications requiring a higher degree of accuracy. lens distortion. the introduction of water-based chemicals to the emulsion during processing and the subsequent drying of the film may cause the emulsion to change dimensionally. these effects need to be compensated. Photographic coordinate system. The purpose is to correct the image rays such that the line form the object space to the image space is a straight line. One of the problems with this approach is that it is possible that unmodelled distortion can still be present when only four (or fewer) fiducial marks are employed.Corrections to Photo Coordinates Page 15 and it has coordinates xo. thereby fulfilling the basic assumption used in the collinearity condition. The simplest approach is to use the appropriate transformation model discussed in the previous section. The vector from the perspective center to the position on the photo is given as x p − x o a = y p − yo 0−f Interior orientation involves the determination of film deformation. In addition. and earth curvature. To overcome this problem. Figure 4. Therefore.

an 8-parameter projective transformation can be used. Other approach to compensation of film deformation is to use a polynomial. taking into consideration the coordinates of the principal point (xo. the results should more accurately depict the dimensional changes that occur due to film deformation. used by the U. The reseau grid is calibrated so that the positions of the targets are accurately known. By observing the reseau targets that surround the imaged points and using one of the transformation models discussed earlier. Its advantage is that linear scale changes can be found in any direction. the isogonal affine model can be used.S. Coast and Geodetic Survey (now National Geodetic Survey) when four fiducials are used is shown as: ∆x = x − x ' = x + a 0 + a 1 x + a 2 y + a 3 xy ∆y = y − y' = y + b 0 + b1 x + b 2 y + b 3 xy This model can be expanded to an eight fiducial observational scheme as: ∆x = x − x ' = x + a 0 + a 1 x + a 2 y + a 3 xy + a 4 x 2 + a 5 y 2 + a 6 x 2 y + a 7 xy ∆y = y − y' = y + b 0 + b1 x + b 2 y + b 3 xy + b 4 x 2 + b 5 y 2 + b 6 x 2 y + b 7 xy . this model provides a unique solution. For example.Corrections to Photo Coordinates Page 16 on the film. One model. It will have the following form. x ∆x cos α sin α x ' x o y = ∆y + − sin α cos α y' − y o In its linear form it looks like: x a b x ' c x o y = − b a y' + d − y o Using 4 fiducials. yo). The correction for film deformation is given as x= a 1 x ' + a 2 y' + a 3 − xo c1 x ' + c 2 y' + 1 b 1 x ' + b 2 y' + b 3 − yo c 1 x ' + c 2 y' + 1 y= Measurement of the four fiducials yields 8 observations. Therefore. One simple approach is to put a piece of glass in front of the film with the targets etched on the surface.

The exception is Seidel's fifth aberration . Seidel Aberration Distortion Figure 5. The effect of this distortion is radial from the principal point. aberrations do not affect the geometry of the image but instead affect image quality. curvature of field. Here the geometric position of the image point is moved in image space and this change in position must be accounted for in analytical photogrammetry. 1980]. There are two components of lens distortion: radial distortion (Seidel aberration) and decentering distortion. With today’s computer controlled lens manufacturing process. Radial Lens Distortion Geometry. Seidel has identified five lens aberrations. The effects are small with today’s lens systems. The values for lens distortion are determined from camera calibration. coma. Conrady's intuitive development for handling this radial distortion is expressed in the following polynomial form: . An aberration is the "failure of an optical system to bring all light rays received from a point object to a single image point or to a prescribed geometric position" [ASPRS. Decentering distortion is caused by faulty placement of the individual lens elements in the camera cone and other manufacturing defects. and distortion. These include astigmatism. Radial lens distortion is caused from faulty grinding of the lens.distortion. spherical aberration. chromatic aberration (this is sometimes broken into lateral and longitudinal chromatic aberration). this distortion is almost negligible at least to the accuracy of the camera calibration itself.Corrections to Photo Coordinates Page 17 Lens Distortion The effects of lens distortion are to move the image from its theoretically correct location to its actual position. These values are generally reported by either a table or in terms of a polynomial (see the example at the end of this section). It is caused by the faulty grinding of the lens. Generally.

recall that r2 = x2 + y2 By similar triangles.88] From Figure 3. and c. The axial ray passes the lens undeviated. This is called the axis of zero tangential distortion (see figures 4 and 5). p. When the value is "one" then the radial line remains straight. The distortion can be represented by a continuous function.” [Ghosh. . The sense of the distortions should be positive for all outward displacement of the image. 1979. b. the following relationship can be shown δr δx δy = = r x y the x and y Cartesian coordinate components of the effects of this distortion are thus found by: δx = δr x = k 0 + k1r 2 + k 2 r 4 + x r ( ) δy = δr y = k 0 + k1r 2 + k 2 r 4 + y r ( ) The corrected photo coordinates can then be computed using the form: δr x c = x − δx = 1 − x = 1 − k 0 − k1r 2 − k 2 r 4 − x r ( ) δr y c = y − δy = 1 − y = 1 − k 0 − k1r 2 − k 2 r 4 − y r ( ) DECENTERING DISTORTION Decentering lens distortion is asymmetric about the principal point of autocollimation.Corrections to Photo Coordinates Page 18 δr = k 0 r + k1r 3 + k 2 r 5 + k 3r 7 + k 4 r 9 + This is based on three general hypotheses: “a.

designed the corrections for the lens distortion due to decentering. using the developments by Washer. Figure 7. Duane Brown. Geometry of tangential distortion showing the tangential profile.Corrections to Photo Coordinates Page 19 Figure 6. Brown called this the "Thin Prism Model" and it is shown as: . Effects of decentering distortion.

The corrected photo coordinates become: . Therefore. J2 are the coefficients of the profile function of the decentering distortion. The concept of the thin prism was found to be inadequate to fully describe the effects of decentering distortion. and ϕo is the angle subtended by the axis of the maximum tangential distortion with the photo x-axis. This is the tangential distortion along the axis of maximum tangential distortion. the Conrady-Brown model was developed to find the effects of decentering on the x.y encoders: 2 x 2 2x y δx = J1r 2 + J 2 r 4 1 − 2 sin ϕo − 2 cos ϕo r r ( ) 2x y 2 y2 δy = J1r 2 + J 2 r 4 2 sin ϕo − 1 + 2 cos ϕo r r ( ) A revised Conrady-Brown model made further refinements to the computation of decentering distortion and this model is shown to be: δx = P1 r 2 + 2x 2 + 2P2 x y 1 + P3r 2 + P4 r 4 + l δy = 2P1x y + P2 r 2 + 2 y 2 1 + P3r 2 + P4 r 4 + l P1 = − J1 sin ϕo P2 = J1 cos ϕo [ ( ) ][ ] [ ( )][ ] where: P3 = P4 = J2 J1 J3 J1 P’s define the tangential profile function. The corrected photo coordinates due to the effects of decentering distortion can then be found by subtracting the errors computed in the previous equations.Corrections to Photo Coordinates Page 20 ( δy = (J r 1 δx = − J1r 2 + J 2 r 4 sin ϕo = − J sin ϕo 2 + J2r 4 ) )cos ϕ o = J cos ϕo where: J1.

δy Atmospheric Refraction Figure 8. The light rays from the object space to image space must pass through layers of differing density thereby bending that ray at various layer boundaries along the path. and θ+dα = angle of refraction Generalizing and simplifying yields .δx yc = y . This index depends upon the temperature. Effects of atmospheric refraction on object space light ray. Light rays bend due to refraction. etc. carbon dioxide. pressure and composition.Corrections to Photo Coordinates Page 21 xc = x . including humidity. The amount of refractions is a function of the refractive index of the air along the path of that light ray. From Snell's Law we can express the law of refraction as (n i + dn )sin θi = n i sin (θi + dα ) where: n = refractive index dn = difference in refractive index between the two mediums θ = angle of incidence. dust.

dθ can be expressed with respect to r as r = f tan θ dr = f sec 2 θ dθ = f 1 + tan 2 θ dθ r2 = f 1 + 2 dθ f ∴ dr = f 2 + r2 dθ f ( ) δr can also be expressed as a function of K using: .Corrections to Photo Coordinates Page 22 dα = Integrating α= dn tan θ n ∫ αo dα = tan θ αP ∫ no np dn = tan θ ⋅ ln (n ) n oP n n where ln indicates the natural logarithm and the subscripts L is the camera station and P is the ground point. For vertical photography. Generalizing dθ = α = K tan θ 2 where K is the atmospheric refraction constant.

022576 h )5.256 − 277.0 (1 − 0.245 ⋅ 10− 6 K= H [ ] For altitudes up to nine kilometers.02257 H )4.02257 H )5. For example.Corrections to Photo Coordinates Page 23 dθ = K tan θ dr = f 2 + r2 f 2 + r2 r K tan θ = K f f f r3 ∴ dr = K r + 2 f The radial component can also be expressed using a simplified power series: dr = k 1 r + k 2 r 3 + k 3 r 5 + where the k’s are constants. this equation can be simplified as K = { (H − h ) [1 − 0. the 1959 ARDC (Air Rome Development Center) model developed from Bertram is shown as: 2410 H 2410 h h −6 − 2 K= 2 ⋅ 10 H − 6 H + 250 h − 6 h + 250 H The atmospheric model developed by Saastamoinen for an altitude of up to eleven kilometers is given by 1225 (1 − 0.256 − (1 − 0. The Cartesian components of atmospheric refraction are r2 δr δx = x = K 1 + 2 x f r r2 δr δy = y = K 1 + 2 y r f K is a constant determined from some model atmosphere.02 (2H + h )]}⋅ 10−6 13 .

The tabulated values.4 21.8 3.4 4.6 3.8 6.4 1.7 18.3 0.9 1.3 7.3 13.2 1.2 4.Corrections to Photo Coordinates Page 24 There are several other atmospheric models.6 4.2 5.9 For Ground Elevation – 1500 m above sea level 0.23 1.2 5. as shown in the figure.9 7.9 6.76 0.7 0.3 2.9 0.59 Table 1.8 2.4 4.2 2.25 2.5 11.7 0.8 0. these models are almost the same. He also states that.5 20.4 0.9 0.7 7.2 14. will occupy a position on that plane at a distance of ∆H from the earth's surface.6 0.4 7.9 13.87 2.9 2.9 5.2 2. The point.0 4.0 5. is always radially inward towards the principal point.1 13.5 50 63 78 94 111 131 For Ground Elevation – 0 m above sea level 1.2 3.2 0.9 0.7 1. Ghosh [1979] also identifies the US Standard Atmosphere and the ICAO Standard atmosphere.5 9.0 16. we can see that .5 4.5 6.4 3. Radial image distortion due to atmospheric refraction.7 3.6 2. when projected onto a plane tangent to the ground nadir point.9 3.7 2.3 9.4 5.6 0.1 2.8 5.50 2. From the geometry.9 14.0 9.8 6.3 For Ground Elevation – 500 m above sea level 1.1 7. EARTH CURVATURE Earth curvature causes a displacement of a point due to the curvature of the earth.8 15.6 1.4 3.95].5 7.5 5.1 2.6 4.9 7.9 3.8 153 10.3 4.7 4.6 Coefficients k1⋅10-2 3.2 12.99 2.2 6.0 7.0 For Ground Elevation – 1000 m above sea level 1. in mm 12 0.3 1.2 2.6 4.6 2.74 1.1 5.4 6.3 1.0 17.7 24 0.7 1. Flying Height in m 3000 6000 9000 3000 6000 9000 3000 6000 9000 3000 6000 9000 For Radial Distance r of the Image Point from the Photo Center.3 23.3 6. dr.8 1.9 3.99 0.8 0.9 6.1 18. are in micrometers.7 6. 1979.7 2.53 2.2 1.08 2.8 10. The image displacement.9 9.3 0.5 0.5 1.1 k2⋅10-6 1.2 4. Table 1 shows the amount of distortion using a focal length of 153 mm and the ICAO Standard atmosphere [from Ghosh.5 8.9 5.1 13.7 10.1 12.1 1. up to about 20 km.3 8.2 10.8 3. p.7 3.4 8.

θ= D' D ≈ R R D2 D ∴ cos θ ≈ cos = 1 − + 2R2 R ∆H = R − R cos θ = R (1 − cos θ) D2 = R 1 − 1 + − 2 2R ∴ ∆H ≈ From which we can write dE = f ∆D H' D2 2R But.Corrections to Photo Coordinates Page 25 Figure 9. . Earth Curvature Correction.

. Looking at the formula for earth curvature and the intuitive evaluation of the figure. dE = = But f D D2 H' H' 2 R fD3 2 H2R D≈ H' r f Yielding H' r 3 dE = 2R f 2 Since H'/(2Rf2) is constant for any photograph dE = K r 3 where: K= H 2R f 2 The effects of earth curvature are shown in the Table 2 with respect to the flying height (H) and the radial distance from the nadir point [Ghosh. Doyle. 1981].Corrections to Photo Coordinates Page 26 dE ≈ D ∆H H' Therefore. 1979. one can see that the effects will increase rapidly at higher flying heights and the farther one moves from the nadir point.

98].2 28. p. Figure 10. .2 6 0.4 1.2 1.4 76.0 0.1 4.7 8 0.3 4.5 3.0 0.2 38.8 21.1 0.8 114.3 28.0 0.9 Table 2.5 96.1 19.4 0.9 35.9 1.3 10 0.0 60.1 0. Amount of earth curvature (in mm) for vertical photography assuming a focal length of 150 mm [from Ghosh.0 0. 1979.2 14. Example showing calibration values for fiducials and principal point.4 57.0 0.1 0.2 0.0 0.0 24.0 0.0 7.6 85.0 0.5 6.3 2 0.0 9.0 14.8 3.0 142.5 0.0 0.8 6.6 17.0 36.3 57.0 48.8 3.6 H in km 4 0.2 1.9 7.9 3.0 12. The radial lens distortion is shown from the following diagram delineating the distortion curve.2 7.8 1.Corrections to Photo Coordinates Page 27 R(mm) 10 20 40 60 80 100 120 140 160 0.212 mm Fiducial mark & principal point coordinates are shown in the next figure of the fiducial marks.6 14.1 1 0.6 7.5 10.3 2. EXAMPLE A vertical aerial photograph is taken with an aerial camera having the following calibration data: Calibrated focal length = 152.

The average height of the terrain is 400' above mean sea level.980 228.260 228.202 240.032 16.3 40 +9 +7.5 -7. Radial Distance (mm) Distortion (µm) Polynomial (µm) 20 +6 +5.6 160 -13 +123 +0.640 ry (mm) 13.068 24.341 237.5 140 -1 +0.000' above mean sea level.160 36. Radial lens distortion for camera in the example.Corrections to Photo Coordinates Page 28 Figure 11.2 80 -1 100 -7 120 -9 -10.10x10-4 J2 = -1. Camera calibration graph of distortion using both polynomials and radial distortion. p Questions: rx (mm) 28.432 225. The photograph is placed in the comparator and the following image coordinates are measured: point 1 2 3 4 Pt.40x10-8 Νo = 108o 00' The flying height is 38.426 . The decentering lens distortion values are: J1 = 8.1 Table 2.9 60 +6 +5.

What are the image coordinates of point p corrected for film deformation and reduced to put the origin at the principal point? Use a 6-parameter general affine transformation and compute the residuals.928 23. 4.974 226.622 The parameters are: a1 = 0.973 F2 = 228.257 F1 = 235.0 1.0 14.0000000002 − 0.068 228.031 240.980 225.0029475275 N = 0.160 The discrepancy vectors are: 26.0000222132 − 0. 3.Corrections to Photo Coordinates Page 29 1.426 mm The design matrix (B) is: 28.9647056994 −1 1.640 mm y = 36.648 16.274 238.202 13.984 The normal coefficient matrix inverse (N-1) is: 0.0 1. The observed photo coordinates are: x = 228.0 1. 2.341 16.260 237. What are the image coordinates of p corrected additionally for radial and decentering lens distortion? What are the image coordinate corrections at p for atmospheric refraction and earth curvature? What are the final corrected image coordinates of p? SOLUTION 1.432 24.99923 a2 = -0.0026815783 0.0000222209 − 0.00428 .

131.00294 0.00430 The photo coordinates translated to the principal point become: x = 226.657 mm .646)2 ( ) ( ) Siedel radial distortion in terms of their rectangular coordinate vales are: . Lens distortions are computed as follows: r = x 2 + y2 = = 127.104 mm = 95.663 mm (95.653 mm ∆r = 0.96656 The residuals are: − 0.121.814 mm = -84.223 × 10−9 r 5 = −8.794 × 10−5 r 3 + 2.168 mm b2 = 0.99917 c2 = 1.553)2 + (− 84.00441 c1 = -1.168 mm .75196 0.286 r − 5.00430 − 0.00294 V1 = − 0.657 mm y = 37.Corrections to Photo Coordinates Page 30 b1 = 0.00294 0.646 mm 2.00430 − 0.553 mm y = 37.00294 The transformed coordinates are: x = 226.00429 V2 = 0.

652 ) 1 + (− 0.Corrections to Photo Coordinates Page 31 ∆r − 0.00077 127.646) 1 − r 127.00866 x c = x 1 − = 95.00025)(95.016) = 95.00025 J 2 − 1.559 ) + 2 (− 0. Using the 1959 ARDC model: .653 = 95.000017 J1 8.6532 2 [ ( [ [ ( ) ][ ] ) ][ ( )] δy = 2P1xy + P2 r 2 + 2 y 2 1 + P3r 2 = 0.655) 1 + (− 0.003 = −84.553 1 − r 127.652 ) + (− 0.00077 P2 = J1 cos ϕo = 8.652 mm The decentering distortion using the revised Conrady-Brown model is shown as follows: P1 = −J1 sin ϕo = −8.559 )(− 84.10 × 10− 4 cos108o = −0.00077 ) (95.559 )(− 84.10 × 10− 4 δx = P1 r 2 + 2x 2 + 2P2 xy 1 + P2 r 2 = −0.653 = −84.6532 + 2 (95.6532 + 2 (− 84.40 × 10− 8 P3 = = = −0.0025) 127.652 − 0.559 mm ∆r − 0.016 mm = − 0.655 mm 3.003 mm = 2 (− 0.00025) 127.576 mm y c = y − δy = −84.559 − (− 0.000017 ) 127.6532 2 [ ( )][ ] ( )][ ( )] The coordinates corrected for decentering distortion then become: x c = x − δx = 95.10 × 10 −4 sin 108o = −0.00866 y c = y 1 − = (− 84.

212 2 95.642 mm The coordinates corrected of earth curvature become: .014 mm 127.Corrections to Photo Coordinates Page 32 1200 m 1 km = 11.6532 r2 δy = K 1 + 2 y = 0.000 − 400) = 2f 2 R 2 (152.6532 r2 δx = K 1 + 2 x = 0.000) = 0.12 km 400' 3937' 1000 m 2410 H 2410 (11.0000887 1 + 152.0000887 127.576 f = 0.58) −6 K= 2 × 10 − 6 = × 10 2 H − 6H + 250 11.58 km 38.0000887 1 + 152. The corrected photo coordinated due to the effects of refraction are: x c = x − δx = 95.0807 mm 2 4.013) = −84.655) − (− 0.561 mm y c = y − δy = (− 84.576 − 0.58) + 250 = 0.014 = 95.655) f = −0.653) (38.212)2 (20.000' 3937' 1000 m 1200 m 1 km = 0.906.013 mm The effects of earth curvature are presented as: dE = r 3H ' (127.212 2 (− 84.58 − 6 (11.

642) 1 + r 127. thus.653 = −84.Corrections to Photo Coordinates Page 33 0. x = 95.696 mm The final corrected photo coordinates are.0807 dE x c = x 1 + = 95.653 = 95.0807 dE y c = y 1 + = (− 84.622 mm 0.622 mm y = -84.561 1 + r 127.696 mm .

in the comparison. The origin of the photographic coordinates is at the principal point which can be expressed as X' x − x o Y' = y − y o Z' − f where: x. Z XL. this will most probably never coincide with the ground X-axis. y xo. On the photograph. the coordinates of the points imaged on the photograph are determined through observations. Y. ZL are the ground coordinates of the point are the ground coordinates of the ground nadir point Thus. The next procedure is to compare these photo coordinates with the ground coordinates. a translation is necessary. In photogrammetry. The numerical resection problem involves the transformation (rotation and translation) of the ground coordinates to photo coordinates for comparison purposes in the least squares adjustment. lets derive the rotation matrix that will be used to form the collinearity condition. We can write this as X 1 X − X L Y = Y − Y L 1 Z1 Z − Z L where: X.Projective Equations Page 34 PROJECTIVE EQUATIONS Introduction In the first section we were introduced to coordinate transformations. For any number of reasons. Note that the ground nadir coordinates would correspond to the principal point coordinates in X and Y if the photograph was truly vertical. the positive x-axis is taken in the direction of flight. YL. both ground coordinates and photo coordinates are referenced to the same origin separated only by the flying height. Before we begin this process. yo f are the photo coordinates of the imaged point with reference to the intersection of the fiducial axes are the coordinates from the intersection of the fiducial axes to the principal point is the focal length Since the origin of the ground coordinates does not coincide with the origin of the photographic coordinate system. .

Projective Equations Page 35 Direction Cosines If we look at figure 1. Here we can readily see that the vector PQ can be defined as: XQ − XP PQ = YQ − YP = −QP Z −Z P Q The length of the vector becomes . Vector OP in 3-D space. we can see that point P has coordinates XP. figure 2 shows the line PQ. The length of the vector (distance) can be defined as Figure 12. OP = X + Y + Z 2 P 2 P [ 1 2 2 P ] The direction of the vector can be written with respect to the 3 axes as: XP OP Y cos β = P OP Z cos γ = P OP cos α = These cosines are called the direction cosines of the vector from O to P. YP. For example. This concept can be extended to any line in space. ZP.

Line vector PQ in space. cos α = cos β = cos γ = XQ − XP PQ YQ − YP PQ ZQ − ZP PQ If we look at the unit vector as shown in figure 3. one can see that the vector from O to P can be defined as .PQ = (X Q − X P ) + (YQ − YP ) + (Z Q − Z P ) 2 2 [ Projective Equations 2 ] Page 36 2 1 and the direction cosines are Figure 13.

Since i has similar angles to the other two axes. and the point P has coordinates (x. j. Unit vectors. For example. Rotation between Y and x axes. z)T. The angle between the axes is defined as (xY). Given a second set of coordinates axes (I. figure 4 shows the relationship between J and i . we have for j and k .Projective Equations Page 37 OP = xi + y j + z k Figure 14. cos(yX) j = cos(yY) cos(yZ) cos(zX ) k = cos(zY ) cos(zZ) Figure 15. one can write the unit vector in terms of the direction cosines as: i ⋅ I cos(xX ) i = i ⋅ J = cos(xY ) i ⋅ K cos(xZ) Similarly. . K). one can write similar relationships for the same point P. y. k coordinate axes. J. Each coordinate axes has an angular relationship to each of the i.

the vector from O to P can be written as cos(xX ) cos(yX ) cos(zX ) cos(xY ) + y cos(yY ) + z cos(zY ) OP = x cos(xZ ) cos(yZ) cos(zZ) cos(xX ) cos(yX ) cos(zX ) x X = cos(xY ) cos(yY ) cos(zY ) y = Y cos(xZ ) cos(yZ) cos(zZ) z Z This can be written more generally as X = Rx To solve these unknowns using only three angles. 1993]. Rotation combinations. R. designating R as three column vectors [R = (r1 r2 r3)]. 6 orthogonal conditions must be applied to the rotation matrix. When the +z-axis is moved . we have r1T r1 = r2t r2 = r3T r3 = 1 r1T r2 = r2T r3 = r3T r1 = 0 Sequential Rotations Combination 1) Roll (ω) – Pitch (ϕ) – Yaw (κ) 2) Pitch (ϕ) – Roll (ω) – Yaw (κ) 3) Heading (H) – Roll (ω) – Pitch (ϕ) 4) Heading (H) – Pitch (ϕ) – Roll (ω) 5) Azimuth (α) – Tilt (t) – Swing (s) 6) Azimuth (α) – Elevation (h) – Swing (s) Table 1.Projective Equations Page 38 Then. These are shown in Table 1 and they all presume a local space coordinate system. Roll (ω) is a rotation about the x-axis where a positive rotation moves the +y-axis in the direction of the +z-axis. Thus. All vectors must have a length of 1 and any combination of the two must be orthogonal [Novak. Axes of Rotation x–y-z y–x-z z–x-y z–y-x z–x-z z–x-z Applying three sequential rotations about three different axes forms the rotation matrix. Doyle [1981] identifies a series of different combinations. Pitch (ϕ) is a rotation about the y-axis.

tertiary. azimuth and swing have a range from 0° to 360° while the tilt angle will vary between 0° to 180°. Moreover. Thus. Swing is a clockwise angle in the plane of the photograph measured about the z-axis from the +y-axis to the nadir side of the principal line. we rotate the ground coordinates to a photo parallel system. one can write . Professor Earl Church developed (5) in his photogrammetric research whereas the ballistic cameras often used the 6th combination. In the first section we saw that the coordinate transformation can be written in the following form: X P cos α sin α U P Y = − sin α cos α V P P In the photogrammetric approach. This rotation is positive when the +x-axis is moved towards the +z-axis. and κ . The rotation is positive when the camera axis is above the X-Y plane. we can look at the planar rotations of the axes in sequence. the new values for Y and Z are not affected by the X-coordinate. we should realize that the X-coordinate does not change but the Y and Z coordinates do change (figure 5).Projective Equations Page 39 towards the +x-axis then the rotation is positive. If we look at the ω rotation about the X1 axis. Heading.primary. Heading (H) is a clockwise rotation about the Z-axis from the +Y-axis to the +X-axis. The combinations (1) and (2) are frequently used in stereoplotters while (3) and (4) are common in navigation. All of these angles have a range from -180° to +180°. Tilt (t) is a rotation about the x-axis and is defined as the angle between the camera axis and the nadir or Z-axis. This involves three rotations: ω . Azimuth (α) is a clockwise rotation about the Zaxis from the +Y-axis to the principal plane. Derivation of the Gimbal Angles For a physical interpretation of the rotation matrix written in terms of the directions cosines. elevation (h) is a rotation in the vertical plane about the x-axis from the X-Y plane to the camera axis.secondary. Finally. A rotation about the z-axis is called yaw (κ) with a positive rotation occurring when the +x-axis is rotated towards the +y-axis. ϕ .

C 2 = M ωC The next rotation is a ϕ.rotation about the once rotated Y2-axis. One can write X 3 = X 2 ⋅ cos ϕ + Y2 ⋅ 0 + Z 2 ⋅ (− sin ϕ) Y3 = X 2 ⋅ 0 + Y2 + Z 2 ⋅ 0 Z 3 = X 2 ⋅ sin ϕ + Y2 ⋅ 0 + Z 2 ⋅ cos ϕ or in matrix form: X 3 cos ϕ 0 − sin ϕ Y = 0 1 0 3 Z 3 sin ϕ 0 cos ϕ X 2 Y 2 Z2 .Projective Equations Page 40 Figure 16. X 2 = X 1 + Y1 ⋅ 0 + Z1 ⋅ 0 Y2 = X1 ⋅ 0 + Y1 ⋅ cos ω + Z1 ⋅ sin ω Z 2 = X1 ⋅ 0 + Y1 ⋅ (− sin ω) + Z1 ⋅ cos ω or in matrix form 0 0 X1 X 2 1 Y = 0 cos ω sin ω Y 2 1 Z 2 0 − sin ω cos ω Z1 or more concisely. Rotation angles in photogrammetry.

the elements of MG are shown as: cos ϕ cos κ cos ω sin κ + sin ω sin ϕ cos κ sin ω sin κ − cos ω sin ϕ cos κ M G = − cos ϕ sin κ cos ω cos κ − sin ω sin ϕ sin κ sin ω cos κ + cos ω sin ϕ sin κ sin ϕ − sin ω cos ϕ cos ω cos ϕ If the rotation matrix is known. then the angles (κ. we have the κ-rotation about the twice-rotated Z3-axis (see figure 5). Y1. This becomes X' = X 3 ⋅ cos κ + Y3 ⋅ sin κ + Z 3 ⋅ 0 Z' = X 3 ⋅ 0 + Y3 ⋅ 0 + Z 3 which in matrix form is X' cos κ sin κ 0 Y' = − sin κ cos κ 0 Z' 0 0 1 or more concisely as C' = M κ C 3 Y' = X 3 ⋅ (− sin κ ) + Y3 ⋅ cos κ + Z 3 ⋅ 0 X 3 Y 3 Z3 Thus. 1981] . the transformation from the survey parallel (X1. ω) can be computed as [Doyle.Projective Equations Page 41 or more concisely. C3 = M ϕC2 Finally. Z1) system is shown as X' X 1 Y' = M Y = M M M G 1 κ ϕ ω Z' Z1 X1 Y 1 Z1 Performing the multiplication. ϕ.

Since the comparison is performed in image space. α) are being used. This relationship can be written as a = kM A Recall that we wrote two basic equations relating the location of a point in the photo coordinate system and ground nadir position. X' x − x o Y' = y − y o Z' − f X 1 X − X L Y = Y − Y L 1 Z1 Z − Z L and . the object space coordinates are rotated into a parallel coordinate system. The only difference is a scale factor. s. then the rotation matrix can be derived in a similar fashion. The values for M are: − cos s cos α − cos t sin α sin s cos s sin α − cos t cos α sin s − sin t sin s M = sin s cos α − cos t sin α cos s − sin s sin α − cos t cos α cos s − sin t cos s − sin t sin α − sin t cos α cos t If the rotation matrix is known then the Church angles can be found using the following relationships [Doyle.Projective Equations Page 42 tan ω = − m 32 m 33 sin φ = m 31 tan κ = − m 21 m11 If the so-called Church angles (t. 1981]: tan α = m 31 m 32 or 2 2 2 sin t = m 31 + m 32 = m13 + m 2 23 cos t = m 33 tan s = m13 m 23 The collinearity concept means that the line form object space to the perspective center is the same as the line from the perspective center to the image point (figure 6).

YL. This equation takes the ground coordinates and translates them to the ground nadir position. The rotation matrix (MG) takes those translated coordinates and rotates them into a system that is parallel to the photograph. ZL) and the tilt that exists in the photography (κ. ϕ. If we express this last equation algebraically. Then.Projective Equations Page 43 Figure 17. divide the first two equations by the third. Finally. The result is the predicted photo coordinates of the ground points given the exposure station coordinates (XL. Thus. Collinearity condition. then we have x − x o = k [m11 (X − X L ) + m12 (Y − YL ) + m13 (Z − Z L )] y − y o = k [m 21 (X − X L ) + m 22 (Y − YL ) + m 23 (Z − Z L )] − f = k [m 31 (X − X L ) + m 32 (Y − YL ) + m 33 (Z − Z L )] To eliminate the unknown scale factor. ω). these coordinates are scaled to the photograph. m (X − X L ) + m12 (Y − YL ) + m13 (Z − Z L ) x − x o = −f 11 m 31 (X − X L ) + m 32 (Y − YL ) + m 33 (Z − Z L ) m (X − X L ) + m 22 (Y − YL ) + m 23 (Z − Z L ) y − y o = −f 21 m 31 (X − X L ) + m 32 (Y − YL ) + m 33 (Z − Z L ) . x − x o y − y = k o −f m11 m 21 m 31 m12 m 22 m 32 m13 m 23 m 33 X − X L Y − Y L Z − ZL where k is the scale factor.

For the second constraint. lets see if the first condition is met. m11m12 + m 21m 22 + m 31m32 = 0 2 2 2 2 m11 + m 2 + m31 = m12 + m 2 + m 32 21 22 If we look at the equation for MG above. 1981]: [ ] .Projective Equations Page 44 Otto von Gruber first introduced this equation in 1930. 2 2 m11 + m 2 + m 31 = cos 2 ϕ cos 2 κ + cos 2 ϕ sin 2 κ + sin 2 ϕ 21 ( ) = cos 2 ϕ (cos 2 κ + sin 2 κ ) + sin 2 ϕ = 1 The right side of the equation becomes 2 2 m12 + m 2 + m 32 = cos 2 ω sin 2 κ + 2 sin ϕ cos κ sin κ cos ω sin ω + sin 2 ϕ cos 2 κ sin 2 ω 22 + cos 2 κ cos 2 ω − 2 sin ϕ cos κ sin κ cos ω sin ω + sin ϕ sin 2 κ sin 2 ω + cos 2 ϕ sin 2 ω = sin 2 κ cos 2 ω + sin 2 ϕ cos 2 κ sin 2 ω + cos 2 κ cos 2 ω + sin 2 ϕ sin 2 κ sin 2 ω = sin 2 ϕ sin 2 ω (cos 2 κ + sin 2 κ ) + cos 2 ω + cos 2 ϕ sin 2 ω + cos 2 ϕ sin 2 ω = sin 2 (sin 2 ϕ + cos 2 ϕ) + cos 2 ω = 1 Thus. Since (X – XL). the first condition is met. (Y – YL) and (Z – ZL) are proportional to the direction cosines of A . these equations can also be presented as [Doyle. This equation must satisfy two conditions [Novak. lets first look at the left hand side of the equation. 1993]. both sides of the equation equal are equal to one and to each other. m11 m12 + m 21 m 22 + m 31 m 32 = cos ϕ cos κ(cos ω sin κ + sin ω sin ϕ cos κ ) + (− cos ϕ sin κ )(cos ω cos κ − sin ξsuinϕ sin κ ) − cos ϕ sin ϕ sin ω = cos ϕ cos κ sin κ cos ω + cos ϕ sin ϕ cos 2 κ sin ω − cos ϕ cos κ sin κ cos ω + cos ϕ sin ϕ sin 2 κ sin ω − cos ϕ sin ϕ sin ω = cos ϕ cos κ sin κ cos ω + cos ϕ sin ϕ sin ω cos 2 κ + sin 2 κ − cos ϕ cos κ sin κ cos ω − cos ϕ sin ϕ sin ω =0 Thus.

It would be interesting to see how these equations stand up to the basic principles learned in basic photogrammetry. the projective equations become (x − x o ) = k (X − X L ) (y − y o ) = k (Y − YL ) − f = k (Z − Z L ) . If we look at the collinearity equations. cos β and cos γ are the direction cosines of A The inverse relationship is m (x − x o ) + m 21 (y − y o + m 31 (− f )) X − X L = (Z − Z L ) 11 m13 (x − x o ) + m 23 (y − y o + m 33 (− f )) m (x − x o ) + m 22 (y − y o + m 32 (− f )) Y − YL = (Z − Z L ) 12 m13 (x − x o ) + m 23 (y − y o + m 33 (− f )) These equations are referred to as the collinearity equations. the rotation matrix for a truly vertical photo would be the identity matrix.Projective Equations Page 45 x − x o = −f m11 cos α + m12 cos β + m13 cos γ m 31 cos α + m 32 cos β + m 33 cos γ m 21 cos α + m 22 cos β + m 23 cos γ m 31 cos α + m 32 cos β + m 33 cos γ y − y o = −f Here. being at the nadir point with the X-axis coinciding with the line from opposite fiducials in the flight direction. Thus. Recall that for a truly vertical photograph that the scale at a point can be written using S= f x y = = H−h X Y Here we assumed that the principal point coincided with the indicated principal point and that the X and Y ground coordinates were related to the origin. 1 0 0 = 0 1 0 0 0 1 M Vert Then. cos α.

F1 = (x − x o ) + f F2 = (y − y o ) + f U =0 W V =0 W where U and V are the numerators in the projective equations given earlier and W is the denominator. Thus. For simplicity. it will appear as: ∂F1 ∂x o B= ∂F2 ∂x o ∂F1 ∂y o ∂F2 ∂y o ∂F1 ∂F1 ∂f ∂X L ∂F2 ∂F2 ∂f ∂X L ∂F1 ∂YL ∂F2 ∂YL ∂F1 ∂Z L ∂F2 ∂Z L ∂F1 ∂ω ∂F2 ∂ω ∂F1 ∂ϕ ∂F2 ∂ϕ ∂F1 ∂F1 ∂κ ∂X i ∂F2 ∂F2 ∂κ ∂X i ∂F1 ∂Yi ∂F2 ∂Yi ∂F1 ∂Z i ∂F2 ∂Z i The first section contains the partial derivatives with respect to the interior orientation. x = k (X − X L ) y = k (Y − YL ) f = k (H − h ) Dividing the first two equations by the third ad manipulating the equation yields the identical scale relationships given in basic photogrammetry. the second group are the partials with respect to the exterior orientation.Projective Equations Page 46 If we further assume that the principal point is located at the intersection of opposite fiducials and if we substitute H for ZL and h for Z. From adjustments. x f Y = = X − X L H − h Y − YL LINEARIZATION OF THE COLLINEARITY EQUATION The linearization of the collinearity equations are given in a number of different textbooks. The developments presented here follow that outlined by Doyle [1981]. and the third group are . then. lets define the projective equations in the following form. we know that the general form of the condition equations can be written as AV + B∆ + F = 0 The deign matrix (B) is found by taking the partial derivative of the projective equations with respect to the parameters.

we will use the following general differentiation formulas: ∂U ∂W W −U ∂F1 ∂P ∂P = f ∂V − U ∂W =f W ∂P W ∂P ∂P W2 ∂V ∂W W −V ∂F2 ∂P ∂P = f =f W ∂P W2 ∂V V ∂W − ∂P W ∂P where P are the parameters. yo.Projective Equations Page 47 the partials with respect to the ground coordinates. YL. and W become: ∂U = − m11 ∂X L ∂V = − m 21 ∂X L ∂W = − m 31 ∂X L ∂U = − m12 ∂YL ∂V = −m 22 ∂YL ∂W = − m 32 ∂YL ∂U = − m13 ∂Z L ∂V = − m 23 ∂Z L ∂W = − m 33 ∂Z L Then the partial derivatives of the functions F1 and F2 can be shown to be . For the exposure station coordinates (XL. the partial derivatives of the functions U. The partial derivatives of the interior orientation (xo. ZL). ∂F1 = −1 ∂x o ∂F2 =0 ∂x o ∂F1 =0 ∂y o ∂F2 = −1 ∂y o ∂F1 U = W ∂f ∂F2 V = W ∂f For the partial derivatives taken with respect to the exposure station coordinates. V. and f only) are very basic.

and W taken with respect to the orientation angles becomes U V = M G W Yielding. X i − X L Y − Y L i Zi − Z L .Projective Equations Page 48 ∂F1 f = ∂X L W U m 31 − m11 + W ∂F2 f = ∂X L W V − m 21 + m 31 W ∂F1 f U m 32 = − m12 + W ∂YL W ∂F1 f U m 33 = − m13 + W ∂Z L W ∂F2 f V = − m 22 + m 32 W ∂YL W ∂F2 f V m 33 = − m 23 + W ∂Z L W Recall that the rotation matrix is given in the sequential form as M G = M κ M ϕM ω then the partial derivatives of the orientation matrix with respect to the angles can be shown to be 0 0 0 ∂M G ∂M ω = MκMϕ = M G 0 0 1 ∂ω ∂ω 0 − 1 0 sin ω − cos ω 0 ∂M ϕ ∂M G − sin ω = Mκ Mω = MG 0 0 ∂ϕ ∂ϕ cos ω 0 0 0 1 0 ∂M G ∂M κ = M ϕ M ω = − 1 0 0 M G ∂κ ∂κ 0 0 0 Then the partial derivatives of the functions U. V.

∂F1 f ∂U U ∂W = − ∂ω W ∂ω W ∂ω ∂F1 f ∂U U ∂W = − ∂ϕ W ∂ϕ W ∂ϕ ∂F1 f ∂U U ∂W = − ∂κ W ∂κ W ∂κ .Projective Equations Page 49 ∂U ∂ω X i − X L ∂M G ∂V = Yi − YL ∂ω ∂ω ∂W Zi − Z L ∂ω ∂U ∂ϕ ∂V = ∂M G ∂ϕ ∂ϕ ∂W ∂ϕ ∂U ∂κ ∂V = ∂M G ∂κ ∂κ ∂W ∂κ X i − X L Yi − YL Zi − Z L X i − X L Yi − YL Zi − Z L Now one can evaluate the partial derivatives of F1 and F2 with respect to the orientation angles.

Projective Equations Page 50 ∂F2 f ∂V V ∂W = − ∂ω W ∂ω W ∂ω ∂F2 f = ∂ϕ W ∂V V ∂W ∂ϕ − W ∂ϕ ∂F2 f ∂V V ∂W − = ∂κ W ∂κ W ∂κ The partial derivatives of the functions F1 and F2 with respect to the survey points are shown to be: ∂F1 ∂F f =− 1 = ∂X i ∂X L W U m11 − m 31 W ∂F2 ∂F f V =− 2 = m 21 − m 31 ∂X i ∂X L W W ∂F1 ∂F f U =− 1 = m 32 m 12 − ∂Yi ∂YL W W ∂F2 ∂F f V =− 2 = m 32 m 22 − ∂Yi ∂YL W W ∂F1 ∂F f U =− 1 = m 33 m 13 − ∂Z i ∂Z L W W ∂F2 ∂F f V =− 2 = m 33 m 23 − ∂Z i ∂Z L W W .

Case I: Compute the elements of exterior orientation (κ. This can easily be visualized by the use of the global positioning system (GPS) on-board the aircraft. XL. yi) and treating the survey control coordinates (Xi. Case IV is a further refinement of Case III except that the elements of interior orientation are observed in addition to the photo coordinates. and survey coordinates. The solution is to find the adjusted exterior orientation parameters and the survey coordinates. Y = the parameters for the condition function. Here the observations include photo coordinates. This approach is an extension of Case II. and survey coordinates (to unknown points). and Zi) as known. and ZL) by observing the photo coordinates (xi. Merchant [1973] has identified four different cases. ω. exterior orientation. Y) = 0 where: obs = the observed quantities.Numerical Resection and Orientation Page 51 NUMERICAL RESECTION AND ORIENTATION Introduction Numerical resection and orientation involves the determination of the coordinates of the exposure station and the orientation of the photograph in space. Case II: Case III: Case IV: The general notation for the mathematical model is given as F = F(obs. exterior orientation. in order of increasing complexity. The survey control (coordinates to known points) is given. YL. A Taylor’s Series evaluation is done to linearize the equation and this is shown as ∂F ∂F F = F0 + ∆ 0 + ∂ (obs ) V = 0 00 0 ∂ (X ) 00 00 (1) . and X. Yi. The adjustment will result in adjusted exterior and interior orientation and survey coordinates. X. This is an extension of Case I with the addition that the elements of exterior orientation are also observed quantities. ϕ.

The math model employs the central projective equations as the conditional function. F = f + B∆ + AV = 0 or in a more general form: AV + B∆ + f = 0 where: V = the residuals on the observations. ϕ. Evaluation of this function results in the observation equation. f = the discrepancy vector found by comparing the mathematical model using the current estimate of the parameters with the observed values. YL. XL. This series is evaluated by comparing the observations to the current estimates of what those values need to be. ∆ = the alteration vector to the parameters. The observed values are the photo coordinates (xi. and ZL) and variancecovariance matrix on the adjusted parameters (3eoo) are computed. ω. Case I Case I is the simplest form of the space resection problem. for x and y: F (x ) = ( x − x o ) − c F (y ) = ( y − y o ) − c ∆X =0 ∆Z ∆Y =0 ∆Z The observation equations are written as AV + B∆ + f = 0 e e . It is shown in general form as: F (x ) F(x ) = F (y ) where the central projective equations are.Numerical Resection and Orientation Page 52 The subscript “0” indicates an observed parameter value whereas “00” means the current estimate of the value. yi). The elements of interior orientation along with the survey coordinate control are taken as error free. The observational variance-covariance matrix (3oo) is estimated and the exterior orientation elements ((κ.

ϕ. This is usually sufficient for a two-axis comparator but the correlation cannot be neglected for polar comparators. ω. δX L . Differentiation of the function yields . ω. δϕ.where: ∂ F(x j ) ∂ (x j . the general form for n points is V + B ∆+ f = 0 (2) If the number of photo points is larger than three then a least squares adjustment is performed. X L . Z L ) ∂ Fj Bj = = ∂ (Parameters ) ∂ F (y j ) ∂ (κ. δZ L ] Thus. y j ) ∂ Fj 1 0 = Aj = = =I ∂ (obs j ) ∂ F(y ) 0 1 j ∂ (x j . The function to be minimized is expressed as e w F = V T WV − 2 λ V + B ∆ + f e e where: λ = the Langrangian multiplier (vector of correlates). ϕ. YL . y j ) ∂ F (x j ) e ∂ (κ. X L . δYL . Z L ) Numerical Resection and Orientation Page 53 v x Vj = j v y j F (x j ) f = F (y j ) O OO ∆ = [δκ. and W = the weight matrix for the photo observations. YL . which is defined as W = ∑ −1 OO obs The weight matrix is usually assumed to be a diagonal matrix derived from the a priori estimates of the observational variance-covariance matrix. δω.

and m in ∆ . Collecting the observation equation and the differentiated function gives I V 0 W 0 −1 e e 0 0 B ∆ + 0 = 0 λ f e I B 0 Eliminating V and λ and substituting V from (3) into (2) yields W −1λ + B ∆ + f = 0 or λ = − W B ∆ − Wf Substituting λ into (4) results in the normal equations e e e e B W B ∆ + B Wf = 0 T T e e e e (5) or N ∆+ t = 0 e where: N = the normal coefficient matrix and t = the constant vector The adjusted parameters become ∆ = − N −1 t e The adjusted parameters are found by adding the corrections to those parameters: . 2n in λ.Numerical Resection and Orientation Page 54 ∂F = 2W V − 2λ = 0 ∂V e = −2 B λ = 0 e ∂∆ e (3) ∂F T ( 4) There are (4n + m) unknowns: 2n in V.

lens distortion. . and earth curvature. the process is iterated until the alteration vector reaches some predefined value. A weight matrix for the photo observations was based on a standard error of 10µm. Case I problem. The exterior orientation is estimated. Following is an example of the single photo resection and orientation. Survey control is treated as error free.Numerical Resection and Orientation Page 55 X = X+ ∆ a oo e In the least squares adjustment. The process of updating the parameter values before undergoing another adjustment is commonly referred to as the “Newton-Raphson” method. the photo observations are measured quantities already corrected for atmospheric refraction. The residuals are computed as follows: V + B∆ + f = 0 Therefore V+f =0 V = −Fo a e e The unit variance is expressed as 2 σo = V T WV 2n − 6 with the variance-covariance matrix relating the adjusted parameters is ∑ X oo e 2 = σ o N −1 Example Single Photo Resection – Case 1 Following is an example of a single photo resection and orientation. The following data are entered into the program.

147 -54.922 105.21400 254.21800 11 44815. X 1 44646.027 -69.18200 109650.89000 110475.43000 5 46797.047 -26.449 -71.861 -70.97300 288.1500 Phi 0.00100 111122.783 -85.982 -73.0000 The initial values for the design matrix (B) and the discrepancy vector (f) are shown as: .458 y 79.85200 268.27900 13 47061.85000 262.99000 263.012 Survey Control Points: Point No.22300 6 46334.86600 275.889 88.026 85.240 65.01300 111086.082 79.65500 261.10100 254.17400 110910.53100 275.13500 10 45126.89000 8 45328.Numerical Resection and Orientation Page 56 SINGLE PHOTO RESECTION AND ORIENTATION .094 -80.04500 9 46087.32000 266.0000 Omega 0. 1 2 3 4 5 6 7 8 9 10 11 12 13 x 61.34300 110531.046 -34.468 -12.972 12.53700 109932.63900 Exterior Orientation Elements (Estimated) XL YL ZL 45900.80000 12 46489.17600 110795.42700 Z 273.893 -23.CASE I Photo Number 1 Photo observations: Point No.42300 Y 111295.84500 291.980 -11.63000 110193.934 -26.16300 111729.26800 7 45019.287 -31.0000 2090.36500 255.836 -6.87600 110933.018 78.0000 111150.0000 Kappa 2.899 -29.31900 111261.75000 2 45527.523 27.70500 4 46322.20300 3 45536.

0455 B= 0.0382 0.0129 138.7656 − 128.0456 − 0.0566 − 73.0701 0.6172 − 132.8029 79.9669 − 2.0875 The initial values for the normal coefficient matrix (N) are: .0462 0.3670 70.0453 − 0.0694 0.0038 − 0.0698 0.6635 − 142.9897 − 5.1336 − 85.1510 23.0857 − 5.4270 − 2.0455 − 0.0510 − 132.0077 6.0456 0.5513 − 13.6357 − 5.1100 − 141.0181 0.0698 − 0.7222 − 129.5356 − 57.0157 0.0458 − 0.0504 0.3077 − 127.3384 50.0459 0.0701 0.0454 0.5356 82.0455 0.9512 − 78.5686 92.7773 − 129.0184 − 0.0693 0.1348 − 67.0696 0.8770 − 171.0693 0.7577 − 3.1434 − 179.0349 82.6350 76.8465 87.0447 0.9540 − 2.5084 − 3.3078 − 134.0706 0.0157 7.0967 − 88.0376 − 0.8317 − 129.5123 104.8732 110.8754 21.0696 0.1797 − 144.0370 0.1953 − 69.0696 0.0458 0.0456 0.0043 − 0.0707 0.0128 0.0144 0.9332 − 103.0698 0.0462 − 0.1034 − 4.0427 0.0500 − 79.8692 − 131.8336 − 149.9679 − 122.0278 − 0.0486 − 0.8596 − 183.0136 − 4.0706 0.0463 0.9245 77.6027 − 127.0694 0.5405 174.5381 33.0453 − 0.0442 − 0.5743 − 3.2036 87.0449 − 0.1935 − 153.0452 0.9068 109.8762 f= − 3.8062 119.0610 0.5210 − 64.0118 0.7212 − 4.4727 − 3.4114 − 3.4782 − 160.8286 − 168.2941 − 99.0017 0.7678 117.0463 − 0.0459 − 0.0074 0.9943 82.0457 0.8825 − 161.8061 94.0701 0.0700 0.3986 23.4395 − 2.0457 − 0.8890 − 3.7032 − 173.4771 − 3.2580 69.1354 − 28.3097 − 2.0696 0.0455 − 0.0128 0.9310 − 4.8223 − 129.0693 0.1628 − 2.3882 − 88.0456 − 0.0696 0.0453 0.0356 81.5689 − 67.6316 − 33.6046 − 1.0693 0.5384 − 79.9612 68.0504 − 1.4256 − 26.6295 − 133.6286 − 2.0458 − 0.0370 0.2877 − 125.1204 − 67.7489 − 2.6765 − 2.1848 − 193.0700 0.0696 0.0458 0.Numerical Resection and Orientation Page 57 0.0698 0.0454 − 0.8997 − 162.6106 81.0372 − 0.0453 0.0707 0.0701 0.5647 − 5.5140 82.

0000 216476.70268 -0. 1 Alteration Vector (Delta): -8.9000 − 479382745.0311 − 1942260.72201 0.9100 4273950234.15331 -3.00022 .2187 − 116276.15855 -0.00013 0.01941 0.527 14980.0000 − 106967. 2 Alteration Vector (Delta): 0.7400 − 207855784.8256 0.6390 905.7816 106967.492 t= − 37790387.1900 4091640942.995 The following data represent the values for the alteration vector for each iteration.00958 Iteration No.61417 0.00012 0.450 26185558.8000 The initial values for the constant vector (t) are compute a t = BTW and are shown as − 51743.6894 0.580 108535637.4125 − 535392.210 326.6390 905.6894 − 195. Iteration No.Numerical Resection and Orientation Page 58 − 65.4928 N= 961373269.94869 -0.6170 291.02176 0.1300 − 641531976.9805 353883.2887 1886194.

001 0.0098 Residuals on Photo Observations: Point No.009 0.000 0.00000 0.00000 0.00000 Exterior Orientation Elements (Adjusted) XL 45892.00000 0. 4 Alteration Vector (Delta): 0.1281 Phi Omega 0.00000 0.0015 0.004 -0.002 -0.002 0.006 0.00000 0.007 0.002 -0.4624 YL 111146.001 0.002 -0.00033 0.7719 ZL 2090. 1 2 3 4 5 6 7 8 9 10 11 12 13 x -0.000 0.006 0.00000 0.000 0.004 y -0.002 -0.007 -0.001 -0.3471294 .011 0.0195 0.011 -0. 3 Alteration Vector (Delta): 0.001 0.00000 0.5445 Kappa 2.Numerical Resection and Orientation Page 59 Iteration No.002 -0.006 -0.0015 -0.004 -0.007 0.006 The A Posteriori Unit Variance is .00000 Iteration No.

0233948622 .0011026685 .0020985439 -.0000000000 .0000002099 .0000001678 -.0000016307 .0020985439 -.0025329779 .0000000001 -.0000001566 -.0000000007 -.0000000932 . the adjusted parameters can only be estimated initially.0000000005 -. κ + vκ = κ + δκ o o oo oo ϕ + vϕ = ϕ + δϕ ω + vω = ω + δω o oo X L + v XL = X L + δ XL o oo YL + v YL = YL + δ YL o oo Z L + v ZL = Z L + δ Z L o oo .0000016307 .0000104961 -.0000000007 .0000001678 .0000018958 .0000104961 -.0000000048 . This is. This new resection application adds a new math model to the adjustment.0000000039 Case II With Case II. we introduce direct observations on the parameters.0000075937 -. Although this will only provide the exposure station coordinates. integrated systems such as GPS with inertial navigation can yield the rotational elements also.0000075937 .0000000932 . for all of the exterior orientation elements: F(κ ) = κ − κ = 0 F(ϕ) = ϕ − ϕ = 0 F(ω) = ω − ω = 0 F(X L ) = X L − X L = 0 F(YL ) = YL − YL = 0 F(ZL ) = Z L − ZL = 0 o a o a o a o a o a o a Since the observations have residuals. A growing example of this situation is the use of airborne GPS where the receiver is used to determine the exposure station of the camera at the instant of exposure.0000000001 .0000018958 -.0000009114 .0154028192 -.Numerical Resection and Orientation Page 60 The Variance-Covariance Matrix of Adjusted Parameters is: .0000001566 -.0034834200 -.0000002099 -.0011026685 -.0000000000 .0000009114 . Thus.0034834200 .

we have vκ + κ − κ − δκ = 0 o oo ( ) vϕ + ϕ − ϕ − δϕ = 0 o oo vω + ω − ω − δω = 0 ( o oo ) v XL + X L − X L − δ XL = 0 oo o v YL + YL − YL − δ YL = 0 oo o v Z L + Z L − Z L − δ ZL = 0 oo o The observation equations are V −∆ +f =0 Grouped with the observation equations developed for the photo coordinates. we have V + B∆ + f = 0 V −∆ +f =0 e e e e e e e e or V + B∆ + f = 0 where: V V=e V e B=B − I f f = e f e The function to be minimized is e T F = V W V − 2 λT V + B ∆ + f where the weight matrix is shown to consist of .Numerical Resection and Orientation Page 61 Rearranging.

one can see that as the weight matrix for the observed exterior orientation goes to zero then the normal equation reduces to Case I. e −1 . As before. after performing the multiplication e e e e e eT e B W B + W ∆ + BT Wf − W f = 0 or generally shown as N∆ + t = 0 On the first cycle in the adjustment the estimates of the parameters are the same as the observed values X=X oo o a a e Therefore.Numerical Resection and Orientation Page 62 W 0 e W= 0 W The normal equations are then written as (B WB)∆ + B Wf = 0 T e T Which in an expanded form looks like eT B e W 0 B e eT e − I ∆ + B 0 W − I W 0 f e − I e = 0 0 W f resulting in. the solution is expressed as ∆ = −N t The adjusted parameters are computed by adding the discrepancy vector to the current estimate of the parameters. the discrepancy vector becomes O f = F OO = 0 e a Looking at the normal equations.

The math models are: • For collinearity: F (x ) = ( x − x o ) − c F (y ) = ( y − y o ) − c ∆X =0 ∆Z ∆Y =0 ∆Z • For the exterior orientation: F(κ ) = κ − κ = 0 F(ϕ) = ϕ − ϕ = 0 F(ω) = ω − ω = 0 F(X L ) = X L − X L = 0 F(YL ) = YL − YL = 0 F(Z L ) = Z L − Z L = 0 o a o a o a o a o a o a . The cycling continues until the solution reaches some acceptable level. The residuals are then found by evaluating the function using the observed and final adjusted values. the a posteriori variance-covariance matrix is found by multiplying the unit variance by N-1. 2 ∑ = σo N oo x e 2 Case III Case III is an extension of Case II in that we now introduce the spatial coordinates as observed quantities thereby constraining the parameters.Numerical Resection and Orientation Page 63 X=X+∆ a oo e The adjustment is iterated by making these adjusted parameters the current estimates. V = −Fo a The unit variance is computed as V WV σ = 26 − 6 2 o T Finally.

survey coordinates V and photo coordinates (V) are defined as: e s e s e s e e s s vκ v ϕ e vω V= v XL vY L v ZL v X1 v Y1 s vZ V= 1 v X 2 v Zn v x1 v y1 v x V= 2 v x n x yn e s The discrepancy vectors f . f . f are computed by evaluating the functions using the current estimates of the unknown parameters and the original observations. F(x j ) f = F(y j ) o oo F (κ ) F (ϕ) e F (ω) f = F (X L ) F (YL ) F (Z L ) O OO F(X1 ) F(Y ) 1 F(Z J ) s f = F(X n ) F(Yn ) F(Z ) n The alterations to the current assumed value are shown as: .Numerical Resection and Orientation • Page 64 For the ground control: F(X j ) = X j − X j = 0 F(Yj ) = Yj − Yj = 0 F(Z j ) = Z j − Z j = 0 o a o a o a The observation equations then become V −∆ +f =0 V −∆ +f =0 V + B∆ + B∆ + f = 0 e s Where the observational residuals on the exterior orientation V .

Z j . Z n ) s B= ∂F(Z j ) ∂ (X 1 . Y1 . B and B . Z1 . X L . . are presented as being ∂F(X1 ) ∂ (X . Z . Z L ) ∂F(y n ) ∂ (κ. Y1 . X . ϕ. Z n ) ∂F(Z1 ) ∂ (X 1 . . . YL . Z j . Z1 . . X L . YL . . ω. ω. Z . Z ) 1 1 1 j n ∂F(Y1 ) ∂ (X 1 . Y . ϕ. Y . ω. Y1 . . Z j . Z1 . X L . Z n ) ∂F(Z n ) ∂ (X 1 . . Z L ) e B= ∂F(x n ) ∂ (κ. . Z n ) ∂F(x 1 ) ∂ (κ. Z L ) Collecting the observations . ϕ.Numerical Resection and Orientation Page 65 δκ δϕ e δω ∆= δX L δYL δZ L e s δX 1 δY 1 δZ1 S ∆= δX n δYn δZ n The design matrices. ω. Z ) L L L ∂F(y1 ) ∂ (κ. Z1 . . ϕ. Z j . . Y1 . YL .

.Numerical Resection and Orientation s e V B B e V + − I 0 s V 0 − I Page 66 f e e ∆ + f = 0 s ∆ s f or V + B∆ + f = 0 The function to be minimized is F = V W V − 2λT V + B∆ + f This leads to the normal equations T ( ) (B WB)∆ + (B Wf )= 0 T T where the weight matrix is assumed to be free of any correlation and takes the form: W e W= W s W The normal equation in the expanded form is e e e s eT B WB + W BT W B s s s BT W B + W e e e e T ∆ + B Wf − W f = 0 s s s s ∆ B T Wf − W f or as N∆ + t = 0 The solution is ∆ = −N t −1 Then the process is cycled until an acceptable solution is obtained.

a higher accuracy is needed which means observing pseudorange and phase. Experiences from using GPS-control showed several improvements [Salsig and Grissim. . For aerotriangulation. But. At the same time the cost and labor required for that control were lower than conventional surveying. those points that are “visible” using the GPS receivers are also free of major obstructions that would prevent the image from appearing in the photography.Principles of Airborne GPS Page 67 PRINCIPLES OF AIRBORNE GPS INTRODUCTION The utilization of the global positioning system (GPS) in photogrammetric mapping began almost from the inception of this technology. This data is based on antidotal information. therefore the photogrammetrist often received the control points in locations advantageous to them instead of the location determined from the execution of a conventional field survey. C/A-code or P-code pseudorange is all that is required. Here. Fortunately. This led to a better recovery rate for the control. Airborne-GPS is now a practical and operational technology that can be used to enhance the efficiency of photogrammetry. Because the accuracy of position for navigation and centered photography ranges from one to five meters. c) Unfortunately. Surveyors were not concerned with issues like intervisibility between control points. this does account for about 40% of the projects undertaken by photogrammetric firms. with these increasing windows came the idea of placing a GPS receiver within the mapping aircraft. 1994]. This changed as the satellite constellation began to reach its current operational status. particularly for large-area projects. the window from which GPS observations could be made was not always at the most desirable time of day. although Abdullah et al [2000] reports that only about 30% of the photogrammetry companies are using this technology at this time. Initially. real-time processing is not as important in terms of functionality. Airborne GPS can be used for: precise navigation during the photo flight centered or pin-point photography determination of the coordinates of the nodal point for aerial triangulation To achieve the first two applications the user requires real-time differential GPS positioning [Habib and Novak. It provided coordinate values that were of higher quality and more reliable than those using conventional field surveying techniques. GPS offered a major improvement in the control needed for mapping. The important capability is the real-time processing. 1995]: a) b) There was a better fit between the control and the aerotriangulation results. Visibility of the ground control point to the aerial camera is always important. Also.

YL. In addition. Unfortunately. One is placed over a point whose location is known and the other is mounted on the aircraft. the derived angular relationships only have a precision of about 1’ of arc while photogrammetrists need to obtain these values to better than 10” of arc. ADVANTAGES OF AIRBORNE GPS The main limitation of photogrammetry is the need to obtain ground control to fix the exterior orientation elements. The necessity of obtaining ground control is costly and timeconsuming. To compute the position of the camera during the project. Lapine [nd] points out that almost all of the National Oceanic and Atmospheric Administration (NOAA) aerial mapping projects utilize airborne-GPS because they have found efficiencies due to a reduction in the amount of ground control required for their mapping. GPS can also be used to derive the orientation angles by using multiple antennas. Generally.5 or 1 second. it offers the photogrammetrist additional advantages. Because phenomena change with time. Location. there are many instances where the ability to gather control is not feasible. Lucas. it is possible that the subject of the mapping has either changed or disappeared when that the control has been collected. Another limitation occurs when the results of the mapping need to be completed in a very short time period. Corbett and Short [1995] identify situations where this exists: a) Time. Cost. This gives the photogrammetrist XL. two dual frequency geodetic GPS receivers are commonly employed. The physical location of the survey site may restrict access because of geography or the logistics to complete a field survey may be such to make the survey prohibitive. The phenomena of interest may be hazardous or the subject may be located in an area that is dangerous for field surveys. could negate the economic advantages that photogrammetry offers. These include [Abdullah et al. and ZL. 2000. Even under normal conditions the charge for procuring control is high and. if too much is needed. GPS gives the photogrammetrist the opportunity to minimize (or even eliminate) the amount of ground control and still maintain the accuracy needed for a mapping project. b) c) d) While airborne GPS can be used to circumvent the necessity of ground control. Safety. The necessity of obtaining control under the conditions outlined above may make the cost of the project prohibitive because control surveys are a labor-intensive activity.Principles of Airborne GPS Page 68 Airborne GPS is used to measure the location of the camera at the instant of exposure. on-the-fly integer ambiguity resolution techniques are employed. The integer ambiguity must be taken into account and this will be discussed later. Carrier phase data are collected by both receivers during the flight with sampling rates generally at either 0. Tied to the other problems is that of cost. 1994]: .

If the pug is not properly adjusted then the point transfer may locate pass. It is now possible. The Texas Department of Transportation has determined that an error of 1 cm can be expected in centering the target over the point [Bains. For precise work. if for no other reason than to check the results. Photogrammetric errors include the following: a) Errors associated with the placement of targets. • As ground control gets smaller. • There is some initial financial investment by the mapping organization. This requires [Lucas. datum transformation problems become more important. The main problem is that the center of the target is not precisely defined. This is based on a 10 cm wide cross target. • Precise flight navigation and pin-point photography are possible with this technology. 1996] a near perfect system.Principles of Airborne GPS Page 69 • • • • It has a stabilizing effect on the geometry. These include [Abdullah et al. 1995]. an unlikely scenario.and tie-points erroneously. • There is less ground control. the process of marking control introduces another source of error into the photogrammetric process. it would be extremely prudent to have control. ERROR SOURCES The use of GPS in photogrammetry contains two sets of error sources and the introduction of additional errors inherent in the integration of these two technologies. there are special considerations that must be accounted for to ensure success for a project. • It reduces the hazards due to traffic. to use GPS aerotriangulation without any ground control. particularly for highway corridor mapping. these errors need to be accounted for. Regardless. b) . While airborne GPS is operational. There are some concerns that need to be addressed for a successful project. The attainable accuracy meets most mapping standards. Moreover. 2000]: • Risk is greater if the project is not properly planned and executed. Airborne GPS is operational and being used for more mapping projects. There is an increase in productivity by decreasing the amount of ground control necessary for a project. Substantial cost reduction for medium and large-scale projects are possible. at least theoretically. Errors inherent in the pug device used to mark control on the diapositives. • Requires non-traditional technical support.

this drift should be accounted for in the processing of GPS observations. Errors that can be found in the integration of GPS with the aerial camera and photogrammetry are [Bains. Some software cannot resolve cycle slips in a robust fashion. Geodetic quality receivers. c) Signal interruption. This is critical if continuous tracking is necessary in order to process the GPS signal. Bains [1995] has found that the current USGS calibration certificate does not provide the information needed for GPS assisted photogrammetry. b) Datum problems. 1995. Interruption may occur during sharp banking turns through the flight. should be employed for projects where high precision is required. nd]: a) The configuration of airborne GPS implies that the two data collectors are not physically in the same location. contrast within the image could be lost. The aerial camera is located within the . e) Receiver clock drift. Lapine. The GPS position is determined in the WGS 84 system whereas the survey coordinates are in some local coordinate system or in NAD 27 coordinates where there is no exact mathematical relationship between systems. d) Error sources for GPS are well identified. which represents a delay in the reception time. particularly in the kinematic mode. This error is due to reception of a reflected signal. The camera shutter can contain large random variability as to the time the shutter is open.Principles of Airborne GPS Page 70 c) Camera calibration is crucial in determining the distortion parameters of the aerial camera used in photogrammetry. Most of the time. There is also a limitation on the accuracy of different receivers used in the kinematic surveys. although newer on-the-fly ambiguity resolution software will help. Although this error is relatively small. This is particularly problemsome on surfaces such as the fuselage or on the wings. 1992. d) Geometry of the satellite constellation. with 1-2 cm relative accuracy. this error source is not that important but if this irregularity is too great. The GPS error sources include: a) Software problems can cause problems with a GPS mission. A loss or disruption of the GPS signal could cause problems in resolving the integer ambiguities and could result in erroneous positioning of the camera location thereby invalidating the project. f) Multipath. Merchant. The GPS antenna must be located outside and on top of the aircraft to receive the satellite signals. The major problem with this non-uniformity is when trying to synchronize the time of exposure to the epoch in which the GPS signal collecting data. Merchant [1992] states that a system calibration is more important with airborne GPS.

Merchant. The camera shutter can cause problems as was identified above. A receiver that can filter out this noise should be used. Since the instant of exposure does not coincide with the sampling time in the GPS receiver. which could introduce error in the position. Early experiments with the Wild RC10 with an external pulse generator showed wide variability in time between maximum aperture and shutter release [van der Vegt. particularly onboard the airplane. 1991. What should be considered is a system calibration whereby the whole process is calibrated and exercised under normal operating conditions [Lapine. c) d) e) Camera Calibration One of the weak links in airborne GPS involves the camera calibration. positional errors from 1-10 m could be expected. Different algorithms have varying characteristics. Of concern is the ability to trip the shutter on demand. This problem occurs only when the airborne-GPS system is based on an initialization process when solving for the integer ambiguities. Merchant [1992] points out that the delay from making the demand for an exposure to the midpoint of the actual exposure could be several seconds. Experience has found that there can be variability in this height based on the quantity of fuel in the aircraft. Because of the complex nature of combining different measurement systems within airborne GPS. the traditional camera calibration may not provide the information needed when GPS is used to locate the exposure station. 1992]. 1995]. Interpolation algorithm used to compute the position of the phase center of the antenna. an interpolation of the position of the antenna at the instant of exposure must be computed. For large-scale photography this could cause serious problems because of the turbulent air in the lower atmosphere and the interpretation from the GPS signal to the effective exposure time. As was pointed out earlier.Principles of Airborne GPS Page 71 aircraft and is situated on the bottom of the craft. In the worst case. Radio frequency interference can cause problems. The values ranged from 10-100 msec. Related to this uncertainty is the sampling rate used to capture the GPS signal. The separation distance between the antenna and camera (the nodal point) needs to be accurately determined. 1991]: . 1989]. This distance is found through a calibration process prior to the flight. b) Prior to beginning a GPS photogrammetric mission. Traveling at 100 m/sec. One example receiver is the Trimble 4000 SSI with Super-Trak signal processing which has been used successfully in airborne-GPS [Salsig and Grissim. The effect of this error creates a time bias. This value can also be introduced in the adjustment by constraining the solution or by treating it in the stochastic process. the height between the ground control point and the antenna needs to be measured. Too low of a rate will increase the processing whereas too high of a rate will degrade the accuracy of the interpolation model. two important drawbacks are identified with the traditional component approach to camera calibration [Lapine.

Kinematic measures the position of a point at the instant of the measurement. In the laboratory. Static surveying requires leaving the antennas over the points for an hour or more. fast static. It is the most accurate method of obtaining GPS surveying data. the space position of the nodal point of the camera are fixed and ground coordinates become extrapolated variables. Because of this. the potential loss of lock on the satellites has to be accounted for. It is assumed that the . Next. and kinematic. and continues to move.Principles of Airborne GPS Page 72 1. The exposure station coordinates are free parameters that are allowed to “float” during the adjustment thereby enforcing the collinearity condition. The first issue is the form of initialization of the receiver to fix the integer ambiguities. calibration can be performed under ideal and controlled conditions. This leads to different atmospheric conditions and variations in the noise found in photo measurements. Will it be at the airport or near the job site? The longer the distance between the base receiver and the rover on the plane the more uncertain will be the positioning results. 1993]. calibration of the photogrammetric system under operating conditions becomes critical if high-level accuracy is to be maintained. baseline accuracies determined from kinematic GPS will be 1 cm ± 2 ppm of the baseline distance from the base station to the receiver [Curry and Schuckman. when planning the flight lines. 2. Banking angles of 25° or less are recommended which results in longer flight lines [Abdullah et al. The effect of correlation between the different components of the total system are not considered. Merchant. survey control on the ground had the effect of compensating for residual systematic errors in the photogrammetric process [Lapine. At the next epoch. Flight Planning for Airborne GPS When planning for an airborne GPS project. The high accuracies are possible because the receiver will revisit each point after an elapsed time of about an hour. neither of these situations are possible in airborne-GPS. 1991. The environment is different. Because of this measurement process. 1992]. the GPS antenna has moved. This is due to the projective transformation where ground control is transformed into the photo coordinate system. Depending on the location of the airborne receiver. The location of the base receiver must also be considered during the planning. situations that are not possible in practice. special consideration must be taken into account for the addition of the GPS receivers that will be used to record the location of the camera. GPS Signal Measurements There are many different methods of measuring with GPS: static. Of course. With GPSobserved exposure coordinates. 2000]. wide banking turns by the pilot may result is a loss of the GPS signal. Fast static is a newer approach that yields high accuracies while increasing the productivity since the roving antenna need only be left over a point for 10-15 minutes. Traditionally.

one might have to arrive at a compromise between favorable sun angle and favorable satellite availability. 2000]. the less this assumption is valid. The longer the distance. either conventional surveying or close range photogrammetry can be used to determine the actual offset. Make sure that the GPS receiver has enough memory to store the satellite data. This will be important when a combined GPS-INS system is used. Then. This is particularly true when a static initialization is performed and satellite data is collected from the airport. There may also be some consideration on the amount of sidelap and overlap when the camera is locked down during the flight. try to find those times when the satellite coverage consists of 6 or more satellites with minimum change in coverage [Abdullah et al. a flight management system should be used to precalculate the exposure station locations during the flight The limitations attributed to the loss of lock on the satellite places additional demands on proper planning. The measurement of this offset distance is performed by leveling the aircraft using jack above the wheels. Finally. Deploying at the site requires additional manpower to deploy the receiver and assurances that the person who is occupying the base is collecting data when the rover is collecting the same data. When planning. Also plan for a PDOP that is less than 3 to ensure optimal geometry. . it is essential that the offset between the GPS antenna and the perspective center of the camera be accurately known in the image coordinate system (figure 1). Antenna Placement To achieve acceptable results using airborne GPS. These problems can be alleviated to some degree if additional drift parameters are used in the photogrammetric block adjustment. Additionally.Principles of Airborne GPS Page 73 relative positioning of the rover will be based upon similar atmospheric conditions.

1993]. the actual installation might be far simpler since many aircraft already have a strobe light on the stabilizer. Antenna shadowing is the blockage of the GPS signal. Placing the antenna on the vertical stabilizer will require more work in determining the offset vector between the antenna and the camera [Curry and Schuckman. mounting on the fuselage may require special modification of the aircraft by certified airplane mechanics. The advantages are that both multipath and shadowing are less likely to occur.Principles of Airborne GPS Page 74 Figure 18. Finally. the fuselage location increases the probability of multipath. the camera can be locked in place during the flight. the effect is that tilt and crab in the aircraft could result in a loss of coverage on the ground unless more sidelap were accounted for in the planning. 1993]. which could occur during sharp banking turns. 2000]. which could easily be adapted to accommodate an antenna. But once determined. GPS Offset For simplicity. The disadvantages are as follows. If the camera is to be leveled during the flight then the amount of movement should be measured in order to achieve higher accuracy. Moreover. But. this location. Second. the crab angle is hardly affected and the tilt corrections are negligible for large image scale [Abdullah et al. may lead to a loss of signal because of shadowing. Moreover. it should not have to be remeasured unless some changes would suggest a remeasurement be undertaken. two locations can be studied further because of their advantages over other sites. The location on the fuselage over the camera has the advantage of aligning the phase center along the optical axis of the camera thereby making the measurement of the offset as well as the mathematical modeling easier [Curry and Schuckman. First. coupled with the wing placement. The location of the antenna on the aircraft should be carefully considered. These are on the fuselage directly above the camera and the tip of the vertical stabilizer. . This helps maintain the geometric relationship of the offset vector. Although any point on the top side of the plane could be thought of as a candidate site.

1 second intervals. .. Many of the cameras now in use for airborne-GPS will send a signal to the receiver when the exposure was taken. which can be accounted for in the processing.1 ms whereas some of the other cameras use a TTL pulse that can be calibrated to accurately measure the mid-point of the exposure. spline function. An error in timing will result in a change in the coordinates of the exposure station. Since the receiver clock contains a small drift of about 1 µs/sec. Without calibration the photographer should not change the exposure during the flight thereby maintaining a constant offset distance. a sensor must be installed to record the time of exposure.56 m/sec). With the rotary shutters used in aerial cameras the time between when the shutter release signal is sent (see figure 2) to the mid-point of the exposure station varies [Jacobsen. Some field results found very little difference between these methods [Forlani and Pinto. Therefore. Prior to determining the exposure station coordinates. i.. 1994]. This may have been because they used the GPS receiver PPS (pulse per second) signal to trip the shutter on the aerial camera. Then. the location of the phase center of the antenna must be interpolated. Accuracies better than 1 msec have been reported for time intervals by using a light sensitive device within the aerial camera [van der Vegt. can only be done approximately. This meant that the effective instant of exposure was very close to the GPS time signal. polynomial approach. if a plane is traveling at 200 km/hr (. and quadratic time-dependent polynomial. Therefore. Some of them include the linear model. For example.e. though. This device will create an electrical pulse when the shutter is at the maximum aperture. the offset from the recorded time to the effective instant of exposure can be determined and taken into account. through a calibration process. Lapine [nd] suggests that the position of the antenna be time shifted so that the positions are equally spaced. 1989].Principles of Airborne GPS Page 75 Determining the Exposure Station Coordinates The GPS receiver is preset to sample data at a certain rate. This. The receiver then records the GPS time for this event marker within the data. 1991]. This sample time may not coincide with the actual exposure time. it is necessary to interpolate the position of the exposure station between GPS observations. Merchant [1993] points out that some cameras can determine the mid-exposure pulse time to 0. Several different interpolation models can be employed to determine the trajectory of the aircraft. then a one millisecond difference will result in 6 cm of coordinate error.

Y. the location of the receiver could be considerably different than the actual location during exposure. On the other hand.Principles of Airborne GPS Page 76 Figure 19. say. Z) d (X. For example.Z) = = = = time interval between GPS epochs changes in GPS coordinated between two epochs time difference when the exposure was made within an epoch. . Y. Z ) where: i ∆(X.5 seconds.Y. 1991]. figure 3 shows a sudden change in the Z-direction during the flight. Thus. 0. and changes in GPS coordinates to the exposure time.Y. This would reduce the effect of the error but increase the number of observations taken and the time to process those data. One of the most simplest interpolation models is the linear approach. Sudden changes in direction are very common at lower altitudes where large scale mapping missions are flown. The assumption is made that the change in trajectory from one epoch to another is linear. Shutter release diagram for rotary shutters [from Jacobsen.Z) di d(X. it assumes that the change in position is linear which may not be true. Assuming a linear change. one can write a simple ratio as: i di = ∆(X. The advantage of this model is its simplicity. One alternative would be to decrease the sample interval to.

Effects of linear interpolation model when the aircraft experiences sudden changes in its trajectory between PG epochs Because of the non-linear nature of the aircraft motion. Jacobsen [1993] suggests that a leastsquares polynomial fitting algorithm be used to determine the space position of the perspective center. The effect of this polynomial is to smooth the trajectory of the aircraft over the five epochs. a more realistic trajectory should be obtained. velocity and acceleration of the aircraft in all three axes. like: . A second order polynomial is used by Lapine [nd] to determine the position offset. it can be used to estimate better the exposure station coordinates than a linear model. By varying the degree of the polynomial and the number of neighbors to be included in the interpolation process.Principles of Airborne GPS Page 77 Figure 20. The degree and number of points will depend on the time interval between GPS epochs. in a general form. The following model is used: X1 = a X + b X (t1 − t 3 ) + c X (t1 − t 3 ) 2 2 X 2 = a X + b X (t 2 − t 3 ) + c X (t 2 − t 3 ) X 3 = a X + b X (t 3 − t 3 ) + c X (t 3 − t 3 ) X 4 = a X + b X (t 4 − t 3 ) + c X (t 4 − t 3 ) X 5 = a X + b X (t 5 − t 3 ) + c X (t 5 − t 3 ) 2 2 2 Similar equation can be generated for Y and Z. Thus the three models look. The added advantage of this method is that if a cycle slip is experienced. This is done by fitting a curve to a five epoch period around the exposure time.

.. . 2.Principles of Airborne GPS Page 78 X = a X + bX t + cX t Z = a Z + b Z t + cZ t 2 2 Y = a Y + bY t + cY t 2 where: t = ti . and c = twice the acceleration From this the observation equations can be written as vX = a X + bX t + cX t 2 − X = 0 vY = a Y + bY t + cY t 2 − Y = 0 vZ = a Z + bZt + cZt 2 − Z = 0 The design or coefficient matrix is found by differentiating the model with respect to the unknown parameters.t3 and i = 1. The solution becomes .. All three models have the same coefficient matrix: − 1 − 1 B = − 1 − 1 − 1 2 − (t1 − t 3 ) − (t1 − t 3 ) − 1 2 − (t 2 − t 3 ) − (t 2 − t 3 ) − 1 2 − (t 3 − t 3 ) − (t 3 − t 3 ) = − 1 2 − (t 4 − t 3 ) − (t 4 − t 3 ) − 1 2 − (t 5 − t 3 ) − (t 5 − t 3 ) − 1 − t − t2 − t − t2 − t − t2 − t − t2 − t − t2 The observation vectors (f) are: − X1 − X 2 fX = − X3 − X 4 − X5 − Y1 − Y 2 f Y = − Y3 − Y4 − Y5 − Z1 − Z 2 f Z = − Z3 − Z 4 − Z5 The normal equations can then be expressed as v X = B∆ X + f X v Y = B∆ Y + f Y v Z = B∆ Z + f Z where ∆ represent the parameters ( ∆ = [a b c]T ). 5 a = distance from the origin b = velocity.

Principles of Airborne GPS

∆ x = − BT WB ∆Y ∆Z

T T

( ) = −(B WB) = −(B WB)

Page 79

−1 −1

BT Wf X BT Wf Y BT Wf Z

−1

where W is the weight matrix. Assuming a weight of 1, the weight matrix then becomes the identity matrix and

5 T T B WB = B IB = 5t 2 5t 5t 5t 2 5t 3 5t 2 5t 3 5t 4

For the X observed values, as an example, X1 + X 2 + X 3 + X 4 + X 5 X t+X t+X t+X t+X t T T B Wf X = B If X = 1 2 3 5 X1t 2 + X 2 t 2 + X 3 t 2 + X 4 t 2 + X 5 t 2 The weighting scheme is important in the adjustment because an inappropriate choice of weights may biased or unduly influence the results. Lapine looked at assigning equal weights but this choice was rejected because the trajectory of the aircraft may be non-uniform. The final weighting scheme used a binomial expansion technique whereby times further from the central time epoch (t3) were weighted less than those closest to the middle. Using a variance of 1.0 cm2 for the central time epoch, the variance scheme looks like 2 2 ⋅ 0.01 m 2 2 2 2 ⋅ 0.01 m 2 4 cm

2 ⋅ 0.01 m

2

2

22 ⋅ 0.01 m 2 2 2 ⋅ 0.01 m 2

4 cm 2 =

4 cm

2

4 cm 2 4 cm 2

Principles of Airborne GPS

Page 80

where the off-diagonal values are all zero (0). A basic assumption made in Lapine's study was that the observations are independent therefore there is no covariance. Once the coefficients are solved for, the position of the antenna phase center can be computed using the following expressions

**X exp = a X + b X (t exp − t 3 ) + c X (t exp − t 3 ) Yexp = a Y + b Y (t exp − t 3 ) + c Y (t exp − t 3 ) Zexp = a Z + b Z (t exp − t 3 ) + c Z (t exp − t 3 )
**

2

2

2

**Determination of Integer Ambiguity
**

The important error concern in airborne-GPS is the determination of the integer ambiguity. Unlike ground-based measurements, the whole photogrammetric mission could be lost if a cycle slip occurs and the receiver cannot resolve the ambiguity problem. There are two principal methods of solving for this integer ambiguity: static initialization over a know reference point or using a dual-frequency receiver with on-the-fly ambiguity resolution techniques [Habib and Novak, 1994]. Static initialization can be performed in two basic modes [Abdullah et al, 2000]. The first method of resolving the integer ambiguities is to place the aircraft over a point on a baseline with know coordinates. Only a few observations are required because the vector from the reference receiver to the aircraft is known. The accuracy of the baseline must be better than 6-7 cm. The second approach is a static determination of the vector over a know baseline or from the reference station to the antenna on the aircraft. The integer ambiguities are solved for in a conventional static solution. This method may require a longer time period to complete, varying from 5 minutes to one hour, due to the length of the vector, type of GPS receiver, post-processing software, satellite geometry, and ionospheric stability. When static initialization is performed it does require that the receiver on-board the aircraft maintain a constant lock on at least 4 and preferable 5 GPS satellites. Abdullah et al [2000] identify several weaknesses to static initialization: • • • • • The methods add time to the project and are cumbersome to perform. GPS data collection begins at the airport during this initialization. Since the data are collected for so long, large amounts of data are collected and need to be processed – about 7 Mbytes per hour. The receiver is susceptable to cycle slips or loss of lock. It is possible that the initial solution of the integers was incorrect thereby invalidating the entire photo mission.

The use of on-the-fly (OTF) ambiguity integer resolution makes the process much easier. The new GPS receiver and post-processing software are much more robust and easy to use while the receiver is in flight. OTF requires P-code receivers where carrier phase data are

Principles of Airborne GPS

Page 81

collected using both the L1 and L2 frequencies. The solution requires about 10-15 minutes of measurements before entering the project area. Component integration can also create problems. For example, a test conducted by the National Land Survey, Sweden, experienced cycle slips when using the aircraft communication transmitter [Jonsson and Jivall, 1990]. Receiving information was not a problem, just transmissions. This test involved pre-flight initialization with the goal of re-observation over the reference station at the end of the mission. This was not possible.

**GPS-Aided Navigation
**

One of the exciting applications of airborne-GPS is its utilization of in flight navigation. The ability to precisely locate the exposure station and activate the shutter at a predetermined interval along the flight line is beneficial for centering the photography over a geographic region, such as in quad-centered photography for orthophoto production. An early test by the Swedish National Land Survey [Jonsson and Jivall, 1990] showed early progress in this endeavor. The system configuration is shown in figure 4. Two personal computers (PCs) where used in the early test - one for navigation and the other for determination of the exposure time.

Figure 21. Configuration of navigation-mode GPS equipment [from Jonsson and Jivall, 1990]. The test consisted of orientation of the receiver on the plane prior to the mission over a ground reference mark. This initialization is performed to solve for the integer ambiguity.

TDOT uses 60% side-lap for their large scale mapping. With GPS one paneled control point was placed at the beginning of the project and a second at the end. for photo j xo.5 second delay. If the site was greater than 10 km in length then a third paneled control point was placed near the center. For their low altitude flights (photo scale of 1 cm = 30 m).15 meters in agreement. Using airborne-GPS gave TDOT the ability to reduce the amount of ground control for their design mapping. ∆Zi are the transformed ground coordinates This mathematical model is often presented in the following form: . yo are the coordinates of the principal point c is the camera constant ∆Xi. The Texas Department of Transportation (TDOT) had a different problem [Bains. Using this 10 m error value. The test showed that this approach yielded about a 0. i. Thus. When compared to the photogrammetrically derived exposure station coordinates. accuracies of better than 10 m. PROCESSING AIRBORNE GPS OBSERVATIONS The mathematical model utilized in analytical photogrammetry is the collinearity model. at that time. Using real-time differential GPS. this amount of error would only cause a variation in side-lap of 7%. This 50 m value would cause a variation of only about 2%. ∆Yi. which simply states that the line from object space through the lens cone to the negative plane is a straight line. The functional representation of this model is shown as: F(x ) = (x ij − x o ) − c F(y ) = (yij − y o ) − c ∆X i =0 ∆Zi ∆Yi =0 ∆Zi where: xij. The PC used for the navigation activated a pulse that was sent to the aerial camera to trip the shutter. A flight plan was computed with the location of each exposure station identified. were realistic. which adds to the amount of data collected. 1992].Principles of Airborne GPS Page 82 This method of fixing the ambiguity requires no loss of lock during the flight thus necessitating long banking turns. For the high altitude mapping (photo scale of 1 cm = 300 m) and 30% side-lap. it was determined that the "50 m was not really necessary. An accuracy of about 6 meters was found at the preselected position along the strip. the exposure station locations were 20-40 meters too late. yij are the observed photo coordinates. the desire was to control the side-lap to 50 m. the relative carrier phase measurements were within about 0.

. These parameters include the location of the exposure station and the orientation of the photo at the instant of exposure. Y. vy X. Merchant [1973] gives the additional mathematical model when observations are made on the exterior orientation elements as: F(κ ) = κ − κ = 0 F(ϕ) = ϕ − ϕ = 0 F(ω) = ω − ω = 0 F(X L ) = X L − X L = 0 F(YL ) = YL − YL = 0 F(ZL ) = ZL − ZL = 0 o a o a o a o a o a o a The mathematical model for observation on survey control can be similarly.YL. should equal the predicted values of the photo coordinates based upon the current estimates of the parameters. These central projective equations form the basis for the aerotriangulation.Principles of Airborne GPS Page 83 x ij + v x ij = x o − c m11 (X i − X L ) + m12 (Yi − YL ) + m13 (Z i − Z L ) m 31 (X i − X L ) + m 32 (Yi − YL ) + m 33 (Z i − Z L ) m 21 (X i − X L ) + m 22 (Yi − YL ) + m 23 (Z i − Z L ) m 31 (X i − X L ) + m 32 (Yi − YL ) + m 33 (Z i − Z L ) y ij + v yij = y o − c where: vx. The former values could be observed quantities from onboard GPS. This is done by expanding the mathematical model. It is common to treat observations as stochastic variables. Z XL. For example... corrected for the location of the principal point.ZL m11 . m33 are the residuals in x and y respectively for point i on photo j are the ground coordinates of point i are the space rectangular coordinates of the exposure station for photo j is the 3x3 rotation matrix that transforms the ground coordinates to a photo parallel system. The model implies that the difference between the observed photo coordinates.

. Adding a second photo reduces some of the uncertainty. Using GPS to determine the exposure station coordinates without ground control is not applicable to all photogrammetric problems. is below the antenna. All positions on the sphere are theoretically possible but from a practical viewpoint. Ground control is needed for a single photo resection and orientation [Lucas. 1996]. naturally. is located on top of the aircraft to receive the satellite signals. being located below the aircraft and pointing to the ground. or some other mechanism to constrain the roll angle along the single strip. 1996. p. Without ground control. This is due to the additional constraint of the collinearity condition that is placed on the rays from the control to the image position.125]. Without ground control. If the exposure station coordinates are precisely known then the only thing known is that the camera lies in some sphere with a radius equal to the offset distance from the GPS antenna to the cameras nodal point. Position ambiguity for a single photo resection [from Lucas.Principles of Airborne GPS Page 84 Figure 22. one knows that the camera. The antenna. figure 5. 1996]. The antenna is located at the center of the circle. this situation could be found throughout a single strip of photography. The collinearity theory will provide the relative orientation between the two photos [Lucas. the camera is then free to rotate about a line that passes through the two antenna locations (see figure 6).

assume that the survey control (X. Further. This is shown as X L X a Y = Y + M M E M L a ZL ZA where: DU DV DW DU. the 7-parameter solution to independent model triangulation results in a loss of accuracy in the solution. DV. 1992].Principles of Airborne GPS Page 85 Figure 23. Z) is reported in the WGS 84 system. Y. p. DV. Ambiguity of the camera position for a pair of aerial photos [from Lucas. Moreover. Determining the coordinates of the exposure stations can be easily visualized in the following model [Merchant. 1993]. W). Then. it remains to transform the offset between the receivers phase center and the nodal point of the aerial camera (DU. DW are the offset distances MM is the camera mount orientation ME is the exterior orientation elements of the camera .z) are aligned with the coordinate system (U.y. the usual iterative adjustment cannot be used with the recommended 4 corner control points [Jacobsen. DW) into the corresponding survey coordinate system. While independent model triangulation continues to be employed in practice.125]. Assume that the photo coordinate system (x. 1996. V.

5o maximum pitch angle between the aircraft and the camera mount was found. With this rigid relationship fixed. This means that low banking angles onboard the aircraft will not be used as in those methods where a loss of lock means a thwarted mission. When the coordinate offsets between the antenna and camera were surveyed. it is accepted that loss of lock will occur. Lapine found that the optical axis of the camera coincided with the vertical axis of the mount. Here. an eccentricity error could be introduced. certain physical conditions are assumed or accepted [Ackermann. In the normal acquisition of aerial photos. a 1. κ and swing are added to form one rotational element and φ and pitch are similarly combined.5o = 0. 1993]. 4 cm can be reached whereas 60 cm are possible using absolute positioning. Testing revealed that the gimbal center was located approximately 27 mm from the nodal points.027m * sin 1. the orientation angles on the mount are leveled.Principles of Airborne GPS Page 86 The camera mount orientation is necessary to ensure that the camera is heading correctly down the flight path. In this technique. A problem occurs if there is an offset between the location of the nodal point and the gimbals’ rotational center on the mount. Since roll was not measured during the test. Because loss of lock. Using the Wild RC-10 camera mount. First. When this systematic tendency is accounted for in the adjustment. Thus the error in neglecting this effect in the flight direction would be maximum pitch error = 0. one can simply algebraically sum the camera mount angles with the appropriate measured pitch and swing angles.0007 meters Experiences from tests in Europe [Jacobsen. the camera is leveled prior to each exposure. For relative positioning. When the camera is rotated. it is also unnecessary to perform a stationary observation prior to take-off to resolve the integer ambiguities. During the flight. Lapine [nd] points out that the transformation of the offsets to the local coordinate system can easily be performed using the standard gimbal form. Thus. pitch and swing angles between the aircraft and the camera are measured. many of the data sets have shown a time dependent drift pattern in the GPS values. The simplest way to ensure that the relationship between the receiver and the camera are consistent would be to forgo any rotation of the camera during the flight. Moreover. That meant that the combination of κ and swing would not produce any eccentricity. the antenna coordinates can be rotated into a parallel system with respect to the ground by using the tilts experienced during the flight. excellent results are possible. A second approach to perform airborne GPS aerial triangulation is sometimes referred to as the Stuttgart method. In this situation. roll and swing (crab or drift). Alternatively. These ambiguities are solved . 1991] indicate that the GPS positions of the projection centers differ from the coordinates obtained from a bundle adjustment. Then. ω is treated independently. This is done so that the photography can be nearly vertical at the instant of exposure even though the aircraft is experiencing pitch. the relationship between the two points should be considered.

a new set of observations can be written for the perspective center coordinates as [Blankenberg. The accuracy of this approach will be dependent upon the accuracy of the measurements. Seldom will loss of lock happen along the strip though. Using the mathematical model for additional stochastic observations within the adjustment as outlined earlier [Merchant. Evaluation of the results indicated that this was probably due to the GPS processing of the cycle slips. uncertainty in weather could mean field crews already on the site but the photo mission canceled. the block may be able to be split into parts where the aircraft trajectories are continuous thereby decreasing the number of unknown parameters within the adjustment. the problem is solved because of the existence of permanent reference stations throughout the country that could be occupied by the ground receiver. Høghlen [1993] states that an alternative to the strip-wise application of the biases. This is important because it can decrease the costs associated with photo missions. 1992]. therefore. which can include other effects such as datum effects. banking turns could have an adverse effect by blocking the signal to some of the satellites causing cycle slips. are systematic in nature and consist of a linear and time dependent component. It also is an asset to flight planning in that on-site GPS ground receivers will require fixing the flight lines at least one day before the mission. During the flying season this could be a problem [Jacobsen. The accuracy of the position in the differential mode is predicated on the accuracy of solving these integer ambiguities at both the base receiver and the rover. . 1994]. only one set of drift parameters need to be carried in the bundle adjustment. It could be 500 km or farther away [Ackermann. These positions can be affected by selective availability (SA).Principles of Airborne GPS Page 87 on-the-fly and can be determined for each strip if loss of lock occurs during the banking (or at other times during the photo mission). Finally. 1973]. the ground or base receiver will probably be located at a great distance from the photo mission. the satellite geometry and how many uncorrelated observations are used in the averaging approach. Early test results added confusion to the drift error biases. a systematic effect was not noticeable on all photo strips [van der Vegt. These drift errors. In a test by the Survey Department of Rijkswaterstaat. If no loss of lock occurs during the photo mission. Netherlands. it will be assumed that single frequency receivers will be used on the aircraft. 1989]. This test used a technique where the differences between the observed pseudoranges and the phase measurements were averaged. The block adjustment is used to solve for these biases. there will be bias in the solution. Second. the aircraft trajectories will be continuous and. Logistical concerns include not only the deployment of the aircraft but also the ground personnel on the site to operate the base. When projects are located at great distances from the airplanes’ home base. 1993]. In Germany. The advantage of modeling these drift parameters is that the ground receiver does not have to be situated near the site. Because of this. The solution of the integer ambiguities is performed using C/A-code pseudorange positioning. Unfortunately.

Geometry of the GPS antenna with respect to the aerial camera (zerox copy. source unknown) As it was discussed earlier. κ )i ZA v Z ZL GPS i i i where: x A a X PC A y PC + a Y + dt z A a Z PC j b X b Y b Z j (XA. Relating the antenna offset to the ground is dependent upon the rotation of the camera with respect to the aircraft and the orientation of the aircraft to the ground. If this condition is met then the orientation of the camera offset will only be dependent upon the orientation elements (κ. YA. ω). YL. ZL)GPS vX. ϕ. 1993. The bundle adjustment can be used to correct the camera offset if the camera remains fixed to the aircraft during the photo mission. YL. ZA)GPS = ground coordinates of the GPS antenna for photo i . ω. vY. Høghlen. vZ XL. the antenna does not occupy the same location as the camera nodal point. 1993]: X A GPS v X X L YA GPS + v Y = YL + R (ϕ.Principles of Airborne GPS Page 88 X L GPS v X X L YL GPS + v Y = YL ZL v Z ZL GPS i i i where: (XL. The new additional observation equations to the collinear model are given as [Ackermann. The geometry is shown in figure 7. ZL = the perspective center coordinates observed with GPS = residuals on the observed principal center coordinates = the adjusted perspective center coordinates used within the bundle adjustment Figure 24.

aZ dt bX. bY. YA. one can introduce more ground control. vY. Maintaining lock ensures that the phase history is recorded from take-off to landing. but this defeats one of the advantages of airborne GPS. Idealized block schemes. The figure 8(i) scheme can be used for airborne GPS when no drift parameters are employed in the block adjustment. Introducing the stepwise drift parameters and using four ground control points located at the corners of the project. κ) = residuals for the GPS antenna coordinates (XA. It is recognized in analytical photogrammetry that adding parameters to the adjustment weakens the solution. YL. ZL xAPC. there are three approaches to reducing the instability of the block [Ackermann. ω.and 60% side-lap using 60% end-lap and 20% side-lap and adding an additional vertical control point Figure 25. The block schemes shown in figure 8 are idealized depictions. and using the conventional amount of overlap as indicated in (ii) and flying at least two cross-strips of photography. Abdullah et al [2000] points out that this is the most accurate type of configuration . bZ R(ϕ. ZA)GPS for photo i = exposure station coordinates of photo i = eccentricity components to the GPS antenna = GPS drift parameters for strip j representing the constant term = difference between the exposure time for photo I and the time at the start of strip j = GPS drift parameters for strip j representing the linear timedependent terms = orthogonal rotation matrix. vZ XL. zAPC aX. aY. iii) at both ends of each strip. To strengthen the problem. These are shown in figure 8 and are: i) ii) using both 60% end. 1993]. It is important that the receiver maintains lock during the flight which necessitates flat turns between flights. yAPC.Principles of Airborne GPS Page 89 vX.

In this situation it is advisable to add additional cross-strips or provide more ground control. Figure 26.Principles of Airborne GPS Page 90 in a production environment. GPS block control configuration. This can easily be visualized if one considers supplanting the ground control by control located at . The same control scheme can also be used when block drift parameters are used in the bundle adjustment. it is not frequently utilized in a production environment. For that reason. drift parameters are developed for each flight line strip which requires additional height control at the ends of each strip. This model increases the geometry and provides a check against any gross errors in the ground control. But it does add to the cost of the project because more photography is required to be taken and measured. Here. it is possible to perform the block adjustment without any ground control. If strip drift parameters are used then a control configuration as shown in figure 8(ii) should be used. Figure 9 is an example. More often the area is not rectangular but rather irregular. The control configuration in figure 8(iii) incorporates two cross strips of photography. Theoretically.

Using this background and simulated data. control is required at an interval of approximately every seventh photo on the edge of the block. As is known. The is commonly referred to as the edge effect and stems from a weakened geometric configuration that exists because of a loss in redundancy. Using kinematic GPS.and side-lap photography along with airborne GPS and no control. As an example. conventional aerotriangulation requires ground control. Using the same simulated data. Nonetheless. Figure 27. the photos are taken only from one side of view. it is prudent to include control on every job. The results show that for planimetry. error ellipses grew larger towards the center of the block. Under normal circumstances. the results are similar. on the other hand. Larger error ellipses were found at the control points but at every other point they were either smaller or nearly equivalent. Topographic mapping requires vertical control within the block at about the same spacing. But on the edge. if nothing more than providing a check to the aerotriangulation. Error ellipses with ground points positioned by conventional aerotriangulation adjustment of a photo block [Lucasm 1994]. Using the four control point scheme as just presented has the advantage of using the GPS position for interpolation only within the strip. Compared with the original simulation with vertical control within the block.Principles of Airborne GPS Page 91 the exposure stations. Using just aerotriangulation without control. a point in the middle of a block should be visible on at least nine photos. Lucas [1996] was able to develop error ellipses from a bundle adjustment showing the accumulation of error along the edges of the block (figure 10). Elevation errors were much different within the two simulations. kept the error from getting larger. for planimetric mapping. Lucas [1996] also showed the error ellipses one would expect to find using 60% end. each point had improvements. except the .

These results are based on simulations therefore reflect what is possible and not necessarily what one would find in real data. Most transportation projects have monumented points throughout the project and intervisible control should be reasonably expected. The expected horizontal accuracy (X. Merchant [1994] states that to solve this adjustment problem. 1993]. In the case of strip photography. If the position of exposure station can be ascertained to an accuracy of 10 cm or better. A Trimble . In conventional aerotriangulation. A strip of three photos was taken from a Wild RC-20 aerial camera in a Cessna Citation over the Transportation Research Center test site in Ohio. the exposure station coordinates will nearly lie on a line making it an ill-conditioned or singular system. GPS observed perspective center coordinates stabilize the adjustment thus negating the necessity for extensive control. This assumes using the six drift parameters for each strip. In fact.Principles of Airborne GPS Page 92 control points that were fixed in the conventional adjustment. The aircraft was pressurized and the flying height above the ground was approximately 1800 m. a solution is possible if the exposure stations are distributed along a block and are non-collinear. These are shown in figure 11 for horizontal values and figure 12 for vertical control.5 σ O and the vertical accuracy (Z) would be around 2. A test was performed to evaluate the idea of using control for strip photography [Merchant. four control points and cross-strips. there is a problem with airborne GPS when the GPS measurements are exclusively used to control the flight.0 σ O . Then as long as σ GPS ≤ σ O . Lucas [1996] states that the reason for the improvement lies in the fact that each exposure station is now a control point and the distance between the control is less than one would find conventionally. 1994]. some kind of control needs to be provided on the ground to eliminate the weak solution that would otherwise exist. Ackermann indicates that the following rule could apply. the projected values expressed in ground units are σ O . ground control points helped suppress the effects of block deformation. Strip Airborne GPS For route surveys. then the accuracy of the adjustment becomes primarily dependent upon the precision of the measurement of the photo coordinates [Ackermann. such as transportation systems. Therefore. Lucas [1996] shows the error ellipses one would expect with only ground control and then with kinematic GPS. their main function now becomes one of assisting in the datum transformation problem [Ackermann. Y) will be approximately 1. As an example. 1993]. Results of projects conducted with combined GPS bundle adjustment show that this approach is not only feasible but also desirable. Accuracy considerations are important in determining the viability of using GPS observations within a combined bundle adjustment. existing ground control could be utilized in the adjustment. It would not be practical to have the same density of control as one would have in the air. Designating the standard error of the photo observations as σ0. Theoretically.

Principles of Airborne GPS Page 93 SSE receiver was used with a distance to the ground-based receiver being approximately 35 km. atmospheric refraction (also accounting for the refraction due to the pressurized cabin). Figure 28 Figure 29. The method is shown as: . The photography was acquired with 60% end-lap. and film deformation (USC&GS 8-parameter model). The corridor method only used a narrow band of points along the route. For this test. The middle photo had 30 targeted image points. The full field method utilized all of the checkpoints within the photography. only one or two were used while the remaining control values were withheld. 1994]. Corrections applied to the measured photo coordinates included lens distortion compensation (both Seidel's aberration radial distortion and decentering distortion using the Brown/Conrady model). The results are shown in the following table. The results are expressed in terms of the root mean square error (rmse) defined as the measure of variability of the observed and "true" (or withheld) values for the checkpoints. which is typical of the area of interest for many transportation departments [Merchant.

Number of Test Points rmse (meters) X Using 2 targeted ground control points Full Field Corridor 28 10 0.031 0. Moreover. INS has a very high short-term accuracy. This will have the effect of anchoring the strip thereby preventing it from accumulating large amounts of error. which are comparable to results found from conventional block adjustments.057 0. which can be used to eliminate multipath effects and aid in the solution of the ambiguity problem.079 0. Combined INS and GPS Surveying The combination of a combined inertial navigation system (INS) with GPS gives the surveyor the ability to exploit the advantages of both systems.050 0.Principles of Airborne GPS Page 94 2 rmse = ∑ (true − observed ) n where n is the number of test points. If the strip was only a single strip. Another approach.073 Y Z Using 1 targeted ground control point Full Field Corridor 29 11 0.084 0. A minimum of one point is needed with additional points being used as a check. other than including additional control. It should also be noted that pass points were targeted therefore errors that may occur due to the marking of conjugate imagery is not present.087 0.086 0.034 0. the adjustment also included calibration of the system. 1996]. good results can be expected by using ground control to alleviate the ill conditioning of the normal equations.082 The results indicate that accuracies in elevation are better than 1:20. The long-term accuracy of the GPS can be used to correct for the time- . then it is recommended that a cross strip be obtained at both ends of the strip [Lucas.000 of the flying height.033 0. would be to fly a cross strip perpendicular to the strip of photography. Nonetheless.026 0.

The offset distances were then measured. thereby increasing. Being an abandoned facility. the facility could handle the King Air airplane. only the shift parameters need to be included within the adjustment model [Jacobsen. two occupied master control points while the remaining six simultaneously observed the satellites over the photo control points.000. In addition. They are: Photo Scale: Flying Height: Flight Direction: Forward Overlap: Side-lap: Number of Strips: 1:3000 500 meters North-South 60% minimum 60% 3 . within the bundle adjustment. Prior to the measurement.Principles of Airborne GPS Page 95 dependent drift found within the inertial systems. Air Force base located near Bryan. Target design is important for the aerial triangulation. In areas where there was no hard surface to paint the target. Three considerations were addressed in this project: system description. This site was selected because the targets could be permanently set and there would be minimal obstructions due to traffic. Texas DOT Accuracy Assessment Project The Texas Department of Transportation undertook a project to assess the accuracy level that is achievable using GPS and photogrammetry. Texas.S. The offset between the antenna and the camera was measured four times and the mean values determined. The system description can be summarized as follows: The site selected was an abandoned U. a prefabricated painter wafer board target was employed. Used together will give the surveyor not only good relative accuracies but also good absolute accuracies as well. Bains [1995] describes the project in length. The goal was to achieve Order B accuracy in 3-D of 1:1. The flight specifications were designed to optimize the accuracy of the test. Each target was observed at least once. expansion of the test facility was possible. the accuracy of the aerotriangulation. Using 8 receivers. A 60 x 60 cm cross target with a pin in the center was selected (based on a photo scale of 1:3000). the aircraft was jacked up and leveled. at least theoretically.000. 1993]. Moreover. The location of the center of the target allowed for the precise centering of the ground receiver over the point. The aerial camera was then leveled and locked into place. In addition. differential levels were run over all targets to test the accuracy of the GPS-derived heights. airborne GPS kinematic processing and statistical analysis. All of the targets were measured using static GPS measurements.

The results were then statistically processed using the SAS (Statistical Analysis System). The ground receiver was turned on and a sample rate of 1 second was used. The rover receiver in the aircraft was then turned on and tracked the satellites for five minutes with the same one-second sampling rate. . The results of this study showed that the accuracy achieved fell within specifications. at least for this project.Principles of Airborne GPS Page 96 Exposures per Strip: Focal Length: Format: Camera: Film Type: Sun Angle: Cloud Cover: GDOP: 12 152 mm 230 x 230 mm Wild RC 20 Kodak Panatomic 2412 Black/White 30o minimum None #4 The mission began by measuring the height of the antenna when the aircraft was parked. The software vendor recommended that the processing be done both forward and backward for better accuracy but the test indicated that. Table 1 shows the comparison between the GPS-derived control and the values from the ground truth. In fact. Then the aircraft took off and flew its mission. These results show that airborne GPS can meet the accuracy specifications for photogrammetric mapping. The PNAV software was used for on-the-fly ambiguity resolution. The processing steps involved the kinematic solution of the GPS observations. The aerial triangulation was then performed with the GAPP software using only four ground control stations. the GPS results were either equal to or better than the accuracy of conventional positioning systems. As an example. A 15Fm pixel size was used. The photogrammetry was processed using soft-copy photogrammetry. two at the start and two at the end. there was no increase in the accuracy when performing that kind of processing. The results also indicated that there was a need to have a reference point within the site to aid in the transformation to State Plane Coordinates.

031 0.068 Maximum 0. and combined block adjustment The real savings accrue in the control where the costs are 10% or less than those required using conventional aerotriangulation. 1993].105 Mean -0. The overall net savings will be about 40% when looking at the total project costs.021 -0.Principles of Airborne GPS Page 97 Number of Observations 95 95 95 Variable Easting Northing Elevation Minimum -0.089 0.026 0. some general findings are available [Ackermann.008 Standard Deviation 0.003 -0.027 Table 1. Comparison of airborne GPS assisted triangulation with ground truth on day 279.100 -0. This increase includes: • • • • • • • • flying additional cross-strips film GPS equipment GPS base observations processing the GPS data and computation of aircraft trajectories aerotriangulation point transfer and photo observations.057 0. 1993 over a long strip [from Bains.40]. p. ECONOMICS OF AIRBORNE-GPS While no studies have been conducted that describe the economic advantages of airborneGPS. Utilization of airborne-GPS does increase the aerotriangulation costs by about 25% over the conventional approach. If higher order accuracy is required (Ackermann uses the example of cadastral photogrammetry which needs 1-2 cm accuracy) then the savings will decrease because additional ground control are necessary. 1995.075 -0. .

"Experiences of Combined Block Adjustment with GPS Data". New York.J. F.S. H. Munich. Ghosh. “Correction of GPS Antenna Position for Combined Block Adjustment”.The Eura Block". Reno. 1995.. Germany. S. Pinto. S. Analytical Photogrammetry. 3. and L. Vol 1. Jivall. 1979.References Page 98 REFERENCES Ackermann. 1994. Vol.March 2. LA pp 152-158. 30. 13(20):7-15.. H. ACSM/ASPRS Annual Convention and Exposition Technical Papes. Jacobsen. pp 31-42. MD pp 208-217. 15(85):3-15.. “Practical Guidelines for the Use of GPS Photogrammetry”. 2. 1993. A. “Experiences from Kinematic GPS Measurements”. Munich. 3. and T. Proceedings of ASPRS/ACSM Annual Convention and Exposition.. also. Vol. NV. ACSM/ASPRS Annual Convention and Exposition Technical Papers. Germany. Novak.S. 1993. pp 731-738. Habib. K.. September 5-9. 12p. pp 219-226. Jacobsen. "GPS Controlled Aerial Triangulation of Single Flight Lines". International Archives of Photogrammetry and Remote Sensing. K. Vol. April 25-28.M. "GPS for Photogrammetry". June 6-10. 1995. 1994. LA. Baltimore. Jonsson.. pp 225-235. 203p. Canada. 5. Vol. K. "Trends in GPS Photogrammetry". 30. 1991.K. Proceedings of ACSM-ASPRS Annual Convention. Proceedings of the 6th International geodetic Symposium on Satellite Positioning. Vol. "GPS-Supported Aerotriangulation in Finland . Vol. and K. 1993. S. B. A. 1994. Bains. March 17-20. . International Archives of Photogrammetry and Remote Sensing.. "Photogrammetric Surveying by GPS Navigation". New Orleans. Forlani. Copenhagen. II. The Photogrammetric Journal of Finland. and K. The Photogrammetric Journal of Finland. Columbus. Part 3. New Orleans. "Development of an Airborne Positioning System". Schuckman. G. Short. Bains. 1993. 13(2):68-77. 1990.. OH. Curry. 30. Vol. ACSM/ASPRS Annual Convention and Exposition Technical Papers. International Archives of Photogrammetry and Remote Sensing. Ottawa. pp 79-88. “Combined Block Adjustment with Precise Differential GPS-Data”. Photogrammetric Record. Vol. Paper presented at the Nordic Geodetic Commission 11th General Meeting. Corbett. Part 3. Jacobsen. Høghlen. and A. pp 203-210. Pergamon Press. February 27 . 1992. pp 422-426. "Airborne GPS Performance on a Texas Project". Part 2. September 5-9.

Merchant. “Analytical Calibration of the Airborne Photogrammetric System Using A Priori Knowledge of the Exposure Station Obtained from Kinematic Global Positioning System Techniques”..C. Novak. 2. 1989. C. OH. Vol. “GPS in Aerial Mapping”. ASPRS/ACSM/RT92 Technical Papers. D. 1. D. Lucas.A. source unknown. 1993. 1996. D. The Ohio State University. G. “Differential GPS: Efficient Tool in Photogrammetry”. K. Part II – Computational Photogrammetry..D.References Page 99 Lapine. “Elements of Photogrammetry. 75p. "Airborne Kinematic GPS Positioning for Photogrammetry . 133p. Department of Geodetic Science and Survey Report No. Xerox copy.. Lapine. pp 76-85. Col. Grissim. Paper presented at the IX Congresso Brasileiro de Cartografia. Merchant.. ASPRS. The Ohio State University. 411.C. August 9-11. "Airborne GPS-Photogrammetry for Transportation Systems". Salsig. CA. pp 124-129. 1995. D..A. Ohio State University. Analytical Photogrammetry lecture notes.. . NV. Proceedings of ASPRS/ACSM Annual Convention and Exposition. L. D. van der Vegt. Reno. J. February 4-9. 1992.. Department of Geodetic Science. L. Surveying Engineering. 1979. 115(3):285-296. and T.C. 1994. “Instrumentation for Analytical Photogrammetry”. “Covariance Propagation in Kinematic GPS Photogrammetry” in Digital Photogrammetry: An Addendum to the Manual of Photogrammetry.. Columbus. Merchant. "GPS-Controlled Aerial Photogrammetry". April 25-28. Proceedings of Trimble Surveying and Mapping Users Conference. Merchant. pp 392-395. Santa Clara. pp 48-53. Washington. Department of Geodetic Science and Surveying. Brasil. 188p.The Determination of the Camera Exposure Station". August 3-8. 1991. 8p. nd. 1973.R.

- mkiydy
- SUG541 - Advanced Photogrammetry - Aerial Triangulation Method
- HYDROGRAPHY
- SUG533 - Adjustment Computation - LOPOV for Latitude and Departure Computation
- Aerial Missions With Small Unmanned Aircraft Systems to Monitor Sediment Flow and Changing Topography Resulting From the Removal of Dams on the Elwha River
- SUG553 - Geographic Information System - Introduction to GIS
- 05_optics.ppt
- SUG514 - Hydrographic Surveying - SUG514 - Assignment 4 - WGS84 To Cassini - Soldner Coordinate Transformation
- SUG514 - Hydrographic Surveying - Topo Map vs Navigation Chart, ENC & ECDIS
- Photogrammetry Flight Planning
- SUG514 - Hydrographic Surveying - Run Line vs Cross Line Calculation
- SUG413 - Advanced Engineering Survey (Route Survey)
- SUG596 - Field Scheme II (Engineering Survey)
- a
- SUG541 - Advanced Photogrammetry - 17 Questions
- Physics 12 Notes
- Optometry
- JEE Main Sample Paper (3)
- LBO Fizik F4
- Mechanics.pdf
- Picture Window Pro 5 Manual
- Research 2
- Checker 4G Quick Start Guide
- lenses and mirrors
- Different Methods of Surveying
- 16 Lens
- 0625_w12_qp_33
- 5. Refraction Through Lenses & Optical Instruments5 - Copy
- One Dimention Standing Wave
- Laser Tech

Skip carousel

- 2012 LUCID Optics Catalog
- Crimson Trace Laser Grips Owners Guide
- As NZS ISO 10322.2-2011 Ophthalmic Optics - Semi-Finished Spectacle Lens Blanks Specifications for Progressiv
- Costco motion to intervene in contact lens lawsuit
- Half-hours with the TelescopeBeing a Popular Guide to the Use of the Telescope as aMeans of Amusement and Instruction. by Proctor, Richard Anthony, 1837-1888
- Experimental Determination of the Velocity of LightMade at the U.S. Naval Academy, Annapolis by Michelson, Albert A., 1852-1931
- ImmerVision v. CBC et. al.
- Castlen Optometrics v. Essilor International
- As 2228.2-1992 Spectacles Spectacle Frames
- As NZS ISO 8624-2011 Ophthalmic Optics - Spectacle Frames - Measuring System and Terminology
- How the NX Lenses Are Launched Into the World PART II - The NX Lens Planners
- 2010 Vortex Optics Hunting Catalog
- Spectacles & Visual Aids
- Scientific American Supplement, No. 415, December 15, 1883 by Various
- As ISO 11979.1-2003 Opthalmic Implants - Intraocular Lenses Vocabulary
- At a Winter's Fire by Capes, Bernard Edward Joseph, 1854-1918
- As ISO 11979.6-2003 Opthalmic Implants - Intraocular Lenses Shelf-Life and Transport Stability
- Global contact lens market to grow to $11.6 billion in 2016
- As ISO 14534-2003 Opthalmic Optics - Contact Lenses and Contact Lens Care Products - Fundamental Requirements
- Sanders v. Optical Radiation, 4th Cir. (1996)
- Good News - The NX200 Wins AP Awards 2012!
- Apollo Experience Report Command and Service Module Controls and Displays Subsystem
- Loupes Magnification and Illumination
- Premier Systems v. Shah et. al.
- Castlen Optometrics v. Essilor International et. al.
- As NZS ISO 8980.2-2011 Ophthalmic Optics - Uncut Finished Spectacle Lenses Specifications for Progressive Pow
- A Review on the application on fresnel lenses for solar radiations concentrator
- 60760_1860-1864
- As ISO 11979.3-2003 Opthalmic Implants - Intraocular Lenses Mechanical Properties and Test Methods
- Costco vs. Johnson & Johnson

Sign up to vote on this title

UsefulNot usefulRead Free for 30 Days

Cancel anytime.

Close Dialog## Are you sure?

This action might not be possible to undo. Are you sure you want to continue?

Close Dialog## This title now requires a credit

Use one of your book credits to continue reading from where you left off, or restart the preview.

Loading