
Basic Mathematics of Photogrammetry

Group 8

An academic exercise presented in partial fulfilment of the requirements for the degree of
Bachelor of Technology with Honours in Civil Engineering.

Supervisor: Dr. O. S. Olaniyan

Department of Civil Engineering


Ladoke Akintola University of Technology, Ogbomoso
2021

Contributors
1. Oyeleke Meleyotan Daniel ………………………………………………………….. 180464

2. Adeleye Ifeoluwa Olamide …………………………………………………………... 181338

3. Oladeji Favour Temilola …………………………………………………………... 180792

4. Aderinto Emmanuel Adetola ………………………………………………………... 180362

5. Lawore Praise Adeola …………………………………………………………... 191081

6. Akintola Muizz Aderemi…………………………………………………………... 193040

7. Adesanmi Isaac O. …………………………………………………………... 183007

8. Kayode Iyanuoluwa Samuel …………………………………………………………... 183127

9. Oladipo Samuel Ayomide …………………………………………………………... 180865

10. Olaniyi Ifemidayo OluwaPelumi …………………………………………………... 182766

11. Tejumola Habeeb Gbolahan ………………………………………………………... 182454

12. Saheed Ololade Nasiff …………………………………………………………… 182347

13. Adeyemi Adedamola Elijah …………………………………………………………..18

Contents

Acknowledgements
Summary
1.0 Introduction

2.1 Geometric Coordinate Systems


2.1.1 Latitude and Longitude
2.1.2 Three-Dimensional Coordinate Systems
2.1.3 Datum: The Three-Dimensional Transformations
2.2 Basic Image Geometry
2.2.1 The Geometry of the Single Image
2.2.2 The Geometry of the Image Pair
2.2.3 The Geometry of the Image Triplet
2.2.4 Synopsis
2.3 Time-Dependent Sensor Models
2.3.1 Types of Sensors
2.3.2 Coordinate System
2.3.3 Collinearity Equations
2.3.4 Interior Orientation
2.3.5 Time
2.3.6 Rotation Matrix
2.3.7 Rotation Matrix
2.3.8 Sensor Position
2.3.9 Corrections to Sensor Position and Attitude Parameters
Atmospheric Refraction
Properties of Collinearity Equations
2.4 Systematic Error Corrections

2.4.1 Atmospheric Refraction

Acknowledgements
First, we would like to take this opportunity to express our sincere appreciation and gratitude to
our supervisor, Dr. O. S. Olaniyan, for his constant concern, patient guidance, and invaluable
suggestions throughout the preparation of this project.

Summary
The main objective of this project is to present a simplified mathematical approach to
photogrammetry. The project covers a summary of the theory of navigation and the
mathematical background of the different methods for finding longitude and latitude.

Introduction
The goal of photogrammetry is to obtain information about the physical environment from
images. This project is dedicated to the mathematical relations that allow one to extract
geometric 3D measurements from 2D perspective images. Its aim is to give a brief and gentle
overview for students or researchers in neighboring disciplines.

2.1 Geometric Coordinate Systems
The position of an observer on the earth's surface can be specified by the terrestrial coordinates,
latitude and longitude.

2.1.1 Latitude and Longitude


Lines of latitude are imaginary lines which run in an east-west direction around the world. They are
also called parallels of latitude because they run parallel to each other. Latitude is measured in
degrees (°).
The most important line of latitude is the Equator (0°). The North Pole is 90° North (90°N) and
the South Pole is 90° South (90°S). All other lines of latitude are given a number between 0° and
90°, either North (N) or South (S) of the Equator. Some other important lines of latitude are the
Tropic of Cancer (23.5°N), Tropic of Capricorn (23.5°S), Arctic Circle (66.5°N) and Antarctic
Circle (66.5°S).
Lines of longitude are imaginary lines which run in a north-south direction, from the North Pole
to the South Pole (Figure 2.2). They are also measured in degrees (°).
Any circle on the surface of a sphere whose plane passes through the center of the sphere is
called a great circle. Thus, a great circle is a circle with the greatest possible diameter on the
surface of a sphere. Any circle on the surface of a sphere whose plane does not pass through the
center of the sphere is called a small circle.
A meridian is a great circle going through the geographic poles, the poles where the axis of
rotation (polar axis) intersects the earth's surface. The upper branch of a meridian is the half of
the great circle from pole to pole passing through a given position; the lower branch is the
opposite half. The equator is the only great circle whose plane is perpendicular to the polar axis.
Furthermore, the equator is the only parallel of latitude that is a great circle. Any other parallel of
latitude is a small circle whose plane is parallel to the plane of the equator.

The Greenwich meridian, the meridian passing through the Royal Greenwich Observatory in
London (closed in 1998), was adopted as the prime meridian at the International Meridian
Conference in October 1884. Its upper branch (0°) is the reference for measuring longitudes, its
lower branch (180°) is known as the International Dateline. All the lines of longitude are given a
number between 0° and 180°, either East (E) or West (W) of the Greenwich Meridian.
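
Because latitude and longitude are angles on a (nearly) spherical earth, a position can be converted to three-dimensional Cartesian coordinates, and the shortest distance between two positions lies along a great circle. The sketch below is a minimal illustration assuming a perfectly spherical earth of mean radius 6371 km; the function names and example values are our own illustrative choices, not part of any referenced standard.

import math

EARTH_RADIUS_KM = 6371.0  # mean earth radius; a spherical approximation

def geodetic_to_cartesian(lat_deg, lon_deg, radius=EARTH_RADIUS_KM):
    """Convert latitude/longitude (degrees) to X, Y, Z on a sphere."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.cos(lat) * math.sin(lon)
    z = radius * math.sin(lat)
    return x, y, z

def great_circle_distance_km(lat1, lon1, lat2, lon2, radius=EARTH_RADIUS_KM):
    """Shortest distance between two points, measured along a great circle."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # Spherical law of cosines for the central angle between the two points
    central_angle = math.acos(
        math.sin(p1) * math.sin(p2) +
        math.cos(p1) * math.cos(p2) * math.cos(l2 - l1)
    )
    return radius * central_angle

# Example: distance from the Equator/Greenwich intersection (0°, 0°)
# to the Tropic of Cancer on the same meridian (23.5° N, 0°)
print(great_circle_distance_km(0.0, 0.0, 23.5, 0.0))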

2.1.2 Three Dimensional Coordinate System
2.2.1 Obtaining Useful Geometric Cues
A patch in the image could theoretically be generated by a surface of any orientation in the
world. To determine which orientation is most likely, we need to use all of the available cues:
material, location, texture gradients, shading, vanishing points, etc. Much of this information,
however, can be extracted only when something is known about the structure of the scene. For
instance, knowledge about the intersection of nearly parallel lines in the image is often extremely
useful for determining the 3D orientation, but only when we know that the lines belong to the
same planar surface (e.g. the face of a building or the ground). Our solution is to slowly build our
structural knowledge of the image: from pixels to superpixels to related groups of superpixels
(see Figure 2).

2.1. Multiple Hypothesis Method
Ideally, we would evaluate all possible segmentations of an image to ensure that we find the best
one. To make this tractable, we sample a small number of segmentations that are representative
of the entire distribution. Since sampling from all of the possible pixel segmentations is
infeasible, we reduce the combinatorial complexity of the search further by sampling sets of
superpixels. Our approach is to make multiple segmentation hypotheses based on simple cues
and then use each hypothesis's increased spatial support to better evaluate its quality. Different
hypotheses vary in the number of segments and make

2.3.3 Collinearity Equations
The collinearity equation is a physical model representing the geometry between a sensor
(projection center), the ground coordinates of an object, and the image coordinates, whereas the
coordinate transformation technique mentioned in 9.5 can be considered a black-box type of
correction. The collinearity equation gives the geometry of the bundle of rays connecting the
projection center of a sensor, an image point, and an object on the ground, as shown in Figure
9.6.1.

For convenience, an optical camera system is described to illustrate the principle. Let the
projection center (lens) be O (X0, Y0, Z0), with rotation angles ω, φ and κ around the X, Y and Z
axes respectively (the roll, pitch and yaw angles); let the image coordinates be p (x, y) and the
ground coordinates be P (X, Y, Z). The collinearity equation is then given as follows, where f is
the focal length of the lens and a1 to a9 are the elements of the rotation matrix defined by ω, φ
and κ.
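
Written out in the form standard in photogrammetry texts, and consistent with the notation above, the collinearity equations are:

x = -f · [a1(X - X0) + a2(Y - Y0) + a3(Z - Z0)] / [a7(X - X0) + a8(Y - Y0) + a9(Z - Z0)]
y = -f · [a4(X - X0) + a5(Y - Y0) + a6(Z - Z0)] / [a7(X - X0) + a8(Y - Y0) + a9(Z - Z0)]

where a1 to a9 are the elements of the rotation matrix R(ω, φ, κ), arranged row-wise as (a1 a2 a3), (a4 a5 a6), (a7 a8 a9).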
In the case of a camera, the previous formula includes six unknown parameters (X0, Y0, Z0; ω, φ, κ),
which can be determined with the use of more than three ground control points (xi, yi; Xi, Yi, Zi).
The collinearity equation can be inverted as follows.
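
The standard inverted form, again consistent with the notation above, expresses the ground coordinates in terms of the image coordinates when the elevation Z is known:

X = X0 + (Z - Z0) · [a1·x + a4·y - a7·f] / [a3·x + a6·y - a9·f]
Y = Y0 + (Z - Z0) · [a2·x + a5·y - a8·f] / [a3·x + a6·y - a9·f]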

In the case of a flat plane (Z constant), the formula coincides with the two-dimensional
projection listed in Table 9.5.1. The geometry of an optical mechanical scanner or a CCD linear
array sensor is a little different from that of a frame camera. Only the cross-track direction is a
central projection similar to a frame camera, while the along-track direction is almost parallel
(y = 0), with a slight variation of orbit and attitude, as a function of time or line number, of not
more than third order, as follows.

X0(l) = X0 + X1·l + X2·l² + X3·l³
Y0(l) = Y0 + Y1·l + Y2·l² + Y3·l³
Z0(l) = Z0 + Z1·l + Z2·l² + Z3·l³
ω(l) = ω0 + ω1·l + ω2·l² + ω3·l³
φ(l) = φ0 + φ1·l + φ2·l² + φ3·l³
κ(l) = κ0 + κ1·l + κ2·l² + κ3·l³

where l is the line number.
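
As a simple numerical illustration of this model, the sketch below evaluates a third-order polynomial for a given scan line; the coefficient values are made-up placeholders rather than parameters of any real sensor.

def third_order(coeffs, line):
    """Evaluate c0 + c1*l + c2*l**2 + c3*l**3 for scan line number l."""
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * line + c2 * line**2 + c3 * line**3

# Placeholder coefficients for the sensor X position (metres) and roll angle (radians)
X_COEFFS = (500000.0, 5.2, 1.0e-4, -2.0e-9)      # X0, X1, X2, X3
OMEGA_COEFFS = (0.001, 1.0e-6, -3.0e-11, 0.0)    # omega0 ... omega3

line_number = 2500
X0_l = third_order(X_COEFFS, line_number)         # projection-centre X for this line
omega_l = third_order(OMEGA_COEFFS, line_number)  # roll angle for this line
print(X0_l, omega_l)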

2.3.4 Interior Orientation
The determination of the attitude, the position, and the intrinsic geometric characteristics of the
camera is recognized as the fundamental photogrammetric problem. It can be summarized as the
determination of the camera interior and exterior orientation parameters, as well as the
determination of the 3D coordinates of object points.
Camera-object geometry: interior orientation refers to the parameters linking the pixel
coordinates of an image point (x_im, y_im) with the corresponding coordinates in the camera
reference frame (x, y, -f). Specifically, the interior orientation parameters are the pixel
coordinates of the image center, or principal point (x0, y0), the focal length f, and any parameters
used to model lens distortion (dx). Exterior orientation refers to the position (X0, Y0, Z0) and
orientation (ω, φ, κ) of the camera with respect to a world reference frame, in this case the TLS
sensor frame. The orientation is described by the elements of the 3D rotation matrix relating the
3D coordinates of a point in the TLS sensor frame to the camera coordinates of the
corresponding point. The camera calibration and pose parameters are estimated by solving the
collinearity equations. To increase the accuracy of the parameters, the collinearity equations are
extended with corrections for the systematically distorted image coordinates.
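
As a minimal sketch of how these parameters are used, the code below converts pixel coordinates to camera-frame coordinates using assumed interior orientation values (the principal point, pixel size and focal length are placeholders), builds a rotation matrix from ω, φ, κ, and rotates the image ray into the sensor frame. The rotation order and sign conventions here are assumptions; a real workflow must match the convention used by the calibration software.

import numpy as np

def pixel_to_camera(x_im, y_im, x0, y0, pixel_size, f):
    """Interior orientation: pixel coordinates -> camera-frame coordinates (x, y, -f)."""
    x = (x_im - x0) * pixel_size   # offset from the principal point, in mm
    y = -(y_im - y0) * pixel_size  # image rows are assumed to increase downwards
    return np.array([x, y, -f])

def rotation_matrix(omega, phi, kappa):
    """3D rotation matrix composed from rotations about the X, Y and Z axes."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

# Placeholder interior orientation: 4000 x 3000 sensor, 0.004 mm pixels, f = 50 mm
ray_cam = pixel_to_camera(x_im=2100, y_im=1400, x0=2000, y0=1500,
                          pixel_size=0.004, f=50.0)
# Placeholder exterior orientation angles (radians); ray direction in the sensor frame
ray_world = rotation_matrix(0.01, -0.02, 1.5707) @ ray_cam
print(ray_cam, ray_world)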
2.3.8 Sensor Definitions and Conventions
A transducer is generally defined as a device that converts a signal from one physical form to a
corresponding signal having a different physical form. Energy can be converted from one form
into another for the purpose of transmitting power or information. Mechanical energy can be
converted into electrical energy, or one form of mechanical energy can be converted into another
form of mechanical energy. Examples of transducers include a loudspeaker, which converts an
electrical input into an audio wave output; a microphone, which converts an audio wave input
into an electrical output; and a stepper motor, which converts an electrical input into a rotary
position change.

A sensor is generally defined as an input device that provides a usable output in response to a
specific physical quantity input. The physical quantity input that is to be measured, called the
measurand, affects the sensor in a way that causes a response represented in the output. The
output of many modern sensors is an electrical signal, but it could alternatively be motion,
pressure, flow, or another usable type of output. Some examples of sensors include a
thermocouple pair, which converts a temperature difference into an electrical output; a
pressure-sensing diaphragm, which converts a fluid pressure into a force or position change; and
a linear variable differential transformer (LVDT), which converts a position into an electrical output.
A position sensor is a sensor that facilitates the measurement of mechanical position. A position
sensor may indicate absolute position (location) or relative position (displacement), in terms of
linear travel, rotational angle, or three-dimensional space.
According to these definitions, a transducer can sometimes be a sensor and vice versa. For
example, a microphone fits the description of both a transducer and a sensor. This can be
confusing, and many specialized terms are used in particular areas of measurement. (An audio
engineer would seldom refer to a microphone as a sensor, preferring to call it a transducer.)
Although the general term transducer refers to both input and output devices, a sensor is an
input device that provides a usable output in response to the input measured.

2.3.9 Position versus displacement


A position transducer measures the distance between a reference point and the present location of
the target. The word target is used in this case to mean the element whose position or
displacement is to be determined; the reference point can be one end, the face of a flange, or a
mark on the body of the position transducer (such as a fixed reference datum in an absolute
transducer).
A displacement transducer measures the distance between the present position of the target
and the position recorded previously; an example of this would be an incremental magnetic
encoder. A position transducer can be used as a displacement transducer by adding circuitry to
remember the previous position and subtract it from the new position, yielding the difference as
the displacement. Alternatively, the data from a position transducer may be recorded into memory
by a microcontroller, and the difference calculated as needed to indicate displacement.
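
A minimal software sketch of this idea follows; the class name and sample readings are illustrative only, not part of any particular device's interface.

class DisplacementFromPosition:
    """Derives displacement readings from an absolute position transducer."""

    def __init__(self, initial_position):
        # Remember the starting position as the reference for displacement
        self.previous_position = initial_position

    def update(self, new_position):
        """Return displacement since the last reading, then store the new position."""
        displacement = new_position - self.previous_position
        self.previous_position = new_position
        return displacement

# Simulated absolute position readings in millimetres
sensor = DisplacementFromPosition(initial_position=12.50)
for reading in (12.55, 12.80, 12.40):
    print(sensor.update(reading))   # prints roughly 0.05, 0.25, -0.40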
Position sensor specifications
The list of parameters that are important to specify in characterizing a position sensor may be
somewhat different from those that would be important in, for example, a gas analysis sensor.
Compared to a gas sensor, the position sensor may have similar needs to list power supply
requirements, operating temperature range, and nonlinearity, but there will be differences related
to the specific measuring technique. A position sensor specification should indicate whether it
measures linear or angular motion, whether the reading is absolute or incremental, and whether it
uses contact or contactless sensing and actuation. Conversely, a gas sensor specification would
indicate what kind of gas is detected, how well it ignores other interfering gases, whether it
measures gas by percent volume or partial pressure, and the shelf life (if it is an electrochemical
type of sensor having a limited lifetime). So there are a number of specifications that are
important when describing the performance capability of a position transducer and its suitability
for use in a given application.
Calibrated accuracy
A transducer exhibits a given performance, including nonlinearity, hysteresis, temperature
sensitivity, and so on; however, the actual performance in the application is also affected by the
accuracy with which the transducer output was calibrated against a known standard. For a
position sensor, the reference length can be measured with a linear encoder, a laser interferometer,
or another sensing technique capable of accuracy sufficiently higher than that expected from the
sensor being measured. The normal requirement is that the reference standard should exhibit an
error ten times smaller than that of the device to be tested. In this case, the error in the reference
device can be essentially ignored. Sometimes, though, this ratio of error is not practically
available. When using a ratio of less than 10, an allowance should be made for this when
evaluating the data.
Calibrated accuracy is the absolute accuracy of the individual transducer calibration and
includes the accuracy of the standard used as well as the ability of the calibration technique to
produce a setting that matches the standard. For example, if the setting is made by turning a
potentiometer adjustment, the operator tries to obtain a setting that results in a particular output
reading. The operator will be able to achieve this only to within some level of tolerance. This
tolerance becomes part of the calibrated accuracy specification, in addition to any allowance made
for the accuracy of the reference standard that was used. Rather than specifying a calibrated
accuracy of 99.9%, for example, it is more common to list a calibration error of 0.1%. When
evaluating the total error budget of an application, the calibration error must be included as well
as the nonlinearity, hysteresis, temperature error, and other factors.
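
As a rough illustration of such an error budget, the sketch below combines independent error terms by root-sum-square; both the combination rule and the example values are assumptions made for demonstration, since the appropriate method depends on the application and on whether the error sources are independent.

import math

def total_error_budget(error_terms_percent):
    """Combine independent error terms (in % of full scale) by root-sum-square."""
    return math.sqrt(sum(e ** 2 for e in error_terms_percent))

# Illustrative error terms, each expressed as a percentage of full scale
errors = {
    "nonlinearity": 0.05,
    "hysteresis": 0.02,
    "temperature error": 0.03,
    "calibration error": 0.10,   # i.e. a calibration error of 0.1%
}

print(f"Total error budget: {total_error_budget(errors.values()):.3f} % of full scale")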

Conclusion

References

