Group 8
An academic exercise presented in partial fulfilment for the degree of Bachelor of Technology
with Honors in Civil Engineering.
Contributors
1. Oyeleke Meleyotan Daniel ………………………………………………………….. 180464
Contents
Acknowledgements
Summary
1.0 Introduction
2.4.1 Atmospheric Refraction
Acknowledgements
First, we would like to take this opportunity to express our sincere appreciation and gratitude to our supervisor, Dr. O. S. Olaniyan, for his constant concern, patient guidance and invaluable suggestions throughout the preparation of this project.
Summary
The main objective of this project is to present a simplified mathematical approach to photogrammetry. The project also covers a summary of the theory of navigation and the mathematical background of the different methods for finding longitude and latitude.
Introduction
The goal of photogrammetry is to obtain information about the physical environment from
images. This project is dedicated to the mathematical relations that allow one to extract
geometric 3D measurements from 2D perspective images. Its aim is to give a brief and gentle
overview for students or researchers in neighboring disciplines.
2.1 Geographic Coordinate System
The position of an observer on the earth's surface can be specified by the terrestrial coordinates,
latitude and longitude.
The Greenwich meridian, the meridian passing through the Royal Greenwich Observatory in
London (closed in 1998), was adopted as the prime meridian at the International Meridian
Conference in October 1884. Its upper branch (0°) is the reference for measuring longitudes; its lower branch (180°) is known as the International Date Line. All lines of longitude are given a number between 0° and 180°, either East (E) or West (W) of the Greenwich meridian.
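For computation it is often convenient to collapse the degree-plus-hemisphere notation above into a single signed number. A minimal sketch (the East-positive sign choice is a common convention, not something the text prescribes):

```python
def signed_longitude(degrees, hemisphere):
    """Convert a longitude of 0-180 degrees East or West of the Greenwich
    meridian into one signed value (East positive, a common convention)."""
    if not 0 <= degrees <= 180:
        raise ValueError("longitude must lie between 0 and 180 degrees")
    if hemisphere.upper() not in ("E", "W"):
        raise ValueError("hemisphere must be 'E' or 'W'")
    return degrees if hemisphere.upper() == "E" else -degrees
```

With this convention, 45° W becomes -45 and 120° E stays +120, so longitudes from both hemispheres can be compared and differenced directly.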
2.1.2 Three Dimensional Coordinate System
2.2.1 Obtaining Useful Geometric Cues
A patch in the image could theoretically be generated by a surface of any orientation in the world. To determine which orientation is most likely, we need to use all of the available cues: material, location, texture gradients, shading, vanishing points, etc. Much of this information, however, can be extracted only when something is known about the structure of the scene. For instance, knowledge about the intersection of nearly parallel lines in the image is often extremely useful for determining the 3D orientation, but only when we know that the lines belong to the same planar surface (e.g. the face of a building or the ground). Our solution is to build our structural knowledge of the image gradually: from pixels to superpixels to related groups of superpixels (see Figure 2).
2.1. Multiple Hypothesis Method
Ideally, we would evaluate all possible segmentations of an image to ensure that we find the best one. To make this tractable, we sample a small number of segmentations that are representative of the entire distribution. Since sampling from all possible pixel segmentations is infeasible, we reduce the combinatorial complexity of the search further by sampling sets of superpixels. Our approach is to make multiple segmentation hypotheses based on simple cues and then use each hypothesis’ increased spatial support to better evaluate its quality. Different hypotheses vary in the number of segments they contain.
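The sampling idea can be illustrated with a toy sketch that grows a requested number of segments from random superpixel seeds over an adjacency graph. All names here are hypothetical, and a real system would score groupings with learned pairwise cues rather than random region growing:

```python
import random

def segmentation_hypotheses(superpixels, adjacency, segment_counts, seed=0):
    """Toy sketch of multiple-hypothesis segmentation: for each requested
    number of segments, grow regions from random superpixel seeds along the
    adjacency graph. (Illustrative simplification; a real system would
    guide the grouping with learned cues instead of random growth.)"""
    rng = random.Random(seed)
    hypotheses = []
    for n_segments in segment_counts:
        seeds = rng.sample(superpixels, n_segments)
        label = {sp: i for i, sp in enumerate(seeds)}  # each seed starts a segment
        frontier = list(seeds)
        while frontier:
            sp = frontier.pop(rng.randrange(len(frontier)))
            for neighbour in adjacency.get(sp, []):
                if neighbour not in label:  # absorb unlabeled neighbours
                    label[neighbour] = label[sp]
                    frontier.append(neighbour)
        hypotheses.append(label)
    return hypotheses
```

Calling this with, say, segment_counts=[2, 3, 5] yields three hypotheses over the same superpixels, each grouping them into a different number of segments, which mirrors how the hypotheses described above vary.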
2.3.3 Collinearity Equations
The collinearity equation is a physical model representing the geometry between a sensor's projection center, the ground coordinates of an object and the image coordinates, whereas the coordinate transformation technique mentioned in 9.5 can be considered a black-box type of correction. The collinearity equation gives the geometry of the bundle of rays connecting the projection center of a sensor, an image point and an object on the ground, as shown in Figure 9.6.1.
For convenience, an optical camera system is described to illustrate the principle. Let the projection center or lens be O(X0, Y0, Z0), with rotation angles ω, φ, κ around the X, Y and Z axes respectively (roll, pitch and yaw angles), the image coordinates be p(x, y) and the ground coordinates be P(X, Y, Z). The collinearity equation is given as follows, where f is the focal length of the lens and a1 to a9 are the elements of the rotation matrix formed from ω, φ and κ:
x = -f (a1(X - X0) + a2(Y - Y0) + a3(Z - Z0)) / (a7(X - X0) + a8(Y - Y0) + a9(Z - Z0))
y = -f (a4(X - X0) + a5(Y - Y0) + a6(Z - Z0)) / (a7(X - X0) + a8(Y - Y0) + a9(Z - Z0))
In the case of a camera, the formula includes six unknown parameters (X0, Y0, Z0; ω, φ, κ), which can be determined with the use of more than three ground control points (xi, yi; Xi, Yi, Zi). The collinearity equation can be inverted as follows:
X = X0 + (Z - Z0)(a1 x + a4 y - a7 f) / (a3 x + a6 y - a9 f)
Y = Y0 + (Z - Z0)(a2 x + a5 y - a8 f) / (a3 x + a6 y - a9 f)
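The forward projection described in this section can be sketched in Python. The ω-φ-κ rotation sequence below is one common photogrammetric convention (others exist), a1 to a9 are taken row-major from the rotation matrix, and all function names are illustrative:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix for the omega-phi-kappa sequence (one common
    photogrammetric convention); rows give the elements a1..a9."""
    cw, sw = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    return [
        [cp * ck, cw * sk + sw * sp * ck, sw * sk - cw * sp * ck],   # a1 a2 a3
        [-cp * sk, cw * ck - sw * sp * sk, sw * ck + cw * sp * sk],  # a4 a5 a6
        [sp, -sw * cp, cw * cp],                                     # a7 a8 a9
    ]

def collinearity(X, Y, Z, X0, Y0, Z0, omega, phi, kappa, f):
    """Project ground point P(X, Y, Z) to image coordinates p(x, y)
    for a camera at (X0, Y0, Z0) with focal length f."""
    a = rotation_matrix(omega, phi, kappa)
    dX, dY, dZ = X - X0, Y - Y0, Z - Z0
    den = a[2][0] * dX + a[2][1] * dY + a[2][2] * dZ
    x = -f * (a[0][0] * dX + a[0][1] * dY + a[0][2] * dZ) / den
    y = -f * (a[1][0] * dX + a[1][1] * dY + a[1][2] * dZ) / den
    return x, y
```

With ω = φ = κ = 0 (a perfectly vertical photo) the rotation matrix reduces to the identity and the projection reduces to simple scaling of (X - X0, Y - Y0) by f/(Z0 - Z).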
In the case of a flat plane (Z constant), the formula coincides with the two-dimensional projection listed in Table 9.5.1. The geometry of an optical mechanical scanner or a CCD linear array sensor is a little different from that of a frame camera. Only the cross-track direction is a central projection similar to a frame camera, while the along-track direction is almost parallel (y = 0), with a slight variation of orbit and attitude, as a function of time or line number, of not more than third order, as follows.
X0 = X0(l) = X0 + X1 l + X2 l² + X3 l³
Y0 = Y0(l) = Y0 + Y1 l + Y2 l² + Y3 l³
Z0 = Z0(l) = Z0 + Z1 l + Z2 l² + Z3 l³
ω = ω(l) = ω0 + ω1 l + ω2 l² + ω3 l³
φ = φ(l) = φ0 + φ1 l + φ2 l² + φ3 l³
κ = κ(l) = κ0 + κ1 l + κ2 l² + κ3 l³
where l is the line number.
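Since each exterior orientation parameter varies as a cubic polynomial in the line number, the model can be evaluated directly. A minimal sketch with illustrative coefficient names:

```python
def exterior_orientation(l, coeffs):
    """Evaluate a third-order polynomial model of one exterior orientation
    parameter (X0, Y0, Z0, omega, phi or kappa) at scan line number l.
    coeffs = (c0, c1, c2, c3) represents c0 + c1*l + c2*l**2 + c3*l**3."""
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * l + c2 * l ** 2 + c3 * l ** 3
```

In practice the coefficients would be estimated from ground control points, and the value at each line l would feed into the collinearity equations for that scan line.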
2.3.4 Interior Orientation
The determination of the attitude, the position and the intrinsic geometric characteristics of the camera is recognized as the fundamental photogrammetric problem. It can be summarized as the determination of the camera interior and exterior orientation parameters, as well as the determination of the 3D coordinates of object points.
Camera-object geometry
Interior orientation refers to the parameters linking the pixel coordinates of an image point (xim, yim) with the corresponding coordinates in the camera reference frame (x, y, -f). Specifically, the interior orientation parameters are the pixel coordinates of the image center, or principal point (xo, yo), the focal length f, and any parameters used to model lens distortion dx. Exterior orientation refers to the position (Xo, Yo, Zo) and orientation (ω, φ, κ) of the camera with respect to a world reference frame, in this case the TLS sensor frame. The orientation is described by the elements of the 3D rotation matrix relating the 3D coordinates of a point in the TLS sensor frame to the camera coordinates of the corresponding point. The camera calibration and pose parameters are estimated by solving the collinearity equations. To increase the accuracy of the parameters, the collinearity equations are extended with corrections for the systematically distorted image coordinates.
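The pixel-to-camera mapping described above can be sketched as follows. The pixel size parameter and the assumption that the image y axis points down while the camera y axis points up are illustrative choices, and the lens distortion term dx is omitted for brevity:

```python
def pixel_to_camera(x_im, y_im, x0, y0, pixel_size, f):
    """Convert pixel coordinates (x_im, y_im) into camera-frame
    coordinates (x, y, -f), given the principal point (x0, y0) in pixels,
    the physical pixel size, and the focal length f (same units as x, y).
    Assumes image y points down and camera y points up; lens distortion
    corrections are omitted for brevity."""
    x = (x_im - x0) * pixel_size
    y = -(y_im - y0) * pixel_size
    return x, y, -f
```

A point at the principal point maps to (0, 0, -f), i.e. it lies on the optical axis, which is a quick sanity check on the interior orientation parameters.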
2.3.8 Sensors Definitions and Conventions
A transducer is generally defined as a device that converts a signal from one physical
form to a corresponding signal having a different physical form. Energy can be converted
from one form into another for the purpose of transmitting power or information.
Mechanical energy can be converted into electrical energy, or one form of mechanical
energy can be converted into another form of mechanical energy. Examples of
transducers include a loudspeaker, which converts an electrical input into an audio wave
output; a microphone, which converts an audio wave input into an electrical output; and a
stepper motor, which converts an electrical input into a rotary position change.
A sensor is generally defined as an input device that provides a usable output in response to a specific physical quantity input. The physical quantity input that is to be measured, called the measurand, affects the sensor in a way that causes a response represented in the output. The output of many modern sensors is an electrical signal, but it could alternatively be motion, pressure, flow, or another usable type of output. Some examples of sensors include a thermocouple pair, which converts a temperature difference into an electrical output; a pressure-sensing diaphragm, which converts a fluid pressure into a force or position change; and a linear variable differential transformer (LVDT), which converts a position into an electrical output.
A position sensor is a sensor that facilitates the measurement of mechanical position. A position sensor may indicate absolute position (location) or relative position (displacement), in terms of linear travel, rotational angle, or three-dimensional space.
According to these definitions, a transducer can sometimes be a sensor and vice versa. For example, a microphone fits the description of both a transducer and a sensor. This can be confusing, and many specialized terms are used in particular areas of measurement. (An audio engineer would seldom refer to a microphone as a sensor, preferring to call it a transducer.) Although the general term transducer refers to both input and output devices, a sensor is an input device that provides a usable output in response to the input measured.
practically available. When using a ratio of less than 10, an allowance should be made for this
when evaluating the data.
Calibrated accuracy is the absolute accuracy of the individual transducer calibration and includes the accuracy of the standard used, as well as the ability of the calibration technique to produce a setting that matches the standard. For example, if the setting is made by turning a potentiometer adjustment, the operator tries to obtain a setting that results in a particular output reading. The operator will be able to achieve this to within some level of tolerance. That tolerance becomes part of the calibrated accuracy specification, in addition to any allowance made for the accuracy of the reference standard that was used. Rather than specifying a calibrated accuracy of 99.9%, for example, it is more common to list a calibration error of 0.1%. When evaluating the total error budget of an application, the calibration error must be included, as well as the nonlinearity, hysteresis, temperature error, and other factors.
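One common way to combine the error sources listed above (calibration error, nonlinearity, hysteresis, temperature error) into a total error budget is the root-sum-square. A minimal sketch, assuming the sources are independent and uncorrelated:

```python
import math

def total_error_rss(*error_terms):
    """Combine independent, uncorrelated error sources (each expressed in
    the same units, e.g. percent of full scale) by root-sum-square."""
    return math.sqrt(sum(e ** 2 for e in error_terms))
```

For example, a 0.1% calibration error combined with 0.15% nonlinearity and 0.05% hysteresis gives a total of about 0.19% of full scale, noticeably less than the 0.3% a straight sum would suggest.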
Conclusion