Mike R. James
Lancaster University
The following exercise was compiled as part of the IAVCEI ‘Drone’ workshop, held on 13th August,
2017 in Portland, USA. Completing the exercise should enable you to achieve the outcomes summarised in Section 8 (Finish).
Exercise Data
Much of the exercise can be repeated using your own data but, if you want to follow the exercise
steps specifically, the exercise data are freely available here. NOTE: this is an 11 GB compressed
(.tar.gz) file. To uncompress the file, Windows users will have to use something like the free 7-zip
utility. Instructions on how to download 7-zip and uncompress .tar.gz files are widely available on
the web – e.g. here, for Windows 10. Once the data are downloaded and uncompressed, you will
have a ‘3_Exercise’ folder containing all the material mentioned throughout the exercise.
IAVCEI 2017 – The Drone Workshop
This exercise was constructed using PhotoScan Pro v.1.3.2 and may not work with other versions.
This exercise demonstrates how to process images into a 3-D model using PhotoScan software. It
caters for users who are either unfamiliar with PhotoScan, or who have a reasonable working
knowledge, with a focus on rigorous processing to understand and maximise model precision.
Contents
1 Introduction
2 Initial 3-D model building
2.1 Add photos
2.2 Assess image quality and remove poor images
2.3 Align photos
3 Tie point quality control
3.1 Refine image selection
3.2 Refine tie points by quality metrics
3.3 Remove tie points manually
4 Adding control data for georeferencing
4.1 Importing GCP ground survey data
4.2 Making GCP image observations
4.3 Update georeference
4.4 Outlier image observations of GCPs
5 Bundle adjustment and camera model
5.1 Weighting observations
5.2 Camera model
6 Dense matching, and DEM and orthomosaic products
6.1 Dense matching
6.2 Building a DEM
6.3 Building an orthomosaic image
7 Precision maps
8 Finish
9 References and resources
IAVCEI 2017 – The Drone Workshop PhotoScan exercise Mike James
1 Introduction
This exercise aims to give you experience in processing photographs into a 3-D model (and
associated DEM and orthomosaic products) using PhotoScan software. It is intended to be accessible
without prior experience in PhotoScan and to develop a rigorous approach when using SfM
software, along with an understanding of characteristics such as measurement precision. Although
based on a UAV-acquired dataset, the procedures are equally applicable to ground-based surveys.
The exercise is split into sections, with each rated by the level of detail/complexity. If you just
want a quick and easy 3-D visualisation, then completing only the ‘Basic’ aspects will suffice.
‘Intermediate’ level material will develop a greater insight into the underlying photogrammetric
processing to enhance the repeatability of survey results, and the 'Advanced' material covers
considerations of measurement precision. Note that the exercise will not cover details specifically
associated with very large projects (e.g. >1000 images), such as working with multiple chunks.
Survey data:
Data for the exercise are provided on the workshop’s USB, in the Exercise folder (along with a copy
of these instructions). The data are organised into sub-folders associated with the different sections
of this document.
The data are from a survey of aeolian gravel ripples that have formed since the eruption of Laki,
Iceland. The ripples are composed of pumice (light-colored, low density) and basalt (dark-colored,
high density), but the rate of sediment transport of these odd features is not known. An aerial
survey of these ripples was acquired in 2015 using kite aerial photography, and again in 2016 using a
common quadcopter, the DJI Phantom 3 Professional. The exercise is based on the 2016 UAV-
acquired dataset, kindly provided by Stephen Scheidt (Scheidt et al., 2017).
Prior to field deployment, base imagery from Google Earth was downloaded to the app (installed on
an iPad Mini). In the field, a survey area was defined using the app by simply drawing a polygon on
the map where a grid of images was desired. The app automatically estimated the maximum
allowable area of the survey using the quadcopter’s expected flight time as a limiting factor. In this
version of the app, a grid is defined assuming that two sets of orthogonal flight lines will be flown
with the camera pointed slightly off-nadir.
Prior to the flight, orange survey cones were placed in the survey area as ground
control points (GCPs), and their coordinates surveyed using a survey-grade R10
differential global positioning system (dGPS) from Trimble.
Example images:
2 Initial 3-D model building

2.1 Add photos

To add photos to a project you can use the main menu bar: Workflow → Add Photos, or,
alternatively, drag/drop the image files directly into the Workspace pane.
Using either method, load the 18 images provided in the Section_2_Initial_model folder. The
images should then appear in a ‘Chunk’ in the Workspace pane (a ‘chunk’ is just a collection of
images that will be processed together, along with the results). Expanding the project tree by
clicking on it will show the chunk and its loaded images.
At this point, you should save your new project. From the main menu bar: File → Save as. Save the
project wherever you want, ensuring that the ‘Save as type:’ box is ‘PhotoScan Project (*.psx)’.
2.2 Assess image quality and remove poor images

For small surveys (e.g. <100 images), it is practical to check image quality visually. Double-
click on the first image in the Photos pane to load and display the image. In the image pane
that appears, zoom in (mouse wheel) to check focus and blurring. Pressing ‘Page up’ / ‘Page
down’ keys will allow you to quickly navigate through all the images. For the images you
have, there are some small variations in quality, but they are generally very good and
certainly sufficient for processing.
For large projects, PhotoScan has an image quality metric that can be a useful
guide to highlight the poorest images. To calculate the metric, in the Photos pane,
select an image, then right-click and select Estimate Image Quality…, applying it to all
cameras. The results can be viewed by changing the view style of the Photos pane, using the
right-most button in the Panes toolbar to change the view to ‘Details’. Click on the Quality
column header to order the images by quality. Any images with a quality score of <0.5 can
probably be immediately removed, but you will see that yours all score much closer to 1.0
(maximum quality).
To remove any poor images from the project, select them in the Photos pane and click the
‘Remove cameras’ button. Note that this only removes the image from the project; it does
not delete the image file.
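As a sketch of this screening outside the GUI, the quality scores could be filtered against the 0.5 guideline like this. The image names and scores below are invented examples, not values from the exercise dataset:

```python
# Flag images whose PhotoScan quality metric falls below the 0.5 guideline.
# These name/score pairs are made-up illustrations only.
quality = {
    "DJI_0055.JPG": 0.92,
    "DJI_0056.JPG": 0.88,
    "DJI_0057.JPG": 0.41,  # a blurred frame, below the 0.5 guideline
    "DJI_0058.JPG": 0.95,
}

THRESHOLD = 0.5
to_remove = sorted(name for name, q in quality.items() if q < THRESHOLD)
print(to_remove)  # ['DJI_0057.JPG']
```

For the exercise images, all scores sit much closer to 1.0, so nothing would be flagged.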
2.3 Align photos

You can see this information in the Reference pane, where the camera position coordinates are
given in the upper table. Camera positions can also be visualised in 3-D within the Model pane, via
the main menu: View → Show/Hide Items → Show Cameras, or via the main toolbar ‘Show
Cameras’ button.
Finally, click on the Settings button in the Reference pane to bring up the Reference
Settings dialog box. You will see that the Coordinate System is currently set to WGS 84,
which appropriately reflects the GPS values for the camera positions. Close the dialog box.
To carry out ‘SfM’ processing to align the cameras and generate a sparse 3-D point cloud, use the
main menu: Workflow → Align photos. A dialog box will appear:
For the ‘General’ settings, we’ll use ‘High’ accuracy (which may be inconveniently slow for very large
surveys). Ensure that both Generic preselection and Reference preselection are ticked. Both of these
speed up photo alignment (Reference preselection uses the preliminary camera position data to help
select images to match and will not be available if camera positions are completely unknown).
‘Advanced’ settings can be left at their default values.
Start the processing, and when it is complete (hopefully in less than a few minutes), you should see
something like this, showing the sparse cloud of 3-D tie points (grey) and aligned cameras (blue
squares) in the Model pane:
You may need to zoom in and out (mouse wheel) or scale the blue camera squares (shift-mouse
wheel) to find the most useful visualisation.
3 Tie point quality control

3.1 Refine image selection

This project provides 362 images for which cameras have been oriented and the sparse point cloud
generated. However, although image quality data have been calculated, they have not yet been used
to remove any poor images.
Note that selecting images in the Model pane also highlights the appropriate rows in the Camera
table of the Reference pane. In the Camera table, scroll to the right and sort the images by the Error
(pix) column that gives the RMS tie point image residual for each image. By selecting poor-quality
images in the Photos pane, you will see that they are often associated with large RMS tie point
residual values (e.g. > 3 pix). Viewing the cameras in the Model pane also demonstrates that many
are at unusual angles, suggesting that they were taken during manoeuvres between flight lines,
where aircraft stability is likely to be reduced, and thus poor quality images more likely. Remove
poor-quality images from the project – I removed 13 with the greatest RMS error, to leave 349.
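The same selection logic can be sketched in a few lines: sort cameras by their RMS tie point residual and flag any above a chosen threshold (e.g. 3 pix). The file names and residual values below are invented for illustration, not taken from the project:

```python
# Sort cameras by RMS tie point residual (worst first) and flag those
# exceeding a 3-pix threshold. Values are illustrative only.
rms_error_pix = {
    "DJI_0101.JPG": 0.8,
    "DJI_0102.JPG": 1.1,
    "DJI_0150.JPG": 3.6,  # perhaps taken mid-manoeuvre between flight lines
    "DJI_0151.JPG": 4.2,
}

worst_first = sorted(rms_error_pix, key=rms_error_pix.get, reverse=True)
flagged = [name for name in worst_first if rms_error_pix[name] > 3.0]
print(flagged)  # ['DJI_0151.JPG', 'DJI_0150.JPG']
```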
3.2 Refine tie points by quality metrics

You can repeat this selection and filtering process using some of the other criteria listed below.
Appropriate threshold values will vary and there will not be a ‘right’ one to use. However, gradual
selection is a valuable tool to identify and remove points that are either outliers or at the weakest
end of the quality distribution.
Reprojection error: This metric represents image residuals, but is complicated by the fact that
PhotoScan scales the values based on the image matching, so they do not directly reflect
values in pixels for each point. Nevertheless, it is useful for identifying and removing the
worst points (largest values).
Reconstruction uncertainty: This is a complex metric that reflects how elongate the precision
ellipse is on any point – large values indicate elongated ellipses (for UAV surveys, this
usually indicates much weaker vertical precision than horizontal precision). Appropriate
values to use as thresholds will vary between projects, and will depend on the number of
images matched per point and the imaging geometry.
Projection accuracy: I’m not entirely clear on this one…! From the PhotoScan manual: “This
criterion allows to filter out points which projections were relatively poorer localised due
to their bigger size”. It might be to do with the scale that points have been matched at.
Following refinements, I had ~80,000 tie points remaining. At this point, it is worth checking that
there are no images for which almost all observations have been removed. In the Reference pane,
view the ‘Projections’ column in the Camera table; images with few observations (e.g. <500) would be
good candidates for removal. Ideally, the distribution of such points, rather than their total number,
should be the criterion for removal. PhotoScan does not currently offer a way to visualise tie point
distributions but, if you are interested, it can be done using sfm_georef (James & Robson, 2012;
James et al. 2017a).
Some UAV systems offer much greater precision from dual-frequency on-board receivers and can
deliver centimetre-level survey precision. Detailed analyses of these types of ‘directly
georeferenced’ surveys are out of scope of this exercise.
In using these camera position data, PhotoScan has detected (or assumed) that the camera
coordinate values are in WGS 84 (latitude and longitude). To avoid conflict with GCP data provided in
a different coordinate system, select all the cameras in the Reference pane (click on one row in the
table, then press Control-A) and untick the check boxes. This deselects the camera position data
from being used in any further georeferencing calculations.
2) The GCP coordinates are given in UTM Zone 28N, WGS84 and this needs to be set as the
coordinate system. Find the coordinate system by going to More… in the dropdown box then,
under Projected Coordinate Systems, find ‘World Geodetic System 1984’ and select WGS 84 /
UTM zone 28N (or use the filter box to search for 32628, the EPSG code!). Select the
coordinate system and return to the Import CSV dialog box.
3) Tick the ‘Accuracy’ checkbox and set the accuracy columns to columns 5, 6 and 7 for Easting,
Northing and Altitude respectively:
4) Click OK. PhotoScan will say it can’t find a match (no existing photo or marker with the same
name as the labels), so click ‘new marker for all’. Note that the orientation of the model will
change, which is due to the change in the project’s coordinate system. You will see the
imported GCP coordinates appear in the Markers table of the Reference pane.
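As a hedged sketch of the file layout the Import CSV dialog implies (label in column 1, coordinates in columns 2–4, per-axis accuracies in columns 5–7), such a GCP file could be parsed as below. The coordinate values shown are invented placeholders, not the survey data:

```python
# Parse a GCP CSV of the assumed layout: label, E, N, Alt, accE, accN, accAlt.
# The coordinates are invented placeholders for illustration only.
import csv
import io

gcp_text = """gcp-002,417250.12,7112340.55,412.18,0.02,0.02,0.03
gcp-004,417301.77,7112399.20,413.02,0.02,0.02,0.03
"""

markers = {}
for row in csv.reader(io.StringIO(gcp_text)):
    label = row[0]
    e, n, alt = (float(v) for v in row[1:4])
    acc = tuple(float(v) for v in row[4:7])
    markers[label] = {"enu": (e, n, alt), "accuracy": acc}

print(sorted(markers))  # ['gcp-002', 'gcp-004']
```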
4.2 Making GCP image observations

Now, identify your first GCP in an image: in image …DJI_0056, look for the red cone of gcp-002 (just
under half way across the image and about three quarters the way up the image). Zoom in to see the
cone clearly, right-click on the top right corner of the cone base, then select Place Marker → gcp-002.
A white dot will appear at the point, attached to a green flag, denoting a pinned GCP observation.
In the Reference pane, look on the right hand side of the Markers table, and you should see that a
number of observations have now been automatically made of that GCP (the ‘Projections’ column).
If this has remained at 1, find the same GCP in another image (e.g. …DJI_0057) to make another
manual measurement. This time, you will be guided to its location by a striped line, along which
PhotoScan is expecting the marker to be located. Find the GCP, and place the marker as before.
With multiple observations, PhotoScan has sufficient information to estimate where a marker should
be in other images. In the Reference pane, right click on the table entry for gcp-002, and select
‘Filter Photos by Markers’. The Photos pane will now only list images in which this GCP is expected
to be visible. With the Photos pane showing the image thumbnails, images in which an observation
has been manually set (pinned) will be annotated with a green flag. Images in which the GCP has
been identified by automated image matching are annotated with a blue flag, and grey furled flags
indicate images in which the GCP is expected, but has not been manually identified or successfully
located by image matching. Grey-flagged positions are not used in georeferencing calculations.
Double-click on an image annotated by a grey furled flag in the Photos pane, then drag the marker
into the appropriate position in the image (if you are confident you know where it should go!). This
will pin the observation, as indicated by the green flag annotation in the Photos pane. Note: You
don’t have to convert all (or any) of the grey flags, but aim for a minimum of ~5 observations per
marker (easily exceeded in this project). Poor-quality observations are not usually worth including.
Practise this process on two more GCPs before proceeding to Section 4.3:
gcp-004: find it in …DJI_0101 (the cone is located above the centre of the image and, again,
place the marker on the top right hand corner of the cone base).
gcp-007: find it in …DJI_0121 (as above, but with the cone located about a third of the way
across the image, and three quarters of the way up).
4.3 Update georeference

Clicking the Update button on the Reference pane toolbar estimates the similarity transform (scale,
translation and rotation) that best fits the 3-D model to the control coordinates (provided by the
GPS survey). Thus, ‘update’ does not change the shape of the 3-D model, just its size, position and
orientation.
You will now see values appear in the ‘error’ column of the Markers table, which represent the misfit
between the photogrammetric and the control data. If any are substantially larger than expected
(e.g. metres in this case), then it is likely that a GCP has been incorrectly identified. The values you
see should be somewhere in the 0.03 – 0.04 m range.
The survey now has a preliminary georeference based on the GCPs identified in the images, and
PhotoScan can estimate the positions of the remaining GCPs in images. For any remaining GCPs with
no observations (except gcp-001), use the ‘Filter Photos by Markers’ function to enable you to locate
the GCPs in the images. Note – ignore gcp-001 as it does not relate to a cone location!
4.4 Outlier image observations of GCPs

You might notice that PhotoScan estimated that gcp-005 was rather far from its location in the
image and, having pinned the marker, it shows a much greater error than the others. This suggests
that it is not consistent with all the other GCPs. Click the Update button again to re-calculate the
transform. Error now increases overall (~0.17 m), particularly on the GCPs next to gcp-005 (gcp-004
and gcp-006).
Uncheck gcp-005 in the Markers table to remove it from georeferencing calculations and re-run the
update. RMS error on the control points should decrease to ~0.06 m, but error on gcp-005 (now
used only as a check point) will be high – 0.66 m.
This straightforward exploration of the error distribution on GCPs helps identify potential problems in
the data – here, gcp-005 has been identified as being substantially less consistent with the
photogrammetric model than all the other GCPs.
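The leverage a single outlier exerts on RMS error can be illustrated with a short calculation. The per-GCP residuals below are invented, chosen only to mimic the pattern described above:

```python
# Compare RMS marker misfit with and without a suspected outlier GCP.
# Residuals (metres) are invented values echoing the pattern in the text.
import math

residuals = {
    "gcp-002": 0.04, "gcp-003": 0.05, "gcp-004": 0.09,
    "gcp-005": 0.66,  # the suspected outlier
    "gcp-006": 0.08, "gcp-007": 0.04,
}

def rms(values):
    vals = list(values)
    return math.sqrt(sum(v * v for v in vals) / len(vals))

with_outlier = rms(residuals.values())
without = rms(v for k, v in residuals.items() if k != "gcp-005")
print(round(with_outlier, 2), round(without, 2))  # 0.28 0.06
```

A single bad GCP inflates the overall RMS several-fold, which is why unchecking it (demoting it to a check point) clarifies the picture.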
5 Bundle adjustment and camera model

By including the control data within the overall optimisation (the ‘bundle adjustment’), shape and
georeferencing can be optimised simultaneously.
In PhotoScan, bundle adjustment is carried out via the ‘Optimize Cameras’ button on the
Reference pane toolbar. Ensuring that all your GCPs are checked (active), click the Optimize
Cameras button and then, leaving the selection of camera parameters at its default values, click OK.
Following the adjustment, the RMS error on the control points should drop to ~0.13 m.
5.1 Weighting observations

1. To change the weightings, in the Reference pane, click on the Settings button, and
edit the values in the ‘Image coordinates accuracy’ box appropriately (e.g. 1.4 pix
for ‘Marker accuracy’ and 1.3 pix for ‘Tie point accuracy’).
2. Re-run the bundle adjustment, and check that RMS image error values have not changed
substantially. Small changes can be used to update the settings values and the adjustment
run again, if required.
3. As you did previously, see how removing gcp-005 from the bundle adjustment (by
unchecking its box) affects the results. Note down the total error values for control and
check points. Do you think gcp-005 should be included in the adjustment?
The importance of appropriate observation weighting will vary with the relative numbers/precisions
/distributions of markers and tie points and, ultimately, with the accuracy requirements of the
survey. If decimetric accuracy or better is required, then appropriate weighting may well be
important. See James et al. (2017a) for more details and the impacts of inappropriate weighting.
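One way to picture the loop in steps 1–2 is as an iteration that feeds the observed RMS back into the a priori accuracy until the two agree. The `adjust()` function below is a toy stand-in for PhotoScan's Optimize Cameras step, not a real API call, and the 1.3 pix 'true' noise level is an assumption echoing the example value above:

```python
# Toy reweighting loop: set the a priori tie point accuracy to the RMS
# observed after adjustment, re-run, and stop when it stabilises.
# adjust() is NOT a real PhotoScan call, just a stand-in model that pulls
# the reported RMS halfway towards an assumed true noise of 1.3 pix.
def adjust(a_priori_sigma_pix):
    return 0.5 * (a_priori_sigma_pix + 1.3)

sigma = 1.0  # initial guess for 'Tie point accuracy' (pix)
for _ in range(10):
    observed_rms = adjust(sigma)
    if abs(observed_rms - sigma) < 0.01:
        break
    sigma = observed_rms

print(round(sigma, 2))  # 1.28
```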
5.2 Camera model

We will now see what the effect of removing tangential distortion from the camera model will be:
1. On the Adjusted tab, edit the p1 and p2 values to 0.0 and click ‘OK’.
2. Run the bundle adjustment again, ensuring that ‘Fit p1’ and ‘Fit p2’ are unchecked so that
they will not be included in the optimisation.
3. When the adjustment is complete, check the RMS on the control points and check points,
which should now be something like 0.07 and 0.14 m respectively. So, the simpler camera
model has resulted in a very slight increase in the overall error on the control points, but a
substantial reduction (from ~0.66 m) on the independent check point. Thus, if the simplified
camera model is used, the fit to the GCPs appears more generic. Nevertheless, the error on
gcp-005 does remain elevated, so it may well still be an outlier. To resolve this, we’d really
need additional GCPs deployed so that more could be used as independent check points.
More advanced analysis of camera models can be carried out using visualisations accessed through
the Camera Calibration window. In the Camera Calibration window, right-click on the blue-highlighted
camera group (top left) and select Distortion Plot…. This provides plots
of the distortion model and the residuals as well as listing parameter values, precisions and
correlations. A detailed discussion is out of scope here (see conventional photogrammetry literature)
but, ideally, residuals should be small and randomly oriented. For more information on assessing
over-parameterisation in SfM projects see James et al. (2017a, b). Here, I’d suggest using the
simplified camera model and leaving gcp-005 as a check point.
6 Dense matching, and DEM and orthomosaic products

6.1 Dense matching

Now, from the main menu: Workflow → Build dense cloud. In the dialog box that appears, set the
quality to Medium (higher quality gives more points, but is slower), and click OK.
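To see why Medium quality is faster, it helps to sketch the downsampling involved. Assuming, as the PhotoScan manual describes, that each quality step below Ultra High halves the image resolution in each dimension, the effective ground sampling of the dense cloud coarsens accordingly (the 2.5 cm full-resolution GSD is an assumed example, not the survey's value):

```python
# Effective dense-cloud ground sampling per Quality setting, assuming each
# step below Ultra High halves image resolution per dimension.
gsd_cm = 2.5  # assumed full-resolution ground sampling distance (cm/pix)
steps_below_ultra = {"Ultra High": 0, "High": 1, "Medium": 2, "Low": 3}

for quality, step in sorted(steps_below_ultra.items(), key=lambda kv: kv[1]):
    print(quality, gsd_cm * 2 ** step, "cm/pix")
# Ultra High 2.5 cm/pix
# High 5.0 cm/pix
# Medium 10.0 cm/pix
# Low 20.0 cm/pix
```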
To view the result, click the ‘Dense Cloud’ button on the main toolbar:
6.2 Building a DEM

To export the DEM, from the main menu: File → Export DEM… and select the export file type that
you want. At this point, you can change the DEM extent and resolution to suit your requirements.
6.3 Building an orthomosaic image

To export an orthomosaic image, from the main menu use File → Export orthomosaic and select the
export file type you want.
7 Precision maps

In rigorous photogrammetric processing, precision estimates are provided for all optimised
parameters, including the sparse point coordinates. Unfortunately, current SfM-based software does
not generally offer this, but point coordinate precision can be estimated using PhotoScan and a
Monte Carlo approach (James et al., 2017b).
This Monte Carlo precision processing has been carried out for you on the full Flight1 survey. For
your interest, the Python script used (precision_estimates.py) and a processing output log
(_precision_log.txt) are provided in the Section_7_Precision_estimates data folder.
The _precision_log.txt file provides a number of statistics that characterise the overall
precision of the survey. Full details are given in James et al. (2017b), but a few interesting ones to
note are the relative precision ratios given at the end of the file. Here, mean point precision is given
in terms of observation distance, overall survey extent and pixels. The values here support the
overall quality of the survey – mean horizontal precision is ~1 pixel and vertical precision is ~2 pixels.
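These ratios are simple to reproduce. The sketch below shows the arithmetic with assumed example values for flying height, GSD and mean precisions (not the actual Flight1 figures), chosen to mirror the ~1 pixel horizontal and ~2 pixel vertical result:

```python
# Express mean point precision relative to observation distance and in
# pixels. All input values are assumed examples, not the Flight1 log.
flying_height_m = 60.0
gsd_m = 0.025        # assumed ground sampling distance (m/pix)
mean_sxy_m = 0.025   # assumed mean horizontal precision
mean_sz_m = 0.050    # assumed mean vertical precision

print(f"vertical precision ratio ~ 1:{flying_height_m / mean_sz_m:.0f}")  # 1:1200
print(f"horizontal precision ~ {mean_sxy_m / gsd_m:.0f} pixel(s)")        # 1
print(f"vertical precision ~ {mean_sz_m / gsd_m:.0f} pixel(s)")           # 2
```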
Also in the Section_7_Precision_estimates folder are the two output files from the Monte
Carlo processing which can be used to visualise how the survey precision varies spatially:
If you have CloudCompare (or a similar point cloud visualisation application), you can import the
data in these text file outputs as point clouds (import the X, Y, Z fields as point coordinates, the sX,
sY, sZ fields as scalars and ignore the other fields – we won’t consider covariance here). By using the
sZ scalar field to colour the point cloud, you can assess the variation in vertical precision across the
survey, effectively generating a ‘precision map’. Such maps give insight into the survey performance,
and indicate what aspects are limiting the precision achievable (James et al. 2017b).
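If you prefer to inspect the file programmatically before loading it into CloudCompare, the columns can be read with standard Python. The two-point sample below is a made-up stand-in for the real output file, whose exact header layout may differ slightly:

```python
# Read X, Y, Z, sX, sY, sZ columns from a whitespace-delimited precision
# file and summarise vertical precision. The sample content is invented.
import io

sample = """X Y Z sX sY sZ
417250.1 7112340.6 412.2 0.011 0.012 0.024
417251.3 7112341.0 412.4 0.013 0.014 0.031
"""

f = io.StringIO(sample)
col = {name: i for i, name in enumerate(f.readline().split())}
sz = [float(line.split()[col["sZ"]]) for line in f if line.strip()]

print(round(max(sz) * 1000, 1), "mm")  # 31.0 mm
```

For the real file, replace the `io.StringIO` object with `open("_point_precision_and_covars.txt")`.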
[Figure: precision map from _point_precision_and_covars.txt, points coloured by vertical precision (Precision, Z, in mm; colour scale to 50)]
Note:
Precision is shown to generally vary smoothly so, overall, precision is being limited by
the georeferencing. This is because, for the full survey, a large area lies outside the
region covered by the GCPs. As you move away from the weighted centroid of the
control measurements, precision will deteriorate because the effects of uncertainty
in georeferencing scale and angular orientation become amplified.
There are some localised areas which show poorer precision, and these can be
assessed in more detail with the overall survey georeferencing uncertainty removed:
[Figure: precision map from __point_precision_and_covars_shape_only.txt, points coloured by vertical precision (Precision, Z, in mm; colour scale to 50)]
Note:
The precision associated with survey shape only is generally worst at the survey
edges where the number of overlapping images will be smallest. In the survey centre,
image overlap does not appear to substantially limit precision (e.g. there is only some
evidence of image overlap outlines).
Other, more discrete areas of weak precision reflect ground features, so changes in
image texture due to surface variations are having some identifiable effects on
precision.
8 Finish
Having completed this exercise, you should now be able to:
- Load images into PhotoScan, build a georeferenced 3-D model and export associated point clouds, DEMs and orthomosaic products.
- Improve your survey quality by identifying and removing weak images and tie points.
- Identify GCPs that may have unacceptable error.
- Appropriately weight observations within the bundle adjustment (optimisation) in order to maximise the repeatability of a survey's results.
- Consider the influence of differing camera models and carry out basic tests for over-parameterisation.
- Interpret precision maps in terms of the precision-limiting factors affecting a survey and, hence, make suitable recommendations for improving survey precision.
This exercise has focussed on the processing techniques that come after data acquisition but
acquiring appropriate data starts by designing the image acquisition strategy to meet the survey
requirements. Dimensionless estimates of precision can help guide survey design, for example, by
using 1:1000 for mean precision : observation distance (James et al. 2012) as an initial guide. Further
recommendations can be found in Eltner et al. (2016), O’Connor et al. (2017) and Mosbrucker et al.
(2017).
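Applied in reverse, the 1:1000 guide gives a quick bound on observation distance for a target precision (the 5 cm target here is an arbitrary example):

```python
# Use the 1:1000 mean precision : observation distance rule of thumb to
# bound the flying height for a target precision. The 5 cm target is an
# assumed example value.
target_precision_m = 0.05
ratio = 1000  # precision : observation distance guide from the text

max_distance_m = target_precision_m * ratio
print(round(max_distance_m), "m")  # 50 m
```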
Dimensionless parameters also represent a valuable way to report survey quality, for example,
giving precision ratios or precision expressed in pixels. PhotoScan provides some useful parameters
in a processing report that can be generated from the main menu File → Generate Report… For
more detailed analysis, see the precision processing logs generated along with precision maps
(discussed in Section 7; James et al., 2017b). Such metrics should be used to clearly communicate
the quality of your surveys.
Happy flying!
9 References and resources

There is a wide range of additional resources on the web to get you started with PhotoScan
(including tutorials on the PhotoScan website above), although many have somewhat different
suggestions! Those from UNAVCO are recommended:
Structure from Motion guide - Practical survey considerations.
Structure from Motion AgiSoft processing guide
Photogrammetry books
Ultimately, to improve SfM-MVS results through a deeper understanding of photogrammetric
processing, recourse to standard text books is fully recommended; some excellent examples are:
Kraus, K. (1993) Photogrammetry, Vol. 1, Fundamentals and Standard Processes, Dümmlers.
Luhmann, T., Robson, S., Kyle, S. and Harley, I. (2006) Close Range Photogrammetry: Principles,
Methods and Applications, Whittles, Caithness.
McGlone, J. C. (2013) Manual of Photogrammetry, American Society for Photogrammetry and
Remote Sensing, Bethesda.
Wolf, P. R., Dewitt, B. A. and Wilkinson, B. E. (2014) Elements of Photogrammetry with
Applications in GIS, McGraw-Hill Education.