Digital Image Analysis of Rock Fragmentation From Blasting
National Library of Canada
Bibliothèque nationale du Canada
Acquisitions and Bibliographic Services Branch
Ottawa, Canada K1A 0N4

ISBN 0-612-19709-3
To my parents
Abstract
A novel digital image analysis technique to measure the size of fragments on the
surface of a muck-pile is presented in this thesis. The technique takes into consideration the physical characteristics of fragment representation and measurement
problems. Using an adaptive smoothing filter prior to edge detection, each fragment
on the surface is represented by a group of edge segments outlining its boundaries.
These segments are then grouped to form continuous contours.
A multi-layer analysis of the digital image is then formulated where fragments
on the surface are grouped into three layers, each of which is categorized by global
characteristics and is related to other neighbouring layers by local characteristics.
These local relationships between the layers are used to approximate the missing
parts of the fragment contour.
Résumé
This thesis proposes a new digital image analysis technique for measuring the size
of fragments on the surface of a muck-pile. The technique takes into account the
physical characteristics involved in representing and measuring fragments. After
filtering the image of a pile, each fragment on the surface is represented by a series
of segments delimiting its boundaries. These are then grouped to form continuous
contours.
The image of the pile surface is then divided into three layers, the first consisting
of all fully visible fragments. Each layer possesses global characteristics and is related
to the others by local characteristics. The latter serve to establish the hidden
contours of fragments located in the two lower layers.
A detailed analysis of the sieving process serves to establish a relation between
the shape and the size of a fragment, thereby allowing the development of a
multi-variable measure to characterize it. This measure is finally used to establish
the size distribution of the fragments in the pile. The results of this new technique
compare favourably with those obtained by more expensive methods in common use,
demonstrating its effectiveness and justifying its future application.
Acknowledgements
I would like to thank my supervisors: Laeeque Daneshmend, for his guidance and
enthusiasm throughout this work and his consistent support over the past several
years, and Carl Hendricks, for his valuable advice and suggestions on the practical aspects
of fragment measurements. I would also like to thank Gregory Dudek for his comments
and suggestions on image analysis, and Malcolm Scoble for his helpful input on mining
issues.
I would like to thank both Roussos Dimitrakopoulos and Gregory Carayannis for
their guidance and support during the first two years of this research. I would like
to thank Raymond Langlois of the Mineral Processing laboratory and Mohammed
Hijazi of the Department of Chemical Engineering for their help in setting up the
laboratory experiments. I would also like to thank the staff of the Mineral Processing
lab at Queen's University for lending several sieves which enabled me to widen the
range of data used.
Thanks are also due to Mohammed Amjad for reading and discussing several parts
of my thesis, Behram Kapadia and Osama Abu-Shihab for taking the time to proofread
this thesis, and Ameen Maluf and Johann Legault for the French translation of the
abstract. Furthermore, I wish to thank the McGill Centre for Intelligent Machines
(CIM) for use of its computer facilities, and the Canadian Centre for Automation
and Robotics in Mining (CCARM) at McGill University for providing me with the
necessary equipment and the laboratory facilities.
Finally, I would like to express my warmest thanks to my parents for their care
and support, both morally and financially, during the long period of my study. I am
grateful to my wife Magda for her constant support, encouragement and patience.
This work has been partially funded by the Natural Sciences and Engineering
Research Council of Canada (NSERC) under a Strategic Grant, and by the Institute
Contents

Chapter 1  Introduction
  1.1 Motivation
  1.2 Problem Statement
  1.3 Objectives
  1.4 Scope of Thesis
  1.5 Thesis Organization
Chapter 2  Machine Vision and Mining Automation
  2.3.3 Rock Modelling
  2.4 Conclusion
Chapter 3  Preprocessing
  3.2 Image Smoothing
  3.5 Conclusion
Chapter 4
  4.1 Modelling of Fragment Contours
    4.1.1 Contour Representation
    4.1.2 Local Parameters
    4.1.4 Global Parameters
    4.4.1 Interpolation
    4.4.2 Shape Completion
  4.5 Conclusion
Chapter 5
  5.4 Conclusion
Chapter 6  Sampling of a Muck-Pile
  6.3.1 Spherical Model
  6.3.2 Ellipsoidal Model
  6.4 Conclusion
Chapter 7
  7.1 Muck-Pile Description
  7.2 Fragment Measurement
    7.2.1 Preprocessing
    7.2.2 Image Analysis
    7.2.3 Classification
  7.3 Conclusion
Chapter 8
  8.3.2 Kemeny's Method
  8.5.1 Intermediate Results
  8.5.2 Overall Performance
  8.6 Conclusions
Chapter 9  Conclusions
  9.1 Original Contributions
  9.2 Limitations
Appendices
List of Figures

2.1 Static System
2.3 Dynamic System
3.4 Geometric Filter
4.1 Curvature
5.4 Weighting Function versus grid size for spheres and other objects
7.4 Overlapping rocks
7.8 Junction analysis
7.11 Results of applying the contour completion algorithms to the second layer
8.7 Size frequency and distribution of spread rocks using the Virtual Sieving method
8.8 Cumulative size distribution of spread rocks
8.20 Thinning and noise removal of the edge map of the muck-pile image
Chapter 1
Introduction
The open-pit mining process is generally made up of a sequence of unit operations
including drilling, blasting, loading, hauling and crushing. Drilling and blasting,
being the first unit operations, can have a major impact on the performance and cost
of subsequent operations. The prime objective of these two operations is to obtain
optimum fragmentation within safe and economical limits.
The output from the blasting process is dependent on many parameters such as
rock composition, layer thickness, type of explosives, etc. As a result, a quick and
accurate evaluation process is required to assess its effectiveness. In addition, this
evaluation process can be used to monitor blasting, optimize the blast design and
assess loading conditions for scoops and shovels. One of the key indicators of the
effectiveness of a blast is the size of the resulting fragments.
To date, the most accurate method of measuring fragment size is sieving analysis.
The drawback of this method is that it is a time consuming and labour intensive
process. This has led many researchers to use blasting parameters and rock mass
properties to predict the fragment size distribution. Among these approaches are
jointing measurements, empirical formulae, etc. The disadvantage of the prediction
methods is the lack of actual measurement of the fragments, which may result in
inaccurate assessment. Clearly, there is a need for a more efficient way of obtaining
the fragment size distribution than sieving analysis, while providing more accurate
results than the predictive methods.
1.1
Motivation
Most open-pit mining operations employ blasting for primary breakage of the ground.
Inappropriate blasting techniques can result in excessive damage to the wall rock,
decreasing stability and increasing water influx. In addition, they will result in
over and/or under breakage of rocks. The presence of over-broken rocks can result in
decreased wall stability and require additional excavation. In contrast, the presence
of under-broken rocks may require secondary blasting and additional crushing.
Since blasting is a major cost factor, both cases (under and over breakage) create
additional costs, reflected in increased operation and maintenance costs for the
machinery. To establish optimum cost values, it is important that the combined
effect of the controllable blast parameters be understood, consistent with the
ultimate goals of the overall mining operation [116] [70]. This is usually accomplished
with the definition of a set of conditions minimizing total production cost per ton of
rock blasted.
The blasting process has been described in the literature as a nonlinear process
in which several parameters, often difficult to evaluate, dictate the outcome. A set
of twenty different parameters was listed by Atchison [7] as influencing rock fragmentation in blasting. Generally speaking, these parameters can be grouped into
two categories: controllable parameters (explosives parameters), and uncontrollable
parameters (rock parameters).
The controllable parameters, such as the size of the blast, the position and alignment of the holes, the charge distribution, and the delay pattern, have a great influence on fragmentation size and shape. Consequently, the key to blasting control
would be a method to quantify fragmentation size quickly, safely, and accurately.
Figure 1.1 illustrates a typical open-pit mining process. Since the mining process
is sequential in nature, it can be modelled as a set of cascaded black boxes, each representing a unit process (a common representation method used in process control [6]).
This representation is shown in Figure 1.2. As shown in the figure, the blasting process has two sets of inputs, one of which is controllable and the other uncontrollable.
One of the outcomes of this process is the fragments forming the muck-pile, which
is an input to the next process. The next process is loading, followed by hauling.
By monitoring the output of each process, e.g. the profile of the pile, the size of
the fragments, etc., and using it to vary the controllable parameters, one can control
(minimize) the cost of each process and consequently the cost of the overall mining
process.
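The cascaded black-box view lends itself to a simple computational sketch. The following Python fragment is purely illustrative and not from the thesis: the unit-process functions and the cost relations inside them (the specific_charge and bucket_size parameters, and the loading-rate model) are invented placeholders, shown only to make concrete the idea of composing unit processes and varying their controllable parameters.

```python
# Illustrative sketch of the cascaded "black box" model: each unit process
# is a function of the previous process's output plus its own controllable
# parameters. All cost relations below are hypothetical.

def blasting(rock_mass, controllable):
    # Hypothetical relation: a higher specific charge gives finer
    # fragmentation but costs more explosive.
    frag_size = rock_mass["hardness"] / controllable["specific_charge"]
    cost = controllable["specific_charge"] * rock_mass["tonnage"]
    return {"fragment_size": frag_size, "cost": cost}

def loading(muck_pile, controllable):
    # Hypothetical relation: oversize fragments slow the loader.
    rate = controllable["bucket_size"] / (1.0 + muck_pile["fragment_size"])
    return {"cost": muck_pile["cost"] + 1.0 / rate}

def total_cost(rock_mass, blast_params, load_params):
    # Composing the unit processes mirrors the cascade of Figure 1.2.
    return loading(blasting(rock_mass, blast_params), load_params)["cost"]

rock = {"hardness": 120.0, "tonnage": 1000.0}
print(total_cost(rock, {"specific_charge": 0.8}, {"bucket_size": 6.0}))
```

Minimizing `total_cost` over the controllable parameters is then exactly the feedback loop sketched in the figure: monitor each process's output and adjust the controllable inputs.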
Figure 1.2: The mining process modelled as cascaded unit processes (blasting, loading, hauling, mineral processing), with controllable and uncontrollable parameters as inputs and an updating feedback loop.
1.2
Problem Statement
Assessment of blasting performance has been pursued in many ways in recent years
with the aim of providing a tool for blast optimization [116] [126] [70]. The optimum blast is characterized by the size distribution of the fragments (Nielsen [116]).
The problem of fragment size distribution measurement can be decomposed into two
subproblems: namely, sample measurement, using either what is visible from the surface of a muck-pile or during the loading process, and estimation of the overall size
distribution.
The first subproblem, namely the sample measurement, deals with two major
issues: what and how to automatically measure fragments. Depending on the type
of sensors used in the data acquisition process (structured or unstructured light, or
stereo), depth information can play a major role in these measurements. In the
mining industry, the most commonly used sensors are TV cameras, interfaced with
computers by digitizing boards known as frame grabbers. The output is a digital
image referred to as an intensity image.
Using intensity images, many mining researchers implicitly use the projected area
of fragments, identified by their contours, as a size descriptor. This is based on the assumption that fragments can be modelled as spheres, regardless of their actual shapes.
Consequently the diameters of such spheres are simply calculated from the projected
areas.
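The spherical assumption reduces to a one-line formula: a fragment with projected area A is assigned the diameter of the circle (sphere cross-section) of equal area, d = 2·sqrt(A/π). A minimal sketch, with a hypothetical area value for illustration:

```python
import math

# Area-equivalent diameter under the spherical assumption described above:
# the diameter of the sphere whose cross-section has the same area as the
# fragment's projection.

def equivalent_diameter(projected_area):
    return 2.0 * math.sqrt(projected_area / math.pi)

# A fragment projecting roughly 78.54 cm^2 maps to a 10 cm "sphere".
print(equivalent_diameter(78.54))
```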
The second issue in fragment measurement is the segmentation of the bounding
contours, which is another challenging problem. In general, digitized images contain a great deal of redundancy. To overcome this problem, many researchers traced
fragment contours manually. This method results in subjective measurements which
are labour-intensive and very tedious to obtain. Alternatively, some researchers have
resorted to classical edge-detection techniques which were developed for other applications, and hence do not cater to the specifics of the muck-pile image processing
problem.
The solution to the second subproblem, namely the estimation of the overall size
distribution, is highly dependent on the sampling method, the type of models used
and the accuracy of the measurements obtained from the samples. In general, the
fragment sizes are associated with their frequency of occurrence. The frequency of
occurrence of sizes is determined either by a number, when a number size distribution
is considered, or by weight. The weight size distribution is obtained when the size
frequency is measured on a weight basis.
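The distinction between number and weight size distributions can be made concrete with a toy example. The diameters below and the cubic weight model (weight proportional to d cubed, i.e. equal-density, sphere-like fragments) are illustrative assumptions, not data or a measure from the thesis:

```python
# Toy contrast of the two frequency conventions: the same fragments yield
# very different distributions by number and by weight.

from collections import Counter

diameters = [2.0, 2.0, 2.0, 2.0, 8.0]   # cm; four small fragments, one large
counts = Counter(diameters)

total_n = len(diameters)                 # number basis
total_w = sum(d ** 3 for d in diameters) # weight basis (weight ~ d^3 assumed)

for d in sorted(counts):
    number_frac = counts[d] / total_n
    weight_frac = counts[d] * d ** 3 / total_w
    print(d, round(number_frac, 3), round(weight_frac, 3))
```

Here the small fragments dominate the number distribution while the single large fragment dominates the weight distribution, which is why the two conventions must not be mixed.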
1.3
Objectives
In addition, a new measure characterizing each fragment will also be formulated which
will be used to estimate the size distribution of the fragments.
1.4
Scope of Thesis
Recently, the field of machine vision has enriched many areas of science with its
tools and algorithms, making possible measurements that could not be performed
otherwise. This thesis is concerned with the utilization of machine
vision techniques to segment the boundaries of rocks forming a pile. It also considers
the estimation of the size distribution based on the behaviour of rocks during the
classification process (the sieving process).
One of the objectives of this study is to segment the bounding contour of individual
fragments in order to perform the measurements. Due to the nature of rock fragments (heavily textured objects), a smoothing filter is needed to remove unwanted information.
Following smoothing, an edge detection process is applied to the smoothed image.
This results in an image containing curved segments of parts of the bounding
contours.
Estimation of the missing part of the boundary of a rock resulting from overlapping
can be a major factor in the accuracy of the overall measurements. Consequently,
the challenging problem of overlapping fragments as a source of error in measuring
the occluded fragment will be addressed.
The problem of determining the true size distribution of blast fragmentation from
the surface of a pile of fragments has been studied by many mining researchers. In
this thesis, an attempt is made to derive a reliable measure of fragments, based on
three-dimensional space mapping. There are many factors that control the sieving
process; among them is the geometry of the object. Consequently, a new function
(called the Weighting Function) will be introduced and used to link the geometry
of the object and the sieves used. This can be viewed as a probability function of
the passage of the object through the grid, based on the analytical logic defined by
Jeffreys [78].
This model will be used as an interpretation of the sieving process and will be
referred to as "Virtual Sieving". In addition, Virtual Sieving will be tested and
compared with the existing techniques in this area.
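The thesis derives its Weighting Function analytically in chapter 5. Purely as an illustration of the "probability of passage" idea, the Monte Carlo toy below models a fragment as a box dropped in a random axis-aligned orientation onto a square mesh; the box shape, the axis-aligned orientations, and the pass condition are all simplifying assumptions for this sketch, not the thesis's model:

```python
import random

# Crude Monte Carlo stand-in for a passage probability: a box with
# dimensions (l, w, h) passes a square aperture of side s only if the two
# dimensions presented to the aperture both fit.

def passage_probability(dims, s, trials=10_000, seed=1):
    rng = random.Random(seed)
    passed = 0
    for _ in range(trials):
        d = list(dims)
        rng.shuffle(d)             # random choice of which axis points down
        if d[0] < s and d[1] < s:  # the footprint must fit the aperture
            passed += 1
    return passed / trials

# An elongated fragment passes a 5 cm mesh only in some orientations,
# which is exactly why geometry must enter the size measure.
print(passage_probability((8.0, 4.0, 3.0), 5.0))
```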
1.5
Thesis Organization
In chapter 2 a literature survey of the applications of machine vision in mining automation is presented.
The first part of this study (chapters 3 and 4) considers the problem of fragment
recognition and measurement from intensity images. Chapter 3 describes the methodology used in smoothing and edge detection to extract fragment contour segments.
Chapter 4 addresses the problem of segment grouping, and fragment identification.
In addition, a method for reconstructing missing parts of overlapped fragments is
presented.
The second part (chapters 5 and 6) considers the relation between the fragment
geometry and sieve analysis. Chapter 5 contains a detailed analysis of fragment
size, and defines the concept of fragment measurement. It contains an analytical
derivation of the Weighting Function, which will be used in chapter 6 to estimate the
size distribution. Chapter 6 contains an overview of fragment size distribution and
presents the link to Virtual Sieving.
The third part of the thesis (chapters 7 and 8) contains experimental results and
a comparison of the Virtual Sieving method with other methods. Chapter 7 presents
the results of implementing the image analysis algorithm in a laboratory environment.
Chapter 8 contains the comparison of the results obtained using the Virtual Sieving
Chapter 2
Machine Vision and Mining
Automation
Over the past decade, there has been a significant trend in the mining industry towards automation. In spite of being common to all industries, process automation
faces some of its greatest challenges in mining, under the special environmental conditions in both open-pits and underground mines.
One of the most important parts of the automation process is the sensing device
used to acquire data. Some processes might require non-contact sensors either to
replace tactile ones, or because non-contact sensors are the only tools available
to automate such processes.
Sensing devices vary depending on the type of application under study. For
example, there are some mining applications where ultrasonic devices have been used
in the navigation systems mounted on automatic guided vehicles. Laser technology
has also been used in some mining applications to measure distance, as in
the case of slope monitoring. TV cameras have also been used for identifying and
locating landmarks and objects. Moreover, some applications, such as
rock modelling, require combining more than one of these devices to obtain accurate
measurements.
In general, there are three criteria that determine the selection of a specific sensor.
The first one is its flexibility, i.e., the ability of the device to handle a variety of
situations and environmental conditions. The second is safety issues, such as radiation.
Finally, the cost of operating and maintaining such a sensor also plays a role in the
selection process.
Associated with sensors are the algorithms used to interpret and analyse the information acquired. Since visual information is more comprehensive and easier to
understand, much of the information gathered from the sensors is interpreted and
presented using algorithms that simulate special-purpose visual systems. These algorithms are grouped under what is called computer or machine vision.
This chapter starts with a general introduction to digital image terminology which
will be used throughout this thesis. This is followed by a review of some of the
ongoing research in the mining industry in which machine vision plays a major role
in the automation process. Finally, a literature survey of previous work on fragment
measurement and estimation of size distributions from digital images is presented.
2.1
The visual images perceived in our everyday life are functions of four variables: the
position of the light source(s) used to illuminate the scene, the position of the viewer,
the reflectance of the surface(s), and the geometry of the object in the scene [101]
[90] [131]. Although human beings do not seem to have any problems inferring the
world's structure from visual information, the complexity of the image information
process makes the problem of computerized scene reconstruction a very difficult one.
Marr [101] defines the word "vision" as a process that produces a description
that is useful to the viewer from images of the external world and is not cluttered with
irrelevant information. The differences between human vision and computer-based
image analysis can be considered both locally and globally.
On the local level, the first important difference between human vision and computer-based image analysis lies in the way images are acquired. Different imaging
sensors have been developed to acquire various types of measurements. Therefore
one has to begin by understanding how these measurements are created. The most
common sensor is the standard video camera. Solid-state cameras are often referred to
as CCD cameras, after the charge-coupled device (CCD) arrays they use as
sensors. Each detector functions as a photon counter, as electrons are raised to the
conduction band in an isolated well. The signal read out from each line of detectors
then produces an analog voltage.
The analog voltage produced by the camera, corresponding to the brightness at
different points in the scene, is then digitized with analog-to-digital converters which
sample the signal and produce a number (typically between 0 and 255)
that represents the brightness (intensity). The digitizing board, known as the frame
grabber, stores this value in computer memory. Therefore a digital image can be
described in terms of a two-dimensional intensity function I(x, y) of two discrete
variables x and y. The value
of I at any point is proportional to the brightness of the image at that point [90]
[131].
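The intensity-function view of a digital image can be sketched in a few lines; the 4x4 toy image below is of course hypothetical, standing in for a frame-grabber output:

```python
# A digital image as a two-dimensional function I(x, y) of discrete
# variables, each value an integer brightness (0 = black).

# A tiny 4x4 "frame": one bright square on a dark field.
image = [[0] * 4 for _ in range(4)]
for y in (1, 2):
    for x in (1, 2):
        image[y][x] = 200   # bright region

def I(x, y):
    """Intensity at discrete coordinates (x, y)."""
    return image[y][x]

print(I(1, 1), I(0, 0))     # bright pixel versus dark background
```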
On the global level, machine vision is a combination of three computer-oriented
areas, namely image processing, pattern recognition and artificial intelligence. It
focuses on the computer analysis of one or more images, taken with single- or multi-band sensors. The analysis recognizes and locates the position and orientation of
objects, and provides a sufficiently detailed symbolic description or recognition of
those imaged objects deemed to be of interest in the three-dimensional environment.
In spite of this area's very limited capability when compared to human vision, it
2.2
Recent advances in computer architecture, improved software reliability, and the availability of sophisticated image acquisition devices have opened a new frontier for novel
computer vision applications. Using these advances in mining process automation
can result in cost reduction of the overall mining process, an increase in productivity,
2.2.1
as sylvanite, carnallite and salt. Using the information about the ore distribution,
Orteu et al. adapted path planning algorithms for a computer controlled cutting
boom. The goal of their research was to develop a system capable of recognizing the
mineral distribution on the face and to produce a face map to determine the optimal
cutting trajectory that should be performed in an automatic mode under control of
the system. Their road-header was equipped with cameras and encoders, actuators
and control equipment in order to automatically perform the cutting operation.
Nguyen and Cohen [114] proposed the use of a texture-segmentation based technique to extract the ore distribution map from visual data of the cutting face in
an underground mine.
region processes. In their work, the segmentation problem was then formulated as
a Bayesian estimation procedure, which they decomposed into local decisions. The
advantage of this method is that it allows the development of a highly parallel and fast segmentation algorithm. The proposed technique was tested in an underground potash
Franklin and Maerz [49] applied image processing algorithms to measure block
sizes, spacing and orientation. Using a TV camera and a digitizing board to acquire
the image, Franklin and Maerz applied the gradient operator to extract the edges.
They then used directional filters to smooth the image. During the smoothing process,
a measure of the roughness was extracted for each joint. Then, forming polygons by
interpolation and extrapolation of edges, a measure of persistence was extracted.
2.2.2
factors, depending on the particular type of vehicle, to remove the human operator
from these machines. One way to replace the human operator is to use a machine
vision system.
A study by Hurteau et al. [2] resulted in a system designed for underground
vehicle guidance which uses two video cameras: one to evaluate the relative position
of the vehicle from the optically reflecting surfaces (guidelines) as the vehicle moves
forward, and the other for the reverse direction. The vehicle is expected to follow
an optical line composed of a highly efficient retro-reflector installed on the roof of a
haulage drift. Hurteau et al.'s system is composed of several functional modules, two
of which are the guide-path module, used to detect the guide-path defined by
the guideline, and the milestone module, used to detect and identify landmark signs.
The guide-path module used by Hurteau et al. is composed of three major components:
1. The optical line network
3. The optical line software, which can recognize branching, end points, and milestones in an optical line network system.
The optical line detector consisted of a CCD video camera and a lighting source
encapsulated in a single protection shield. The video signal of each camera was
connected to a frame grabber. Through the grabber, the computer has access to any
part of the video picture surface. Image enhancement was provided at the hardware
level through a coaxial lightport ring. The experiments were performed with a
Wagner ST-5 diesel-powered LHD already equipped with a Nautilus remote control
system.
Takahashi et al. [151] proposed the use of a machine vision system mounted on an
LHD in mining environments to increase the efficiency of the loading process. Their
vision system consisted of a CCD camera, a laser pointer and a light source. The
2.3
criteria; among them, the fragment sizing, the diggability of the broken rock and the
stability of the new face created by the blast. One of the constraints that determines
the applicability of a fragment measurement technique is its speed. In other words,
fragment measurement must not slow the overall mining operation.
This section groups the work done in this area into three categories: fragment
measurement on a static system, fragment measurement on a dynamic system, and
fragment modelling.
2.3.1
By Static System, we mean that the digital images of fragments are acquired directly
or indirectly from static muck-piles (see Figure 2.1). In the direct method, a TV
camera is used to acquire the digital image directly from the surface of the muck-pile. On the other hand, the indirect method bases its measurement on information
obtained from photographs of the muck-pile and always requires manual editing.
Figure 2.2 shows a schematic diagram of fragment measurement methods for the
static system.

Figure 2.1: Static System
It is important to note that both the direct and indirect approaches, as currently
developed, rely upon human intervention to decide on the outlines and contours of
fragments. In purely manual approaches, fragments have to be traced by hand, which
is very time-consuming [115]. In semi-automated (computer-assisted) approaches,
image processing algorithms are used to perform edge detection, resulting
in an image in which contours are defined [42] [96] [124] [46] [80] [21]. However, such
computer-assisted techniques assume that the human operator will then select which
of these contours actually corresponds to a rock fragment.
Whichever approach is chosen, direct or indirect, it results in a digital representation of fragment contours. From these contours, measurements of the fragments'
size are obtained. Nie and Rustan [115] manually digitized each fragment, and used
the exposed area as a fragment descriptor. Based on the assumption that fragments
are uniform, Nyberg et al. [120] used the Sobel operator to obtain the fragments'
contours, and computed what they called the typical diameter as "the distance taken
at the same place on every fragment to represent each fragment". Maerz et al. [96]
manually traced each fragment on transparencies and then scanned them using a
CCD camera. Assuming the fragments are spherical, they used the area equivalent
Figure 2.2: Fragment measurement methods for the static system: direct (surface scan, measured manually on site by ruler or sieve) and indirect (photographic methods, with fragment contours obtained by manual tracing on transparency and point digitization, by computer-assisted editing, or automatically from the camera image).
a low pass filter; this may result in diluting parts of the contours, which would lead
to inaccurate measurements [69].
There has been some research on fragment assessment in underground mines [69].
Nevertheless, the constraints on acquiring images are more restrictive due to the lack
of proper lighting conditions and space. These researchers followed in the footsteps of
their colleagues, using the indirect methods for estimating fragment size in open-pit
mines.
2.3.2
The major difference between the static and dynamic systems lies in the method
used for acquiring the digital image. As presented earlier, digital images of the static
system are intensity images of static piles representing two-dimensional projections
of individual fragments. In the dynamic system, fragment movement is involved, e.g.
on top of a moving conveyor belt (see Figure 2.3).
Lange [87] utilized a heuristic search to complete the missing parts of the contours.
From these contours, the chord length was measured and used to estimate the size
distribution.
Another system to measure the size distribution of rocks on a conveyor belt was
proposed by Cheung and Ord [23]. Using active stereo vision, their system consisted
of a TV camera, a frame grabber, and a light projector which was used to project
a stripe of light on the surface of the conveyor belt. The image of the stripe was
captured by a camera placed on top of the conveyor belt. The fragments were then
isolated by tracing the stripe image, and a three-dimensional size distribution was
obtained by recording the chord length of each stripe. From the deviation of the
light stripe from the neutral position, Cheung and Ord were able to determine the
surface of the rock fragment. As the light stripe was projected over the fragments on
a moving conveyor belt, a sequence of pictures were captured by adjusting the time
between the capture of each frame. In order to capture multiple stripes, so that more
larger mean size and a smaller standard deviation than the total distribution. Since
the camera cannot detect fine fragments because of limited resolution, the Rosin-Rammler distribution [135] [2] was used to infer the percentage of fines from the
distribution of the observed fragments.
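The Rosin-Rammler distribution gives the cumulative mass fraction passing size d as P(d) = 1 - exp(-(d/d0)^n), with characteristic size d0 and uniformity index n. Assuming hypothetically fitted parameter values (the numbers below are made up, not from the cited study), the unseen fines fraction below the camera's resolution limit can be read straight off the curve:

```python
import math

# Rosin-Rammler cumulative fraction passing size d.

def fraction_passing(d, d0, n):
    return 1.0 - math.exp(-((d / d0) ** n))

d0, n = 20.0, 1.2        # hypothetical fit to the visible coarse fragments
resolution_limit = 2.0   # cm; smallest size the camera resolves (assumed)

# Estimated mass fraction of fines the camera cannot see.
print(fraction_passing(resolution_limit, d0, n))
```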
A recent study by researchers at the U.S. Bureau of Mines [59] resulted in an on-line dynamic system to measure the size distribution of crushed and broken taconite
ore using digital image analysis techniques. The system was originally developed
by Grannes [78] to measure the size distribution of slowly moving spherical taconite
pellets.
In their study, Grannes and Zahl [59] proposed to perform measurements of ore
pieces either:
moving on a conveyor belt including loading and discharge points, or
being dumped into the primary crusher from the mine rail haulage system.
The similarity of these two scenarios is due to the fact that the dumping action of
ore from a side-dump car can be viewed as ore sliding on a metal plate. Given the
opportunity, individual rocks will fall to their lowest energy state with the shortest
dimension perpendicular to the plane.
The Grannes and Zahl [59] system consisted of a CCD camera, a digitizing system, a 386 personal computer, a light source and image interpretation software. The
image interpretation software combined an averaging smoothing filter and a first-derivative edge detection mask (Sobel operator) to outline the edges of the rock and
prototype of the Grannes and Zahl [59] system which was installed at the Minntac #1
crusher to measure the size distribution of ore as it was dumped into the crusher.
The system performance was not degraded by wet or dusty ore or by the presence of
snow in the ore. It did however tend to measure frozen chunks as a large rock.
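The first-derivative edge detection mask (Sobel operator) named above can be sketched as follows; this is a generic illustration of the operator, not the Grannes and Zahl implementation:

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel first-derivative gradient magnitude. For simplicity the
    one-pixel border of the output is left at zero."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # horizontal derivative (responds to vertical edges)
    gx[1:-1, 1:-1] = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
                      - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    # vertical derivative (responds to horizontal edges)
    gy[1:-1, 1:-1] = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
                      - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
    return np.hypot(gx, gy)
```

On a vertical intensity step the magnitude peaks on the two columns adjoining the boundary and is zero inside the uniform regions.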
2.3.3
Rock Modelling
In underground mines, many repetitive tasks are executed by human operators working in harsh environmental conditions. The rock breaker is an example of this type of
process used in underground hard rock mining. The rock breaker is a four degree of
freedom manipulator located in front of a grizzly grid. Its function is to break oversize rocks dumped on the grizzly
by trucks or LHD vehicles. The rock breakers are manually controlled by an operator
located in a cabin from where he maintains visual contact with the grizzly in the
working area. The operators can be exposed to a high level of noise, dust, vibration
and flying rock chips.
In an attempt to automate the rock breaker, Hurteau et al. [il] used a TV camera
to detect and locate the rocks on the grizzly, and tactile sensors to obtain the third
dimension (the height) by contacting the rock. By combining this information a
three-dimensional model of the rock was obtained. The only assumption Hurteau et
al. [il] made was that rocks have uniform surfaces.
Cheung et al. [24] addressed the same problem using a laser range finder, where
the coordinates of discrete points (x, y, z) on the surface were measured. Based on the
assumption that rocks possess smooth surfaces, Cheung et al. decomposed the muck
pile into regions, each corresponding to the surface of individual rocks. To obtain a
three dimensional shape for each rock, super-quadric models were used to estimate
the rock geometry.
Choi et al. [26] developed the perception module of a system to collect rock from
another planet. They developed a rock sampling system including a robot arm, a
range finder and a small terrain mock-up containing sand and small rocks. The goal
of the rock sampling system was to identify, locate and pick rocks from the terrain.
The process was started by taking a range image of the scene then extracting features
from the image. These features were surface features such as surface discontinuities
that were used to extract the object boundaries. Then the contours of the objects
in the scene were extracted. Based on the concept of deformable contours, the set
of points enclosed by the contour of an object was approximated by a super-quadric
surface. The parameters of the surface that approximate each object on the surface
2.4
Conclusion
From this overview, it is evident that there are many potential applications of machine
vision in the area of mining automation. However, industrial utilization does not
appear to be substantial. It is believed that more research is needed in this area
to achieve practical, usable solutions in actual mining conditions. This requires the
utilization of the recent achievements in both the instrumentation and algorithms of
machine vision. In addition, a more precise and appropriate model of the process
itself is required.
Chapter 3
Preprocessing
Rock segmentation from images of muck-piles plays an important role in the fragment
measurement process. At this stage, individual rocks are extracted from the images
for subsequent analysis and calculations. During the last two decades, many image
segmentation techniques have been developed. They are based on one of the two basic
properties of gray level values: similarity and discontinuity. Those utilizing the first
property are termed region-based techniques (thresholding and region-oriented methods) [52] [138], while those utilizing the second property
are referred to as boundary estimation or edge detection [34] [131]. Application of the
first category of techniques on images of rock piles can be found in [53] [54] [80], and
of the second one in [59] [12]. In this thesis, the second category will be used for rock
segmentation.
In an ideal environment, i.e. high contrast and noise-free images, edge segments
can be detected using gradient operators. The segments are then joined to form
closed boundaries using an edge linking algorithm. However, images of natural scenes
are usually noisy, containing objects with textured regions and fuzzy boundaries.
Consequently, a smoothing process is needed to reduce the noise, homogenize the
detection methods will also be presented. Finally, analysis of both Crimmins' filter
and Canny's edge detector and the result of applying them to images of muck-piles
are presented.
plane R where R ⊂ ℝ². Using the plane coordinates, I(x, y) is the intensity of the
light at a point on R with coordinates (x, y). The light reflected off the surfaces of various objects (in this case rocks) O_i visible
from P will strike R in various regions R_i, where R_i ⊂ R.
If an object O1 lies partially in front of another object O2 as seen from P (see Figure 3.1), and some of
the object O2 appears as the background to the sides of O1, then the open sets R1 and
R2 will have a common boundary (the "edge" of object O1 in the image defined on R),
and one usually expects the image I(x, y) to be discontinuous along this boundary.
Other discontinuities in I will be caused by discontinuities in the surface orientation of visible objects (e.g. "edges" of a cube), discontinuities in the objects' albedo
(i.e. surface markings) and discontinuities in the illumination (e.g. shadows). In reality, natural objects in general and rocks in particular are textured and not smooth,
and surface marking occurs in misleading forms (see Figure 3.2). In addition, shadows
are not true discontinuities, and the measurement of I always produces a corrupted,
noisy approximation of the true image I.
In spite of this, the image I(x, y) can be modelled (on a certain scale and to a
certain approximation) by a set of smooth functions f_i defined on a set of disjoint
regions R_i covering R, as will be presented later in this chapter.
3.2
Image Smoothing
Rock fragments do not possess smooth surfaces. This usually results in a "noisy"
image. To be able to extract boundaries of individual rocks, preprocessing of the
image is needed to minimize high frequency intensity variations due to the texture.
The purpose of the smoothing process is to eliminate both weak edges and the
texture as much as possible while preserving the boundaries of the rocks. In the
literature, many filters have been developed to smooth noisy images. The problem
associated with these filters is that many do not preserve image features such as
the true edges. For example, in applying a low pass filter in the frequency domain,
occlusion boundaries can be lost to the background in some cases.
In the spatial domain, smoothing can be achieved by applying an operation such
as averaging, through a mask, to the image [56]. The gray level of the pixel at the
centre of this mask is replaced by the weighted average of the gray levels of the pixels inside the mask.
The mask coefficients are either equal, or decrease from the centre pixel to the outside.
These masks do not take into consideration the change of image content. They will
smooth the image, but at the same time, they also blur the sharpness of the objects'
boundaries.
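The mask-based averaging just described can be sketched as follows; the 3×3 weights below (decreasing from the centre outward and summing to one) are illustrative, not taken from a specific reference:

```python
import numpy as np

# Illustrative 3x3 weighted averaging mask; coefficients decrease from
# the centre pixel outward and sum to one.
MASK = np.array([[1.0, 2.0, 1.0],
                 [2.0, 4.0, 2.0],
                 [1.0, 2.0, 1.0]]) / 16.0

def smooth(img, mask=MASK):
    """Replace each pixel by the weighted average of its neighbourhood
    (neighbourhoods wrap around at the image border)."""
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out += mask[i, j] * np.roll(np.roll(img, 1 - i, axis=0), 1 - j, axis=1)
    return out
```

Because the weights are fixed everywhere, a single bright pixel is spread over its neighbourhood, which is exactly the edge-blurring behaviour noted above.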
There are also adaptive smoothing methods, in which a versatile operator which
adapts itself to the local topography of the image is used. Chin et al. [25] and Mastin
[103] reviewed and evaluated some adaptive smoothing methods. In this section we
will review some concepts which form the basis of many of these methods, including
the newest approaches.
Graham [5i] devised a method that blurs more in uniform regions of the picture
than in the busy part. He used the second difference computed on the nearest neighbours as a local measure of the level of detail. This method replaces each point by a
weighted average of zero or more of its neighbours, depending on whether the values
of the second partials fall below or above a threshold, which is typically two percent
of the range.
Lev et al. [89] proposed similar iterative weighted averaging methods. In particular, they proposed applying a weighted mask at each point, whose coefficients were
based on an evaluation of the differences between the value at the centre point and
the values of its neighbours. A similar approach was used by Wang et al. [157], in
which the weighted coefficients are normalized gradient-inverses between the current
point and each neighbour. Another method by Davis and Rosenfeld [35] is based on
selecting the neighbour points which have the value closest to the central point and
as a noise peak and its value is replaced by an average of the gray tone values of the
neighbourhood pixels. In order to estimate the local gray tone statistics, an assumption is made that the neighbourhood region is described by a linear or quadratic facet
surface model. It is also shown that this method can be successfully applied to scan
line noise removal by using a one dimensional (horizontal or vertical) neighbourhood.
Geman and Geman [55] proposed using simulated annealing, which is computationally expensive. They demonstrated the results of applying their algorithm on images with very few gray levels. Blake and Zisserman [li] proposed a different method
which aims at overcoming the difficulty of local operator approaches by introducing
weak continuity constraints to allow discontinuities in a piecewise continuous reconstruction of a noisy signal. The drawback of their method is the long computation
time required.
The idea of casting adaptive smoothing in terms of nonlinear diffusion was recently
addressed by Perona and Malik [129]. In their method, they allow the image to evolve
over time via the diffusion equation [68J. The resultant image is diffused most where
the gradient is smallest and least where the gradient is largest. As time passes, the
image is smoothed within regions of uniform intensity, but not between regions. In
addition, edges are enhanced due to the smoothing of regions on either side of an
edge.
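The Perona-Malik scheme just described can be sketched in a few lines. The conduction function g = exp(−(d/κ)²) and the parameter values below are one common choice, not necessarily those used in [129]: g is close to one for small intensity differences (diffusing inside regions) and close to zero across strong edges (preserving them).

```python
import numpy as np

def anisotropic_diffusion(img, iterations=30, kappa=20.0, dt=0.2):
    """Perona-Malik nonlinear diffusion sketch: smooth where the local
    gradient is small, preserve it where the gradient is large.
    Parameter values are illustrative."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(iterations):
        # intensity differences to the four nearest neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)  # conduction coefficient
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Run on a noisy two-region image, the variance inside each region falls with every iteration while the step between the regions survives essentially intact.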
Saint-Marc et al. [139] proposed a smoothing filter, which is based on iteratively
convolving the signal to be smoothed with a very small averaging mask whose coefficients reflect, at each point, the degree of continuity of the signal. Convergence of
their algorithm may take an extremely large number of iterations.
Crimmins [32] developed a nonlinear filter having the property of smoothing the
speckles and preserving features. It was first used in assisting radar image interpretation. This filter was selected as the basis of image smoothing for preprocessing in this
research. One of the motivations for using this filter was the great similarity between
aerial images and the image of the rock pile. In addition, it is simple to implement.
In representing the intensity image in 3-D space, speckles appear as narrow winding walls and valleys. To illustrate this, Figure 3.3 (a) shows an image of a rock,
(b) its 3-D graphical representation, and (c) a vertical slice of the surface.
The body of the rock appears as a wide high plateau in (c). The geometric filter,
through iterative repetition, gradually tears down the narrow walls and fills up the
narrow valleys. It also tears down high plateaus, which are desired to be preserved.
However, it reduces narrow walls and valleys faster than it reduces the even wider
plateaus. In general, the wider any feature is, the more slowly it is reduced. Thus,
only a few iterations are required to reduce the narrow speckle walls and valleys, and
these few iterations have very little effect on the wider, even plateaus (hence rock
shape is preserved).
3.3
Edge Detection
The edge detection process serves to simplify the analysis of images by drastically
reducing the amount of data to be processed, while, at the same time, preserving
useful information about the boundaries. The underlying concept is that abrupt
changes in intensity provide a sufficiently rich source of features that capture the key
aspects for subsequent image analysis, yet at a considerably reduced size. The best
known example of such features are step edges, i.e. contours, where the light intensity
changes relatively abruptly from one level to another. Such edges are often associated
with object boundaries, changes in surface orientation, or material properties [98] [99].
Edge images contain most of the relevant information in the original gray-level image
in cases where the information is mostly contained in the changing surface material,
in sharp changes in surface depth and/or orientation, and in surface texture, colour,
or grayness.
Many methods for edge detection in noisy images have been proposed, such as the
Robert gradient, Sobel operator, Prewitt operator, facet model and Laplacian operators [134] [65] [62] [131]. Overviews of some schemes can be found in [34] [127] [153].
The more recent detection algorithms can be grouped into three main categories,
namely: the optimal operator, multiscale approaches and sequential contour tracing
techniques. These groups validate the detection of local features by considering a
more global context.
For optimal operators, a single edge is considered. The goal is to find the optimal
filter (in terms of signal to noise ratio) for the detection of such an edge. Shanmugam
et al. [145] defined an edge as a step discontinuity between regions of uniform intensity
and showed that the ideal filter is given by a prolate spherical wave function. Marr
and Hildreth [99], extending the work of Marr and Poggio [100], convolved the signal
with a rotationally symmetric Laplacian of the Gaussian mask and located zero-crossings of the resulting output. In their work, they mentioned that a multiple scale
approach is necessary, pointing out the difficult problem of integration. Haralick [64]
located edges at the zero-crossing of the second directional derivative in the direction
of the gradient, where derivatives were computed by interpolating the data. In [63]
Haralick et al. extended the facet model to the Topographic Primal Sketch. Canny
[20] proposed solving the problem by deriving, using variational methods, an optimal
operator which turns out to be well approximated by a Derivative of Gaussian mask.
Nalwa and Binford [112] proposed an edge detector which fits, at each point, a set
of surfaces within a window and accepts the best surface, in the least squares sense,
which has the fewest parameters.
For multiscale approaches, as noted by several authors, automatic adjustment of
the size (or scale) parameter is difficult. Hence using multiple scales should provide
a reasonable answer. This idea is based on some physiological observations [101]
for a few scales, but the integration of these discrete scales is an open problem.
Instead of using discrete scales, Witkin [163] proposed a continuum of scales and
showed that, at least in one dimension, the interpretation of the multiscale response
made the important information explicit. In the case of more complex signals, the
discretization of the formulation leads to the need for a large amount of memory
allocation, as in edge focusing [14]; otherwise, heuristics need to be applied to establish
a correspondence between scales. This was done with some success by Asada and
Brady [5] for two dimensional curves in their Curvature Primal Sketch. In his paper
[20], Canny defined a set of heuristic criteria for the integration of multiple size masks,
and gave promising results for two scales.
More recent edge detectors, motivated by the need for more recognizable or more
stable contour images, search instead for extremal points of the light intensity distribution known as valleys and ridges, or build up a composite edge representation made
up of a union of step edges, valleys, and ridges [29] [130] [51] [128]. The composite
edge images do not necessarily contain the subset of edges that are stable against
changing illumination; they generally look better than the step edges alone, but that
varies considerably depending on the specific object.
The third category of edge detection algorithms searches the image or a filtered
version of the image for patterns of image intensity that may be edges [107] [102].
These algorithms combine edge detection with edge linking. The analysis of the
patterns of image intensity can be very elaborate. These algorithms are usually used
only in situations where it is necessary to find edges in images with poor quality.
The work in this thesis will follow this third category of edge detection techniques.
Canny's filter was selected to extract features of the muck-pile. One of the problems
of this filter is that it may displace the true location of the edge and may also fail to
detect some edges. The edge linking algorithm used in this thesis will be described
in detail in chapter 4.
3.4
In this section, a proposal to utilize Crimmins' geometric filter for smoothing followed
by Canny's edge detector to extract features of rocks forming a pile is presented,
starting with the analysis of Crimmins' filter and its implementation. The theory of
Canny's filter is then presented. Finally the results of applying both filters to images
of rock piles are presented.
Geometric filter designs, of which Crimmins' is an instance, are based on the use
Figure 3.4: Geometric Filter (a) Curve (b) Umbra of curve (c) Complement of umbra
Figure 3.5: Four configurations of the 8-hull algorithm used to process the umbra
{y = x, z-plane}, and {y = −x, z-plane} respectively.
The intersection of any of these vertical planes with the gray-level surface forms
a curve. This curve is used to construct a binary image from the vertical planes, by
defining a discrete grid composed of vertical lines which pass through pixels in the
image (xy-plane) and horizontal lines at a height proportional to the intensity value of
that pixel I(x, y) (Figure 3.4 (a)). The points on this vertical grid will be referred to
as vertical pixels and have a value equal to one (1). The umbra of the curve consists
of all vertical pixels in the binary image, on or below the curve (Figure 3.4 (b)).
Figure 3.6: Four configurations of the 8-hull algorithm used to process the complement
(Appendix A) twice, once to the binary image and once to its complement, and
complementing the binary image back. Crimmins [32] adapted this iteratively, such
that each iteration is composed of two steps. In the first step, he applied the complementary hulling algorithm to the umbra, i.e. using only four of the eight configurations
to smooth the curve forming the top boundary of the umbra (see Figure 3.5). In the
next step, he applied the remaining four configurations shown in Figure 3.6 to smooth
the bottom boundary of the complement (Figure 3.4 (c)). An iteration is completed
when the resulting image replaces the original before the next configuration is
used, etc. Similarly, the four configurations of the complementary image are applied
separately and consecutively to the complement. This results in a greater modification
of the image at each iterative step and hence fewer iterations are required. It also
causes a greater difference between the reduction rate for wide and narrow features.
Figure 3.7 shows the result of applying the geometric filter to one slice.
This procedure is performed on all C1 vertical grids simultaneously and the resulting gray-level image replaces the original gray-level image. The same procedure
is repeated in the C3 grids, then the C2 grids and finally the C4 grids. This completes one full iteration of the two-dimensional filter.
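The qualitative behaviour described above, narrow walls and valleys reduced quickly while wide plateaus are left almost untouched, can be illustrated with a simplified 1-D sketch. This is not Crimmins' exact 8-hull formulation, only an illustration of the effect:

```python
import numpy as np

def geometric_filter_1d(signal, iterations=3):
    """Simplified 1-D sketch of the geometric filter's behaviour: per
    iteration, a pixel strictly above both neighbours is lowered one
    gray level (narrow walls torn down) and a pixel strictly below
    both neighbours is raised one level (narrow valleys filled).
    Interior points of wide plateaus are left unchanged."""
    s = np.asarray(signal, dtype=int).copy()
    for _ in range(iterations):
        mid = s[1:-1]
        left, right = s[:-2], s[2:]
        peak = (mid > left) & (mid > right)
        valley = (mid < left) & (mid < right)
        s[1:-1] = mid - peak.astype(int) + valley.astype(int)
    return s
```

A one-pixel speckle wall loses one gray level per iteration, while a wide plateau (the body of a rock) is preserved, which is why only a few iterations are needed.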
Figure 3.7: 1-D Geometric Filter results, (a) the original image (b) after
one iteration (c) after 10 iterations (d) after 20 iterations
Laboratory experiments of applying the geometric filter on images of rock piles
concluded that 3 iterations are sufficient to smooth the image and to preserve most
of the rock boundaries. Figure 3.8 presents the result of applying the filter to the
image with different numbers of iterations. As can be seen from Figure 3.8 (d),
over smoothing can result in the destruction of many of the weak boundaries of the
fragments. This will complicate the segmentation process of the individual fragments,
and consequently the scene analysis process, which may result either in failure of the
Figure 3.8: Geometric Filter results, (a) after one iteration, (b) after 3
iterations, (c) after 5 iterations, (d) after 7 iterations
The next stage of feature extraction is edge detection. One of the popular edge
detectors is Canny's edge detector [20]. It is based on a one dimensional continuous
domain model of a step edge of amplitude h_E with additive Gaussian noise having
standard deviation σ_n. The detection is performed by convolving
a continuous domain, one dimensional noisy edge signal f(x) with an anti-symmetric
impulse response function h(x) bounded by [−w, w] (of zero amplitude outside the
interval). An edge is marked at the local maximum of the convolved gradient f(x) * h(x). The impulse response h(x) is chosen to satisfy the following three criteria:
Good detection: The operator should have a low probability of failing to mark a
true edge and a low probability of falsely marking a non-edge. This is achieved by
maximizing the signal-to-noise ratio

    SNR = (h_E / σ_n) · |∫_{−w}^{0} h(x) dx| / [∫_{−w}^{w} h²(x) dx]^{1/2}

Good localization: Edge points marked by the operator should be as close to
the centre of the true edge as possible. The localization factor is defined as

    LOC = (h_E / σ_n) · |h′(0)| / [∫_{−w}^{w} [h′(x)]² dx]^{1/2}

where h′(x) is the derivative of h(x).

Single response: There should only be a single response to a true edge. The
mean distance between peaks of the gradient when only noise is present, denoted x_m, is

    x_m = π · [∫_{−w}^{w} [h′(x)]² dx / ∫_{−w}^{w} [h″(x)]² dx]^{1/2}    (3.1)

The operator is derived by maximizing the product of the detection and localization terms
subject to the constraint of equation (3.1). Due to the complexity of the formulation,
no analytical solution has been found, but a variational approach has been developed.
In the discrete domain, the large size operators defined in the continuous domain
can be obtained by sampling their continuous impulse functions over some w × w
window. The window size should be chosen sufficiently large such that truncation
Figure 3.9: Result of applying Canny's filter with different window sizes,
(b) σ = 2, (c) σ = 3, (d) σ = 4
of the impulse response function does not cause high frequency artifacts. Figure 3.9
demonstrates the result of applying Canny's filter, before smoothing the image, using
different window sizes. These results are satisfactory when the rocks are spread.
Using a large σ can smooth the surface, and may result in displacement of the true edge
(see Figure 3.9 (d)). This might create a problem in determining the boundaries of
piled rocks.
Experimental results showed that, in a laboratory environment, the combination
of Crimmins' filter and Canny's edge detector gives an acceptable result. To demonstrate this, a laboratory
environment image of a pile of rocks was used, as shown in Figure 3.10 (a). Figure 3.10 (b) presents the result
of applying 3 iterations of Crimmins' filter, and Figures 3.10 (c) and (d) present the
result of applying Canny's filter to the raw and smoothed images respectively.
3.5
Conclusion
Due to the complexity of images of rock-piles, the problem of rock segmentation was
decomposed into two parts. The first part (pile feature extraction) was presented in
this chapter. This part consisted of two main processes, namely smoothing and edge
detection. For smoothing, the utilization of Crimmins' nonlinear filter was proposed
to reduce the noise resulting from the texture of the rocks. Then Canny's edge
detection algorithm was used to extract the discontinuity in the image representing
parts of the boundaries of the rocks and surfaces within these rocks.
Figure 3.10: Result of Canny's edge detector before and after smoothing
(a) Raw image of a pile of rocks, (b) Result of smoothing, (c) The result of
applying Canny's filter on the raw image, (d) The result of applying Canny's
filter on the smoothed image
Chapter 4
Fragment Contours
An object's edge is the result of a sudden change in colour, texture or the direction of
lines; in other words, the termination of a surface or simply the end of the object. An edge stands out against another surface
of some other colour, texture, etc., or simply a void. Contours have a similar nature,
i.e. they form where there are sudden changes in some gradient: colour, shadow,
parallel lines seen in perspective, or texture. The difference is that a contour is a one-dimensional interface. There is a close correspondence between
the contours or outlines of a two-dimensional form and the edges of an object when
both are projected through the lens onto a plane.
Applying the edge detection algorithm to rock images will result in images that
contain unnecessary information, produced as a result of either texture or colour change of the surface of the rock and/or multiple
surfaces of the same rock. Some of this information may provide false clues about
the actual boundaries of the individual rocks. A smoothing process applied prior to
edge detection provides a partial solution in reducing the unwanted information. On
the other hand, smoothing may also result in joining the contours of two or more
individual rocks.
In this chapter, the methodology which will be used to segment the muck-pile fragment contours is presented. The starting point is the representation of the contour
and its parameters. Section 4.2 contains the enhancement algorithms which are applied to the edge map images to remove the unwanted information. Gaps can occur in
a region boundary because the contrast between regions may not be enough to allow
the edges along the boundary to be found by the edge detector. As a result, a simple
recursive method will be presented to fill these gaps. The proposed analysis is based
on a multi-layered image. This will raise the issue of junction detection and analysis
resulting from overlapping of the rocks. Finally, a contour completion algorithm will
be presented to estimate the missing parts of overlapped rocks.
4.1
Contours of fragmented rocks are characterized by many parameters. These parameters can be grouped into local and global parameters. The local parameters represent
the elements of the contour geometry, in which points of the contour are related to
their neighbouring points. These parameters include length, tangent orientation and
curvature. The global parameters on the other hand, characterize the region bounded
by the contour such as area and perimeter.
In this section different methods of representing contour segments will be presented. This will be followed by methods of computing the local parameters. Finally,
a number of algorithms for computing some of the global parameters of the contours
will be presented.
4.1.1
Contour Representation
A contour can be represented either as a sequence of points or as a mathematical model. There are several criteria for a good contour representation
ordered list of its edges. In a discrete form, digital curves can be represented either
by a sequence of points {(x_0, y_0), …, (x_{n−1}, y_{n−1})} or by a string of integers each one
ranging from 0 to 7 depending on the direction of the next point of the curve sequence.
The latter representation is called the chain code sequence [50].
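The chain code representation can be sketched as follows, using Freeman's 8-direction convention (codes 0–7 counter-clockwise starting from the +x direction); the helper names are illustrative:

```python
# 8-direction chain code: integers 0..7, counter-clockwise from "east" (+x).
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a digital curve, given as a sequence of 8-connected
    points, as its chain code sequence."""
    return [DIRECTIONS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]
```

Only the starting point and the string of direction codes need to be stored, which is what makes this form compact.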
The above representations are as accurate as the location estimates for the edges,
but are not the most compact ones. In addition they may not provide an effective representation for subsequent image analysis. The accuracy of the contour representation
is determined by the form of curve used to model the contour, by the performance of
the curve fitting, and by the accuracy of the estimates of edge location.
Fitting appropriate curve models to the edges increases efficiency by providing
a more appropriate and more compact representation for subsequent operations. In
general, curves in a plane can be represented in three different ways: the explicit form
y = f(x), the implicit form f(x, y) = 0, or the parametric form α(t) = (x(t), y(t))
for some parameter t. The parametric form of a curve uses two functions, x(t) and
y(t), of a parameter t to specify the point along the curve from the starting point of
the curve α(t1).
In this thesis, both types of the curve representation (the point sequence and the
mathematical model) will be used. The point sequence representation will be used
in both local and global image analysis. The parametric representation will be used
in estimating the missing part of the contour in the contour completion algorithm as
will be shown later in this chapter.
4.1.2
Local Parameters
Determination of the length of a contour is frequently used in image analysis.
Several methods have been proposed to measure the length of a discrete curve [IDS]
[S5] [41]. The length of a curve segment from point (x_i, y_i) up to point (x_j, y_j) can
be approximated by the sum of the lengths of the individual segments between points:
    l(i, j) = Σ_{k=i}^{j−1} √[(x_{k+1} − x_k)² + (y_{k+1} − y_k)²]    (4.1)
This approximation will play a major role in noise reduction from the edge map images
as will be demonstrated later in this chapter. Other methods of length measurements
using chain coding can be found in [41].
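Equation 4.1 translates directly into code:

```python
import math

def contour_length(points, i=0, j=None):
    """Approximate the length of the curve segment from point i to
    point j by summing the lengths of the individual straight segments
    between consecutive points (equation 4.1)."""
    if j is None:
        j = len(points) - 1
    return sum(math.hypot(points[k + 1][0] - points[k][0],
                          points[k + 1][1] - points[k][1])
               for k in range(i, j))
```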
From the fundamental theorem of differential geometry of curves [150], if the first
derivative of the curve has unit magnitude, then the parameter t measures the arc length along the curve [IDS] [149]. From this
definition, the tangent at a point gives the best linear approximation to the curve in
the neighbourhood of that point. One of the simplest ways to estimate the tangent is
by determining the parameters of the straight line that best approximates the local
curve points. Using a window w centred at (x_c, y_c) and estimating the parameters of
a straight line, the slope of this line is the slope of the tangent of the curve at (x_c, y_c).
The equation of the straight line is

    ax + by + c = 0    (4.2)

where a = cos(θ), b = √(1 − a²) and c = −ax − by, with θ the orientation of the line
with respect to the image axes, assuming that the line must pass through at least one
of the curve points in the window. The error associated with any particular line is the sum of the error
values for all (x_i, y_i) that correspond to the curve points. If the points lie exactly on the
line, then the sum is zero, and by choosing the line for which this sum is minimum,
the best line is found.
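A tangent estimate in this spirit can be sketched with a total-least-squares line fit, a variant of the fit described above that passes the line through the centroid of the window rather than through one of the curve points:

```python
import numpy as np

def tangent_direction(window_points):
    """Estimate the tangent at the centre of a window of curve points
    as the direction of the total-least-squares line through them
    (the first principal direction of the centred points)."""
    pts = np.asarray(window_points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # the first right singular vector is the direction minimizing the
    # sum of squared normal distances to the line
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[0]          # unit tangent vector (tx, ty)
```

Unlike the explicit form y = f(x), this formulation has no difficulty with near-vertical tangents.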
The second derivative of the curve defines the curvature κ(t). Viewing the tangent
as the best linear approximation to the curve α(t), the curvature κ(t) measures how
rapidly the curve is deviating from this linear approximation, i.e. it is a function of
the arc length and is equal to the inverse of the radius of a circle (κ(t) = 1/r) locally
coinciding with the curve (see Figure 4.1). In mathematical form:

    κ(t) = [x′(t) y″(t) − y′(t) x″(t)] / [x′²(t) + y′²(t)]^(3/2)

The curvature of a discrete curve can be estimated by several methods.
Using a second order polynomial, Albano [2] used weighted least square estimation
to minimize the error between the given set of data and the curve f(x, y). Lee et al.
[88] approximated a sequence of points by a third order polynomial of arc length. For
a window w of n points, each point of the window is expressed in terms of its normalized
arc length t_i, computed using equation 4.1, such that the arc length within each
neighbourhood ranges from −1 to +1.
The proposed approximation was done using weighted least square estimation by minimizing

    e² = Σ_{m=0}^{w−1} w_m (r_m − Σ_{i=0}^{3} a_i t_m^i)²

where the weights w_m follow a normal distribution N(0, σ). Once the fitting was done for the row and the column
coordinates, the curvature value at a point, κ_m, was calculated from the fitted coefficients.
Landau [86] suggested an iterative algorithm for estimating the location of the
centre of a circular arc and its radius. His algorithm is based on minimization of
the error between a set of given points and the estimated arc (a special case of the
general equation 4.3). Using vector notation, Landau [86] was able to express the
48
4. Fragment Contonrs
arc centre and radius. Thomas and Chan [154J proposed the use of area mther t.h'UI
length 1.0 estimate the centre and the radius of a circle. Since t.he minimization is
performed on the area error, an estimate bias will result. Thomas and Chan [154]
argued that the bias is small and approaches zero as the number of dat.a point.s
approaches infinity.
Dudek and Tsotsos [44] proposed the curvature-tuned smoothing method to measure curvature. Using the calculus of variations, they applied a set of smoothing functionals to extract multiple interpretations of the original data as a function of a priori assumptions of target curvature. A similar method will be used in the contour completion algorithm which will be presented later in the chapter.

4.1.3

In this thesis, the method of Nitzberg et al. [118] will be used to estimate the curvature by finding the centre and the radius of the best fitting circle for the arc.
The general form of the second order polynomial is:

f(x, y) = a + bx + cy + dx² + fxy + gy² = 0    (4.3)

Setting f = 0 and d = g and completing the square will result in the equation of the circle

(x + b/2d)² + (y + c/2d)² = (b² + c² − 4ad) / 4d²    (4.4)

The weighted least squares method is used to estimate the circle centre c = (−b/2d, −c/2d) and radius r = √(b² + c² − 4ad) / 2d [132].
Given a set of points {a_i, where i = 1 ... n}, the square error e², for a given candidate centre c and a radius r, is equal to the sum of squared radial distances from the circle to each point, i.e.

e² = Σ_i w_i ( ‖a_i − c‖ − r )²    (4.5)

which is approximated by the algebraically simpler form

e² ≈ Σ_i w_i ( ‖a_i − c‖² − r² )²    (4.6)

Expanding (4.6) in terms of the polynomial coefficients of equation 4.3 yields a linear least squares problem in those coefficients (equation 4.7). Nitzberg et al. [118] then estimated the parameters a, b, c and d using equation 4.7. Appendix B contains a detailed description of their algorithm. With each fit, the curvature of only one point is computed using the estimated parameters as follows:

κ = 1/r = 2d / √(b² + c² − 4ad)    (4.8)

and the error term in distance units is given by:

ε = √( e² / Σ_j w_j )    (4.9)
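Appendix B of the thesis gives the exact weighted algorithm; as an illustrative stand-in, the unweighted algebraic fit below (often called the Kåsa fit) shows the same idea: the circle parameters fall out of a linear least squares problem, and the curvature of equation 4.8 is then 1/r.

```python
import numpy as np

def fit_circle(pts):
    """Solve x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) in the least
    squares sense, then recover the circle centre and radius."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r

# Fit a 60-degree arc of the circle centred at (3, -2) with radius 4.
theta = np.linspace(0.2, 1.25, 25)
arc = np.column_stack([3 + 4 * np.cos(theta), -2 + 4 * np.sin(theta)])
(cx, cy), r = fit_circle(arc)
kappa = 1.0 / r  # curvature estimate for the arc
```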
Both equations (4.8 and 4.9) will be used to detect discontinuities in the extracted curves.

4.1.4 Global Parameters
The remaining part of this section contains an overview of some of the global parameters of a region bounded by a closed contour, such as the area, the centre of gravity and the principal axes.

As mentioned in section 2.1, the digital image is represented by an n × m matrix. The area of a region is defined as the number of pixels contained within its boundary. For example, for a binary image¹ the area is equal to the sum of pixel values of the image

A = Σ_{i=1}^{n} Σ_{j=1}^{m} I(i, j)
The balance point of the binary image I (i.e. the centre of gravity) is (x_c, y_c) where:

x_c = (1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} i · I(i, j)

y_c = (1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} j · I(i, j)
The principal axes of a region are the eigenvectors of the covariance matrix. The two eigenvectors of the covariance matrix point in the directions of maximal region spread, subject to the corresponding eigenvalue. Thus the principal spread and direction of a region can be described by the largest eigenvalue and its corresponding eigenvector [131]. With the centre of gravity established, it is possible to define the

¹A binary image is a matrix with its elements either zero or one; it is assumed that the background is set to zero (i.e. I(i, j) = 0) and the foreground is set to one (I(i, j) = 1).
scaled spatial central moments [66] for the rows and columns of the image,

µ(2,0) = (1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} [i − x_c]² I(i, j)

µ(0,2) = (1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} [j − y_c]² I(i, j)    (4.11)

with the cross moment µ(1,1) defined analogously from [i − x_c][j − y_c]. The covariance matrix is then

Λ = | µ(2,0)  µ(1,1) |
    | µ(1,1)  µ(0,2) |    (4.12)

where the columns of the eigenvector matrix of Λ give the directions of the principal axes. Let θ (the angle between the major axis and the horizontal axis) be defined as:

θ = tan⁻¹( (λ_M − µ(0,2)) / µ(1,1) )    (4.13)

where λ_M is the largest eigenvalue of Λ.
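These global parameters are easy to compute directly from a binary image. The sketch below is illustrative (the index conventions — i for rows, j for columns — are the assumption made in the reconstruction above):

```python
import numpy as np

def region_parameters(I):
    """Area, centre of gravity and principal axis angle of a binary
    image I, in the spirit of equations 4.11-4.13."""
    I = np.asarray(I, dtype=float)
    A = I.sum()                              # area = number of foreground pixels
    ii, jj = np.indices(I.shape)
    xc = (ii * I).sum() / A                  # centre of gravity (row coordinate)
    yc = (jj * I).sum() / A                  # centre of gravity (column coordinate)
    mu20 = (((ii - xc) ** 2) * I).sum() / A  # scaled central moments
    mu02 = (((jj - yc) ** 2) * I).sum() / A
    mu11 = ((ii - xc) * (jj - yc) * I).sum() / A
    lam_M = max(np.linalg.eigvalsh([[mu20, mu11], [mu11, mu02]]))
    theta = np.arctan2(lam_M - mu02, mu11)   # orientation, as in equation 4.13
    return A, (xc, yc), theta

# A thick diagonal stripe: its major axis lies at about 45 degrees.
I = np.zeros((40, 40))
for k in range(30):
    I[5 + k, 5 + k] = I[5 + k, 6 + k] = 1
A, (xc, yc), theta = region_parameters(I)
```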
One way to compute the length of the axes is by rotating the region by −θ, then finding the coordinates of the bounding box. The other way is to use equation 4.2 and find the points on the contour having the greatest distance from the line on opposite sides. The axis length is the sum of these two distances.

The formula for the perpendicular distance between a line and a point (x_i, y_i) (see Figure 4.2) is

d² = (ax_i + by_i + c)² / (a² + b²)    (4.14)

Knowing the orientation angle θ and the centre of gravity (x_c, y_c), the length of the minor axis is computed. The same applies for the major axis using (θ + π/2) instead.

4.2
The output of Canny's filter is a binary image containing traces of rock edges (the edge map image). Usually, these edges are several pixels wide; as a result, a thinning algorithm is needed to reduce the thickness to facilitate the image analysis.

Numerous thinning algorithms have been proposed in the literature [125] [148]. A thinning algorithm should satisfy the following requirements: connected regions must result in connected line structures; these lines are 8-connected and should approximate the centre line of the edge; and approximate end line locations should be maintained.

In this thesis a common thinning approach will be used, in which each pixel in the image is examined within the context of its neighbourhood region. The thinning process is performed iteratively such that in each iteration, every image pixel is inspected within 3 × 3 windows, and single-pixel-thick boundaries that are not required to maintain connectivity or the position of a line are erased. When no changes are made in an iteration, the process is terminated.
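One concrete instance of such an iterative 3 × 3 scheme is the classical Zhang–Suen algorithm, sketched below (the thesis does not name the specific rule set it uses, so this is a representative choice, not the author's exact implementation):

```python
import numpy as np

def thin(img):
    """Zhang-Suen thinning: repeatedly erase boundary pixels that are
    not needed for connectivity or line position until nothing changes."""
    img = np.asarray(img, dtype=np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for i in range(1, img.shape[0] - 1):
                for j in range(1, img.shape[1] - 1):
                    if img[i, j] == 0:
                        continue
                    # neighbours p2..p9, clockwise from north
                    p = [img[i-1, j], img[i-1, j+1], img[i, j+1],
                         img[i+1, j+1], img[i+1, j], img[i+1, j-1],
                         img[i, j-1], img[i-1, j-1]]
                    B = sum(p)                                 # foreground neighbours
                    A = sum(p[k] == 0 and p[(k + 1) % 8] == 1
                            for k in range(8))                 # 0 -> 1 transitions
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= B <= 6 and A == 1 and ok:
                        to_delete.append((i, j))
            for i, j in to_delete:
                img[i, j] = 0
            changed = changed or bool(to_delete)
    return img

# A 3-pixel-thick bar thins to a single-pixel centre line.
bar = np.zeros((7, 20), dtype=np.uint8)
bar[2:5, 2:18] = 1
skel = thin(bar)
```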
Grouping is a process that organizes the image into parts, each likely to come from a single object. This is usually done bottom-up using clues about the nature of the objects and the image, and does not depend on the characteristics of any single object model. The hypothesis that humans use grouping may be prompted by the introspection that when we look at even confusing images in which we cannot recognize specific objects, we see that image as a set of chunks of things, not as an unorganized collection of edges or of pixels of varying intensities.

A variety of clues indicate the relative likelihood that chunks of the image originated from a single source. The gestalt psychologists suggested several clues, such as proximity, symmetry, collinearity, and smooth continuation between separated parts. For example, in an image of line segments, two nearby lines are more likely to be grouped together by people than are two distant ones, and gestalt psychologists suggested that this is because they are more likely to come from a single object. Lowe [93] applied this view to computer vision. Other recently explored grouping clues include the relative orientation of chunks of edges [76] and the smoothness and continuity of edges [111] [146] [31].
An active contour model (SNAKE) proposed by Kass et al. [79] demonstrated an interaction with a higher visual process for shape correction. It resulted in smooth and closed contours through energy minimization. The active contour model, however, has some problems, namely: control, scaling and discontinuity.

The active contour model looks for maxima in intensity gradient magnitude; however, in complex images, neighbouring and stronger edges may trap the contour into a false, unexpected boundary. Moreover, if an initial contour is placed too far from an object boundary, or if there is insufficient gradient magnitude, the resulting contour will shrink into a convex closed curve, even if the object is concave. In order to avoid these cases, remedies such as successive lengthening of an active contour [13] and an internal pressure force [28] have been introduced. Unfortunately, even if these techniques were applied, the edge-based active contour might be trapped by unexpected edges.
Gaps between edges pose an additional problem in image analysis. They are the result of low contrast between region boundaries. Consequently, an edge linking algorithm is needed to connect the broken parts of the contours.

Edge linking algorithms can be used to fill gaps between contour segments to form a closed contour. In his book, Pratt [131] categorized the edge linking methods into three categories, among them:

curve fitting edge linking (classical curve fitting such as Bézier polynomial or spline fitting [125], or the iterative endpoint fitting [125]),

Hough transform edge linking [43] [67] [48].

A fast simple heuristic solution was developed in this research to fill small gaps between contour segments. It is based on growing each contour segment at its end points along each tangent line iteratively (see Figure 4.3). On a rectangular lattice with 8-connected contours, diagonal portions of contours can cross over each other without colliding at any grid point. To ensure that collisions are detected, at each iteration a contour grows by one point, and the five neighbours that agree with the growth trajectory are checked as well.
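A minimal sketch of this growth step follows (simplified from the description above: `others` is assumed to hold the edge map with the growing segment's own pixels removed, and a full 3 × 3 neighbourhood check stands in for the five-neighbour trajectory test, so diagonal contours cannot slip through without a detected collision):

```python
import numpy as np

def grow_endpoint(others, start, direction, max_steps=10):
    """Grow a contour from an endpoint along its tangent, one point per
    iteration, returning the bridging path if it collides with another
    contour and None otherwise."""
    path = []
    fi, fj = float(start[0]), float(start[1])
    for _ in range(max_steps):
        fi += direction[0]
        fj += direction[1]
        i, j = int(round(fi)), int(round(fj))
        if not (0 <= i < others.shape[0] and 0 <= j < others.shape[1]):
            return None                      # ran off the image: no link
        if others[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].any():
            return path + [(i, j)]           # collision: the gap is bridged
        path.append((i, j))
    return None                              # gap too large for this heuristic

# Segment A ends at (4, 7) with tangent (0, +1); segment B starts at column 12.
others = np.zeros((9, 20), dtype=np.uint8)
others[4, 12:18] = 1
link = grow_endpoint(others, (4, 7), (0.0, 1.0))
```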
4.3
Identification of corner points plays an important role in shape analysis [65] [131]. These are special features in an image, and are characterized by their curvature [16]. Kitchen and Rosenfeld [83] measured cornerity as the rate of change of gradient direction along an edge multiplied by the gradient magnitude. Fang and Huang [45] defined the corner-ness at any point as the magnitude of the gradient of θ (the gradient direction). This quantity attains a local maximum at a corner point. At each pixel, the product of the corner-ness and the edge-ness (the magnitude of the gradient of the image) is computed, and the pixel is declared a corner point if the value is greater than some threshold.
Zuniga and Haralick's [168] algorithm for corner detection is based on a gray level facet model [62]. They proposed three different methods for corner detection: (i) incremental change along the tangent line, where the corner point is identified by comparing the gradient direction change of two neighbouring edge points (along the tangent line of the edge boundary) against a declared threshold; (ii) incremental change along the contour line, which is similar to the previous method with the exception that the neighbouring points lie along the contour line rather than the tangent; (iii) instantaneous rate of change, in which case the corner point is identified if the directional derivative of the gradient along the edge direction is greater than some threshold, provided that this point is an edge point. Nagel [110] used a method based on minimizing the squared difference between a second order Taylor series expansion of grey level values from one frame to another. Noble [119] showed how the Plessey corner detector estimates image curvature and has proposed an image representation that is based on the differential geometrical "topography" of the intensity surface.

Rangarajan et al. [133] have proposed an optimal gray tone detector, based on Canny's optimal one-dimensional detector [20]. They formulate the problem as an optimization problem, and solve it using variational calculus. The performance measure that has to be maximized is the ratio of the signal to noise ratio and the delocalization. They developed a mathematical model for a restricted case and classified corners into 11 types (a mask was proposed for each type). A low threshold is used to select candidate pixels for corners, which respond to any of the 11 masks. A candidate pixel is declared to be a corner point if it does not have two neighbours (in a 3 × 3 neighbourhood) with a similar gradient angle, provided that it is an edge point.
4.3.1 The Adopted Method

In the implementation used in this thesis, corners and junctions are identified from edge map images rather than intensity ones. Consequently the accuracy of the analysis is highly dependent on the edge detector algorithm. By tracing the contour, corners are identified as the points having high curvature. A junction is defined as a meeting point of two or more straight line edges. Locations of junctions can easily be detected while tracing the contours.
Junction analysis is the key point in segmenting fragments. The interpretation process starts by searching for curve branching (what will be called the "Y" junctions) to extract complete fragment contours (closed contours) and to determine the endpoints on the incomplete ones. Two criteria are used to separate fragments: the first is finding the best circular arc with minimum fitting error (Appendix B); the second is comparing the average intensity values on either side of the junction and selecting the highest one.
4.4 Contour Completion

To overcome the problem of overlapping of rocks in muck-piles, we propose the reconstruction of the missing part of their contours. Contours to be partially reconstructed (completed) are identified as the contour segments connecting two junctions. To reduce the search computation time, several cases will not be considered; among them, the bisection of two rocks, i.e. only connecting the end points of one segment at a time will be considered.

This section contains a review of different contour completion algorithms. The objective is to adapt one of these methods to estimate the missing part of muck-pile fragments due to overlapping.
4.4.1 Interpolation

Given a cubic polynomial with its four unknown coefficients, four known parameters are used to solve for the unknowns. The four knowns can be the two endpoints and the derivatives at the endpoints. A curve segment can be defined in terms of a cubic polynomial as follows:

α(t) = a t³ + b t² + c t + d    (4.15)

To deal with finite segments of the curve, the parameter t is restricted to the interval [0, 1]. With T = [t³ t² t 1], we can express the cubic polynomials as α(t) = T · M · G, where M is a 4 × 4 basis matrix and G is a geometry matrix of four control parameters.

A curve segment α(t) is defined by constraints on end points, tangent vectors, and continuity between curve segments. The major types of curves described in [125] [47] are: Hermite, Bézier, and B-splines.
The Bézier form of the cubic polynomial curve segment indirectly specifies the end tangent vectors by specifying two intermediate points that are not on the curve. The starting and ending tangent vectors are determined by the vectors P1P2 and P3P4. The endpoints are P1 and P4:

G = | P1x  P1y |
    | P2x  P2y |
    | P3x  P3y |
    | P4x  P4y |

M = | −1   3  −3  1 |
    |  3  −6   3  0 |
    | −3   3   0  0 |
    |  1   0   0  0 |
4.4.2 Shape Completion

One of the earlier techniques for contour completion was proposed by Ullmann [155].

4.4.3
Nitzberg and Mumford [117] proposed the use of a third order spline α(t) = (x(t), y(t)) which minimizes a smoothness energy over the reconstructed segment. This is similar to the curvature-tuned smoothing method proposed by Dudek and Tsotsos [44] for curvature measurement.

Given two end points (x(0), y(0)) and (x(1), y(1)), tangents (x'(0), y'(0)) and (x'(1), y'(1)), and a tangent scale ν,

[x(t) y(t)] = [t³ t² t 1] · M · G    (4.16)

where

M = |  2  −2    ν    ν |
    | −3   3  −2ν   −ν |
    |  0   0    ν    0 |
    |  1   0    0    0 |

G = | x(0)   y(0)  |
    | x(1)   y(1)  |
    | x'(0)  y'(0) |
    | x'(1)  y'(1) |

The derivatives needed to evaluate the curvature in the energy integral follow from the same matrices:

[x'(t) y'(t)] = [3t² 2t 1 0] · M · G

[x''(t) y''(t)] = [6t 2 0 0] · M · G
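Equation 4.16 is straightforward to evaluate numerically. In the sketch below (illustrative names; the tangent scale ν is fixed at 1, which reduces the basis to the standard Hermite matrix) a missing contour part is bridged from the two junction endpoints and their tangents:

```python
import numpy as np

# Hermite basis of equation 4.16 with tangent scale nu = 1.
MH = np.array([[ 2.0, -2.0,  1.0,  1.0],
               [-3.0,  3.0, -2.0, -1.0],
               [ 0.0,  0.0,  1.0,  0.0],
               [ 1.0,  0.0,  0.0,  0.0]])

def hermite(p0, p1, t0, t1, n=51):
    """Sample the cubic of equation 4.16 from endpoints p0, p1 and
    endpoint tangents t0, t1 (n sample points is a choice made here)."""
    G = np.array([p0, p1, t0, t1], dtype=float)        # geometry matrix
    t = np.linspace(0.0, 1.0, n)
    T = np.column_stack([t**3, t**2, t, np.ones_like(t)])
    return T @ MH @ G                                  # n x 2 array of (x, y)

# Endpoint tangents angled upward make the bridge bulge above the chord.
curve = hermite((0.0, 0.0), (4.0, 0.0), (2.0, 4.0), (2.0, -4.0))
```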
Figure 4.4: Contour completion of part of a computer generated ellipse: (a) the input contour; (b) and (c) the completion results using γ = 1 and the value computed from equation 4.18, respectively.
For this application, both parameters ν and γ were modified such that they were selected based on the shape of the unoccluded part of the contour, by considering that γ and ν are inversely related, i.e.

ν = 1/γ    (4.17)

Assuming there is a straight line connecting the two end points ((x(0), y(0)) and (x(1), y(1))), let d be the maximum orthogonal distance between the line and a boundary point on the contour; γ is then chosen as an increasing function of d (equation 4.18). The smaller the value of γ, the less deformed the constructed spline. To assess the
Figure 4.5: Several degrees of overlap: (a) a smooth surface rock; (b) and (c) the rock in (a) overlapped by one rock and two rocks respectively; (d), (e) and (f) the contours of (a), (b) and (c) respectively.
performance of the modified algorithm, the contour completion algorithm was applied to a part of a computer generated ellipse (Figure 4.4 (a)) using two different values of γ: 1 and the value computed from equation 4.18. The results obtained using both values of γ are shown in Figures 4.4 (b) and (c) respectively. Comparing these two figures demonstrates the significant influence of the selection of the value of γ on the deformation of the resulting curve. This is clearly shown in Figure 4.4 (c), where the generated spline reasonably matches the hidden part of the ellipse.

Another test was also conducted using overlapping rocks (Figure 4.5). In this test, we varied the visible part of the rock shown in Figure 4.5 (a), overlapping it with one rock then with two rocks as shown in (b) and (c). Figures (d), (e) and (f) present their contours respectively. Figures 4.6 (a) and (b) present the contour of the visible
Figure 4.6: (a) and (b) The contour of the visible part of the overlapped rock in Figure 4.5 (e) and (f); (c) and (d) the results of applying the contour completion algorithm to (a) and (b) respectively.
part of the overlapped rock shown in Figure 4.5 (e) and (f), with the common boundaries with the other rocks deleted. Figures 4.6 (c) and (d) present the results of applying the contour completion algorithm. These results show a great similarity to the actual contour of the rock shown in Figure 4.5 (d).
Since the contour completion algorithm is highly dependent on clues given at the endpoints, false information may result in failure of the algorithm to correctly estimate the missing part of the contour. Furthermore, it might result in intersection with the contour.

To demonstrate this, consider the curves shown in Figures 4.7 (a) and (c). Each
Figure 4.7: Failure of the contour completion algorithm: (a) and (c) contours of the visible part of overlapped groups of rocks; (b) and (d) the results of applying the contour completion algorithm to (a) and (c) respectively.
one of these curves is a result of failure of the edge detector to detect the internal boundaries of groups of rocks, resulting in a complex contour represented by the external boundary of the whole group. The results of applying the contour completion algorithm are shown in Figures 4.7 (b) and (d) respectively. As can be seen from the figures, the completion generated even more complex contours.
4.5 Conclusion
Chapter 5
Sieve Analysis and Fragment Size
A method of form description is a procedure for selecting and presenting information about the characteristic way in which an object occupies space [143]. One of the common definitions of the size of a particle is

"... the minimum square aperture through which the particle can pass."

The measure developed in this chapter considers the two-dimensional projection of the fragments in space and the compatibility of the parameters of the 3-dimensional models.
5.1

Figure 5.1: Size description of (a) the sphere, (b) the pyramid

A fragment (object) cannot be characterised by the volume (the amount of space it occupies) only, since volume is a scalar value which does not provide any information about the dimensions.
Successful passage of the fragment through a grid of a specified size is highly dependent on the shape parameters of its two-dimensional projection in a plane. Consider, for example, a pyramid placed at the origin (0, 0, 0). Using the vector representation of each vertex of the pyramid as shown in Figure 5.2, every vertex is represented as a vector from the origin.
The parameter that governs the passage through the grid during the grid's vertical movement (parallel to the xy-plane) is l. In other words, the grid size S_M that guarantees the pyramid passage through the sieve has to be larger than l. A smaller grid size may also allow the pyramid passage in the same movement direction with a change in the grid's planar orientation. The smallest grid size that will allow its passage is:

S_m = max(d, w)

For the horizontal movement, where the grid is oriented perpendicular to the xy-plane,

S_M = max(m, l)  and  S_m = max(m, d, w)

For the remaining movement direction, S_M and S_m are:

S_M = max(l, h)

From the above analysis, one can deduce that for any grid orientation the pyramid will always pass provided that

S_M = max(l, h)

and the minimum grid size that allows the pyramid to pass at any orientation is S_m. The above illustrates that the classification process of a stationary object is highly dependent on its shape.
We can apply the same analysis to irregularly shaped objects such as fragmented rocks. Assume that a fragment was held stationary, and a set of square grids was used to test its passage. Provided that all grids used were oriented in the same way and moving in one direction, the fragment will always pass if the size of the grid is larger than the largest Euclidean distance between any two points on the perimeter of the two-dimensional projection orthogonal to the grid, regardless of the orientation of the line connecting these two points. The distance between these two points represents the length of the projection of the two dimensions into one dimension (M_i), provided that the fragment is convex. This particular grid size S_{M_i} will be the upper bound for the fragment passage, since any larger grids will have the same result.

Eliminating the larger grids and performing the passage test in descending sizes, while changing the planar orientation of each grid (from 0° to 90°), smaller grids may allow the passage of the fragment until the size of the grid is smaller than the minimum one-dimensional projection of the two-dimensional projection of the fragment (m_i). Grids smaller than this size (S_{m_i}) will not allow the fragment to pass at any orientation.
S_M = max(S_{M_i}),    i = 1, 2, ...    (5.1)

and

S_m = min(S_{m_i})    (5.2)
Between S_M and S_m there is a gray area in which uncertainty about the fragment passage exists. There are two parameters controlling this interval, namely: the size of the grid and its orientation. The latter parameter is highly dependent on the roundness of the fragment's two-dimensional projection (symmetry around the centre of gravity). In this fashion, measurement of the object and its shape information are preserved by means of the axes measured. To characterize this interval, a "Weighting Function" (W) will be used. This Weighting Function will represent a measure that characterizes the passage of a fragment at a given grid size.
Let the domain of the sieve function be the set of positive real numbers (S ∈ ℝ⁺); its range will be the set P. The set P contains three logical subsets, namely: passing (P), possible passing (?P) and not passing (¬P). The "Weighting Function" W can be defined as the mapping

W : S → [0, 1]    (5.3)

Since W is an intermediate function, its range can form a one-to-one mapping with the subsets of the set P (see Figure 5.3). The idea of using W as an intermediate function is based on the following observation:

If an object can pass through a specific grid size at only one orientation, then it will pass through a larger grid size at the same orientation. Moreover, the probability that the same object will pass at any orientation for the larger size will increase.
Then for a given grid size, one of the following cases is satisfied:

W(S) = 1 → P iff S ≥ S_M
W(S) ∈ ]0, 1[ → ?P iff S_m < S < S_M    (5.4)
W(S) = 0 → ¬P iff S ≤ S_m

In the first and third cases, W(S) remains constant at 1 and 0 respectively. In the second case (W(S) ∈ ]0, 1[), for a fixed grid size, the planar orientation of the grid will be the only factor which affects the passage of the fragment. By considering the
Figure 5.4: Weighting Function W versus grid size: (a) for a spherical shape, (b) for any other object
vibratory effect of the grid [22], this factor becomes a function of time. Thus, W will depend on both the grid size and time.

Definition 2 The Weighting Function W_o(S_i, t) is normally distributed and approximates the probability of passage of a fragment at grid size S_i; it is centred at the mean of S_M and S_m of the object:

W_o(S_i, t) = (1/(σ√2π)) e^{ −(S_i − µ)² / 2σ² }    (5.5)

where µ = (S_M + S_m)/2 and σ is determined by the interval width (S_M − S_m).
The only restriction in applying the "Weighting Function" is that the measurement must be obtained from the object projection and not from a cross section of the object. The smaller the difference (S_M − S_m), the closer the shape is to a sphere; this results in the value of the Weighting Function changing rapidly, as shown in Figure 5.4 (a). On the other hand, if the difference is large, W will vary gradually as the size of the grid varies (see Figure 5.4 (b)).

To simplify the computation of the Weighting Function, a linear model will be used to replace the recursive model presented, i.e., W will be assumed to change linearly between S_m and S_M [11]. Consequently the Weighting Function becomes:
W_o(S_i, t) = 1    if S_i ≥ S_M
W_o(S_i, t) = (S_i − S_m)/(S_M − S_m)    if S_m < S_i < S_M    (5.6)
W_o(S_i, t) = 0    if S_i ≤ S_m

5.3
As mentioned earlier, the input to the fragment classification process is a digital image of the muck-pile. These images represent a two-dimensional projection of the pile. In other words, the measurement has to be performed on only one projection rather than multiple projections.

Many mining researchers assume that fragments always possess spherical shapes. As a result, the diameter of the circle of equivalent area is always referred to as the size of the fragment. Table 5.1 summarises different measures used to characterise fragments.
Table 5.1: Measured parameters used by different researchers to characterise fragments:

typical diameter;
area;
maximum diameter and area;
fragment length (longest dimension) and width (normal to the length);
minimum projected chord length;
elliptical parameters (major and minor axes of the best fitting ellipse);
area and elliptical parameters;
diameter (measure of the thinnest portion that crosses approximately the centre of a fragment);
area;
Weighting Function.
Figure 5.5: Projected area of a cube
process. In addition, the area is a highly error-sensitive measure. To demonstrate this, consider the cube shown in Figure 5.5. Assume the cube was viewed from one side only (i.e. the projected image is a rectangle), and the length of the side is equal to a units. The area of the exposed image is a² and the diameter of the circle of equivalent area is

d_e = 2√(a²/π) = 1.1284a

while the maximum axis is

S_M = √(2a²) = 1.4142a

Applying the same analogy as the Weighting Function (section 5.2), one can conclude that the cube will pass through the grid at any orientation provided that the grid size is equal to d_e. On the other hand, using the cube's axes will stretch this value to the closed interval [S_m, S_M], which is considered more logical than the former measure, since the cube is less symmetric around its centroid in comparison with the sphere. This is demonstrated graphically in Figure 5.6, where a is assumed to be equal to 1 (note d_e ∈ [a, 1.4142a]).
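The two numbers above can be checked directly (with a = 1):

```python
import math

a = 1.0
d_e = 2.0 * math.sqrt(a**2 / math.pi)  # equivalent-area circle diameter
S_M = math.sqrt(2.0 * a**2)            # face diagonal: the maximum axis
# d_e = 1.1284..., S_M = 1.4142...
```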
In viewing a muck-pile, partially occluded fragments are a common problem. Using the exposed area of the fragment to compute the diameter of the equivalent sphere is an inadvisable procedure. To prove this argument, we will use the same square as above. If the hidden part is increased along one of its sides (simulating occlusion by a larger square), only S_M will change; meanwhile, the diameter of the equivalent circle will keep changing.
77
'00
o--.-al
00
00
~
00
!oo
"
!"
00
00
"
..,
"
lJ
................
2
:)
0.5
(a)
................
:1
2.5
:)
(b)
Figure 5.7 (a) shows the behaviour of the error of the diameter of the equivalent circle resulting from the new area, and the error of S_M. In another simulation, the hidden part was increased diagonally; in this case, both axes were preserved under occlusion, while the error of the equivalent diameter increased exponentially. This is demonstrated in Figure 5.7 (b).
5.4 Conclusion

In this chapter, a more appropriate measure for fragment size was presented. This measure characterizes each fragment by dimensions (axes) and a scalar (Weighting Function) rather than a scalar only (area).

A special type of classification of an object in space (xyz-space), based on a two-dimensional criterion imposed by a grid (xy-plane) representing the mesh, was presented. This classification requires object shape descriptor(s) to evaluate a special function which was named the "Weighting Function".

Finally, it was demonstrated that the principal axes used in this measure are less sensitive to shape variation resulting from overlapping in comparison with the equivalent-area diameter.
Chapter 6
Size Distribution
One of the indicators of the effectiveness of a blast is the size of the fragments forming
a muck-pile.
From the literature, estimation of the size distribution of the blasted material has been done by one of the following methods: visual estimation and boulder counting [60], the predictive method [33], and the photographic method [94] [21] [42] [95] [58].

The photographic method is based on the measurement of some parameter from photographs of the muck-pile, either manually or automatically. This is usually done by dividing the image into regions, performing the measurement and interpreting these measurements. This interpretation is a form of transformation of some two-dimensional size parameter of the individual blocks into a three-dimensional block size distribution.
Estimation of size distribution is also a major research topic in many other fields such as biology and metallography [84] [36]. The problem is "to obtain true particle size distribution of grains or bodies embedded in a three-dimensional volume from measurements on a two-dimensional section or cut" [141]. For this type of problem, closed form solutions based on geometric probabilities exist, and are part of a discipline known as stereology [156] [158] [159].

Stereology deals with methods for the three-dimensional representation when only two-dimensional sections through solid bodies or their projections are available [158]. The aspects of stereology that are pertinent in the context of measuring fragments are those relating the volume of particles to the size distribution of their sections [140].

This chapter starts by describing different sampling methods. This is followed by the introduction of "Virtual Sieving", utilizing the Weighting Function developed in the previous chapter as a representation of the size of fragments. This is followed by an overview of some of the stereological solutions used to estimate the size distribution. Finally, the formulation of Virtual Sieving to estimate the volumetric size distribution of fragments from their projected images will be presented.
6.1 Sampling of a Muck-Pile

Given samples drawn from a parent distribution, the expected variation may be estimated from statistical analysis [3]. In general, more samples will result in a closer match between the measured sample distribution and the true size distribution of a muck-pile. A fairly large range of techniques, including the use of different types of films and equipment, have been experimented with in both surface and underground mining. Several of these techniques provide acceptable results.

The common mechanism is to take a photograph of the pile normal to a horizontal surface. The accuracy of this estimate can be significantly influenced by the photographic sampling procedure used. Indeed, since all subsequent operations rely on photos of muck-pile surfaces, procedural errors may get compounded later.
Figure: Image acquisition setup (shovel, video monitor, hard disk)
Shovel operation will result in a continuous increase of the exposed area of the pile.
6.2 Virtual Sieving

The definition of the size distribution (Irani and Callis [5]) is given as:

"the frequency of occurrence of particles of every size percent."

For each fragment k in a sample, the Weighting Function at grid size S_i is

W_k(S_i, t) = (1/(σ_k√2π)) e^{ −(S_i − µ_k)² / 2σ_k² }    (6.1)

The total Weighting Function of the sample is the normalized sum of all Weighting Functions at this particular grid size. In other words:

W_T(S_i, t) = (1/N) Σ_{k=1}^{N} W_k(S_i, t)    (6.2)
Substituting equation 6.1, equation 6.2 becomes:

W_T(S_i, t) = (1/N) Σ_{k=1}^{N} (1/(σ_k√2π)) e^{ −(S_i − µ_k)² / 2σ_k² }    (6.3)

As a result, by varying the grid size, the resulting weight can be considered to be a virtual sieving curve of the sample.
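With the linear model of equation 5.6 substituted for the Gaussian (the simplification introduced in section 5.2), the virtual sieving of equation 6.2 can be sketched as follows (the fragment axis pairs are hypothetical):

```python
def W_linear(S, S_m, S_M):
    """Per-fragment Weighting Function, equation 5.6 (linear model)."""
    if S >= S_M:
        return 1.0
    if S <= S_m:
        return 0.0
    return (S - S_m) / (S_M - S_m)

def virtual_sieve(fragments, sizes):
    """Total Weighting Function, equation 6.2: the normalized sum of the
    per-fragment weights at each virtual grid size."""
    return [sum(W_linear(S, sm, sM) for sm, sM in fragments) / len(fragments)
            for S in sizes]

# Three hypothetical fragments, described by their (S_m, S_M) axis pairs:
fragments = [(1.0, 2.0), (1.5, 3.0), (2.0, 5.0)]
curve = virtual_sieve(fragments, [0.5, 2.0, 5.0])
# the weight grows from 0 toward 1 as the virtual grid size increases
```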
6.3
fragments, i.e., 2-D → 3-D. Similar problems existing in biology and metallography [38] [156] [158] [159] [136] were studied and constraint solutions were found by employing stereology.
In general, the process of mapping two-dimensional information to three dimensions (2-D → 3-D) is a difficult one, particularly when the two-dimensional profile is the result of an intersection between an object and a plane, as in the cases studied in stereology. This is because the observed profile size of the object is a function of both the shape of the object, and of the location and orientation of the sectioning plane. A set of
assumptions were used to outline the solution to this problem; among them is the model used to characterize the object. The selection of these models was based on the closest matching regular geometric shape (usually randomly oriented simple convex objects such as spheres, ellipsoids, etc.).
This section contains an overview of some of the stereological solutions to two such models.
6.3.1 Spherical Model
There have been several procedures developed to determine particle size distributions of spheres from their section size distributions [159] [156] [38] [8] [74]. Wicksell [160] pioneered this by formulating the problem in order to solve a corpuscular problem in anatomy. In 1958, Saltkov [159] presented one of the most significant procedures to estimate the true particle size distribution from section profiles.
Using a spherical model, Saltkov [159] based his solution on the assumption of a discrete distribution made up of m classes of equal width (Δ), such that:
\Delta = \frac{d_{max}}{m} \qquad (6.4)

where d_{max} is the diameter of the largest spheres. As a result, the numerical density of profiles (N_a) of any class i becomes:

N_a(i) = \sum_{j=1}^{m} N_a(i, j) \qquad (6.5)
Using the same number of classes and width (m and Δ respectively) for the numerical
density of volume N_v, he proposed a linear relation between the number of profiles per unit area in class i, N_a(i), and the number of spheres per unit volume in class j, N_v(j), where j = 1, ..., m:

\begin{bmatrix} N_a(1) \\ N_a(2) \\ \vdots \\ N_a(m) \end{bmatrix} = \Delta \begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1m} \\ & k_{22} & \cdots & k_{2m} \\ & & \ddots & \vdots \\ & & & k_{mm} \end{bmatrix} \begin{bmatrix} N_v(1) \\ N_v(2) \\ \vdots \\ N_v(m) \end{bmatrix} \qquad (6.6)

where k_{ij} = \sqrt{j^2 - (i-1)^2} - \sqrt{j^2 - i^2}. Inverting this triangular system yields the sphere size distribution:

\begin{bmatrix} N_v(1) \\ \vdots \\ N_v(m) \end{bmatrix} = \frac{1}{\Delta} K^{-1} \begin{bmatrix} N_a(1) \\ \vdots \\ N_a(m) \end{bmatrix} \qquad (6.7)
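The unfolding of equations 6.6–6.7 can be sketched numerically; the matrix layout and the function name here are illustrative assumptions based on the coefficients k_{ij} above:

```python
import numpy as np

def saltkov_unfold(Na, delta):
    """Estimate sphere number densities N_v from profile densities N_a
    (eqs. 6.6-6.7), assuming m equal size classes of width `delta`."""
    m = len(Na)
    K = np.zeros((m, m))
    for i in range(1, m + 1):
        for j in range(i, m + 1):   # upper triangular: spheres of class j >= i
            K[i - 1, j - 1] = np.sqrt(j**2 - (i - 1)**2) - np.sqrt(j**2 - i**2)
    # solve Delta * K * N_v = N_a for N_v
    return np.linalg.solve(delta * K, np.asarray(Na, dtype=float))
```

A round trip (forward-projecting a known N_v, then unfolding) recovers the original densities, which is a convenient sanity check on the coefficients.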
d_{eq} = 2\sqrt{A/\pi}

rocks. One of the problems associated with this method is its inability to preserve the shape information of the fragments.
6.3.2 Ellipsoidal Model
Methods [159] [158]. Generally speaking, particles modelled by ellipsoidal bodies can be grouped into three main categories: constant shape parameter (e.g. ellipsoids with constant axial ratios), two variable shape parameters (e.g. variable ellipsoids of revolution), and three variable shape parameters (e.g. tri-axial ellipsoids).
In the category of constant shape, Wicksell [161] proposed a description by a single size parameter, namely the geometric mean of the principal axes. The particulate phase was then described by a univariate size distribution, and the stereological problem was reduced to identifying the set of profiles produced by random plane sections through the aggregate of particles.
For the second and the third categories, particles were assumed to exhibit variations about a given type of shape as well as size variation [158]. As a result, the corresponding distribution functions will be bi- and tri-dimensional respectively. This is based on the argument of Cruz-Orive [121] that the p-dimensional particle distribution can be identified from the corresponding profile distribution only if the latter has a dimension greater than or equal to p.
This restricted the solution of the third category to be nondeterministic for an infinitesimally thin section, i.e. identifying a trivariate distribution describing variable tri-axial ellipsoids from plane sections becomes indeterminate, since the profiles can only be described by a bivariate distribution (e.g. that governing their major and minor principal axes).
For the second category, namely two variable shape parameters, Cruz-Orive [122] [159] used the following assumptions to estimate the size distribution:
[Figure 6.2: a spheroid and its axes a and b]
axes, a and b (see Figure 6.2), were assumed to vary between 0 and B, where B is a constant larger than or equal to the largest value of b for the prolate, i.e. b ∈ ]0, B], or of a for the oblate (a ∈ ]0, B]). The size interval was divided into s classes of equal width Δ, and the shape factor x² was divided into k classes of equal width τ. Thus, the domain of variation of (b, x²) for the prolate, or of (a, x²) for the oblate, was divided into a grid comprising s × k classes, each class being represented by a rectangle of sides Δ × τ.
A spheroid belonging to the (i, j)th class is called the ij-spheroid; it must satisfy the inequalities (i−1)Δ < b (or a) ≤ iΔ and (j−1)τ < x² ≤ jτ, where i = 1, 2, ..., s and j = 1, 2, ..., k. The number of ij-spheroids per unit volume of specimen, N_v^{i,j}, is the number density of spheroids.
¹Prolate spheroids are generated by ellipses revolving around their major principal axis.
²Oblate spheroids are generated by ellipses revolving around their minor principal axis.
The elliptical profiles were also classified by means of the same s × k size-shape grid used for the spheroids. Thus the ellipse number densities N_a are related to the spheroid number densities N_v through two triangular matrices, P = [p_{i,j}] (size) and Q = [q_{i,j}] (shape):

\begin{bmatrix} N_a^{1,1} \\ \vdots \\ N_a^{1,k} \\ \vdots \\ N_a^{s,k} \end{bmatrix} = \Delta \, P \, Q \begin{bmatrix} N_v^{1,1} \\ \vdots \\ N_v^{1,k} \\ \vdots \\ N_v^{s,k} \end{bmatrix} \qquad (6.8)

where

p_{i,j} = \begin{cases} \sqrt{j^2 - (i-1)^2} - \sqrt{j^2 - i^2}, & j \ge i \\ 0, & \text{otherwise} \end{cases} \qquad (6.9)
The elements of the Q matrix are functions of both k and the spheroid type. For
prolate spheroids,
q_{i,j} = \begin{cases} \frac{\pi}{2} - f(t_j), & i = j \\ f(t_j) - f(t_{j+1}), & i > j \end{cases} \qquad (6.10)
and for oblate spheroids,

q_{i,j} = \begin{cases} \frac{2k-2j+2}{2j-1}, & i = j \\ \frac{2k-2j+2}{2i-2j+1}, & i > j \end{cases} \qquad (6.11)

where f(t) = \frac{t}{t^2+1} + \tan^{-1}(t) and t_j = \sqrt{\frac{2k-2j+1}{2j-1}}.
Inverting the system of equation 6.8 yields the spheroid number densities:

\begin{bmatrix} N_v^{1,1} \\ \vdots \\ N_v^{s,k} \end{bmatrix} = \frac{1}{\Delta} \, Q^{-1} P^{-1} \begin{bmatrix} N_a^{1,1} \\ \vdots \\ N_a^{s,k} \end{bmatrix} \qquad (6.12)
In spite of not being a popular method for size distribution estimation in the mining industry, the Cruz-Orive method preserves to a certain extent the shape information embedded in the shape factor. The shortcoming of this method is the complexity of the resulting distribution, i.e. interpreting the three dimensional size distribution is not an easy task.
³By definition, tanh^{-1}(t) = \frac{1}{2}\ln\left(\frac{1+t}{1-t}\right) and coth^{-1}(t) = \frac{1}{2}\ln\left(\frac{t+1}{t-1}\right).
6.3.3
Many researchers believe that the mathematics involved in the interpretation of projected images has direct applicability to sectioned ones [156]. This is based on the assumption that the profiles which will be measured are of a specific type of intersection of the particles with a sampling plane.
Rather than a random sectioning plane, the sampling will "of necessity" be the surface of a rock pile, i.e. a "projected" profile, where the largest visible dimension of the fragment, in the direction of projection, is revealed.
The main problems encountered here are:
Particles overlapping: where particles in the second layer of the pile will be
6.3.4
Following in the footsteps of Cruz-Orive [121], two parameters will be used to describe the fragments. Using a bivariate convex model for all fragments, namely prolate or
oblate ellipsoids, and modifying Cruz-Orive's [121] assumptions to cope with the projected image of the rock-pile surface accordingly:
Non-overlapping spheroids: rather than using an empirical function to correct overlapping for one case and generalizing it for all cases, we shall attempt to reconstruct the missing parts of the fragment contours resulting from overlap.
Spheroids are all of the same type and sieves are imposed in the same viewing direction.
Rather than using the bivariate distribution proposed by Cruz-Orive [121], the weighting function defined in the previous chapter will be used. Assuming that fragments are ellipsoids, the volume of a fragment i can be computed directly using the following equation:

V_i = \frac{\pi}{6} m_i^2 M_i, \qquad i = 1, \ldots, N. \qquad (6.13)
where m_i and M_i are the minor and major axes respectively. The discrete distribution is then divided into n classes of equal width Δ such that

\Delta = \frac{S_{max}}{n} \qquad (6.14)

S_j = j\Delta, \qquad j = 1, \ldots, n \qquad (6.15)
Using the weighting function's linear model (equation 5.6), and defining W_{ij} as the weighting function of fragment i at class j, this will result in an N × n matrix. The
elements of the W matrix are computed according to the following equation:

W_{ij} = \begin{cases} 1, & S_j \le m_i \\ \frac{M_i - S_j}{M_i - m_i}, & m_i < S_j < M_i \\ 0, & S_j \ge M_i \end{cases} \qquad (6.16)
The volumetric distribution can simply be computed using the following equation:

\begin{bmatrix} F_1 & \cdots & F_n \end{bmatrix} = \begin{bmatrix} V_1 & \cdots & V_N \end{bmatrix} \begin{bmatrix} W_{11} & \cdots & W_{1n} \\ \vdots & & \vdots \\ W_{N1} & \cdots & W_{Nn} \end{bmatrix} \qquad (6.17)
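Equations 6.13, 6.16 and 6.17 combine into a short computation. In this sketch the weighting is assumed to be a clipped linear ramp between each fragment's minor and major axes (one reading of the linear model of equation 5.6); the function name is illustrative:

```python
import numpy as np

def volumetric_distribution(minor, major, grid_sizes):
    """Volumetric size distribution (eq. 6.17): each ellipsoidal fragment's
    volume (eq. 6.13) is spread over the sieve classes spanned by its
    weighting function. Assumes major > minor for every fragment."""
    m = np.asarray(minor, dtype=float)
    M = np.asarray(major, dtype=float)
    S = np.asarray(grid_sizes, dtype=float)
    V = np.pi / 6.0 * m**2 * M                      # ellipsoid volume (eq. 6.13)
    # linear weighting: 1 below the minor axis, 0 above the major axis
    W = np.clip((M[:, None] - S[None, :]) / (M - m)[:, None], 0.0, 1.0)
    return V @ W                                    # F_j = sum_i V_i * W_ij
```

Because the grid sizes enter only through `W`, intervals of variable width can be used without changing the computation, which is the flexibility noted below.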
This method combines both shape parameters used to represent the fragments in the estimation of the volumetric size distribution. Even though the discrete distribution was divided into n classes of equal interval (equation 6.14), the method has the flexibility of using intervals of variable width. Furthermore, the distribution obtained is much simpler to interpret than the one obtained by the Cruz-Orive method. Results of applying this method will be presented in chapter 8.
6.4 Conclusion
In this chapter, Virtual Sieving was presented as a direct application of the weighting function. This powerful tool of fragment measurement was interpreted as a simulation of sieve analysis, which provided a feasible measure of the size distribution by the number of fragments retained in each sieve. Using spheroids as a geometric model, which provided a more realistic representation of fragments, the volumetric (true) size distribution was then derived. Performance evaluation of this method as well as a comparison with the stereological and mining methods used in estimating the true size
Chapter 7
Implementation and Experimentation
Using the description of the surface mining process given in chapter 1 as a sequential process, and by replacing the first closed-loop subsystem shown in Figure 1.2 by the one shown in Figure 7.1, a significant impact on the subsequent processes, such as digging conditions, enhancement of production quality control, as well as furthering the automation of surface mining operations [126], can be achieved. The single-input, single-output black box added to the new system configuration is the process of fragment measurement. The input to the black box is the visual information acquired by sensing a muck-pile, e.g. digital images of the pile. The output is a fragment classification similar to the mechanical classification process (sieving) of fragments.
This chapter describes the process represented by the black box and the link between the input and the output. It also discusses the constraints and the assumptions made to establish this link. The starting point is a detailed description of the digital images of muck-piles and an outline of their characteristics. This is followed by the methods used in implementing the tools described in chapters 3, 4 and 5, which are used
Figure 7.1: A black box model of the blasting process
laboratory images will be used to demonstrate the results obtained.
7.1 Muck-Pile Description
The input to the black box is intensity images (where each pixel represents the gray level value of a point on the sensed surface) of muck-piles. These images are characterized by many properties, among them:
Surface Texture: each fragment possesses a textured surface; in addition, the pile itself is not of uniform texture. This poses a problem since conventional image processing boundary detection algorithms tend to be highly sensitive to texture variations.
Multifaceted Fragments: rock fragments may have more than one face visible in an image. This may result in an edge detection algorithm determining that a fragment boundary exists at what is actually a face boundary.
View Location: the images acquired vary with the viewing angle, elevation and distance.
Illumination: natural lighting can vary in intensity and angle of incidence. This can have a very significant effect due to shadows and loss of contrast.
Environment: rain and surface moisture can dramatically affect the image properties. Snow can obscure fragments.
All of these properties contribute to the quality of the digital image to be analyzed, as will be shown later.
Figure 7.4: Overlapping rocks: (a) Composite layers, (b) Contours of the composite layers, (c) and (d) Rocks of the first layer, (e) and (f) Rocks of the second layer, (g) The third layer rock
Images of the surface of muck-piles usually contain partially occluded rocks. Thus the surface of the pile can be modelled as being divided into three layers (see section 3.1). This division is adopted to simplify the fragment segmentation and measurement sub-problems. The three layers in question are the first (top) layer, second (middle) layer, and background layer: L1, L2, and L3 respectively. In measuring the fragments, only the fragments in the first and the second layers will be considered. Figure 7.4 demonstrates a multi-layer image. The criterion used to classify fragments into one of these layers is based on their contour properties such as continuity and convexity. The layer classification methodology can be summarised as follows:
and R2 ∈ L2.
Second Layer: A rock is in the second layer, R ∈ L2, if it is partially occluded by one or more rocks in the first layer (Type A) or by another rock in the second layer (Type B).
Based on these criteria, interpretation of the images of the muck-pile surface will be analysed. In the analysis, several uncommon situations will not be considered; among them, the bisection of one rock by another, i.e. when their contours meet at four points (see Figure 7.5). Each contour segment connecting two junctions will be treated independently. In other words, the bisected contour will be split into two contours and the contour completion algorithm will be applied to each independently. This strategy was adopted to reduce the computation (time and complexity) required for the search algorithm to match the segments.
[Figure 7.6: the fragment measurement black box: smoothing and edge detection (preprocessing); thinning, noise removal, edge linking and area measurement (analysis); weighting function calculation, size distribution and size classification]
7.2 Fragment Measurement
The fragment measurement process (described earlier as a black box) consists of three main subprocesses, namely: preprocessing, analysis and size classification (see Figure 7.6). The theoretical aspects of these subprocesses were described in detail in chapters 3, 4 and 5 respectively. In this section we will present their implementation. In addition, for each of the implemented subprocesses a laboratory environment image will be used to demonstrate its results.
7.2.1 Preprocessing
Actual rock fragments do not possess smooth surfaces; this usually results in "noisy" images. Prior to the application of contour extraction algorithms, smoothing of the image is needed to reduce the image artifacts. In chapter 3 we showed that the noise in a muck-pile image can be significantly reduced using the Crimmins filter [32]. Being a nonlinear filter, it has the property of smoothing the surface and preserving
boundary features.
The number of iterations used affects, to some extent, the contrast between the boundaries, especially if the image contains small rocks. Consequently, the optimum number of iterations varies from one image to another depending on the lighting conditions and fragment colour and size. In the laboratory environment (where we have control over illumination and rock size range), two or three iterations were found to be sufficient to smooth fragments' surfaces and minimise noise adequately.
Following smoothing is the application of an edge detection algorithm. Canny's filter [20] was selected to perform this process. Since this filter convolves the image with a Gaussian smoothing filter to smooth the image, the optimum σ¹ is determined by the contrast and resolution of the image. Experiments showed that setting σ to a value of 3 can reduce the presence of unwanted edges.
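A minimal sketch of this preprocessing stage, with two stand-ins clearly noted: a median filter takes the place of the Crimmins filter [32], and a thresholded Gaussian-derivative gradient takes the place of the full Canny detector [20]; the function name and threshold are assumptions:

```python
import numpy as np
from scipy import ndimage

def preprocess(image, smoothing_passes=3, sigma=3.0, grad_thresh=5.0):
    """Iterative nonlinear smoothing followed by edge detection with a
    Gaussian spread of sigma = 3, as in the experiments described above."""
    img = np.asarray(image, dtype=float)
    for _ in range(smoothing_passes):       # 2-3 passes sufficed in the lab
        img = ndimage.median_filter(img, size=3)
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # d/dx of Gaussian
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # d/dy of Gaussian
    return np.hypot(gx, gy) > grad_thresh   # boolean edge map
```

Raising `sigma` suppresses edges caused by surface texture at the cost of localization, which is the trade-off the σ = 3 setting balances.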
7.2.2 Image Analysis
Fragment contours are difficult to discriminate from other edges resulting from fragment features. In addition, the image preprocessing yields an image with numerous
unconnected edge segments. Hence a method of extracting these contours from the
pre-processed image is needed. The objective of this process is to identify individual
fragments from edge map images. The analysis process performs this task in several
stages.
Initially, the boundaries of individual fragments are not identified; hence, the term "region" will be used to indicate either boundaries of individual fragments or boundaries of surfaces of fragments. Also the term "noise" will be used to refer to the unwanted
information such as short edge segments and small closed contours.
The first stage of the image analysis process is the edge enhancement process.
This consists of thinning, noise removal and edge linking algorithms. Applying the
¹σ is the spread (standard deviation) of the Gaussian and controls the degree of smoothing.
thinning algorithm to the edge map image will result in an image containing one-pixel-wide edge segments. This is an important process which prepares the image for the local and regional analysis. As mentioned in chapter 4, the thinning process is performed iteratively, in which each edge point is inspected within a 3 × 3 window to maintain connectivity and the position of the edge.
Short edges may provide a false indication of the presence of region boundaries. As a result, they are considered noise and an algorithm is implemented to filter out these edges. The short edge elimination algorithm is based on length measurement of the edges (equation 4.1), where edges of a length below a predefined threshold (ℓ) are removed from the edge map image. The threshold for this process varies depending on the maximum and minimum edge lengths present in the image.
In the implementation, arcs of length less than 10 pixels (ℓ = 10) and closed contours with contour length less than 20 pixels (ℓ = 20) are eliminated. One possible way to automatically set the length threshold is to select a percentage of the longest arc length (for example, 10% of the longest arc).
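This elimination step can be sketched with connected-component labelling; the pixel count stands in for the arc-length measure of equation 4.1, and the names are illustrative:

```python
import numpy as np
from scipy import ndimage

def prune_short_edges(edge_map, min_len=10):
    """Noise-removal step: label 8-connected edge segments in a boolean
    edge map and drop every segment shorter than `min_len` pixels."""
    labels, n = ndimage.label(edge_map, structure=np.ones((3, 3), int))
    sizes = ndimage.sum(edge_map, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_len))
    return edge_map & keep
```

Calling it twice, with ℓ = 10 for open arcs and ℓ = 20 for closed contours, mirrors the two thresholds used in the implementation.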
In the last part of the edge enhancement process, small gaps between edge segments are filled. This is accomplished by extending each segment from its two endpoints along their tangents. The tangent direction at the end points is usually unreliable. To estimate a more accurate direction, we move back along the segment by n points and estimate the tangent direction at that point instead (in our implementation, we used five pixels, i.e. n = 5).
The extension process is done iteratively, such that in each iteration each of the unconnected segments grows by one point on each end (if both ends are unconnected; otherwise, the unconnected end only). To detect collision with other edge segments (sometimes with itself), the five neighbours that agree with the growth trajectory are checked in each iteration. To demonstrate this algorithm, we applied the edge enhancement algorithm to the edge map image shown in Figure 3.10 (d); the result
Figure 7.7: Resulting edge map after edge linking: (a) The resulting edge
map of the image shown in Figure 3.10 after deleting closed contours and
short edge segments, (b) The result of applying the gap filling algorithm
is presented in Figure 7.7.
tour net. These junctions can be either boundary intersections, surface intersections, shadows and surfaces, or shadows and boundaries. We use the criterion of finding the best fitting circular arc, and the two lines with the least fitting error (Appendix B) are assumed to be continuous. Figure 7.8 demonstrates this algorithm.
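A least-squares circular-arc fit of this kind can be sketched with the algebraic (Kasa) method; the thesis's exact fitting procedure is given in its Appendix B, so this is an illustrative stand-in:

```python
import numpy as np

def circle_fit_error(points):
    """Algebraic (Kasa) least-squares circle fit: solve the linear system
    2*cx*x + 2*cy*y + c = x^2 + y^2 for the centre (cx, cy) and c, then
    return the RMS radial error of the fitted arc."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)          # c = r^2 - cx^2 - cy^2
    return np.sqrt(np.mean((np.hypot(x - cx, y - cy) - r) ** 2))
```

At a junction, the pair of segments with the smallest combined fit error would be taken as the continuous contour.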
Another criterion is also used to handle exceptions to the above technique. In cases where the junction occurs near a corner of a rock or at the boundary of the pile, the best fitting circular arc does not provide the proper selection. A window centred at the junction point is extracted and the average intensity values of the smoothed image of the three regions are compared. If the two lines with the least fitting error bound the minimum average intensity of the junction window (see Figure 7.8 (c)), the least-fitting-error criterion is ignored and the maximum average intensity is considered.
Once the junction's two line segments are selected, the third line is disconnected from the junction. Each edge segment is then traced; the direction of movement is always counter-clockwise. For each point, the tangent and the curvature are estimated
Figure 7.8: Junction analysis: (a) Edge map of three rocks, (b) Window of the junction edge map, (c) Window of the junction smoothed image
During the tracing process, if the curvature of a point exceeds a predefined threshold (positive sign), this identifies a concavity of the contour. Hence, a number of points on both sides of the high curvature point are eliminated. It is observed that the error ε is positive. Also, straight lines are considered surface intersections; consequently they are eliminated.
The fragments of the second layer are identified by their edge maps, each of which has two end points. As mentioned earlier, fragment bisection will not be considered. When this occurs, each part of the bisected fragment contour will be processed individually, i.e. the broken contour will be considered the contours of two fragments.
To compute the hidden part of a contour, a spline (α(t)) (presented in chapter 4) is constructed linking the two end points. Given the two end points (x(0), y(0)) and (x(1), y(1)) and their tangents (x′(0), y′(0)) and (x′(1), y′(1)), α(t) = (x(t), y(t)) is computed iteratively using the modified algorithm of Nitzberg and Mumford [117] using equations 4.16, 4.17, and 4.18 respectively.
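The completion step can be sketched with a cubic Hermite spline matching the endpoint positions and tangents; this is a simplified stand-in for the Nitzberg and Mumford elastica iteration (equations 4.16–4.18) that the thesis actually uses:

```python
import numpy as np

def hermite_completion(p0, p1, t0, t1, samples=50):
    """Contour completion between two open end points: a cubic Hermite
    spline alpha(t) interpolating positions p0, p1 with tangents t0, t1."""
    p0, p1, t0, t1 = (np.asarray(v, dtype=float) for v in (p0, p1, t0, t1))
    t = np.linspace(0.0, 1.0, samples)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1           # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * t0 + h01 * p1 + h11 * t1   # sampled alpha(t)
```

Unlike the elastica, the Hermite curve does not minimize curvature, but it honours the same boundary conditions and suffices to close simple gaps.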
To demonstrate the image analysis process, Figure 7.9 presents the edge map of the image shown in Figure 7.3 after the edge enhancement process. Figure 7.10
Figure 7.10: Layer classifications of Figure 7.9: (a) Edge map of the first layer, (b) Edge map of the second layer type A, (c) Edge map of the second layer type B
7.2.3 Classification
The last part of the fragment classification process is the measurement process. In
this section we will only consider the measurement of individual fragments. The
remaining part of the classification subprocess will be addressed in chapter 8.
Fragments resulting from a blast do not, in the general case, possess regular (e.g. ellipsoidal) shapes. This is a result of the many factors that control the blasting process and the nature of the rocks (i.e. mineralogic composition, layer thickness, etc.). In addition, depending on its orientation, the fragment can pass through the grid in some cases whereas its passage is blocked in others.
Fragment geometry and orientation are the two major factors that control the sieving process. Though both are non-deterministic, many researchers have tried to use a unique model for the geometry, ignoring the orientation. The term size can have many meanings; for this particular problem, the size which will be used is defined as follows (see Figure 7.12):
Figure 7.12: Major and minor axes of a fragment
then the fragment size can be characterized by the length of both its major and minor axes, where the major axis is defined as the longest Euclidean distance between two extreme points on the fragment contour, and the minor axis as the sum of the maximum orthogonal distances between points of the contour and the major axis on both its sides.
Both major and minor axes will be used to compute the weighting function W(t), which is considered the symbolic model of a fragment. As described in Chapter 6, these axes form a limit for the grid size dA in which the weighting function has a value different from 0 and 1, i.e.
Measuring the major and minor axes can be achieved as follows: for each fragment compute the centre of gravity, then the moments, then the eigenvectors. The minor axis is computed using the orientation angle θ (equation 4.13) and the centre of gravity; the maximum and minimum points orthogonal to the principal axis can be found by traversing along the contour and computing the orthogonal distance using equation 4.14. Similarly, the major axis is computed in the same manner using
and second layers. This list will then be used in the computation of the Weighting
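The axis definitions above can be sketched directly from the contour points; this brute-force chord search is an illustrative equivalent of the definitions (the thesis computes the axes via moments and eigenvectors), and the function name is an assumption:

```python
import numpy as np

def fragment_axes(contour):
    """Major axis: longest Euclidean distance between two contour points.
    Minor axis: sum of the maximum orthogonal distances from the contour
    to the major-axis line on both its sides."""
    pts = np.asarray(contour, dtype=float)
    # pairwise distances; the longest chord defines the major axis
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    major = d[i, j]
    u = (pts[j] - pts[i]) / major           # unit vector along the major axis
    n = np.array([-u[1], u[0]])             # unit normal to the major axis
    s = (pts - pts[i]) @ n                  # signed orthogonal distances
    minor = s.max() - s.min()               # maximum extent on both sides
    return major, minor
```

For closed convex contours this agrees with the moments-based computation; the O(n²) chord search is acceptable for contours of a few hundred points.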
7.3 Conclusion
In this chapter the fragment measurement process was decomposed into three subprocesses, namely: preprocessing, image analysis and measurement. The output of the overall process is the major and minor axes of all fragments on the surface of the pile. The problem of overlapping and multifaceted rocks was also addressed in this chapter, and a solution was presented.
The fragment measurement process introduced has the advantage of being nearly fully automatic, i.e. minimum human interaction is required to interpret the boundaries of the fragments.
Chapter 8
Comparative Evaluation of Virtual Sieving
The Virtual Sieving method, a tool for quantitative analysis of fragment size, is evaluated in this chapter. This evaluation is initially based on a comparison of the performance and accuracy of this method with two stereological methods, namely: the Saltkov method and the Cruz-Orive method. For this comparison, a computer generated data set is used to simulate the ideal situation.
This chapter also presents an overview of two commonly used methods to estimate size distribution from muck-pile images in the mining industry. These methods are then compared to the Virtual Sieving method. The comparison is done by applying these methods on two different sets of data: the computer generated data and data obtained from laboratory experiments.
8.1
In order to evaluate the performance of the Virtual Sieving algorithm, and to compare
Three methods were used to estimate the size frequency. In the first method, the cross-sectional area was calculated for each sphere, and the equivalent diameter was used to estimate the size frequency, using Saltkov's method [159] (equation 6.7). Figure 8.1 (c) shows the regenerated frequency of the spheres when Saltkov's method [159] is used. For the other two methods, namely the Cruz-Orive [121] and the Virtual Sieving methods, the diameter of the spheres was used twice (as major and minor axes) for the estimation. Figure 8.1 (d) shows the size frequency estimated using the Cruz-Orive method [121], equation 6.10.¹ This results in a flat surface, which is expected since the shape factor of the sphere is equal to zero. Finally, the result shown in Figure 8.1 (e) is obtained using the Virtual Sieving method. In this case, the total
¹The axis label "semi-axis" denotes the use of half of the axis (section 6.3.2)
where δ(S) is the impulse function and the term δ(S − S_i) is a vertical arrow of unit amplitude at S = S_i.
Comparing Figures (c), (d) and (e), one can notice that Saltkov's method yielded a normally distributed size frequency of the spheres. The Cruz-Orive method did not show any result: since the shape factor x² for the sphere is equal to zero, the classification constraints fail. In other words, for a spheroid to belong to the (i, j) class, it must satisfy the inequalities

(i − 1)Δ < a ≤ iΔ \quad and \quad (j − 1)τ < x² ≤ jτ
On the other hand, the Virtual Sieving method provided a more reasonable approximation of the generated data in comparison with the other two methods by demon-
multiplication factor
and 8.3 (b) present the three dimensional representation of the function, where the x-axis represents the minor axis and the y-axis represents the major axis.
This arbitrary data representing ellipsoids was employed to evaluate the performance of the same three methods; the results are shown in Figures 8.2 and 8.3. The size frequency shown in Figures 8.2 (c) and 8.3 (c) was generated using the Saltkov
Figure 8.1: Simulated size frequency of the spherical model: (a) One dimensional generated frequency of the sphere diameters, (b) Two dimensional generated frequency of the sphere diameters, (c) Size frequency using the Saltkov method, (d) Size frequency using the Cruz-Orive method, (e) Size frequency using the Virtual Sieving method
Figure 8.2: Simulated size frequency of the ellipsoidal model, λ = 3: (a) One dimensional generated frequency of the model's major and minor axes, (b) Two dimensional generated frequency of the model's axes, (c) Size frequency using the Saltkov method, (d) Size frequency using the Cruz-Orive method, (e) Size frequency using the Virtual Sieving method
116
"
o
(b)
(a)
...
"
~"
~"
r
It.
~"
J".,
0,1
00
,
Qrld
..
n.
su. (cm)
"
(c)
(d)
.,
.,
,...
.-
,...
-
100
\10
0Iid Slnlc::rnl
rh
..
(e)
117
method. The circle equivalent diameter was computed from the cross-sectional area A, defined as:

A = \frac{\pi}{4} S_m S_M

where S_m and S_M are the minor and major axes respectively. Figures 8.2 (d) and 8.3 (d) show the size frequency generated using the Cruz-Orive method for prolate ellipsoids. Finally, the size frequency shown in (e) was generated using the Virtual Sieving method.
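Assuming A is the standard ellipse area (π/4)·S_m·S_M, the conversion can be sketched as follows (the function name is illustrative):

```python
import math

def circle_equivalent_diameter(s_minor, s_major):
    """Diameter of the circle whose area equals that of the elliptical
    profile, A = (pi/4) * s_minor * s_major; this reduces to the
    geometric mean of the two axes."""
    area = math.pi / 4.0 * s_minor * s_major
    return 2.0 * math.sqrt(area / math.pi)   # equals sqrt(s_minor * s_major)
```

Because the result is the geometric mean of the axes, elongating an ellipse while shrinking its width can leave the equivalent diameter unchanged, which illustrates why this measure shifts the mean only slightly for elongated ellipsoids.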
Comparing the three plots of Figures 8.2 (c), (d) and (e), one can observe that the Saltkov method resulted in a normally distributed size frequency with mean ≈ 56 (as compared with Figure 8.2 (a)). The Cruz-Orive method shown in (d) also resulted in a normally distributed size frequency with mean ≈ 189, which is the same as the mean of the major axes of the generated data provided that the figure is viewed from the xz-plane. On the other hand, the Virtual Sieving method resulted in a log-normal distribution of size frequency with mean ≈ 63.
The same comparison can also be applied to Figures 8.3 (c), (d) and (e): the Saltkov method starts to deform to a log-normal shape with mean ≈ 69, and the Cruz-Orive method shows the same result as that of λ = 3, with a shift along the y-axis due to the change in the shape factor. Finally, the result using the Virtual Sieving method remains log-normal with a change in the mean to ≈ 91.
In conclusion, the Saltkov method showed the least accurate results among the tested methods. This can be seen clearly in the second part of the simulation, in which the use of elongated ellipsoids resulted in only a small shift of the mean value. This behaviour contradicts the logic of the sieving process. By contrast, in the three dimensional frequency representation resulting from the Cruz-Orive method, the two parameters, namely the major axis and the shape factor, were treated individually without any attempt to link them. From the simulation, it is clearly shown that changing λ resulted in shifting the frequency with respect to the shape factor only in
Figure 8.4: Size frequency of spread rocks using the Cruz-Orive method: (a) 3-D representation, (b) Size frequency with respect to the normalized major semi-axis, (c) Size frequency with respect to the normalized shape factor
one dimension. In other words, changing the shape factor and projecting the three
dimensional frequency onto two planes (parallel to the {xz}-plane and the {yz}-plane) will result in two frequencies. One of these frequencies is the size frequency
with respect to one of the spheroid's axes, either the major or the minor depending on
the model selected. The second is the size frequency with respect to the shape factor,
which does not provide useful information for the sieving process when viewed as a
classification process. A demonstration of this is shown in Figure 8.4, in which (a)
shows a three dimensional size frequency of a set of rocks, and (b) and (c) demonstrate
its two dimensional projections.
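The projection described above amounts to summing the three dimensional frequency over the remaining axes. A minimal numerical sketch, assuming an arbitrary hypothetical binning (the array shape, bin layout, and values below are illustrative, not taken from the thesis data):

```python
import numpy as np

# Hypothetical 3-D size frequency.  Axis 0: major semi-axis bins,
# axis 1: minor semi-axis bins, axis 2: shape-factor bins.
freq3d = np.zeros((4, 4, 3))
freq3d[1, 1, 0] = 0.5
freq3d[2, 2, 1] = 0.3
freq3d[3, 1, 2] = 0.2

# Projection parallel to the {xz}-plane: frequency vs. the major semi-axis.
freq_vs_major = freq3d.sum(axis=(1, 2))

# Projection parallel to the {yz}-plane: frequency vs. the shape factor.
freq_vs_shape = freq3d.sum(axis=(0, 1))
```

Summing over a tuple of axes yields the two marginal frequencies discussed in the text; only the first carries sieving-relevant size information.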
In contrast, the Virtual Sieving method provided a more logical result from the
sieving point of view. This result is more accurate than those obtained using the
Saltkov method since it considers the shape parameters (i.e. the ellipsoid axes). In
addition, the Virtual Sieving method provided a link between the ellipsoid's parameters, which resulted in a more useful representation of the size frequency than the
one obtained by the Cruz-Orive method.
8.2
To evaluate the accuracy of the Virtual Sieving method, the algorithm was also applied to actual data obtained from laboratory experiments and compared with the laboratory sieving results. To obtain the laboratory data, a number of rocks were
crushed and sieved in the laboratory.
Generally speaking, the size and shape of rock fragmented by blasting are largely
influenced by the structural conditions in the rock mass (i.e. pressure, crust movement,
etc.). On the other hand, breakage of rocks during the crushing process is influenced
more by the mineral structure and composition. In this section, it will be assumed
that both processes (blasting and crushing) result in similar shapes.
Figure 8.5: Sieving results of the crushed rocks: (a) distribution by number
of rocks retained per grid, (b) distribution by weight
During sieving, the crushed rocks were manually reoriented to ensure their passage
through the relevant sieves. The retained rocks were then weighed and counted.
Figure 8.5 gives the results of this experiment. Figure 8.5 (a) shows the distribution
of the rocks by number of rocks retained per grid, and Figure 8.5 (b) shows their
distribution by weight.
The rocks were then mixed again and a number of images of these rocks were
acquired. For these images, the rocks were spread such that they did not overlap. A
total of 45 images of 4056 rocks were taken at random orientations. Each rock was
then measured using the techniques of Chapter 4, in which the area, major and minor axes
were computed. The linear model of the Weighting Function as defined in Chapter 5
was then applied to these measurements using Equation 5.6 as follows:
W_T(S_i, t) = \sum_{k=1}^{T} W_k(S_i, t)        (8.1)
Figure 8.6: Weighting function of crushed rocks using the linear model
(Equation 6.3): (a) Weighting function of crushed rocks, (b) Error between
the actual sieving results and the Virtual Sieving, (c) Error between the
nonlinear and the linear models
where

W_k(S_i, t) =
  0,                              S_i \le S_m
  (S_i - S_m)/(S_M - S_m),        S_m < S_i < S_M        (8.2)
  1,                              S_M \le S_i
The results are shown in Figure 8.6. The distribution of the spread rocks using the
linear model Weighting Function is shown in Figure 8.6 (a). Figure 8.6 (b) shows the
error between the actual and Virtual Sieving, which is less than 15%. This error is
Figure 8.7: Size frequency and distribution of spread rocks using the Virtual Sieving method: (a) Size frequency, (b) Size distribution
acceptable since the sieving test was manual. Figure 8.6 (c) shows the error between
the linear and the nonlinear models for the same set of spread rocks. This error is
very small (i.e. in the range of 10^{-3}); consequently, the linear model of the weighting
function will be used since it requires less computational time.
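The linear Weighting Function of Equations 8.1 and 8.2 can be sketched directly. This is only an illustration: the linear interpolation of the middle branch is a reconstruction of the illegible term, and the t argument of the thesis notation is dropped.

```python
def w_k(s_i, s_m, s_M):
    """One term of Equation 8.2 (linear model): weight of a fragment of
    size s_i for the sieve interval [s_m, s_M].  Zero below the interval,
    one above it, linear in between (the interpolation is an assumption)."""
    if s_i <= s_m:
        return 0.0
    if s_i >= s_M:
        return 1.0
    return (s_i - s_m) / (s_M - s_m)

def w_total(s_i, sieves):
    """Equation 8.1: total weight W_T as the sum over the T sieve intervals,
    each given as a (s_m, s_M) pair."""
    return sum(w_k(s_i, s_m, s_M) for (s_m, s_M) in sieves)
```

Because each term is monotone in s_i, larger fragments always accumulate at least as much total weight, which is the behaviour the sieving analogy requires.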
Figure 8.7 presents the size frequency and the cumulative size distribution of the
spread rocks (i.e. no overlap) using the Virtual Sieving method. In comparison to the
distribution obtained by physical sieving (shown in Figure 8.8), the technique shows
excellent correlation.
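The cumulative curve compared here is just the running sum of the size frequency. A minimal sketch, with illustrative grid sizes and frequency values (not the thesis data):

```python
# Hypothetical size frequency per grid-size class (fractions of the total).
grid_sizes = [2.0, 4.0, 6.0, 8.0, 10.0]            # cm
frequency = [0.10, 0.30, 0.40, 0.15, 0.05]

# Cumulative passing distribution: fraction of material at or below each
# grid size, as plotted against the physical-sieving curve.
cumulative = []
total = 0.0
for f in frequency:
    total += f
    cumulative.append(total)
```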
8.3
Many mining researchers have studied the problem of estimating the size distribution
from the surface of a muck-pile. In this section we will present the theory of two of
the most commonly used methods, namely, Maerz's and Kemeny's methods.
Figure 8.8: Cumulative size distribution of the spread rocks: Virtual Sieving
compared with the actual sieving results (grid size in cm)
8.3.1
Maerz's Method
In their method, Maerz et al. [95] modelled fragments as approximate spheres. From
the projected area A of each individual fragment, the diameter of a circle of equivalent
area (d_ea = 2\sqrt{A/\pi}) was computed. The distribution of d_ea's was then divided into s
classes (s = 10) of equal class width, \Delta = d_{eaM}/s, where d_{eaM} is the maximum size of
d_ea. The frequencies in each size class (d_i) were expressed as the number of blocks (N)
of a particular diameter class (d_i) per unit area (A) of the fragment surface (N_a(d_i)).
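The two dimensional measurement step of Maerz's method can be sketched as follows. This is a minimal illustration: the function name and the clamping of the top bin edge are assumptions, and the empirical 3-D calibration of the method is not reproduced.

```python
import math

def maerz_frequencies(areas, total_area, s=10):
    """Equivalent-circle diameters d_ea = 2*sqrt(A/pi) from projected
    areas, binned into s equal-width classes, expressed as number of
    blocks per unit surface area, Na(d_i)."""
    d_ea = [2.0 * math.sqrt(a / math.pi) for a in areas]
    d_max = max(d_ea)
    width = d_max / s
    counts = [0] * s
    for d in d_ea:
        i = min(int(d / width), s - 1)   # the top edge falls in the last class
        counts[i] += 1
    return [c / total_area for c in counts]
```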
The true or three dimensional size distribution was then estimated by applying
the following equation:

(8.3)

where f(d_i) is an empirical calibration function (for each class diameter there is a
corresponding calibration value).
8.3.2
Kemeny's Method
Kemeny et al. [80] combined two measured parameters in estimating the size distribution. These parameters are: the projected area and the axes of the best fitting ellipse. In
this method, the equivalent diameter (fragment screen size) was calculated for each
fragment as follows

d_i =        (8.4)

where M_i is the major axis of the best fitting ellipse of fragment i and m_i is the minor
axis. To estimate the volume, Kemeny et al. [80] multiplied the projected area of
each fragment A_i by the equivalent diameter, i.e.

V_i = A_i d_i        (8.5)

The size distribution was then classified into k equal classes; then a probability
matrix P was computed for the number of fragments, where the dimension of P is
N × k. Using the midpoints of each class (ζ_j, j = 1, ..., k), the relative fragment size
was used to assign the elements of P as follows:

p_{i1} = α
p_{in} = 1 - (α + β),    n = 2, ..., k - 1        (8.6)
p_{ik} = β
where

β = 0.0401 + 20.8973 x^{9.3084} e^{-4.7464x}        (8.7)

with β = 1 if β > 1, and β = 0 if x < 0.2.
The elements of the probability matrix were calculated by normalizing the elements
of P as follows:

\bar{p}_{ij} = p_{ij} / \sum_{j=1}^{k} p_{ij}        (8.8)
Pn
-1
Pn
(8.9)
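The volume estimate of Equation 8.5 and the row normalization of Equation 8.8 can be illustrated with a small sketch. All measurement values here are arbitrary, and the equivalent diameters d_i are assumed already computed from the ellipse axes.

```python
# Hypothetical per-fragment measurements: (projected area A_i, equivalent
# diameter d_i).
fragments = [(4.0, 2.0), (9.0, 3.0), (1.0, 1.0)]

# Equation 8.5: volume estimate V_i = A_i * d_i.
volumes = [a * d for (a, d) in fragments]

# Equation 8.8: normalize each row of the N x k count matrix P so the
# class probabilities of one fragment sum to one (values are arbitrary).
P = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 1.0]]
P_norm = [[p / sum(row) for p in row] for row in P]
```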
8.4
The two methods explained in the previous section are the most established and
commonly used methods in the mining industry for estimation of the size distribution
of fragments from the surface of muck-piles. These two methods were applied to the
generated data described in section 8.1, and to the experimental data of section 8.2.
This section discusses the results of this application and compares it with the Virtual
Sieving method described in this thesis.
8.4.1
The two methods were first applied to the spherical data of section 8.1. The estimated
size frequencies obtained using the two methods are given in Figures 8.9 (a) and (b)
for the Maerz and Kemeny methods respectively. From these figures, one can notice the
similarity between Maerz's solution and the Saltkov method (see Figure 8.1 (c)).
The two methods were then applied to the ellipsoid data of section 8.1 for two
different values of λ, 3 and 5. The resulting size frequencies using the two methods
for λ = 3 are given in Figures 8.9 (c) and (d). Figures 8.9 (e) and (f) show the same
respective results when λ = 5.
8.4.2
The two methods were also applied to the actual data of section 8.2. Three tests were
conducted using the experimental data. In the first test, images of spread rocks were
used. In the second and third tests, images of the overlapping rocks were considered.
The results of these tests are presented in Figures 8.10, 8.12 and 8.14 respectively.
Figure 8.10 (a) shows the size frequency of the actual rocks. Figures (b) and (c)
Figure 8.9: Frequency of arbitrary generated data using Maerz's and Kemeny's methods: (a) Size frequency using Maerz's method for spheres, (b)
Size frequency using Kemeny's method for spheres, (c) Size frequency using
Maerz's method for ellipsoids (λ = 3), (d) Size frequency using Kemeny's
method for ellipsoids (λ = 3), (e) Size frequency using Maerz's method for
ellipsoids (λ = 5), (f) Size frequency using Kemeny's method for ellipsoids
(λ = 5)
The Virtual Sieving method obtained the closest size frequency to the actual one in spite of the missing information
due to overlapping of rocks. On the other hand, Maerz's and Kemeny's methods
showed sensitivity to the variation of the exposed area. Figure 8.13 compares the cumulative size distributions of the actual data and those given by the three methods.
From the figure, it can be seen that missing information due to occlusion degrades
the performance of Virtual Sieving, but it still gives the closest fit.
In the third test, contour completion algorithms were used to complete the missing
parts of the rocks in the second layers caused by overlapping. Figures 8.14 (b), (c) and
(d) show the results of using Maerz's, Kemeny's and the Virtual Sieving methods
respectively. In this case, an improvement of the shape of the cumulative size distribution
using the Virtual Sieving and Kemeny's methods was seen, as shown in Figure 8.15.
This improvement resulted in a good match between the Virtual Sieving results and the
actual results. Meanwhile, Maerz's distribution remains unchanged.
The reason the size distribution obtained using the Virtual Sieving method is
the least affected when ignoring the hidden parts of overlapped rocks is because the
method is based on axes measurement, which is less sensitive to variations of the projected
regions than the area based methods (see section 5.3). A noticeable improvement to
the cumulative size distribution is achieved using the contour completion algorithm
in conjunction with Virtual Sieving.
8.5
Case Study
8.5.1
Intermediate Results
Using two iterations of the Crimmins filter resulted in a fairly smoothed image, as shown in
Figure 8.18. Figure 8.19 shows the result of applying Canny's filter to the smoothed
image using σ = 2.0.
The thinning algorithm described in Section 4.2 is then applied to the edge map
image (Figure 8.19). Short, unconnected edges are then eliminated provided that
their lengths do not exceed the defined threshold (in this case Δl = 10 pixels for arcs and
Δl = 50 pixels for closed contours). The results of the thinning and noise removal are
shown in Figure 8.20.
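The noise-removal step just described can be sketched as a simple length filter. This is only an illustration: the representation of a segment as a list of pixels, the closed-contour test, and the function name are assumptions.

```python
def prune_short_segments(segments, arc_min=10, contour_min=50):
    """Drop open edge segments shorter than arc_min pixels and closed
    contours shorter than contour_min pixels (thresholds follow the
    case study).  A segment is a list of (x, y) pixels; it is treated
    as closed when its first and last pixels coincide."""
    kept = []
    for seg in segments:
        closed = len(seg) > 1 and seg[0] == seg[-1]
        threshold = contour_min if closed else arc_min
        if len(seg) >= threshold:
            kept.append(seg)
    return kept
```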
The last stage of the image analysis is the layer classification. The first step in
= 11).
Once the junction analysis step is completed, rocks of the first layer are extracted. Using the
contour completion algorithms described in Chapter 4, Figures 8.24 and 8.26 present
the completed contours of the two types of the second layer (i.e. Types A and B).
Due to false edge information, the curve generated by the contour completion
algorithm might intersect the edge map itself; this can be seen in Figures 8.24
and 8.26. As a result, these contours are eliminated from the image (they will not be
considered in the size computation). In addition, due to smoothing, many rocks are
merged together to form one contour. To overcome this problem, each closed contour
is traced, and for each point the curvature is estimated using the Nitzberg method
described in Section 4.1.3 with σ = 2.0 and m = 7. The contour is then broken at
points where the curvature exceeds a fixed threshold (κ > 0.1 and error > 40%) by
eliminating seven points inclusively (three points on each side). If the contour breaks
into an even number of segments, the end points of each segment are connected by
a straight line. Otherwise each endpoint is connected to the point on the contour
orthogonal to its tangent.
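The curvature-based splitting step can be illustrated with a discrete sketch. The thesis uses the Nitzberg estimator; here a simple turning-angle stand-in is used instead, so this only demonstrates the thresholding idea (the 40% error test and the point-elimination detail are omitted).

```python
import math

def turn_angles(contour):
    """Absolute turning angle at each vertex of a closed contour given
    as a list of (x, y) points (a crude curvature proxy)."""
    n = len(contour)
    angles = []
    for i in range(n):
        (x0, y0) = contour[i - 1]
        (x1, y1) = contour[i]
        (x2, y2) = contour[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = a2 - a1
        while d <= -math.pi:   # wrap the difference into (-pi, pi]
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        angles.append(abs(d))
    return angles

def break_points(contour, kappa=0.1):
    """Indices where the contour would be split; the threshold value
    follows the case study."""
    return [i for i, a in enumerate(turn_angles(contour)) if a > kappa]
```

On a square traced counter-clockwise, only the four corner vertices exceed the threshold, which is exactly where merged-rock contours should be cut.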
8.5.2
Overall Performance
The overall image processing algorithms performed well: they detected 353 rocks,
compared to 451 obtained by manual tracing. Most of the rock boundaries were
detected correctly with the exception of two rocks bounded by the white dashed
boxes of Figure 8.16. Many small rocks were also merged together or with larger rocks
as a result of the smoothing algorithm used.
The "Virtual Sieving" method was then used to measure the performance of the
overall technique by comparing the size frequency and cumulative distributions of
the muck-pile using the manually traced image and the one obtained from the image
analysis algorithms. Figures 8.27 (a) and (b) show the size frequency of the scanned
image obtained from the manual tracing method and the image analysis algorithm
respectively. One can notice the great similarity between the two frequencies. This
becomes clearer when the cumulative size distributions of both methods are plotted on the same graph, as shown in Figure 8.27 (c). The slight difference between the
two distributions is a result of many factors. Among these is the human bias during
the manual tracing process, including the approximation in locating the boundaries of
the individual rocks. Also, the failure of the image analysis algorithm to detect some
of the boundaries, and the merging of the contours of more than one rock, affects the final
result. This demonstrates the effectiveness of the overall technique, which resulted in
a size distribution curve that closely follows the one obtained by manual tracing.
8.6
Conclusions
In this chapter we have presented a comparison of the Virtual Sieving method with the
existing methods for size distribution measurement used in stereology as well as in the
mining industry. The comparison was done using an artificial data set generated by
computer, as well as using actual data obtained from laboratory experiments. This
Figure 8.10: Size frequency of spread rocks (no overlap): (a) Frequency
of the actual rocks from sieving, (b) Size frequency result using Maerz's
method, (c) Size frequency result using Kemeny's method, (d) Size frequency
result using the Virtual Sieving method
Figure 8.11: Cumulative size distributions of the spread rocks: Maerz's,
Kemeny's, and the Virtual Sieving methods compared with the actual
sieving results
Figure 8.12: Size frequency of overlapping rocks without contour completion: (a) Frequency of the actual rocks from sieving, (b) Size frequency result
using Maerz's method, (c) Size frequency result using Kemeny's method, (d)
Size frequency result using the Virtual Sieving method
Figure 8.13: Cumulative size distributions of the overlapping rocks without
contour completion: Maerz's, Kemeny's, and the Virtual Sieving methods
compared with the actual sieving results
Figure 8.14: Size frequency of overlapping rocks with contour completion:
(a) Frequency of the actual rocks from sieving, (b) Size frequency result
using Maerz's method, (c) Size frequency result using Kemeny's method, (d)
Size frequency result using the Virtual Sieving method
Figure 8.15: Cumulative size distributions of the overlapping rocks with
contour completion: Maerz's, Kemeny's, and the Virtual Sieving methods
compared with the actual sieving results
Figure 8.16: Muck-pile of open-pit mine
Figure 8.20: Thinning and noise removal of the edge map of the muck-pile
image
Figure 8.23: Second layer Type A of the muck-pile without contour completion
Figure 8.24: Second layer Type A of the muck-pile with contour completion
Figure 8.25: Second layer Type B of the muck-pile without contour completion
Figure 8.26: Second layer Type B of the muck-pile with contour completion
Figure 8.27: Scanned image size frequency and distribution: (a) Size frequency from the manually traced image, (b) Size frequency from the automatic
image analysis, (c) Size distribution of the manually traced image and the
automatic image analysis
Chapter 9
Conclusions
This thesis addressed the problem of estimation of the size distribution of fragmented
rocks in a muck-pile. This problem was decomposed into two subproblems, namely:
analysis of the digital image of muck-piles that resulted from blasting, and the design
of an effective and efficient measure that correlated with the classical definition of
size measurement in mining.
Steps toward the solution of the first subproblem have been presented based on
computer vision techniques. Many problems associated with the analysis of the surface of muck-piles were addressed in this study and solutions to these problems were
proposed. Among these are fragment identification and the overlapping problem. The
overall approach formulated for image analysis involves four main steps:
1. Fragment contour extraction from intensity images.
process to deduce the parameters controlling it. These parameters were used to
define a measure which was then linked to the estimation of the size distribution of
fragments.
This chapter first presents the original contributions of the thesis. This is followed
by a discussion of the limitations of the proposed solution. Finally, recommendations
for future work are presented.
9.1
Original Contributions
The major contributions of this research are:
1. The combination of the Crimmins smoothing filter and Canny's edge detector:
Resulting in a cleaner image which simplifies the fragment contour extraction
process.
2. Development of a simple recursive edge linking strategy:
Used to fill small gaps between contour segments.
overlapping problem.
5. Development of an adaptive technique for the contour completion
algorithm:
Used to estimate the missing parts of partially occluded fragments. Using direct
shape measurement, the new formulation controls the deformation of the curve
used in estimating the contour of the hidden part of the fragment.
9.2
Limitations
The image analysis algorithm presented in this thesis showed encouraging results.
However, the success of this algorithm is highly dependent on the quality of the
image. Consequently, the algorithm suffers from the following limitations:
• In many images, boundaries of rocks are poorly defined, thus measurements
depend heavily on the reconstruction methods.
• The viewing angle was never addressed; it was always assumed that the camera
was orthogonal to the pile surface.
• Due to the lack of depth information, layer rescaling was not considered.
• The junction analysis algorithm considers only the local image information.
9.3
This study has presented several opportunities for future research. Among them are:
• A different type of sensing methodology such as colour images, stereo vision,
or laser range finders is needed for more accurate results and to obtain depth
information.
• Three dimensional modelling of fragments is also recommended to increase the
effectiveness of the Virtual Sieving algorithm.
• Bisecting rocks are an important issue, requiring the development of a search
strategy to group the unconnected parts of the contour.
• Improvement of the overall algorithms is also recommended to reduce the number of merged rocks.
• Since we have successfully demonstrated the applicability of the proposed Virtual Sieving algorithm, it is recommended that it be implemented in a mining
environment.
References
[1] J. Adams. Sieve size statistics from grain measurement. The Journal of Geology,
Vol. 85(No. 6):209 - 227, January-November 1977.
[2] A. Albano. Representation of digitized contours in terms of conic arcs and
straight-line segments. Computer Vision, Graphics, and Image Processing, Vol.
3(No. 1):23 - 33, March 1974.
[3] T. Allen. Particle Size Measurement. Chapman and Hall, New York, third
edition, 1981.
[4] I. Anderson and J. Bezdek. Curvature and tangential deflection of discrete arcs:
A theory based on the commutator of scatter matrix pairs and its application to
vertex detection in planar shape data. IEEE Transactions on Pattern Analysis
and Machine Intelligence, Vol. PAMI-6(No. 1):27 - 40, January 1984.
[5] H. Asada and M. Brady. The curvature primal sketch. IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol. PAMI-8(No. 1):2 - 14, January
1986.
[6] K. Astrom and B. Wittenmark. Computer Controlled Systems: Theory and
Design. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1984.
[8] G. Bach. Size distribution of particles derived from the size distribution of their
sections. In H. Elias, editor, Proceedings of the Second International Congress
for Stereology, pages 174 - 186, Chicago, April 1967.
[11] A. Bedair, L. Daneshmend, C. Hendricks, and M. Scoble. Automated image segmentation and measurement for rock fragmentation analysis. In Fourth Cana-
October 1994.
[12] A. Bedair, L. Daneshmend, C. Hendricks, and M. Scoble. Robust computer
vision techniques for rock fragmentation and loading analysis. In Third Conference on Computer Applications in the Mineral Industry, pages 664 - 61,
[15] O. Bergmann, J. Riggle, and F. Wu. Model rock blasting effect of explosives
[19] G. Buchan, K. Grewal, and A. Robson. Improved models of particle-size distribution: An illustration of model comparison techniques. Soil Science Society
of America Journal, Vol. 57(No. 4):901 - 908, July-August 1993.
[20] J. Canny. A computational approach to edge detection. IEEE Transactions
on Pattern Analysis and Machine Intelligence, Vol. PAMI-8(No. 6):679 - 698,
November 1986.
[21] O. Carlsson and L. Nyberg. A method for estimation of fragment size distribution with automatic image processing. In Proceedings of the First International
Symposium on Rock Fragmentation by Blasting, pages 333 - 345, Lulea, Sweden,
August 1983.
[22] A. Carter. An experimental sieving machine. Journal of Testing and Evaluation,
Vol. 15:87 - 94, 1987.
[23] C. Cheung and A. Ord. An on-line fragment size analyser using image processing techniques. In Proceedings of the Third International Symposium on Rock
Fragmentation by Blasting, pages 233 - 238, Brisbane, Australia, August 1990.
[29J M. Concetta-Morrone and D. Burr. Feature detection in human vision: A
phase-dependent energy model. Proceedings of the Royal Society of London,
Series B, Vol. 235(No. 1280):221 - 245, December 1988.
[30J P. Corke. Machine vision feedback control of mining machinery. In Third
International Symposium on Mine Mechanization and Automation, volume 1,
[35] L. Davis and A. Rosenfeld. Noise clearing by iterated local averaging. IEEE
Transactions on Systems, Man, and Cybernetics, Vol. SMC-8(No. 9):705 - 710,
September 1978.
[36] R. DeHoff and P. Bousquet. Estimation of the size distribution of triaxial ellipsoidal particles from the distribution of linear intercepts. Journal of Microscopy,
Vol. 92:119 - 135, October 1970.
[37] R. DeHoff and F. Rhines. Determination of number of particles per unit volume
from measurements made on random plane sections: The general cylinder and
the ellipsoid. Transactions of the Metallurgical Society of AIME, Vol. 221:975
- 982, October 1961.
[38] R. DeHoff and F. Rhines. Quantitative Microscopy. McGraw-Hill, Inc, New
York, 1968.
Optimization ap-
1983.
[41] L. Dorst and A. Smeulders. Length estimators for digitized contours. Computer
Vision, Graphics, and Image Processing, Vol. 40(No. 3):311 - 333, December
1987.
[42] C. Doucet and Y. Lizotte. Rock fragmentation assessment by digital photography analysis. Technical Report MRL 92-116 (TR), Mining Research Laboratories, CANMET, Val d'Or, Quebec, November 1992.
[43] R. Duda and P. Hart. Use of the Hough transformation to detect lines and curves in
pictures. Communications of the ACM, Vol. 15(No. 1):11 - 15, January 1972.
[44] G. Dudek and J. Tsotsos. Shape representation and recognition from curvature.
In Proceedings of the 1991 IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, pages 30 - 37, Lahaina, Maui, Hawaii, June
1991.
[45] J. Fang and T. Huang. A corner finding algorithm for image analysis and
registration. In Proceedings of the National Conference on Artificial Intelligence
AAAI-82, pages 46 - 49, Pittsburgh, Pennsylvania, August 1982.
In Proceedings of the International Congress on Rock Mechanics, pages 1037 1042, Aachen, Germany, 1991.
[47] J. Foley, A. van Dam, S. Feiner, and J. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley Publishing Company, Inc., Reading, Massachusetts, 1990.
[48] G. Foresti, V. Muino, C. Regazzoni, and G. Vernazza. Grouping of rectilinear
segments by labelled Hough transform, 1986.
[55] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the
Bayesian restoration of images. IEEE Transactions on Pattern Analysis and
Machine Intelligence.
[59] S. Grannes and R. Zahl. Development of a digital image based on-line product
size sensor for taconite mining. In Proceedings of the 10th WVU International
Mining Electrotechnology Conference, pages 102 - 109, July 1990.
[60] J. Grant and A. Dutton. Development of a fragmentation monitoring system
for evaluating open slope blast performance at Mount Isa Mines. In First International Symposium on Rock Fragmentation by Blasting, volume 2, pages
637 - 652, Lulea, Sweden, 1983.
[61] R. Haralick and L. Shapiro. Computer and Robot Vision, volume 1.
Addison-Wesley Publishing Company, Inc., Reading, Massachusetts, 1992.
[62] R. Haralick and L. Watson. A facet model for image data. Computer Vision,
Graphics, and Image Processing, Vol. 15(No. 2):113 - 129, February 1981.
[63] R. Haralick, L. Watson, and T. Laffey. The topographic primal sketch. International Journal of Robotics Research.
[64] R. Haralick. Digital step edges from zero crossings of second directional derivatives. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.
PAMI-5(No. 1):58 - 68, January 1984.
[65] R. Duda and P. Hart. Pattern Classification and Scene Analysis. John Wiley and
Sons, Inc., New York, 1973.
[66] M. Hu. Visual recognition by moment invariants. IRE Transactions on Information Theory.
In Proceedings of the 1985 IEEE Computer Society Conference on Computer Vision and
Pattern Recognition, pages 204 - 209, Miami Beach, Florida, June 1986.
[69] G. Hunter, C. McDermott, N. Miles, A. Singh, and M. Scoble. A review of
image analysis techniques for measuring blast fragmentation. Mining Science
and Technology.
June 1991.
[73] S. Grannes. Determining size distribution of moving pellets by computer image processing. In R. Ramani, editor, Proceedings of the 19th Application of
Computers and Operations Research in the Mineral Industry, pages 746 - 753.
[75] R. Irani and C. Callis. Particle Size: Measurement, interpretation, and Application. Wiley, New York, 1963.
[76] D. Jacobs. Recognizing 3-d objects using 2-d images. Master's thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute
of Technology, 1992.
[77] R. Jain, R. Kasturi, and B. Schunck. Machine Vision. McGraw-Hill, Inc., New
York, 1995.
[78] H. Jeffreys. Scientific Inference. Cambridge University Press, London, UK,
third edition, 1973.
[79] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour
models. In First International Conference on Computer Vision, pages 259 -
[80] J. Kemeny, A. Devgan, R. Hagaman, and X. Wu. Analysis of rock fragmentation using digital image processing. Journal of Geotechnical Engineering, Vol.
119(No. ):1144 - 1160, July 1993.
[81] B. Kettunen, P. Niles, and R. Bleifuss. Size distribution of mine run ore by
image analysis. In Proceedings of the 65th Annual Meeting of the Minnesota
[84] L. Kubinova. Recent stereological methods for the measurement of leaf anatomical characteristics: Estimation of the number and size of stomata and mesophyll
cells. Journal of Experimental Botany, Vol. 45(No. 20):119 - 12, January
1994.
[85] Z. Kulpa. Area and perimeter measurement of blobs in discrete binary pictures.
Computer Vision, Graphics, and Image Processing, Vol. 6(No. 5):434 - 451,
October 1977.
[86] U. Landau. Estimation of a circular arc center and its radius. Computer Vision,
October 1988.
[99J D. Marr and E. Hildreth. Theory of edge detection. Proceedings of the Royal
Society of London, Series B, Vol. 207(No. 1167):187 - 217, February 1980.
[100] D. Marr and T. Poggio. A computational theory of human stereo vision. Proceedings of the Royal Society of London, Series B, Vol. 204(No. 1156):301 - 328,
May 1979.
[101] D. Marr. Vision. W. H. Freeman and Company, San Francisco, 1982.
[102J A. Martelli. An application of heuristic search method to edge and contour
detection. Communications of the ACM, Vol. 19(No. 2):73 - 83, February 1976.
[103] G. Mastin. Adaptive filters for digital image noise smoothing: An evaluation.
Computer Vision, Graphics, and Image Processing, Vol. 31(No. 1):103 - 121,
July 1985.
[104J C. McDermott, G. Hunter, and N. Miles. The application of image analysis to
the measurement of blast fragmentation. Technical report, Nottingham University, Nottingham, UK, 1989.
[105] G. Medioni and Y. Yasumoto. Corner detection and curve representation using
cubic b-splines. Computer Vision, Graphics, and Image Processing, Vol. 39(No.
3):267 - 278, September 1987.
[106] F. Mokhtarian and A. Mackworth. Scale-based description and recognition
of planar curves and two-dimensional shapes. IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. PAMI-8(No. 1):34 - 43, January 1986.
[107] U. Montanari. On the optimal detection of curves in noisy pictures. Communications of the ACM, Vol. 14(No. 5):335 - 345, May 1971.
[108] P. Moran. Measuring the length of a curve. Biometrika, Vol. 53(No. 3 and
4):359 - 364, 1966.
[114] H.H. Nguyen and P. Cohen. Automated recognition of ore distribution by textural segmentation. In Third International Symposium on Mine Mechanization
and Automation, volume 1, pages 3-1 - 3-11, Golden, Colorado, June 1995.
[115] S-L. Nie and A. Rustan. Techniques and procedures in analyzing fragmentation after blasting by photographic method. In Proceedings of the Second
International Symposium on Rock Fragmentation by Blasting, pages 102 - 113,
[l1J M. Nitzberg and D. Mumford. The 2.1-d sketch. In Third International Conference on Computer Vision, pages 138 - 144, Osaka, Japan, December 1990.
[121] L. Cruz-Orive. Particle size-shape distribution: The general spheroid problem,
I. Mathematical model. Journal of Microscopy, Vol. 107:235 - 253, August 1976.
[122] L. Cruz-Orive. Particle size-shape distribution: The general spheroid problem,
II. Stochastic model and practical guide. Journal of Microscopy, Vol. 112:153 -
[123] J.J. Orteu, J. C. Catalina, and M. Devy. Perception for a roadheader in automatic…
In Conference on Robotics and Automation, pages 626 - 632, Nice, France, May
1992.
[124] N. Paley, G.J. Lyman, and A. Kavetsky. Optical blast fragmentation assessment. In Proceedings of the Third International Symposium on Rock Fragmentation by Blasting, pages 291 - 301, Brisbane, Australia, August 1990.
[125] T. Pavlidis. Algorithms for Graphics and Image Processing. Computer Science
Press, 1982.
[126] J. Peck, C. Hendricks, and M. Scoble. Blast optimization through performance
monitoring of drills and shovels. In M. Singhal and M. Vavra, editors, Proceed-
[127] T. Peli and D. Malah. A study of edge detection algorithms. Computer Vision,
Graphics, and Image Processing, Vol. 20(No. 1):1 - 21, September 1982.
[128] P. Perona. Steerable-scalable kernels for edge detection and junction analysis. In G. Sandini, editor, Proceedings of the Second European Conference on Computer Vision ECCV'92, pages 1 - 18, Santa Margherita Ligure, Italy, May 1992. Springer-Verlag.
[129] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.
12(No. 7):629 - 639, July 1990.
[130] P. Perona and J. Malik. Detecting and localizing edges composed of steps, peaks and roofs. In Third International Conference on Computer Vision, Osaka, Japan, December 1990.
[131] W. Pratt. Digital Image Processing. John Wiley and Sons, Inc., New York,
second edition, 1991.
[132] W. Press, B. Flannery, S. Teukolsky, and W. Vetterling. Numerical Recipes in
C. Cambridge University Press, 1990.
[133] K. Rangarajan, M. Shah, and D. Van Brackle. Optimal corner detector. In
Proceedings of the Second International Conference on Computer Vision, pages
[135] P. Rosin and E. Rammler. The laws governing the fineness of powdered coal. J. Inst. Fuel, Vol. 7:29 - 36, 1933.
[138] P. Sahoo, S. Soltani, K. Wong, and Y. Chen. A survey of thresholding techniques. Computer Vision, Graphics, and Image Processing, Vol. 41(No. 2):233
- 260, February 1988.
[139] P. Saint-Marc, J.S. Chen, and G. Medioni. Adaptive smoothing: A general tool for early vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13(No. 6):514 - 529, June 1991.
[140] L. Santaló. Integral Geometry and Geometric Probability. Addison-Wesley Publishing Company, 1976.
[141] I. Saxl. Stereology of Objects with Internal Structure. Elsevier Science Publishers, Amsterdam, The Netherlands, 1989.
[142] J. Schleifer, R. Chavez, D. Leblin, and S. Grollier. Grain size distribution
analysis for blasting by means of image processing. In J. Elbrond and X. Tang,
editors, Proceedings of the International Symposium on the Application of Computers and Operations Research in the Mineral Industries, pages 361 - 367,
[146] A. Shashua. Geometry and Photometry in 3D Visual Recognition. PhD thesis, Massachusetts Institute of Technology, 1992.
[147] J. Shu. One-pixel-wide edge detection. Pattern Recognition, Vol. 22(No. 6):665 - 673, 1989.
[148] R. Stefanelli and A. Rosenfeld. Some parallel thinning algorithms for digital
pictures. Journal of the ACM, Vol. 18(No. 2):255 - 264, April 1971.
[149] H. Steinhaus. Length, shape and area. Colloquium Mathematicum, Vol. 3:1 - 13, 1954.
[150] D. Struik. Lectures on Classical Differential Geometry. Addison-Wesley Publishing Company, New York, 1961.
[151] H. Takahashi, H. Kamata, T. Masuyama, and S. Sarata. Autonomous shovelling of rocks by using image vision system on LHD. In Third International Symposium on Mine Mechanization and Automation, volume 1, pages 1-33 - 1-44,
Golden, Colorado, June 1995.
[152] G. Tallis. Estimating the size distribution of spherical and elliptical bodies in
conglomerates from plane sections. Biometrics, Vol. 26(No. 1):87 - 103, March
1970.
puter Vision and Pattern Recognition, pages 104 - 112, Seattle, Washington,
June 1994.
[163] A. Witkin. Scale-space filtering. In A. Bundy, editor, Proceedings of the Eighth
International Joint Conference on Artificial Intelligence, pages 1019 - 1022,
[165] Y. Yasuoka and R. Haralick. Peak noise removal by a facet model. Pattern
Recognition, Vol. 16(No. 1):23 - 29, 1983.
izing particle size. Powder Technology, Vol. 50(No. 1):9 - 89, March 1987.
[168] O. Zuniga and R. Haralick. Corner detection using the facet model. In Proceedings.
Appendix A
Convex Hull Algorithm
The convex hull of a set is the intersection of all the half-planes containing it. An approximation to this, called the 8-hull, is defined as the intersection of only those half-planes which contain the set and whose edges are either horizontal or vertical or lie in either of the 45° diagonal directions. The 8-hull of a set has at most eight sides.

For a binary image, the iterative convex hull algorithm works as follows: at each iteration, the value of a pixel is changed from 0 to 1 if its neighbouring pixels have ones arranged in any one of the following configurations:
[Figure: the 3 × 3 neighbourhood configurations of ones that trigger the change.]
The blank squares can be either zeros or ones. If enough iterations of this step are performed, eventually the 8-hull of the given set will be generated, and it will be invariant under further iterations.

An iterative algorithm for smoothing the ragged edges of binary images of vehicles, obtained by slicing gray-level radar images, is referred to as the complementary hulling algorithm. One step of the 8-hull algorithm described above is applied to the set. Then one step of the 8-hull algorithm is applied to its complement. In other words, one step of the 8-hull algorithm is applied, then zeros and ones are interchanged, then another step of the 8-hull algorithm is applied, and finally, zeros and ones are interchanged again. This has the effect of gradually reducing the maximum curvature of the boundary of the set. More precisely, with few exceptions, the boundary of a set invariant under this algorithm can turn a maximum of 45° at any vertex.
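The hulling steps above can be sketched in Python. This is an illustrative sketch only: the exact neighbourhood configurations were given in the figure, so the three masks below (ones flanking the pixel along a row, a column, or a diagonal) are stand-in assumptions, and the function names are made up for this example.

```python
import numpy as np

# Illustrative 3x3 configurations (assumptions, not the exact masks from the
# figure): a zero pixel becomes one when all positions marked 1 contain ones.
MASKS = [
    np.array([[0, 0, 0], [1, 0, 1], [0, 0, 0]]),  # horizontal neighbours
    np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]]),  # vertical neighbours
    np.array([[1, 0, 0], [0, 0, 0], [0, 0, 1]]),  # 45-degree diagonal
]

def hull_step(img, masks=MASKS):
    """One iteration: flip an interior pixel from 0 to 1 if its 3x3
    neighbourhood matches the ones required by any configuration."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if img[y, x] == 1:
                continue
            nb = img[y - 1:y + 2, x - 1:x + 2]
            if any(np.all(nb[m == 1] == 1) for m in masks):
                out[y, x] = 1
    return out

def eight_hull(img, masks=MASKS):
    """Iterate hull_step until the image is invariant."""
    cur, nxt = img.copy(), hull_step(img, masks)
    while not np.array_equal(cur, nxt):
        cur, nxt = nxt, hull_step(nxt, masks)
    return cur

def complementary_hulling_step(img, masks=MASKS):
    """One 8-hull step on the set, then one on its complement
    (zeros and ones interchanged), smoothing the boundary."""
    grown = hull_step(img, masks)
    return 1 - hull_step(1 - grown, masks)
```

With these masks, a zero pixel lying between two ones along a row, column, or diagonal is filled in a single step, and iteration stops as soon as a pass changes nothing.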
Appendix B
Curvature Estimation
The Nitzberg et al. [118] method of estimating the curvature is based on the local circle touching method, i.e. estimating the parameters of the best fitting circle through the data points. Given points $Q_i = (x_i, y_i)$ and weights $w_i$, the error $e^2$, for a given candidate centre $c$ and a radius $r$, is equal to the weighted sum of squared radial distances from the circle to each point, i.e.

$$e^2 = \sum_i w_i \left( \|Q_i - c\| - r \right)^2 \qquad (B.1)$$

Minimizing $e^2$ directly is a nonlinear problem, but for points $Q_i$ lying close to the circle

$$\sum_i w_i \left( \|Q_i - c\|^2 - r^2 \right)^2 \approx (2r)^2 \sum_i w_i \left( \|Q_i - c\| - r \right)^2 \qquad (B.2)$$

so the algebraic error on the left-hand side is minimized instead.
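The quality of the approximation in equation B.2 is easy to check numerically. The sketch below (assuming NumPy, with made-up circle parameters and noise level) compares the algebraic error with $(2r)^2$ times the geometric error for points scattered close to a circle.

```python
import numpy as np

rng = np.random.default_rng(0)
r_true = 2.0
centre = np.array([1.0, -1.0])

# 200 points at radius r plus small radial noise around the circle
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
radii = r_true + rng.normal(0.0, 0.01, 200)
Q = centre + radii[:, None] * np.column_stack([np.cos(theta), np.sin(theta)])

dist = np.linalg.norm(Q - centre, axis=1)
algebraic = np.sum((dist**2 - r_true**2) ** 2)                  # sum (||Q_i - c||^2 - r^2)^2
geometric = (2.0 * r_true) ** 2 * np.sum((dist - r_true) ** 2)  # (2r)^2 sum (||Q_i - c|| - r)^2
ratio = algebraic / geometric   # close to 1 for points near the circle
```

With radial noise of 0.01 on a circle of radius 2, the two errors agree to well under a percent; the approximation degrades as points move away from the circle.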
Expanding the squared norm, a circle can be written implicitly as $a\|Q\|^2 + bx + cy + d = 0$, which makes the algebraic error linear in the parameters:

$$e^2 = \sum_i w_i \left( a\|Q_i\|^2 + b\,x_i + c\,y_i + d \right)^2 \qquad (B.3)$$

Nitzberg et al. [118] estimated the parameters $a$, $b$, $c$ and $d$ using equation B.3 by first constructing the design matrix $A$,

$$A = \begin{bmatrix} \|Q_1\|^2 & x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ \|Q_n\|^2 & x_n & y_n & 1 \end{bmatrix}$$

where $n$ is the number of points used to fit the model. They defined the parameter vector $B$ as follows:

$$B = \begin{bmatrix} a & b & c & d \end{bmatrix}^T$$
For the error weight matrix, Nitzberg et al. [118] smoothed the data using a fixed standard deviation $\sigma$ and a fixed odd integer window size $m$ (the number of points to fit at a time); they then built a diagonal matrix $W$ of the size of the window, with weights $w_i$, $i = 1, \ldots, 2m + 1$, centred on the middle weight $w_{m+1}$, so that

$$W = \begin{bmatrix} w_1 & 0 & \cdots & 0 \\ 0 & w_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & w_n \end{bmatrix}$$
where

$$A^T W A = \begin{bmatrix}
\sum_{i=1}^n w_i \|Q_i\|^4 & \sum_{i=1}^n w_i x_i \|Q_i\|^2 & \sum_{i=1}^n w_i y_i \|Q_i\|^2 & \sum_{i=1}^n w_i \|Q_i\|^2 \\
\sum_{i=1}^n w_i x_i \|Q_i\|^2 & \sum_{i=1}^n w_i x_i^2 & \sum_{i=1}^n w_i x_i y_i & \sum_{i=1}^n w_i x_i \\
\sum_{i=1}^n w_i y_i \|Q_i\|^2 & \sum_{i=1}^n w_i x_i y_i & \sum_{i=1}^n w_i y_i^2 & \sum_{i=1}^n w_i y_i \\
\sum_{i=1}^n w_i \|Q_i\|^2 & \sum_{i=1}^n w_i x_i & \sum_{i=1}^n w_i y_i & \sum_{i=1}^n w_i
\end{bmatrix}$$

Since $B$ is only defined up to scale, the normalization $b^2 + c^2 - 4ad = 1$ is imposed. With $B = [a \;\; b \;\; c \;\; d]^T$ it can be written as

$$b^2 + c^2 - 4ad = B^T C B, \qquad C = \begin{bmatrix} 0 & 0 & 0 & -2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -2 & 0 & 0 & 0 \end{bmatrix}$$
The parameters are then obtained by minimizing the quotient

$$e^2 = \frac{B^T A^T W A \, B}{B^T C B} \qquad (B.4)$$

Write the eigendecomposition of the symmetric matrix $A^T W A$ as

$$A^T W A = E \Lambda E^T \qquad (B.5)$$

where $E$ is an orthogonal matrix and $\Lambda$ contains the eigenvalues of $A^T W A$. From linear algebra, the square root of $\Lambda$ (a diagonal matrix) is equal to the diagonal matrix of the square roots of its elements. Equation B.5 can be rewritten as follows:

$$A^T W A = \left( E \Lambda^{1/2} \right) \left( \Lambda^{1/2} E^T \right) \qquad (B.6)$$

If an element $i$ of the diagonal of $\Lambda$ is zero at this point, then there has been an exact fit, and $B = (i^{th}$ column of $E)$.

Let

$$A_2 = \Lambda^{1/2} E^T, \qquad C_2 = A_2^{-T} C A_2^{-1} = E_2 \Lambda_2 E_2^T \qquad (B.7)$$
then

$$B = A_2^{-1} \cdot (i^{th} \text{ column of } E_2)$$

The error term in distance units is given by $\sqrt{e^2}/(2r)$, following the approximation of equation B.2. With each fit, the curvature of only one point, the middle point $m + 1$ of the window, is estimated:

$$\kappa = \frac{2a}{\sqrt{b^2 + c^2 - 4ad}}$$
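The fit can be sketched compactly in code. This is not the thesis implementation: it assumes NumPy, the function name is made up, and it solves the constrained minimization as the generalized eigenproblem $(A^T W A)B = \lambda C B$ directly (via $C^{-1} A^T W A$, since $C$ is invertible) rather than through the two eigendecompositions above.

```python
import numpy as np

def circle_curvature(points, weights=None):
    """Weighted algebraic circle fit a*||Q||^2 + b*x + c*y + d = 0 under
    b^2 + c^2 - 4ad = 1, returning the curvature 2|a| / sqrt(b^2+c^2-4ad)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # design matrix A: one row [||Q_i||^2, x_i, y_i, 1] per point
    A = np.column_stack([x**2 + y**2, x, y, np.ones(n)])
    M = A.T @ (w[:, None] * A)                       # A^T W A
    C = np.array([[0., 0., 0., -2.],
                  [0., 1., 0., 0.],
                  [0., 0., 1., 0.],
                  [-2., 0., 0., 0.]])                # B^T C B = b^2 + c^2 - 4ad
    # generalized eigenproblem M B = lambda C B, via the ordinary one for C^-1 M
    vals, vecs = np.linalg.eig(np.linalg.inv(C) @ M)
    B = np.real(vecs[:, np.argmin(np.abs(vals))])    # eigenvector with least error
    a, b, c, d = B
    return 2.0 * abs(a) / np.sqrt(b * b + c * c - 4.0 * a * d)
```

For points sampled on a circle of radius 2 the estimate is 1/2, independent of where the circle is centred, since the curvature is scale-invariant in $B$.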