
QUANTITATIVE GEOGRAPHY, 1967

LESLIE CURRY
University of Toronto

THE QUANTIFIER, as distinct from one handling statistical data, represents a
reaction to the empiricism of nineteenth-century German science which became all
but universal in twentieth-century geography. It was not always so. Lukermann1
has documented the view that the primacy of mathematics as a scientific language
was apparent to the Greek geographers. He has pointed to the importance of
probabilistic notions for the French school and of geometrical notions for Ratzel.
Kant, the pillar of recent methodology, equated the scientific content of a study
with the amount of mathematics it employed.
Very little methodological common ground exists among quantifiers; a direct
or remote connection with mathematics does not provide sufficient communality
since this discipline has an even wider area of discourse than geography. Certainly
if Hartshorne and Sauer be taken as the limits of the spectrum of methodological
discussions in the 1940s, they appear cosy intimates compared to the quantifiers of
the 1960s. The deliberate shunning of plausible argument in the social physics of
Warntz, the normative approach of Garrison, the areal association viewpoint of
McCarty, the probabilistic notions of Dacey do not represent all the strands of
the fabric.
If there be a common ground it probably lies at the level of tactics rather than
strategy or war aims and concerns the degree of articulation of abstraction which
investigators are willing to sustain. Abstraction is universal and the issue concerns
the role of empiricism in a study. Most quantifiers would take observations of
reality to compare with theory. Since theory is itself a conceptualization of an
extremely simplified version of reality, the empiricists' role is to devise and apply
computational models directed by and towards theory. This approach is quite
different from the traditional role of the empiricist as collector of facts the garnering
of which is suggested by a number of unarticulated and informal theories and whose
relationships are discussed in a similar vein to their collection. In other words, the
quantifier will not, by his peculiar definition of science, investigate a portion of
reality. Rather he will investigate a predetermined set of relationships existing in
a portion of reality. The anonymously drawn cartoon2 (Figure 1) typifies this
difference.
Quantitative methods can conveniently be discussed under two headings:
analytic models which investigate the structure of concepts and computational
models which relate the former to the real world. It is surprising that, in a geo-
graphy becoming increasingly and explicitly theoretically oriented, so few analytic
models are being written. This essay will concern itself with empirical models which
usually involve the handling of data. The history of the last dozen years can be
summed up as the adoption of methods developed elsewhere and the gradual
appreciation of the difficulties of spatial analysis. Even everyday concepts like
CANADIAN GEOGRAPHER, XI, 4, 1967, 265

FIGURE 1

density, length, texture, regularity, homogeneity lose their naive simplicity under
formal investigation, but gain a much richer content in the process. We shall seek
to describe the main problems facing the use of current computational models:
each individual avenue of research will clearly have its own unresolved issues but
we shall refer only to those of more general concern.

UNIFORM AND CONTORTED SPACE

Bunge3 and others4 have argued that theory should be written for the uniform
plane using deterministic phrasing and that the applicability of the results should
then be tested in the real world by map transformation. Certainly most of location
theory is essentially Euclidean and some form of space stretching can produce more
“realistic” results. Thus, for example, if we expect towns to be evenly distributed
given a uniform rural population basis, a mathematical expression of the actual
distribution of farmers will allow us to obtain the expected distribution of towns.
Metaphorically, we have pulled a rubber map of uniform density to fit a map of
actual density and in so doing “derived” a map of towns. One might well quarrel
with this argument not only in relation to specific theories such as the Berry-
Garrison amendments to Christaller5 but also on the mundane grounds that the
co-ordinate manipulations appear fantastically difficult for more than the simplest
problem. If we could gain the level of sophistication necessary to transform, we
could probably write theory in terms which would not require it. As might be
expected, few examples exist of this approach.
The practice of co-ordinate transformation is very similar to the method of
surface fitting. In the former, space is stretched to produce a uniform distribution,
that is, the spatial variability of some phenomenon is rendered uniform by varying
the locational co-ordinates of originally uniform grid points according to some
mathematical transformation. In the latter, space is again stretched, this time by
retaining the original grid but pushing the “rubber sheet” vertically to a different
amount at each grid point. It is obvious that the mathematical specifications of the fitted
surface and of the co-ordinate transformation are related. No work seems to have
been done on this topic.
The quantitative description of the spatial distribution of any phenomenon
which can be regarded as continuous has been attempted on numerous occasions.
A number of different methods have been employed but while their elegance is
admirable it is usually difficult to interpret the results. They are thus generally to
be regarded as “pure description” and any analytic merit they enjoy as fortuitous.
Until we have theory which allows us to anticipate the mathematical form of the
surface, the fitting of arbitrarily chosen functions cannot by itself provide insights.
Nevertheless, such methods are indispensable to any quantitative geography since
our aim must always be the derivation of the set of equations which describe the
surfaces empirically. Thus the type of functions to be fitted should at least be
suggested by the phenomena under study.
The greatest success by far in surface fitting has been the use of orthogonal
polynomials in geology and geomorphology6 since the first-order surface has an
immediate interpretation as the general tilt of the land and the second-order as
concave or convex warping. Higher orders are more difficult to put a finger on but
even residuals have been identified in substantive terms.7 Semple, under Casetti’s
supervision, sequentially fitted cones to the map of economic change.8 This also
achieves consistent plausibility in the notion of development diffusing out from
growth centres. By and large, however, we can never be sure that the components
forming the surface are meaningful. In climatological time series this occurs with

Fourier series when sometimes the amplitudes and phases look plausible and at
other times mysterious.
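In present-day computational terms, the fitting of a first-order trend surface of the kind just described can be sketched very simply; the data below are wholly invented, and the least-squares machinery stands for whatever orthogonal-polynomial routine a particular study might use.

```python
import numpy as np

# Hypothetical elevation observations at scattered (x, y) locations.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 50), rng.uniform(0, 10, 50)
z = 2.0 + 0.5 * x - 0.3 * y + rng.normal(0, 0.1, 50)  # a tilted plane plus noise

# First-order trend surface: z ~ a + b*x + c*y, fitted by least squares.
A = np.column_stack([np.ones_like(x), x, y])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
a, b, c = coef  # b and c together give the general tilt of the land
```

The second-order surface adds squared and cross-product terms to the design matrix in the same way, yielding the concave or convex warping mentioned above.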
Perhaps the surfaces used most frequently are empirical orthogonal functions,
the method being known as factor analysis. For example, airline routes have been
divided into national, regional, and local; trade and commuter flows have been
apportioned to centres and hinterlands while urban centres have been classified in
various ways.8 Perhaps the most surprising case is its use on sets of weather maps
in which the components extracted retained climatological meaning as typical
weather patterns, instead of being mixtures of types as might have been suspected.10
There is a clear need for, and much potential profit in, seeing if these patterns are
consistent through the year.
Essentially, this method takes a set of simultaneous equations describing the
characteristics of a number of areas or points and reduces the number of equations
as far as possible. Thus, whereas we might begin with thirty characteristics for each
area, we can end up with two or three, by eliminating those proportions of any or
all of the characteristics which are mathematical functions of others. Unfortunately
there is no guarantee that the characteristics with which we end up can be named,
much less understood. Certainly the only occasion in which factor analysis will
provide new insight is when we are quite ignorant of the subject being studied and
then, except in certain applied problems, it is doubtful whether we should be
investigating them to begin with. However, as a device for simplifying the descrip-
tion of areas it will always have its uses. But as with all analysis, output to increase
understanding is a direct function of theoretical input and factor analysis needs
hardly any of the latter.
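The reduction from thirty characteristics to two or three can be sketched numerically; the example below uses a principal-components decomposition via the singular value decomposition, with six invented characteristics generated from two latent factors, standing in for the factor-analytic routines of the studies cited.

```python
import numpy as np

# Hypothetical data: 40 areas described by 6 characteristics, several of
# which are near-linear combinations of two underlying factors.
rng = np.random.default_rng(1)
f = rng.normal(size=(40, 2))                       # two latent factors
load = rng.normal(size=(2, 6))                     # loadings
X = f @ load + 0.05 * rng.normal(size=(40, 6))     # observed characteristics

Xc = X - X.mean(axis=0)                            # centre each characteristic
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal components
explained = s**2 / np.sum(s**2)
# The first two components carry nearly all the variance: the portions of
# each characteristic that are functions of the others have been eliminated.
```

Whether the two retained components can be named, much less understood, is exactly the difficulty raised in the text; the arithmetic guarantees only the reduction.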
An interesting effort in quantitative geography is concerned with regionalization
and classification but the author’s ignorance of the field does not allow specific
comment.11 In general terms it may be said that until scaling of the weights to be
given to variables is other than fortuitous, the results will always be open to doubt.
As a common yardstick is not likely to be available between, for example, the
infantile mortality rate and the percentage of the labour force in the dairy industry,
let alone the relative scaling to be employed, elements of subjectivity must always
be present. One field which might allow such a scaling is the setting up of planning
regions where the definite criterion of maximizing the efficiency of the communica-
tion-decision network is imposed. Another problem of classification is the search
for natural breaks in series of data or natural clusters of data.12 Methods here too
do not appear to have been rid of subjective judgements except where clusters are
plain to the casual observer.13

CORRELATIONS AND AUTO-CORRELATIONS

There is a great need in geography to pass easily between the synoptic level of
the map and the, usually more local, level of process. We have to be able to
express both the process and the map or aspects of the map in the same terms.
This is too large a subject to be treated exhaustively in this brief essay but the
simple and well-known example of population potential will make the point
clearly.14 In a population spatial series, a spatial averaging is performed with the
weights assigned being an inverse function of distance from each local origin in
turn. This arithmetical operation describes a local influence function from which
the whole map may be constructed.
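The population-potential computation itself is elementary and may be sketched as follows; the settlement locations, populations, and the unit constant added to the distances are all invented for illustration.

```python
import numpy as np

# Hypothetical settlement populations at scattered locations (km grid).
rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(20, 2))
pop = rng.integers(1_000, 50_000, size=20)

def potential(q, pts, pop, eps=1.0):
    """Population potential at point q: the inverse-distance spatial
    averaging described above, summed over every settlement."""
    d = np.hypot(*(pts - q).T)
    return float(np.sum(pop / (d + eps)))  # eps avoids division by zero

# Evaluate the local influence function over a grid to construct the map.
grid = [[potential(np.array([i, j]), pts, pop) for i in range(0, 101, 10)]
        for j in range(0, 101, 10)]
```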
In statistical language, this gap between local influence and map is bridged by
the autocorrelation function.16 In essence it describes the common degree of spatial
averaging with distance which has been applied to the whole map. It can be
approached in two directions, this being its analytic power. The map may be
processed and the autocorrelation function obtained. Alternatively the local process
may be written in mathematical terms and this will define the autocorrelation
function.
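Both directions of approach can be seen in a one-dimensional sketch: a known local influence function (here simply an averaging over five neighbours along an invented transect) is applied, and the sample autocorrelation function recovered from the resulting series reflects it.

```python
import numpy as np

# Hypothetical transect generated by a local influence process:
# each value is an average of five neighbouring random shocks.
rng = np.random.default_rng(3)
noise = rng.normal(size=500)
series = np.convolve(noise, np.ones(5) / 5, mode="same")

def autocorr(x, lag):
    """Sample autocorrelation of a series at a given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x)) if lag else 1.0

acf = [autocorr(series, k) for k in range(10)]
# Averaging over five neighbours induces correlation out to lag four and
# roughly none beyond: the distance influence function made visible.
```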
Autocorrelation is closely tied to areal differentiation and indeed is a precise
definition of aspects of it. Obviously if there is no distance influence function each
observation is independently random and there is no pattern or texture to be
explained. Variations in the intensity and distance weighting of the local influence
will produce different spatial patterns which will be reflected in the scale of
differentiation. Mathematically this scale factor is obtained as the inverse Fourier
transform of the autocorrelation function known as the spectral density or power
spectrum. Here the total variance of the map is assigned to the various scales of
differentiation.16 While having some mathematical advantage, this function has no
substantive meaning additional to that of the autocorrelation function. However, it
may often be more convenient to think in the scale of differentiation domain than
in the local influence domain.
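That the spectral density and the autocorrelation function carry the same information, as a Fourier transform pair, can be checked numerically; the series below is invented, and the discrete transform conventions are those of the library used.

```python
import numpy as np

# An invented spatially averaged series, mean removed.
rng = np.random.default_rng(4)
x = np.convolve(rng.normal(size=512), np.ones(8) / 8, mode="same")
x = x - x.mean()

# Power spectrum: the total variance assigned to scales of differentiation.
spec = np.abs(np.fft.fft(x)) ** 2 / len(x)

# The inverse transform of the spectrum returns the (circular)
# autocovariance function: the two descriptions are interchangeable.
autocov = np.fft.ifft(spec).real
```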
This is likely to provide the main field of investigation for statistical geography
for several years. Its pertinence may be illuminated by a few examples. Consider a
succession of two-dimensional infra-red images of a forest area during the course
of the night. Autocorrelation functions and/or spectral densities of apparent surface
temperatures are calculated and compared. The transfer functions which equate
successive measures must describe the horizontal exchange of energy by long-wave
radiation and advection. Again consider the elevations of a landform type: distance
influence functions which may be obtained are clearly related to genetic processes,
possibly the height of the water table. In the field of land productivity, Fairfield
Smith's data show crop yields to have an intriguing spatial autocorrelation which
cries out for attention.17 There are numerous examples in the area of economic
localization in which the autocorrelation function provides a useful measure of
distance influence, as well as a concept for the theoretical analysis of spatial
behaviour.18 One has but to look at the work of Longuet-Higgins in oceanography,
describing the surface of the ocean, to appreciate the incredible variety of results
which can be obtained if spectra of derivatives as well as of the original functions
are considered.19 It is difficult to imagine a micro-climatology which did not have
time autocorrelation of vertical velocity available for estimating vertical transfer
by eddies.
We have so far discussed a function which describes how correlation of a single
series varies with distance. The more popular use of correlation is to consider the
relation of two series without considering lags in time or space. We are thus
concerned with relationships between, for example, temperature and meridional
wind through time at a point, or between wind fluctuations through time at two
points, or between humidity and temperature at many points at an instant of time.
The latter is the areal association problem and is discussed separately. There has
been some critical work done in climatology, in recent years, with the time relation-
ship of two variables at a point. The whole procedure of atmospheric budget-
keeping for sensible heat, momentum, water vapour, etc. which has so modified
our notions of the general circulation is based on this elementary procedure.20 In
the course of this work an important principle of general interest has been
demonstrated. This is that the transfer of some quality may be made against the
mean gradient and indeed that the mean flow may be against the mean gradient.
It is a good example of the dangers of transferring short-time notions to long-term
conditions.
There is an intimate connection between the auto-correlation function for time
series and forecasting. Clearly if we can assign lag weights to local influence
functions in the past these may be used to extrapolate future conditions on a
probabilistic basis.21 A similar problem with spatial series concerns the estimation
of map accuracy depending on the number of sampling points: it is only when we
have some knowledge of the general shape of the autocorrelation function that our
procedures assume any precision.22 There is an allied field of investigation in map
generalization, best discussed in the domain of the scale of differentiation. Statistical
mapping is largely concerned with the filtering of information, either in the design
of optimal filters or in assessing the filtering which has already been performed on
published data. This involves both the choice of class interval and the degree of
spatial generalization required.23 It is clear that the spectral density of the data
will have a profound effect on the choice of optimal intervals or areas. Yet another
topic which is somewhat related concerns the length of a natural boundary such as
a coastline; Nystuen, following Perkal, uses an "epsilon" measure, that is, a series
of arcs of given radius of curvature which are fitted to the boundary.24 By employ-
ing successively decreasing radii, the actual shape is approximated more exactly.
This method has obvious affinities to spectral analysis.
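The dependence of measured length on the resolving unit can be sketched with a divider-stepping analogue of the epsilon measure; the "coastline" below is an invented wiggly curve, and the fixed-chord walk stands in for Perkal's arcs of given radius.

```python
import numpy as np

# A hypothetical wiggly "coastline" as a dense polyline.
t = np.linspace(0, 4 * np.pi, 4000)
coast = np.column_stack([t, np.sin(t) + 0.3 * np.sin(7 * t)])

def stepped_length(line, step):
    """Walk a divider of fixed opening along the polyline: an analogue
    of measuring with arcs of a given radius of curvature."""
    total, anchor = 0.0, line[0]
    for p in line[1:]:
        if np.hypot(*(p - anchor)) >= step:
            total += np.hypot(*(p - anchor))
            anchor = p
    return total

lengths = {eps: stepped_length(coast, eps) for eps in (2.0, 1.0, 0.5, 0.1)}
# Smaller openings resolve more of the wiggles, so measured length grows.
```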
A feature of spectral analysis which becomes much more important when applied
to area than to time is the question of size of sample. It is obvious, of course, that
the maximum wavelength which can be resolved will depend on sample size. More
important, however, is the fact that a form of periodicity is likely to be induced in
the spectral density because of the finite sample. As has been said, this happens
for time series also but since sample size can be easily increased towards infinity
the oscillations can be ironed out. Consider for example two taxicabs running
between and serving A and B, in which the time taken for picking up a passenger
and making the run is constant plus a small random term. Given the initial
starting positions, it is clear that we may for a while specify the location of one taxi
from a knowledge of that of the other. Eventually, however, we will not be able to
do so. The solution for position is periodic but periodicity declines and disappears
as sample size increases.25 Think of this same effect in terms of almost regularly
spaced points in which dependence is out from a point. For small samples the
regularity will show as strong peaks in the spectrum for the fundamental wave-
length plus harmonics but as sample size is increased the periodicity will disappear.
Usually in studying time series we are not concerned with these transient solu-
tions. However, to say that in space series we are not concerned with hills and
valleys because they are averaged out in the hypsometric curve would be a rather
useless level of generalization. The steady state in a space series will only rarely be
of interest and we are left with distance-dependent solutions and transient or local
states.
A spectral density is a type of average of the degree of likeness of all points on
the map which are d yards apart. This clearly only makes sense when the area
under investigation is homogeneous in some way. It is not always easy to decide
what we mean by homogeneity. Where the Black Hills stand out from the sur-
rounding plains, it seems clear that they should be treated separately. However, in
analyzing urban settlement distribution, is it reasonable to treat the arid west
separately from the humid east? To do so implies that urban settlement is related
to the agricultural productivity due to rainfall, a rather doubtful proposition. Might
not the processes generating urban settlement actually require wide areas with
poorly developed urbanization?26 It seems clear that homogeneity is dependent on
our theoretical notions of process and cannot be obtained from the data.
It should perhaps be pointed out that there are many problems of geography in
which the scale of differentiation is of the same order of magnitude as the map and
consequently autocorrelation and spectral methods are inappropriate. We cannot
estimate an autocorrelation with a lag greater than about a tenth of the size of the
map since one can have no real confidence in a smaller number of samples.
Obviously a priori knowledge and the trend of smaller lag autocorrelations will
affect our belief in the results but there must often be a point in which we leave
the probabilistic world and face the deterministic. In this connection it is interesting
to note that a polynomial which can adequately represent a statistical surface can
be built up, term by term, by the exponential smoothing of independently random
data.27 It may thus be possible to obtain local influence functions from a
mathematically described surface without being tied by sampling limitations.

AREAL ASSOCIATION

The description of the degree of correspondence of two maps and the method
of estimating a map from others are very similar but there is a difference. The
former is a form of pure description which need not have any substantive contri-
bution to make. Indeed it will treat a nonsense correlation with as much respect
as a carefully argued structural equation. For example, two sets of variables may
have no conceivable functional relationship but if each set is serially correlated
then a cross correlation will occur over some parts of the series and the maps may
occur in such a sequence.28 On the other hand, the two sets may be the results of
similar and process-related local influence functions and yet show zero correlation
over-all. It is probably most useful to regard the map correspondence problem as
the most naive form of fishing expedition. The best method to use, if there are
sufficient data, is to investigate the cross-correlations between the various scales
of differentiation obtained by a spectral approach. However, the task is large and
it is much more efficient to abandon the objective of pure description, have some
ideas on the relationship being investigated, and be able to make some sense of
the results.
We shall turn now to the more interesting problem of multiple regression and
correlation. As multivariate analyses enter the journals increasingly, the issue of
the validity of their results is forced to our attention. Usually the theory which has
guided the study is loosely qualitative-for which the authors cannot be blamed-
so that the precise numerical results, often providing considerable explanatory
power (in the statistical sense), are quite startling. Yet it is well known that fairly
arbitrarily chosen variables having only something to do with the phenomenon
being investigated can usually be made to exhibit considerable explanatory power
with a sufficient degree of transformation. Thus, while a and b may be thought to
be related to c, the statement that .325a + .69b + .007 arc cos b explains 83 per cent
of the variance of c is disturbing. Do these functions provide new insights or do
they represent a game with numbers? Were they chosen to improve the normality
of their distributions and should they therefore be taken as having substantive
significance? Were they the result of trial and error to obtain a better fit and are
they therefore meaningful? Clear theory could provide a categorical answer but
since this is presumably not available, how should we regard them? Is a set of
linear variables explaining 55 per cent of the variance less preferable than the results
of a fishing expedition which explain 83 per cent, both being deemed “significant”?
The only possible answer to this quandary appears to be the device of leaving
some of the observations out of the analysis and then attempting to forecast these
c’s using the a and b equations so obtained. If this does not differentiate, one must
fall back on the oldest scientific adage of all-economy of hypothesis-and, unless
it can be demonstrated otherwise, linear variables are plausibly the simplest.
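The holdout device just proposed can be sketched concretely; everything below is invented, and the "fished" transformations are merely examples of the arbitrary functions the text worries about, not taken from any actual study.

```python
import numpy as np

# Hypothetical observations: c depends linearly on a and b, plus noise.
rng = np.random.default_rng(5)
a, b = rng.normal(size=200), rng.normal(size=200)
c = 2.0 * a - 1.0 * b + rng.normal(0, 0.5, 200)

train, test = slice(0, 150), slice(150, 200)

def fit_and_score(design_train, design_test):
    """Fit on the retained observations, report R^2 on the held-out c's."""
    coef, *_ = np.linalg.lstsq(design_train, c[train], rcond=None)
    resid = c[test] - design_test @ coef
    return 1 - resid.var() / c[test].var()

# Plain linear variables ...
X = np.column_stack([np.ones(200), a, b])
r2_linear = fit_and_score(X[train], X[test])

# ... versus an arbitrarily transformed set of the kind described above.
Z = np.column_stack([np.ones(200), np.sqrt(np.abs(a)),
                     np.arccos(np.clip(b, -1, 1)), a * b])
r2_fished = fit_and_score(Z[train], Z[test])
# Out of sample, the honest linear fit holds up; the fished one need not.
```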
There is perhaps nothing more perplexing than statements on the statistical
significance of areal associations. It is certainly reasonable that we should enquire
as to the possibility of the discovered association having arisen by chance but it is
doubtful if current tests do this. Consider a point or vertical process such as relates
soil chemistry and productivity. Observations taken of this type of process at
different locations are independent even when spatial autocorrelations are present
in the data. The fact that rain amounts or soil types are not spatially independent
is not relevant to the process being investigated. In this case the usual significance
tests are valid.29 However, most processes are spatial in character and we must
take account of dependence in the variables so induced. Sampling in space of such
processes is unlikely to be independent.30 In such conditions the method of seeking
associations between the various scales of differentiation, that is, via the cross
spectrum, is the proper one.
However, most statistical maps are drawn from small sets of administrative unit
data so that spectral methods are inappropriate. Yet at the same time they cannot
be regarded as independent samples. Little work appears to have been done on the
problem of areal association in this context apart from an interesting approach via
contiguity.31 However, a degree of randomness is still required which may not be
present. After all, if we can account for most of the variance of a map by a poly-
nomial then clearly the observations are highly determined by position.
Assume we fit polynomial surfaces to each of two maps: having only one equa-
tion we could not solve for the co-efficients. Fit only one of the maps and reduce
the other map by the polynomial expression so formed, the method of doing so
being uncertain. The residuals so formed could then be tested to see if a further
polynomial could be fitted. This might simply indicate that a non-linear relationship
obtains or, alternatively, imply that a real departure from the relationship occurs.
The measure of association would be the proportion of the variance of the second
map explained by the polynomial of the first. Since fitting would be by least squares,
the final residuals are likely to be normally distributed.
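The procedure just outlined may be sketched numerically; the two "maps" below are invented grids sharing a common smooth trend, and the second-order polynomial and simple linear reduction stand for whatever forms a particular problem would dictate.

```python
import numpy as np

# Two hypothetical maps on a grid, related through the same smooth
# regional trend plus independent local noise.
rng = np.random.default_rng(6)
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
x, y = gx.ravel(), gy.ravel()
trend = 3 * x - 2 * y + 4 * x * y
map_a = trend + 0.2 * rng.normal(size=x.size)
map_b = 1.5 * trend + 0.3 * rng.normal(size=x.size)

# Fit a low-order polynomial surface to the first map ...
P = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
coef_a, *_ = np.linalg.lstsq(P, map_a, rcond=None)
surface_a = P @ coef_a

# ... then reduce the second map by that surface: the variance explained
# is the measure of association, the residuals the departure from it.
slope, intercept = np.polyfit(surface_a, map_b, 1)
resid = map_b - (intercept + slope * surface_a)
assoc = 1 - resid.var() / map_b.var()
```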
An alternative procedure might be to use the first polynomial to transform the
co-ordinates of the first map to a uniform distribution. Use the same transformation
on the second map so that locations remain identical on both and then, after
checking that the relationship is linear, use what remains relative to the total as
a measure of non-association. This method would have the advantage of displaying
clearly whether some of the non-association might be simply due to a different
degree of spatial averaging between maps, either by real process or in data
publication.
These suggestions have other implications. It frequently happens that the size
of units varies over the map, the smaller being concentrated in one or a few areas
usually because of a denser population there.32 This raises the question of different
levels of filtering of information on a single map. By using a polynomial of the
data rather than the data itself this differential effect will be considerably reduced.
Consider for example a first order Tchebycheff polynomial which simply shows
the general linear tilt of the surface. Although the relatively large numbers of units
in the densely populated areas will have a disproportionately large share in the
fitting of this surface, their combined effect will be strongly limited because the
weighting of observations is based on the spacing between data points.
The same effect applies to an areal association problem and limits the weighting
of small areas which may well be excessive. The net result is to smooth the filtering
effect of unit size over the whole map: whether such a scale of filtering is pertinent
to the problem in hand is another matter but this arises just as readily with present
methods.
One possible disadvantage may be noted. A reasonably small degree polynomial
will not include all of the information of the map. It is possible that this content
will be correlated with the final residuals. No significance levels can be attached to
the amount of association since the degrees-of-freedom argument becomes useless
in such a context. Logically we only have as many degrees of freedom as we have
orders of the polynomial and this would result in most relationships being declared
not significant. However, it is doubtful whether anyone’s confidence in his results
has been hampered by the statistical limits put on it.
Obviously this method is only applicable in certain contexts. If the eastern half
of the area consists of contiguous zeros, a polynomial approach is likely to give
strange results, especially if there are a priori reasons for the zeros.

POINT STATISTICS

The main contributor to the study of two-dimensional dot distributions has been
Dacey33 but unfortunately there are very few examples of this work being taken
up in a substantive context. Methodologically the work is simple: given a region
divided into unit areas, form a frequency distribution of points per unit area and
estimate how well this data may be described by various probability density
functions. Since these functions have many linked characteristics, a number of
areas of substantive interest are opened up for examination. Since an areal density
implies a mean spacing of objects, it is clear that a probabilistic density implies a
probabilistic spacing. A density function having first and second moments equal is
termed a Poisson series and shows independent placing of objects. When the
variance is greater than the mean, dependence is being introduced by attraction
between objects forming clusters. In the opposite case, repulsion occurs, producing
regularity. We can thus form some ideas on the type of process resulting in the
distribution and have the whole of probability theory to structure our description
of process. Again, as we speak of dependence between objects, it is clear that we
can move into the covariance between objects (or areas) and the spectral density
so formed.
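The variance-to-mean diagnostic is easily sketched; the two invented point patterns below, one placed independently and one built from clustered "parent and offspring" points, show the contrast described above.

```python
import numpy as np

rng = np.random.default_rng(7)

def quadrat_counts(points, size=10, cell=1.0):
    """Frequency of points per unit cell over a size x size region."""
    bins = np.arange(0, size + cell, cell)
    counts, *_ = np.histogram2d(points[:, 0], points[:, 1], bins=[bins, bins])
    return counts.ravel()

# Independent placement: variance close to the mean, as in a Poisson series.
random_pts = rng.uniform(0, 10, size=(300, 2))
c_rand = quadrat_counts(random_pts)

# Clustered placement (attraction between objects): variance exceeds mean.
parents = rng.uniform(0, 10, size=(20, 2))
offspring = parents.repeat(15, axis=0) + rng.normal(0, 0.3, size=(300, 2))
c_clus = quadrat_counts(np.clip(offspring, 0, 9.999))

ratio_rand = c_rand.var() / c_rand.mean()
ratio_clus = c_clus.var() / c_clus.mean()
```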
The main difficulties attending the applications of stochastic models to the real
world are those encountered in other areas. The mixing of scales of phenomena,
so that probabilistic processes occur on deterministically differentiated surfaces is
a thorny one but certainly not beyond solution by the use of polynomials. The
mixing of probability distributions is also important and solutions are available
for only a few cases.34 Again, choice of quadrat size can affect results but this is
likely to be a consequence of the above-named difficulties. However, more than
anything else, the main need is for the employment of these powerful techniques
in actual cases. Harvey and Getis, for example, have pointed the way to a rigorous
verification of Hagerstrand’s diffusion model by the fitting of negative binomial
density functions.35 We could probably learn more about the settling of North
America in process terms by observing changes in the parameters and forms of
probability functions than by a decade of archival research.
A rather curious notion used by Medvedkov is the degree of entropy of a spatial
distribution of points tending toward regularity.36 Since the mean is greater than
the variance it follows that their comparison will indicate the degree of regularity
or of independent randomness of the data. However, Medvedkov states this
differently. He uses the variance as a measure of “entropy,” essentially randomness,
and mean minus variance as a measure of regularity. In other words he, quite
legitimately, conceives the distribution as being a mixture of a Poisson and regular
distribution. The strangeness occurs in believing that this mathematical fiction can
be recognized on the ground: x percent of settlements are regularly distributed and
100-x are Poisson. This is akin to saying that an even angular distribution of misses
around a bull’s-eye is identical with so many hitting the bull’s-eye and others being
randomly placed. Maybe the scores would be the same but the shooters are certainly
different.
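The arithmetic of the decomposition is worth setting out, since its simplicity is part of its seductiveness; the quadrat counts below are invented for a pattern tending toward regularity.

```python
# Medvedkov's decomposition as described above: for a pattern with
# variance below the mean, read the variance as the "Poisson" (random)
# share and mean minus variance as the "regular" share.
counts = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3, 3, 4]  # settlements per quadrat

n = len(counts)
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / n

poisson_share = var / mean            # proportion treated as random
regular_share = (mean - var) / mean   # proportion treated as regular
# The text's caution: the split is a mathematical fiction; nothing entitles
# us to point to x per cent of actual settlements as "the regular ones."
```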
DATA-MAN

It takes but little confrontation with assembled data or a summer for field work
to appreciate that some model of men as organizers of space is necessary to guide
one’s work. This linking of model and data, while a universal scientific task, does
present certain characteristic features in geography. Since almost always groups
of men are the concern while frequently it is only individuals who may be observed,
the first problem is to find a method of modelling which will allow individual
characteristics to be ascribed to groups. The second concerns census data. In
exploiting these rich resources a total of characteristics is ascribed to a fairly
arbitrary grouping of people and we must somehow erect a model of man to bring
them to human terms. These problems run in opposite directions along the model-data
link, each involving a step of aggregation. It is of course possible to bypass this quandary
by having both model and data at either the individual level or the group level.
However, while the data are available, very few activities may be plausibly discussed
as group behaviour. On the other hand, while individualistic models are generally
the easiest to provide, the enormous number of samples necessary to obtain
sufficient data to fit them, except for highly homogeneous groups, is formidable.
Consider for example a journey-to-work study: a meaningful individualistic decision
model relating places of work and residence would need to incorporate so many
variables that a very large sample would be necessary to evaluate it.
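The data demand grows multiplicatively: with v variables each stratified into k classes, there are k**v distinct types of individual to observe. A two-line illustration (the figure of five classes is assumed, not from the text):

```python
# Cells needing observations when each of v variables has k = 5 classes.
for v in (2, 5, 10):
    print(v, 5 ** v)  # grows from 25 cells to nearly ten million
```

Even modest stratification thus outruns any feasible journey-to-work survey, which is why the representative-man device described next is attractive.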
Alternatively, one can choose one or two of the main variables, stratify them in
some way and erect “representative” men for each stratum. Behaviour may be
quite contradictory between each stratum since we have sharply contracted the
number of variables being considered. However, these variables suffice to constrain
adequately the behaviour of this larger number of independently considered
representative men. This process could be continued indefinitely with the final result
being an extremely large number of representative men having an extremely small
number of constraints on their actions. In probability theory it makes no difference
whether we speak of a single particle having a probabilistic existence at specific
places or a large number of particles actually being at different places according
to the corresponding frequencies. Consequently, instead of many representative
men each having a small number of possibly different constraints we may use a
summation man in which the constraints are generalized and lose plausibility in
the process. However, the probabilities of the constraints applying in individual
cases may be convoluted and thus we end up with the summation man acting quite
randomly between possible states, the transition probabilities between states being
the convolution of the probabilities of the constraints applying in individual cases.37
This summation man is, of course, different from the individual decision maker.
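The construction can be made concrete as a mixture of Markov chains. In this sketch (states, matrices, and weights are all invented for illustration) two representative men have their transition matrices combined, in proportion to their frequencies in the population, into a single summation man:

```python
import numpy as np

# Transition matrices over three states (say home, work, shop) for two
# hypothetical representative men; all numbers are illustrative only.
commuter = np.array([[0.1, 0.8, 0.1],
                     [0.7, 0.2, 0.1],
                     [0.6, 0.2, 0.2]])
shopper = np.array([[0.2, 0.2, 0.6],
                    [0.5, 0.1, 0.4],
                    [0.7, 0.1, 0.2]])

# The summation man: a frequency-weighted mixture of the strata.
weights = (0.7, 0.3)
summation_man = weights[0] * commuter + weights[1] * shopper

# Rows still sum to one, so the aggregate is a valid Markov chain,
# yet it coincides with neither individual decision maker.
print(summation_man.round(2))
```

The aggregate chain reproduces the frequencies of the population as a whole while describing no actual individual, which is precisely the distinction drawn above.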

THE TIME AND SPACE OF GEOGRAPHY

The most fundamental problem besetting geographical studies is the nature and
specification of the time and space dimensions involved. Concomitantly a recon-
ciliation is necessary between the various specifications and between their appro-
priate languages. Some of the difficulties can be exemplified by climatology. To
appreciate the all-important heat and moisture balance at the surface we must look
at interchanges of energy across the atmosphere-earth interface via turbulence.
The extremely small-scale eddy structure forces a statistical language on us for
studying collective properties. Yet bushes, hills, houses are affecting the turbulent
structure and these are highly specific deterministic features at this scale. Hence
the penchant for the featureless plain.
Moving up in scale, the bushes become a vegetation surface, the hills become
a type of topography and so on. Again we are back in the collective language but
again we are influenced by deterministic phenomena of meso and macro scale:
the general circulation or the lie of a continent. Finally, at the global level we may
treat weather systems of the size of half a continent in statistical terminology but
realize that our earth has a single specific distribution of land and water or of
mountain ranges. Thus, even within the restricted area of atmospheric circulation
we are constantly moving between operationally incompatible languages as the
result of the spaces we encounter.
How much more irreconcilable are these scales of operation in a truly regional
study seeking understanding of the integration of phenomena in area. Nature may
have achieved an integration but we cannot understand it. A literary tour de force
may provoke a poetic intuitive grasp of this totality but in any mundane scientific
sense we are very far from it. The best we can hope for in computational models
is a “parametrization” of smaller scale collective processes in deterministically
studied higher scales or, alternatively, a movement between the collective scales of
study, regarding the deterministic influences as exogenous to lower-scale studies but
compatible with higher scales.
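"Parametrization" here means collapsing an unresolved collective process into a coefficient usable at the larger scale. A standard example from the surface energy balance is the bulk-transfer formula for sensible heat flux, in which the whole eddy structure hides inside one exchange coefficient; the numerical values below are typical magnitudes assumed for illustration, not taken from the text.

```python
def sensible_heat_flux(rho, cp, c_h, wind, t_surface, t_air):
    """Bulk-transfer parametrization: H = rho * cp * C_H * U * (Ts - Ta).
    The turbulent eddies never appear explicitly; their collective
    effect is summarized by the exchange coefficient c_h."""
    return rho * cp * c_h * wind * (t_surface - t_air)

# Air density 1.2 kg/m^3, cp 1004 J/(kg K), C_H 1.3e-3, wind 5 m/s,
# surface 3 K warmer than the air: flux in W/m^2.
print(sensible_heat_flux(1.2, 1004.0, 1.3e-3, 5.0, 293.0, 290.0))
```

The statistical language of the small scale thus enters the deterministic larger-scale study only through the fitted value of the coefficient.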
Earlier we referred to the reconciliation necessary between the local operator,
representing process, and the large scale of the map. For many purposes the map,
being a synoptic view, is of no significance whatsoever to the processes going on.
Even the existence of maps implies a viewpoint which will not be available to the
actors on the landscape. Kinglake expresses this idea nicely:
In so far as the battlefield presented itself to the bare eyesight of men, it had no
entirety, no length, no breadth, no depth, no size, no shape and was made up of nothing
except small numberless circlets commensurate with such ranges of vision as the mist
might allow at each spot. . . . In such conditions, each separate gathering of English
soldiery went on fighting its own little battle in happy and advantageous ignorance of
the general state of the action; nay, even very often in ignorance of the fact that any
great conflict was raging.38
If this notion be at all realistic then it must follow that an important task of
institutions is to provide address systems by which the whole area may be articu-
lated or be searched using only local information. This appears to imply some
form of hierarchical ordering of locational registers and a structure which will
allow for the easy assimilation of new information. The organizations which allow
the linking of human memories of locational information into operational regional
structures control the circulations of economic geography.
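An address system of this kind can be sketched as nested registers: each node holds only its own children (purely local information), yet any location in the whole territory resolves by descending level by level. The place names and the toy structure are invented.

```python
# Each register lists only its immediate subdivisions; no node holds
# a synoptic view of the whole territory.
registry = {
    "ontario": {"toronto": {"downtown": "found", "scarborough": "found"},
                "ottawa": {"centretown": "found"}},
    "quebec": {"montreal": {"plateau": "found"}},
}

def locate(address, register=registry):
    """Resolve a hierarchical address one local register at a time."""
    head, *rest = address
    entry = register.get(head)
    if entry is None or not rest:
        return entry
    return locate(rest, entry)

print(locate(["ontario", "toronto", "downtown"]))  # -> "found"
```

New information is assimilated by adding an entry to a single local register, without disturbing the rest of the hierarchy.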
Space may often be regarded as simply adding another dimension to the one of
time when analyzing a static record. Most of time series analysis can be carried
over into map-like problems since the directionality of time has not been exploited.
We could just as easily have run time backwards and obtained the same results.
This applies to smoothing and spectral densities for example. However, even in
one dimension, spatial problems are often two-directional while time is one-directional.39 But even this is a difference developed by the physical sciences in
which the future state of a system is determined (or constrained within the limits
of the precision of explanation) by processes defined from interrelations of past
events. In the social sciences this need not happen. Men may attempt to estimate
the future from the past and then base their present decisions and actions on this
prediction, so raising the possibility of rendering their initial forecast incorrect.
In essence there is no difference in directionality between time and space in this
viewpoint, at least for contiguous event interaction.
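That spectral estimates ignore the arrow of time is easy to verify: reversing a series changes only the phase of its Fourier transform, not the power at each frequency. A quick check of our own, using simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=256)

def periodogram(series):
    """Crude power-spectrum estimate from the FFT of the centred series."""
    return np.abs(np.fft.rfft(series - series.mean())) ** 2 / len(series)

# Running "time" backwards leaves the spectrum untouched.
print(np.allclose(periodogram(x), periodogram(x[::-1])))  # True
```

The same symmetry holds for any smoothing by a symmetric moving average, which is why such tools carry over unchanged to maps.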
There are several possible ways of viewing the future in evolutionary studies.
The most obvious is to extrapolate the trends in spatial distributions, with or with-
out the use of local operators. In this approach we consider only the results of
interacting processes, thus saying that, although evolution could possibly be
described by a set of equations, history has itself solved these equations and we
need only look at the trend of these solutions to predict the future.40
Either buried within these trends or excluded from the analysis is the possibility
of decision makers in the second period making such a forecast for the third and
then making their locational decisions on the basis of this prediction which is thus
never realized. Clearly there is the possibility of even more clever decision makers
who regard this one-shot feedback decision as itself a forecast and require two or
more feedbacks before finally deciding. There is no a priori way of knowing what
level of complexity is required, so that computational models of economic location
using one-directional time have been questioned by Koopmans and Beckmann.41
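The feedback can be caricatured in a few lines. In this sketch, which is entirely our own construction rather than anything in Koopmans and Beckmann, actors extrapolate demand at a location, their resulting in-migration congests it, and the forecast is thereby falsified:

```python
def naive_forecast(history):
    """Linear extrapolation of the last observed change."""
    return history[-1] + (history[-1] - history[-2])

demand = [10.0, 12.0]
forecast = naive_forecast(demand)      # actors expect 14.0
entrants = forecast - demand[-1]       # newcomers drawn by the forecast
realized = forecast - 0.8 * entrants   # congestion erodes the expected gain
demand.append(realized)
print(forecast, realized)              # the forecast is never realized
```

Cleverer actors who anticipate this correction generate a further round of feedback, and nothing in the data fixes where the regress stops.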

CONCLUSION

The extent of use of various computational models to date has been in inverse
relation to their theoretical requirements. The lack of formal theorizing is the major
bottleneck to progress and it is difficult to foresee how this situation will change
in the near future. The welter of specialized fields which cannot really be justified
in terms of their content increases. The education of the geographer consists of
exposure to these fields at the phenomenological level rather than at the logical
and computational. The noble aim of studying the integration of phenomena in
area is abandoned or becomes an exercise in poetic intuition rather than rigorous
analysis. But spatial association is present in all fields; a Markov chain can be
applied to mass wasting, river joining, retail shopping, or changing climates;42
diffusion occurs for sediments, water vapour, money, and cultural traits.43 Were
the features common to the various specialized branches emphasized and a desire
promoted to link them logically, then the need for mathematical and statistical
models would be apparent to all.
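For instance, the same two-state chain could stand for wet and dry climatic spells, joining and non-joining stream links, or patronized and bypassed stores; only the labels change. The transition probabilities below are invented for illustration.

```python
import numpy as np

P = np.array([[0.7, 0.3],   # wet -> wet, wet -> dry
              [0.4, 0.6]])  # dry -> wet, dry -> dry

# Long-run proportions: the left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(pi.round(3))  # about [0.571, 0.429]: "wet" 4/7 of the time
```

One piece of mathematics thus serves mass wasting, river joining, retail shopping, and climatic change alike, which is the point of the argument above.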
It is hazardous to guess about the impact of quantitative methods on geography
fifty or a hundred years from now. Development so far suggests that the future
course will be governed more by technological change than by an optimal search for
scientific advances. Computers have already had some impact and are likely to
loom much larger. Earth-looking satellites, returning locationally specified data in
all bands of the electromagnetic spectrum at considerably improved resolution,
will make heavy demands on data-processing capabilities. Automated pattern
recognition will be promoted. Much of the “world knowledge” function of geo-
graphy will be supplanted by on-line viewing of any desired part of the earth. A
good deal of the decision making for allocating resources will also be programmed,
using this flow of information. In all of this the quantitative geographer will play
an important role and be subverted from considering fundamental issues in the
process.

REFERENCES

1. LUKERMANN, F., “The ‘calcul des probabilités’ and the école française de géographie,” Can. Geog., 9 (1965), 128-38; “The Concept of Location in Classical Geography,” Annals, Assoc. Am. Geog., 51, 2 (1961), 194-210.
2. ANONYMOUS, Geography, I, 3 (1964), 15.
3. BUNGE, W., “Patterns of Location,” M.I.C. of M.G. Discussion Paper 3, 1964.
4. HUDSON, J. C., “An Algebraic Relation between the Lösch and Christaller Central Place Networks,” Professional Geog., XIX, 3 (1967), 133-35.
5. BERRY, B. J. L. and W. L. GARRISON, “A Note on Central Place Theory and the Range of a Good,” Econ. Geog., 34, 4 (1958), 304-11.
6. KRUMBEIN, W. C., “Confidence Intervals on Low-Order Polynomial Trend Surfaces,” J. Geophys. Research, 68, 20 (1963); “Classification of Map Surfaces Based on the Structure of Polynomial and Fourier Coefficient Matrices,” Colloquium on Classification Procedures: Computer Applications in the Earth Sciences (University of Kansas, 1966), pp. 12-18; CHORLEY, R. J. and P. HAGGETT, “Trend-Surface Mapping in Geographical Research,” Inst. Brit. Geog. (1965), 47-67.
7. MERRIAM, D. F. and J. W. HARBAUGH, “Computer Helps Map Oil Structures,” Oil and Gas J. (Nov. 1963), 2-6.
8. SEMPLE, R. K., “A quantitative separation and analysis of spatial trends in the viability of small urban centres in Southern Ontario,” M.A. thesis, Dept. of Geography, University of Toronto, 1966.
9. BERRY, B. J. L., Essays on Commodity Flows and the Spatial Structure of the Indian Economy. Dept. of Geography, University of Chicago, Research Paper No. 111, 1966; BERRY, B. J. L., et al., Comparative Studies of Central Place Systems. Final Report, Geography Branch, O.N.R., 1962; “Functional Economic Areas and Consolidated Urban Regions of the United States.” Final report to Social Science Research Council, 1967, unpublished.
10. KUTZBACH, J. E., “Empirical Eigenvectors of Surface Pressure, Temperature and Precipitation over North America,” Paper, N.S.F. Seminar on the Use of Quantitative Methods in Physical Geography, University of Iowa, 1967.
11. CASETTI, E., Multiple Discriminant Functions. Tech. Report 11, O.N.R. Geography Branch, 1964; Classificatory and Regional Analysis by Discriminant Iterations. Tech. Report 12, O.N.R. Geography Branch, Northwestern University (n.d.); RAY, D. M. and B. J. L. BERRY, “Multivariate Socio-Economic Regionalization: A Pilot Study in Central Canada,” Papers on Regional Statistical Studies, Canadian Political Science Assoc. Conf. on Statistics, Charlottetown, P.E.I., 1964, pp. 75-122; KING, L. J., “Cross-Sectional Analysis of Canadian Urban Dimensions: 1951 and 1961,” Can. Geog., X, 4 (1966), 205-24.
12. BERRY, B. J. L. and W. L. GARRISON, “The Functional Bases of the Central Place Hierarchy,” Econ. Geog., 34 (1958).
13. KING, L. J., “The Functional Role of Small Towns in Canterbury,” Proc. 3rd N.Z. Geog. Conf. (Palmerston North, 1961).
14. WARNTZ, W., “A New Map of the Surface of Population Potentials for the United States, 1960,” Geog. Rev., 54 (1964), 170-84.
15. MATERN, BERTIL, “Spatial Variation: Stochastic models and their application to some problems in forest surveys and other sampling investigations,” Meddelanden från Statens Skogsforskningsinstitut, 49, 5 (1960), 1-144.
16. TOBLER, W. R., “Spectral Analysis of Spatial Series,” in Fourth Annual Conference on Urban Planning Information Systems and Programs, Berkeley, 1966 (unpublished); PRESTON, F. W., “Two-Dimensional Power Spectra for Classification of Land Forms,” Colloquium on Classification Procedures: Computer Applications in the Earth Sciences (University of Kansas, 1966), pp. 64-69.
17. RAYNER, J. W., “Correlation between Surfaces by Spectral Methods,” in Colloquium on Spatial Series Analysis: Computer Applications in the Earth Sciences (University of Kansas, forthcoming). Quoted in WHITTLE, P., “Topographic correlation, power-law covariance functions and diffusion,” Biometrika, 49 (1962), 305-14.
18. CURRY, L., “Central Places in the Random Spatial Economy,” to be published in the J. Regional Sci.
19. LONGUET-HIGGINS, M. S., “The statistical analysis of a random, moving surface,” Philos. Trans. Roy. Soc., London, Vol. 249, A.966, 321-87.
20. STARR, V. P., Studies of the Atmospheric General Circulation IV: Final Report, Planetary Circulations Project, M.I.T., 1963.
21. BROWN, R. G., Smoothing, Forecasting and Prediction of Discrete Time Series (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1963).
22. SWITZER, P., “Pattern Reconstruction: Introduction and Summary,” paper presented at First Seminar on Quantitative Geography, Ann Arbor, 1966 (mimeographed).
23. CASETTI, E., “Analysis of Spatial Association by Trigonometric Polynomials,” Can. Geog., X, 4 (1966), 199-204; TOBLER, W. R., “Numerical Map Generalisation and Notes on the Analysis of Geographical Distributions,” M.I.C. of M.G. Discussion Paper 8, 1966; JENKS, G. F., “Visualizing Statistical Distributions and the Generalizing Process,” unpub. paper presented at A.A.G. annual meeting, Toronto, 1966.
24. NYSTUEN, J. D., “Effects of Boundary Shape and the Concept of Local Convexity,” Michigan Inter-University Community of Mathematical Geographers, Discussion Paper No. 10, 1966; PERKAL, J., “On the Length of Empirical Curves and an Attempt at Objective Generalization,” English trans. in M.I.C. of M.G., 10 (1966).
25. MORSE, P. M., “Dynamics of Operational Systems: Markov and Queuing Processes,” in Progress in Operations Research, Vol. 2 (New York: John Wiley, 1964).
26. CURRY, L., “The Random Spatial Economy: An Exploration in Settlement Theory,” Annals Assoc. Am. Geog., 54 (1964), 138-46.
27. BROWN, R. G., Smoothing, Forecasting and Prediction of Discrete Time Series.
28. YULE, G. UDNY, “Why do we sometimes get nonsense-correlations between time-series? A study in sampling and the nature of time-series,” J. Roy. Stat. Soc., Vol. LXXXIX (Jan. 1926), 1-65.
29. THOMAS, E. N. and D. L. ANDERSON, “Additional Comments on Weighting Values in Correlation Analysis of Areal Data,” Annals Assoc. Am. Geog., 55 (1965), 492-505.
30. CURRY, L., “A Note on Spatial Association,” Professional Geog., 18 (1966), 97.
31. DACEY, M. F., A Review on Measures of Contiguity for Two and K-Color Maps. Tech. Rep. 2, Spatial Diffusion Study, O.N.R. Geog. Branch (Northwestern University, 1965); GEARY, R. C., “The Contiguity Ratio and Statistical Mapping,” The Incorporated Statistician, 5, 115-45.
32. ROBINSON, A. H., “The Necessity of Weighting Values in Correlation Analysis of Areal Data,” Annals Assoc. Am. Geog., 46 (1956), 233-36.
33. DACEY, M. F., “A compound probability law for a pattern more dispersed than random and with areal inhomogeneity,” Econ. Geog., 42 (1966), 172-79; “Modified Poisson probability law for point pattern more regular than random,” Annals Assoc. Am. Geog., 54 (1964), 559-65; “Two dimensional random point patterns: a review and an interpretation,” Papers and Proceedings, R.S.A., 13 (1964), 41-58.
34. See, for example, Proc. Int. Symp. on Classical and Contagious Discrete Distributions (Calcutta: Statistical Publishing Society).
35. HARVEY, D. W., “Geographical Processes and the Analysis of Point Patterns: Testing Models of Diffusion by Quadrat Sampling,” I.B.G. Trans., 40 (1966), 81-96; GETIS, A., “Occupancy Theory and Map Pattern Analysis,” Dept. of Geography, University of Bristol, Seminar Paper Series A, No. 1.
36. MEDVEDKOV, YU. V., “The Regular Component in Settlement Patterns Shown on a Map,” Soviet Geog., 8 (1967), 150-68.
37. MARBLE, D. F., “Simple Markovian model of trip structure in a metropolitan region,” Proc. Western Section Regional Science Assoc., Tempe, Ariz., 1964.
38. Quoted in GREENE, GRAHAM, It’s a Battlefield (London: Heinemann Ltd., 1934).
39. HEINE, V., “Models for Two-Dimensional Stationary Stochastic Processes,” Biometrika, 42 (1955), 170-78; WHITTLE, P., “Stochastic Processes in Several Dimensions,” Bull. de l’Inst. int. de stat., XL, 2 (Ottawa, 1963), 974-94.
40. WIENER, N., Extrapolation, Interpolation, and Smoothing of Stationary Time Series (New York: John Wiley, 1949).
41. KOOPMANS, T. C. and M. BECKMANN, “Assignment Problems and the Location of Economic Activities,” Econometrica, 25 (1957), 53-76.
42. CULLING, W. E. H., “Soil Creep and the Development of Hillside Slopes,” J. Geology, 71, 2 (1963), 127-61; SHREVE, R. L., “Infinite Topologically Random Channel Networks,” J. Geology, 75, 2 (1967); GOLLEDGE, R. G., “A Conceptual Framework of a Market Decision Process,” Dept. of Geog., Univ. of Iowa, Discussion Paper Series, No. 4; CURRY, L., “Climatic change as a random series,” Annals Assoc. Am. Geog., 52, 1 (March 1962), 21-31.
43. BROWN, L. A., “Towards a Conceptual and Operational Framework of Spatial Diffusion,” unpub. paper, 1967.
