
Scale spaces

The goal of this exercise is for students to get an opportunity to familiarize themselves with the capabilities and shortcomings of computational techniques in design. By necessity we are going to reduce the actual technical component of this exercise to a minimum. This minimum, however, will help us explore some themes and tensions that emerge when computational procedures interfere with the design process.

The problem of rigour


Design can be rigorous but often follows its own internal logic. Rigour and logic in the strict sense used in mathematics often clash with aspects of design as a culturally conditioned practice. For students from non-design backgrounds this is often a point of confusion and frustration, as a rigorous design process may seem arbitrary from the outside.
However, computational media often place higher requirements for rigour on any design process. Both the ontology and the methodology of the design problem have to take these restrictions into account when computation becomes an integral part of design thinking. Whether, or under what conditions, these added requirements impoverish or enrich design is open to debate and probably undecidable.
One particular approach to computational design, one that fetishizes the algorithm, is more susceptible to this criticism. In this approach the whole design process is described as a procedural and often, but not always, deterministic sequence that maps inputs to outcomes.
One of the earliest proponents of similar methods in music was Iannis Xenakis [see for example his book Formalized Music], a mathematician who worked for Le Corbusier before dedicating himself to musical composition. The transfer of methods from mathematics and logic to musical composition did indeed help Xenakis construct musical pieces that challenged both the listeners and the musical norms of their time. However, Xenakis also realized that the mathematical procedure was only a seed; he could have written the actual pieces without the mathematics. The mathematical rigour helped him discover a new aesthetic, but it was his ability to reflect on it and develop it that turned it into music.

Music scores by I. Xenakis.

Mathematics as an observation tool


In this exercise we are going to use a mathematical formulation as an observation tool. It is not a method for solving a problem [we haven't even set up a specific conventional problem to begin with] but a way to generate opportunities and project questions, both of which require human agency.

Reductionism and emergence


The problem with many numerical approaches to design is that they rely on a reductionism of the design context that operates on many levels: from the reductionism inherent in the sciences, models and simulations related to design problems, up to the thought processes and ontologies imposed by computational environments and programming patterns. This is partly the result of the attempts of mathematics and computer science to conceal complexity under layers of abstraction.
Even at the scientific level, reductionism, although universally accepted as part of the scientific method, becomes problematic when we try to gain insight into systems at mesoscopic scales. This is because secondary phenomena and structures can and do emerge at intermediate scales, and the transition between scales is one of the greatest problems in modern physics.
In 1972 the Nobel Laureate physicist Philip Warren Anderson wrote a paper named "More Is Different" discussing exactly this problem in current scientific thinking.

Anderson does not want to challenge reductionism and explicitly states so. It is a
tremendously useful principle that has almost universal acceptance. But he does
want to understand the limits of reductionism in making sense of a world that is
messy and complex.
These limitations become even more apparent in the case of design and the
production of cultural artefacts.

The problem with scale


The exercise for this introduction to computation has been set up so that the tension between reductionism and its emergent side effects [which often become more important than the reduced model] becomes apparent and becomes the subject of design.
The premise is very simple: you can perfectly control and understand the relation of two units in isolation. However, as this relation propagates throughout space, secondary relationships appear at multiple scales of association. These secondary, emergent relationships invite you to rethink the ramifications of the original unit configuration. The complications emerge without even having to bring into the problem factors such as an actual context [topography, culture, environment, program etc.] but simply by considering the internal logic of a relation and its propagation. The initial configuration is its own context, but even with the most rigorous and controlled propagation logic the spatial effects are unpredictable.
To explore these tensions we will be using a simple computational solution. However, we need to keep two things in mind. First, the computational solution allows you to explore only a small portion of what is in effect an infinite space of solutions. Second, it can elucidate some derived relations, but as it is agnostic towards the context of the exercise and the meaning of the geometric elements, it will still be up to the individual designer to interpret and exploit the discoveries in the generated system.
Signals and scale spaces
Throughout the exercise we will be using concepts from computation and signal analysis that help clarify intuitive approaches to similar problems. This is a demonstration that, in addition to useful tools, computation perhaps offers interesting concepts that can help elucidate and render more rigorous, and at times more generalizable, our habitual modes of thought. One such concept is the idea of a scale space.
In architectural representation scale is often related to a discrete set of representations that contain various amounts of information [1:100, 1:1000, 1:5; urban, plan, detail etc.]. This is a discrete scale space. What determines the scale is not the ratio [e.g. 1:100] but the information density. A detail drawing expresses a higher information density than a master plan.
In signal analysis this idea of a scale space becomes both exact and continuous. The environment has a nearly infinite, or at least enormous, information density. We can sample it at different resolutions to extract a signal [an image, a piece of sound, an MRI scan etc.]. These signals that describe portions of the environment can be down-sampled in order to focus on large scale structures by reducing the amount of noise and detail. As we'll see, this method can discover patterns that sometimes match our intuition and at other times challenge our visual processing. Our visual intuition is prejudiced and preconditioned [or trained and intelligent, depending on how one looks at it], but the question would be: are the patterns that we discover when we look at a map representing a 2 by 2 kilometre area [as seen by our eyes, which usually interpret associations at smaller scales] more reliable than the quasi-objective associations that the signal analysis discovers?
Another concept parallel to the scale space is the distinction between texture and structure. This dipole appears in computer vision as a vague and generalizable numerical analogue to the aesthetic categories of figure and ground. What is interesting with the texture/structure dichotomy is that it is again scale dependent. What looks like structure up close might look like a pattern that constitutes the texture of a larger structure when seen at another scale.

Functions as maps
One of the components of this exercise will give us an opportunity to discuss a couple more concepts at the intersection of mathematics, computation and material culture. We are going to employ simple functions in order to generate a propagating pattern that will vaguely represent an urban plan. A mathematical function, in its most general sense, is a map that associates two sets. In our case we will build functions that map points from one space [an initial distribution] to another [a deformed and warped variable density distribution]. Mathematical functions bring along certain artefacts that become important aesthetic features and manifest themselves in many different ways in material culture. This is just one example of convergence between craft and computation.
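For instance [a simple illustration, not part of the exercise itself], the function f(x, y) = (x*x, y) maps a uniform grid of points on the unit square to a distribution that grows denser towards x = 0: equal steps in x are compressed near the origin and stretched near 1. This is exactly the kind of warped, variable density map we will construct below.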
Singularities and exceptions
A defining feature of many functions is the singularity. Loosely speaking, it is a region in a function's domain where the function itself may be indeterminate, and it usually manifests as a collapse in the mapping. Two well-known singularities are the poles of the earth when we use latitude and longitude to describe locations.

Although singularities are mathematical features, they often appear in contexts where there is no explicit reference to mathematics. One such instance is in the patterning logic of woven objects. Weaving constructs a three-dimensional object out of an orthogonal grid, and by necessity this grid will have one or more quilting points. These points, which have the mechanical role of keeping the whole artefact together, appear in similar locations as certain discretization algorithms place singularities on smooth surfaces.

The placement of these singularities, where the logic of a pattern collapses, can be a design problem. Often such singularities are incorporated into the logic of the artefact and endowed with meaning.

Looking at the jade mask from ancient China above, there are two singularities that form conveniently at the tips of the mouth, allowing a rectangular grid to cover a non-rectilinear domain while remaining parallel to the mouth and chin. This is very similar to a mathematical function that maps the oval shape of the face to a rectangular region and needs this extra feature, the singularity, to resolve the variable stretching.
However, there is a secondary feature here that is not part of the mathematical solution: the two eyes. Two circles placed against the requirements of the mapping, two foreign objects shattering the logic of the pattern. They make sense because of factors external to the patterning logic [the interpretation of the shape as a representation of a face]. These two circles are exceptions within the pattern. They are not points where the logic collapses but where it is ignored, because this mask is a cultural artefact fulfilling multiple requirements, including a symbolic function.
When dealing with real world problems, exceptions have to be introduced quite often. These are imposed requirements, seemingly unrelated and external to the logic of the mathematical description but essential nonetheless. They do introduce inconsistencies and make code inelegant. They are usually the points where the elegance of the computational solution reaches its limits and comes into conflict with the externalities of its own logic.

The aesthetics of the digital

The spaceships from the game Elite.

Sculptures by artist Xavier Veilhan, ArtPress 2012.

Even when not explicitly referenced or even used as a tool, the digital manifests itself in many aspects of contemporary culture, simply because our aesthetic experience is heavily mediated by digital media. The digital has found an expression in material and visual culture, and the artifacts of digitization and computation can become manifest in various ways, subtle or not. The sculptures above by Xavier Veilhan use exactly the shortcomings of digital representation [the selective reduction of information content, and the disruption or degradation of visual information as a signal] as an aesthetic statement.

Yi HwanKwan, Jangdockdae, 2008

This sculpture by artist Yi HwanKwan also indirectly references digital methods. The image is not squashed. The artist creates naturalistic sculptures of ordinary people but applies a non-uniform scaling to their geometry. This disrupts our normal visual processing when we face these artefacts in a physical setting. The effect is a sense of loss of spatial proportions, which can be disorientating and disturbing. The transformation that enables this effect, though, the idea of separating scaling along different axes, is more probable in a society saturated with digital media and the ways they manipulate visual information [though anamorphic projections in the past came close to similar effects].
In a sense the digital, characterized by seriality and the uniform treatment of the environment as a stream of interchangeable signals, generates its own visual culture. These artworks acquire meaning within a culture already used to a digitally mediated appropriation of the environment, even if no computer were used in their conception and fabrication process.

The configuration / analysis interface

For this exercise students will work within a computational environment that has already been set up for them. In a sense this procedural definition is going to be an exploration, observation and reflection tool.
The interface is divided into two major groups of components: [configuration/generation] and [analysis/observation].
Students only interact with the red components. Most of them are sliders that enable them to explore the parameter space of the solution. However, the central red script component will require the input of formulas, which will allow greater freedom in exploring the various spatial configurations and distributions of units.
The skeleton structure of the above component is the following:
1. Configuration / Generation
   a. One
      i. Pair configuration
      ii. Pair visualization
   b. Many
      i. Instantiation
      ii. Re-Mapping
      iii. Transformation
2. Analysis / Observation
   a. Topography
   b. Figure / ground
   c. Density map
   d. Scale spaces and the clustering continuum

The most critical moments are:


a. The [Pair configuration], which determines the unit relation.
b. The [Re-Mapping], where students will have to write the formulas that will define the actual distribution of units.
c. The [Scale spaces and the clustering continuum], where the concept of the unit is relativized depending on the scale at which one looks at the generated system. By that point the initial distinction between part and whole becomes problematic.

Configuration / Generation

This region contains the controls that enable the configuration of the pair relation
between the two units and the generation of a first map that is created by unit
instantiation.
One
This region of controls focuses on the configuration of a single pair and the relative positioning of its two components.
Pair configuration

The red sliders here control the relative positioning and rotation of the two units. There are three translations for the shifts along the x, y and z axes, as well as two rotations [in degrees], one for each unit.

Pair visualization

This is a component for the visualization of the pair. Its purpose is simply to make it easier to see the pair changing, as well as the complete map. You can define a point around which to visualize the pair, as well as a scaling factor.
Many

This group controls the generation of the complete map through the instantiation of the previously defined pair.
Instantiation

This component generates an initial set of copies of the original pair. This initial grid will be radically reshaped during the remapping phase.
You can control the number of units [countx and county] along the initial x and y axes, the distance between successive units [dx and dy], as well as a unit omission pattern [mod1 and mod2] that generates periodic gaps in the grid.
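As an illustration only, the following sketch shows the kind of loop the component performs internally. The exact omission rule is not documented here, so the mod test below is one plausible reading, and PlacePairAt is a hypothetical helper standing in for the actual copying of the pair:

// place countx-by-county copies of the pair, spaced dx and dy apart,
// skipping units periodically to create gaps [assumed rule]
for (int i = 0; i < countx; i++)
    for (int j = 0; j < county; j++)
    {
        if (mod1 > 0 && mod2 > 0 && i % mod1 == 0 && j % mod2 == 0)
            continue; // omitted unit: a periodic gap in the grid
        PlacePairAt(i * dx, j * dy); // hypothetical helper
    }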

Examples of different combinations of the input parameters.

Re-Mapping

This is the component where students have the most control over the layout of the map.
Until now interactions with the model were mediated by user interface elements such as sliders. Here you can directly control a small portion of the code that will generate the map using simple mathematical formulas. This control is named re-mapping because its main purpose is to reposition and rotate the instantiated elements from the previous component. It achieves this using mathematical functions that map one space to another. The mathematical meaning of a map is that of establishing a correspondence between two sets.
To edit this component you need to double-click on it. This will open up a text editor within which you need to assign values to the outputs X, Y, Z and R, corresponding to the final xyz coordinates and rotation of each pair. These values should be a rearrangement, through mathematical formulas, of the original coordinates x, y, z. In addition to x, y, z we also provide the so-called normalized coordinates nx and ny. These are numbers that start from [0,0] at the bottom left corner of the original grid and reach [1,1] at the top right corner. The midpoint of the grid would have normalized coordinates of [0.5, 0.5].

It is possible to express formal concepts such as periodicity, densification, acceleration, randomness and others using simple formulas. The exact outcome is difficult to predict even for experienced users, and in that sense the model becomes an observation tool.
The simplest map is the identity map, where nothing changes. Each point's coordinates [x,y,z] are passed unmodified to its final coordinates [X,Y,Z] and there is no rotation added to the units at that point [R=0]:
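In the editor this reads [a minimal sketch, using only the documented variables]:

X = x;
Y = y;
Z = z;
R = 0;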

Adding some acceleration along the x axis can be achieved by adding more displacement along x for each point, depending on the square of its normalized x coordinate. In the formula below Y, Z and R remain unchanged; we only add a displacement along the x axis for each unit. The factor [500*nx*nx] means that the leftmost units [nx=0] will remain at their original positions but the rightmost units [where nx=1] will be displaced by 500 feet along the x axis. Units in the middle [e.g. nx=0.5] will be displaced by [0.5*0.5*500 = 125 feet]. Because we are using the square of nx, the rate of displacement along x is accelerating [try different powers like nx*500, or nx*nx*nx*500]:
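As a sketch:

X = x + 500 * nx * nx;  // 0 feet at the left edge, 500 feet at the right
Y = y;
Z = z;
R = 0;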

We can add a periodic displacement along the x axis by using the cosine function of
the x coordinate:
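For example [the amplitude of 50 and the frequency of 0.05 below are illustrative, not prescribed]:

X = x + 50 * Math.Cos(0.05 * x);  // a wave of 50 units, repeating roughly every 125 units
Y = y;
Z = z;
R = 0;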

And we can rotate the units so that the leftmost units have a rotation of 0 and the rightmost a rotation of 360 by associating the rotation angle R with the normalized x coordinate [R=nx*360;]. A value of 360 means that we get a full rotation across the x axis. Here we also modify the topography of the map by changing the elevation Z along the y axis:
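A sketch [the elevation factor of 100 is illustrative]:

X = x;
Y = y;
Z = 100 * ny;  // the terrain rises along the y axis
R = nx * 360;  // from 0 degrees on the left to a full turn on the right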

We can also add a degree of randomness to the rotation angle that itself varies along the x axis [with no randomness on the left and maximal randomness of 100 degrees to the right]:
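A sketch, using the rnd variable described below:

X = x;
Y = y;
Z = z;
R = rnd * 100 * nx;  // a random angle of up to 100 degrees, growing toward the right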

With clever use of formulas you can create not only variable density rectangular
grids but also multipolar configurations with singularities:
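One possible sketch of such a configuration pulls every unit towards two poles. The pole positions, the strength factor of 100, and the use of local variables and Math.Atan2 are all assumptions about what the script component allows, not prescriptions:

// two attractor poles at normalized positions [0.25, 0.5] and [0.75, 0.5]
double ax = nx - 0.25, ay = ny - 0.5;
double bx = nx - 0.75, by = ny - 0.5;
double ra = Math.Sqrt(ax * ax + ay * ay) + 0.05;  // +0.05 avoids division by zero
double rb = Math.Sqrt(bx * bx + by * by) + 0.05;
X = x - 100 * ax / ra - 100 * bx / rb;  // pull each unit towards both poles
Y = y - 100 * ay / ra - 100 * by / rb;
Z = z;
R = Math.Atan2(ay, ax) * 180.0 / Math.PI;  // orient units around the first pole

Near each pole the direction of the pull becomes indeterminate; that is exactly where the singularities appear.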

The code editor window


Within the code editing window we are writing C# code. We won't be using the full set of structures and capabilities of the language; rather, we will limit ourselves to writing simple mathematical formulas. We need to assign a value to each one of the X, Y, Z and R variables using the assignment operator =. The equality sign here means "assign to", so it is not the same as an equation. For example:
X=5.0;
means "assign the value 5.0 to the variable X".

! All statements must terminate with a semicolon [;]


X, Y, Z and R are output variables and represent each one of the units in the generated grid. The script you are writing here will be executed many times, once for every grid point. The input variables [x,y,z] and [nx,ny] will be different each time, because each grid point has a different initial location [x,y,z] and different normalized coordinates [nx,ny].
Comments
Within the code editor you can deactivate pieces of code either by placing two forward slashes // at the start of a line or by enclosing a block of code in a /* */ pair. In this way you can also add text comments to your code. The green text is ignored and is only there for you to keep notes or maintain previous trials in an inactive form. You can reactivate pieces of code by removing the /* */ pair around them. This is useful in case you want to keep track of the different formulas you try. Just make a copy and comment out the last one so that you don't lose previous results that you found interesting.
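For example:

// this whole line is ignored
X = x + 100 * nx;    // an active formula with a note at the end
/* X = x + 500 * nx * nx;
   an earlier trial, kept for reference but deactivated */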

Mathematical functions and operators


When writing formulas, all familiar rules from algebra still hold. Basic algebraic operations are expressed through the four arithmetic operators:
+ (plus)
- (minus)
/ (division)
* (multiplication)
In addition, a set of the most commonly used mathematical functions is also available. You can find an exhaustive list at
http://msdn.microsoft.com/en-us/library/system.math(v=vs.110).aspx

To enter the cosine function cos(x), for example, you need to write it as Math.Cos(x). All mathematical functions are preceded by the Math namespace.
The most commonly used functions:

Function name      Traditional form    C# equivalent
Cosine             cos(x)              Math.Cos(x)
Sine               sin(x)              Math.Sin(x)
Tangent            tan(x)              Math.Tan(x)
Absolute value     |x|                 Math.Abs(x)
Exponential        e^x                 Math.Exp(x)
Logarithm          log(x)              Math.Log(x)
Power              x^y                 Math.Pow(x,y)
ArcTangent         arctan(x)           Math.Atan(x)
Round a number     round(x)            Math.Round(x)

In addition, we can use the pseudo-random variable rnd to represent a random number between 0.0 and 1.0. For example the formula:
X=rnd*200;
will assign a different random number between 0 and 200 to each X coordinate.
Finally, you can create branches in formulas that are activated only if some condition is met:
if (nx > 0.5) Y = y + 100.0;
else Y = y - 100.0;
The above two lines of code will shift all points with nx larger than 0.5 [the right half of the grid] by 100 units up, while the points on the left half will be shifted by 100 units down.
The pair if() / else is called a conditional statement and enables the introduction of exceptions into the logic of the formulas.
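Conditions combine naturally with the other ingredients above; for instance [a sketch, with an illustrative 90-degree cap]:

X = x;
Y = y;
Z = z;
if (ny > 0.5) R = rnd * 90.0;  // jitter the rotation of the top half
else R = 0.0;                  // keep the bottom half aligned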
Transformation

You don't need to interact with any of the components in this group. They simply convert the formulas you entered into geometric transformations and apply them to copies of the original pair.

Analysis / Observation

This group of components will help you observe secondary effects in the generated
map as well as visually explore and evaluate the outcomes.
Topography

This component generates a continuous mesh from the corners of the instantiated blocks. It uses a common technique for surface reconstruction called [Delaunay triangulation]. The local, human scale elevation variations will be determined by the relation between the units of each pair of houses, while the large scale elevation variation will be determined by the Z component of the remapping formula.

Figure / ground

This component is fully automated. It reads your geometry and generates a high resolution image. We need this transition step from a vector representation of geometry [points, lines and surfaces] to a raster [an image] because it is easier to estimate properties such as density using a pixel grid.
The only input required is to set the path where the generated image should be saved. You can do this by right-clicking on the path component [in red] and selecting a file in any folder of your hard drive. This file will not be overwritten, but an image of the map will be saved in the same folder as the selected file. You also need to double-click on the image component [the large area at the top right] and select the path of the saved image so that it can display it.
Density map

This component re-processes the previously generated image by applying a technique called [downsampling]. In effect it reduces the resolution of the image by averaging patches of information. It is the same thing you achieve by squinting your eyes in order to discern large scale structures and ignore detail in your visual field. In signal analysis this is a very common technique that enables us to traverse the scale space of the signal. The more we downsample [more squinting], the fewer details can be seen and the more we can focus on larger scale structures.
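To make the idea concrete, here is a minimal sketch of downsampling by patch averaging, assuming a grayscale image stored as a 2D array of brightness values [this is not the component's actual code]:

// shrink an image by a factor k, replacing each k-by-k patch by its average
static double[,] Downsample(double[,] img, int k)
{
    int w = img.GetLength(0) / k, h = img.GetLength(1) / k;
    var result = new double[w, h];
    for (int i = 0; i < w; i++)
        for (int j = 0; j < h; j++)
        {
            double sum = 0.0;
            for (int di = 0; di < k; di++)
                for (int dj = 0; dj < k; dj++)
                    sum += img[i * k + di, j * k + dj];
            result[i, j] = sum / (k * k); // average brightness of the patch
        }
    return result;
}

The larger the factor k, the more detail is averaged away.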
The red slider here enables you to control the amount of downsampling [blurriness] in the generated image. In this way we can see a map of the average density in larger patches of the original image. In addition, as will become evident in the next component, we will be able to precisely distinguish different clustering behaviours of the repositioned units.

Scale spaces and the clustering continuum

In this component we extract contours from the previous downsampled map that represent different clustering behaviours. The concept of a cluster is not fixed: it depends on the scale we are looking at [the downsampling] and the threshold of proximity we arbitrarily set [the red slider here].
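In the same spirit as the previous sketch [again an illustration, not the component's code], a cluster at a given scale can be read as a region of the downsampled density map above a chosen threshold:

// mark the pixels of the downsampled map that belong to a cluster
static bool[,] Clusters(double[,] density, double threshold)
{
    int w = density.GetLength(0), h = density.GetLength(1);
    var mask = new bool[w, h];
    for (int i = 0; i < w; i++)
        for (int j = 0; j < h; j++)
            mask[i, j] = density[i, j] > threshold; // true = inside a cluster
    return mask;
}

Changing either the downsampling factor or the threshold changes which units read as one cluster, which is why the distinction between part and whole becomes scale dependent.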
