MAY/JUNE 2004 · FRONTIERS OF SIMULATION, PART II · VOLUME 6, NUMBER 3
Computing in Science & Engineering is a peer-reviewed, joint publication of the IEEE Computer Society and the American Institute of Physics
http://cise.aip.org · www.computer.org/cise
May/June 2004
Simulated Bite
Marks, p. 4
Multisensory
Perception, p. 61
Biological Aging
and Speciation, p. 72
ALSO
Frontiers of Simulation
PART II
Frontiers of Simulation
Statement of Purpose
Computing in Science & Engineering
aims to support and promote the
emerging discipline of computational
science and engineering and to
foster the use of computers and
computational techniques in scientific
research and education. Every issue
contains broad-interest theme articles,
departments, news reports, and
editorial comment. Collateral materials
such as source code are made available
electronically over the Internet. The
intended audience comprises physical
scientists, engineers, mathematicians,
and others who would benefit from
computational methodologies.
All theme and feature articles in
CiSE are peer-reviewed.
Copublished by the IEEE
Computer Society and the
American Institute of Physics
FRONTIERS OF SIMULATION, PART II
MAY/JUNE 2004 · Volume 6, Number 3
Guest Editor's Introduction:
Frontiers of Simulation, Part II
Douglass Post
16
Virtual Watersheds:
Simulating the Water Balance of the Rio Grande Basin
C.L. Winter, Everett P. Springer, Keeley Costigan, Patricia Fasel,
Sue Mniewski, and George Zyvoloski
18
Large-Scale Fluid-Structure Interaction Simulations
Rainald Löhner, Juan Cebral, Chi Yang, Joseph D. Baum, Eric Mestreau,
Charles Charman, and Daniele Pelessone
27
Simulation of Swimming Organisms: Coupling Internal
Mechanics with External Fluid Dynamics
Ricardo Cortez, Lisa Fauci, Nathaniel Cowen, and Robert Dillon
38
Two- and Three-Dimensional
Asteroid Impact Simulations
Galen Gisler, Robert Weaver, Charles Mader, and Michael Gittings
46
Cover illustration: Dirk Hagner
President:
CARL K. CHANG*
Computer Science Dept.
Iowa State University
Ames, IA 50011-1040
Phone: +1 515 294 4377
Fax: +1 515 294 0258
c.chang@computer.org
President-Elect:
GERALD L. ENGEL*
Past President:
STEPHEN L. DIAMOND*
VP, Educational Activities:
MURALI VARANASI*
VP, Electronic Products and Services:
LOWELL G. JOHNSON (1ST VP)*
VP, Conferences and Tutorials:
CHRISTINA SCHOBER*
VP, Chapters Activities:
RICHARD A. KEMMERER (2ND VP)
VP, Publications:
MICHAEL R. WILLIAMS
VP, Standards Activities:
JAMES W. MOORE
VP, Technical Activities:
YERVANT ZORIAN
Secretary:
OSCAR N. GARCIA*
Treasurer:
RANGACHAR KASTURI
2003–2004 IEEE Division V Director:
GENE H. HOFFNAGLE
2003–2004 IEEE Division VIII Director:
JAMES D. ISAAK
2004 IEEE Division VIII Director-Elect:
STEPHEN L. DIAMOND*
Computer Editor in Chief:
DORIS L. CARVER
Executive Director:
DAVID W. HENNAGE
* voting member of the Board of Governors
DEPARTMENTS

2 From the Editors
Francis Sullivan
Computational Science and Pathological Science

4 News
Simulated Bite Marks
New Cloud Animation Software on the Horizon

8 Technology News & Reviews
Norman Chonacky
Stella: Growing Upward, Downward, and Outward

56 Computing Prescriptions
Eugenio Roanes-Lozano, Eugenio Roanes-Macías, and Luis M. Laita
Some Applications of Gröbner Bases

61 Visualization Corner
Jonathan C. Roberts
Visualization Equivalence for Multisensory Perception: Learning from the Visual

66 Your Homework Assignment
Dianne P. O'Leary
Fitting Exponentials: An Interest in Rates

74 Computer Simulations
Suzana Moss de Oliveira, Jorge S. Sá Martins, Paulo Murilo C. de Oliveira,
Karen Luz-Burgoa, Armando Ticona, and Thadeau J.P. Penna
The Penna Model for Biological Aging and Speciation

82 Education
Guy Ashkenazi and Ronnie Kosloff
String, Ring, Sphere: Visualizing Wavefunctions on Different Topologies

87 Scientific Programming
Glenn Downing, Paul F. Dubois, and Teresa Cottom
Data Sharing in Scientific Simulations

How to Contact CiSE, p. 17
Advertiser/Product Index, p. 37
AIP Membership Info, p. 45
Subscription Card, p. 88 a/b
Computer Society Membership Info, Inside Back Cover

www.computer.org/cise/ · http://cise.aip.org
MAY/JUNE 2004
2 Copublished by the IEEE CS and the AIP · 1521-9615/04/$20.00 © 2004 IEEE · COMPUTING IN SCIENCE & ENGINEERING
FROM THE EDITORS

COMPUTATIONAL SCIENCE AND PATHOLOGICAL SCIENCE
By Francis Sullivan, Editor in Chief

Every now and then, a peculiar kind of news story appears about some scientific topic. On first reading, it looks like startling new results or the answer to everything about some perpetually hot topic, such as the age of the universe, the origin of mankind, or the best diet for a healthy life. One characteristic these examples all share is that they fade quickly, only to be replaced by a new ultimate answer. Sometimes, rather than fading, the thrilling discovery has a second life in checkout-line tabloids. A few of these items are hoaxes, some are merely consequences of overenthusiasm about preliminary results, but many are honest mistakes carried to the point of pathology.

To be fair, let me say at the outset that computational science is not immune from this pathology. But a point I hope to make is that the widespread availability of fairly high-end computing has shortened the life span of the science pathologies that occur in computing.

The term "pathological science" goes back at least as far as Irving Langmuir's famous 1953 General Electric lecture, in which he discussed things like N-rays and ESP. He described pathological science this way:

"These are cases where there is no dishonesty involved but where people are tricked into false results by a lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking or threshold interactions. These are examples of pathological science. These are things that attracted a great deal of attention. Usually hundreds of papers have been published on them. Sometimes they have lasted for 15 or 20 years and then gradually have died away."

Langmuir also identified six features that he thought characterized pathological science:

• The maximum effect observed is produced by a causative agent of barely detectable intensity; the effect's magnitude is substantially independent of the cause.
• The effect is of a magnitude that remains close to the limit of detectability; otherwise, many measurements are necessary because of the very low significance of the results.
• Claims of great accuracy.
• Fantastic theories contrary to experience.
• Criticisms are met by ad hoc excuses thought up on the spur of the moment.
• The ratio of supporters to critics rises up to somewhere near 50 percent and then falls gradually to oblivion.

Langmuir's lecture did not put an end to pathological science. In 1966, the Soviet scientists Boris Vladimirovich Derjaguin and N.N. Fedyakin discovered a new form of water that came to be known as "polywater." It had a density higher than normal water, a viscosity 15 times that of normal water, a boiling point higher than 100 degrees Centigrade, and a freezing point lower than zero degrees. After more experiments, it turned out that these strange properties were all due to impurities in the samples. An amusing side note is that the polywater episode occurred a few years after Kurt Vonnegut's book Cat's Cradle, which imagined a form of water, and more importantly a form of ice, with strange properties. The most well-publicized pathological case in recent years is arguably the cold fusion story.

Why do these things happen? Imagine working late into the night on a new algorithm that you feel sure will be much more efficient than existing methods, but it somehow doesn't seem to work. After many hours of effort, you make a few more changes to the code, and suddenly it works amazingly well. The results begin to appear almost as soon as you hit the enter key. Next you try another case, but that example doesn't work well at
all. You go back to rerun the original wonderful case, and that doesn't work either! This is the danger point: you either find the error that made the one good case work, or you decide that there's a subtle effect here that can only be produced by doing things just so. If you choose the second path and get one more good result, you might end up believing you have an excellent method that only you know how to use. This is one way that legitimate science can descend into pathology.

Fortunately, your experiment was done with a computer rather than a complicated lab setup, which means that, in principle, others can repeat the experiment quickly and easily. And unless you're very stubborn indeed, you'll soon discover that your error was a fluke, perhaps something like branching to a routine where the correct answer was stored for testing purposes.

A final caution: to guard against becoming too complacent about the use of computing as immunization against pathological science, recall the many instances where easily generated and beautiful gratuitous graphics are used in lieu of content in computational science presentations. I don't know if this is pathological science in the old sense, but it's a symptom of something spawned by the ease of computing.
Scalable Input/Output: Achieving System Balance
edited by Daniel A. Reed
A summary of the major research results from the Scalable Input/Output Initiative, exploring software and algorithmic solutions to the I/O imbalance.
Scientific and Engineering Computation series
392 pp. · $35 paper

Imitation of Life: How Biology Is Inspiring Computing
Nancy Forbes
"This book will appeal to technophiles, interdisciplinarians, and broad thinkers of all stripes."
George M. Church, Harvard Medical School
176 pp., 48 illus. · $25.95 cloth

To order call 800-405-1619.
Prices subject to change without notice.
New from The MIT Press
http://mitpress.mit.edu
SIAM/ACM Prize in Computational Science and Engineering
CALL FOR NOMINATIONS

The prize will be awarded for the second time at the SIAM Conference on Computational Science and Engineering (CSE05), February 12–15, 2005, in Orlando, Florida.

The prize was established in 2002 and first awarded in 2003. It is awarded every other year by SIAM and ACM in the area of computational science in recognition of outstanding contributions to the development and use of mathematical and computational tools and methods for the solution of science and engineering problems.

The prize is intended to recognize either one individual or a group of individuals for outstanding research contributions to the field of computational science and engineering. The contribution(s) for which the award is made must be publicly available and may belong to any aspect of computational science in its broadest sense.

The award will include a total cash prize of $5,000 and a certificate. SIAM and ACM will reimburse reasonable travel expenses to attend the award ceremony.

A letter of nomination, including a description of the contribution(s), should be sent by July 31, 2004, to:

Chair, SIAM/ACM Prize in CS&E
c/o Joanna Littleton
SIAM
3600 University City Science Center
Philadelphia, PA 19104-2688
littleton@siam.org · (215) 382-9800, ext. 303 · www.siam.org/prizes
NEWS

SIMULATED BITE MARKS
By Pam Frost Gorder

News Editor: Scott L. Andresen, sandresen@computer.org

For the first time in 11,000 years, the fearsome saber-toothed tiger's canines will tear into fresh meat, if scientists at the University of Buffalo get their way.

Though real sabertooth cats are long extinct, anatomist Frank Mendel and his team plan to build a scale model of the head and jaws of a 700-pound Smilodon fatalis to reproduce the predator's deadly bite. They want to measure the forces necessary for the teeth to penetrate the skin, muscle, and other tissues of a recently dead herbivore, and use the data in a new computer-aided design (CAD) program they're developing.

The CAD program, the Vertebrate Analyzer (VA), could do for muscle and bone what similar programs have done for bridges, buildings, and automobiles: let scientists probe the form and function of a complex object on the computer. Ultimately, it could shed light on human bone and muscle ailments, as well as the lives of long-gone exotic creatures.

Mendel wants to be careful not to oversell the technology. He and Kevin Hulme of the project's engineering team have only just begun to show the beta version of the VA at scientific conferences, and they've just applied for US$1 million of federal funding to develop it further. But everyone from paleontologists to orthopedists wants a finished product.

"Whenever I talk about the Vertebrate Analyzer, someone says, 'that sounds great, when can we have it?'" Mendel says.

Larry Witmer, an anatomist at Ohio University, echoes that sentiment. "The software sounds really exciting. It looks like they still have a ways to go before they have a really sophisticated tool, but they're on the right track," he says.

The Software
Witmer currently uses the 3D visualization program Amira from TGS to analyze computed tomography scans of fossil skulls, the same kind of data set that Mendel's team uses. Recently, Witmer changed the face of Tyrannosaurus rex by suggesting the dinosaur's nostrils rested lower on its snout than once thought; he's also reconstructed a Pterodactyl brain and inner ear. He wants a program like the VA, which promises to let users virtually apply tissue to bone quickly and easily.

With the VA, the 3D skull rotates and translates by using the arrow keys; two mouse clicks attach the ends of a muscle bundle. During jaw movement, the muscle glows green when it's relaxed, then yellow, and finally red as it fully extends. The goal is for the virtual muscles to move like real ones. Users can hasten the simulation by lowering the resolution. A supercomputer could speed things up, but Mendel wants the software to run on a PC.

What Mendel and Hulme hope will set the VA apart from similar software is what they plan to do with it. They want to maintain it as open-source code and create a publicly available online vertebrate anatomy library, comparable in scope to the National Center for Biotechnology Information's GenBank DNA database. Modeling Smilodon is the first step.

Toothy Test Case
When scientists study prehistoric animals, they don't often have the luxury of complete specimens. Smilodon is an exception, due to large clusters of remains such as the 2,000 cats preserved in California's La Brea Tar Pits. Those skeletons suggest that adults were about the size of an African lion, but with forelegs that were longer and more powerful than their hind legs. The cat's infamous fangs, skinny and serrated like steak knives, and up to 7 inches long, prompted experts to debate whether they were used for hunting or for competition among males (see Figure 1).

For Mendel, that question is settled. "At La Brea, we can't tell males from females," he says. "They all have enlarged canines, even the kittens. This suggests that the teeth did something other than advertise age or gender."

But how Smilodon used those teeth is still a mystery. Did it clamp down on an animal's throat to suffocate it, as big cats do today, or simply tear the throat out and let its prey bleed to death? Maybe its strong front legs could have pinned down a suffocating Ice Age herbivore such as a deer, but could those relatively thin teeth, which lack a full coat of the enamel that strengthens human teeth, have held on without breaking?

"We assume the teeth were used to kill, yet we have to account for the lack of heft and enamel, so it's a mechanical problem," Mendel explains. What's more, fossil skulls offer only the barest clues of the muscle architecture that made wielding such teeth possible.
He was considering this puzzle when news reports of Boeing's computer-designed 777 aircraft prompted him to contact engineers at his institution. "I thought, wouldn't it be great if we could bring CAD to bear on the things I want to look at? But modeling soft tissue is a complex problem. An airplane is great technology, but it pales in comparison to what humans do walking around every day."

Once they build a skull and replicate its bite on animal carcasses from a butcher shop, scientists might know more about Smilodon. But the real payoff could go beyond that.
Potential Value
One benefit would be a clearer picture of extinct animals' biomechanics. "If you just look at modern times, you're missing the diversity of most of the life that has existed on this planet," Witmer says. "Understanding animals from the past helps us better understand animals today."

Stuart Sumida, a functional morphologist at California State University, San Bernardino, who also works with the film industry, sees two other ways for this technology to reach people: movies and video games. Today, animators move virtual skeletons called "rigs" inside animated skin to create movement. Using virtual muscles to pull on these rigs realistically is "a kind of Holy Grail of special effects," Sumida says.
Medicine, too, could benefit, as doctors could use the software to study joint problems. For instance, the work on Smilodon could lend insight into temporomandibular joint disorder, which causes headaches and jaw pain in an estimated 10,000 Americans (see Figure 2). Better artificial limbs could also result.

Mendel is staying patient. "If in three or four years we have a part of what I've been dreaming about, it'll be a great thing."

Pam Frost Gorder is a freelance science writer living in Columbus, Ohio.

Figure 1. Frank Mendel holding the Smilodon cast.

EDITOR IN CHIEF
Francis Sullivan, IDA Ctr. for Computing Sciences, fran@super.org

ASSOCIATE EDITORS IN CHIEF
Anthony C. Hearn, RAND, hearn@rand.org
Douglass E. Post, Los Alamos Nat'l Lab., post@lanl.gov
John Rundle, Univ. of California at Davis, rundle@physics.ucdavis.edu

EDITORIAL BOARD MEMBERS
Klaus-Jürgen Bathe, Mass. Inst. of Technology, kjb@mit.edu
Antony Beris, Univ. of Delaware, beris@che.udel.edu
Michael W. Berry, Univ. of Tennessee, berry@cs.utk.edu
John Blondin, North Carolina State Univ., john_blondin@ncsu.edu
David M. Ceperley, Univ. of Illinois, ceperley@uiuc.edu
Michael J. Creutz, Brookhaven Nat'l Lab., creutz@bnl.gov
George Cybenko, Dartmouth College, gvc@dartmouth.edu
Jack Dongarra, Univ. of Tennessee, dongarra@cs.utk.edu
Rudolf Eigenmann, Purdue Univ., eigenman@ecn.purdue.edu
David Eisenbud, Mathematical Sciences Research Inst., de@msri.org
William J. Feiereisen, Los Alamos Nat'l Lab., bill@feiereisen.net
Sharon Glotzer, Univ. of Michigan, sglotzer@umich.edu
Charles J. Holland, Office of the Defense Dept., charles.holland@osd.mil
M.Y. Hussaini, Florida State Univ., myh@cse.fsu.edu
David Kuck, KAI Software, Intel, david.kuck@intel.com
David P. Landau, Univ. of Georgia, dlandau@hal.physast.uga.edu
B. Vincent McKoy, California Inst. of Technology, mckoy@its.caltech.edu
Jill P. Mesirov, Whitehead/MIT Ctr. for Genome Research, mesirov@genome.wi.mit.edu
Cleve Moler, The MathWorks Inc., moler@mathworks.com
Yoichi Muraoka, Waseda Univ., muraoka@muraoka.info.waseda.ac.jp
Kevin J. Northover, Open Text, k.northover@computer.org
Andrew M. Odlyzko, Univ. of Minnesota, odlyzko@umn.edu
Charles Peskin, Courant Inst. of Mathematical Sciences, peskin@cims.nyu.edu
Constantine Polychronopoulos, Univ. of Illinois, cdp@csrd.uiuc.edu
William H. Press, Los Alamos Nat'l Lab., wpress@lanl.gov
John Rice, Purdue Univ., jrr@cs.purdue.edu
Ahmed Sameh, Purdue Univ., sameh@cs.purdue.edu
Henrik Schmidt, MIT, henrik@keel.mit.edu
Donald G. Truhlar, Univ. of Minnesota, truhlar@chem.umn.edu
Margaret H. Wright, Bell Labs, mhw@bell-labs.com
BRIEF

NEW CLOUD ANIMATION SOFTWARE ON THE HORIZON
By Lissa E. Harris

A cirrus cloud wisp hovers on a brooding sky, glowing gold and vermilion with the last rays of the setting sun. But this cloud isn't made of dust and vapor; it's made of pixels. It's the product of Swell, a new software program that creates animated clouds with unprecedented speed.

Swell and Prime, two new programs that render animated, three-dimensional (3D) clouds, are the Purdue University Rendering and Perceptualization Lab's latest innovations. At the lab, directed by David Ebert, researchers are developing software that brings scientific and medical data sets to life as 3D models, computer-generated illustrations, and photorealistic images.

Swell
Swell isn't the first cloud-animation program to be developed, or the most realistic. But many simulators, like the software used to make virtual clouds for cinematic special effects, take hours or days to run. Those that function in real time, such as the weather-predicting simulators that meteorologists use, tend to produce blob-like, unrealistic images that don't possess a real cloud's depth or complexity.

Figure 2. Vertex-based model of the Smilodon skull.

EDITORIAL OFFICE
COMPUTING in SCIENCE & ENGINEERING
10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314
phone +1 714 821 8380; fax +1 714 821 4010
www.computer.org/cise/

DEPARTMENT EDITORS
Book & Web Reviews: Bruce Boghosian, Tufts Univ., bruce.boghosian@tufts.edu
Computing Prescriptions: Isabel Beichl, Nat'l Inst. of Standards and Tech., isabel.beichl@nist.gov, and Julian Noble, Univ. of Virginia, jvn@virginia.edu
Computer Simulations: Dietrich Stauffer, Univ. of Köln, stauffer@thp.uni-koeln.de
Education: Denis Donnelly, Siena College, donnelly@siena.edu
Scientific Programming: Paul Dubois, Lawrence Livermore Nat'l Lab., dubois1@llnl.gov, and George K. Thiruvathukal, gkt@nimkathana.com
Technology News & Reviews: Norman Chonacky, Columbia Univ., chonacky@chem.columbia.edu
Visualization Corner: Jim X. Chen, George Mason Univ., jchen@cs.gmu.edu, and R. Bowen Loftin, Old Dominion Univ., bloftin@odu.edu
Web Computing: Geoffrey Fox, Indiana Univ., gcf@grids.ucs.indiana.edu
Your Homework Assignment: Dianne P. O'Leary, Univ. of Maryland, oleary@cs.umd.edu

STAFF
Senior Editor: Jenny Ferrero, jferrero@computer.org
Group Managing Editor: Gene Smarte
Staff Editors: Scott L. Andresen, Kathy Clark-Fisher, and Steve Woods
Contributing Editors: Cheryl Baltes and Joan Taylor
Production Editor: Monette Velasco
Magazine Assistant: Hazel Kosky, cise@computer.org
Design Director: Toni Van Buskirk
Technical Illustration: Alex Torres
Publisher: Angela Burgess
Assistant Publisher: Dick Price
Advertising Coordinator: Marian Anderson
Marketing Manager: Georgann Carter
Business Development Manager: Sandra Brown

AIP STAFF
Jeff Bebee, Circulation Director, jbebee@aip.org
Charles Day, Editorial Liaison, cday@aip.org

IEEE ANTENNAS AND PROPAGATION SOCIETY LIAISON
Don Wilton, Univ. of Houston, wilton@uh.edu

IEEE SIGNAL PROCESSING SOCIETY LIAISON
Elias S. Manolakos, Northeastern Univ., elias@neu.edu

CS MAGAZINE OPERATIONS COMMITTEE
Michael R. Williams (chair), Michael Blaha, Mark Christensen, Sorel Reisman, Jon Rokne, Bill Schilit, Linda Shafer, Steven L. Tanimoto, Anand Tripathi

CS PUBLICATIONS BOARD
Bill Schilit (chair), Jean Bacon, Pradip Bose, Doris L. Carver, George Cybenko, John C. Dill, Frank E. Ferrante, Robert E. Filman, Forouzan Golshani, David Alan Grier, Rajesh Gupta, Warren Harrison, Mahadev Satyanarayanan, Nigel Shadbolt, Francis Sullivan
In the animation trade, Swell's clouds are known as volumetric objects, meaning they have internal structures, not just a surface. Many computer-generated images are hollow shells composed of a kind of digital chicken wire, a mesh of triangles that approximates a curved surface. But to interact convincingly with solid objects in computer-generated animation, a cloud must be truly 3D (see Figure 3).

Volumetric phenomena are difficult to render. "You're not just working with a surface," says Swell author Joshua Schpok, who wrote the software as an undergraduate in Ebert's lab. "To illuminate things, you need to consider that any point can illuminate any other point."
To create a virtual cloud structure, Swell begins with sets of points, called vertices, arrayed on a series of stacked planes in 3D space. The software then assigns values for cloud properties, such as opacity and brightness, to each point, and interpolates between them to form a seamless texture.

"Think of sheets of glass lined up perpendicular to the direction you're looking at the cloud from. You look at the color and opacity of each of those points on those planes," Ebert says. "The reason you do them in planes, rather than random points, is that it allows you to do quicker processing."
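The plane-by-plane scheme described above can be sketched in a few lines. The following is a minimal illustration, not Swell's actual code; the array shapes, property names, and the simple linear interpolation between adjacent planes are all assumptions made for clarity:

```python
import numpy as np

# Hypothetical slice-based cloud representation: vertices arrayed on
# stacked planes, each plane carrying per-point opacity and brightness.
n_planes, n_y, n_x = 8, 16, 16
rng = np.random.default_rng(0)

# One (n_y, n_x) grid of property values per plane, stacked along the view axis.
opacity = rng.random((n_planes, n_y, n_x))
brightness = rng.random((n_planes, n_y, n_x))

def sample_between_planes(field, z):
    """Linearly interpolate a property between adjacent planes at depth z."""
    lo = int(np.floor(z))
    hi = min(lo + 1, field.shape[0] - 1)
    t = z - lo
    return (1.0 - t) * field[lo] + t * field[hi]

# A "seamless" slice 40% of the way between planes 2 and 3.
slice_opacity = sample_between_planes(opacity, 2.4)
print(slice_opacity.shape)  # (16, 16)
```

Interpolating between whole planes like this, rather than among scattered points, is what makes the per-slice work regular enough to process quickly, which is the point of Ebert's sheets-of-glass analogy.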
Running a simulation with this level of detail typically involves massive amounts of data-crunching, hence the long computing times required for most simulators. But many of the data manipulations involve computing the same function on a large group of similar data points, for example, adjusting the opacity of a set of points, all by the same factor.

Swell sidesteps this dilemma by harnessing recent improvements in the speed and efficiency of graphics processing units (GPUs), which perform computations in parallel with the CPU. The new breed of graphics cards, used primarily by gamers, handles single-instruction, multiple-data computations far more swiftly than software issuing instructions to the CPU can.
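The single-instruction, multiple-data pattern in question, one operation applied uniformly across a whole set of points, can be shown with a CPU-side analogy. This sketch uses NumPy vectorization as a stand-in for the GPU's hardware parallelism; the function names and sizes are illustrative only:

```python
import numpy as np

# Opacity values for a large set of cloud points.
opacity = np.linspace(0.1, 0.9, 100_000)

def fade_loop(values, factor):
    # Scalar path: one multiply issued per data point.
    return [v * factor for v in values]

def fade_simd(values, factor):
    # Data-parallel path: one vectorized operation over all points at once,
    # the same shape of work a GPU executes across many lanes in hardware.
    return values * factor

faded = fade_simd(opacity, 0.5)
assert np.allclose(faded, fade_loop(opacity, 0.5))  # identical results
```

Both paths compute the same numbers; the difference is purely in how many instructions must be issued, which is why offloading such uniform updates to the GPU pays off.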
Unlike its CPU-based competitors, Swell can render complex, visually realistic clouds quickly enough to react to a mouse. Swell lacks the sophistication of the very best cloud simulators, but its dramatic speed, combined with an impressive level of realism, might soon make cloud modeling accessible for real-time applications.
Prime
For now, Swell seems to be more of an artist's than a meteorologist's tool; those most interested in it are video-game developers and special-effects studios. But the Purdue lab is developing similar software that merges the art and science realms.

One promising program, Prime, has emerged from a lab effort to create software that takes scientific data sets and renders them more visually realistic. Prime's author, doctoral student Kirk Riley, has developed a program that takes data from weather-predicting simulation software and upgrades its images from solid blobs to realistic, volumetric clouds.

"The numerical weather prediction models that run daily in Washington, DC, produce the kind of data that would allow you to view the data in a photorealistic sense, if you had the software to do it," says Jason Levit, a meteorologist for the National Oceanic and Atmospheric Administration, who collaborated with Ebert's lab on the Prime project. "But up until now, we haven't had that software."

Like Swell, Prime uses parallel processing on the GPU to speed up rendering. But while Swell builds and manipulates virtual clouds from scratch, Prime takes its clouds' underlying structure from the simulator data.

"We're trying to take the simulation data and make it look the way someone would see it, if it were actually there," Riley says. "Now, all programs can do are surface approximations that look like plastic blobs in the sky. This handles the light in a more realistic fashion."

Crude as they might appear, simulators are invaluable to weather forecasters. But they haven't replaced storm spotters: meteorologists trained in field observation still make predictions based on how clouds look in the sky.

Prime soon could train new storm spotters to recognize many different types of conditions, without having to wait for them to occur in the field. It also could find applications in public education about meteorology or make television weather forecasts more visually appealing.

Ultimately, Prime's developers hope that the software will enhance forecasting's speed and accuracy by giving simulation data the look and feel of real-world weather conditions that meteorologists could instantly recognize.

"It might help us predict things faster, because we can visualize things in the model with greater accuracy," Levit says. "Will it enhance scientific discovery? That remains to be seen."

Lissa E. Harris is a freelance writer based in Boston, Massachusetts.

Figure 3. Screen shot of a Swell cloud model.
Thus, it was not exceptional when a student recently asked me for a numerical integrator. After some probing, I established that this student wanted to simulate the time course for a complex chemical process, given the rate constants for various component reactions. In short, he wanted to build a model; so why not use a modeling tool?
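The student's request, integrating rate equations forward in time from given rate constants, is a small initial-value problem. As a sketch of what Stella would model graphically, here is the same calculation in plain code for a hypothetical two-step reaction A → B → C; the rate constants, step size, and first-order kinetics are all made up for illustration:

```python
# Hypothetical reaction chain A -> B -> C with first-order kinetics:
#   dA/dt = -k1*A,  dB/dt = k1*A - k2*B,  dC/dt = k2*B
k1, k2 = 0.5, 0.2        # illustrative rate constants (1/s)
dt, steps = 0.01, 2000   # simple fixed-step Euler integration out to t = 20 s
a, b, c = 1.0, 0.0, 0.0  # initial concentrations

for _ in range(steps):
    da = -k1 * a
    db = k1 * a - k2 * b
    dc = k2 * b
    a, b, c = a + da * dt, b + db * dt, c + dc * dt

# The rates sum to zero, so total mass is conserved (up to rounding).
print(round(a + b + c, 6))  # 1.0
```

A forward-Euler loop like this is exactly the "modeling scratch pad" calculation in question: adequate for a quick answer about whether and how fast a process proceeds, without reaching for a full process simulator.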
Indeed, our department already has a license for Aspen, a sophisticated system for modeling unit processes in chemical engineering. Rather than a detailed path-to-process optimization, however, this student wanted a quick answer to whether a certain process would proceed and, if so, how fast. Scientists and engineers often want to do this type of "back of the envelope" calculation, where an envelope is inadequate for the task. On such occasions, we want to reach for a modeling scratch pad.

Stella (www.hps-inc.com) is a modeling application that can serve such needs, although it makes a relatively expensive scratch pad. Fortunately, it also provides other capabilities that add to its value as a productivity tool, just as a spreadsheet application lets you both build certain kinds of models and formulate a budget. Stella has several component toolsets, and its user interface is organized in layers. As such, the test of the application's total value is not just in its range of functionality but also in how well its toolsets and layers are integrated. The premise underlying Stella's design is that "systems thinking" is important for solving a wide class of problems and that there is a need for tools that support and cultivate this methodology. As professionals, most scientists and engineers seem to heartily agree with this premise, but it is less obvious that, as academics, they find it fit or feasible to include this methodology in standard curricular practice, particularly for undergraduate students (especially those who aren't mathematically sophisticated). There thus seems to be a need for a product to help engineering and science students learn to model systems.
In this article, Ill review Stellas
modeling capabilities for both re
search and instruction. Ill describe
the basic modeling tools using my
students quest as a simple, illustra
tive case study, exploring how these
tools contribute to speed and effi
ciency in creating models for concept
testing. I will also examine some of
Stellas broader research capabilities
in the context of how they support
and connect with more specific and
scalable modeling systems imple
mented in highend systems. Finally,
I will comment on Stellas range of
educational applications.
Basic Modeling for
Science and Engineering
As testament to the powerful, efcient,
and wellintegrated features that High
Performance Systemsnow isee sys
temshas engineered into Stella, my
graduate student started with no
knowledge of the system and learned
the basic modeling functions in about
an hour or two. This investment
earned him the ability to create his
first working model (although a cor
rect model required the usual debug
ging, and more time). Stellas features
are not only easy to learn and intuitive
to use, but they also support good
modeling practices such as documen
tation and unit consistencygood
things for students to learn and for ex
perts to follow.
My student wanted to emulate the
process of hydrocarbon radicals react
ing with nitrogen in an airsustained,
oxygendepleted part of a ame to pro
duce hydrogen cyanide, nitric oxide,
and other things. Starting with a col
lection of rate constants for compo
nent reactions, he wanted to determine
the time courses of selected parts of the
process under various initial and ambi
ent conditions. He knew that these re
actions were described by differential
equations, and that the solution lay in
integration; hence his initial quest for
8 Copublished by the IEEE CS and the AIP 15219615/04/$20.00 2004 IEEE COMPUTING IN SCIENCE & ENGINEERING
STELLA: GROWING UPWARD,
DOWNWARD, AND OUTWARD
By Norman Chonacky
A
S AN EXPERIMENTAL PHYSICIST WORKING IN AN ENVI
RONMENTALENGINEERING RESEARCH GROUP, I OFTEN
GET REQUESTS TO INTRODUCE GRADUATE STUDENTS TO COM
PUTATIONAL TOOLS TO HELP THEM CONDUCT THESIS RESEARCH.
Editor: Norman Chonacky, chonacky@columbia.edu
TECHNOLOGY
T E C H N O L O G Y N E W S & R E V I E W S
MAY/JUNE 2004 9
a numerical integrator. But it did not occur to him that, rather than simply doing a computation, he really needed to create a model like the one in Figure 1, which shows the reactions of interest in a graphical rendering he first produced using Stella.
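Before turning to Stella's approach, it may help to see what the raw "numerical integrator" route looks like. The following is a minimal sketch in Python for a generic second-order reaction A + B → C; the rate constant and initial concentrations are invented for illustration, not taken from the student's flame chemistry:

```python
# Forward-Euler time course for a hypothetical second-order reaction
# A + B -> C with rate law d[C]/dt = k[A][B]. All values are invented.

def simulate(k=0.5, a0=1.0, b0=0.8, dt=1e-3, t_end=10.0):
    a, b, c = a0, b0, 0.0
    for _ in range(int(t_end / dt)):
        rate = k * a * b        # second-order rate law
        a -= rate * dt          # A is consumed...
        b -= rate * dt          # ...together with B...
        c += rate * dt          # ...to produce C
    return a, b, c

a, b, c = simulate()
```

This loop is all the student originally asked for; the point of a modeling tool is that this bookkeeping, and its debugging, disappear into the diagram.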
In a larger context, Figure 1 shows a Stella window containing an iconic map of the chemical process model in its view area. The window margins contain icons representing the Stella modeling objects and various interface tools that control the modeling process and appearance. Four of these objects, whose icons appear in the top left-hand corner of this window, are of fundamental significance. Figure 2 shows these icons for Stella's modeling vocabulary objects in closer detail:
• The stock (Figure 2a) is a material accumulator. In the language of mathematics, it is a quantitative variable. In this particular model, the stocks are all molecular concentrations of reactants.
• The flow (Figure 2b) is a material connector. Mathematically, it is an integrator. In this model, the flows are chemical reactions.
• The converter (Figure 2c) is an information translator. Mathematically, it is an algebraic operator. In this model, the converters introduce and control the reaction processes' rate constants.
• The action connector (Figure 2d) is an information connector. Mathematically, it is a logical relationship. In this model, the action connectors define dependencies and relationships in the chemical reactions.
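Translated into code, this vocabulary amounts to a small simulation loop. The sketch below (with invented names and numbers, not Stella's generated output) shows each object's role:

```python
# Stocks are state variables; a flow moves material between them each
# time step; a converter holds an algebraic quantity; action connectors
# are the dependencies the flow expression reads. All values invented.

stocks = {"reactant": 1.0, "product": 0.0}
k = 0.4                                # converter: rate constant

def reaction_flow(s):
    return k * s["reactant"]           # action connectors: reads a stock and a converter

def step(s, dt=0.1):
    f = reaction_flow(s) * dt          # flow: the per-step increment to integrate
    s["reactant"] -= f                 # flow drains its source stock
    s["product"] += f                  # and fills its target stock

for _ in range(100):
    step(stocks)
```

Material a flow takes from one stock lands in another (or in a boundary "cloud"), so conservation is built into the topology rather than asserted after the fact.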
Note the (somewhat naive) choice that my student made for modeling the three coupled processes in this reaction: he represented these explicitly as three separate flows, not coupled with one another. Instead, he implicitly specified the actual coupling via action connectors (here represented by the red arrows) from the stocks to the Rate Converter and back. I will comment on his approach in the last section, noting this revelation's educational value. In essence, this approach requires material to exit and enter the system appropriately. The "cloud" objects in the diagram achieve this. They represent input and output portals across the boundaries of the system required by this choice of model. Stella automatically inserts these at each end of a flow "pipe" when it is first created, and maintains each until the modeler makes a positive connection to a stock. This is one of many similar cueing mechanisms, part of the Stella design's "guided learning" strategy. These cues are useful for tutoring new users, but they also serve as a debugging preprocessor, catching incompleteness and inconsistencies in a model's specification while it is being created. They greatly facilitate the model production/debugging process as well as being excellent auto-instructional aids.
The buttons in the left-hand margin of Figure 1 control model operations and visualizations. At the bottom, the button with an icon of a running person pops up a Run Controller for starting, stopping, and modifying the model calculation. The {+} buttons below it are zoom controls for the graphical window. At the upper end of the left-hand margin, the up and down arrows navigate among three levels for presenting the model to users: starting here at the graphical model view, the <up-arrow> takes us to the interface view, and the <down-arrow> to the equation view. I illustrate some details and the value of these in the next section.

Figure 1. The Stella graphical modeling environment, holding a model for nitrogen fixation by free radicals in a hydrocarbon flame. This window of the Stella user interface contains objects and connectives that the developer drags and drops into position, and then uses pop-up windows for setting their internal parameters.

Figure 2. Iconic cluster representing Stella's basic modeling vocabulary: (a) stock, (b) flow, (c) converter, and (d) action connector. This is a minimal set for modeling processes represented by ordinary differential equations.
The button bearing the chi-squared icon switches between two modes of the model view:

• In the model mode, the user can modify parameters and relations.
• In the map mode, the user can't modify them.

I mention this to illustrate that there are limits to the intuitiveness of the Stella design. Despite trying, I couldn't understand the map mode's value; I found only its annoyances when trying to create a model.
As mentioned previously, in the model view, clicking on any model object opens a pop-up window that lets you view and change the object's operational details, that is, its configuration. To illustrate with our example, clicking on the Rate Converter (remember to be in the model, not map, mode!) brings up the window pictured in Figure 3.
This window explicitly shows which input values the model topology requires, as depicted by the red lines in the graphical rendering (in this case, three action connectors pointing inward; see Figure 1). This particular configuration window shows which connections (here, inputs) are required for this object (here, a converter) in order for the model to be complete; it lets the developer set the (algebraic) relation among these inputs to be used for calculating the rate's state value. For our simple chemical model's topology, an algebraic expression in the Rate Converter's box (across the bottom of the window in Figure 3) must refer to three items, one for each of the inputs listed in the required-inputs box (at the window's upper left). Note that each input is pictured with an icon indicating what type of object is involved (here, stocks and other converters). The appropriate expression for our model is a trilinear product of the three required input values. A keyboard tool and a scroll box of built-in functions provide support for formulating appropriate expressions. If the modeler fails to create an acceptable expression based on these completeness criteria, a question mark appears on the object's icon in the model diagram and remains until the incompleteness is resolved. To further assist in maintaining consistency in the model, these configuration windows include units and documents utilities to remind and facilitate unit coherence and model documentation, respectively.
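The question-mark cue is essentially a static completeness check: the expression must mention every input wired into the object. A toy version of such a check (the function and the input names are invented for illustration, not Stella's):

```python
# Toy version of the completeness check behind Stella's question-mark cue:
# an object's expression must reference every required input wired to it.
import re

def missing_inputs(expression, required):
    tokens = set(re.findall(r"[A-Za-z_]\w*", expression))
    return [name for name in required if name not in tokens]

required = ["CH_conc", "N2_conc", "k_rate"]                  # hypothetical inputs
ok = missing_inputs("CH_conc * N2_conc * k_rate", required)  # -> []
flagged = missing_inputs("CH_conc * k_rate", required)       # -> ["N2_conc"]
```

A non-empty result is exactly the condition under which the diagram would keep its question mark.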
Icons across the graphical modeling window's top margin (Figure 1) indicate some of Stella's other basic modeling tools, including a button object and a sector frame for implementing execution controls. You can program a button to step through a model's computation, for example, or you can use a sector frame to partition, isolate, and run sections of the model piecemeal. There are also graphical and tabular pads, as well as numerical windows, for rendering outputs in various ways.
Figure 4 shows a typical graphical display for our simple chemical reaction model on the Stella graph pad. Note that the ordinates in this display have different scales and ranges, which are clearly indicated. Consistent with the program's quick-prototyping approach, it selects the scales and ranges automatically, but the user can override autoscaling and autoranging to facilitate flexibility in communicating modeling results. In these senses, and consistent with modern productivity software, Stella is reasonably self-contained. Unless you want particularly fancy graphical displays, there is no need to export results. These integrated capabilities are consistent with the scratch-pad usage of Stella.

Figure 3. Configuration window for the Rate Converter in the chemical reaction model, shown in the model mode of the model view. This configuration window is typical of that for other objects, showing such things as required connections, allowing the developer to fix relations among inputs, set initial values in the case of a stock, and so on.

Its integration even goes further in this
cause. For example, the run controller launched through the model view (described above) from a marginal button is really a floating palette of drop-down menus that can be dragged on top of the graph pad. One drop-down is a Specifications menu that lets the user set the range and scale parameters for the displays. It also contains selections for setting computational run parameters, selecting the integration algorithm and step size, evaluating the results' sensitivity to variations in the initial conditions and run-parameter values, and controlling sector switching for models that are partitioned into sectors. This facilitates exploring the parameter space for these computational variables by providing a compact way to control and observe repeated tests of the model.

This type of well-thought-out design is evident in many segments of Stella. The design's economy of features and their deft integration are other hallmarks of the application, reflecting a philosophical consistency and lots of use experience. I find this quite remarkable in this age of hastily drafted bloatware that is fatally afflicted with featureitis.
Added-Value Capabilities
Moving upward and beyond Stella's basic modeling capabilities and its use as a scratch pad, we can best discuss some of the features that add value by looking at more complex modeling examples. To that end, consider the reversible chemical reaction in Figure 5, borrowed from examples that come bundled with the current Stella distribution, version 8.1.
Equation View Features
The up and down arrows near the upper left-hand corner of the now-familiar window in this figure, like those in Figure 1, let the user navigate to the model presentation's equation view (down) or interface view (up). The former takes you "under the hood" to see a representation of the equations Stella automatically generates to depict the relations among the objects in the model. Figure 6 shows these codes for the model in Figure 5, gathered from each of the individual objects into one place.
Figure 4. Graph-pad window. This window displays results for one run of our simple chemical model.

Figure 5. Graphical model of the hydrogen-iodide dissociation, a reversible chemical reaction. This model of a straightforward chemical reaction has been carefully drawn to reflect the system's symmetries and employs a more highly detailed form of object icons, which intimate Stella's more sophisticated depths.

In effect, this listing summarizes all the relations and values fixed by the modeler for the individual objects, as required by the model's topology. These determine the time evolution dictated by its flows. The objects in the list are organized first by stocks, with
sublists of inflows, outflows, and constants under each. For the novice, it illustrates the integration's computational logic, implied by the concept of time-stepped flow values. It's a first step toward understanding the deeper issues of the computational algorithms implemented in the actual computational codes. To the expert, the listing provides a single comprehensive summary of all the structures, relations, and fixed values included in the model's underlying object-oriented computational machinery.
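For the reversible dissociation 2HI ⇌ H2 + I2 of Figure 5, the listing's time-stepped flow values correspond to a loop like the following sketch; the rate constants, units, and step size are invented, not taken from the bundled example:

```python
# Hand-written analogue of the equation view's time-stepped flows for
# the reversible reaction 2 HI <-> H2 + I2. All values are invented.
kf, kr = 0.2, 0.05                    # forward and reverse rate constants
HI, H2, I2 = 1.0, 0.0, 0.0
dt = 0.01
for _ in range(20000):
    dissociation = kf * HI * HI       # forward flow
    recombination = kr * H2 * I2      # reverse flow
    net = (dissociation - recombination) * dt
    HI -= 2.0 * net                   # two HI consumed per reaction event
    H2 += net
    I2 += net
# the run settles where kf*[HI]^2 = kr*[H2][I2]
```

Run long enough, the loop converges to the equilibrium implied by the two opposing flows, which is what the symmetric diagram in Figure 5 encodes.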
Interface View Features
The <up-arrow> icon takes you from the model view on top to the interface view, to create (as a developer) or see (as a user) a rendering of the model intended to facilitate the communication of its results. The objective is to let the model speak for itself by delivering an easily operated version to the putative audience. The Stella distribution package comes with run-only engines for Mac and Windows platforms. These runtime engines are well suited to educational applications. By letting users manipulate but not modify the models, they enable those who don't have the Stella package to still operate the models through interfaces such as the one in Figure 7. Constructed to operate the model depicted in Figure 5, this interface uses graphical input devices, such as sliders and knobs, to facilitate exploratory use of the model. It lets users conduct runs using different values for selected parameters over restricted ranges.
This interface provides virtual knobs that let the model user set a value for each reactant's initial concentration. Similarly, the user can set values for the two reaction rates via sliders. This interface includes a run controller and a predefined graph that displays resulting time courses for each of the reactants over a selected range. The developer can also design an interface that restricts users' control to certain model functions and predefined range values. In this sense, the model creator conveys information to the user in an operational, rather than a declarative, way.
Nonetheless, the interface view provides many ways to communicate declaratively as well. The Instructions, View the Model, and Run Control palettes are all button objects. The first invokes page linking for tutorial purposes. The second is coupled to the Stella modeling environment's view-shifting machinery. The third is coupled to the computational engine's execution control. As a collection, these capabilities let you build standalone tutorials of considerable flexibility and power and should be ideal for computer-assisted instruction using models. They also support professional scientific and engineering communication that can be suitably tailored for peers in the same or other disciplines, as well as for those outside the technical sphere who must be able to appreciate and understand the model's consequences. The details of such applications are outside this review's scope, but they abound on the High Performance Systems Web site (www.hps-inc.com).
Figure 6. Equation view for the chemical-dissociation reaction. This view shows equations that define the relations among objects, expressed as algebraic formulae that determine how their values are computed from one another's, organized starting with the stocks and including parametric data.

Advanced Features
For the professional scientist or engineer, Stella offers other modeling capabilities suited to sophisticated applications. The stock objects described thus far have acted like simple reservoirs, but Stella lets you configure them to behave in more complex ways than simple storage. Indeed, we can describe three variants of reservoir behavior:
• A conveyor receives inflow material that it holds for outflow under one of two conditions: either normally, after a specified residence time, or as leakage, with specified probabilities. Both capacity and inflow-rate restrictions can be imposed, and conveyor operations can be arrested (suspended temporarily) subject to a programmed logic condition.
• A queue holds portions of multiple inflows on a first-in, first-out basis. One portion is admitted to the queue for each time slice from among the multiple inflow possibilities, whose priorities the system sets and alters according to various explicit and implicit criteria.
• An oven is a reservoir that processes discrete batches of its inflow, which are set by capacity or time. Outflow is off during this time, and subsequent to inflow shutoff, the outflow remains closed until a "cook time" has elapsed (the duration is set by logic programmed into the outflow), at which point the stock outputs the entire contents at once.
Most of the logic conditions and time values described in these three additional stock varieties can be drawn by the software from user-specified sampling distributions, thus adding statistical character to the resulting simulations.
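As an illustration, the conveyor's normal (non-leaking) behavior reduces to a fixed-delay line. Here is a rough sketch, with leakage, capacity limits, and arrest logic omitted and all names and values invented:

```python
# Rough sketch of a Stella "conveyor" stock: material entering now exits
# after a fixed residence time. Leakage, capacity limits, and arrest
# logic are omitted; names and values are invented.
from collections import deque

def run_conveyor(inflows, residence_steps):
    belt = deque([0.0] * residence_steps)   # material already in transit
    outflows = []
    for amount in inflows:
        belt.append(amount)                 # inflow enters the belt
        outflows.append(belt.popleft())     # the oldest material exits
    return outflows

out = run_conveyor([5.0, 0.0, 0.0, 0.0], residence_steps=2)
# material entering at step 0 emerges at step 2
```

The queue and oven variants differ only in the admission and release logic applied to the same kind of in-transit store.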
Stella has many other sophisticated features that subscribe to the spirit of these modeling capabilities. The list of built-in functions is substantial, including conventional math, trig, and logic functions, but also cycle-time functions and those capable of processing arrays. In fact, array capability is also built into the basic modeling objects we have already described, so the developer can economically represent parallel processing of different cases or different classes of materials. Stella also offers the ability to obtain cycle-time information that helps support computational performance optimization, as well as a submodel capability that helps control complexity in building and testing complicated applications.
Subversive Values
In evaluating the potential significance of a modeling tool like Stella to scientific and engineering computation, we should consider its utility: the degree to which its functionality aligns with a user's modus operandi. Does the tool's form fit the way scientists and engineers, whether students or practitioners, function? I believe that the answer is a resounding yes. Does it also foster best practices, surreptitiously, by its intrinsic design and not by making an explicit issue of these? You can judge its subversive value for yourself.
Why for Students and Teaching Faculty?
As organized and represented by my student, each component of the second-order chemical reaction was fully described by an ordinary differential equation, from which follows a well-known form of analytic solution. In light of this fact, the student might simply have emulated the solution for these coupled component reactions on a spreadsheet and fit them to various boundary conditions introduced as parameters. In a subsequent interview with him, however, I confirmed that this was a line of attack of which he was completely unaware.
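For the record, the overlooked route is short. For a single second-order step A + B → C with equal initial concentrations, the rate law da/dt = -k a² integrates to a(t) = a0 / (1 + a0 k t), which a spreadsheet column, or a few lines of code, can evaluate directly. The values below are invented:

```python
# The analytic solution the student could have tabulated on a spreadsheet,
# checked here against a brute-force Euler integration. The rate constant
# and initial concentration are invented.

def analytic(a0, k, t):
    return a0 / (1.0 + a0 * k * t)      # closed form for da/dt = -k a^2

def euler(a0, k, t_end, dt=1e-4):
    a = a0
    for _ in range(int(t_end / dt)):
        a -= k * a * a * dt             # same ODE, integrated numerically
    return a

exact = analytic(1.0, 0.5, 4.0)         # = 1/3
approx = euler(1.0, 0.5, 4.0)           # agrees to a few parts in 10^4
```

That the closed form and the brute-force loop agree is precisely the check a spreadsheet emulation would have afforded him.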
For this student, an operational perspective on such problems was most natural: considering each reaction concretely as an operating mechanism governed by a differential equation to be integrated, rather than as a network of reactions abstractly described by a set of coupled differential equations. The modeling exercise using Stella explicitly demonstrated to me how my student was thinking about the problem, to wit, the cognitive construction of his analyses. For instructors who are able and willing to use such information in constructing their instructional approach, this is an enormous advantage.
Figure 7. Illustrative interface. This interface provides virtual, graphical input devices to let users explore the chemical-dissociation reaction by changing the reaction-parameter values.

If you doubt the conventional wisdom or the results of cognitive research on problem-solving protocols upon which I base these assertions,
consider the last time you discussed a reaction process with a chemist. I doubt that the grammar was one of equations, differential or otherwise. Chemists' thoughts most frequently unfold by diagramming reaction mechanisms, and the results generally resemble the Stella model in Figure 1, though in a more sophisticated form. Students, and novices learning to be experts, must progressively increase their sophistication in solving problems and designing systems. Because Stella serves the proclivities of both novices and experts, it supports a seamless transition between levels.
Other good reasons to introduce engineering students to modeling tools lie in the recently rewritten standards for accreditation of undergraduate engineering programs by the Accreditation Board for Engineering and Technology (ABET). The new criteria for curricular evaluation are cast in terms of learning outcomes: what graduating students must be capable of doing. Among other things, ABET's 2004-2005 Criteria for Accrediting Engineering Programs (www.abet.org/criteria.html) requires engineering schools to demonstrate that graduates have the ability to:

(3c) design a system, component, or process to meet desired needs;
(3d) function on multidisciplinary teams;
(3e) identify, formulate, and solve engineering problems; and
(3g) communicate their results.
Because Stella requires the developer to explicate a system's logical construction in an external medium that can be read and studied by others, it is a natural tool for assisting system-design work done collaboratively in teams. Moreover, its simple language facilitates clear communication of both process and results, appropriate for the educational process and supportive of the ABET outcomes listed above. In addition, Stella:

• has a low-profile learning curve for achieving the ability to construct models, that is, to render a hypothesis in computational form, enabling exploratory simulation of model performance;
• uses operational representations of systems, facilitating the student learning process;
• facilitates the process of experiential collaboration (supporting ABET outcome 3d);
• embodies the ability to simulate, allowing students to learn to identify critical parts in a component or process steps and to solve engineering problems such as optimization (supporting ABET goal 3e); and
• supports clear communication, especially among those with differing preparations and disciplinary backgrounds (supporting ABET goal 3g).
This kind of explication of student thinking is a very important investigative capability for those in the college teaching profession who are aware of the results of recent research in teaching and learning in science and engineering, conducted by a new generation of the university professoriate whose research is dedicated to this end. In this sense, Stella also stands in the tradition of these new research professionals, whose work will help determine better ways of training the next generation of scientists and engineers.
Why for Research Professionals?
It remains to point out why professional computational scientists and engineers might wish to use Stella. A good deal of what has already been said sheds light on this question. Considering the operational approach to specifying systems that Stella uses, I contend that many experienced chemists prefer to think about chemical reactions in this way. Comparing numerical with analytic descriptions, the obvious comment is that the former are generally applicable while the latter apply just to special cases, for example, where simplifying assumptions can be employed. But beyond this advantage, many experimental scientists and engineers prefer working in a mode of interaction with a model that closely resembles laboratory work, even when an analytic alternative is available. As experts, scientists and engineers more naturally do their analytic thinking in object-oriented rather than relational frameworks. Part of Stella's effectiveness is that it supports operational thinking, and it is object-oriented, thus making it useful for both experimental and theoretical types.
I haven't made much explicit mention of what Stella proffers as one of its strengths: its communications capability. Clearly, being able to pass along a model that colleagues can operate, rather than just describing certain results of that model, has much to recommend it. Today, there is much call to make the details of our work known to those outside our profession in forms that let them understand the consequences in their own terms. I believe that providing an easily operated simulation model might provide a great advantage here, for the exported model permits its recipient to discover appropriate and relevant implications by experimentation. This is guided active learning for the professional! But can Stella do this?
Here, I must resort more to personal deduction than generalized knowledge. The list of Stella users is impressively large and varied. There is
a related product, iThink, that is used by professionals in business and other non-science/engineering professions as well. The single most prescient conclusion I can draw from this broad appeal is that modeling literacy is a tool that facilitates cross-disciplinary collaborations. In Stella, we have a tool that can be used by, and results that can be shared among, a wide spectrum of professionals. As is widely understood in industry, government, and academe, such collaborations drive the cutting edge of research and development these days. Stella thus seems well positioned to help such collaborations share research knowledge across disciplines.
Stella is used by all sorts of professionals, from high-school teachers to middle corporate managers, for activities from instruction to production engineering. For scientists and engineers, it facilitates quick paste-ups and sanity checks of technical ideas and prepares certain modeling ideas for the transition from small- to large-scale applications. It also serves as a tool for teaching students about solving systems problems and as a transitional tool for taking simpler system concepts to a more complex level of analysis prior to attacking them with high-level simulation tools.
By this point, it should be clear that Stella is much more than a modeling scratch pad, although I would maintain that it excels at that. At its base, Stella is for modeling problems that can be described by ordinary differential equations. That means it is not designed for attacking problems that involve spatial distributions, their time evolution, or partial differential equations. Yet, in the appropriate regime, it is an excellent system for treating and communicating the nature and results of problems with many dependent variables, very complicated topologies, and a wide range of logical rules for interactions. It is truly a simulation package as well, because it lets you introduce statistical effects into a model's operation in several helpful ways. For a deeper appraisal of Stella's capabilities and a realistic experience of its look and feel, you can download a demo and try it yourself.
Norman Chonacky is a senior research scientist at Columbia University. His research interests include cognitive processes in research and education, environmental sensors and the management of data derived from sensor arrays, the physicochemical behavior of material in environmental systems, and applied optics. He received a PhD in physics from the University of Wisconsin, Madison. He is a member of the American Association of Physics Teachers (AAPT), the American Physical Society (APS), and the American Association for the Advancement of Science (AAAS). Contact him at chonacky@columbia.edu.
UNIVERSITY OF UTAH, SCI/Bioengineering
Applications are invited for an assistant-professor-level, tenure-track faculty position, with joint appointments in the Scientific Computing and Imaging (SCI) Institute and the Department of Bioengineering at the University of Utah. Candidates with expertise in the areas of cardiac or neurologic modeling and simulation and/or biomedical image analysis are encouraged to apply. A strong candidate should also have an extensive background in numerical computation and application-driven research.
The SCI Institute is an interdisciplinary research institute consisting of approximately 70 scientists, staff, and students dedicated to advancing the development and application of computing, scientific visualization, and numerical mathematics to topics in a wide variety of fields such as bioelectricity in the heart and brain, multimodal medical imaging, and combustion. The SCI Institute currently houses two national research centers: the NIH Center for Bioelectric Field Modeling, Simulation, and Visualization and the DOE Advanced Visualization Technology Center.
The Bioengineering Department at the University of Utah is ranked in the top 10 of American graduate programs in bioengineering and has an international reputation for research, with particular strengths in bio-based engineering, biomaterials, biomechanics, biomedical computing/imaging, controlled chemical delivery, tissue engineering, and neural interfaces. Tenure-track faculty typically have primary appointments within the College of Engineering and secondary appointments within the Health Sciences.
The successful candidate will be expected to maintain/establish a strong extramurally funded research program consistent with the research mission of the SCI Institute, and participate in undergraduate/graduate teaching consistent with the educational mission of the Department of Bioengineering. The candidate should have a doctoral degree in a field related to biomedicine or engineering and have demonstrated research skills, ideally with 2 or more years of postdoctoral experience. The candidate must be prepared to seek and secure ongoing extramural research support, collaborate closely with researchers in interdisciplinary projects, and establish or maintain an international presence in his or her field.
A complete CV, names of three references, and a short description of current research activities, teaching experience, and career goals should be sent to: Director, Scientific Computing and Imaging Institute, University of Utah, 50 So. Central Campus Drive, Rm. 3490, Salt Lake City, UT 84112; Email: crj@sci.utah.edu; Web: www.sci.utah.edu.
The University of Utah, an AA/EO employer, encourages applications from women and minorities, and provides reasonable accommodation to the known disabilities of applicants and employees.
In this second of two issues devoted to the frontiers of simulation, we feature four articles that illustrate the diversity of computational applications of complex physical phenomena. A major challenge for computational simulations is how to accurately calculate the effects of interacting phenomena, especially when such phenomena evolve with different time and distance scales and have very different properties.
When time scales for coupling different effects are long compared with those that determine each effect's evolution separately, the system is loosely coupled. It is then possible to couple several existing calculations together through an interface and obtain accurate answers.
FRONTIERS OF SIMULATION, PART II
GUEST EDITOR'S INTRODUCTION
DOUGLASS POST
Los Alamos National Laboratory
1521-9615/04/$20.00 © 2004 IEEE
Copublished by the IEEE CS and the AIP
16 COMPUTING IN SCIENCE & ENGINEERING
MAY/JUNE 2004 17

Two of the articles, "Virtual Watersheds: Modeling Regional Water Balances," by Winter et al., and "Large-Scale Fluid-Structure Interaction Simulations," by Löhner et al., discuss how to do this for specific loosely coupled systems and give example codes and results. A third article, "Simulation of Swimming Organisms: Coupling Internal Mechanics with External Fluid Dynamics," by Cortez et al., describes methods for calculating how deformable animals ranging in size from microbes to large vertebrates swim through fluids. The fourth article, "Two- and Three-Dimensional Asteroid Impact Simulations," by Gisler et al., describes a closely coupled calculation of hydrodynamics and radiation transport for asteroids striking the Earth. The coupling time for the radiation and material is much shorter than the time step, so the radiation transport and hydrodynamics motion must be solved simultaneously.
Linking together existing modules has tremendous advantages compared to developing new ones with a similar capability. If the modules already exist, the time between defining the problem and solving it can be much shorter. Second, the modules have already been tested, and thus have undergone substantial verification and validation. Third, code developers and users already have experience with how to use the modules correctly. The largest remaining issue is how to pass data among modules and how to handle different types of adjacent meshes. The calculation in Winter et al.'s article employs a generalized software infrastructure that connects separate parallel applications and couples three existing software packages. This method appears to be particularly powerful for calculating fluid flows through a fixed geometry. Löhner et al. discuss their solutions for how to enforce accurate coupling between packages with very different mesh types and geometries. Their simulations include deformation of a solid object due to force loading from the fluid. Cortez et al. examine how to treat the interaction of highly deformable objects (such as bacteria and nematodes) within the fluids through which they move via an immersed-boundary framework. This powerful technique helps calculate self-consistent solutions for the force balance between the swimming organism and the fluid through which it moves.
Obviously, the coupling between the constituent parts of asteroid impacts (matter and radiation) occurs on a time scale much shorter than practical time steps. Gisler et al. calculate the radiation-matter interaction implicitly. The material and radiation both move through the same fixed Cartesian mesh. Although the common mesh simplifies the treatments of different phenomena, it does so at a potential cost of numerical diffusion if the resolution is inadequate. They achieve additional resolution by adaptive mesh refinement (AMR), that is, by increasing the number of mesh cells locally wherever increased accuracy is needed.
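The AMR idea, refining only where accuracy demands it, usually starts by flagging cells for subdivision. The sketch below is a generic illustration, not the Gisler et al. code: the gradient-based criterion and the threshold value are invented for the example.

```python
import numpy as np

def flag_for_refinement(field, threshold):
    """Mark cells whose local gradient magnitude exceeds a threshold.

    `field` is a 2D array of some solution quantity (e.g., density);
    flagged cells are the ones an AMR scheme would subdivide.
    """
    gy, gx = np.gradient(field)
    grad_mag = np.hypot(gx, gy)
    return grad_mag > threshold

# A smooth background with one sharp front: only the front gets flagged.
x = np.linspace(0.0, 1.0, 64)
field = np.tanh((x[None, :] - 0.5) / 0.02) * np.ones((64, 1))
flags = flag_for_refinement(field, threshold=0.5)
```

Only the few columns straddling the front at x = 0.5 are flagged; the saturated regions on either side stay at the coarse resolution.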
Douglass Post is an associate editor in chief of CiSE magazine. He has 30 years of experience with computational science in controlled magnetic and inertial fusion. His research interests center on methodologies for the development of large-scale scientific simulations for the US Department of Defense and for the controlled-fusion program. Contact him at post@lanl.gov.
How to Reach CiSE

Writers
For detailed information on submitting articles, write to cise@computer.org or visit www.computer.org/cise/author.htm.

Letters to the Editors
Send letters to Jenny Ferrero, Contact Editor, jferrero@computer.org. Please provide an email address or daytime phone number with your letter.

On the Web
Access www.computer.org/cise/ or http://cise.aip.org for information about CiSE.

Subscription Change of Address (IEEE/CS)
Send change-of-address requests for magazine subscriptions to address.change@ieee.org. Be sure to specify CiSE.

Subscription Change of Address (AIP)
Send general subscription and refund inquiries to subs@aip.org.

Subscribe
Visit https://www.aip.org/forms/journal_catalog/order_form_fs.html or www.computer.org/subscribe/.

Missing or Damaged Copies
If you are missing an issue or you received a damaged copy (IEEE/CS), contact membership@computer.org. For AIP subscribers, contact kgentili@aip.org.

Reprints of Articles
For price information or to order reprints, send email to cise@computer.org or fax +1 714 821 4010.

Reprint Permission
To obtain permission to reprint an article, contact William Hagen, IEEE Copyrights and Trademarks Manager, at copyrights@ieee.org.
Detailed computational models of complex natural-human systems can help decision makers allocate scarce natural resources such as water. This article describes a virtual watershed model, the Los Alamos Distributed Hydrologic System (LADHS), which contains the essential physics of all elements of a regional hydrosphere and allows feedback between them. Unlike real watersheds, virtual watersheds let researchers perform experiments on them; they can be produced relatively cheaply (once a modeling framework is established) and run faster than real time. Furthermore, physics-based virtual watersheds do not require extensive tuning and are flexible enough to accommodate novel boundary conditions such as land-use change or increased climate variability. Essentially, virtual watersheds help resource managers evaluate the risks of alternatives once uncertainties have been quantified.
LADHS currently emphasizes natural processes, but its components can be extended to include such anthropogenic effects as municipal, industrial, and agricultural demands. The system is embedded in the Parallel Applications Work Space (PAWS), a software infrastructure for connecting separate parallel applications within a multicomponent model [1]. LADHS is composed of four interacting components: a regional atmospheric model, a land-surface hydrology model, a subsurface hydrology model, and a river-routing model. Integrated atmosphere/land-surface/groundwater models such as LADHS and those described elsewhere [2-4] provide a realistic assessment of regional water balances by including feedback between components. Realistic simulations of watershed performance require dynamically coupling these components because many of them are nonlinear, as are their interactions. Boundary conditions from global climate models, for example, can be propagated through a virtual watershed; interaction effects can then be evaluated in each component.
VIRTUAL WATERSHEDS: SIMULATING THE WATER BALANCE OF THE RIO GRANDE BASIN
C.L. WINTER, EVERETT P. SPRINGER, KEELEY COSTIGAN, PATRICIA FASEL, SUE MNIEWSKI, AND GEORGE ZYVOLOSKI
Los Alamos National Laboratory

Managers of water resources in arid and semiarid regions must allocate increasingly variable surface water supplies and limited groundwater resources. This challenge is leading to a new generation of detailed computational models that can link multiple sources to a wide range of demands.

The level of resolution a virtual watershed requires depends on the questions asked. Grid resolutions of 5 km or less on a side seem necessary for atmospheric simulations to represent the convective storms and high-relief topography common in semiarid regions, whereas resolutions of less than 100 m are needed to represent the spatial variability inherent in soil and vegetation. Simulations of regional water balances generally require high resolution because they are meant to support analysis of fine-scaled processes such as land-use change, soil moisture distribution, localized groundwater recharge, and soil erosion. Many water resource decisions are based on data from 1 m to 1 km in scale; the smallest grid in LADHS's regional atmosphere component is 5 km on a side. The land-surface component uses 100-m spacing, whereas the groundwater component concentrates processing on key volumes via an unstructured grid of about 100-m characteristic length.

This article focuses on LADHS's computational aspects: primarily, its system design and implementation and basic measures of its performance when simulating interactions between the land surface and regional atmospheres. We also give results of initial simulations of the water balance between the land surface and atmosphere in the upper Rio Grande basin to illustrate the promise of this approach.
LADHS Functional Decomposition

Our computational approach links a regional atmospheric component with terrestrial hydrologic components in a dataflow corresponding to exchanges of mass and energy among elements of regional water cycles (see Figure 1). We implemented the individual component models as loosely coupled processes on several shared- and distributed-memory parallel computers at Los Alamos National Laboratory. Because legacy applications exist for each component, we use PAWS to link the applications with minimal additional code. Each component process is assigned a fixed number of physical processors before runtime. The processes run independently, but are synchronized by exchanging data in parallel via message passing. Data are geographically referenced to a location for passing between applications.
Table 1 summarizes the detailed physics of regional watershed elements along with the resolutions we use in our model. Fluxes are basically driven by dissipative waves operating at multiple scales. Scaling the links between components is one of the major modeling challenges in a system like LADHS. For example, the atmospheric component solves the Navier-Stokes equations and operates on time steps of m/sec, whereas the groundwater element uses Darcy's law and has a time resolution of m/day. The relative time steps of these components differ by four orders of magnitude, with their spatial resolutions differing by an order of magnitude (see Table 1). The difference in spatial resolution is managed by a statistical downscaling technique that transforms relatively coarsely resolved atmospheric data to more highly resolved hydrologic scales. Differences in temporal resolution are handled by summing mass quantities like precipitation over many short time steps. Energetic quantities such as temperature are scaled up by averaging atmospheric data over time.

Figure 1. LADHS dataflow. The system consists of four software objects corresponding to the major components of basin-scale water cycles: the regional atmosphere, the land surface, the groundwater system, and the network of rivers and streams. Global-scaled general circulation data enters the system through the regional atmospheric model.

Table 1. Physics of model elements.

Model element           | Physical model            | Characteristic time scales | Spatial resolution
Groundwater             | Darcy's equation          | mm to m/day                | ~100 m
Unsaturated subsurfaces | Multiphase flow           | mm to cm/min               | 100 m
Atmosphere              | Navier-Stokes equations   | mm to m/sec                | 1 to 5 km
Overland flow           | St. Venant equations      | cm to m/sec                | 100 m
Snowmelt                | Diffusion (heat and mass) | m/hr                       | 100 m
Stream                  | St. Venant equations      | m/sec                      | By reach
Evapotranspiration      | Diffusion                 | m/sec                      | 100 m
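The temporal bridging just described, summing mass fluxes and averaging energetic quantities over the fast component's steps, can be sketched in a few lines. The function name and step counts below are illustrative, not taken from LADHS.

```python
def aggregate_for_slow_component(precip_per_step, temp_per_step):
    """Bridge many fast atmospheric steps to one slow hydrology step.

    Mass quantities (precipitation) are conserved by summing;
    energetic quantities (temperature) are averaged over the interval.
    """
    total_precip = sum(precip_per_step)                   # mm accumulated
    mean_temp = sum(temp_per_step) / len(temp_per_step)   # degrees C
    return total_precip, mean_temp

# 60 one-second atmospheric steps feeding one one-minute land-surface step.
precip = [0.001] * 60                       # mm per second
temps = [10.0 + 0.01 * i for i in range(60)]
total, mean = aggregate_for_slow_component(precip, temps)
```

Summing rather than averaging the precipitation is what keeps the water balance closed across the scale gap.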
The model physics are instantiated in four computational modules. The physics of the atmosphere, including precipitation, is computed in the Regional Atmospheric Modeling System (RAMS). We use the finite-element heat and mass (FEHM) transport code to calculate groundwater flow; overland flow and river routing are separate responsibilities of LADHS's land-surface module. In addition to the physics, an auxiliary module couples the land surface to the atmosphere through statistical downscaling, with PAWS providing the computational glue needed to link components.
Regional Atmosphere

The mesoscale atmosphere component of LADHS is RAMS [5], which estimates meteorological fields by solving the Navier-Stokes equations with finite-differencing methods. The RAMS model consists of modules that allow for many possible configurations of parameterizations for processes such as radiation calculations and cloud microphysics. Potentially nonstationary global climate effects enter LADHS via boundary conditions affecting the regional atmosphere. We can set these boundary conditions from observed sea-surface temperatures and atmospheric fields or from a global climate model's output; RAMS provides precipitation, temperature, humidity, radiation, and wind data to the surface-water hydrology component. A master-slave model and domain decomposition of nested grids are used to parallelize RAMS.
Land Surface

The LADHS surface hydrology module is a grid-based water-balance model based on the land-surface representation presented elsewhere [6]. This module uses finite differencing to approximate surface and subsurface flows in two dimensions. It includes routines for snow accumulation and snowmelt, infiltration, overland flow, evapotranspiration, saturated subsurface lateral flow, and groundwater recharge. The surface hydrology module is parallelized by domain decomposition.
River Routing

Stream-flow routing is based on the St. Venant equations to account for the multiple flow conditions that occur in watersheds. Reservoirs and other features such as diversion dams create backwater conditions that affect channel flows. Reservoirs and their operations must be represented realistically because they can dominate stream flow in a basin.
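To give a feel for channel routing, the sketch below advects a discharge pulse with the kinematic-wave simplification of the St. Venant equations. This is not the LADHS routing module: the full dynamic equations, which the authors need precisely because of backwater effects, require considerably more machinery, and every parameter value here is invented.

```python
import numpy as np

def kinematic_wave_step(q, dx, dt, celerity, lateral_inflow):
    """One explicit upwind step of the kinematic-wave approximation
    to the St. Venant equations: dQ/dt + c * dQ/dx = q_lat.

    Note: this simplification cannot represent backwater from
    reservoirs or diversion dams; the full equations are needed there.
    """
    q_new = q.copy()
    # Upwind difference: in the kinematic limit, information travels
    # downstream only, at the wave celerity c.
    q_new[1:] = q[1:] - celerity * dt / dx * (q[1:] - q[:-1]) + dt * lateral_inflow[1:]
    q_new[0] = q[0] + dt * lateral_inflow[0]   # fixed upstream inflow
    return q_new

# A 5 m^3/s inflow entering a 10-km reach discretized into 100-m cells.
q = np.zeros(100)
q[0] = 5.0
lat = np.zeros(100)
for _ in range(200):   # 2,000 s of routing; CFL number = 0.2
    q = kinematic_wave_step(q, dx=100.0, dt=10.0, celerity=2.0, lateral_inflow=lat)
```

After 2,000 seconds the front has moved roughly c * t = 4 km downstream, with some smearing from the first-order upwind scheme.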
Subsurface Hydrology

Groundwater represents a major water resource not included in current climate models. LADHS uses the FEHM code to model both shallow subsurface and regional aquifers [7]. FEHM is a three-dimensional multiphase flow code that uses control-volume finite elements to solve mass- and energy-flow equations in a porous medium. The upper boundary condition for FEHM is supplied by the Los Alamos surface hydrology module, a surface-flow module within FEHM, or a computational module that simulates streambed recharge. FEHM is parallelized by domain decomposition.
Coupling Components

A key challenge for integrated modeling is to couple physical domains operating at different scales of space and time. However, we do not emphasize coupling here because its main challenges are physical, not computational. We use a statistical algorithm based on kriging to downscale regional atmospheric data at 1- to 5-km resolutions to the 100-m resolution of the Los Alamos surface hydrology module [8]. The approach uses an elevation covariate to represent topography's effects. Coupling from the land surface to the atmosphere is presently based on RAMS's internal submodels.

Table 2. Computational requirements of a high-resolution basin-scale land-surface/atmosphere simulation.

                                        | RAMS       | Los Alamos surface hydrology module
Basin size (km²)                        | upper Rio Grande: 92,000
Duration of simulation                  | One year
Resolution                              | 1 km       | 100 m
Number of grid cells                    | 92,000     | 9,200,000
Number of vertical layers and themes    | 22         | 80
Floating-point operations per grid cell | 300        | 100
Time step                               | One second | One minute
Total number of operations              | 2 × 10¹⁶   | 4 × 10¹⁶
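The role of the elevation covariate can be illustrated with a toy version of the downscaling step: fit a relationship between the coarse-grid values and coarse-grid elevation, then evaluate it at fine-grid elevations. The authors use kriging; the least-squares sketch below only shows what the covariate buys you, and all the numbers are invented.

```python
import numpy as np

def downscale_with_elevation(coarse_vals, coarse_elev, fine_elev):
    """Toy statistical downscaling with an elevation covariate.

    Fit a linear lapse relation value ~ a + b * elevation on the
    coarse grid, then evaluate it at fine-grid elevations. (A kriging
    scheme would also spatially interpolate the residuals; that part
    is omitted here for brevity.)
    """
    b, a = np.polyfit(coarse_elev, coarse_vals, 1)
    return a + b * fine_elev

# Coarse 5-km cells: temperature falling with elevation (-6.5 C/km lapse).
coarse_elev = np.array([1500.0, 2000.0, 2500.0, 3000.0])   # meters
coarse_temp = np.array([15.0, 11.75, 8.5, 5.25])           # degrees C
fine_elev = np.array([1600.0, 2750.0])                     # 100-m cells
fine_temp = downscale_with_elevation(coarse_temp, coarse_elev, fine_elev)
```

Because the fine grid resolves terrain the coarse grid cannot, the covariate reintroduces topographic structure that a plain interpolation of the coarse values would smooth away.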
Parallel Applications Work Space

PAWS takes a data-centric view of coordination between applications, which makes it well suited to implement dataflows between legacy codes. In general, applications are loosely coupled and opaque to each other within PAWS. They can have different numbers of processors and data-layout strategies, and can be written in different languages. PAWS consists of two main elements: a central controller, which coordinates the creation of connections between components and data structures, and an application program interface (API). Applications register the parallel layout of shared data with the API and identify points where data can be transferred in parallel. PAWS can work cooperatively with an application's existing parallel communication mechanism.

In this article, we concentrate on the coupled performance of RAMS and the Los Alamos surface hydrology module, which are standalone legacy codes. The resolutions of their data structures differ, the regional atmosphere component runs in a master-slave style, and they have different grid orientations. We use three different communication strategies: land-surface elevation data is broadcast from the regional atmosphere master node to every node in the surface hydrology module; each surface-hydrology node then gathers partial precipitation arrays from RAMS; and, finally, the remaining arrays are transferred in parallel and reoriented.
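The register-then-transfer pattern described above can be sketched without any MPI machinery. This is a schematic of the data-centric idea only; the Controller class, its methods, and the buffer names are invented for the example, and PAWS's real controller performs layout-aware parallel redistribution rather than a simple copy.

```python
class Controller:
    """Toy central controller in the spirit of a data-centric coupler:
    applications register named data buffers, and the controller moves
    values between them at agreed transfer points."""

    def __init__(self):
        self.registry = {}   # (application, name) -> data buffer

    def register(self, app, name, data):
        """An application declares a shared, transferable buffer."""
        self.registry[(app, name)] = data

    def transfer(self, src_app, dst_app, name):
        """Copy a named buffer from one application to another.

        In a real coupler this step redistributes data in parallel
        across processors with different layouts; here it is a plain
        in-place copy so the receiver keeps its own buffer object.
        """
        data = self.registry[(src_app, name)]
        self.registry[(dst_app, name)][:] = data

ctl = Controller()
ctl.register("atmosphere", "precip", [0.2, 0.0, 1.1])
ctl.register("land_surface", "precip", [0.0, 0.0, 0.0])
ctl.transfer("atmosphere", "land_surface", "precip")
```

The point of the registry is opacity: neither application needs to know the other's processor count, layout, or even language, which is what makes coupling legacy codes with minimal new code possible.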
Implementation

We selected the upper Rio Grande for our simulation because the Rio Grande is a major river system in the southwestern United States and northern Mexico, providing water for flora, fauna, agriculture, domestic consumption, recreation, business, and industry. Analysis indicates that high-resolution simulation of a single year of the upper Rio Grande basin's water balance requires on the order of 10¹⁶ arithmetical operations, with the computation fairly evenly balanced between the components (see Table 2).

Performance experiments confirm this. Because a coupled model using our data-transfer module runs at the speed of the slowest component (due to data-transfer synchronization), we evaluated the performance of RAMS and the Los Alamos surface hydrology module separately and later investigated performance of the coupled models. RAMS ran fastest on 25 or 29 processors in standalone mode on an SGI Origin 2000 Nirvana cluster using a 94 × 74 grid with 22 vertical layers (see Figure 2). The falloff in performance beyond 29 processors is due to the message-passing overhead associated with the master-slave arrangement.

Runtime for one iteration of the standalone surface-hydrology module on a PC Linux cluster does not degrade with the number of processors over the range investigated, although performance essentially plateaus at 15 processors (see Figure 3). The surface-hydrology module ran on a 3,650 × 2,550 grid with 100-m spatial resolution. Time per iteration is 5.5 seconds for 15 processors and 3.0 for 25. Communications overhead goes up with the number of processors, with 15 percent overhead for message passing. Performance is maintained when components are linked.
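Table 2's totals can be reproduced from its other rows: cells × vertical layers (or themes) × floating-point operations per cell × time steps in a year. The steps-per-year arithmetic below is ours, so the results only need to land near the table's rounded 2 × 10¹⁶ and 4 × 10¹⁶.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600          # about 3.15e7 one-second steps
MINUTES_PER_YEAR = SECONDS_PER_YEAR // 60   # about 5.26e5 one-minute steps

# RAMS: 92,000 cells, 22 vertical layers, 300 flops/cell, 1-s time step.
rams_ops = 92_000 * 22 * 300 * SECONDS_PER_YEAR

# Surface hydrology: 9,200,000 cells, 80 themes, 100 flops/cell, 1-min step.
lash_ops = 9_200_000 * 80 * 100 * MINUTES_PER_YEAR
```

This gives roughly 1.9 × 10¹⁶ operations for RAMS and 3.9 × 10¹⁶ for the surface hydrology module, consistent with the table and with the observation that the work is fairly evenly balanced.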
Figure 2. RAMS timing. The decrease in performance beyond 29 processors is due to message-passing overhead.

Figure 3. Land-surface hydrology (LASH) timing. Performance levels off at 15 processors.

The coupled RAMS-surface hydrology model using PAWS was run for one day's simulated time, with varying numbers of processors for the regional atmosphere and 25 processors for surface hydrology (see Figure 4). The wait time is due to the difference in speed between the components; the PAWS data-transfer time is constant over different numbers of processors, typically 2 to 4 percent of total runtime.
Rio Grande Simulations

The upper Rio Grande basin extends from headwaters in the San Juan and Sangre de Cristo mountains of southern Colorado to where it runs dry at Fort Quitman, Texas, about 40 miles downstream from El Paso/Juarez (see Figure 5). The upper basin covers around 90,000 km² and includes the cities of Santa Fe and Albuquerque and the Las Cruces/El Paso/Juarez metropolitan area.

Water moves through the basin along multiple pathways, the most important of which are precipitation, surface runoff, infiltration, groundwater recharge and discharge, and evapotranspiration (see Figure 6). River discharge and the atmosphere are the main mechanisms for transporting water out of the basin: about 95 percent of precipitation is evaporated or transpired by plants back to the atmosphere. Annual flows have averaged about a million acre-feet per year in the upper Rio Grande, but variability is high, and the river has been subject to lengthy droughts. A major drought in the 1950s caused a rapid shift in forest and woodland. The system may be entering another such period now.

Spring snowmelt and summer rains are the main sources of water in the basin [9]. Spring snowmelt accumulated from winter storms contributes about 70 percent of annual flows in the northern Rio Grande and its tributaries. Further south in the basin, thunderstorms contribute a greater proportion of the precipitation feeding the river. Stream flow interacts with groundwater in some areas, with gains and losses highly localized. Additional groundwater recharge occurs through fractures within mountain blocks, in ephemeral streams along mountain fronts, and through agricultural fields. Groundwater is the primary source of water for metropolitan areas. The Rio Grande is a highly regulated stream, and the operation of diversion and storage dams reduces stream flow as the river passes through New Mexico.
So far, our modeling efforts have concentrated on the spatial extent and timing of the influence of precipitation on soil moisture during the 1992-1993 water year (October 1992 through September 1993). Our precipitation estimates are based on high-resolution simulations using RAMS, with three nested grids of 80 km, 20 km, and 5 km on a side.

Figure 4. RAMS/PAWS/LASH timing. Timings are based on a fixed number of processors for LASH (25) and varying numbers of processors for RAMS. The wait time is due to the difference in speed between RAMS and LASH. The PAWS data-transfer time is constant over different numbers of processors and is a small percentage of total runtime.

Figure 5. The upper Rio Grande basin has its headwaters in the San Juan Mountains near Creede, Colorado, and ends near Fort Quitman, Texas.

The largest grid covers most of the
western United States, along with parts of Canada, Mexico, and the Pacific Ocean. We need it for simulating synoptic-scale flow features in the region. The 20-km grid contains the states of Utah, Arizona, Colorado, and New Mexico. Terrain features, such as mountain ranges, are discerned at this resolution well enough to affect regional atmospheric dynamics. The 5-km grid more fully describes the rapid changes in topography and land use that affect the regional atmosphere, especially precipitation. We compared our simulations of precipitation in 1992-1993 to observed data [9] and ran the atmospheric simulations on an SGI Origin 2000 Nirvana cluster using 17 processors.
We ran the simulations with a 120-second time step (24 seconds for acoustic terms) on the coarsest (80-km) grid, with proportionally shorter time steps on the smaller grids, for the winter months. The time step was halved during the warmer seasons. The model produced one day of the simulation for each one to four hours of wall-clock time, depending on the complexity of the microphysical processes taking place at any given time. Simulated and observed monthly precipitation totals compare fairly well, although they are far from perfect. It should be noted that we performed the simulations without calibration. In general, the 1992-1993 water year was wetter than normal, but even so, our model had a tendency to overestimate precipitation at some locations. For instance, observations from July 1993 indicate that the greatest precipitation totals for the month occurred in southern and eastern New Mexico, a feature that our model captures (see Figure 7).
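The "proportionally shorter time steps on the smaller grids" can be made concrete: if the winter step is 120 s on the 80-km grid and scales with grid spacing, the nested values follow directly. The arithmetic below is ours; the authors quote only the coarse-grid figure.

```python
def nested_time_steps(coarse_dt, spacings):
    """Scale the time step in proportion to grid spacing, the usual
    practice for nested grids in mesoscale atmospheric models."""
    coarse_dx = spacings[0]
    return [coarse_dt * dx / coarse_dx for dx in spacings]

# Winter: 120-s step on the 80-km grid, scaled to the 20- and 5-km nests.
winter = nested_time_steps(120.0, [80.0, 20.0, 5.0])
# Warmer seasons: the article states the time step was halved.
summer = [dt / 2 for dt in winter]
```

Proportional scaling keeps the CFL number roughly constant across the nests, so the finest (5-km) grid ends up taking 16 steps for every coarse-grid step.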
We can demonstrate the coupled land-surface/atmosphere model's capability by simulating the effect of snow water equivalent on soil moisture from October through November 1992. The atmospheric simulation used observed sea-surface temperatures and the US National Oceanic and Atmospheric Administration (NOAA) National Center for Environmental Prediction reanalysis data as global boundary conditions. Simulated temperature, radiation, and wind data were sent from RAMS to the Los Alamos surface hydrology module every 20 minutes of simulated time; precipitation was sent at two-minute intervals when it occurred. Snow water equivalent is the amount of water contained in snow; its extent is the same as the snowpack's. RAMS produced it at 5-km resolution, and it was statistically downscaled to the surface hydrology module at 100-m resolution. The number of land-surface grid cells in this simulation is 9,307,500. We obtained soils data from the State Soil Geographic (STATSGO) database [10] and estimated hydrologic parameters by using soil texture. We obtained spatial distributions of vegetation type from the Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) database [11].
Estimates of snow water equivalent (see Figure 8) and surface soil moisture (see Figure 9) illustrate the relative effects of 5-km and 100-m grid-cell representations. The blocky nature of the snow distribution at the 5-km resolution is obvious in Figure 8, but the highly resolved soil moisture process smooths the edges of the snow distribution (see Figure 9). The distribution of soil moisture ranges from very dry in the San Luis Valley around Alamosa, Colorado, where there is little precipitation on an annual basis, to very wet conditions in higher elevation zones, where snow accumulation and melt usually occur. The detail presented in Figure 9 is important when simulating processes such as soil erosion and contaminant transport, which depend on local information to determine the water velocities used in transport calculations.
Figure 6. Water moves through the basin along multiple pathways, the most important of which are precipitation, surface runoff, infiltration, groundwater recharge and discharge, and evapotranspiration.

Although we cannot conduct actual experiments with a system as large and valuable as the hydrosphere of the Rio Grande basin, computational science has advanced to a point where simulations of river basins can be highly realistic. Although coupling theory, additional component modeling, and data gaps that affect parameterizations and validation are the main limits on distributed basin-scale simulations, the basic framework for addressing them exists in LADHS and similar physics-based systems [2-4].
That said, we still have some progress to make. An immediate need of LADHS is to link land-surface output directly to the atmosphere. Once this is done, we can evaluate the impact of the horizontal redistribution of soil moisture on the atmosphere. Enhancements also are in order for existing components. Improved models of plant-water interactions in riparian areas can lead to better evaluation of their impacts on aquifer recharge and streamflow. In the future, a more comprehensive model based on the FEHM groundwater code will replace LADHS's subsurface dynamics: the grid-based computational model will be replaced by a tree-based data structure that can take computational advantage of specific physical features of flow through watersheds. Domain decomposition based on watersheds can reduce message sizes to the output of a single point, the stream outlet, because surface flows do not cross watershed divides. Domain decomposition of FEHM also can take advantage of similar limitations on flows between groundwater basins. LADHS's modular structure allows for the interchange of atmospheric models, raising the possibility of using other atmospheric models.

Remote sensing, especially satellite-based, and new geological and geophysical characterization techniques could eventually fill many gaps in initialization and parameterization data, but issues of resolution and scaling must still be resolved. Most remotely sensed data is too coarse to be the direct source of parameters. We plan to investigate alternative atmospheric data sets for boundary conditions and large-scale forcing because they can significantly affect model results.
Figure 7. The effect of resolution on simulations of precipitation. The light-blue rectangles indicate the extent of coverage of the 20-km and 5-km grids. The area of a circle is the total amount of precipitation as simulated at the two resolutions (lower) and as observed (upper) for July 1993, binned as 0-50 mm, 50-125 mm, 125-250 mm, and > 250 mm. The more highly resolved 5-km simulation does a better job of capturing the observed pattern of variability.
Validation is a challenge for distributed models of environmental systems such as virtual watersheds. Most hydrologic state variables have not been observed consistently for long periods, and observations are usually restricted to point data. We can use point data to evaluate distributed models, but it is not sufficient by itself. Streamflow measured at a point is often used to validate hydrologic models, but the method is ill-posed because very different parameterizations can lead to the same estimates of streamflow (streamflow integrates hillslope processes). We plan to explore better methods for comparing gridded model predictions to observation points. One method is to convert the point observations to gridded fields, using a model that combines physically based simulation submodels with three-dimensional spatial interpolation to reduce the topographic, geographic, and observational bias in station networks. A weaker alternative is to compare two models; we plan to do this especially with regard to the regional atmosphere component. Someday, remotely sensed data could be the source of spatially distributed observations of system state variables as well as the source of distributed system parameters. Most progress has been made in estimating snow-covered areas.
Observations contain both systematic and random errors, either of which can affect conclusions drawn from simulations. Coupled basin-scale models require methods of quantifying uncertainty because no data set will ever be exact. Uncertainty in physics-based models can be represented through stochastic partial differential equations and quantified by either Monte Carlo simulation or the direct evaluation of moment equations. We have developed moment equations for the groundwater pressure head12 that we expect to extend to other components of the system, especially the land surface. Because most decision-makers recognize uncertainty is a byproduct of every simulation, quantifying uncertainty systematically is a critical basis for establishing their trust.
Trust also arises when a model can respond to a wide range of scenarios, including ones that have not been observed. Decision-makers need estimates of what the Rio Grande basin will look like if urban populations double, if land use changes, if climate becomes much more variable, or if we enter a new climate regime entirely. Physics-based models such as LADHS are not restricted to observed ranges of variability, nor do they rely on calibration. Virtual watersheds help us predict the continued long-term behavior of regional hydrospheres under circumstances that will not be observed for many years, if ever.
Acknowledgments
This study was supported by the Los Alamos National Laboratory's Directed Research and Development project, Sustainable Hydrology, in cooperation with the US National Science Foundation Science and Technology Center for Sustainability of Semi-Arid Hydrology and Riparian Areas (SAHRA).
References
1. K. Keahey, P. Fasel, and S. Mniszewski, "PAWS: Collective Interactions and Data Transfers," Proc. 10th IEEE Int'l Symp. High-Performance Distributed Computing (HPDC-10), IEEE CS Press, 2001, pp. 47–54.
2. Z. Yu et al., "Simulating the River-Basin Response to Atmospheric Forcing by Linking a Mesoscale Meteorological Model and Hydrologic Model System," J. Hydrology, vol. 218, nos. 1 and 2, 1999, pp. 72–91.
3. J.P. York et al., "Putting Aquifers into Atmospheric Simulation Models: An Example from the Mill Creek Watershed, Northeastern Kansas," Advances in Water Resources, vol. 25, no. 2, 2002, pp. 221–238.
4. G. Seuffert et al., "The Influence of Hydrologic Modeling on the Predicted Local Weather: Two-Way Coupling of a Mesoscale Weather Prediction Model and a Land Surface Hydrologic Model," J. Hydrometeorology, vol. 3, no. 5, 2002, pp. 505–523.
Figure 8. The distribution of simulated snow on 19 Nov. 1992 (00:00 UTC). The estimates come from RAMS using a 5-km resolution. [Map of Colorado and New Mexico showing snow water equivalent (cm) in classes from 0.1–5 up to > 240, with Creede, Alamosa, Taos, Española, Los Alamos, and Santa Fe marked.] Snow occurs in the mountains, where it should, but note the blocky nature of the pattern.

5. R.A. Pielke et al., "A Comprehensive Meteorological Modeling System: RAMS," Meteorology and Atmospheric Physics, vol. 49, nos. 1–4, 1992, pp. 69–91.
6. Q.F. Xiao, S.L. Ustin, and W.W. Wallender, "A Spatial and Temporal Continuous Surface-Subsurface Hydrologic Model," J. Geophysical Research, vol. 101, no. 29, 1996, pp. 565–584.
7. G.A. Zyvoloski et al., User's Manual for the FEHM Application: A Finite-Element Heat- and Mass-Transfer Code, tech. report LA-13306-M, Los Alamos Nat'l Laboratory, 1997.
8. K. Campbell, "Linking Meso-Scale and Micro-Scale Models: Using BLUP for Downscaling," Proc. Section on Statistics and the Environment, Am. Statistical Assoc., 1999.
9. K.R. Costigan, J.E. Bossert, and D.L. Langley, "Atmospheric/Hydrologic Models for the Rio Grande Basin: Simulations of Precipitation Variability," Global and Planetary Change, vol. 25, nos. 1 and 2, 2000, pp. 83–110.
10. State Soil Geographic (STATSGO) Database, publication number 1492, US Dept. of Agriculture, Natural Resources Conservation Service, Nat'l Soil Survey Center, Aug. 1991.
11. T.G.F. Kittel et al., "The VEMAP Integrated Database for Modeling United States Ecosystem/Vegetation Sensitivity to Climate Change," J. Biogeography, vol. 22, nos. 4 and 5, 1995, pp. 857–862.
12. C.L. Winter and D.M. Tartakovsky, "Groundwater Flow in Heterogeneous Composite Aquifers," Water Resources Research, vol. 38, no. 8, 2002, pp. 23-1–23-11.
C.L. Winter, an applied mathematician and groundwater hydrologist, was a member of the Theoretical Division at Los Alamos National Laboratory and principal investigator on Los Alamos's project to model the water cycles of regional basins. He is currently the deputy director of the National Center for Atmospheric Research in Boulder, Colorado. Winter also is an adjunct professor in the Department of Hydrology and Water Resources at the University of Arizona. He has a PhD in applied mathematics from the University of Arizona. Contact him at lwinter@ucar.edu.

Everett P. Springer is a technical staff member with the Atmospheric, Climate, and Environmental Dynamics Group at Los Alamos National Laboratory. His research interests include numerical modeling of surface and subsurface hydrologic systems, applying high-performance computing to hydrologic modeling, and hydrologic model testing. He has a BS and an MS in forestry from the University of Kentucky and a PhD from Utah State University. Contact him at everetts@lanl.gov.

Keeley Costigan is a technical staff member in the Atmospheric, Climate, and Environmental Dynamics Group at Los Alamos National Laboratory. Her research interests include regional climate modeling and mountain meteorology. She has a BS in meteorology from Iowa State University and an MS and PhD in atmospheric science from Colorado State University. She is a member of the American Meteorological Society. Contact her at krc@lanl.gov.

Patricia Fasel is a technical staff member with the Computer and Computational Sciences Division at Los Alamos National Laboratory. Her interests include parallel programming, anomaly detection, feature extraction, and algorithm development in all areas of science. She has a BS in mathematics and computer science and an MS in computer science from Purdue University. Contact her at pkf@lanl.gov.

Sue Mniszewski is a staff member at Los Alamos National Laboratory. Her research interests include parallel coupling of large-scale models, bio-ontologies, and computational economics. She has a BS in computer science from Illinois Institute of Technology in Chicago. Contact her at smm@lanl.gov.

George Zyvoloski is a subsurface flow specialist at the Los Alamos National Laboratory. His interests include numerical algorithms for coupled groundwater flow at large scales and the development of linear-equation solvers for unstructured grids. He has a PhD in mechanical engineering from the University of California, Santa Barbara. Contact him at gaz@lanl.gov.
Figure 9. The distribution of simulated soil moisture on 19 Nov. 1992 (00:00 UTC). The estimates come from LASH using 100-m cell resolutions. [Map of Colorado and New Mexico showing volumetric soil moisture content in classes from 0.00–0.05 up to 0.55–0.60, with Creede, Alamosa, Taos, Española, Los Alamos, and Santa Fe marked.] Soil moisture arises from rain as well as snow, hence its greater spatial extent. The much higher resolution of LASH leads to a smoother distribution than that of snow.
FRONTIERS OF SIMULATION
Over the past two decades, the disciplines required to predict the behavior of processes or products (fluid dynamics, structural mechanics, combustion, heat transfer, and so on) have followed the typical bottom-up trend. Starting from geometries and equations sufficiently simple to have an impact on design decisions and be identified as computational, more and more realism was added at the geometrical and physics levels. Whereas the engineering process, outlined in Figure 1, follows a line from project to solution of partial differential equations (PDEs) and evaluation, the developments (in particular of software) in the computational sciences tend to run in the opposite direction: from solvers to complete database.

With the advancement of numerical techniques and the advent, first, of affordable 3D graphics workstations and scalable compute servers, and, more recently, of PCs with sufficiently large memory and 3D graphics cards, public-domain and commercial software for each of the computational core disciplines has matured rapidly and received wide acceptance in the design and analysis process. Most of these packages are now at the mesh generator/preprocessor threshold. This has prompted the development of the next logical step: multidisciplinary links of codes, a trend that is clearly documented by the growing number of publications and software releases in this area.
In principle, interesting problems exist for any combination of the disciplines listed previously. Here, we concentrate on fluid-structure and fluid-structure-thermal interaction, in which changes of geometry due to fluid pressure, shear, and heat loads considerably affect the flowfield, changing the loads in turn. Problems in this category include

- steady-state aerodynamics of wings under cruise conditions;
- aeroelasticity of vibrating (that is, elastic) structures, such as flutter and buzz (aeroplanes and turbines), galloping (cables and bridges), and maneuvering and control (missiles and drones);
- weak and nonlinear structures, such as wetted membranes (parachutes and tents) and biological tissues (hearts and blood vessels); and
- strong and nonlinear structures, such as shock-structure interaction (command and control centers, military vehicles) and hypersonic flight vehicles.

LARGE-SCALE FLUID-STRUCTURE INTERACTION SIMULATIONS

Rainald Löhner, Juan Cebral, and Chi Yang, George Mason University
Joseph D. Baum and Eric Mestreau, Science Applications International Corporation
Charles Charman, General Atomics
Daniele Pelessone, Engineering and Software Systems Solutions

Combining computational-science disciplines, such as in fluid-structure interaction simulations, introduces a number of problems. The authors offer a convenient and cost-effective approach for coupling computational fluid dynamics (CFD) and computational structural dynamics (CSD) codes without rewriting them.

1521-9615/04/$20.00 © 2004 IEEE. Copublished by the IEEE CS and the AIP.
The most important question is how to combine these disciplines in order to arrive at an accurate, cost-effective, and modular simulation approach that can handle an arbitrary number of disciplines at the same time. Considering the fluid-structure-thermal interaction problem as an example, we see from the list of possibilities displayed in Figure 2 that any multidisciplinary capability must be able to quickly switch between approximation levels, models, and ultimately codes. Clearly, only those approaches that allow a maximum of flexibility will survive. Such approaches enable

- linear and nonlinear computational fluid dynamics (CFD), computational structural dynamics (CSD), and computational thermal dynamics (CTD) models;
- different, optimally suited discretizations for the CFD, CSD, and CTD domains;
- modularity in CFD, CSD, and CTD models and codes;
- fast multidisciplinary problem definition; and
- fully automatic grid generation for arbitrary geometrical complexity.

In this article, we focus only on such approaches.
Coupling Schemes

The question of how to couple CSD and CFD codes has been treated extensively in the literature.1–6 Two main approaches have been pursued to date: strong coupling and loose coupling. The strong (or tight) coupling technique solves the discrete system of coupled, nonlinear equations resulting from the CFD, CSD, CTD, and interface conditions in a single step. Thornton and Dechaumphai present an extreme example of the tight coupling approach, in which even the surface discretization was forced to be the same.1 The loose coupling technique, illustrated in Figure 3, solves the same system using an iterative strategy of repeated CFD solution followed by CTD solution followed by CSD solution until convergence is achieved.
Special cases of the loose coupling approach include the direct coupling in time of explicit CFD and CSD codes and the incremental-load approach of steady aero- and hydroelasticity. The variables on the boundaries are transferred back and forth between codes by a master code that directs the multidisciplinary run. Each code (CFD, CSD, CTD, and so on) is seen as a subroutine, or object, that is called by the master code, or as a series of processes that communicate via message passing. This implies that the transfer of geometrical and physical information is performed between codes without affecting their efficiency, layout, basic functionality, or coding styles.

Figure 1. Design and analysis process in engineering. [The process runs from project through objectives (performance, cost, ...), optimization (critical parameters, ...), disciplines (CSD, CFD, CTD, CEM, CDM, and so on), problem definition (models, PDEs, BCs, and so on), and grid to solver and data reduction; the historic development line runs the other way.] Developments in the computational sciences tend to go in the reverse direction.

Figure 2. Fluid-structure-thermal interaction. [The figure arranges the approximation levels of each discipline: CFD from no fluid through potential/acoustics, full potential, Euler, and Reynolds-averaged Navier-Stokes to large-eddy simulation and direct simulation of Navier-Stokes; CSD from rigid walls through rigid body (6 DOF), modal analysis, and the linear finite element method to the nonlinear finite element method with rupture, tearing, and so on; CTD from prescribed heat fluxes, temperatures, and sinks through linear to nonlinear heat conduction. Classic aeroelasticity, advanced aeroelasticity, and current efforts occupy different regions of this space.] Researchers in the computational sciences must develop flexible approaches to combining disciplines to create accurate, cost-effective, and modular simulations.
At the same time, CSD, CTD, and CFD codes can easily be replaced, making this a modular approach. The loose coupling approach allows for a straightforward reuse of existing codes and the choice of the most suitable model for a given application. The information transfer software can be developed, to a large extent, independently of the CSD, CTD, and CFD codes involved, again leading to modularity and software reuse. For this reason, this approach is favored for widespread use in academia and industry. Indeed, considerable effort has been devoted to develop general, scalable information transfer libraries.4,7,8
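The loose coupling strategy can be sketched as a master loop that calls each solver in turn and iterates until the interface state stops changing. The following is a minimal illustration with scalar stand-in "solvers"; all class and function names are invented for the sketch and are not taken from the codes discussed here:

```python
# Loose fluid-structure-thermal coupling: the master code repeatedly
# calls CFD, then CTD, then CSD, passing interface data between them.
# The three "solvers" below are scalar toys standing in for full codes.

class ToyCFD:
    def solve(self, u):
        # Pressure load relaxes as the structure deflects; constant heat flux.
        return {"forces": 1.0 - 0.5 * u, "heat_flux": 0.1}

class ToyCTD:
    def solve(self, q):
        # Interface temperature proportional to the incoming heat flux.
        return 0.2 * q

class ToyCSD:
    def solve(self, f, T):
        # Deformation from mechanical load plus thermal expansion.
        return f + 0.1 * T

def couple(cfd, ctd, csd, tol=1e-10, max_cycles=100):
    """Master loop: iterate CFD -> CTD -> CSD until the structural
    deformation u is converged; returns (cycles used, u)."""
    u = 0.0
    for cycle in range(1, max_cycles + 1):
        loads = cfd.solve(u)                    # fluid sees current shape
        T = ctd.solve(loads["heat_flux"])       # thermal response
        u_new = csd.solve(loads["forces"], T)   # structural response
        if abs(u_new - u) < tol:
            return cycle, u_new
        u = u_new
    raise RuntimeError("loose coupling did not converge")
```

In this toy problem each cycle halves the distance to the fixed point, so the loop converges; a real run replaces each `solve` with a full field solution and the scalar u with the interface deformation field.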
Information Transfer

Optimal discretizations for the CSD, CTD, and CFD problem will, in all probability, differ. For example, consider a commercial aircraft wing undergoing aeroelastic loads. For a reliable CFD solution using the Euler equations, an accurate surface representation with 60 to 120 points in the chord direction will be required. For the CSD model, a 20 × 40 mesh of plate elements might be more than sufficient to capture the dominant eigenmodes. Any general fluid-structure coupling strategy must be able to efficiently handle the information transfer between such surface representations. This is not only a matter of fast interpolation techniques, but also of accuracy, load conservation, geometrical fidelity, and temporal synchronization.

One of the main aims of the loose coupling approach is to achieve multidisciplinary runs in such a way that each one of the codes used is modified in the least possible way. Moreover, the option of having different grids for different disciplines, as well as adaptive grids that vary in time, implies that in most cases no fixed common variables will exist at the boundaries. Therefore, fast and accurate interpolation techniques are required. Because the grids can be refined or coarsened during time steps, and the surface deformations can be severe, the interpolation procedures must combine speed with generality.

Consider the problem of fast interpolation between two surface triangulations. Other types of surface elements can be handled by splitting them into triangles, so that what follows can be applied to such grid types as well. The basic idea is to treat the topology as 2D while the interpolation problem is given in 3D space. This implies that further criteria, such as relative distances normal to the surface, will have to be used to make the problem unique. Many search and interpolation algorithms have been devised over the years. Experience indicates that, for generality, a layered approach of different interpolation techniques works best. Wherever possible, a vectorized advancing-front neighbor-to-neighbor algorithm is used as the basic procedure.4 If this fails, octrees are used. Finally, if this approach also fails, an exhaustive search over all surface faces is performed.
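The layered strategy can be sketched in code. This toy version works on a planar triangulation (the production algorithm is vectorized, works on surface triangulations in 3D, and inserts an octree stage between the walk and the exhaustive scan); the data layout and helper names are invented for the sketch:

```python
# Host-triangle search: neighbor-to-neighbor walk with an exhaustive
# fallback. Triangles are vertex-index triples; neigh[t][k] is the
# triangle across the edge opposite local vertex k (None on boundaries).

def bary(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    (x, y), (xa, ya), (xb, yb), (xc, yc) = p, a, b, c
    d = (yb - yc) * (xa - xc) + (xc - xb) * (ya - yc)
    l1 = ((yb - yc) * (x - xc) + (xc - xb) * (y - yc)) / d
    l2 = ((yc - ya) * (x - xc) + (xa - xc) * (y - yc)) / d
    return l1, l2, 1.0 - l1 - l2

def locate(p, pts, tris, neigh, seed=0, eps=1e-12):
    """Walk from a seed triangle toward p; fall back to a full scan."""
    t, seen = seed, set()
    while t is not None and t not in seen:
        seen.add(t)
        lam = bary(p, *(pts[i] for i in tris[t]))
        if min(lam) >= -eps:
            return t, lam          # found the host triangle
        # cross the edge opposite the most negative coordinate
        t = neigh[t][min(range(3), key=lambda i: lam[i])]
    for t in range(len(tris)):     # exhaustive fallback
        lam = bary(p, *(pts[i] for i in tris[t]))
        if min(lam) >= -eps:
            return t, lam
    return None, None              # point not on the surface

# Unit square split into two triangles.
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
tris = [(0, 1, 2), (0, 2, 3)]
neigh = [[None, 1, None], [None, None, 0]]
t, lam = locate((0.25, 0.75), pts, tris, neigh)  # walks 0 -> 1; t == 1
```

The barycentric coordinates returned by the search double as the interpolation weights for the located point.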
For realistic 3D surface geometries, a number of factors can complicate the interpolation of surface grid information.

The first of these factors is the proper answer to the question, How close must a face be to a point to be acceptable? This is not a trivial question for situations in which narrow gaps exist in the CFD mesh, and when there is a large discrepancy of face sizes between surface grids.

A second complication often encountered arises due to the fact that interpolation may be impossible (for convex ridges) or multivalued (for concave ridges).4

A third complication arises for cases in which thin shells are embedded in a 3D volumetric fluid mesh. For these cases, the best face might actually lie on the opposite side of the face being interpolated. This ambiguity is avoided by defining a surface normal, and then only considering the faces and points whose normals are aligned.

A fourth complication arises for the common case of thin structural elements (for example, roofs, walls, and stiffeners) surrounded by a fluid medium. The structural elements will be discretized using shell elements. These shell elements will be affected by loads from both sides. Most CSD codes require a list of faces on which loads are exerted. This implies that the shell elements loaded from both sides will appear twice in this list. To be able to incorporate thickness and interpolate between CSD and CFD surface grids in a unique way, these doubly defined faces are identified and, should this check reveal the existence of doubly defined faces, new points are introduced using an unwrapping procedure.4

Figure 3. Loose coupling for fluid-structure-thermal simulations. [The master code exchanges x, w, T, (q) and f, q, (T) with the CFD code; u and f with the CSD code; and T, (q) with the CTD code. Here f denotes forces, q heat fluxes, T temperature, u deformations, x mesh position, and w mesh velocity.] The technique uses an iterative strategy, with the master code transferring geometrical and physical information between codes.
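The normal-alignment test used for the thin-shell ambiguity is simple to state in code. A sketch, with invented dictionary keys and function names:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def aligned_candidates(point_normal, faces, cos_min=0.0):
    """Drop candidate faces lying on the wrong side of a thin shell:
    keep only faces whose normal is aligned with the query point's
    surface normal (cosine of the angle above cos_min)."""
    pn = unit(point_normal)
    return [f["id"] for f in faces if dot(pn, unit(f["normal"])) > cos_min]

# A shell seen from above: the geometrically nearest candidate may be
# the bottom face, but the normal test rejects it.
faces = [{"id": "top", "normal": (0.0, 0.1, 1.0)},
         {"id": "bottom", "normal": (0.0, -0.1, -1.0)}]
```

Tightening `cos_min` toward 1 makes the filter stricter, which is useful near ridges where normals of acceptable faces still differ somewhat from the query normal.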
Position and Load Transfer

Another important question that needs to be addressed is how to make the different grids follow one another when deforming surfaces are present. Consider again the aeroelastic case of a wing deforming under aerodynamic loads. For accuracy, the CFD discretization will be fine on the surface, and the surface will be modeled as accurately as possible from the CAD/CAM data at the start of the simulation. On one hand, a CSD discretization that models the wing as a series of plates might be entirely appropriate. If one were to force the CFD surface to follow the CSD surface, the result would be a wing with no thickness, clearly inappropriate for an acceptable CFD result. On the other hand, for strong shock/object interactions with large plastic deformations and possible tearing, forcing the CFD surface to follow exactly the CSD surface is the correct way to proceed. These two examples indicate that more than one strategy might have to be used to interpolate and move the surface of the CFD mesh as the structure moves. To date, a number of techniques have been explored, including

- exact tracking with linear interpolation,4
- exact tracking with quadratic interpolation,9 and
- tracking with an initial distance vector.10

An important unsolved problem (at least to our knowledge) is how to handle, in an efficient and automatic way, models that exhibit incompatible dimensionalities. An example of such a reduced model is an aeroelastic problem in which the wing structure is modeled by a torsional beam (perfectly acceptable for the lowest eigenmodes), and the fluid by a 3D volumetric mesh. Clearly, the proper specification of movement for the CFD surface based on the 1D beam, as well as the load transfer from the fluid to the beam, represent nontrivial problems for a general, user-friendly computing environment.
During each global cycle, the CFD loads must be transferred to the CSD mesh. Simple pointwise interpolation can be used for cases in which the CSD surface mesh elements are smaller than or of similar size to the elements of the CFD surface mesh. However, this approach is not conservative and will not yield accurate results for the common case of CSD surface elements being larger than their CFD counterparts. Considering, without loss of generality, the pressure loads only, it is desirable to attain

p^s(x) \approx p^f(x), (1)

while being conservative in the sense of

f = \int p^s \, n \, d\Gamma = \int p^f \, n \, d\Gamma, (2)

where p^f and p^s denote the pressures on the fluid and solid material surfaces, and n is the normal vector. These requirements can be combined using a weighted residual method. With the approximations

p^s = N_i^s p_i^s, \quad p^f = N_j^f p_j^f, (3)

we have

\int N_i^s N_j^s \, d\Gamma \, p_j^s = \int N_i^s N_j^f \, d\Gamma \, p_j^f, (4)

which can be rewritten as

M p^s = r = L p^f. (5)

Here, M is a consistent-mass matrix and L a loading matrix. This weighted residual method is conservative in the sense of Equation 2.9,10 The most problematic part of the weighted residual method is the evaluation of the integrals appearing on the right-hand side of Equation 4. When the CFD and CSD surface meshes are not nested, this is a formidable task. Adaptive Gaussian quadrature techniques9,10 have been able to solve this problem reliably even for highly complex geometries.
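As a concrete illustration of Equations 3 through 5, the following sketch projects a fine "fluid" pressure trace onto a coarse "solid" mesh on a 1D interface. The quadrature is made exact by integrating on the union of the two meshes (a simplification standing in for the adaptive Gaussian quadrature on nonnested surface meshes); the mesh sizes and pressure field are invented:

```python
import numpy as np

def hats(x, nodes):
    """Values of all linear hat shape functions of mesh `nodes` at x."""
    return np.array([np.interp(x, nodes, np.eye(len(nodes))[i])
                     for i in range(len(nodes))])

def cross_mass(rows_mesh, cols_mesh):
    """M_ij = integral of N_i(rows) * N_j(cols) over [0, 1]; 2-point
    Gauss on union subintervals is exact for piecewise-linear shapes."""
    breaks = np.union1d(rows_mesh, cols_mesh)
    g = 0.5 / np.sqrt(3.0)                    # Gauss points at +-1/sqrt(3)
    M = np.zeros((len(rows_mesh), len(cols_mesh)))
    for a, b in zip(breaks[:-1], breaks[1:]):
        mid, h = 0.5 * (a + b), b - a
        for xq in (mid - g * h, mid + g * h):
            M += 0.5 * h * np.outer(hats(xq, rows_mesh), hats(xq, cols_mesh))
    return M

# Fine fluid mesh and coarse solid mesh on the shared interface [0, 1].
xf = np.linspace(0.0, 1.0, 21)
xs = np.linspace(0.0, 1.0, 4)
pf = 1.0 + xf**2                              # nodal fluid pressures

M = cross_mass(xs, xs)                        # consistent-mass matrix
L = cross_mass(xs, xf)                        # loading matrix
ps = np.linalg.solve(M, L @ pf)               # solve M p_s = L p_f

# Conservation: the total load is identical in both representations,
# because summing the rows of M and L recovers the partition of unity.
ones_s = np.ones(len(xs))
total_fluid = ones_s @ (L @ pf)               # integral of p_f
total_solid = ones_s @ (M @ ps)               # integral of p_s
```

Pointwise interpolation onto the coarse mesh would generally miss part of the load; the weighted residual transfer matches the integrals by construction.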
Treatment of Moving Surfaces/Bodies

Any fluid-structure interaction simulation with considerable structural deformation will require a flow solver that can handle the arbitrary surface deformation in time. The treatment of these moving surfaces differs depending on the mesh type chosen. For body-conforming grids, the external mesh faces match up with the surface (body surfaces, external surfaces, and so on) of the domain. This is not the case for the embedded approach (also known as the fictitious domain, immersed boundary, or Cartesian method), in which the surface is placed inside a large mesh (typically a box), with special treatment of the elements near the surfaces. For moving or deforming surfaces with topology change, both approaches have complementary strengths and weaknesses.
Body-Conforming Moving Meshes

The PDEs describing the flow need to be cast in an arbitrary Lagrangian-Eulerian (ALE) frame of reference, the mesh is moved in such a way as to minimize distortion, if required the topology is reconstructed, the mesh is regenerated, and the solution reinterpolated. All of these steps have been optimized over the last decade, and this approach has been used extensively.6,11–14

The body-conforming solution strategy exhibits several shortcomings:

- The topology reconstruction can sometimes fail for singular surface points.
- There is no way to remove subgrid features from surfaces, leading to small elements due to geometry.
- Reliable parallel performance on more than 16 processors has proven elusive for most general-purpose grid generators.
- The interpolation required between grids invariably leads to some loss of information.
- There is an extra cost associated with the recalculation of geometry, wall distances, and mesh velocities as the mesh deforms.

On the other hand, the imposition of boundary conditions is natural, the precision of the solution is high at the boundary, and this approach still represents the only viable solution for problems with boundary layers.
Embedded Fixed Meshes

An embedded fixed mesh is not body conforming and does not move. Hence, the PDEs describing the flow can remain in the simpler Eulerian frame of reference. At every time step, the edges crossed by CSD faces are identified and proper boundary conditions are applied in their vicinity. Although used extensively (see Löhner and colleagues,15 Murman, Aftosmis, and Berger,16 and the references cited therein), this solution strategy also exhibits some shortcomings:

- The boundary, which has the most profound influence on the ensuing physics, is also where the worst elements are found.
- At the same time, near the boundary, the embedding boundary conditions must be applied, reducing the local order of approximation for the PDE.
- Stretched elements cannot be introduced to resolve boundary layers.
- Adaptivity is essential for most cases.
- There is an extra cost associated with the recalculation of geometry (when adapting) and the crossed-edge information.
Efficient Use of Supercomputing Hardware

Despite the striking successes reported to date, only the simplest solvers (explicit time-stepping or implicit iterative schemes, perhaps with added multigrid) have been ported without major changes or problems to massively parallel machines with distributed memory. Many code options essential for realistic simulations are difficult to parallelize on this type of machine: for example, local and global remeshing,2,17 fluid-structure interaction with topology change, and, in general, applications with rapidly varying load imbalances. Even if 99 percent of all operations required by these codes can be parallelized, the maximum achievable gain would be 1:100.
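That bound is Amdahl's law with a one-percent serial fraction; a quick check:

```python
def amdahl_speedup(parallel_fraction, processors):
    """Maximum speedup when only `parallel_fraction` of the work can be
    spread across `processors` (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# 99 percent parallelizable work caps the gain near 1:100 no matter how
# many processors are used; on 32 processors the gain is already well
# below the ideal 32x.
cap = amdahl_speedup(0.99, 10**9)    # approaches 100 as processors grow
on_32 = amdahl_speedup(0.99, 32)     # roughly 24x, not 32x
```

The steep falloff between the ideal and the achievable curve is exactly what motivates the shared-memory argument in the next paragraph.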
If we accept as fact that for most large-scale codes we might not be able to parallelize more than 99 percent of all operations, the shared-memory paradigm, discarded for a while as nonscalable, will make a comeback. It is far easier to parallelize some of the more complex algorithms, as well as cases with large load imbalance, on a shared-memory machine. In addition, it is within technological reach to achieve a 100-processor, shared-memory machine (128 has been a reality since 2000).

Figure 4 shows the performance of the authors' Finite Element Flow Code (FEFLO), the fluid code used in the work presented here, on a variety of common US Department of Defense high-performance-computing platforms. One can see that the speedup obtained using shared- and distributed-memory approaches is similar.
Figure 4. Performance of the Finite Element Flow Code (FEFLO) on different platforms. [Speedup versus number of processors (1 to 32) for the ideal case, SGI O2K shared memory, SGI O2K MPI, IBM SP2 MPI, and HP DAX MPI.] Shared- and distributed-memory approaches gave similar results.
Examples

The loose coupling methodology has been applied to a number of problems over the past five years. We include here some recent examples, from simple rigid-body CSD motion to highly nonlinear, fragmenting (that is, topology-changing) solids. Additional examples, including validation and comparison to experiments, are available elsewhere.5,6,12,13,17,18
Series-60 Hull

The first example considers the steady (incompressible) flow past a typical ship hull. The hull is allowed to sink and trim due to the fluid forces. The final position and inclination (trim) of the hull are obtained iteratively. In each iteration, the steady flow is computed, the forces and moments evaluated, and the ship repositioned. The mesh is typically moved. Should the need arise, a local or global remeshing is invoked to remove elements with negative volumes.

Figure 5a shows the geometry considered. The mesh consisted of approximately 400,000 elements. Figures 5b and 5c depict the convergence of the computed sinkage and trim with respect to the number of iterations. Figures 5d and 5e compare the computed sinkage and trim with experimental data. Figures 5f and 5g compare the computed wave drag coefficient with experimental data for the fixed model and the free-to-sink-and-trim model, respectively. A run of this kind can be obtained in less than an hour on a leading-edge PC. Details are available elsewhere.18
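The sink-and-trim loop lends itself to a compact sketch: each "flow solve" reports the attitude at which the current forces and moments would balance, and the hull is repositioned toward it with under-relaxation. Everything here is an invented stand-in: the linear response model, the relaxation factor, and all names; the real iteration wraps a full steady CFD solve with mesh movement:

```python
def equilibrium(attitude_from_flow, s=0.0, t=0.0, relax=0.7,
                tol=1e-10, max_it=100):
    """Under-relaxed fixed-point iteration for sinkage s and trim t:
    redo the (stand-in) steady flow solve, reposition the hull, repeat
    until both quantities settle. Returns (iterations, s, t)."""
    for it in range(1, max_it + 1):
        # One "flow solve": the attitude implied by current forces/moments.
        s_tgt, t_tgt = attitude_from_flow(s, t)
        ds, dt = relax * (s_tgt - s), relax * (t_tgt - t)
        s, t = s + ds, t + dt
        if abs(ds) < tol and abs(dt) < tol:
            return it, s, t
    raise RuntimeError("sink-and-trim iteration did not converge")

def toy_attitude(s, t):
    # Invented linear response standing in for the CFD solve: the
    # equilibrium is the fixed point of this map.
    return 0.004 - 0.2 * s, 0.010 - 0.1 * t
```

With a contractive response like this toy one, a handful of iterations suffices, which is consistent with the small iteration counts shown in Figures 5b and 5c.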
Nose Cone

Figure 6 shows results for a proposed nose-cone experiment. The CFD part of the problem was computed using FEFLO98, and the CSD and CTD with Cosmic-Nastran. More on the flow solver is available elsewhere.2,11,19,20

The incoming flow was set to M = 3.0 at an angle of attack of α = 10°. The Reynolds number was approximately Re = 2 × 10^6, based on the length of the cone. The solution was initiated by converging the fluid-thermal problem without any structural deformation. Thereafter, the fluid-structure-thermal problem was solved. Convergence was achieved after 10 cycles. The convergence is markedly slower than that achieved for fluid-structure (aeroelastic) problems. This is due to the interplay of temperature advection in the flow domain and conduction in the solid, whose counteracting effects must be balanced.
Fragmenting Weapon

The third case considered was a fragmenting weapon. The detonation and shock propagation were modeled using a Jones-Wilkins-Lee equation of state with FEFLO. The structural response, which included tearing and failure of elements, was computed using GA-DYNA, General Atomics' version of DYNA3D. At the beginning, the walls of the weapon separate two flow domains: the inner domain, consisting of high explosives, and the outer domain, consisting of air. As the weapon's structure begins to fail, fragments are shrunk and the ensuing gaps are automatically remeshed, leading to one continuous domain. The topology reconstruction from the discrete data passed to FEFLO from GA-DYNA is completely automatic, requiring no user intervention at any stage of the simulation. The mesh in the fluid domain was adapted using sources for geometric fidelity and a modified H2-seminorm error indicator. The sources required for geometric fidelity are constructed automatically from the CSD surface faces during the topology reconstruction. At the end of the run, the flow domain contains approximately 750 independently flying bodies and 16 million elements.

Figures 7a, 7b, and 7c show the development of the detonation. The fragmentation of the weapon is clearly visible. Figure 7d shows the correlation with the observed experimental evidence.
Blast Interaction with a Generic Ship Hull

Figure 8 shows the interaction of an explosion with a generic ship hull. For this fully coupled CFD/CSD run, the structure was modeled with quadrilateral shell elements and the fluid as a mixture of high explosives and air, and mesh embedding was used.15 The structural elements were assumed to fail once the average strain in an element exceeded 60 percent. As the shell elements failed, the fluid domain underwent topological changes.

Figure 8 shows the structure and the pressure contours in a cut plane at two times during the run. Note the failure of the structure, and the invasion of high pressure into the chamber. The distortion and interpenetration of the structural elements is such that the traditional moving-mesh approach (with topology reconstruction, remeshing, ALE formulation, and so on) will invariably fail for this class of problems. In fact, it was this type of application that led the authors to consider the development of an embedded CSD capability in FEFLO.15
The methodologies and software required for fluid-structure (and thermal) interaction simulations have progressed rapidly over the last decade. Several packages offer the possibility of fully nonlinear coupled CSD, CFD, and CTD in a production environment. Looking toward the future, we envision a multidisciplinary, database-linked framework that is accessible from anywhere on demand; simulations with unprecedented detail and realism carried out in fast succession; virtual meeting spaces where geographically displaced designers and engineers collaboratively discuss and analyze new ideas; and first-principles-driven virtual reality.
Figure 5. Series 60 hull. (a) Surface mesh, (b) sinkage convergence versus number of iterations, (c) trim convergence versus number of iterations, (d) sinkage versus Froude number, (e) trim versus Froude number, (f) wave drag coefficient (Cw) for the fixed model, and (g) wave drag coefficient (Cw) for the free model. Present results are compared with experimental results from IHHI, SRS, and UT.
34 COMPUTING IN SCIENCE & ENGINEERING
Acknowledgments
This research was partially supported by AFOSR and
DTRA. Leonidas Sakell, Michael Giltrud, and Darren Rice
acted as technical monitors.
References
1. E.A. Thornton and P. Dechaumphai, "Coupled Flow, Thermal and Structural Analysis of Aerodynamically Heated Panels," J. Aircraft, vol. 25, no. 11, 1988, pp. 1052–1059.
2. R. Löhner, "Three-Dimensional Fluid-Structure Interaction Using a Finite Element Solver and Adaptive Remeshing," Computer Systems in Eng., vol. 1, nos. 2–4, 1990, pp. 257–272.
3. G.P. Guruswamy and C. Byun, "Fluid-Structural Interactions Using Navier-Stokes Flow Equations Coupled with Shell Finite Element Structures," paper no. 93-3087, Am. Inst. of Aeronautics and Astronautics, 1993.
Figure 6. Nose cone. (a) The surface grids for computational fluid dynamics (CFD) and computational structure dynamics/computational thermal dynamics (CSD/CTD), and (b) the CFD/CSD/CTD results obtained: (1) pressure, (2) temperature, (3) deformation, and (4) temperature.
4. R. Löhner et al., "Fluid-Structure Interaction Using a Loose Coupling Algorithm and Adaptive Unstructured Grids," paper no. 95-2259, Am. Inst. of Aeronautics and Astronautics, 1995.
5. R. Löhner et al., "Fluid-Structure-Thermal Interaction Using a Loose Coupling Algorithm and Adaptive Unstructured Grids," paper no. 98-2419, Am. Inst. of Aeronautics and Astronautics, 1998.
6. J.D. Baum et al., "A Coupled CFD/CSD Methodology for Modeling Weapon Detonation and Fragmentation," paper no. 99-0794, Am. Inst. of Aeronautics and Astronautics, 1999.
7. N. Maman and C. Farhat, "Matching Fluid and Structure Meshes for Aeroelastic Computations: A Parallel Approach," Computers and Structures, vol. 54, no. 4, 1995, pp. 779–785.
8. COCOLIB Deliverable 1.1: Specification of the Coupling Communications Library, Cispar Esprit Project 20161, 1997.
9. J.R. Cebral and R. Löhner, "Conservative Load Projection and Tracking for Fluid-Structure Problems," AIAA J., vol. 35, no. 4, 1997, pp. 687–692.
10. J.R. Cebral and R. Löhner, "Fluid-Structure Coupling: Extensions and Improvements," paper no. 97-0858, Am. Inst. of Aeronautics and Astronautics, 1997.
11. J.D. Baum, H. Luo, and R. Löhner, "A New ALE Adaptive Unstructured Methodology for the Simulation of Moving Bodies," paper no. 94-0414, Am. Inst. of Aeronautics and Astronautics, 1994.
12. J.D. Baum et al., "A Coupled Fluid/Structure Modeling of Shock Interaction with a Truck," paper no. 96-0795, Am. Inst. of Aeronautics and Astronautics, 1996.
13. J.D. Baum et al., "Application of Unstructured Adaptive Moving Body Methodology to the Simulation of Fuel Tank Separation from an F-16 C/D Fighter," paper no. 97-0166, Am. Inst. of Aeronautics and Astronautics, 1997.
14. D. Sharov et al., "Time-Accurate Implicit ALE Algorithm for Shared-Memory Parallel Computers," Proc. 1st Int'l Conf. Computational Fluid Dynamics, Springer-Verlag, 2000, pp. 387–392.
15. R. Löhner et al., "Adaptive Embedded Unstructured Grid Methods," paper no. 03-1116, Am. Inst. of Aeronautics and Astronautics, 2003.
16. S.M. Murman, M.J. Aftosmis, and M.J. Berger, "Simulations of 6-DOF Motion with a Cartesian Method," paper no. 03-1246, Am. Inst. of Aeronautics and Astronautics, 2003.
Figure 7. Fragmenting weapon. The figure shows pressure, mesh velocity, and fragment velocity (a) at t = 0.131 ms, (b) at t = 0.310 ms, and (c) at t = 0.500 ms, and (d) radial velocity Vr (in thousands of cm/sec) as a function of fragment weight (kg), compared with the mass average velocity.

17. R. Löhner et al., "The Numerical Simulation of Strongly Unsteady Flows with Hundreds of Moving Bodies," Int'l J. Numerical Methods in Fluids, vol. 31, 1999, pp. 113–120.
18. C. Yang and R. Löhner, "Calculation of Ship Sinkage and Trim Using a Finite Element Method and Unstructured Grids," Int'l J. CFD, vol. 16, no. 3, 2002, pp. 217–227.
19. H. Luo, J.D. Baum, and R. Löhner, "Edge-Based Finite Element Scheme for the Euler Equations," AIAA J., vol. 32, no. 6, 1994, pp. 1183–1190.
20. H. Luo, J.D. Baum, and R. Löhner, "An Accurate, Fast, Matrix-Free Implicit Method for Computing Unsteady Flows on Unstructured Grids," Comp. and Fluids, vol. 30, 2001, pp. 137–159.
Rainald Löhner is a professor in the School of Computational Sciences at George Mason University, where he is also head of the Fluid and Materials Program. His research interests include field solvers based on unstructured grids, fluid-structure-thermal interaction, grid generation, parallel computing, and visualization. Löhner has an MS in mechanical engineering from the Technical University of Braunschweig, Germany, and a PhD in civil engineering from the University of Wales. He is a member of the American Institute of Aeronautics and Astronautics (AIAA) and Sigma Chi. Contact him at rlohner@gmu.edu.
Chi Yang is an associate professor in the School of Computational Sciences at George Mason University. Her research interests include field solvers based on unstructured grids for compressible and incompressible flows, incompressible flows with free surface, field solvers based on the boundary element method for free-surface flows, ship hydrodynamics and hull optimization, and fluid-structure interaction. Yang has a BS and a PhD in naval architecture and ocean engineering from Shanghai Jiao Tong University. She is a member of the AIAA and an associate member of the Society of Naval Architects and Marine Engineers. Contact her at cyang@gmu.edu.
Figure 8. Results in a cut plane for the interaction of an explosion with a generic ship hull: (a) surface at 20 msec, (b) pressure at 20 msec, (c) surface at 50 msec, and (d) pressure at 50 msec.

Juan R. Cebral is an assistant professor in the School of Computational Sciences at George Mason University and a research physicist at Inova Fairfax Hospital. His research interests include image-based modeling of blood flows; distributed, multidisciplinary visualization; applications to cerebral aneurysms, carotid artery disease, and cerebral perfusion; and fluid-structure interaction in the context of biofluids. Cebral has an MS in physics from the University of Buenos Aires and a PhD in computational sciences from George Mason University. He is a member of the AIAA. Contact him at jcebral@gmu.edu.
Joseph D. Baum is director of the Center for Applied Computational Sciences at the Science Applications International Corporation. His research interests include unsteady internal and external flows, shock and blast dynamics, and blast-structure interaction. Baum has an MSc and a PhD in aerospace engineering from Georgia Tech. He is an associate fellow of the AIAA. Contact him at joseph.d.baum@saic.com.

Charles Charman is senior technical advisor at General Atomics. His research interests include nonlinear structural mechanics, soil-structure and fluid-structure interaction, parallel computing, and discrete particle mechanics. Charman has a BS in engineering from San Diego State University and an MS in civil engineering from the Massachusetts Institute of Technology. He is a professional civil engineer in the State of California. Contact him at charman@gat.com.

Eric L. Mestreau is a senior research scientist at the Center for Applied Computational Sciences at the Science Applications International Corporation. His research interests include fluid/structure coupling, shock and blast dynamics, and graphical display of large models. Mestreau has an MSc in mechanical engineering from the École Centrale de Paris. He is a member of the AIAA. Contact him at eric.l.mestreau@saic.com.

Daniele Pelessone is chief scientist and founding partner of Engineering and Software Systems Solutions (ES3). His research interests include the development of advanced analytical modeling techniques in structural dynamics, including theoretical continuum mechanics, applications of finite-element programs, and software installation and optimization on vector-processing computers. Pelessone has an MSc in applied mechanics from the University of California, San Diego, and a DSc in aeronautical engineering from the University of Pisa. Contact him at peless@home.com.
Computational simulation, in conjunction with laboratory experiment, can provide valuable insight into complex biological systems that involve the interaction of an elastic structure with a viscous, incompressible fluid. This biological fluid-dynamics setting presents several more challenges than those traditionally faced in computational fluid dynamics: specifically, dynamic flow situations dominate, and capturing time-dependent geometries with large structural deformations is necessary. In addition, the shape of the elastic structures is not preset: fluid dynamics determines it.

The Reynolds number of a flow is a dimensionless parameter that measures the relative significance of inertial forces to viscous forces. Due to the small length scales, the swimming of microorganisms corresponds to very small Reynolds numbers (10⁻⁶ to 10⁻²). Faster and larger organisms such as fish and eels swim at high Reynolds numbers (10² to 10⁵), but organisms such as nematodes and tadpoles experience inertial forces comparable to viscous forces: they swim at Reynolds numbers of order one.
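A quick way to see why these regimes differ by so many orders of magnitude is to evaluate the Reynolds number directly. The sketch below is our own illustration, not the authors' code; it assumes the standard form Re = UL/ν with the kinematic viscosity of water, using length and speed scales quoted elsewhere in this article.

```python
# Back-of-the-envelope check (ours, not the authors') of the Reynolds
# number regimes quoted in the text, assuming Re = U * L / nu.
NU_WATER = 1.0e-6  # kinematic viscosity of water, m^2/s (assumed)

def reynolds(speed_m_s, length_m, nu=NU_WATER):
    """Reynolds number from a characteristic speed and length."""
    return speed_m_s * length_m / nu

re_nematode = reynolds(0.8e-3, 1.0e-3)  # ~1 mm worm at 0.8 mm/s: order one
re_leech = reynolds(5.0e-2, 2.0e-2)     # 2 cm leech at 5 cm/s: about 1,000
```

With these scales, the nematode lands in the order-one range the text describes, while the leech, only a factor of 20 larger and 60 faster, reaches a Reynolds number near 1,000.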
Modern methods in computational fluid dynamics can help create a controlled environment in which we can measure and visualize the fluid dynamics of swimming organisms. Accordingly, we designed a unified computational approach, based on an immersed boundary framework,¹ that couples internal force-generation mechanisms of organisms and cells with an external, viscous, incompressible fluid. This approach can be applied to model low, moderate, and high Reynolds number flow regimes.
Analyzing the fluid dynamics of a flexible, swimming organism is very difficult, even when the organism's waveform is assumed in advance.²,³ In the case of microorganism motility, the low Reynolds number simplifies mathematical analysis because the equations of fluid mechanics in this regime are linear. However, even at low Reynolds numbers, a microorganism's waveform is an emergent property of the coupled nonlinear system, which consists of the organism's force-generation mechanisms, its passive elastic structure, and external fluid dynamics.

SIMULATION OF SWIMMING ORGANISMS:
COUPLING INTERNAL MECHANICS WITH EXTERNAL FLUID DYNAMICS

Ricardo Cortez and Lisa Fauci, Tulane University
Nathaniel Cowen, Courant Institute of Mathematical Sciences
Robert Dillon, Washington State University

FRONTIERS OF SIMULATION
1521-9615/04/$20.00 © 2004 IEEE. Copublished by the IEEE CS and the AIP.

Problems in biological fluid dynamics typically involve the interaction of an elastic structure with its surrounding fluid. A unified computational approach, based on an immersed boundary framework, couples the internal force-generating mechanisms of organisms and cells with an external, viscous, incompressible fluid.

In the immersed boundary framework, the force-generating organism is accounted for by suitable contributions to a force term in the fluid-dynamics equations. The force of an organism on the fluid is a Dirac delta-function layer of force supported only by the region of fluid that coincides with the organism's material points; away from these points, this force is zero. After including this force distribution on the fluid, we can solve the fluid equations by using either a finite-difference grid-based method or the regularized Stokeslets grid-free method developed specifically for zero Reynolds number regimes.⁴

This article presents our recent progress on coupling the internal molecular motor mechanisms of beating cilia and flagella with an external fluid, as well as the three-dimensional (3D) undulatory swimming of nematodes and leeches. We expect these computational models to provide a testbed for examining different theories of internal force-generation mechanisms.
Immersed Boundary Framework
Charles Peskin¹ introduced the immersed boundary method to model blood flow in the heart. Since then, many researchers have advanced this method to study other biological fluid dynamics problems, including platelet aggregation, 3D blood flow in the heart, inner-ear dynamics, blood flow in the kidneys, limb development, and deformation of red blood cells; a recent overview appears elsewhere.¹
For this article's purposes, we describe the immersed boundary method in the context of swimming organisms. We regard the fluid as viscous and incompressible, and the filaments that comprise the organisms as elastic boundaries immersed in this fluid. In our 3D simulations (Figure 1 shows a typical example), many filaments join to form the organism. The nematode, tapered at both ends, is built out of three families of filaments: circular, longitudinal, and right- and left-handed helical filaments.
We assume that the flow is governed by the incompressible Navier-Stokes equations (conservation of momentum and conservation of mass):
$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\,\Delta\mathbf{u} + \mathbf{F}(\mathbf{x}, t), \qquad \nabla\cdot\mathbf{u} = 0.$$
Here, ρ is the fluid density, μ is the dynamic viscosity, u is the fluid velocity, p denotes pressure, and F is the force per unit volume the organism exerts on the fluid. This force is split into the contributions from each of the filaments comprising the organism:

$$\mathbf{F} = \sum_k \mathbf{F}_k.$$

The forces F_k due to the kth filament include elastic forces from individual filament structures and passive elastic forces caused by links between filaments; they also may include active forces due to muscle contractions (in the case of nematode or leech swimming) or active forces caused by the action of dynein molecular motors (in the case of ciliary and flagellar beating). F is a delta-function layer of force supported only by the region of fluid that coincides with the filaments' material points; away from these points, the force is zero.

Figure 1. Three-dimensional nematode. (a) An immersed boundary nematode, and (b) a snapshot of a swimming nematode suppressing all but the circular filaments. Notice that these filaments are elastic and deform in response to the viscous fluid.

Let X_k(s, t) denote the kth filament as a function of a Lagrangian parameter s and time t, and let f_k(s, t) denote the boundary force per unit length along the kth filament. The boundary force depends on the biological system being modeled; we'll discuss its general form later. We assume the elastic boundary has the same density as the surrounding fluid, and that its mass is attributed to the mass of the fluid in which it sits; thus, the forces are transmitted directly to the fluid. The force field F_k from the filament X_k(s, t) is therefore
$$\mathbf{F}_k(\mathbf{x}, t) = \int \mathbf{f}_k(s, t)\,\delta(\mathbf{x} - \mathbf{X}_k(s, t))\,ds.$$
Here, the integration is over the kth one-dimensional filament comprising an immersed boundary, and δ is the 3D Dirac delta-function. The total force F(x, t) is calculated by adding the forces from each filament.
Each filament of the immersed boundary is approximated by a discrete collection of points. This boundary exerts elastic forces on the fluid near each of these points. We imagine that between each pair of successive points on a filament, an elastic spring or link generates forces to push the link's length toward a specified resting length. The force arising from the spring on a short filament segment of length ds is the product of a stiffness constant and the deviation from rest length. This force is approximated by the force density at a single point in the segment multiplied by ds. In addition to the forces caused by springs along individual filaments, forces due to passive or active interactions between filaments contribute to force density. Each spring may have a time-dependent rest length as well as a time-dependent stiffness. Our coupled fluid-immersed boundary system is closed by requiring the velocity of a filament's material point to equal the fluid velocity evaluated at that point.
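The spring-link construction above can be sketched in a few lines. The following is our own minimal illustration, not the authors' code: Hookean links along one discretized filament, with per-link rest lengths and stiffnesses that could be made time-dependent to model active muscle contraction. All function and variable names are ours.

```python
import numpy as np

# Sketch (ours) of the boundary-force construction described in the text:
# each link between successive filament points pushes its length toward a
# rest length, with force = stiffness * (length - rest_length) directed
# along the link, applied equally and oppositely to the two endpoints.
def filament_forces(X, rest_len, stiffness):
    """X: (N, 3) point positions; rest_len, stiffness: per-link values.
    Returns the (N, 3) array of forces on the points."""
    F = np.zeros_like(X)
    for i in range(len(X) - 1):
        d = X[i + 1] - X[i]                 # link vector
        length = np.linalg.norm(d)          # assumes distinct points
        tension = stiffness[i] * (length - rest_len[i])
        f = tension * d / length            # pulls the pair together if stretched
        F[i] += f
        F[i + 1] -= f
    return F
```

Because every link applies equal and opposite forces, the total force on an isolated filament sums to zero, consistent with the organism exerting only internal forces on itself.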
In the next two sections, we provide brief descriptions of two numerical methods used in the simulation of immersed boundary motion in flows corresponding to a wide range of Reynolds numbers.
Grid-Based Immersed Boundary Algorithm
We can summarize the immersed boundary algorithm as follows: Suppose that at the end of time step n, we have the fluid velocity field uⁿ on a grid and the configuration of the immersed boundary points on the filaments comprising the organism (X_k)ⁿ. To advance the system by one time step, we must

1. Calculate the force densities f_k from the boundary configuration.
2. Spread the force densities to the grid to determine the forces F_k on the fluid.
3. Solve the Navier-Stokes equations for uⁿ⁺¹.
4. Interpolate the fluid velocity field to each immersed boundary point (X_k)ⁿ and move the point at this local fluid velocity.

The Navier-Stokes equations are solved on a regular grid with simple boundary conditions in Step 3; Steps 2 and 4 involve the use of a discrete delta-function that communicates information between the grid and the immersed boundary points.¹
This algorithm's crucial feature is that the immersed boundary is not the computational boundary in the Navier-Stokes solver; rather, it is a dynamic force field that influences fluid motion via the force term in the fluid equations. This modular approach lets us choose a fluid solver best suited to the problem's Reynolds number. Furthermore, we can base whatever solver we choose on a variety of formulations, including finite-difference and finite-element methods.
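Steps 2 and 4 of the algorithm above can be sketched concretely. The code below is our own one-dimensional illustration on a periodic grid, not the authors' implementation; it uses Peskin's standard four-point discrete delta function, and the helper names are ours.

```python
import numpy as np

def delta4(r):
    """Peskin's 4-point discrete delta function (support |r| < 2 cells)."""
    r = abs(r)
    if r < 1:
        return (3 - 2 * r + np.sqrt(1 + 4 * r - 4 * r * r)) / 8
    if r < 2:
        return (5 - 2 * r - np.sqrt(-7 + 12 * r - 4 * r * r)) / 8
    return 0.0

def spread(Xs, fs, n, h):
    """Step 2: spread point forces fs at positions Xs onto an n-point
    periodic grid with spacing h (force density on the grid)."""
    F = np.zeros(n)
    for X, f in zip(Xs, fs):
        j0 = int(np.floor(X / h))
        for j in range(j0 - 2, j0 + 3):
            F[j % n] += f * delta4((X - j * h) / h) / h
    return F

def interpolate(u, Xs, h):
    """Step 4: interpolate the grid field u back to the boundary points."""
    n = len(u)
    return np.array([
        sum(u[j % n] * delta4((X - j * h) / h)
            for j in range(int(np.floor(X / h)) - 2, int(np.floor(X / h)) + 3))
        for X in Xs
    ])
```

The four-point delta is constructed so that its shifted values sum to one; as a result, spreading conserves total force and interpolating a constant velocity field returns that constant, which is what makes the grid-boundary communication consistent.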
Grid-Free Method of Regularized Stokeslets
At the low Reynolds number regime of swimming microorganisms, we can describe the fluid dynamics via the quasi-steady Stokes equations:

$$\mu\,\Delta\mathbf{u} = \nabla p - \mathbf{F}(\mathbf{x}, t), \qquad \nabla\cdot\mathbf{u} = 0.$$

A fundamental solution of these equations is called a Stokeslet, which represents the velocity due to a concentrated force acting on the fluid at a single point in an infinite domain of fluid.³ In fact, F(x, t) is the sum of such point forces. Ricardo Cortez considered the smoothed case in which the concentrated force is applied not at a single point, but over a small ball of radius ε centered at the immersed boundary point.⁴ We can compute a regularized fundamental solution, or regularized Stokeslet, analytically.

Figure 2. A bacterium swimming because of a helical wave's propagation. Fluid velocity vectors are shown on two planes perpendicular to the swimming axis. The simulation demonstrates the grid-free method of regularized Stokeslets.

The method of regularized Stokeslets is a Lagrangian method in which the trajectories of fluid particles are tracked throughout the simulation. This method is particularly useful when the forces driving the fluid motion are placed along the surface of a swimming organism that deforms because of its interaction with the fluid. The forces on the surface are given by regularized delta-functions, and the resulting velocity represents the exact solution of Stokes equations for the given forces.

Because the incompressible Stokes equations are linear, we can use direct summation to compute the velocity at each immersed boundary point to advance a time step. This method of regularized Stokeslets is related to boundary integral methods, but it has the advantage that forces may be applied at any discrete collection of points; these points need not approximate a smooth interface.
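The direct summation can be sketched as follows. This is our own illustration, not the authors' code, and it uses one commonly quoted 3D regularized Stokeslet kernel associated with a particular choice of blob function; the published method admits other blobs, and all names below are ours.

```python
import numpy as np

# Sketch (ours): velocity induced at evaluation points by N regularized
# point forces, summed directly. For one common blob choice the kernel is
#   u(x) = (1 / 8*pi*mu) * sum_k [ f_k (r^2 + 2 eps^2) + (f_k . d) d ]
#                                 / (r^2 + eps^2)^(3/2),   d = x - x_k.
def stokeslet_velocity(x_eval, x_f, f, eps, mu=1.0):
    """x_eval: (M, 3) points; x_f: (N, 3) force locations; f: (N, 3) forces."""
    u = np.zeros_like(x_eval, dtype=float)
    for k in range(len(x_f)):
        d = x_eval - x_f[k]                    # (M, 3) displacements
        r2 = np.sum(d * d, axis=1)
        denom = (r2 + eps**2) ** 1.5
        fd = d @ f[k]                          # f_k . d at each point
        u += ((r2 + 2 * eps**2)[:, None] * f[k] + fd[:, None] * d) \
             / denom[:, None]
    return u / (8 * np.pi * mu)
```

Unlike a singular Stokeslet, the velocity stays finite everywhere; evaluating at the force point itself gives u = f / (4πμε), so the regularization radius ε sets the effective mobility of each boundary point.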
We have successfully implemented this algorithm for ciliary beating in two dimensions and helical swimming in three. Figure 2 shows a snapshot of a helical swimmer with fluid velocity fields computed along two planes perpendicular to the axis of the helix.
Undulatory Swimming
Nematodes are unsegmented roundworms with elongated bodies tapered at both ends. The most famous nematode is C. elegans, a model organism for genetic, developmental, and neurobiological studies. Nematodes possess a fluid-filled cavity, longitudinal muscles, and a flexible outer cuticle composed of left- and right-handed helical filaments, yet they still maintain a circular cross-section. The alternate contractions of their dorsal and ventral longitudinal muscles cause these worms to swim with an eel-like, undulatory pattern.⁵ A typical nematode is roughly 0.5 to 1 millimeter long, undulating with a wave speed between 0.8 and 4 millimeters per second. Therefore, in water, a Reynolds number (based on wavelength and wave speed) between 0.4 and 4 governs nematode swimming.
We chose the filaments comprising our computational organism to reflect the nematode's anatomy, including the longitudinal muscle fibers and the helical filaments of its cuticle. The stiffness constants of the springs making up these filaments reflect the tissue's elastic properties. In the simulation depicted in Figure 1, sinusoidal undulatory waves are passed along the body of the immersed organism by imposing appropriate muscle contractions along its longitudinal and helical filaments. Figure 3 shows a 3D perspective of the worm along with the velocity field of the fluid depicted in the plane that contains the worm's centerline. (Here, we used a grid-based immersed boundary algorithm.) The flow field shows vortices with alternating directions supported along the length of the organism. A previous study experimentally observed this characteristic flow pattern for the nematode Turbatrix.⁵ We computed the swimming speed of our simulated nematode, whose amplitude of oscillation we chose to be about one-half of that reported for Turbatrix, to be 5 percent of the propulsive wave speed along its body. These calculations compare very well with the experimentally observed swimming speed of 20 percent of wave speed reported for Turbatrix,⁵ given that swimming speed is proportional to the square of the wave's amplitude.²
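The comparison above rests on the quadratic amplitude scaling, and the arithmetic is easy to check. The sketch below is our own illustration (the function name is ours; the 20 percent figure and the half-amplitude choice come from the text):

```python
# Check (ours) of the amplitude-squared scaling invoked above: for
# low-amplitude undulatory swimming, U/c scales like amplitude^2, so
# halving Turbatrix's amplitude should quarter its 20 percent ratio.
def scaled_speed_ratio(base_ratio, amplitude_factor):
    """Swimming-speed-to-wave-speed ratio after rescaling the amplitude."""
    return base_ratio * amplitude_factor ** 2

predicted = scaled_speed_ratio(0.20, 0.5)  # half the reported amplitude
```

The predicted ratio is 5 percent of wave speed, matching the simulated nematode.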
We now turn to modeling another undulatory swimmer: the leech. Leeches are larger and faster than nematodes, and have an elliptical rather than circular cross-section. We focus on 2-centimeter-long juvenile leeches, with propulsive wave speeds of approximately 5 centimeters per second undulating in water. In this case, the Reynolds number based on wavelength and wave speed is about 1,000; inertial effects are significantly more important than viscous effects.⁶

Figure 3. Snapshot of a swimming nematode shown within the rectangular computational domain. The velocity field is depicted in the plane that contains the worm's centerline.

Using the same immersed boundary construct as we did for the nematodes (longitudinal muscle filaments and right- and left-handed helical filaments), but replacing the circular filaments with elliptical cross-sectional filaments, we examine the leech's undulatory swimming in a 3D fluid. Figure 4 shows four snapshots of the leech as viewed from the side, along with fluid markers for flow visualization. Each of the four snapshots depicts the leech at the same phase in its undulation, during successive periods. A wave passes over the body from left to right; note the forward swimming progression and the wake that is left behind. We initially placed the red fluid markers in the foreground far enough from the side of the leech that they don't get carried along with the organism. Figure 5 shows four snapshots of the leech from a different perspective; note the complex 3D particle mixing that occurs.

For our simulated leech, we used experimental data on waveform and wave speed originally reported by Chris Jordan.⁶ Because of accuracy constraints that require enough grid points within a cross-section of the leech, the aspect ratio of the simulated leech's elliptical cross-section is 2:1, not the actual 5:1 Jordan reported.⁶ We believe that this difference causes the simulated leech to swim about five times slower than the real leech.
Cilia and Flagella
Cilia and flagella are the prominent organelles associated with microorganism motility. Although the patterns of flagellar movement are distinct from those of ciliary movement, and flagella are typically much longer than cilia, their basic ultrastructure is identical. A core, called the axoneme, produces the bending of cilia and flagella. The typical axoneme consists of a central pair of single microtubules surrounded by nine outer doublet microtubules and encased by the cell membrane.⁷,⁸ Radial spokes attach to the peripheral doublet microtubules and span the space toward the central pair of microtubules. The outer doublets are connected by nexin links between adjacent pairs of doublets. Two rows of dynein arms extend from the A-tubule of an outer doublet toward the B-tubule of an adjacent doublet at regularly spaced intervals. The bending of the axoneme is caused by sliding between pairs of outer doublets, which in turn is due to the unidirectional adenosine triphosphate (ATP)-induced force generation of the dynein molecular motors. The precise nature of the spatial and temporal control mechanisms regulating the various waveforms of cilia and flagella is still unknown.
Considerable interest has focused on the development of mathematical models for the hydrodynamics of individual as well as rows of cilia and on individual flagellated organisms. Gray and Hancock's⁹ resistive-force theory and Sir James Lighthill's slender-body theory³ are particularly noteworthy. More detailed hydrodynamic analyses, such as refined slender-body theory and boundary element methods, have produced excellent simulations of both two- and three-dimensional flagellar propulsion and ciliary beating in an infinite fluid domain or in a domain with a fixed wall. In all these fluid dynamical models, researchers take the shape of the ciliary or flagellar beat as given. More recent work by Shay Gueron and Konstantin Levit-Gurevich includes a model that addresses the internal force generation in a cilium¹⁰ but does not explicitly model the individual microtubule-dynein interactions.

Figure 4. Snapshots of the leech and surrounding fluid markers at the same phase in its undulation during successive temporal periods. The actual organism is mostly obscured in the first panel by the fluid markers placed around it.
Our model for an individual cilium or flagellum incorporates discrete representations of the dynein arms, passive elastic structures of the axoneme including the microtubules and nexin links, and the surrounding fluid. This model couples the internal force generation of the molecular motors through the passive elastic structure with external fluid mechanics. Detailed geometric information may be tracked in this computational model, such as the spacing and shear between the microtubules, the local curvature of individual microtubules, and the stretching of the nexin links. In addition, the explicit representation of the dynein motors gives us the flexibility to incorporate a variety of activation theories. The ciliary beat or flagellar waveform is not preset; rather, it is an emergent property of the interacting components of the coupled fluid-axoneme system.
In other articles,¹¹,¹² we present a model of a simplified axoneme consisting of two microtubules, with dynein motors being dynamic, diagonal elastic links between the two microtubules. To achieve beating in the simplified two-microtubule model, we allow two sets of dyneins to act between the microtubules: one set is permanently attached to fixed nodes on the left microtubule, the other to fixed nodes on the right. Contraction of the dynein generates sliding between the two microtubules; in either configuration, one end of a dynein can attach, detach, and reattach to attachment sites on the microtubule. As the microtubules slide, a dynein link's endpoint can jump, or ratchet, from one node of the microtubule to another.
We model each microtubule as a pair of filaments with diagonal cross-links. The diagonal cross-links' elastic properties govern the resistance to microtubule bending. Linear elastic springs representing the nexin and/or radial links of the axoneme interconnect adjacent pairs of microtubules. In the case of ciliary beating, the axoneme is tethered to fixed points in space via strong elastic springs at the base. The entire structure is embedded in a viscous incompressible fluid.
Figure 6 shows a cilium during the power stroke (note the two microtubules) and a ciliary waveform showing a single filament at equally spaced time intervals. This waveform was not preset; it resulted from the actions of individual dynein motors. In particular, the cilium's local curvature determined the activation cycle of each dynein motor along the cilium. Figure 7 shows the swimming of a model sperm cell whose waveform is also the result of a curvature control model.

Figure 5. Snapshots of the leech and surrounding fluid markers. From this perspective, the wave is moving back over the body, and the swimming progression is toward the viewer. Note the complex 3D fluid mixing depicted by the evolution of the fluid markers.

44 COMPUTING IN SCIENCE & ENGINEERING

The beating cilium does indeed result in a net displacement of fluid in the direction of the power stroke, and the sperm cell does indeed swim in the direction opposite that of the wave. We have shown elsewhere12 that making different assumptions about the internal dynein activation mechanisms results in different swimming behavior. In particular, when we altered the curvature control model to change the effective time scale of dynein kinetics, the time of a single beat changed significantly, along with the entire waveform of the flagellum.
Combining computational fluid dynamics with biological modeling provides a powerful means for studying the internal force-generation mechanisms of a swimming organism. The integrative approach presented here lets us use computer simulations to examine theories of physiological processes such as dynein activation in a beating cilium and muscle dynamics in invertebrates. The success of these models depends on both the continued development of robust and accurate numerical methods and the interdisciplinary collaboration of computational scientists and biologists. We expect that this work will have an impact on understanding biomedical systems such as sperm motility in the reproductive tract and mucus-ciliary transport in both healthy and diseased respiratory tracts, as well as the complex coupling of electrophysiology, muscle mechanics, and fluid dynamics in aquatic animal locomotion.

Figure 6. Cilium. (a) A two-microtubule cilium nearing the end of its power stroke. Asterisks denote fluid markers, which we initially placed directly above the base of the cilium in a rectangular array. The displacement to the right is the result of the net fluid flow induced by the beating cilium. (b) A ciliary waveform showing a single filament at equally spaced time intervals.

Figure 7. A sequence of a two-microtubule sperm cell swimming upwards as a wave passes from base to tip. The red (blue) color indicates that the right (left) family of dyneins is activated at that position of the flagellum. Asterisks denote fluid markers.

MAY/JUNE 2004 45
References
1. C.S. Peskin, "The Immersed Boundary Method," Acta Numerica, vol. 11, 2002, pp. 479–517.
2. S. Childress, Mechanics of Swimming and Flying, Cambridge Univ. Press, 1981.
3. J.L. Lighthill, Mathematical Biofluiddynamics, SIAM Press, 1975.
4. R. Cortez, "The Method of Regularized Stokeslets," SIAM J. Scientific Computing, vol. 23, no. 4, 2001, pp. 1204–1225.
5. J. Gray and H.W. Lissmann, "The Locomotion of Nematodes," J. Experimental Biology, vol. 41, 1964, pp. 135–154.
6. C.E. Jordan, "Scale Effects in the Kinematics and Dynamics of Swimming Leeches," Canadian J. Zoology, vol. 76, 1998, pp. 1869–1877.
7. M. Murase, The Dynamics of Cellular Motility, John Wiley & Sons, 1992.
8. G.B. Witman, "Introduction to Cilia and Flagella," Ciliary and Flagellar Membranes, R.A. Bloodgood, ed., Plenum, 1990, pp. 1–30.
9. J. Gray and G. Hancock, "The Propulsion of Sea-Urchin Spermatozoa," J. Experimental Biology, vol. 32, 1955, pp. 802–814.
10. S. Gueron and K. Levit-Gurevich, "Computation of the Internal Forces in Cilia: Application to Ciliary Motion, the Effects of Viscosity, and Cilia Interactions," Biophysical J., vol. 74, 1998, pp. 1658–1676.
11. R. Dillon and L.J. Fauci, "An Integrative Model of Internal Axoneme Mechanics and External Fluid Dynamics in Ciliary Beating," J. Theoretical Biology, vol. 207, 2000, pp. 415–430.
12. R. Dillon, L.J. Fauci, and C. Omoto, "Mathematical Modeling of Axoneme Mechanics and Fluid Dynamics in Ciliary and Sperm Motility," Dynamics of Continuous, Discrete and Impulsive Systems, vol. 10, no. 5, 2003, pp. 745–757.
Ricardo Cortez is an associate professor of mathematics at Tulane University and associate director of the Center for Computational Science at Tulane and Xavier Universities. His research interests include numerical analysis, scientific computing, and mathematical biology. He has a PhD in applied mathematics from the University of California, Berkeley. Contact him at rcortez@tulane.edu.

Nathaniel Cowen is a PhD candidate in mathematics at the Courant Institute of Mathematical Sciences. His research interests include computational biofluid dynamics, which involves mathematical modeling of biological systems (including both swimming organisms and internal physiological flows), computational fluid dynamics, and parallel computing. He is a member of the Society for Industrial and Applied Mathematics. Contact him at cowen@cims.nyu.edu.

Robert Dillon is an associate professor of mathematics at Washington State University. His research interests include mathematical modeling of tumor growth, limb development, and flagellar and ciliary motility. He has a PhD in mathematics from the University of Utah. He is a member of the Society for Mathematical Biology, the Society for Industrial and Applied Mathematics, and the American Mathematical Society. Contact him at dillon@math.wsu.edu.

Lisa Fauci is a professor of mathematics at Tulane University and an associate director of the Center for Computational Science at Tulane and Xavier Universities. Her research interests include scientific computing and mathematical biology. She received a PhD in mathematics from the Courant Institute of Mathematical Sciences in 1986. She is a member of the Council of the Society for Industrial and Applied Mathematics. Contact her at fauci@tulane.edu.
On a geological time scale, science must consider the impacts of asteroids and comets with Earth a relatively frequent occurrence, causing significant disturbances to biological communities and strongly perturbing evolution's course.1 Most famous among known catastrophic impacts, of course, is the one that ended the Cretaceous period and the dominance of the dinosaurs: what researchers now believe was the shallow-water impact event at the Chicxulub site in Mexico's Yucatan Peninsula. (See the "Chicxulub Site Impact" sidebar for specifics on this event and its importance.)

In preparation for a definitive simulation of a large event like Chicxulub, we developed a program for modeling smaller impacts, beginning with impacts in the deep ocean, where the physics is somewhat simpler. Smaller impacts happen more frequently than dinosaur-killer events.2,3 Besides seafloor cratering, these events give rise to tsunamis4 that leave traces many kilometers inland from a coast facing the impact point.
In this article, we report on a series of simulations of asteroid impacts we performed using the SAGE code from Los Alamos National Laboratory (LANL) and Science Applications International Corporation (SAIC), developed under the US Department of Energy's Accelerated Strategic Computing Initiative (ASCI). With our ocean-impact simulations, we estimate impact-generated tsunami events as a function of the size and energy of the projectile, partly to aid further studies of potential threats from modest-sized Earth-crossing asteroids.

We also present a preliminary report on a simulation of the impact that created the Chicxulub crater in Mexico's Yucatan Peninsula. This is a rich test because of the stratigraphy's complexity at Chicxulub, which involves rocks such as calcite and anhydrite that are highly volatile at the pressures reached during impact. (The Chicxulub strata's volatility is what made this event so dangerous to the megafauna of the late Cretaceous.) To model this volatility's effects and to better understand what happened, we must use good equations of state and constitutive models for these materials. We report on progress in developing better constitutive models for the geological materials involved in this impact and in cratering processes in general.
TWO- AND THREE-DIMENSIONAL ASTEROID-IMPACT SIMULATIONS

GALEN R. GISLER, ROBERT P. WEAVER, AND CHARLES L. MADER
Los Alamos National Laboratory
MICHAEL L. GITTINGS
Science Applications International

1521-9615/04/$20.00 © 2004 IEEE
Copublished by the IEEE CS and the AIP

FRONTIERS OF SIMULATION

Performing a series of simulations of asteroid impacts using the SAGE code, the authors attempt to estimate the effects of tsunamis and other important environmental events.
SAGE Code
The SAGE hydrocode is a multimaterial adaptive-grid Eulerian code with a high-resolution Godunov scheme, originally developed by Michael Gittings for SAIC and LANL. It uses continuous adaptive mesh refinement (CAMR), meaning that the decision to refine the grid is made cell by cell and cycle by cycle, continuously throughout the problem run. Refinement occurs when gradients in physical properties (density, pressure, temperature, and material constitution) exceed user-defined limits, down to a minimum cell size the user specifies for each material in the problem. With the computing power concentrated on the regions of the problem that require higher resolution, we can simulate very large computational volumes and substantial differences in scale at low cost.
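The cell-by-cell refinement decision can be sketched as a simple gradient test. This is a hypothetical illustration of a gradient-triggered criterion, not SAGE's actual implementation; the function name and thresholds are invented.

```python
# Hypothetical sketch of a CAMR-style refinement test: a cell is flagged for
# refinement when the relative jump in a physical property against any
# neighbor exceeds a user-defined limit, unless the cell is already at the
# minimum size allowed for its material. Names and thresholds are invented.

def needs_refinement(cell_value, neighbor_values, limit, cell_size, min_size):
    """Return True if the cell should be split into daughter cells."""
    if cell_size <= min_size:          # already at the finest level allowed
        return False
    for v in neighbor_values:
        jump = abs(v - cell_value) / max(abs(cell_value), 1e-30)
        if jump > limit:               # steep gradient: refine here
            return True
    return False

# A cell sitting on a shock front (10x density jump) gets refined;
# a cell in smooth flow does not.
print(needs_refinement(1.0, [10.0, 1.0], limit=0.5, cell_size=4.0, min_size=1.0))
print(needs_refinement(1.0, [1.01, 0.99], limit=0.5, cell_size=4.0, min_size=1.0))
```

In a real CAMR code, the same test would run every cycle over every cell, with separate limits per physical property and per-material minimum cell sizes.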
We can run SAGE in several modes of geometry and dimensionality: explicitly 1D Cartesian and spherical, 2D Cartesian and cylindrical, and 3D Cartesian. The RAGE code is similar to SAGE but incorporates a separate module for implicit, gray, nonequilibrium radiation diffusion. Both codes are part of LANL's Crestone project, in turn part of the Department of Energy's ASCI program.
Because scientists commonly do modern supercomputing on machines or machine clusters containing many identical processors, the code's parallel implementation is supremely important. For portability and scalability, SAGE uses the widely available Message Passing Interface (MPI). It accomplishes load leveling using an adaptive cell-pointer list, in which newly created daughter cells are placed immediately after the mother cells. Cells are redistributed among processors at every time step, while keeping mothers and daughters together. If there are M cells and N processors, this technique gives nearly M/N cells per processor. Where neighbor-cell variables are necessary, MPI gather and scatter routines copy those neighbor variables into local scratch space.
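The adaptive cell-pointer list and the M/N split can be sketched as follows. This is an invented illustration of the idea, not SAGE's code: daughters are inserted directly after their mother in one global ordered list, and the list is cut into nearly equal contiguous chunks at each step, so that adjacent cells (and hence most mother-daughter groups) tend to land on the same processor.

```python
# Hypothetical sketch of list-based load leveling: one ordered cell list,
# daughters inserted right after their mother, contiguous chunks of about
# M/N cells handed to each of N processors. Names are invented.

def refine(cells, mother_index, n_daughters):
    """Insert daughter cells immediately after their mother in the list."""
    daughters = [f"{cells[mother_index]}.d{i}" for i in range(n_daughters)]
    return cells[:mother_index + 1] + daughters + cells[mother_index + 1:]

def distribute(cells, n_procs):
    """Split the ordered cell list into contiguous, nearly equal chunks."""
    m, rem = divmod(len(cells), n_procs)
    chunks, start = [], 0
    for p in range(n_procs):
        size = m + (1 if p < rem else 0)   # first `rem` chunks get one extra
        chunks.append(cells[start:start + size])
        start += size
    return chunks

cells = refine(["c0", "c1", "c2", "c3"], mother_index=1, n_daughters=2)
# cells is now ["c0", "c1", "c1.d0", "c1.d1", "c2", "c3"]
chunks = distribute(cells, 3)  # three processors get 2 cells each
```

Because the chunks are contiguous runs of the same list, redistribution after refinement is cheap: only the chunk boundaries move.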
In a multimaterial code like SAGE, every cell in the computational volume can contain all the materials defined in the problem, each with its own equation of state (and strength model, as appropriate). A number of equations of state are available, both analytical and tabular. In our impact problems, we use the LANL Sesame tables for air, basalt, calcite, granite, iron, and garnet (as a rather stiff analog to mantle material); for water, we use a somewhat more sophisticated table (including a good treatment of the vapor dome) from SAIC. When we judged strength to be important, we used a simple elastic-plastic model with pressure hardening (with depth) for the crustal material: basalt for the water impacts; calcite and granite for the K/T impact, that is, the impact at the Cretaceous-Tertiary (K/T) boundary.

Chicxulub Site Impact

Scientists now widely accept that the worldwide sequence of mass extinctions at the Cretaceous-Tertiary (K/T) boundary 65 million years ago was directly caused by the collision of an asteroid or comet with Earth.1,2 Evidence for this includes the large (200-km diameter) buried impact structure at Chicxulub in Mexico's Yucatan Peninsula, the worldwide iridium-enriched layer at the K/T boundary, and the tsunamic deposits well inland in North America, all dated to the same epoch as the extinction event.

Consensus is building that the K/T impactor was a bolide of diameter roughly 10 km; its impact was oblique (not vertical), either from the southeast at 30 degrees to the horizontal or from the southwest at 60 degrees; its encounter with layers of water, anhydrite, gypsum, and calcium carbonate (all highly volatile materials at the pressures of impact) lofted many hundreds of cubic kilometers of these materials into the stratosphere. These materials then resided there for many years and produced a global climate deterioration that was fatal to many large-animal species on Earth. All these points are still under discussion, however, and researchers still need to address several scientific questions:

- How is the energy of impact (in the realm of hundreds of teratons TNT equivalent) partitioned among the vaporization of volatiles, the lofting of other materials, the generation of tsunamis, and the cratering of the substrate? How is this partition of energy reflected in the observables detectable after 65 million years?
- What is the projectile's fate?
- What is the distribution of proximal and distal ejecta around the impact site?
- How do these questions depend on the problem's unknown parameters, namely, bolide mass, diameter, velocity, and impact angle?

References
1. J.V. Morgan et al., "Peak-Ring Formation in Large Impact Craters: Geophysical Constraints from Chicxulub," Earth and Planetary Science Letters, vol. 183, 2000, pp. 347–354.
2. E. Pierazzo, D.A. Kring, and H.J. Melosh, "Hydrocode Simulation of the Chicxulub Impact Event and the Production of Climatically Active Gases," J. Geophysical Research, vol. 103, 1998, pp. 28607–28625.
The boundary conditions we use in these calculations allow unhindered outflow of waves and material. We accomplish this by using freeze regions around the computational box's edges, which are updated normally during the hydrodynamic step and then quietly restored to their initial values of pressure, density, internal energy, and material properties before the next step. This technique has proven to be extremely effective at minimizing the deleterious effect of artificial reflections.
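The freeze-region treatment can be illustrated in one dimension. The update rule below is a trivial stand-in for the hydrodynamics, and the freeze-region width is an invented value; only the update-then-restore pattern reflects the technique described above.

```python
import numpy as np

# Hypothetical 1D sketch of freeze-region boundaries: boundary cells
# participate normally in the update, then are quietly restored to their
# initial state before the next step, so waves reaching the edge cannot
# reflect back. The "hydro" step here is a trivial smoothing stand-in.

N_FREEZE = 4  # number of frozen cells at each edge (invented value)

def step_with_freeze(state, initial_state, update):
    """Apply one update over the whole grid, then restore the freeze regions."""
    new_state = update(state)
    new_state[:N_FREEZE] = initial_state[:N_FREEZE]
    new_state[-N_FREEZE:] = initial_state[-N_FREEZE:]
    return new_state

initial = np.ones(32)
state = initial.copy()
state[16] = 10.0  # a disturbance in the interior

smooth = lambda u: 0.5 * (np.roll(u, 1) + np.roll(u, -1))  # stand-in "hydro" step
for _ in range(20):
    state = step_with_freeze(state, initial, smooth)
# The edges stay pinned at their initial values for the whole run, no matter
# what the disturbance does in the interior.
```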
By far the best technique for dealing with unwanted boundary effects is to put the boundaries far away from the regions of interest, or to place the boundary beyond a material interface that truly exists in the problem and might be expected to interact with waves appropriately, that is, through reflection, transmission, and absorption. In the ocean-impact simulations, the most important physical boundary is of course the seafloor, which partly reflects and partly transmits the waves that strike it. The crust-mantle interface provides further impedance to waves that propagate toward the computational box's bottom boundary. For land (or continental shelf) impact simulations, the sediment-crust and crust-mantle interfaces play similar roles. With these material interfaces and our freeze-region boundary conditions, reflections from the computational boundaries are insignificant.
3D Water-Impact Simulations
We performed 3D simulations of a 1-km-diameter iron asteroid impacting the ocean at 45- and 30-degree angles at 20 km/s on the ASCI White machine at LLNL, using 1,200 processors for several weeks. We used up to 200 million computational cells, and the total computational time was 1,300,000 CPU hours. The computational volume was a rectangular box 200 km long in the direction of the asteroid trajectory, 100 km wide, and 60 km tall. We divided the vertical extent into 42 km of atmosphere, 5 km of ocean water, 7 km of basalt crust, and 6 km of mantle material. Using bilateral symmetry, we simulated a halfspace only, the boundary of the halfspace being the vertical plane containing the impact trajectory.
Figure 1. Montage of 10 separate images (t = 0.5 to 101.0 seconds) from the 3D run of the impact of a 1-km-diameter iron bolide at an angle of 45 degrees with an ocean 5 km deep; the panels are annotated with the asteroid's initial position at 30-km altitude, its 20-km/s velocity, and the layering of atmosphere, ocean water, basalt crust, and mantle. These are density raster graphics in a 2D slice in the vertical plane containing the asteroid trajectory. Note the initial uprange-downrange asymmetry and its disappearance in time. The maximum transient crater diameter of 25 km is achieved at about 35 seconds. The maximum crown height reaches 30 km, and the jet seen forming in the last frame eventually approaches 60 km.
Figure 2. Perspective plot of an isosurface of the pressure gradient at a time five seconds after the beginning of a 3D run of the impact of a 1-km-diameter iron bolide at an angle of 30 degrees with an ocean 5 km deep. The pressure-gradient isosurface is colored by the value of pressure, on a logarithmic color bar running from 0.10 to 10,000 bar, with a palette chosen to highlight interfaces between mantle and basalt as well as basalt and water in the target. The isosurface shows both the atmospheric shock accompanying the incoming trajectory of the projectile (right) and the explosively driven downrange shock (left) that carries the horizontal component of the projectile's momentum. Also visible are seismic waves generated in the mantle and crust and the expanding transient crater in the water.
The asteroid starts at a point 30 km above the water's surface (see Figure 1). The atmosphere we used in this simulation is a standard exponential atmosphere with a scale height of 10 km, so the medium surrounding the bolide is tenuous (with a density of approximately 1.5 percent of sea-level density) when the calculation begins. During the 2.1 seconds of the bolide's atmospheric passage at approximately Mach 60, a strong shock develops (see Figure 2), heating the air to temperatures upwards of 1 eV (1.2 × 10^4 K). Less than 1 percent of the bolide's kinetic energy (roughly 200 gigatons high-explosive equivalent yield) is dissipated in the atmospheric passage.
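The quoted passage time and Mach number follow from simple geometry. The check below is our own back-of-the-envelope arithmetic (with an assumed sea-level sound speed of about 340 m/s), not output from the simulation.

```python
import math

# Back-of-the-envelope check of the figures quoted above: an asteroid
# entering at 20 km/s on a 45-degree trajectory from 30-km altitude.
# The 340 m/s sound speed is our assumption for sea-level air.

v = 20_000.0          # impact speed, m/s
h = 30_000.0          # starting altitude, m
angle = math.radians(45)

slant_path = h / math.sin(angle)   # distance traveled through the air
passage_time = slant_path / v      # duration of atmospheric passage
mach = v / 340.0                   # rough Mach number at sea level

print(f"{passage_time:.1f} s")     # ~2.1 s, matching the quoted passage time
print(f"Mach {mach:.0f}")          # ~Mach 59, consistent with "approximately Mach 60"
```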
The water is much more effective at slowing the asteroid; essentially all its kinetic energy is absorbed by the ocean and seafloor within 0.7 seconds. The water immediately surrounding the trajectory vaporizes, and the rapid expansion of the resulting vapor cloud excavates a cavity in the water that eventually expands to a diameter of 25 km. This initial cavity is asymmetric because of the asteroid's inclined trajectory, and the splash, or crown, is markedly higher on the downrange side (see Figures 1 and 3). The crown's maximum height is nearly 30 km at 70 seconds after impact. The collapse of the crown's bulk makes a rim wave, or precursor tsunami, that propagates outward, somewhat higher on the downrange side (see Figures 1 and 4). The crown's higher portion breaks up into fragments that fall back into the water, giving this precursor tsunami an uneven and asymmetric profile.
The rapid conversion of the asteroid's kinetic energy into thermal energy produces a rapid expansion in the volume occupied by the newly vaporized water and bolide material. This is much like an explosion and acts to symmetrize the subsequent development. Shocks propagate outward from the cavity in the water, in the basalt crust, and in the mantle beneath (Figure 2). Subsequent shocks are generated as the cavity refills and by cavitation events that occur in the turbulence that accompanies the development of the large-amplitude waves. The shocks are partly reflected and partly transmitted by the material interfaces, and the interactions of these shocks with each other and with the waves make the dynamics complicated.

The hot vapor from the initial cavity expands into the atmosphere, mainly in the downrange direction because of the horizontal component of the asteroid's momentum (Figure 2). When the vapor's pressure in the cavity has diminished sufficiently, at about 35 seconds after the impact, water begins to fill the cavity from the bottom, driven by gravity. This filling has a high degree of symmetry because of the uniform gravity responsible for the water pressure. An asymmetric fill could result from nonuniform seafloor topography, but we do not consider that here. The filling water converges on the cavity's center, and the implosion produces another series of shock waves and a jet that rises vertically in the atmosphere to a height in excess of 20 km at 150 seconds after impact. The collapse of this central vertical jet produces the principal tsunami
Figure 3. Perspective plot of three isosurfaces of the density (ρ = 0.075, 0.50, and 1.50 g/cm³) from the 3D run of a 45-degree impact of a 1-km-diameter iron bolide into an ocean 5 km deep, 30 seconds after the beginning of the calculation (27.5 seconds after impact). We chose the isosurfaces to show the basalt underlayment, the ocean water's bulk, and the cells containing water spray (mixed air and water). The crown splash's asymmetry is evident, as is its instability to fragmentation. Cratering in the basalt is seen, to a depth of approximately 1 km. The transient cavity's diameter is at this time approximately 25 km.
Figure 4. Perspective plot of three isosurfaces of the density (ρ = 0.075, 0.50, and 1.50 g/cm³) from the 3D run of a 45-degree impact of a 1-km-diameter iron bolide into an ocean 5 km deep, 115 seconds after impact. The transient cavity has collapsed under the surrounding water's pressure to form a central jet, and the crown splash has collapsed almost completely, pockmarking the water's surface and generating the first precursor wave.
wave (see Figure 5). This wave has an initial height of 1.5 km and a propagation velocity of 170 meters per second (m/s).

We follow this wave's evolution in three dimensions for 400 seconds after impact and find that the inclined impact eventually produces a tsunami that is nearly circularly symmetric at late times (see Figure 6). The tsunami declines to a height (defined as a positive vertical excursion above the initial water surface) of 100 meters at a distance of 40 km from the initial impact, and its propagation speed continues at roughly 170 m/s.
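For context, the 170 m/s propagation speed can be compared against the shallow-water gravity-wave limit for a 5-km-deep ocean. This comparison is our own arithmetic, not a figure from the article.

```python
import math

# Our own rough comparison: the shallow-water gravity-wave speed sqrt(g*h)
# for a 5-km-deep ocean, against the ~170 m/s reported for the principal wave.

g = 9.81          # gravitational acceleration, m/s^2
depth = 5000.0    # ocean depth, m

c_shallow = math.sqrt(g * depth)   # long-wavelength (shallow-water) limit
print(f"{c_shallow:.0f} m/s")      # ~221 m/s
```

The simulated wave's ~170 m/s sits below this limit, consistent with the authors' later remark that the waves have not yet evolved into classic shallow-water waves.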
2D Water-Impact Simulations
Because of the high degree of symmetry achieved late in the 3D calculations, we can learn much about the physics of impact events by performing 2D simulations. These are, of course, much cheaper than full 3D calculations, so we can undertake parameter studies to isolate the phenomena's dependence on the impactor's properties.

We have therefore performed a series of supporting calculations in two dimensions (cylindrical symmetry) for asteroids impacting the ocean vertically at 20 km/s, using the ASCI Blue Mountain machines at LANL. We took the asteroid's composition to be either dunite (3.32 grams per cubic centimeter [g/cc]), as a mockup for typical stony asteroids, or iron (7.81 g/cc), as a mockup for nickel-iron asteroids. For these projectiles, instead of the Sesame tables, we used the simpler analytical Mie-Grüneisen equation of state to avoid time-step difficulties during the atmospheric passage. The strength model used for the crust and asteroid is the same in all cases, namely, an elastic-plastic model with shear moduli and yield stress similar to experimental values for aluminum. For the known increase of strength with depth, we use a linear pressure-hardening relationship.
We designed these simulations to follow an asteroid's passage through the atmosphere, its impact with the ocean, the cavity generation and subsequent recollapse, and the generation of tsunamis. The parameter study included six different asteroid masses: we used stony and iron bodies of diameters 250 meters, 500 meters, and 1,000 meters, all at speeds of 20 km/s. The impacts' kinetic energies ranged from 1 gigaton to 200 gigatons (high-explosive equivalent yield).
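The quoted kinetic-energy range follows directly from the bolide sizes and densities. The arithmetic below is our own check of the endpoints (using the standard 4.184 × 10^9 J per ton of TNT), and it reproduces the extreme entries of Table 1.

```python
import math

# Reproducing the kinetic-energy arithmetic for the parameter study: a
# spherical bolide's mass is density * (4/3)*pi*r^3, and its kinetic energy
# (1/2)*m*v^2 is converted to tons of TNT equivalent at 4.184e9 J per ton.

J_PER_TON_TNT = 4.184e9

def kinetic_energy_gigatons(diameter_m, density_kg_m3, speed_m_s):
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius**3
    energy_j = 0.5 * mass * speed_m_s**2
    return energy_j / J_PER_TON_TNT / 1e9   # gigatons

# The largest case in Table 1: a 1,000-m iron (7,810 kg/m^3) bolide at 20 km/s.
print(f"{kinetic_energy_gigatons(1000, 7810, 20_000):.0f} GT")  # ~195 GT
# The smallest: a 250-m dunite (3,320 kg/m^3) bolide at 20 km/s.
print(f"{kinetic_energy_gigatons(250, 3320, 20_000):.1f} GT")   # ~1.3 GT
```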
Table 1 gives a tabular summary of our parameter study and lists the bolides' input characteristics (composition, diameter, density, mass, velocity, and kinetic energy) and the impacts' measured characteristics (maximum depth and diameter of the transient cavity, quantity of water displaced, time of maximum cavity, maximum jet and jet rebound, tsunami wavelength, and tsunami velocity).

Figure 5. Similar to Figure 4, but 150 seconds after impact. The central jet has now collapsed, and both the pockmarked precursor wave and the somewhat smoother principal wave are evident. The latter wave is ~1.5 km in initial amplitude and moves with a speed of ~175 m/s.

Figure 6. Overhead plots at a late time showing wave height as a function of distance along the trajectory (horizontal) and perpendicular to the trajectory (units of centimeters). The asteroid entered from the right. At 270 seconds, (a) the irregular precursor wave has declined to a few meters in height and strongly bears the asymmetry of the crown splash, while the much more regular principal wave, at an amplitude significantly greater than 100 meters, is much more symmetrical. The wavelength, measured as the crest-to-crest distance from precursor to principal wave, is 34 km. At 385 seconds, (b) the precursor wave has left the box, and the principal wave has a mild quadrupole asymmetry with the maximum wave height roughly 100 meters, at a distance of 40 km from the impact point.
The amount of water displaced during cavity formation is found to scale nearly linearly with the asteroid's kinetic energy, as Figure 7 illustrates. A fraction of this displaced mass (ranging from 5 percent for the smaller impacts to 7 percent for the largest ones) is vaporized during the encounter's explosive phase, while the rest is pushed aside by the vapor's pressure to form the transient cavity's crown and rim.

Figure 7 indicates that the linear scaling with kinetic energy differs from the scaling predicted by Keith Holsapple.5 Holsapple, using dimensional analysis informed by experimental results over many decades in scaled parameters, found that the ratio of the displaced mass to the projectile mass scales as the Froude number, u²/ga, to the two-thirds power, where u is the projectile velocity, g is the acceleration due to gravity, and a is the projectile radius. The difference between the Holsapple scaling and our results is most likely due to the effect of vaporization, which the dimensional analysis does not include. We also note that our two projectile compositions differ from each other by a factor greater than two in density, and this is also omitted in the dimensional analysis. We have begun a new series of 27 runs to investigate the scaling issue further. These runs are similar to the six runs we report here, but they also include bolides of ice and velocities of 10 and 15 km/s.
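The difference between the two scalings can be made concrete with a little algebra. At fixed impact speed u, linear-in-energy scaling gives displaced mass M proportional to a³, while the Holsapple relation M/m ~ (u²/ga)^(2/3) with m ~ a³ gives M ~ a^(7/3). The sketch below is our own illustration of that gap, not the authors' analysis.

```python
# Illustrative comparison of the two scalings at fixed impact speed u.
# Linear scaling: displaced mass M ~ kinetic energy ~ a^3.
# Holsapple: M/m ~ (u^2/(g*a))^(2/3) with m ~ a^3, so M ~ a^(7/3).

g = 9.81           # gravitational acceleration, m/s^2
u = 20_000.0       # impact speed, m/s

def froude(a):
    return u * u / (g * a)        # Froude number for projectile radius a

def holsapple_ratio(a):
    return froude(a) ** (2.0 / 3.0)   # displaced mass / projectile mass

# Doubling the radius multiplies projectile mass (and kinetic energy) by 8,
# but multiplies the Holsapple displaced mass by only 8 * 2^(-2/3) ~ 5.04:
growth = (holsapple_ratio(500.0) * 500.0**3) / (holsapple_ratio(250.0) * 250.0**3)
print(f"{growth:.2f}")   # ~5.04, versus 8.00 for strictly linear-in-energy scaling
```

So over the factor-of-64 energy range of the parameter study, the two predictions diverge substantially, which is the discrepancy Figure 7 displays.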
We used Lagrangian tracer particles to measure the amplitude, velocity, and wavelength of the waves produced by these impacts. These measures are somewhat uncertain because the wave trains are highly complex and the motions are turbulent. There are multiple shock reflections and refractions at the water-crust and water-air interfaces, as well as cavitation events. For the larger impacts, the tracer particles execute highly complex motions, while for the smaller impacts, the motions are superpositions of approximately closed elliptical orbits. In all cases, we measure wave amplitudes by taking half the difference of adjacent maxima and minima in the vertical excursions executed by the tracer particles, and we measure wave speeds by plotting the radial positions of these maxima and minima as a function of time.
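The tracer-based measurement just described can be sketched on synthetic data. The decaying oscillation and the crest positions below are invented stand-ins, not simulation output; only the extraction procedure mirrors the description above.

```python
import numpy as np

# Hypothetical sketch of the tracer-based measurement: amplitude is half the
# difference between adjacent extrema of a tracer's vertical excursion, and
# wave speed is the slope of a crest's radial position versus time.

def amplitudes_from_extrema(z):
    """Half the difference of adjacent maxima and minima in a time series."""
    interior = z[1:-1]
    is_max = (interior > z[:-2]) & (interior > z[2:])
    is_min = (interior < z[:-2]) & (interior < z[2:])
    extrema = interior[is_max | is_min]
    return 0.5 * np.abs(np.diff(extrema))

# Synthetic tracer record: a decaying oscillation about the rest surface.
t = np.linspace(0.0, 10.0, 2001)
z = 100.0 * np.exp(-0.1 * t) * np.cos(2.0 * np.pi * t)

amps = amplitudes_from_extrema(z)   # successive amplitudes shrink as the wave decays

# Invented crest radii of 40, 45.1, and 50.2 km at t = 30, 60, 90 s:
speed = np.polyfit([30.0, 60.0, 90.0], [40e3, 45.1e3, 50.2e3], 1)[0]
print(f"{speed:.0f} m/s")   # the fitted slope, ~170 m/s for these numbers
```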
With these warnings, we find the tsunami am
plitude to evolve in a complex manner, eventually
decaying faster than , where r is the distance
of propagation from the impact point (see Figure
1 r
Table 1. Summary of parameterstudy runs.
Asteroid material Dunite Iron Dunite Iron Dunite Iron
Asteroid diameter 250 m 250 m 500 m 500 m 1,000 m 1,000 m
Asteroid density 3.32 g/cc 7.81 g/cc 3.32 g/cc 7.81 g/cc 3.32 g/cc 7.81 g/cc
Asteroid mass 2.72e13 g 6.39e13 g 2.17e14 g 5.11e14 g 1.74e15 g 4.09e15 g
Asteroid velocity 20 km/s 20 km/s 20 km/s 20 km/s 20 km/s 20 km/s
Kinetic energy 1.3 GT 3 GT 10 GT 24 GT 83 GT 195 GT
Maximum cavity diameter 4.4 km 5.2 km 10.0 km 12.6 km 18.6 km 25.2 km
Maximum cavity depth 2.9 km 4.3 km 4.5 km 5.7 km 6.6 km 9.7 km
Observed displacement 4.41e16 g 9.13e16 g 3.53e17 g 7.11e17 g 1.79e18 g 4.84e18 g
Time of maximum cavity 13.5 s 16.0 s 22.5 s 28.0 s 28.5 s 33.0 s
Time of maximum jet 54.5 s 65.0 s 96.5 s 111 s 128.5 s 142 s
Time of rebound 100.5 s 118.5 s 137.5 s 162 s 187.5 s 218.5 s
Tsunami wavelength 9 km 12 km 17 km 20 km 23 km 27 km
Tsunami velocity 120 m/s 140 m/s 150 m/s 160 m/s 170 m/s 175 m/s
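As a quick consistency check on Table 1, the kinetic-energy row follows directly from the mass and velocity rows. This sketch is our own addition; the TNT conversion factor (1 gigaton TNT = 4.184e18 J) is the standard value, not a number from the article.

```python
# Recompute the kinetic-energy row of Table 1 from the tabulated
# asteroid masses (grams) and the common impact velocity of 20 km/s.
GT_TNT_J = 4.184e18  # joules per gigaton TNT (standard conversion)

masses_g = {
    "Dunite 250 m": 2.72e13,
    "Iron 250 m":   6.39e13,
    "Dunite 500 m": 2.17e14,
    "Iron 500 m":   5.11e14,
    "Dunite 1 km":  1.74e15,
    "Iron 1 km":    4.09e15,
}
velocity_m_s = 20e3  # all parameter-study runs use 20 km/s

def kinetic_energy_GT(mass_g):
    """Kinetic energy in gigatons TNT for a mass in grams at 20 km/s."""
    return 0.5 * (mass_g / 1e3) * velocity_m_s**2 / GT_TNT_J

for name, mass in masses_g.items():
    print(f"{name}: {kinetic_energy_GT(mass):.1f} GT")
```

The computed values (1.3, 3.1, 10.4, 24.4, 83.2, and 195.5 GT) match the table's kinetic-energy row to its stated precision.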
[Figure 7 plots mass of water displaced (grams) against asteroid kinetic energy (ergs) on logarithmic axes.]
Figure 7. The mass of water displaced in the initial cavity formation scales with the asteroid's kinetic energy. The squares are the results from the parameter-study simulations, as Table 1 tabulates, and the solid line illustrates direct proportionality. About 5 to 7 percent of this mass is vaporized in the initial encounter. The circles are predictions of the crater scaling formula from Keith Holsapple.⁵
52 COMPUTING IN SCIENCE & ENGINEERING
8). We found the steepest declines for the smaller projectiles (as expected from linear theory⁴), and we have greater confidence in the amplitudes measured for these than in the amplitudes measured for the larger projectiles because of the more complex motions executed by the tracer particles in the large-projectile simulations. Geometrical effects account for a pure 1/r decline, and the remainder of the decline is due partly to wave dispersion and partly to dissipation via turbulence. Realistic seafloor topography will also influence the waves' development, of course. We also remark that our first measured amplitude points are well outside the transient cavity. Tracers from within the cavity execute much larger excursions (indeed, some of them join the jet), and we cannot measure reliable amplitudes from them.
We expect that the tsunami waves will eventually evolve into classic shallow-water waves⁶ because the wavelengths are long compared to the ocean depth. However, the initial wave train's complexity and the wave-breaking associated with the interaction of shocks reflected from the seafloor do not permit the simplifications associated with shallow-water theory. Much previous work on impact-generated tsunamis⁷ has used shallow-water theory, which gives a particularly simple form for the wave velocity, namely v = √(gD), where g is the acceleration due to gravity and D is the water depth. For an ocean 5 km deep, the shallow-water velocity is 221 m/s. In Figure 9, we show the wave-crest positions as a function of time for the simulations in our parameter study, along with constant-velocity lines at 150 and 221 m/s. From this, we see that the wave velocities are substantially lower than the shallow-water limit, although there is some indication of an approach to that limit at late times. This asymptotic approach is only observed for the largest impactors because the waves from the smaller impactors die off too quickly for reliable measurement of the far-field limit in our simulations.
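The 221 m/s figure is a one-line consequence of the standard shallow-water relation v = √(gD). A minimal check of our own (assuming g = 9.8 m/s²):

```python
import math

def shallow_water_speed(depth_m, g=9.8):
    """Shallow-water gravity-wave speed v = sqrt(g*D) in m/s."""
    return math.sqrt(g * depth_m)

# For the 5-km-deep ocean used in the article:
print(round(shallow_water_speed(5000)))  # 221
```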
To illustrate the complications we encountered in our large-projectile runs, we show in Figure 10 a close-up snapshot of density and pressure from the wave train produced by a 1-km iron projectile. This snapshot is taken 300 seconds after impact and about 35 km from the impact point. The wave moves to the right, and the impact point is to the left. The vertical excursion of the bulk water above the original surface is about 1 km at this point. The dense spray above the wave (up to 1 percent water density) extends 3.5 km up into the atmosphere, while the lighter spray goes up more than twice as far. Apparently, the surrender of wave energy to the atmosphere is a significant loss mechanism. The bottom frame shows pressure with a banded palette to highlight differences. Besides the turbulent pressure field in the atmosphere, two significant features are a decaying cavitation event just aft of the main peak/trough system, and a shock propagating backwards from that event and scraping the water–crust interface. A new series of runs we are planning incorporates new diagnostics to better interpret the energy flows.

[Figure 8 plots amplitude (m) against distance from impact (km) on logarithmic axes; the legend lists tracer measurements (tr) and least-squares fits (lsq) for the Dn 250, Fe 250, Dn 500, Fe 500, Dn 1k, and Fe 1k runs, along with a 1/r reference line.]
Figure 8. The tsunami amplitude declines with propagation distance faster than 1/r. The legend identifies the points associated with individual runs, where the notation signifies the asteroid's composition (Dn for dunite and Fe for iron) and diameter in meters. We also show lines indicating least-squares power-law fits, with the power-law indices varying from −2.25 to −1.3.

[Figure 9 plots wave-crest position (km) against time (sec) for the six runs, with constant-velocity reference lines at 150 m/s and 221 m/s (shallow-water theory).]
Figure 9. We plot the tsunami wave-crest positions as a function of time here for the six runs of the parameter study. The notation in the legend is similar to Figure 6, with the solid lines at constant velocity to illustrate that these waves are substantially slower than the shallow-water theory's prediction. There is an indication, however, that the waves may be accelerating toward the shallow-water limit at late times.

MAY/JUNE 2004 53
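A least-squares power-law fit of amplitude against distance amounts to a straight-line fit in log–log space. The sketch below is our own illustration of that procedure on synthetic data; the decay index −1.8 and the noise level are made up for the demonstration, not values from the article.

```python
import math, random

def fit_power_law(r, A):
    """Least-squares fit of A = C * r**p via a line fit in log-log space.
    Returns (C, p)."""
    xs = [math.log(v) for v in r]
    ys = [math.log(v) for v in A]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    p = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    C = math.exp(ybar - p * xbar)
    return C, p

# Synthetic tracer amplitudes decaying as r**-1.8 with mild noise.
random.seed(0)
r = [10 * 1.5 ** k for k in range(10)]           # 10 km outward
A = [500 * ri ** -1.8 * (1 + 0.05 * random.uniform(-1, 1)) for ri in r]
C, p = fit_power_law(r, A)
print(f"fitted power-law index p = {p:.2f}")      # close to -1.8
```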
Preliminary Study of a Major Terrestrial Impact
When the projectile diameter is large compared to the depth of water in the target, the deceleration is accomplished almost entirely by the rock beneath. We therefore need to deal directly with the issues of the instantaneous fluidization of target rock and its subsequent evolution through regimes of viscoplastic flow until freeze-out. Because this is a rather new regime for our code, we decided to begin by examining a well-studied event.⁸ In extending our impact study to larger diameters, we accordingly chose to focus on the shallow-water impact event at the Chicxulub site in Mexico's Yucatán Peninsula, and we anticipate that our early effort on this will not do very well with the final, strength-dependent phases of the crater evolution.
Scientists working with Petróleos Mexicanos (Pemex), the Mexican national oil company, discovered the Chicxulub impact structure.⁹,¹⁰ This discovery established the suggestion that an impact was responsible for the mass extinction at the end of the Cretaceous period, as Luis and Walter Alvarez and their colleagues proposed¹¹ on the basis of the anomaly in abundances of iridium and other platinum-group elements in the boundary bedding plane.
Paleogeographic data suggests that the crater site, which presently straddles the Yucatán coastline, was submerged on the continental shelf at the end of the Cretaceous. The substrate consisted of fossilized coral reefs over continental crust. In our simulation, we therefore constructed a multilayered target consisting of 300 meters of water, 3 km of calcite, 30 km of granite, and 18 km of mantle material. It is likely that the Chicxulub target contained multiple layers of anhydrites and other evaporites as well as calcite, but for simplicity (and because of access to good equations of state), we simplified the structure to calcite above granite. Above this target, we included a standard atmosphere up to 106 km altitude and started the asteroid's plunge at 45 km altitude. We performed 3D simulations with impact angles of 30, 45, and 60 degrees to the horizontal, as well as a 2D vertical-impact simulation. In the horizontal plane, our computational domain extended 256 km by 128 km because we elected to simulate a half-space.
We ran these simulations on the new ASCI Q computer at Los Alamos, a cluster of AlphaServer ES45 boxes from HP/Compaq. Generally, we ran on 1,024 processors at a time and used about 1 million CPU hours over the course of these runs. Our adaptive mesh included up to a third of a billion computational cells.
The simulation illustrates three prominent features for a 45-degree impact. First, the impact produced a "rooster tail" that carries much of the horizontal component of the asteroid's momentum in
Figure 10. A snapshot in density (top) and pressure (bottom) for a small part of the simulation of the 1-km-diameter iron projectile's vertical impact. This snapshot is taken 300 seconds after impact and illustrates the principal wave train 35 km out from the impact point, which is to the left. This frame's horizontal dimension is 28 km, and the vertical dimension is 15 km. The wave is traveling to the right. In the top frame, the height of the principal wave above the original water surface is 1.2 km, the maximum extent of the dense spray (about 1 percent water density) is 3.5 km above the original water surface, and the light spray extends almost to the tropopause at 10 km altitude. The bottom frame uses a banded palette to highlight pressure differences. A cavitation event is seen just aft of the principal wave, and a decaying shock produced by this event is seen propagating backward (toward the impact point to the left) and scraping the ocean bottom.
the downrange direction (see Figure 11). This material, consisting of vaporized fragments of the projectile mixed with the target, is extremely hot and will ignite vegetation many hundreds of kilometers away from the impact site. Second is the highly turbulent and energetic plume of ejecta directed predominantly upward (see Figure 12). Ballistic trajectories carry some of this material back to Earth in the conical debris curtain that gradually moves away from the crater lip and deposits a blanket of ejecta around the forming crater (see Figure 13). Some material is projected into orbits that have ground termini far outside the computational volume, even extending to the antipodal point and beyond.
We found the blanket of ejecta to be strongly asymmetrical around the crater, with the uprange portion much thinner than the rest. This owes partly to the coupling of the horizontal component of the asteroid's momentum to the debris, and partly to the ionized and shocked atmosphere in the asteroid's wake producing a zone of avoidance for the entrained debris. The ejecta blanket's lobate structure seen in Figure 13 is a second-order effect, due to the breakup of the unstable flow in the debris curtain. The hot structure seen within the crater in Figure 13 is the incipient formation of a central peak.
We are conducting further analysis of the simulation results from these runs, with the aim of determining material and energy partitions among the resultant features as functions of the impact's parameters.
We are continuing the study we outline here, with an aim toward including better physics for the later stages of the crater's development. For this, it is important that we include a proper characterization of the material strength of the geological strata in which the impact occurs, and the dependence of those strength properties on depth, temperature, strain, and strain rate. The data for these studies is still not readily available for many of the geological materials of interest, and some controversy exists over the best way to implement strength breakdown in hydrocodes. Our intention is to use a few choices for strength degradation (for example, acoustic fluidization and damage mechanics) in our code and to include viscoelastic models as well as the elastic-plastic models we have already used. Applying our code to other geologic scenarios that involve rock mobilization (for example, volcanic eruptions and landslides) will guide us in appropriately implementing and validating these models.
Figure 12. Forty-two seconds after impact, the rooster tail has left the simulation volume and gone far downrange. The dissipation of the asteroid's kinetic energy, some 300 teratons TNT equivalent, produces a stupendous explosion that melts, vaporizes, and ejects a substantial volume of calcite, granite, and water. The dominant feature in this picture is the curtain of debris that has been ejected and is now falling back to Earth. The ejecta follows ballistic trajectories, with its leading edge forming a conical surface that moves outward from the crater as the debris falls to form the ejecta blanket. The turbulent material interior to the debris curtain is still being accelerated upward by the explosion produced during the crater's excavation.
[Figure 11 color scale: temperature (eV), from 0.01 to 0.50.]
Figure 11. Seven seconds after a 10-km-diameter granite asteroid strikes Earth, billions of tons of hot material are lofted into the atmosphere. This material consists of asteroid fragments mixed with vaporized water, calcite, and granite from Earth. Much of this debris is directed downrange (to the right and back of this image), carrying the horizontal momentum of the asteroid in this 45-degree impact. This image is a perspective rendering of a density isosurface colored by material temperature (0.5 eV = 5,800 K). We chose the isosurface, at density 0.005 g/cm³, to show everything denser than air. This picture's scale is set by the back boundary, which is 256 km long. The maximum height of the rooster tail at this time is 50 km.
Acknowledgments
We thank Bob Greene for assistance with the
visualization of the 3D runs and Lori Pritchett for help
with executing the simulations. We had helpful
conversations with Eileen Ryan, Jay Melosh, Betty
Pierazzo, Frank Kyte, Erik Asphaug, Steve Ward, and Tom
Ahrens on the impact problem in general. We also thank
the anonymous reviewers for comments that helped
improve this article.
References
1. E. Pierazzo and H.J. Melosh, "Understanding Oblique Impacts from Experiments, Observations, and Modeling," Ann. Rev. Earth and Planetary Sciences, vol. 28, 2000, pp. 141–167.
2. F.T. Kyte, "Iridium Concentrations and Abundances of Meteoritic Ejecta from the Eltanin Impact in Sediment Cores from Polarstern Expedition ANT XII/4," Deep Sea Research II, vol. 49, 2002, pp. 1049–1061.
3. S.A. Stewart and P.J. Allen, "A 20-km-Diameter Multi-Ringed Impact Structure in the North Sea," Nature, vol. 418, 2002, pp. 520–523.
4. S.N. Ward and E. Asphaug, "Impact Tsunami Eltanin," Deep Sea Research II, vol. 49, 2002, pp. 1073–1079.
5. K.A. Holsapple, "The Scaling of Impact Processes in Planetary Sciences," Ann. Rev. Earth and Planetary Sciences, vol. 21, 1993, pp. 333–373.
6. C.L. Mader, Numerical Modeling of Water Waves, Univ. of Calif. Press, 1988.
7. D.A. Crawford and C.L. Mader, "Modeling Asteroid Impact and Tsunami," Science of Tsunami Hazards, vol. 16, 1998, pp. 21–30.
8. E. Pierazzo, D.A. Kring, and H.J. Melosh, "Hydrocode Simulation of the Chicxulub Impact Event and the Production of Climatically Active Gases," J. Geophysical Research, vol. 103, 1998, pp. 28607–28625.
9. A.R. Hildebrand et al., "Chicxulub Crater: A Possible Cretaceous/Tertiary Boundary Impact Crater on the Yucatan Peninsula, Mexico," Geology, vol. 19, 1991, pp. 867–871.
10. V.L. Sharpton et al., "New Links Between the Chicxulub Impact Structure and the Cretaceous/Tertiary Boundary," Nature, vol. 359, 1992, pp. 819–821.
11. L. Alvarez et al., "Extraterrestrial Cause for the Cretaceous/Tertiary Extinction," Science, vol. 208, 1980, pp. 1095–1108.
Galen R. Gisler is an astrophysicist at the Los Alamos
National Laboratory. He has many years of experience
in modeling and understanding complex phenomena
in Earth, space, and astrophysical contexts. His research interests include energetic phenomena in the geosciences, which he studies using the SAGE and RAGE codes of the Los Alamos Crestone Project. He has a BS in physics and astronomy
from Yale University and a PhD in astrophysics from
Cambridge University. Contact him at grg@lanl.gov.
Robert P. Weaver is an astrophysicist at Los Alamos and leader of the Crestone Project, part of the Department of Energy's Advanced Simulation and Computing Initiative. This project develops and uses sophisticated 1D, 2D, and 3D radiation-hydrodynamics codes for challenging problems of interest to the DOE. He has a BS in astrophysics and mathematics from Colgate University, an MS in physics from the University of Colorado, and a PhD in astrophysics from the University of Colorado.
Michael L. Gittings is an assistant vice president and
chief scientist at Science Applications International. He
works full time on a multiyear contract with the Los
Alamos National Laboratory to support and improve
the SAGE and RAGE codes that he began developing
in 1990. He has a BS in mechanical engineering and
mathematics from New Mexico State University.
Charles L. Mader is a fellow emeritus of the Los Alamos National Laboratory, president of Mader Consulting, fellow of the American Institute of Chemists, and editor of the Science of Tsunami Hazards journal. He also has authored Numerical Modeling of Water Waves, Second Edition (CRC Press, 2004) and Numerical Modeling of Explosives and Propellants (CRC Press, 1998). He has a BS and MS in chemistry from Oklahoma State University and a PhD in chemistry from Pacific Western University.
Figure 13. Two minutes after impact, the debris curtain has separated from the rim of the still-forming crater as material in the curtain falls to Earth. The debris from the curtain is deposited in a blanket of ejecta that is asymmetric around the crater, with more in the downrange than in the uprange direction. The distribution of material in the ejecta blanket can be used as a diagnostic to determine the direction and angle of the asteroid's impact.
COMPUTING PRESCRIPTIONS
Editors: Isabel Beichl, isabel.beichl@nist.gov
Julian V. Noble, jvn@virginia.edu

SOME APPLICATIONS OF GRÖBNER BASES
By Eugenio Roanes-Lozano, Eugenio Roanes-Macías, and Luis M. Laita
Copublished by the IEEE CS and the AIP, 1521-9615/04/$20.00 © 2004 IEEE

IN THE MARCH/APRIL ISSUE OF CISE, WE DISCUSSED THE GEOMETRY OF LINEAR AND ALGEBRAIC SYSTEMS. WE ALSO DEFINED IDEALS AND BASES SO THAT WE COULD INTRODUCE THE CONCEPT OF GRÖBNER BASES FOR ALGEBRAIC SYSTEM SOLVING. IN
this article, we give more details about Gröbner bases and describe their main application (algebraic system solving) along with some surprising derived ones: inclusion of varieties, automatic theorem proving in geometry, expert systems, and railway interlocking systems.
Reduced Gröbner Bases
In the previous article, we introduced Gröbner bases of ideals (an ideal being the set of algebraic linear combinations of a given set of polynomials) as a tool for algebraic system solving (that is, general polynomial system solving). We solved such systems using simple commands in a computer algebra system such as Maple. Let's review an example from the previous article.
Example 1. The solution set of the system

x² − y² − z = 0 (hyperbolic paraboloid)
x² + y² − z = 0 (elliptic paraboloid)

are the points in the intersection curve of both surfaces. We emulate Maple's notation by preceding inputs with a ">", closing them with a ";", and including outputs centered in the following line:

> gbasis( {x^2 - y^2 - z, x^2 + y^2 - z}, plex(y,x,z) );

[x² − z, y²]
Consequently, we also can express this system's solution set as the intersection of the parabolic cylinder x² − z = 0 with the vertical plane y = 0.
To really delve into algebraic system solving, though, we first must explain term orders (such as plex) and reduced Gröbner bases.
Term Orders
The polynomial ring A[x₁, ..., xₙ] is the set of polynomials in the variables x₁, ..., xₙ with coefficients in A. A usually is a field (known as the base field), and in our examples it is the set of real numbers (ℝ). However, this is not necessarily always the case.
A product of variables, such as x₁x₃³x₄, is known as a power product or monomial. The product of an element in the base field with a power product, such as 7x₁x₃³x₄, is known as a polynomial term.
To be able to say when a polynomial is simpler (meaning that it is smaller) than other polynomials in the chosen ordering, we first must order polynomial terms. But before ordering terms, we must fix a variable order, which is similar to a letter order. For instance, our dictionaries are ordered lexicographically according to letter order: a > b > c > ... > z.
Two possible term orders are lexicographical (also denoted plex) and total degree (also denoted tdeg). In the lexicographical order, with x > y > z as an example, x²y > xy³ because the word "xxy" would appear before the word "xyyy" in a dictionary. In the total degree order, with x > y > z as an example, x²y < xy³ because the degrees of these monomials are 2 + 1 = 3 and 1 + 3 = 4, respectively. Ties are usually broken in tdeg by using the lexicographic order.
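Both orders are easy to state on exponent vectors. The sketch below is our own illustration (not code from the article); monomials in x, y, z are represented as exponent tuples under x > y > z, and tdeg's tie-breaking follows the simplified rule as stated here.

```python
# Compare monomials given as exponent tuples (e_x, e_y, e_z), with x > y > z.

def lex_key(m):
    """Lexicographic (plex): tuple comparison is already lexicographic."""
    return m

def tdeg_key(m):
    """Total degree (tdeg): total degree first, then lex to break ties."""
    return (sum(m), m)

x2y = (2, 1, 0)   # x^2 * y
xy3 = (1, 3, 0)   # x * y^3

print(lex_key(x2y) > lex_key(xy3))    # True:  x^2*y > x*y^3 under plex
print(tdeg_key(x2y) > tdeg_key(xy3))  # False: degree 3 < degree 4 under tdeg
```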
So how can we order polynomials? Let's use lt(p) to denote the leading term of polynomial p (that is, the greatest term of p for the chosen term and variable orders). We can say that polynomial p₁ is simpler than p₂ if lt(p₁) < lt(p₂). If the leading terms coincide, we can recursively compare p₁ − lt(p₁) and p₂ − lt(p₂) instead.
When we use Maple's gbasis command, we must specify a variable ordering (such as y > x > z) and a term order (like tdeg or plex), as Example 1 shows. Which term order is best depends on the particular case; it's not always easy to decide.
Main Property of Reduced Gröbner Bases
Just as in the theory of vector spaces, in which bases that contain perpendicular vectors of unit length are particularly important, so it is that some Gröbner bases are particularly important: we call these reduced Gröbner bases. We say that a Gröbner basis is reduced if and only if the leading coefficient of all its polynomials is 1 and we can't simplify any of its polynomials by adding a linear algebraic combination of the rest of the polynomials in the basis.
The Buchberger algorithm is what allows Maple and other computer algebra systems to compute Gröbner bases. The input to Buchberger's algorithm is a polynomial set, a term order (for instance, tdeg), and a variable order (for instance, x > y > z). The algorithm's output is the ideal's reduced Gröbner basis with respect to the specified term and variable orders. The key point is that such a reduced Gröbner basis completely characterizes the ideal: any ideal has a unique reduced Gröbner basis.¹
Consequently,
• two sets of polynomials generate the same ideal if and only if their reduced Gröbner bases are the same, and
• {1} is the only reduced Gröbner basis for the ideal that is equal to the whole ring (remember that the ideal generated by {1} is always the whole ring, because any element of the ring can be generated as the product of 1 and an element of the ring; the property of the reduced Gröbner bases mentioned earlier implies the uniqueness of such a basis).
Because we'll often refer to reduced Gröbner bases, we should introduce an abbreviation. Let C be a set of polynomials, and use GB(C) to denote the reduced Gröbner basis of the ideal generated by C with respect to certain term and variable orders.
Gröbner Bases and Algebraic System Solving
Gröbner bases deal with polynomial ideals, but as the previous article showed, we also can use them in algebraic system solving.

Algebraic Systems with the Same Solutions
A first application in algebraic system solving would be to check for the equality of solutions. As a consequence of the previous section's theoretical results, if GB(pol₁, ..., polₙ) = GB(pol′₁, ..., pol′ₘ), then the systems

pol₁ = 0, ..., polₙ = 0   and   pol′₁ = 0, ..., pol′ₘ = 0

have the same solutions.¹
A result close to the converse is true if the base field is algebraically closed. (To describe this in full detail, though, we would have to introduce the so-called radical of an ideal and mention Hilbert's Nullstellensatz, which is beyond this brief introduction's scope.) Let's use the direct result: we'll prove that three systems have the same solutions because the reduced Gröbner bases of the corresponding ideals coincide.
Example 2. The following three systems have the same solutions:

x² + y² − 1 = 0, z − 1 = 0
x² + y² − z² = 0, z − 1 = 0
x² + y² − z = 0, −x² − y² − z + 2 = 0, x² + y² + z² − 2 = 0

(The example shows the intersection of a cylinder and a plane orthogonal to its axis; the intersection of a cone and the same plane; and the intersection of an elliptic paraboloid, another elliptic paraboloid, and a spherical surface, respectively.) We'll check it by computing the corresponding Gröbner bases in Maple:

> gbasis( {x^2 + y^2 - 1, z - 1}, plex(x,y,z) );

[z − 1, x² + y² − 1]

> gbasis( {x^2 + y^2 - z^2, z - 1}, plex(x,y,z) );

[z − 1, x² + y² − 1]

> gbasis( {x^2 + y^2 - z, -x^2 - y^2 - z + 2, x^2 + y^2 + z^2 - 2}, plex(x,y,z) );

[z − 1, x² + y² − 1]
Distinguishing Real and Complex Solutions
Whether an algebraic equation has solutions clearly depends on the set in which we are looking for such solutions. A field is algebraically closed if each polynomial with coefficients in the field also has a root in the field. A field's algebraic closure is the minimum algebraically closed field that contains the given field. For instance, the fields ℚ (set of rational numbers) and ℝ are not algebraically closed, because x² − 2 ∈ ℚ[x] has no rational root, and x² + 1 ∈ ℝ[x] has no real root either. However, ℂ (set of complex numbers) is algebraically closed; it is the algebraic closure of ℝ.
Whether an algebraic system has solutions will also depend on the set in which we're looking for such solutions. Some algebraic systems have exactly the same real and complex solutions, but this is not always the case, as the next example shows.
Example 3. Consider the algebraic system below, also used as an example
in the previous article (see Figure 1):

x² + y² + z² − 2 = 0
x² + y² − z = 0
x − y = 0

Computing the Gröbner basis with Maple, we get

> gbasis( {x^2 + y^2 + z^2 - 2, x^2 + y^2 - z, x - y}, plex(x,y,z) );

[z² − 2 + z, 2y² − z, x − y]

The first polynomial has two roots (z = 1 and z = −2). Substituting 1 for z in the second polynomial, and then by substitution in the third polynomial, we get two real solutions (points):

(x = 1/√2, y = 1/√2, z = 1), (x = −1/√2, y = −1/√2, z = 1)

Nevertheless, two other imaginary solutions (points of ℂ³) correspond to the other root of the first polynomial (z = −2):

(x = i, y = i, z = −2), (x = −i, y = −i, z = −2)
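All four points can be checked directly against the original system using complex arithmetic. A quick verification sketch of our own:

```python
import math

def residuals(x, y, z):
    """Evaluate the three polynomials of Example 3 at a point."""
    return (x**2 + y**2 + z**2 - 2, x**2 + y**2 - z, x - y)

s = 1 / math.sqrt(2)
points = [
    (s, s, 1), (-s, -s, 1),        # the two real solutions
    (1j, 1j, -2), (-1j, -1j, -2),  # the two imaginary solutions
]
for p in points:
    assert all(abs(r) < 1e-12 for r in residuals(*p))
print("all four points satisfy the system")
```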
Algebraic Systems with No Solutions
We also can use Gröbner bases in algebraic system solving to check the existence of solutions (in the algebraic closure of the base field).¹ For instance, the system

pol₁ = 0, ..., polₙ = 0

has no solution in the algebraic closure of the base field if and only if GB(pol₁, ..., polₙ) = {1}.
Example 4. The following system has no real solution (see Figure 2):

x² − y = 0
x² − y + 1 = 0

We can now check that it has no complex solutions either:

> gbasis( {x^2 - y, x^2 - y + 1}, plex(x,y,z) );

[1]

However, this is not always the case: a polynomial system with no real solutions can have complex solutions, as the next example shows.
Example 5. Consider the surfaces of ℝ³ given by the system

x² + y² + z² − 4 = 0
x² + y² − z + 5/2 = 0

that is, a spherical surface below the plane z = 9/4 and an elliptic paraboloid above the same plane (see Figure 3). Clearly, the two surfaces do not intersect in ℝ³. Nevertheless, the reduced Gröbner basis is not {1}:

> gbasis( {x^2 + y^2 + z^2 - 4, x^2 + y^2 - z + 5/2}, plex(x,y,z) );

[2z − 13 + 2z², 2x² + 2y² − 2z + 5]

This is because although the two surfaces don't intersect in ℝ³, they do intersect in ℂ³! The roots of the first polynomial are z = −1/2 ± (3√3)/2, and substituting these values for z in the second polynomial, we get two imaginary circles:

• In plane z = −1/2 + (3√3)/2: 2x² + 2y² + 6 − 3√3 = 0.
• In plane z = −1/2 − (3√3)/2: 2x² + 2y² + 6 + 3√3 = 0.
Other Applications of Gröbner Bases
Apart from obvious direct polynomial system solving, different fields have some surprising applications of Gröbner bases.
Figure 3. A nonlinear polynomial system. The system has no real solution (as can be seen in the figure), but it does have complex solutions.

Figure 2. A nonlinear polynomial system. Its solution set is the intersection of two parabolas. It has neither real nor complex solutions (that it has no real solution can be deduced from the figure).

Figure 1. A nonlinear polynomial system. Its solution set is the intersection of a spherical surface, an elliptic paraboloid, and a plane. The figure shows two real solutions, but two other imaginary solutions (that we can't draw) exist.
Inclusion of Varieties
Although Emmy Noether and Wolfgang Krull developed the basic theory of algebraic geometry in the 1930s, until the implementation of Gröbner bases in computer algebra systems its applications were very limited: the examples that could be managed were almost trivial.
A straightforward application of Gröbner bases is the difficult task of deciding whether an algebraic variety is included within another one (an algebraic variety is the solution set of an algebraic system). We can easily check, for instance, that the curve in ℝ³

z − x³ = 0
y − x² = 0

is contained in the surface xz − y² = 0 (a cone). We simply prove that the equation xz − y² = 0 doesn't add any constraint to the equations in the first system:

> gbasis( {z - x^3, y - x^2}, plex(x,y,z) );

[−z² + y³, xz − y², xy − z, −y + x²]

> gbasis( {z - x^3, y - x^2, x * z - y^2}, plex(x,y,z) );

[−z² + y³, xz − y², xy − z, −y + x²]

A related field of application of these techniques is computer-aided geometric design (CAGD).²
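For this particular curve, the containment can also be seen directly: the curve is the twisted cubic, parametrized by (t, t², t³), on which xz − y² = t·t³ − (t²)² vanishes identically. A numerical spot check of our own:

```python
# The curve z - x^3 = 0, y - x^2 = 0 is parametrized by (t, t^2, t^3).
# Check that every sampled curve point also lies on the cone xz - y^2 = 0.
for k in range(-50, 51):
    t = k / 10.0
    x, y, z = t, t**2, t**3
    assert z - x**3 == 0 and y - x**2 == 0  # point lies on the curve
    assert abs(x * z - y**2) < 1e-9         # and on the cone
print("all sampled curve points lie on the cone")
```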
Automatic Theorem Proving in Geometry
It is possible to automatically prove geometric theorems the same way by using Gröbner bases.³ Both the hypotheses and the thesis are usually statements like "a point lies on a geometric object" and "three lines share a point," which we can write as polynomial equations. We can express the theorem as

hyp₁ = 0, ..., hypₖ = 0 ⟹ thesis = 0.

But to prove such an implication, it is enough to prove that

thesis ∈ ⟨hyp₁, ..., hypₖ⟩,

which we can check by comparing GB({thesis, hyp₁, ..., hypₖ}) and GB({hyp₁, ..., hypₖ}).
Figure 4. The control desk of the railway interlocking at a railway station. (a) This interlocking has a mixed technology: it is computer-controlled, but compatibility is decided by a combination of relay arrays. (b) Part of the huge room that contains the relay arrays.
Expert Systems
We can apply a GB-based method to knowledge extraction and verification of rule-based expert systems.⁴ To do so, logic formulae can be translated into polynomials, and the following result, relating being a tautological consequence to polynomial ideal membership, is obtained:

if (¬A) denotes the polynomial translation of the negation of a formula A, then A₀ can be deduced from a set of facts F₁, ..., Fₙ and a set of rules R₁, ..., Rₘ if and only if (¬A₀) ∈ ⟨(¬F₁), ..., (¬Fₙ), (¬R₁), ..., (¬Rₘ)⟩.

And, as mentioned earlier, it's easy enough to compare two GBs to check for an ideal membership. Moreover, this result holds both when the underlying logic is Boolean and when it is modal multivalued.
Railway Interlocking Systems
We also applied a GBbased method to
checking the safety of switch position,
semaphore color, and train position in
a railway station (see Figure 4). Our
decisionmaking model is topologyin
dependentthat is, it doesnt depend
on track layout.
5
The key idea is to identify trains via
integer numbers, sections of the lines
via polynomial variables, and the con
nectivity among the different sections
via polynomials (trains can pass from
one section to another if they are
physically connected and the posi
tion of the switches and the color of
the semaphores allow it). Lets con
sider an algebraic system constructed
as follows:
If section y is reachable from section
x, we add x (x y) = 0 to the system.
If train 3 is in section x, we add x 3
= 0 to the system.
Notice that the values propagate along reachable sections: for instance, if train 3 is in section x, and it's possible to pass from section x to section y, we have

x - 3 = 0, x(x - y) = 0  ⟹  y - 3 = 0,

which means section y is reachable by train 3. We thought about this problem for a long time until we could find polynomials, x(x - y) and x - j, that translated this behavior. Because a situation is unsafe if and only if two different trains could reach the same section, the situation's safeness is equivalent to the algebraic system's compatibility.
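This compatibility test is easy to experiment with in a computer algebra system. The sketch below uses SymPy's groebner (an assumption about tooling; the paper's own implementation is not shown here), with a made-up layout: trains 3 and 2 sit in sections x and z, and section y is reachable from both:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

def railway_system(reachable, occupied):
    """Build the polynomial model described above: u*(u - v) = 0 when
    section v is reachable from section u, and u - j = 0 when train j
    occupies section u."""
    return ([u * (u - v) for u, v in reachable] +
            [u - j for u, j in occupied])

# Unsafe layout: trains 3 and 2 can both reach section y.
unsafe = groebner(railway_system([(x, y), (z, y)], [(x, 3), (z, 2)]), x, y, z)
# Safe layout: only train 3 can reach y.
safe = groebner(railway_system([(x, y)], [(x, 3)]), x, y, z)

print(unsafe.exprs)  # [1]: the system is incompatible, so the situation is unsafe
print(safe.exprs)    # 1 is absent: compatible, hence safe; y - 3 shows train 3 reaches y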
Algebraic systems are usually solved by using numerical methods, but these methods are not appropriate when dealing with decision-making problems. In such cases, the Gröbner bases method is the key. Although some knowledge of commutative algebra is required to know how to calculate them, why the reduction process always finishes, and why they completely identify an ideal, just using them can be intuitive and useful. In fact, the symbolic solve commands in computer algebra systems internally apply a Gröbner basis algorithm if the system is nonlinear. As this article shows, a wide variety of applications arise. One future direction under development now is the application to CAGD; in particular, to the geometry of a car body's pressed steel pieces.6
Acknowledgments
Research project TIC2000-1368-C03 (MCyT, Spain) partially supported this work.
References
1. D. Cox, J. Little, and D. O'Shea, Ideals, Varieties, and Algorithms, Springer-Verlag, 1992.
2. L. González-Vega, "Computer Aided Design and Modeling," Computer Algebra Handbook, J. Grabmeier, E. Kaltofen, and V. Weispfenning, eds., Springer-Verlag, 2003, pp. 234-242.
3. B. Buchberger, "Applications of Gröbner Bases in Non-Linear Computational Geometry," Mathematical Aspects of Scientific Software, vol. 14, J.R. Rice, ed., Springer-Verlag, 1988, pp. 60-87.
4. E. Roanes-Lozano et al., "A Polynomial Model for Multi-Valued Logics with a Touch of Algebraic Geometry and Computer Algebra," Mathematics and Computers in Simulation, vol. 45, nos. 1-2, 1998, pp. 83-99.
5. E. Roanes-Lozano and L.M. Laita, "Railway Interlocking Systems and Gröbner Bases," Mathematics and Computers in Simulation, vol. 51, no. 5, 2000, pp. 473-481.
6. L. González-Vega and J.R. Sendra, "Algebraic Geometric Methods for the Manipulation of Curves and Surfaces," Actas del 7º Encuentro de Álgebra Computacional y Aplicaciones (EACA-2001), J. Rubio, ed., Universidad de La Rioja, 2001, pp. 45-60.
Eugenio Roanes-Lozano is an associate professor in the algebra department of the Universidad Complutense de Madrid. He has a PhD in mathematics from the Universidad de Sevilla and a PhD in computer science from the Universidad Politécnica de Madrid. He is a member of the Real Sociedad Matemática Española, the Sociedad Matemática Puig Adam, and the IMACS society. Contact him at eroanes@mat.ucm.es.

Eugenio Roanes-Macías is an associate professor in the algebra department of the Universidad Complutense de Madrid. He has a PhD in mathematics from the Universidad Complutense de Madrid. He is a member of the Real Sociedad Matemática Española and the Sociedad Matemática Puig Adam.

Luis M. Laita is a full professor in the artificial intelligence department of the Universidad Politécnica de Madrid. He has a licentiate in physics and a PhD in mathematics from the Universidad Complutense de Madrid, and a PhD in history and philosophy of science from Notre Dame University. He is a correspondent academician of the Real Academia de Ciencias de España.
COMPUTING PRESCRIPTIONS
MAY/JUNE 2004 Copublished by the IEEE CS and the AIP 1521-9615/04/$20.00 © 2004 IEEE 61
VISUALIZATION CORNER
Editors: Jim X. Chen, jchen@cs.gmu.edu; R. Bowen Loftin, bloftin@odu.edu
Visualization is a process of presentation and discovery. When a graphic presentation is effective, users perceive relationships, quantities, and categories within the information. They also might interact with and manipulate various aspects of the information, dynamically changing a rendering's appearance, which could confirm or contradict their developing hypotheses. Users want to understand the underlying phenomena via the visualization; they don't (necessarily) need to understand individual values. An effective visualization should convey the data's meaning and increase the information's clarity through the user's natural perceptual abilities.
In addition to visual mappings, or as an alternative, we could map information into nonvisual forms: any form that stimulates any of our senses, from auditory, haptic, olfactory, and gustatory to vestibular.1 For example, we could map month-long stock-market data onto a line graph, with the x-axis representing time and the y-axis the stock price (we could then plot multiple stocks using various colored or textured lines). Alternatively, we could use sound graphs, in which each stock sounds a different timbre, with higher stock value represented by a higher pitch and the days and weeks represented by time.2
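The sound-graph mapping just described can be sketched in a few lines. The 220-Hz base and the 24-semitone (two-octave) scale below are our assumptions, not part of the cited design:

```python
# Toy sound graph: one note per time step, higher value -> higher pitch.
# The 220-Hz base and two-octave equal-tempered range are our assumptions.
def sound_graph(values, f_base=220.0, semitones=24):
    lo, hi = min(values), max(values)
    notes = []
    for v in values:
        step = round((v - lo) / (hi - lo) * semitones) if hi > lo else 0
        notes.append(f_base * 2 ** (step / 12))  # equal-tempered pitch in Hz
    return notes

month = [101, 103, 99, 104, 108, 107, 111]  # a toy month of closing prices
print(sound_graph(month))
```

Played left to right, the note sequence carries the same trend information as the line graph, with the lowest price at 220 Hz and the highest at 880 Hz.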
Presenting the information in these nonvisual forms offers many advantages:

• They are more accessible to partially or nonsighted users.
• Some modalities might be more effective at representing data (for example, sonification is useful when temporal features are important).
• Multiple different modalities are useful when one modality is already overloaded with numerous variables.
• In situations where a display screen is too small to encapsulate an intricate visualization, or users can't view a screen because they're monitoring something else (for example, in a machine room in which an engineer constantly monitors the material being cut and machined), a nonvisual form (such as sonification) could be more appropriate.
These nonvisual visualizations create a range of challenges: How can we effectively represent information using these various modalities? Can users actually, and accurately, perceive the information? These are difficult questions, and there is much research ahead to work out effective multimodal visualization designs. In contrast, much research has been completed in the areas of visual perception and representation of data. For example, researchers have employed empirical studies to assemble design rules and theories,3 such as the Gestalt principles of similarity, James Gibson's affordance theory, or Jacques Bertin's semiology of graphics. Although many are merely guidelines, they do aid us (as data-presentation engineers) in creating good visualizations.
So, what can we learn from one modality to another? Is there equivalence? Can we apply ideas in one modality to gain effective and understandable realizations in another? Bar charts are extremely popular visualizations, but what would a multimodal bar chart look like? What would an audible bar chart sound like? What about a haptic bar chart? Can we learn from one modality's design principles and apply that knowledge to another? An obvious advantage is that because users effortlessly understand the visual bar-chart concept, they should instinctively understand an equivalent design in another modality. Additionally, good design principles in one modality might help us generate an effective realization in another. We'll return to our audible bar chart later. For now, let's try to answer some of the other questions I raised.
VISUALIZATION EQUIVALENCE FOR MULTISENSORY PERCEPTION: LEARNING FROM THE VISUAL
By Jonathan C. Roberts

In our information-rich world, computers generate so much data that comprehending and understanding it in its raw form is difficult. Visual representations are imperative if we are to understand even a small part of it.

Equivalence Chart Designs
Many current multiperceptual designs are equivalence designs. For instance, work by Wai Yu and colleagues demonstrated haptic line graphs.4 Like their visual counterparts, the researchers placed the haptic line graphs on a 2D grid, with lines representing ridges and valleys. Users traced the line graph path by following a pointer alongside a ridge or down a valley.
In this example, Yu's team utilized a Phantom force-feedback joystick (see Figure 1), which lets users feel 3D objects, finding that users more successfully followed the valleys because they could more easily keep the pointer on the line. Users could effectively understand the graph data, but problems occurred when the graph became detailed and when multiple lines crossed on the graph (users didn't know which one they were following). We can envisage various strategies to overcome these problems: making individual lines feel different (for example, by changing their frictions), or staying on the line that the user started investigating (much like a train crossing through a railroad switch), which could be implemented by the geometry configuration or magnetic forces.
In effect, such a haptic graph mimics swell paper, or tactile graphics, on which users can feel and follow raised areas on the paper (albeit using valleys rather than ridges). However, the main and important difference is that the Phantom device is point-based, with the kinesthetic force realized at a single point in space, whereas human fingers are much more versatile in their sensitivity. They can feel surrounding information; multiple fingers can mark points of interest; and the fingers' separations can gauge distances. In fact, an ideal system would stimulate a larger part of the human finger (effecting more realistic rendering of the graph by letting the user perceive surrounding elements), stimulate multiple fingers, and let the user dynamically feel and explore the information. Devices with such capabilities do exist, such as Immersion's CyberTouch glove (see Figure 2), which uses vibrotactile stimulators placed on each finger and one on the palm, or dynamic Braille displays (in which the pattern of six Braille dots changes); but the resolution, and therefore the information detail these devices portray, is not as accurate as the human finger can sense.
Various examples of sonification equivalence designs go beyond the sound graphs I've mentioned. David Bennett visualized home heating schematics by sounding out each node's position on a graph.5 Each node was represented by two musical motifs played on different instruments (one each for the x,y coordinates), and the number of notes in the scale corresponded to the coordinate position. In this way, various 2D objects were sounded.
In 2003, Keith Franklin and I realized sonified pie charts.6 Our example used 3D sound sources, simulated surround sound on headphones using head-related transfer functions (HRTFs), which are functions that create an illusion of sounds at particular locations based on a human model (timing of a source to the left and right ears and modifications by our ears, head, or torso), and a surround-sound speaker setup to position the pie segments. We positioned a user in the azimuth plane, with the pie segments surrounding the user. We used various strategies to sound out the pie segments, from placing the segments around the user to normalizing the segments to the front. The results showed that the user easily understood how the information was being represented, but had difficulty in accurately gauging the segments' values. In fact, mapping the pie segments to spatial sound is much less accurate than the visual equivalence. This problem is further exacerbated by the error's nonlinearity, which depends on the sound's position surrounding the user (the so-called minimum audible angle7).
Another example of using position in the graphic to represent position in the sonification is by Rameshsharma Ramloll and colleagues, who describe an audio version of tabular data.8 In their example, they map the value to pitch and the horizontal position of each cell to a localized sound source: a user hears the leftmost cell in the left ear and the rightmost cell in the right ear, while interpolating intermediary cell values between the left and right positions. In the same way, other researchers have developed several systems that nonvisually represent a computer's GUI. Some systems are speech-based, while others use nonspeech sounds. For example, Earcons,9 which are unique and identifiable rhythmic pitch sequences, can represent the interface's menus.
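The tabular mapping just described reduces to two per-cell assignments, which can be sketched as follows (the frequency range and the linear pan law are our assumptions, not Ramloll's exact parameters):

```python
# Sketch of the tabular mapping above: a cell's value becomes a pitch and
# its column position becomes a left-to-right pan. The 200-800 Hz range
# and linear pan law are our choices.
def sonify_row(row, f_min=200.0, f_max=800.0):
    lo, hi = min(row), max(row)
    span = (hi - lo) or 1             # avoid dividing by zero for flat rows
    events = []
    for col, value in enumerate(row):
        pitch = f_min + (value - lo) / span * (f_max - f_min)
        pan = col / (len(row) - 1)    # 0.0 = left ear, 1.0 = right ear
        events.append((pitch, pan))
    return events

print(sonify_row([3, 7, 5, 9]))  # leftmost cell fully left; largest value highest pitch
```

Playing the events in order reproduces the described effect: the ear localizes the column while the pitch carries the cell's value.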
Figure 1. Phantom Desktop haptic device provides 3D positional sensing. (Reproduced courtesy of SensAble Technologies. Phantom and Phantom Desktop are trademarks or registered trademarks of SensAble Technologies.)

Figure 2. CyberTouch vibrotactile glove, with stimulators placed on each finger and one on the palm. (Reproduced by permission of Immersion Corporation, copyright 2004. All rights reserved.)
Elizabeth Mynatt and Gerhard Weber describe two systems: Textual and Graphical User Interfaces for Blind People (GUIB) translates the screen into tactile information, and the Mercator project exchanges the interface with nonspeech auditory cues.10
It is obviously useful and, indeed, possible to implement equivalence designs. Developers are gaining inspiration from one traditional mapping to instigate an effective mapping in another modality. Although idea transference is an important strategy, it might not be wise to follow it unconditionally, because by focusing on an equivalent design, the temptation, and perhaps the consequence, is to recreate the design itself rather than representing the underlying phenomena's aspects. Thus, in practice, the process is necessarily more complex than applying a one-to-one design translation. Of course, extracting design principles from one and applying them to another is useful, but it might be that the equivalent presentation in another modality will not look like its equivalent. Gaining inspiration from the visual design equivalent relies on users implicitly understanding the design and knowing how to interpret the information. In reality, a user might not be so familiar with the original form; for instance, Yu mentioned that nonsighted users found it slower to comprehend the realization compared with sighted users using the same haptic graphs, mentioning that this could be due to unfamiliarity with certain graph layouts.4
Inspiration from the Workplace
Rather than gaining inspiration strictly from the visual domain, perhaps we should look to the real world or the workplace. Since the dawn of the visual interface, designers have applied noncomputerized workplace or everyday-living ideas to help develop understandable user interfaces. The idea of the desktop comes from the office, with documents spread over a desk or workspace and cabinets or folders in which to store them. Tools such as spreadsheets are inspired by the tabular columns of numbers found in ledger sheets. We take for granted these and other concepts, such as cut and paste, in our day-to-day computing use, but they were inspired by a noncomputerized world.
Currently, there are various metaphors for nonvisual interfaces. We might exchange graphical icons with auditory icons (using familiar real-world sounds), or Earcons, which also encode similarities among an assortment of objects. We can shade out visual interface elements or icons when they are unavailable in a particular configuration. Similarly, we can use sound effects or filtears (auditory icons) to manipulate and perturb auditory cues.11 For example, we could represent a musical motif more quietly, or it could sound more dull or bright depending on whether it is iconized. Finally, instead of implementing a sonified version of the desktop metaphor, Mynatt and Edwards describe a metaphor called audio rooms10 (an extension of previous ideas from Xerox PARC). The rooms metaphor groups activities together (much like rooms in a house: kitchen for cooking, bedroom for sleeping, and so on). Thus, we can group files and applications in a room for similar activities. The rooms also include doors to traverse into adjacent rooms.
As a consequence, we might ask, what more can we learn from everyday visual interfaces and metaphors? Consider various aspects of the user interface. For example, what would be the nonvisual counterparts for highlighted text, pop-ups, or multiple windows?
Looking at the Variables
The equivalent chart designs I mentioned succeed because they employ perceptual variables with similar traits. For instance, sonified pie charts6 map each pie segment (usually represented by an angle) into a position surrounding a user; the visual and haptic line graphs represent data through a perceptual variable that demonstrates the data value by showing distance from a fixed axis. This is consistent with Ben Challis and Alistair Edwards, who say, "A consistence of mapping should be maintained such that descriptions of actions remain valid in both the visual and the nonvisual representations."12 This is polarity mapping.
There are two types of mapping polarities: positive (a variable increases in the same direction as the change of the underlying data) and negative, the converse. Bruce Walker and David Lane summarized that the majority of (sighted and nonsighted) users allocated the same polarities to data, with the exception of monetary values, particularly when mapped to pitch.13 They conjecture that sighted users might associate higher pitches with faster forms of transport, whereas nonsighted users relate the values to the everyday sounds of the money itself (dropped coins make a higher-pitched sound, whereas a stack of paper money makes a lower pitch, although the stack holds a higher monetary value).
It also is worth looking further at the variables. For example, Jacques Bertin recommends mapping the content to the container using a component analysis.3 First, analyze the original data's individual components and note whether they are variant or invariant, whether the range is small, medium, or large, and whether the quantities are nominal, ordinal, or quantitative. Then, evaluate the container's components for the same traits, and map one into the other. Although Bertin originally was inferring graphics and charting information, the same general principle is relevant for multimodal information presentation.
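As a caricature of that component analysis in code: classify a data component's traits, then pick a perceptual variable whose traits match. The trait-to-variable table below is our own illustration; Bertin's actual analysis is far richer:

```python
# Caricature of Bertin-style content-to-container mapping: classify a data
# component, then pick a matching perceptual variable. The candidate table
# is illustrative, not Bertin's.
CANDIDATES = {
    ("quantitative", "audio"):  "pitch",
    ("nominal",      "audio"):  "timbre",
    ("quantitative", "haptic"): "ridge height",
    ("nominal",      "haptic"): "texture pattern",
}

def classify(values):
    if all(isinstance(v, (int, float)) for v in values):
        return "quantitative"
    return "nominal"  # a fuller analysis would also detect ordinal scales

def choose_variable(values, modality):
    return CANDIDATES[(classify(values), modality)]

print(choose_variable([1.2, 3.4, 2.2], "audio"))   # quantitative data -> pitch
print(choose_variable(["oil", "gas"], "haptic"))   # nominal data -> texture pattern
```

The point is not the table itself but the discipline: make the data's traits and the variable's traits explicit before committing to a mapping.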
Consequently, just as there is a role for evaluating perception issues and investigating rules and guidelines for using retinal variables, there also is a similar need for nonvisual variables. Some graphics researchers have automated the design of graphical presentations.14 However, few guidelines currently exist for the use of nonvisual variables. We shouldn't be surprised that, as we learn about the limitations and problems with designing visual interfaces, we must learn about the peculiarities associated with nonvisual perception. For example, when we use the same color in different contexts, with different adjacent colors, our perception of that color can radically change. We know some of the issues in multisensory perception, such as the minimum audible angle and that absolute pitch judgment is difficult (only about 1 percent of the population has perfect pitch). But we need to do more empirical research to decipher the interplay between various modalities and also various parameters.
The Engineering Dataflow
Another aspect to contemplate is the mapping process itself. It is only one of many procedures needed to generate an appropriate presentation. Over the years, researchers have posed various modus operandi,15 but many of the fundamental principles are the same. In visualization, the dataflow model predominates. It describes how the data flows through a series of transformation steps: the data is enhanced (which could consist of filtering, simplifying, or selecting an information subset), then this processed information is mapped into an appropriate form that can be rendered into an image. Perception engineers must go through similar steps, whatever the target modality. They must select and, perhaps, summarize and categorize the information before mapping it into effective perceptual variables. Thus, it is useful that developers think about this engineering dataflow and consider the perceptual implications at each step.
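The dataflow just described can be made explicit as three small stages. The concrete stage contents here (a threshold filter and a pitch mapping) are illustrative assumptions; the skeleton is what carries over between modalities:

```python
# The visualization dataflow as explicit stages: enhance -> map -> render.
# The same skeleton applies whatever the target modality; the concrete
# stages (threshold filter, pitch mapping) are illustrative.
def enhance(data, threshold):
    return [d for d in data if d >= threshold]      # select an information subset

def map_to_pitch(data, f_min=220.0, f_max=880.0):
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1
    return [f_min + (d - lo) / span * (f_max - f_min) for d in data]

def render(freqs):
    return " ".join(f"{f:.0f}Hz" for f in freqs)    # stand-in for audio output

raw = [2, 9, 4, 7, 1, 8]
print(render(map_to_pitch(enhance(raw, threshold=4))))
```

Keeping the stages separate is what lets a perception engineer swap the render stage (image, sound, haptics) while reasoning about the perceptual consequences of each earlier step.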
Abstract Realizations
All the previously mentioned designs are really presentation graphics. They don't represent the intricacies that, say, information visualization does to the sighted user. Thus, an important part of visualization is abstraction. Often, users more easily understand the underlying information if the information is simplified.

At the 2004 Human Vision and Electronic Imaging banquet for SPIE's Electronic Imaging conference (www.spie.org), Pat Hanrahan, Canon USA Professor at Stanford University, spoke about "Realism or Abstraction: The Future of Computer Graphics."

In his presentation, he said that much effort has gone into generating realistic graphics, and more should go into the process of generating abstract representations. There are many instances when line, sketchy, or even cartoon drawings are easier to perceive. Indeed, he mentioned Ryan and Schwartz, who evaluated users' responses to photographic and cartoon images in 1956.16 They found that people could more quickly identify a cartoon hand than a photograph of one. However, abstract renderings often are hard to achieve; in one respect, realistic renderings merely require the application of mathematics to the problem, whereas abstract realizations rely on ingenious and clever mappings (which is a harder process, because they can't be mathematically defined). For instance, an artist can change an obscure painting into an understandable picture by merely adding a few precisely placed lines.
An excellent example of an abstract realization is the London Underground Map designed by Harry Beck in 1933 (http://tube.tfl.gov.uk/guru/index.asp). It depicts the stations' logical positions rather than their exact positions. Users aren't confused by additional, unnecessary information because they see only the important information. Obviously, this is task-dependent; the map makes it much easier to understand how to navigate the railway and work out where to change stations, but it is impossible to calculate exact distances between different stations.
If abstract mappings are useful in visual information presentation, then perhaps they should be equally important in nonvisual perception. This is a strong principle to adhere to when developing nonvisual realizations. In our group, we have tried to apply some of these ideas. For example, we recognized that if the user only needs to perceive particular features of the presentation, such as maximum and minimum values, then we only need to present this abstract information.17 In this case, we abstracted important graph facets (maximum and minimum points, turning points, and gradient) and displayed them in an abstract tactile work surface. Other abstract renderings include sonification of sorting algorithms18 and oil and gas well-log sonification.19 Indeed, sorting-algorithm sonification is interesting, because it displays both the current state of the sorted list and the process of how two elements are swapped; the well-log sonification uses a Geiger-counter metaphor to abstractly represent the information.
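A minimal sketch of that sorting sonification idea (the value-to-frequency scale is our assumption; the cited work18 pairs color and sound in richer ways):

```python
# Sketch of sorting-algorithm sonification: bubble sort emits a tone pair
# on every swap, so a listener hears both the evolving state and the
# swapping process. The value-to-frequency scale is our assumption.
def sonified_bubble_sort(values, f_base=110.0):
    data, tones = list(values), []
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                tones.append((f_base * data[j], f_base * data[j + 1]))
    return data, tones

final, swap_tones = sonified_bubble_sort([3, 1, 2])
print(final)        # the sorted list
print(swap_tones)   # one (low, high) tone pair per swap
```

As sorting progresses, the emitted pairs become increasingly ordered low-to-high, which is the abstract, Geiger-counter-like cue a listener can follow without hearing any individual value exactly.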
So, what can we learn from this? First, abstraction is important, and perception engineers should think about how to extract and display the most data-significant features. Second, we should consider that abstract renderings might be better than realistic representations. Finally, many of the current nonvisual representations are realistic and accurate; perhaps we need to start thinking about nonrealistic and nonaccurate renderings that portray the underlying information's essence: a cartoon-style nonrealistic rendering. Indeed, neat and precise drawings give a perception of being complete and accurate, while stylistic and sketchy diagrams give the appearance of being incomplete or rough; can we utilize these ideas to generate more-effective nonvisual visualization?
Now let's finish the thought experiment on the visual bar chart. Visual bar charts are popular because they are convenient, easy to create, and, most important, easy to understand. A user quickly eyeballs the graphic, perceives the overall trend, and immediately realizes different categories: information encoded in the bars' lengths. After a while (perhaps only a few milliseconds), a user might investigate further to determine which bar is largest, to which category it belongs, and its magnitude. This gives us some targets to design effective nonvisual representations.
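One way to make the audible bar chart concrete: sweep the bars in order as an overview, then offer per-bar details. The encoding (pitch proportional to value) is our assumption; a real design would likely also vary timbre per category:

```python
# A concrete guess at an audible bar chart: an overview sweep (one tone
# per bar, pitch proportional to value), then details on demand for a
# single category. The pitch encoding is our assumed design.
def audible_bar_chart(bars, f_min=200.0, f_per_unit=50.0):
    overview = [(name, f_min + value * f_per_unit) for name, value in bars]
    def detail(name):
        value = dict(bars)[name]
        return {"category": name, "value": value,
                "pitch": f_min + value * f_per_unit}
    return overview, detail

bars = [("Q1", 4), ("Q2", 7), ("Q3", 5)]
overview, detail = audible_bar_chart(bars)
print(overview)      # quick sweep: the trend is audible as a pitch contour
print(detail("Q2"))  # then a single bar's exact value on demand
```

The two-phase structure matches how a sighted user reads the visual chart: trend first, magnitudes later.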
First, we still can learn a lot from direct equivalent designs and metaphor equivalences. Most users will instantly understand the presentation's aim and get on with the task of understanding the underlying phenomena. Second, there is a need for tools that enhance the user's discovery. In other words, there is a need for exploration of and interaction with these nonvisual realizations. This is starting to happen,8,11 but we need to learn from Ben Shneiderman's mantra of "Overview first, zoom and filter, then details-on-demand."20 This is an important and effective visualization idiom, and we should be able to apply it to nonvisual perception. Third, there is a need for more abstract nonvisual representations. Abstraction is important; it helps users easily understand the information's structure. Think about how stylistic, cartooning, or nonaccurate ideas might generate more-effective nonvisual forms. Evaluation and empirical testing is imperative if we are to understand what is effective and how variables interplay and interfere with each other.
References
1. R.B. Loftin, "Multisensory Perception: Beyond the Visual in Visualization," Computing in Science & Eng., vol. 5, no. 4, 2003, pp. 56-58.
2. D.L. Mansur, M.M. Blattner, and K.I. Joy, "Sound-Graphs: A Numerical Data Analysis Method for the Blind," J. Medical Systems, vol. 9, no. 3, 1985, pp. 163-174.
3. C. Ware, Information Visualization: Perception for Design, Morgan Kaufmann, 2000.
4. W. Yu et al., "Exploring Computer-Generated Line Graphs Through Virtual Touch," Proc. IEEE ISSPA 2001, IEEE CS Press, 2001, pp. 72-75.
5. D.J. Bennett, "Effects of Navigation and Position on Task when Presenting Diagrams to Blind People Using Sound," Proc. 2nd Int'l Conf. Diagrams, M. Hegarty et al., eds., LNCS 2317, Springer, 2002, pp. 161-175.
6. K. Franklin and J.C. Roberts, "Pie Chart Sonification," Proc. Information Visualization (IV03), Ebad Banissi et al., eds., IEEE CS Press, 2003, pp. 4-9.
7. A.W. Mills, "On the Minimum Audible Angle," J. Acoustical Soc. Am., vol. 30, no. 4, 1958, pp. 237-246.
8. R. Ramloll et al., "Using Nonspeech Sounds to Improve Access to 2D Tabular Numerical Information for Visually Impaired Users," Proc. People and Computers XV: Interaction Without Frontiers, Springer, 2001, pp. 515-530.
9. M. Blattner, D. Sumikawa, and R. Greenberg, "Earcons and Icons: Their Structure and Common Design Principles," Human Computer Interaction, vol. 4, no. 1, 1989, pp. 11-44.
10. E.D. Mynatt and G. Weber, "Nonvisual Presentation of Graphical User Interfaces: Contrasting Two Approaches," ACM CHI 94 Conf. Proc., ACM Press, 1994, pp. 166-172.
11. L.F. Ludwig, N. Pincever, and M. Cohen, "Extending the Notion of a Window System to Audio," Computer, vol. 23, no. 8, 1990, pp. 66-72.
12. B.P. Challis and A.D.N. Edwards, "Design Principles for Tactile Interaction," Haptic Human-Computer Interaction, S. Brewster and R. Murray-Smith, eds., LNCS 2058, Springer-Verlag, 2001, pp. 17-24.
13. B.N. Walker and D.M. Lane, "Psychophysical Scaling of Sonification Mappings: A Comparison of Visually Impaired and Sighted Listeners," Proc. Int'l Conf. Auditory Displays (ICAD), 2001, pp. 90-94.
14. J. Mackinlay, "Automating the Design of Graphical Presentations of Relational Information," ACM Trans. Graphics, vol. 5, no. 2, 1986, pp. 110-141.
15. J.C. Roberts, "Display Models: Ways to Classify Visual Representations," Int'l J. Computer Integrated Design and Construction, D. Bouchlaghem and F. Khosrowshahi, eds., vol. 2, no. 4, 2000, pp. 241-250.
16. T.A. Ryan and C.B. Schwartz, "Speed of Perception as a Function of Mode of Representation," Am. J. Psychology, vol. 69, 1956, pp. 60-69.
17. J.C. Roberts, K. Franklin, and J. Cullinane, "Virtual Haptic Exploratory Visualization of Line Graphs and Charts," The Engineering Reality of Virtual Reality 2002, M.T. Bolas, ed., vol. 4660B, Int'l Soc. Optical Engineering (SPIE), 2002, pp. 401-410.
18. M.H. Brown and J. Hershberger, "Color and Sound in Algorithm Animation," Computer, vol. 25, no. 2, 1992, pp. 52-63.
19. S. Barrass and B. Zehner, "Responsive Sonification of Well-Logs," Proc. Int'l Conf. Auditory Displays (ICAD), 2000; www.icad.org/websiteV2.0/Conferences/ICAD2000/PDFs/Barrass.pdf.
20. B. Shneiderman, "The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations," Proc. IEEE Visual Languages, IEEE Press, 1996, pp. 336-343.
Jonathan C. Roberts is a senior lecturer at the Computing Laboratory, University of Kent, UK. His research interests include exploratory visualization, nonvisual and multimodal visualization, visualization in virtual environments, multiple views, visualization reference models, and Web-based visualization. He received a BSc and PhD in computer science from the University of Kent. He is a member of the ACM, the IEEE, and Eurographics societies. Contact him at j.c.roberts@kent.ac.uk.
of time t as
y(t) = x
1
e
1
t
+ x
2
e
2
t
,
where x
1
, x
2
,
1
, and
2
are xed parameters. The negative
values
1
and
2
are rate constants; in time 1/
1
, the rst
exponential term drops to 1/e of its value at t = 0. Often we
can observe y(t) fairly accurately, so we would like to deter
mine the rate and amplitude constants x
1
and x
2
. This in
volves tting the parameters of the sum of exponentials.
In this project, we study efficient algorithms for solving
this problem, but well see that for many data sets, the solu
tion is not well determined.
How Sensitive Are
the x Parameters to Errors in the Data?
In this section, we investigate how sensitive the y function
is to choices of parameters x, assuming that we are given the
parameters exactly.
Typically, we observe the function y(t) for m fixed t values, perhaps t = 0, Δt, 2Δt, …, t_final. For a given parameter set α and x, we can measure the goodness of the model's fit to the data by calculating the residual

r_i = y(t_i) − y_e(t_i),  i = 1, …, m,  (1)

where y_e(t) = x_1 e^{α_1 t} + x_2 e^{α_2 t} is the model prediction. Ideally, the residual vector r = 0, but due to noise in the measurements, we never achieve this. Instead, we compute model parameters that make the residual as small as possible; we often choose to measure size using the 2-norm: ‖r‖² = rᵀr.

If the α parameters are given, we can find the x parameters by solving a linear least-squares problem, because r_i is a linear function of x_1 and x_2. Thus, we minimize the norm of the residual, expressed as

r = y − Ax,

where A_{ij} = e^{α_j t_i}; j = 1, 2; i = 1, …, m; and y_i = y(t_i).

We can easily solve this problem by using matrix decompositions, such as the QR decomposition of A into the product of an orthogonal matrix times an upper triangular matrix, or the singular value decomposition (SVD). We'll focus on the SVD because even though it's somewhat more expensive, it's generally less influenced by round-off error, and it gives us a bound on the problem's sensitivity to small changes in the data.

The SVD factors A = UΣVᵀ, where the m × m matrix U satisfies UUᵀ = UᵀU = I (the m × m identity matrix), the n × n matrix V satisfies VVᵀ = VᵀV = I, and the m × n matrix Σ is zero except for the entries σ_1 ≥ σ_2 ≥ … ≥ σ_n on its main diagonal. Because ‖r‖² = rᵀr = (Uᵀr)ᵀ(Uᵀr) = ‖Uᵀr‖², we can solve the linear least-squares problem by minimizing the norm of

Uᵀr = Uᵀy − UᵀAx = β − ΣVᵀx,

where β_i = u_iᵀy, i = 1, …, m, and u_i is the ith column of U. If we change the coordinate system by letting w = Vᵀx, then our problem is to minimize

(β_1 − σ_1 w_1)² + … + (β_n − σ_n w_n)² + β_{n+1}² + … + β_m².
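The SVD solve just described can be sketched in code. The column's own tools are Matlab; this is a NumPy translation, using the test values that appear in Problem 2 below (rates α = [−0.3, −0.4], coefficients x = [0.5, 0.5]ᵀ, noiseless data):

```python
import numpy as np

# A sketch of the SVD least-squares solve (NumPy translation of the
# Matlab workflow the column assumes). Model: y_e(t) = x1 e^{a1 t} + x2 e^{a2 t}.
alpha = np.array([-0.3, -0.4])          # given nonlinear parameters
x_true = np.array([0.5, 0.5])
t = np.arange(0, 6.005, 0.01)           # m = 601 observation times
A = np.exp(np.outer(t, alpha))          # A[i, j] = exp(alpha_j * t_i)
y = A @ x_true                          # noiseless observations

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ y                          # beta_i = u_i' y
w = beta / sigma                        # minimizes the rotated problem
x = Vt.T @ w                            # back to the original coordinates

print(x)                                # recovers x_true up to roundoff
print(sigma[0] / sigma[-1])             # condition number kappa(A)
```

Because the two exponentials decay at nearly the same rate, the printed condition number is large, which is the theme of the next two problems.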
66 Copublished by the IEEE CS and the AIP 15219615/04/$20.00 2004 IEEE COMPUTING IN SCIENCE & ENGINEERING
FITTING EXPONENTIALS:
AN INTEREST IN RATES
By Dianne P. O'Leary

Editor: Dianne P. O'Leary, oleary@cs.umd.edu

YOUR HOMEWORK ASSIGNMENT

In this issue, we investigate the problem of fitting a sum of exponential functions to data. This problem occurs in many real-world situations, but we will see that getting a good solution requires care.

Suppose we have two chemical reactions occurring simultaneously. A reactant's amount y changes because of both processes and behaves as a function of time.
In Problem 1, we see that the SVD gives us not only an algorithm for solving the linear least-squares problem, but also a measure of the sensitivity of the solution x to small changes in the data y.
Problem 1.
a. The columns of the matrix V = [v_1, …, v_n] form an orthonormal basis for n-dimensional space. Let's express the solution x_true as

x_true = w_1 v_1 + … + w_n v_n.

Determine a formula for w_i (i = 1, …, n) in terms of U, y_true, and the singular values of A.
b. Justify the reasoning behind these two statements:

‖A(x − x_true)‖ = ‖y − y_true − r‖ means ‖x − x_true‖ ≤ ‖y − y_true − r‖/σ_n;

y_true = Ax_true means ‖y_true‖ = ‖Ax_true‖ ≤ ‖A‖ ‖x_true‖.

c. Use these two statements and the fact that ‖A‖ = σ_1 to derive an upper bound on ‖x − x_true‖/‖x_true‖ in terms of the condition number κ(A) = σ_1/σ_n and ‖y − y_true − r‖/‖y_true‖.
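The bound from part (c) can be checked numerically. Below is a NumPy sketch (the specific noise level and rates are the values used in Problem 2; they are assumptions for this check, not part of the problem statement):

```python
import numpy as np

# Sanity check of the Problem 1(c) bound:
#   ||x - x_true||/||x_true||  <=  kappa(A) * ||y - y_true - r||/||y_true||.
rng = np.random.default_rng(0)
alpha = np.array([-0.3, -0.4])
x_true = np.array([0.5, 0.5])
t = np.arange(0, 6.005, 0.01)
A = np.exp(np.outer(t, alpha))
y_true = A @ x_true
y = y_true + 1e-4 * rng.uniform(-1.0, 1.0, size=t.size)   # perturbed data

x, *_ = np.linalg.lstsq(A, y, rcond=None)
r = y - A @ x                            # least-squares residual
sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma[0] / sigma[-1]

lhs = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
rhs = kappa * np.linalg.norm(y - y_true - r) / np.linalg.norm(y_true)
print(lhs, rhs)                          # lhs never exceeds rhs
assert lhs <= rhs
```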
The solution to Problem 1 shows that the sensitivity of the parameters x to changes in the observations y depends on the condition number κ. With these basic formulas in hand, we can investigate this sensitivity in Problem 2.
Problem 2. Generate 100 problems with data x_true = [0.5, 0.5]ᵀ, α = [−0.3, −0.4], and

y = y_true + εz,

where ε = 10^{−4}, y_true contains the true observations y(t), t = 0, 0.01, …, 6.00, and the elements of the vector z are uniformly distributed on the interval [−1, 1]. In a figure, plot the computed solutions x^{(i)}, i = 1, …, 100, obtained via your SVD algorithm, assuming that α is known. In a second figure, plot the components w^{(i)} of the solution in the coordinate system determined by V. Interpret these two plots using Problem 1's results. The points in the first figure are close to a straight line, but what determines the line's direction? What determines the shape and size of the second figure's point cluster? Verify your answers by repeating the experiment for α = [−0.3, −0.31], and also try varying ε to ε = 10^{−2} and ε = 10^{−6}.
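One possible NumPy sketch of this experiment follows (the plots themselves are omitted; instead, the spread of the w components is examined numerically):

```python
import numpy as np

# Problem 2 experiment, minus the plotting: 100 noisy right-hand sides,
# solved via the SVD, with solutions expressed in the V coordinate system.
rng = np.random.default_rng(1)
alpha = np.array([-0.3, -0.4])
x_true = np.array([0.5, 0.5])
t = np.arange(0, 6.005, 0.01)
A = np.exp(np.outer(t, alpha))
y_true = A @ x_true
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

eps = 1e-4
xs, ws = [], []
for _ in range(100):
    y = y_true + eps * rng.uniform(-1.0, 1.0, size=t.size)
    w = (U.T @ y) / sigma          # components in the coordinate system of V
    ws.append(w)
    xs.append(Vt.T @ w)            # the computed solution x^(i)
xs, ws = np.array(xs), np.array(ws)

# The w component belonging to the smallest singular value is by far the
# most sensitive one, so the x^(i) scatter stretches along the last column of V.
print(ws.std(axis=0))
```

This makes the answer to the Problem 2 questions visible in numbers: the spread in w_2 exceeds the spread in w_1 by roughly the factor κ(A).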
How Sensitive Is the Model
to Changes in the Parameters?
Now we need to investigate the sensitivity to the nonlinear parameters α. In Problem 3, we display how fast the function y changes as we vary these parameters, assuming that we compute the optimal x parameters using least squares.
Problem 3. Suppose that the reaction results in

y(t) = 0.5e^{−0.3t} + 0.5e^{−0.7t}.

Next, suppose that we observe y(t) for t ∈ [0, t_final], with 100 equally spaced observations per second. Compute the residual norm as a function of various α estimates, using the optimal values of x_1 and x_2 for each choice of α values. Make six contour plots of the log of the residual norm, letting the observation interval be t_final = 1, 2, …, 6 seconds. Plot contours of −2, −6, and −10. How helpful is it to gather data for longer time intervals? How well determined are the α parameters?
From the results of Problem 3, we learn that the α parameters are not well determined; a broad range of α values leads to small residuals. This is an inherent limitation in the problem, and we cannot change it. Nonetheless, we want to develop algorithms to compute approximate values of α and x as efficiently as possible, and we next turn our attention to this computation.
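The flat valley just described can be seen without any plotting. The NumPy sketch below (with assumed trial rates) eliminates x by an inner least-squares solve and compares the best residual at the true rates with the best residual at noticeably different rates:

```python
import numpy as np

# Problem 3's computation in miniature: for a trial alpha, eliminate x by
# linear least squares and report the resulting residual norm.
def best_residual(alpha, t, y):
    A = np.exp(np.outer(t, alpha))
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.linalg.norm(y - A @ x)

t = np.arange(0, 1.005, 0.01)            # t_final = 1 second
y = 0.5 * np.exp(-0.3 * t) + 0.5 * np.exp(-0.7 * t)

exact = best_residual([-0.3, -0.7], t, y)
wrong = best_residual([-0.4, -0.6], t, y)   # noticeably different rates
print(exact, wrong)   # both residuals are small: alpha is ill-determined
```

Even though the second pair of rates is far from the truth, its residual is tiny compared with ‖y‖, which is exactly why the contour plots show long, flat valleys.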
Solving the Nonlinear Problem
If we are not given the parameters α, then minimizing the norm of the residual r defined in Equation 1 is a nonlinear least-squares problem. For our model problem, we must determine four parameters. We could solve the problem by using standard minimization software, but taking advantage of the least-squares structure is more efficient. In addition, because two parameters occur linearly, taking advantage of that structure is also wise. One very good way to do this is to use a variable projection algorithm. The reasoning is as follows: our residual vector is a function of all four parameters, but given the two α parameters, determining optimal values of the two x parameters is easy if we solve the linear least-squares problem we considered in Problem 1. Therefore, we express our problem as a minimization problem with only two variables:

min_α ‖r(α)‖²,

where the computation of r requires us to determine the x parameters by solving a linear least-squares problem using, for instance, the SVD.

Although this is a very neat way to express our minimization problem, we pay for that convenience when we evaluate the derivative of the function f(α) = rᵀr. Because the derivative is quite complicated, we can choose either to use special-purpose software to evaluate it (see the Tools sidebar) or a minimizer that computes a difference approximation to it.
Tools

In a previous problem, we studied exponential fitting to determine directions of arrival of signals.¹ That problem was somewhat better posed, because the data did not decay.

Fitting a sum of exponentials to data is necessary in many experimental systems, including molecule fluorescence,² voltage formation kinetics,³ studies of scintillators using X-ray excitation,⁴ drug metabolism, and predator-prey models. Often, though, the publication of a set of rate constants elicits a storm of letters to the editor criticizing the methods used to derive them. It is important to do the fit carefully and to document the methods used.

A good source on perturbation theory, the singular value decomposition (SVD), and the numerical solution of least-squares problems is Åke Björck's book.⁵

Looking at a function's contours is a useful way to understand it. The Matlab function contour is one way to construct such a plot.

Gene Golub and Victor Pereyra described the variable projection algorithm Varpro, which solves nonlinear least-squares problems by eliminating the linear variables. Linda Kaufman noticed that each iteration runs faster if certain negligible but expensive terms in the derivative computation are omitted. Golub and Pereyra wrote a recent review of the literature on the algorithm and its applications.⁶

In Problems 4 and 5, if no standard nonlinear least-squares algorithm is available (such as lsqnonlin in Matlab), use a general-purpose minimization algorithm.

Although bad computational practices often appear in published papers involving fitting exponentials, many sources discuss the pitfalls quite lucidly. See, for example, Richard Shrager and Richard Hendler's⁷ work and Bert Rust's series of tutorials.⁸⁻¹⁰
References
1. D.P. O'Leary, "The Direction of Arrival Problem: Coming at You," Computing in Science & Eng., vol. 5, no. 6, 2003, pp. 60-70.
2. A.H. Clayton and W.H. Sawyer, "Site-Specific Tryptophan Dynamics in Class A Amphipathic Helical Peptides at a Phospholipid Bilayer Interface," Biophysical J., vol. 79, no. 2, 2000, pp. 1066-1073.
3. R.W. Hendler et al., "On the Kinetics of Voltage Formation in Purple Membranes of Halobacterium Salinarium," European J. Biochemistry, vol. 267, no. 19, 2000, pp. 5879-5890.
4. S.E. Derenzo et al., "Measurements of the Intrinsic Rise Times of Common Inorganic Scintillators," IEEE Trans. Nuclear Science, vol. 47, no. 3, 2000, pp. 860-864.
5. Å. Björck, Numerical Methods for Least Squares Problems, SIAM Press, 1996.
6. G. Golub and V. Pereyra, "Separable Nonlinear Least Squares: The Variable Projection Method and Its Applications," Inverse Problems, vol. 19, no. 2, 2003, pp. R1-R26.
7. R.I. Shrager and R.W. Hendler, "Some Pitfalls in Curve-Fitting and How to Avoid Them: A Case in Point," J. Biochemical and Biophysical Methods, vol. 36, nos. 2 and 3, 1998, pp. 157-173.
8. B.W. Rust, "Fitting Nature's Basic Functions," Computing in Science & Eng., vol. 3, no. 5, 2001, pp. 84-89.
9. B.W. Rust, "Fitting Nature's Basic Functions," Computing in Science & Eng., vol. 4, no. 4, 2002, pp. 72-77.
10. B.W. Rust, "Fitting Nature's Basic Functions," Computing in Science & Eng., vol. 5, no. 2, 2003, pp. 74-79.
Problem 4.
a. Use a nonlinear least-squares algorithm to determine the sum of two exponential functions that approximates the data set generated with α = [−0.3, −0.4], x = [0.5, 0.5]ᵀ, and normally distributed error with mean zero and standard deviation σ = 10^{−4}. Provide 601 values of (t_i, y(t_i)), with t = 0, 0.01, …, 6.0. Experiment with the initial guesses

x^{(0)} = [3, 4]ᵀ, α^{(0)} = [−5, −6]

and

x^{(0)} = [3, 4]ᵀ, α^{(0)} = [−1, −2].

Next, plot the residuals obtained from each solution, and then repeat the experiment with α = [−0.30, −0.31]. How sensitive is the solution to the starting guess?
b. Repeat the runs of part (a), but use variable projection to reduce to two parameters, the two components of α. Discuss the results.
To finish our investigation of exponential fitting, let's try dealing with some given data.

Problem 5. Suppose that we gather data from a chemical reaction involving two processes: one process produces a species and the other depletes it. We have measured the concentration of the species as a function of time. (If you prefer, consider the amount of a drug in a patient's bloodstream while the intestine is absorbing it and the kidneys are excreting it.) Figure 1 shows the data; it is also available at www.computer.org/cise/homework. Suppose your job (or even the patient's health) depends on determining the two rate constants and a measure of uncertainty in your estimates. Find the answer and document your computations and reasoning.
Finding rate constants is an example of a problem that is easy to state and often critically important to solve, but devilishly difficult to answer with precision.
Figure 1. Data for Problem 5. Given these measurements of species concentration (mg/ml) versus time (sec), or drug concentration (mg/liter) versus time (hours), find the rate constants.
Partial Solution to Last Issue's
Homework Assignment
MORE MODELS OF INFECTION: IT'S EPIDEMIC
By Dianne P. O'Leary

Problem 1. Model 1 consists of the differential equations

dI(t)/dt = αI(t)S(t) − I(t)/k,
dS(t)/dt = −αI(t)S(t),
dR(t)/dt = I(t)/k.

We start the model by assuming some proportion of infected individuals, for example, I(0) = 0.005, S(0) = 1 − I(0), and R(0) = 0. Run Model 1 for k = 4 and α = 0.8 until either I(t) or S(t) drops below 10^{−5}. Plot I(t), S(t), and R(t) on a single graph. Report the proportion of the population that became infected and the maximum difference between I(t) + S(t) + R(t) and 1.

Answer: We've posted sample programs at www.computer.org/cise/homework. Figure A shows the results; 95.3 percent of the population becomes infected.
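A minimal sketch of this computation follows, in Python/NumPy rather than the posted Matlab programs, using a fixed-step RK4 integrator (an adaptive solver such as ode45 is the natural choice in Matlab):

```python
import numpy as np

# Fixed-step RK4 integration of Model 1, with u = (I, S, R).
k, alpha = 4.0, 0.8

def rhs(u):
    I, S, R = u
    return np.array([alpha * I * S - I / k, -alpha * I * S, I / k])

u = np.array([0.005, 0.995, 0.0])
dt = 0.01
while u[0] > 1e-5 and u[1] > 1e-5:    # stop when I or S drops below 1e-5
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print(1 - u[1])            # proportion ever infected: about 0.953
print(abs(u.sum() - 1.0))  # I + S + R stays 1 to roundoff
```

Because the three right-hand sides sum to zero, I + S + R is conserved exactly by each RK4 step, so the reported maximum deviation from 1 is at roundoff level.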
Problem 2. Instead of using the equation dR/dt = I/k, we could have used the conservation principle

I(t) + S(t) + R(t) = 1

for all time. Substituting this for the dR/dt equation gives us an equivalent system of differential algebraic equations (DAEs); we will call this Model 2.

Redo Problem 1 using Model 2 instead of Model 1. To do this, differentiate the conservation principle and express the three equations of the model as My′ = f(t, y), where M is a 3 × 3 matrix.
Answer: Figure A shows the results, which, as expected,
are indistinguishable from those of Model 1.
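The mass-matrix form can be sketched as follows (a Python/NumPy translation; the forward Euler step is only for brevity, and a higher-order or mass-matrix-aware solver is the better choice in practice):

```python
import numpy as np

# Model 2 as M y' = f(t, y). Differentiating I + S + R = 1 gives
# I' + S' + R' = 0, which supplies the third row of M below.
k, alpha = 4.0, 0.8
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

def f(u):
    I, S, _ = u
    return np.array([alpha * I * S - I / k, -alpha * I * S, 0.0])

u = np.array([0.005, 0.995, 0.0])
dt = 0.005
for _ in range(12000):                     # integrate to t = 60
    u = u + dt * np.linalg.solve(M, f(u))  # M is nonsingular: y' = M^{-1} f

print(1 - u[1])   # agrees with Model 1: about 95.3 percent infected
```

Solving M y′ = f at each step recovers exactly the Model 1 right-hand side, which is why the two answers are indistinguishable.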
Problem 3.
a. Redo Problem 1 using Model 3,

dI(t)/dt = αI(t)S(t) − αI(t − k)S(t − k),
dS(t)/dt = −αI(t)S(t),
dR(t)/dt = αI(t − k)S(t − k),

instead of Model 1. For t < 0, use the initial conditions

I(t) = 0, S(t) = 1, R(t) = 0,

and let I(0) = 0.005, S(0) = 1 − I(0), and R(0) = 0. Note that these conditions match our previous ones at t = 0. Compare the results of the three models.

Answer: Figure B shows the results; 94.3 percent of the population becomes infected, slightly less than in the first two models. The epidemic dies out in roughly half the time.
Problem 4. Let S, I, and R depend on a spatial coordinate (x, y) as well as t, and consider the model

∂I(t, x, y)/∂t = αI(t, x, y)S(t, x, y) − I(t, x, y)/k + δ(∂²I(t, x, y)/∂x² + ∂²I(t, x, y)/∂y²)S(t, x, y),
∂S(t, x, y)/∂t = −αI(t, x, y)S(t, x, y) − δ(∂²I(t, x, y)/∂x² + ∂²I(t, x, y)/∂y²)S(t, x, y),
∂R(t, x, y)/∂t = I(t, x, y)/k.

To solve this problem, we will discretize and approximate the solution at the points of a grid of size n × n.
Let h = 1/(n − 1) and let x_i = ih, i = 0, …, n − 1, and y_j = jh, j = 0, …, n − 1. Our variables will be our approximations I(t)_{ij} ≈ I(t, x_i, y_j), and similarly for S(t)_{ij} and R(t)_{ij}.
a. Use Taylor series expansions, such as

I(t)_{i±1,j} = I(t)_{ij} ± h I_x + (h²/2) I_xx ± (h³/6) I_xxx + O(h⁴)

(with the derivatives evaluated at (t, x_i, y_j)), to show that we can approximate

∂²I(t, x_i, y_j)/∂x² = [I(t)_{i−1,j} − 2I(t)_{ij} + I(t)_{i+1,j}]/h² + O(h²).

We can derive a similar expression for ∂²I(t, x_i, y_j)/∂y².

b. Form vectors I(t) and S(t) from the unknowns I(t)_{ij} and S(t)_{ij}, and show that the discretized model can be written as

dI(t)/dt = αI(t).*S(t) − I(t)/k + δ(AI(t)).*S(t),
dS(t)/dt = −αI(t).*S(t) − δ(AI(t)).*S(t),
dR(t)/dt = I(t)/k,

where .* denotes the elementwise product of two vectors,

A = (1/h²)(T ⊗ I + I ⊗ T),

T is the tridiagonal matrix with −2 in each diagonal position, 1 in each off-diagonal position, and 2 in entries (1, 2) and (n, n − 1),
Figure A. Solution from the ordinary differential equation model: proportion of individuals infected by the epidemic from the ODE Model 1 or the DAE Model 2. (Time versus proportion of population; curves show infected, susceptible, and recovered.)
Figure B. Solution from the delay differential equation model: proportion of individuals infected by the epidemic from the DDE Model 3. (Time versus proportion of population; curves show infected, susceptible, and recovered.)
and T and I are matrices of dimension n × n. (The notation C ⊗ D denotes the matrix whose (i, j)th block is c_{ij}D. The Matlab command to form this matrix is kron(C,D), which means Kronecker product of C and D.)
Problem 5.
a. Set n = 11 (so that h = 0.1), k = 4, α = 0.8, and δ = 0.2, and use an ODE solver to solve Model 4. For initial conditions, set S(0, x, y) = 1 and I(0, x, y) = R(0, x, y) = 0 at each point (x, y), except that S(0, 0.5, 0.5) = I(0, 0.5, 0.5) = 0.5. (For simplicity, you need only use I and S in the model, and you may derive R(t) from these quantities.) Stop the simulation when the average value of either I(t) or
Figure 2. Polar diagram of the function sin φ = (1/2i)(e^{iφ} − e^{−iφ}). φ is the angle with the positive x-axis; the minus sign on the lower circle indicates that sin φ is negative for π < φ < 2π.
rather than real. By multiplying the function by a constant phase of e^{−iπ/2} (see Figure 3f), we get the real function p_y. To show the similarity between this representation and the conventional polar diagram, we decreased the ring's radius in Figure 3f. As the ring's arbitrary radius approaches zero, the new representation reduces to a polar diagram with the added value of phase color, which allows the presentation of complex functions (compare Figure 3f to Figure 2).
Spherical Harmonics
and Their Superposition
A closely related topology to the ring is the sphere, which adds a second angular variable, θ, the angle with the positive z-axis. The most notable example of this topology is the 3D rigid rotor, which is part of the solution of all central force systems, including the hydrogen atom. Again, to emphasize the system's 2D topology and avoid improper inclusion of distance from the origin as a variable (which often arises when using polar diagrams), the wavefunction is drawn on a sphere's surface. The wavefunction's phase is denoted by color, as before, but the amplitude is now encoded as opacity. The wavefunction is opaque at the maximum amplitude, partially transparent at medium amplitudes, and completely transparent at the nodes. The physical basis of this encoding comes from viewing opacity as a measure of probability density.

We demonstrate this approach for the three spherical harmonics with l = 1, Y_1^m(θ, φ), which are eigenfunctions of the free-particle-on-a-sphere Hamiltonian. The spherical topology imposes two constraints on the wavefunction. The first concerns φ and is the same as in the case of the ring. The second constraint concerns the poles (θ = 0 and π), at which the function's value must be the same for all φ values. This can be achieved either by setting m = 0 (see Figure 4a) or by having a node at the poles (Figures 4b and 4c). It is instructive to note the resemblance between Figures 3b and 3c and Figures 4b and 4c when viewed from the direction of the z-axis.
Encoding the amplitude with opacity does not provide a quantitative measure of it. However, the important features of the spherical wavefunctions, namely the direction of maximum amplitude and the existence of nodal planes, are easily observed. These features are also sufficient for determining the result of superposition of the spherical wavefunctions. Using arguments similar to those used in the previous section, it is easy to see that p_x = Y_1^{+1} + Y_1^{−1} (see Figure 4d), and p_y = (Y_1^{+1} − Y_1^{−1})/i (see Figures 4e and 4f). The third orbital in this set, p_z, is Y_1^{0} (see Figure 4a). All three orbitals have