
The Materialities of Maya: Making

Sense of Object-Orientation

Casey Alt
Stanford University

The process of computer graphics modeling is often described as an inherently disembodied activity. At the popular level, this descrip-
tion often stems from the conflation of computer graphics and Vir-
tual Reality—a term that (regrettably) has come to connote a seam-
less, three-dimensional, hyperrealistic “world” of computer graphics.
Initially viewed as an unconstrained realm for sensory experimenta-
tion, Virtual Reality has signified for most critics a superficial dou-
bling of surface reality that privileges visuality in such a way as to
more strongly foster an eye-mind link that has little, if anything, to
do with the particular materialities of human embodiment. As one
of the leading advocates of this stance, Jonathan Crary has argued
that “computer-aided design, synthetic holography, flight simula-
tors, computer animation, robotic image recognition, ray tracing,
texture mapping, motion control, virtual environment helmets,
magnetic resonance imaging, and multispectral sensors are only a
few of the techniques that are relocating vision to a plane severed
from a human observer.”1 As a result of such diagnoses of digital me-
dia, Virtual Reality (VR) most commonly has been hailed as an anti-
dote to embodiment and is often portrayed as a means for freeing
perception (vision) from the constraints of the flesh, enabling expe-
rience to transcend the contingencies of the material.
To a large extent, much of the confusion concerning the specific
media properties of computer graphics has flowed from the fact that
1. Jonathan Crary, Techniques of the Observer: On Vision and Modernity in the Nineteenth
Century (Cambridge, Mass.: MIT Press, 1990), p. 1.

Configurations, 2002, 10:387–422 © 2004 by The Johns Hopkins University Press and the Society for Literature and Science.


most critics have encountered the material output of computer graphics programs as already well-recognized and well-theorized me-
dia forms. For example, Alias|Wavefront’s Maya, the 3-D modeling
and animation application that I will be discussing in this paper, can
export representations of any of its 3-D “scenes” as 3-D models that
might appear in a video game or on a product website; as digital
video that might appear as a special effect in a cinematic movie, or as
the entire movie itself; as digital still images that might appear in sci-
entific publications or architectural proposals; and even, to a limited
extent, as text that describes the interrelationships of the program-
matic objects within the scene. This capacity for differential material output has often led critics steeped in one particular medium to project backward that digital media are simply computerized versions of traditional modes of production—a move that effectively reduces the inherent multiplicity of digital media to one specific media output. This tendency
seems to occur most often among contemporary film theorists who
have argued that, because computer graphics programs are often used
in films and are often included within the typical filmic conventions
(such as cuts, pan shots, etc.), “new media” in general can be under-
stood as merely an extended practice of filmmaking—an argument
that I would diagnose as both logically and empirically problematic.
To make matters worse, critical accounts have also been compli-
cated by the fact that not only can digital media programs such as
Maya export their computer graphics content to a multitude of me-
dia formats, but there is also a reciprocal mediation occurring since
the content of many of the more “traditional” media types (printed
texts, films, analog audio recordings, etc.) is increasingly being digitized to be used as raw material for digital media applications. Thus,
many critics have characterized digital media as a universal medium
in which other media can be intermixed and interrelated. Theorists
such as Jay David Bolter and Richard Grusin have therefore argued
in Remediation: Understanding New Media that digital media can be
seen as “remediating” other media and even “hypermediating” the
multitude of different formats.2 However, the result of this approach
has been a view that digital media can be understood best through a
bricolage of theories developed around each of the “constituent”
media, a move that reduces digital media to a medium constituted
by differences between the other media it “contains” while denying
any specific, positive mediality or materiality to digital media them-
selves.
2. Jay David Bolter and Richard Grusin, Remediation: Understanding New Media (Cam-
bridge, Mass.: MIT Press, 1999).

A further problem with such approaches has been the argument that the digitization or “remediation” of other media amounts to
the dissolution of media differences into a single, immaterial stream
of information. Friedrich Kittler has perhaps best represented this
perspective by arguing that “the general digitization of channels and
information erases the differences among individual media. Sound
and image, voice and text are reduced to surface effects, known to
consumers as interface. Sense and senses turn into eyewash. . . . In-
side the computers themselves everything becomes a number: quan-
tity without image, sound, or voice.”3 Whereas Kittler in the past has
provided an invaluable perspective and methodology for under-
standing the materiality and cultural embeddedness of other media,
his formulation of digital media insists on denying any materiality to
their forms and effectively resuscitates older cybernetics tendencies
toward reducing mediated experiences to an immaterial sequence of
zeros and ones that Kate Hayles has so carefully catalogued and cau-
tioned against in How We Became Posthuman: Virtual Bodies in Cyber-
netics, Literature, and Informatics.4
My intent in this paper is therefore to offer a positive alternative
for characterizing the materiality and medium specificities of one
particular digital media application: Alias|Wavefront’s premier com-
puter graphics and modeling program known simply as Maya. In
theorizing the materialities of Maya, I wish to base my analysis on
the model of “materialities of communication” advanced by Tim
Lenoir, which posits that media technologies should be understood
not only as signifying elements but also as material objects that are
constructed according to social and technical constraints, and that
exist in excess of their capacity for signification. Lenoir has described
such an approach with respect to scientific media:
if we consider that science constructs its object through a process of differen-
tial marking, and it makes that object stable through public forms for the con-
struction and dissemination of meaning, then consideration of communica-
tion technologies and technologies of representation becomes fundamental.
They are “machines” which mediate and stabilize our representations. Fur-
thermore, as extensions of the senses simultaneously affecting persons dis-
persed over numerous sites, they are powerful sources of mediation, multipli-
cation, and stabilization of technoscientific practice. In their material form
media do not provide mere “representatives” of an object described by theory,

3. Friedrich A. Kittler, Gramophone, Film, Typewriter, trans. Geoffrey Winthrop-Young and Michael Wutz (Stanford, Calif.: Stanford University Press, 1999), p. 1.
4. N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Litera-
ture, and Informatics (Chicago: University of Chicago Press, 1999).

they create the space within which the scientific object exists in a material
form.5

In keeping with Lenoir’s program for understanding the “materialities of communication,” I seek to adopt here a similar tactic of exteri-
ority in explaining how media are materially constructed objects
whose specific material configurations function to extend their meth-
ods of representation throughout expanding networks of practice.
My examination of Maya therefore concentrates not on its outputted models, images, or movie clips, but rather on the locus of praxis itself: Maya’s design interface
and the possibilities for production that this interface enables. While
other approaches have raised interesting questions about how the
products of digital media interact with other media formats, an analy-
sis of the interface maps the specific materialities inherent in the ap-
plication itself and the ways in which these materialities reorganize
both perception and practice through a new phenomenology and
ontology of production. In exploring this interface and its relation
to practice, I will begin by mapping the web of social, technical, and
business relations that were fundamental in shaping Maya as a tool
for mediating user experience.

The Socio-Technical Construction of Maya


As is the case with most media, Maya did not spring fully formed
from the head of Silicon Valley. Digital media do not just happen;
rather, they are constructed through the complex power negotiations
of people and groups, each of whom has specific financial, technical,
and social interests in how a particular media technology is mani-
fested, marketed, and modified. As we shall see in the next section of
this paper, the histories of these interactions often become embed-
ded in the interface of the media technologies themselves.
Now in its fourth product release, Maya is the child of a merger of three computer graphics powerhouses: Alias Research,
Wavefront Technologies, and Silicon Graphics. Silicon Graphics, Inc.
(SGI), the largest and best-known of the three companies, was
founded in 1982 by Stanford University professor Jim Clark and his
graduate student Kurt Akeley.6 The previous year Clark had received a
patent for the geometry engine, a specially designed series of RISC

5. Timothy Lenoir, “Inscription Practices and Materialities of Communication,” in Inscribing Science: Scientific Texts and the Materiality of Communications, ed. Timothy Lenoir (Stanford, Calif.: Stanford University Press, 1998), pp. 1–19, on p. 12.
6. Timothy Lenoir, “All but War Is Simulation: The Military-Entertainment Complex,”
Configurations 8 (2000): 302.

processors that hardwired many of the early graphics algorithms for processing and displaying 3-D graphics in realtime. By offloading
many of the bulky software algorithms for processes such as shading,
texturing, and 3-D projections onto a 2-D planar surface into a
“firmware” chip that was separate from the central processor of the
system, the geometry engine (the forerunner of today’s graphics
cards) exponentially increased the speed at which computer graph-
ics could be generated and manipulated. In 1983, SGI released its
first graphics workstation, the Iris 1000, and established itself as the
leader in graphics-rendering hardware and software.
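A simplified sketch of the arithmetic such hardware hardwires may help: in its most reduced form, projecting a 3-D point onto a 2-D image plane is the perspective divide. For a camera at the origin looking down the z-axis with focal distance f (this toy setup is illustrative, not SGI’s actual firmware mathematics):

```latex
% Simplified perspective projection of a 3-D point (x, y, z)
% onto a 2-D image plane at focal distance f.
x' = f\,\frac{x}{z}, \qquad y' = f\,\frac{y}{z}
```

Production pipelines express this as a 4 × 4 homogeneous matrix transform applied to every vertex of a scene; it is precisely this repetitive per-vertex arithmetic that moving from general-purpose software into dedicated firmware accelerated.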
The same year that SGI released the Iris 1000, a Toronto-based
group of graphics programmers—Stephen Bingham, Nigel McGrath,
Susan McKenna, and David Springer—began thinking about creating
a company to develop an intuitive software package for producing
realistic 3-D video animations.7 After securing a $61,000 grant from
the National Research Council (NRC), the team (which would even-
tually be known as Alias Research, Inc.) began developing their first
product code. At the 1985 SIGGRAPH conference in San Francisco,
Alias unveiled its ALIAS/1 modeling and animation package.
ALIAS/1 distinguished itself at the time by its ability to model cardi-
nal splines that produced smoother and more realistic lines and sur-
faces than traditional polygon-based methods. Within the same
year, Alias signed a deal with General Motors to develop a NURBS-
based (Non-Uniform Rational B-Spline) animation system that
would be compatible with GM’s spline-based computer-aided design
(CAD) technology. Also in 1985, Alias approached SGI regarding an
exclusive deal to market their software on SGI graphics workstations.
Spotting a lucrative opportunity to sell more hardware systems, SGI
agreed. Thanks to the success of its GM contract, the following year
Alias secured $1.2 million in venture capital backing from Crownx (a
venture capital company associated with Crown Life) and also
signed deals with Kraft and Motorola. That same year, Alias also beat
out two American bidders to provide the visualization equipment
used in the Hubble telescope.
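For readers unfamiliar with the acronym, the standard textbook definition of a NURBS curve may be useful. The “rational” weights are what allow such splines to represent conic sections exactly, a requirement for compatibility with CAD systems like GM’s (the notation below is the conventional one, not Alias’s internal formulation):

```latex
% Standard NURBS curve definition: a weighted (rational) combination
% of control points over a non-uniform knot vector.
C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, P_i}{\sum_{i=0}^{n} N_{i,p}(u)\, w_i}
```

Here the P_i are control points, the w_i their weights, and the N_{i,p} are the degree-p B-spline basis functions defined over a non-uniform knot vector.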
In 1984, only a year after the founding of Alias, Mark Sylvester,
Larry Barels, and Bill Kovacs founded Wavefront Technologies with
the goal of supplying 3-D graphics software to the television com-
mercial and movie industries. Whereas Alias had initially focused on
providing high-end CAD applications to large product-design divi-
sions, Wavefront set its sights on the entertainment industry. Within
7. Unless otherwise noted, the information concerning the histories of Alias and Wave-
front has been adapted from the detailed corporate history provided by Alias|Wave-
front at http://www.aliaswavefront.com/en/WhoWeAre/industry_timeline/index.shtml.

that same year, the Santa Barbara–based company released Preview, which was used to create the animated opening graphics for Show-
time, BRAVO, and National Geographic Explorer televised features.
Wavefront also shipped several copies of Preview to Universal Stu-
dios for television production, which were used for previsualization
and motion camera control on popular shows such as Knight Rider.
In 1985, Wavefront succeeded in selling their Preview package to
NBC in New York, Electronic Arts in London, Video Paint Brush of
Australia, Failure Analysis of Mountain View, California, and NASA.
In 1986, Alias continued its success by releasing its second-
generation ALIAS/2 software package, which integrated much of the
NURBS technologies that had been developed in association with
GM. The success of ALIAS/2 allowed Alias to secure further venture
capital funding from U.S. Investors Greylock and TA Associates in
1987. In 1988, Alias changed its sales focus by granting exclusive
sales rights of ALIAS/2 to the entertainment industry to BTS (now
Philips BTS), which distributed ALIAS/2 with their Pixelrator ren-
dering machine—a move that allowed Alias to concentrate on ex-
panding its computer-aided industrial design (CAID) markets while
also contesting Wavefront’s exclusive claim to the developing enter-
tainment markets. Alias’s strategy was a successful one: by the end of
1988, the company had expanded its list of clients to include Timex,
Reebok, Oakley, Kenner, BMW, Honda, Volvo, Apple, GE, Sony, In-
dustrial Light and Magic (ILM), Broadway Video, and The Moving
Picture Company. As Alias had encroached upon Wavefront’s enter-
tainment markets, Wavefront retaliated in the industrial design mar-
kets by releasing its Personal Visualizer, a 3-D computer animation
system (codeveloped with SGI) that gave CAD users a point-and-
click interface for high-end photorealistic rendering. Wavefront
ported Personal Visualizer to Sun, IBM, Hewlett-Packard, Tektronix,
DEC, and Sony machines, with the strategy to bundle the software
with every system sold.
The release of ALIAS/2 also sparked interest in Hollywood in using
the powerful set of NURBS-based computer graphics and animation
tools to create images that were not possible with Wavefront’s prod-
ucts. In 1989 James Cameron included the first entirely computer-
generated “character” in his movie The Abyss. The underwater
“pseudopod” creature was created by Steve Williams, who had left
Alias for ILM to help with the film. Williams used ALIAS/2.4.2 (run-
ning on SGI 4D/70G and 4D/80GT workstations) for modeling the
pseudopod because its patch-based B-spline modeling system al-
lowed the water-based creature to be represented with a smoother
geometric model. In the same year, Wavefront expanded into scientific markets (previously the sole territory of SGI) with its Data Visualizer application, a highly flexible system for visualizing complex scientific data (Fig. 1).

Figure 1. 1994 screenshot of graphical user interface for Wavefront’s Data Visualizer. Image source: M. Böttinger, Deutsches Klimarechenzentrum. Personal communication.
In 1990, Alias went public on the U.S. market and raised $35 mil-
lion in its initial public offering. Within the same year, the company
decided to definitively stake its claim in the entertainment industry
by differentially marketing its third-generation software release as
Studio for industrial design and as PowerAnimator for the entertain-
ment market. That same year, ILM and James Cameron again struck
gold (quite literally) with Alias software by using PowerAnimator to
create the T1000, Arnold Schwarzenegger’s chromium robotic foe in
Terminator 2: Judgment Day; the T1000 garnered ILM an Academy
Award for Best Visual Effects. In 1991, Alias acquired Sonata, a high-
end 3-D architectural design and presentation system, from T2 Sys-
tems of the United Kingdom. With the addition of Sonata, Alias
comprised four distinct divisions: Alias Division (industrial design
and entertainment), Style! Division (UpFront and Mac/Win for ar-
chitects, and Sketch on Mac for illustrators), Sonata Division (architecture), and Full Color Division (prepress and photo retouching). In 1992 Alias launched AutoStudio, an industrial design package specif-
ically tailored to automotive designers. Not to be outdone by Alias’s
aggressive incursions into the entertainment market, Wavefront also
introduced two new products that same year: the first was called
Kinemation with SmartSkin™, a complete 3-D character-animation
system for creating synthetic actors with natural motion and muscle
behavior; the second product, Dynamation, was a powerful 3-D ani-
mation tool for interactively creating and modifying dynamic
events.
In 1993 Alias began production of a new entertainment software
that would eventually become Maya. Also in 1993, Alias released
StudioPaint, a product designed in close collaboration with engi-
neers and designers at Ford. Promoted with the phrase “We’re not
just realistic. We’re real!,” StudioPaint was a high-end paint package
designed for automotive sketching and rendering with real-time air-
brushes. Alias PowerAnimator would also bring in yet another Oscar
for Best Visual Effects for ILM in 1993 for populating Steven Spiel-
berg’s Jurassic Park with its impressively animated dinosaurs. In order
to augment its products with NURBS technology that could rival
that of Alias, Wavefront acquired Thomson Digital Images (TDI), the
leading French computer graphics company, in the same year.
The year 1994 saw the massive expansion of Alias and Wavefront
to both the cinema and video-game sectors of the entertainment
market. Atari adopted Wavefront’s GameWare as the exclusive game
graphics and animation development software for the Atari Jaguar
system. Perhaps even more important, in the same year Alias signed
a similar deal with Nintendo to be the key software provider for Nin-
tendo 3-D graphics development, only a year after Jim Clark an-
nounced that SGI had partnered with Nintendo to bring high-end
3-D SGI technology to the upcoming Nintendo64 game console.8 As
a result of this deal, Alias PowerAnimator was used to create enor-
mously popular Nintendo hits such as Donkey Kong Country, and the
Nintendo contract pushed Alias to the forefront of video-game mar-
ket control. In terms of Hollywood, Dream Quest Images created
more than ninety visual-effects sequences for Tom Clancy’s Crimson
Tide using Dynamation and Composer software. Wavefront software
was used to produce cinematic blockbusters such as Outbreak,
Aladdin, True Lies, and Stargate. Alias also reaped record profits (181
percent increase) in 1994, thanks to the success of PowerAnimator in
five of the biggest movies of the year: Forrest Gump, The Mask, Speed,

8. Lenoir, “All but War Is Simulation” (above, n. 6), p. 306.



The Flintstones, and Star Trek Generations. By 1994, Alias
special-effects customers included ILM, Angel Studios, Digital Do-
main, Dream Quest Images, Cinesite, Metrolight Studios, Pixar, Sony
Pictures Imageworks, Video Image, Disney, and Warner Brothers.
The year 1995 marked a turning point in the development of
both companies: on February 7, Wavefront, Alias, and Silicon Graph-
ics announced plans to enter into definitive merger agreements.
More correctly, SGI acquired both Alias Research, Inc., and Wave-
front Technologies and combined them into a single subsidiary com-
pany called Alias|Wavefront (A|W). Mark Sylvester, cofounder of
Wavefront and “Ambassador” at A|W, explained the merger decision
at the time by stating: “We created digital skin, then [Alias] did; now
they’ve created digital hair and we’re working on digital clothing.
With both of us working together, we can attack the bigger technical
problems instead of duplicating work.”9
Immediately following the merger of Alias and Wavefront, A|W fo-
cused its combined attention on the development of a next-generation
animation software. While Alias had been working on its “next-
generation, very secret product development,” Wavefront was in the
process of also developing a revamped product from the technology
it had acquired from TDI.10 Not wishing to replicate progress on dif-
ferent product lines, the president of A|W at the time challenged the
technical team to come up with a unified product agenda that would
fuse the requirements of the Wavefront users, the Alias users, and
the TDI users into one next-generation product that was deliverable
within a year.11 It would in fact take A|W more than three years to
complete its “absolutely overly ambitious goal,”12 but in 1998 it re-
leased its much-anticipated flagship product Maya. Since its initial
release, Maya has continued Alias|Wavefront’s virtually unchal-
lenged domination of the computer graphics world: almost every
cinematic computer-graphics sequence has been produced on A|W
applications, including those in Toy Story, A Bug’s Life, The Matrix,
Star Wars Episodes I and II, Stuart Little, Final Fantasy: The Spirits
Within, Monsters Inc., Shrek, Lord of the Rings, and Spider-Man. Simi-
larly, A|W has continued its penetration of industrial and product-
design markets by uniting its product lines into its Studio|Tools de-

9. Mark Sylvester, in Alias|Wavefront company history located online at http://www.aliaswavefront.com/en/WhoWeAre/industry_timeline/95-96/index.shtml.
10. Mark Sylvester, in an interview with Perry Harovas published in John Kundert-
Gibbs and Peter Lee, Mastering Maya 3 (San Francisco: Sybex, 2001), p. 911.
11. Ibid., p. 912.
12. Ibid.

sign system. The company has also sustained much of its influence
in the video-game design sector, though it has been aggressively
challenged by Discreet’s 3-D Studio Max. Maya was used to create
the 3-D models for six of the ten top-selling Sony PlayStation® 2 ti-
tles for December 2000, and is currently being used by over half of
the game developers for the Microsoft Xbox. In its continuing dom-
inance of several different entertainment markets, Maya has there-
fore generated much of our contemporary graphic imaginary.
As its history would suggest, Maya is the result of a very heteroge-
neous software design process: it was born from the merger of the
products and practices of essentially three different graphics software
companies (Alias, Wavefront, and TDI) and at least three different
corporate structures (Alias, Wavefront, and Silicon Graphics). Much
of the challenge in designing Maya therefore centered on combining
the best features of each software system into one product with one
interface, without driving away each company’s previous user base.
Mark Sylvester has emphasized the problems inherent in the new
Maya collaboration:
So you had this really interesting challenge because the Alias developers really
knew how to think along the lines that they had been accustomed to thinking
for 10 years as [sic] the same was with Wavefront developers and the Parisian
developers. The first year was basically spent understanding the requirements
of the various installed bases because they were all very, very different. We also
spent a great deal of time learning how to work together, across continents and
language barriers. . . . There was the California approach, there was the Cana-
dian approach, and there was the Parisian approach. They had their own
zealots who felt that their given approach was the right way to do it. Yet, at
the end of the day, we all made pictures and got pixels up on the screen.13

Maya was thus, from its inception, a hybridization of various design strategies and programs. However, it is perhaps somewhat surprising
that the multiple cross-cultural negotiations and compromises that
created a corporate environment for Maya were also built into the
very materialities of Maya itself. Indeed, the heterogeneous nature of
its techno-social origins resulted in an interface that is equally frag-
mented, multiple, and disparate.

The Phenomenology of Maya


It has often been said that one does not learn Maya as much as one
attempts to grasp one very small and specific task in Maya and then
gradually expand outward from it. In other words, most users do not

13. Ibid.

Figure 2. Screenshot from Maya 3.0 showing several ways for viewing object data.

globally comprehend Maya as much as they locally navigate through specific parts of it—a process that produces a unique relationship be-
tween digital artists and Maya’s graphic user interface (GUI). While
Maya’s animation and modeling capabilities are far too extensive for
me to fully explore in this paper, I wish to devote this section to map-
ping several of the most salient interface characteristics.
Perhaps the most initially confusing aspect of Maya is that there
is no single fixed way of viewing information. Partially due to the
hybridity of its own design process, Maya allows multiple ways for
displaying its operations. For example, viewing information about a
newly created NURBS sphere can be accomplished by several differ-
ent methods (Fig. 2). As a first view, there is the graphic representa-
tion of the object in the perspective window. To the right of the per-
spective window, there is the Channel Box that lists each of the
object’s various attribute parameters, including its location (transla-
tion) and scale in each direction of the coordinate plane. Addition-
ally, one can also view any of the channel properties in the Attribute
Editor, which lists even more options for the object and its con-
stituent parts. Or, one can open a Hypergraph window, which dis-
plays the object and its various relationships to other objects in the
current scene. As a fifth option, one can view the object in a hierar-
chical relation to all the other scene objects by opening up the Out-
liner window. As yet another view of the sphere, the Script Editor

Figure 3. Screenshot from Maya 3.0 (in clockwise order from top) showing the perspec-
tive, side orthogonal, front orthogonal, and top orthogonal views of objects in a scene.

lists the actual Maya Embedded Language (MEL) script that was used
to produce the object. Each of these views onto the object provides
a continuously updated, though differently arranged, picture of the
design space. While providing a great degree of interpretative flexi-
bility, this many-faceted approach also requires an extensive orien-
tation process when beginning to use Maya, particularly when most
tutorials encourage the user to work through each of the separate in-
terface options without explicitly stating that each of the multiple
views represents the same object world.
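The point that each view is a window onto one underlying object world can be made concrete in Maya’s own scripting idiom. The following MEL sketch creates the sphere discussed above and then inspects, through commands, the same data that the Channel Box, Attribute Editor, Outliner, and Hypergraph display graphically (object names here are illustrative, not canonical):

```mel
// A hypothetical MEL sketch: one NURBS sphere, several "views" onto it.
string $result[] = `sphere -name "demoSphere"`;  // returns the transform and its history node
setAttr "demoSphere.translateX" 2;               // the value the Channel Box edits graphically
getAttr "demoSphere.translateX";                 // Result: 2
listRelatives -shapes "demoSphere";              // the shape node beneath the transform, as in the Outliner
listConnections "demoSphere";                    // the dependency links the Hypergraph draws as a graph
```

Each command interrogates the same scene object; only the arrangement of the information differs, which is exactly the relation among Maya’s multiple graphic views.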
A second, though perhaps less confusing, aspect of Maya’s visual
interface is the ability to simultaneously open multiple graphic dis-
play windows on the same set of objects. For example, if one wants
to view a set of simple objects such as the sphere, cone, cylinder, and
torus in Figure 3, one can do so from one of three “flat” orthogonal
perspective (front, top, and side) windows, or through the default
perspective window, which presents the objects as located within a
“three-dimensional” perspective plane according to all the perspec-
tival conventions of classic Renaissance perspective. In order to
move through the perspective view, one can use a combination of
keyboard and mouse to perform different camera movements in re-
lation to the objects. For example, it is possible to dolly in or out of
the scene by pressing the ALT key and the right and middle mouse

Figure 4. Maya 3.0 screenshot showing four camera angles from four different camera ob-
ject views.

buttons while moving the mouse upward or downward. To track from side to side, the user must hold down the ALT key and the mid-
dle mouse button while moving the mouse to the right or left. Tum-
bling, or rotating the camera around any of its three spatial axes, is
accomplished by pressing the ALT key and the right mouse button
while moving the mouse in any direction. Further, the default per-
spective camera is just one possible perspectival view onto the Maya
world: as Figure 4 shows, Maya can accommodate an infinite num-
ber of other camera views, each of which can be manipulated in sim-
ilar ways to display simultaneous but differently located perspectives
onto the scene.
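These camera manipulations also exist in scripted form. The sketch below creates an additional camera object and performs dolly-like moves on it through MEL commands rather than ALT-key and mouse gestures; the command and flag names follow Maya’s MEL conventions, but the specific names and values here are illustrative assumptions and should be checked against the documentation:

```mel
// Hypothetical MEL sketch: camera moves as script rather than gesture.
string $cam[] = `camera -name "secondCam"`;  // create an additional camera object
move 10 5 10 $cam[0];                        // position the camera in the scene
rotate -20 45 0 $cam[0];                     // aim it (a tumble-like rotation)
lookThru $cam[0];                            // view the scene through the new camera
dolly -distance -2 $cam[0];                  // the scripted form of an ALT+drag dolly
```

As with the graphic views, an unlimited number of such cameras can coexist, each offering a simultaneous but differently located perspective onto the scene.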
In terms of using Maya’s tools, a comparable (if not greater) set of
options exists. For starters, there are four main sets of tools that de-
scribe distinctly different modes of operation in Maya: modeling, an-
imation, dynamics, and rendering. The modeling menu set contains
tools for building either NURBS or polygonal objects. The animation
menu set provides tools for managing transformations of modeled
objects and their interactions over time. Animation time in Maya is
measured by frames, in which one frame represents 1/24 of a sec-
ond. The dynamics menu set applies simulated rules of physics and
kinematics, such as collisions, gravity, and turbulence, to the objects
within an animation; Maya objects can take on specific “physical”

Figure 5. Screenshot from Maya 3.0 showing the procedure for toggling through different
main menu sets.

properties so that they can be moved and deformed in response to the dynamic properties of other objects within the scene. The ren-
dering menu set provides several tools that affect the final look of
Maya’s outputted images or movies, such as surface materials and
textures for different objects, light effects, and motion blurs. As is
shown in Figure 5, choosing different menu sets brings up different
tools that can be used throughout the design process.
To make matters even more complicated, most experienced users
never use the main tool menu shelves that are built into Maya, but
instead use another view of the menus known as the hotbox. The
hotbox is activated by holding down the space bar in any of the
frame views and appears as a semitransparent, superimposed menu
image wherever the mouse pointer happens to be when the spacebar
is pressed (Fig. 6). Though initially somewhat disorienting to use, the
hotbox provides the same tool options as the tool shelves without
taking up precious screen real estate during the design process. As
yet another alternative, Maya also allows for marking menus, which
are engaged by holding down the right mouse button on various objects
within a scene. Each object-specific marking menu displays a
field of menu options for that object.

Figure 6. Screenshot from Maya 3.0 of superimposed hotbox menu.

For example, the marking menu for the NURBS plane in Figure 7
allows the user to toggle among isoparm, hull, control vertex, surface
patch, or surface point views of the object, while also allowing for the
modification of various parameter inputs and actions that are
associated with the object.

Figure 7. Maya 3.0 screenshot of the object-specific marking menu.
It is also important to note that Maya allows users to reconfigure
tool menus to their liking, as well as permitting experienced users to
script new tools that can be added to any or all sets of tool interfaces.
Since Maya’s interface is itself built with its own scripting language,
the Maya Embedded Language (MEL), the user can script new tools
that automate routines and customize the default interface. For example, if
a user wished to graphically represent a large data series as three-
dimensional spheres whose color corresponds to the magnitude of
each separate value in the series, it would be much easier to write a
MEL script to read in the values and loop through them to construct
a sphere at each value than to manually add each sphere, position it
correctly, and change the color node to reflect the differing values.
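The logic of such a script can be sketched as follows, in Python rather than MEL and purely for illustration: the function names, the sphere-as-dictionary representation, and the sample data series are all invented for this example, not part of Maya's API.

```python
# Illustrative sketch only: Python stands in for MEL, and scene-building
# calls are reduced to plain data structures. The data series is invented.

def magnitude_to_color(value, lo, hi):
    """Normalize a value's magnitude into a 0-1 red intensity."""
    return (value - lo) / (hi - lo) if hi != lo else 0.0

def build_spheres(series):
    """Loop through the series, 'constructing' one sphere per value."""
    lo, hi = min(series), max(series)
    spheres = []
    for i, value in enumerate(series):
        spheres.append({
            "position": (i * 10, 0, 0),  # space the spheres along a line
            "color": (magnitude_to_color(value, lo, hi), 0.0, 0.0),
        })
    return spheres

spheres = build_spheres([3.0, 7.5, 1.2, 9.9])
```

A real MEL script would issue sphere-creation and shading-node commands instead of building dictionaries, but the loop-and-assign structure is the point: one pass through the data replaces dozens of manual placements.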
While MEL scripts are in some ways the foundation for the Maya in-
terface, it would be incorrect to assume that the graphic interface
could be reduced entirely to the MEL scripts. Rather, as is implied by
the term “embedded language,” the MEL scripting functionality is
intended as an extension of Maya’s normal interface to allow for an
even more flexible design environment.

Figure 8. Screenshot from Maya 3.0 of wireframe patterns from a NURBS torus geometric
primitive (on the left) and a polygonal torus geometric primitive (on the right). The
square purple dots on the NURBS figure represent surface control vertices.

While I have thus far described the wide diversity of views and
tools that make up Maya, I also wish to briefly discuss yet another
important distinction in Maya: the difference between polygonal-
and NURBS-based modeling systems. Maya allows for the modeling
of objects according to both design systems, and although the two
may initially look quite similar, NURBS and polygon systems provide
dramatically different means for modeling. Though I will focus
mainly on Maya’s NURBS-based modeling capabilities throughout
this paper, because NURBS modeling is the feature that most distinguishes
Maya from its competitors, a few industries, particularly the video game
industry, rely primarily on Maya’s polygonal modeling functionality,
since most 3-D gaming cards are designed to quickly process billions
of polygons per second.
Put most simply, the difference between polygonal and NURBS
modeling is that polygonal objects consist of meshes of (usually tri-
angular) interlocked polygon faces (2-D planes), whereas NURBS ob-
jects consist of surfaces that are actually interpolated among various
control vertices in 3-space (Fig. 8). Since polygon surfaces, or poly-
meshes, consist of 3-D lattices of multiple two-dimensional polygon
faces, Maya’s polygonal modeling tools function mainly to extrude,
scale, and reposition primitive object faces into more complex
polygonal shapes. NURBS modeling, on the other hand, consists of
tweaking the various control vertices to modulate the larger interpo-
lated three-dimensional surface as a whole (I will more fully describe
the specifics of NURBS modeling in a moment). Thus, polygonal
modeling involves the transformation of only local faces and their
immediately adjacent planar faces, while NURBS modeling relies
upon the manipulation of control vertices so as to inflect the shape
of the entire object surface, as seen in Figure 9.

Figure 9. Screenshot from Maya 3.0 showing surface modeling differences between
NURBS-based and polygonal geometric primitives.
Since much of Maya’s power as a 3-D modeling and animation
program stems from its ability to work with complex NURBS sur-
faces, I will conclude my investigation of Maya’s interface by ex-
plaining exactly what is involved in NURBS-based modeling. The
acronym NURBS stands for Non-Uniform Rational B-Splines, which
is a technical term for topological surfaces constructed of curved
lines, or splines, that have been interpolated among various control
vertices so as to create a sort of “average” curve that mediates among
them. Control vertices (CVs) can be thought of as various points in a
space that “pull” a spline toward themselves. For example, the sim-
plest type of spline curve involves a one-degree (or linear) spline, in
which the spline is acted upon by only one CV at a time, producing
a jagged line that simply connects the CVs together. In a three-
degree (or cubic) spline, on the other hand, each point on the curve
is equally inflected or “pulled” by each of the three nearest control
vertices, resulting in a smoother curve that does not pass directly
through each point but rather is averaged between each set of three
subsequent points. As is seen from the cubic spline example, a spline
can be thought of as a line that seeks the path of least resistance
among various control vertices. A seven-degree spline would be a
curve in which each point is inflected by the seven nearest CVs, and
it would be even smoother than a three-degree spline. In the case of
the seven-degree spline in Figure 10, the curve is so smooth as to be
almost flat.

Figure 10. Maya 3.0 screenshot of one-degree, three-degree, and seven-degree NURBS
curves.
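The intuition in this passage, that higher-degree curves average over more control vertices and so come out smoother and flatter, can be caricatured with a simple moving average. This is a Python sketch of the article's own simplification, not the actual NURBS basis-function mathematics.

```python
# Toy model of the text's description: a "degree-k" curve as one whose
# points average the k nearest control vertices. Larger k -> smoother,
# flatter curve. (Real NURBS evaluation uses weighted basis functions.)

def averaged_curve(cvs, k):
    """Slide a window of k control vertices and emit the window averages."""
    points = []
    for i in range(len(cvs) - k + 1):
        window = cvs[i:i + k]
        points.append(sum(window) / k)
    return points

cvs = [0, 10, 0, 10, 0, 10, 0, 10]   # jagged, alternating CV heights
linear = averaged_curve(cvs, 1)      # passes through every CV: jagged
smooth3 = averaged_curve(cvs, 3)     # "cubic"-like: pulled toward the mean
smooth7 = averaged_curve(cvs, 7)     # nearly flat, as in Figure 10
```

With k = 1 the curve reproduces every control vertex exactly; with k = 7 the alternating 0/10 heights collapse toward their average, which is why the seven-degree curve in Figure 10 looks almost flat.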
NURBS surfaces, or meshes, are constructed in the same topologi-
cal manner as are spline curves except that the interpolation occurs
in two dimensions—designated by a U and a V direction. As is
shown in Figure 11, the splines going in the U direction and the
splines going in the V direction actually share CVs; adjusting a single
CV within a NURBS surface will therefore cause both the corre-
sponding U and V splines that are partially constituted in relation to
the CV to change shape. The U-V splines form a gridded mesh pat-
tern, and the “rectangular” surfaces interpolated between the spline
grid are known as NURBS patches. Maya users can create complex
NURBS surfaces by manipulating CVs on NURBS primitive surfaces
and combining different NURBS surfaces to form more complex
combinations. Or, Maya can perform various extrapolation tech-
niques such as extruding one spline in the direction of another, ro-
tating a curve in space, or lofting a single NURBS surface among mul-
tiple splines (Figs. 12 and 13).
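The shared-CV idea can be sketched minimally: model the control vertices as a grid, read a U spline as a row and a V spline as a column, and observe that moving one CV changes both. Plain Python dictionaries here are a stand-in for illustration, not Maya's actual data structures.

```python
# Sketch of shared control vertices on a NURBS-like surface: each CV is
# indexed by (u, v), so one CV belongs to both a U spline and a V spline.

def make_cv_grid(rows, cols):
    """Heights of an initially flat control-vertex grid, keyed by (u, v)."""
    return {(u, v): 0.0 for u in range(rows) for v in range(cols)}

def u_spline(grid, u, cols):
    """The row of CV heights a U-direction spline interpolates."""
    return [grid[(u, v)] for v in range(cols)]

def v_spline(grid, v, rows):
    """The column of CV heights a V-direction spline interpolates."""
    return [grid[(u, v)] for u in range(rows)]

grid = make_cv_grid(4, 4)
grid[(1, 2)] = 5.0            # pull a single shared CV upward

# Both the U spline through row 1 and the V spline through column 2 change:
row = u_spline(grid, 1, 4)    # [0.0, 0.0, 5.0, 0.0]
col = v_spline(grid, 2, 4)    # [0.0, 5.0, 0.0, 0.0]
```

Because the CV at (1, 2) is shared, a single adjustment deforms two splines at once, and hence the patches interpolated around it, which is the behavior described for Figure 11.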
If this very preliminary introduction to the Maya control inter-
face has produced an unsettling combination of increased under-
standing and increased confusion about the use of Maya, then it has
achieved its intended effect. In addition to providing a basic map of
how users interact with Maya, I also want this introduction to culminate
in one overall observation: that the production of 3-D graphics
in Maya can in no way be considered in terms of the “unconstrained”
and spontaneously “amorphous” characterizations that plague most
discussions of computer graphics in general. While Maya does indeed
allow for the production of graphics that display amazing
transformational qualities, the Maya interface itself does not mirror
such a freehanded lack of constraints. On the contrary, working in
Maya requires an exceptional process of mediation between the goals
of the digital artist and the highly structured interface.

Figure 11. Maya 3.0 screenshot of NURBS surface showing U and V coordinate spline
directions.
One may ask, however, whether Maya’s initially confusing inter-
face should be considered more a product of bad software design
than a new paradigm in mediation (as I will argue in the next sec-
tion). To some extent, the justification for this question becomes
tautological since Maya has in many ways inscribed what is expected
of 3-D modeling within the cultural imagination; it is therefore no
surprise that Maya seems to many the perfect tool for such tasks.
While I have endeavored to avoid value judgments of “good” and
“bad” design decisions throughout this paper in favor of concentrating
on the simple facts of what the interface allows, I would nonetheless
argue that Maya is precisely the type of interface that is needed
for contemporary 3-D modeling and animation markets—for two
main reasons.

Figure 12. Maya 3.0 screenshot of a NURBS surface generated by extruding curve 1
along the path of curve 2.

First, Maya’s somewhat counterintuitive interface
is directly related to solving a rather counterintuitive problem: mod-
eling 3-D objects on the 2-D space of a computer screen. The reason
why paint programs have such an intuitive interface is that the space
of the program mirrors the task at hand: drawing a 2-D line on a 2-D
screen. Maya, on the other hand, must enable a 3-D interaction on a
2-D screen, and therefore requires intervening interfaces to facilitate
such a transformation. Maya is difficult to use because the task of
learning to design 3-D worlds in a 2-D interface requires new users to
perceive and interact with space in new ways; however, once the
user is accustomed to doing so, navigating Maya becomes a very
consistent and logical activity.
As a second point, it must be remembered that Maya is intended
for an audience of highly specialized, technical users.

Figure 13. Maya 3.0 screenshot of a NURBS surface created by lofting a topology between
curve 1 and curve 2.

Unlike common paint programs or digital editing programs like Adobe
Photoshop, Maya is not designed for use by the uninitiated lay user and
has not limited its interface options so as to allow them to be easily
understood and implemented by people with relatively little com-
puter graphics experience. Rather, Maya continues to develop dis-
parate but highly specialized tools that reflect the demands of its
users, such as mathematically rigorous tools for modeling, building,
and capturing cinematic animations. As a result, its tools encourage
a wide diversity of specialized techniques that can be further
customized through MEL scripts for very precise design goals.
Whereas most other computer applications have sought interfaces
that standardize toward a lowest common denominator, Maya has
continuously expanded the technical literacy of its users by imple-
menting increasingly specialized design tools.
As a result of all these interface constraints, Maya does not easily
lend itself to the immediate expression of the artist’s creativity;
rather, the artist must gradually learn to think Maya and move through
Maya just as a modern endoscopic surgeon must learn to successfully
manipulate and navigate current media technologies in performing
each surgery. The resultant graphical effect or image must therefore
be considered as a tightly structured process of collaboration be-
tween the designer and the application, rather than as an unlimited,
freehand expression of the imagination of the artist. In order to suc-
cessfully use Maya, users must crawl inside, navigate, and inhabit the
logic of the application’s complex interactive space. To do so, they
must gradually adapt their usual habits of interaction to accommo-
date Maya’s unconventional interface—a process that effectively re-
organizes perception and cognition into a new field of relations.

Object-Orientation and Haptic Visuality


While I have argued thus far that Maya requires users to reorient
their thinking in accordance with its interface, I have said surpris-
ingly little about the specific properties of that logic. How are events
and relationships among objects ordered in Maya? If Maya consists
of myriad views onto its objects and a dizzying array of tools to ma-
nipulate and transform those objects, what exactly are the objects,
and what do they do? How is sense made from an otherwise multiple
and heterogeneous assemblage of perspectives and techniques?
According to Learning Maya, A|W’s printed introductory tutorial,
“Maya’s architecture can be explained using a single line—nodes with
attributes that are connected.”14 A node can be considered the basic on-
tological category in Maya: “A node is a generic object type in Maya.”15
Different nodes are designed with specific attributes so that each node
can accomplish a particular task. Nodes define all object types in
Maya including geometry, shading, lighting, and even scripts. A
node is therefore an object that consists of specific attributes and be-
haviors. Any change over time to any of the internal object attri-
butes results in an animation. However, nodes rarely function inde-
pendently within Maya, since their attributes are interrelated
through various connections, or dependencies.
For example, suppose there are two NURBS spheres in a Maya
scene, the first entitled ball and the second entitled color_orb. We
could animate ball by changing its Y position with respect to time,
so that it would look as though it were bouncing up and down as the
animation played. We could then establish a dependency between
the Y attribute of the ball node and the color input attribute of the
material assigned to the color_orb node such that as ball changes its
Y attribute, color_orb simultaneously changes its input color value.
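This node-and-dependency pattern can be given a minimal sketch in Python, not Maya's actual API: the Node class, the attribute names, and the height-to-color mapping are all invented for illustration of "nodes with attributes that are connected."

```python
# Minimal sketch (not Maya's API) of nodes whose attributes are connected:
# setting ball's Y pushes a transformed value into color_orb's color input.

class Node:
    def __init__(self, name, **attrs):
        self.name = name
        self.attrs = dict(attrs)
        self.connections = []  # (own_attr, target_node, target_attr, transform)

    def connect(self, attr, target, target_attr, transform=lambda v: v):
        """Declare a dependency from one of this node's attributes."""
        self.connections.append((attr, target, target_attr, transform))

    def set(self, attr, value):
        """Change an attribute and propagate along every dependency."""
        self.attrs[attr] = value
        for src, target, dst, f in self.connections:
            if src == attr:
                target.set(dst, f(value))

ball = Node("ball", translateY=0.0)
color_orb = Node("color_orb", color=0.0)
# Map the ball's height onto the orb's color input (mapping is invented).
ball.connect("translateY", color_orb, "color", lambda y: min(1.0, abs(y) / 10.0))

ball.set("translateY", 5.0)
# color_orb.attrs["color"] is now 0.5
```

Animating ball's Y over time would then drive color_orb's color automatically, with no code ever touching color_orb directly: the change travels along the connection, as in Maya's dependency graph.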

14. Learning Maya, p. 5 (italics in original).


15. Ibid., p. 8 (italics in original).

The ability for Maya to animate and establish dependencies among
nearly every conceivable object class, together with the potential to
create an endless array of new node types and dependencies, is what
gives Maya its modeling and animation power.
Maya’s node-dependent architecture is not unique to Maya or to
computer graphics programs. In fact, it is an example of what is
known as an object-oriented paradigm (OOP). While Maya is perhaps
one of the most expansive and graphical representations of such a
nodal architecture, the connection between object-oriented pro-
gramming and computer graphics programs predates the release of
Maya by nearly thirty-five years. In 1963, an MIT student named
Ivan Sutherland produced a Ph.D. dissertation project entitled
“Sketchpad: A Man-Machine Graphical Communication System,”
which was the first computer graphics program.16 Sutherland’s
Sketchpad used a lightpen to create scalable, duplicatable 2-D engi-
neering drawings on a CRT screen in ways that allowed the user to
draw simple shapes using a computer for the first time.
Three years later, in the fall of 1966, Alan Kay—commonly con-
sidered the inventor of Smalltalk, the first “purely” object-oriented
programming language—entered graduate school at the University
of Utah, where he worked with Sutherland’s longtime colleague,
David Evans. At the time, Evans was working on computer graphics
programs similar to Sketchpad and was engaged in solving the “hid-
den line” problem of 3-D graphics.17 Upon entering Evans’s office,
Kay was handed a copy of Sutherland’s dissertation project and in-
structed to read it as his initial research assignment. Kay has de-
scribed his initial encounter with Sketchpad as follows:
What it could do was quite remarkable, and completely foreign to any use of a
computer I had ever encountered. The three big ideas that were easiest to grap-
ple with were: it was the invention of modern interactive computer graphics;
things were described by making a “master drawing” that could produce “in-
stance drawings”; control and dynamics were supplied by “constraints,” also
in graphical form, that could be applied to the masters to shape and interrelate
[sic] parts. Its data structures were hard to understand—the only vaguely famil-
iar construct was the embedding of pointers to procedures and using a process
called reverse indexing to jump through them to routines.18

16. Timothy Lenoir, “Virtual Reality Comes of Age,” in Funding a Revolution: Govern-
ment Support for Computing Research (Washington, D.C.: National Research Council,
1999), p. 226.
17. Alan Kay, “The Early History of Smalltalk,” in History of Programming Languages-II, ed.
Thomas J. Bergin, Jr., and Richard G. Gibson, Jr. (New York: ACM Press, 1996), p. 515.
18. Ibid., pp. 515–516.

Shortly after his first exposure to Sketchpad, Kay was given the task
of making sense of an unusual programming language from Norway
called Simula. After days of poring over “80 feet” worth of printed
Simula program listings, Kay eventually came to see that
what Simula was allocating were structures very much like the instances of
Sketchpad. There were descriptions that acted like masters and they could cre-
ate instances, each of which was an independent entity. What Sketchpad
called masters and instances, Simula called activities and processes. Moreover,
Simula was a procedural language for controlling Sketchpad-like objects, thus
having considerably more flexibility than constraints.19

Upon realizing that Simula was a programming language that structured
data into independent data classes in the same way that
Sketchpad handled “master” objects and “instances,” Kay first con-
ceived of a new way of programming that he called object-orientation—
a concept that, in his words, “rotated my point of view through a
different dimension and nothing has been the same since.”20 In
building upon the basic concepts that he gleaned from Simula and
Sketchpad, Kay designed the first purely object-oriented language,
known as Smalltalk.
The simplest description of the difference between procedural and
object-oriented design paradigms is that while procedural languages
consist of a main program that subdivides its operations among its
own compartmentalized subroutines, object-oriented programs con-
sist of the aggregate interactions among a system of independent
data objects, each with its own behaviors and variable attributes. Pro-
cedural programs are organized according to a top-down logic, in
which the main program calls on internal function-specific subrou-
tines, which manipulate the various data variables and return them
to the larger program. Object-oriented programs, on the other hand,
pass messages between various independent data objects that con-
tain certain properties and perform certain actions.
For example, most procedure-oriented programs consist of a
“main program” and several compartmentalized subroutines or func-
tions. As the main program steps through its various operations, it
calls each of its subroutines to perform a specialized task that con-
tributes to the piecewise completion of the main program’s goal. If
one were writing a procedural program to place ten circles in a
19. Ibid., p. 516.
20. Alan Kay, in an interview with Michael Schrage published in “Alan Kay’s Magical
Mystery Tour,” TWA Ambassador, January 1984, p. 36; quoted in Howard Rheingold,
Tools for Thought: The History and Future of Mind-Expanding Technology (Cambridge,
Mass.: MIT Press, 1985), p. 238.

straight horizontal line across the screen, one solution would be to
write a subroutine called placeCircle to place a circle on the screen at
the location of a global variable called currentLocation. In order to
place each circle in a different spot along the line, one would need a
second function called moveLocation, which adds ten units to
currentLocation. The main program would then consist of a loop that cycles
through ten repetitions of the placeCircle and moveLocation func-
tions, so that in each iteration a circle is placed on the horizontal
line. Once the main program has completed its ten cycles, the entire
program comes to an end.
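The procedural version described above looks roughly like this, with Python standing in for a generic procedural language; placeCircle, moveLocation, and currentLocation follow the text's names, and "placing" a circle is reduced to recording its position.

```python
# Sketch of the procedural approach: a main loop drives two subroutines
# that share a global position variable, exactly the top-down structure
# described in the text.

placed = []           # records where each circle lands (stand-in for drawing)
currentLocation = 0   # the global position variable

def placeCircle():
    """Place a circle at the current global location."""
    placed.append(currentLocation)

def moveLocation():
    """Advance the global location ten units along the line."""
    global currentLocation
    currentLocation += 10

# The "main program": ten iterations of place-then-move, after which
# the entire program simply comes to an end.
for _ in range(10):
    placeCircle()
    moveLocation()
```

All control resides in the main loop: the subroutines do nothing unless called, and the program has a definite beginning and end.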
In object-oriented programming, on the other hand, different
data groups and data-specific procedures are “encapsulated” in inde-
pendent objects that interact by sending instructional messages to
one another. There is no overarching main program to organize the
series of interactions; instead, an object-oriented program dynami-
cally emerges in the relationships among the objects. If we were to
repeat the previous example of ten circles on a line using an object-
oriented paradigm, we would first need to construct an object that
we could call circleObject. We could then give circleObject the neces-
sary properties, such as a circleCount property. We could then add a
method duplicateSelf to circleObject that checks to see if the circle-
Count property is less than ten, and if so, creates a copy of the cir-
cleObject with all its methods and properties, moves the new copy
ten units to the left of the current circle’s position, increases the new
copy’s circleCount by one, then sends a message to trigger the new
copy’s duplicateSelf method. Such a process would launch a self-
driven chain reaction that would stop in the tenth circle and would
essentially accomplish the same goal as the procedural approach, but
through a profoundly different programming structure. Whereas
procedural programs function according to a unifying conception
and method to drive processes, object-oriented programming allows
for exceptionally accurate local data details that are assigned to
simple algorithmic methods so as to model complex data problems
from the bottom up. Kay has explained the power of such a pro-
gramming framework by stressing that within an object-oriented
paradigm, each object can function as though it is its own “com-
puter” working on a very specific, local task:

In computer terms, Smalltalk is a recursion on the notion of the computer itself.
Instead of dividing “computer stuff” into things each less strong than the
whole—such as data structures, procedures, and functions that are the usual
paraphernalia of programming languages—each Smalltalk object is a recursion
of the entire possibilities of the computer. Thus its semantics are a bit like having
thousands and thousands of computers all hooked up by a very fast
network.21
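Returning to the ten-circles example, the object-oriented version can be sketched like this: each circle object carries its own circleCount and a duplicateSelf method, and the chain of messages replaces the main loop. Python again stands in for a generic object-oriented language, and the class and method names loosely follow the text's circleObject and duplicateSelf.

```python
# Sketch of the object-oriented ten-circles example. Following the text,
# each new copy is placed ten units to the left of the current circle and
# triggers its own duplicateSelf, so the chain stops itself at the tenth.

all_circles = []  # emergent program state: no main loop owns this

class CircleObject:
    def __init__(self, position, circleCount):
        self.position = position
        self.circleCount = circleCount
        all_circles.append(self)  # stand-in for drawing the circle on screen

    def duplicateSelf(self):
        """Copy this circle, shifted left, and pass the message onward."""
        if self.circleCount < 10:
            copy = CircleObject(self.position - 10, self.circleCount + 1)
            copy.duplicateSelf()  # the message to the new copy continues the chain

first = CircleObject(position=0, circleCount=1)
first.duplicateSelf()
```

No overarching routine counts to ten; the stopping condition is encapsulated in each object, which is exactly the bottom-up, self-driven chain reaction the text contrasts with the procedural loop.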

Rather than needing to “contain” an overall conceptual model
within a “main program,” object-oriented programming allows for a
process of simulation in which accurate predictions of complex sys-
tems are produced outside the objects without the need of any uni-
versal sequence of instructions or control. Kay has described this re-
sistance of an overarching, top-down “theory” by concluding that
within an object-oriented paradigm “questions of concrete represen-
tation can thus be postponed almost indefinitely because we are
mainly concerned that the [objects] behave appropriately, and are
interested in particular strategies only if the results are off or come
back too slowly.”22
From this very cursory explanation of object-oriented program-
ming, one should already begin to see that when discussing the ma-
terialities of object-oriented programs, the very notion of program—
described in procedural languages as a cohesive, single, linear
sequence of instructions—becomes increasingly destabilized and
multiplied. Unlike procedural programs, object-oriented “programs”
have no discrete beginning and ending point, and there is no single
linear path through an easily mapped tree structure or flowchart.
Rather, in object-oriented programming multiple objects interact si-
multaneously at each distinct moment so that the “program”
emerges as a topological field of interactive possibilities among ob-
jects, in much the same way that a NURBS surface is shaped by the
various control vertices that interact to define its shape. Within such
a framework, time itself takes on a different configuration since there
is nothing internal to the “program” that marks a beginning and an
end—objects continue to interact for as long as the “program” is al-
lowed to run. Often, program time is controlled through a more pro-
cedural control (such as the animation timeline in Maya) that is ex-
ternal to the field of interacting objects. Thus, we can say that in
object-oriented “programs,” time exists outside the “program.” The
more one begins to understand the particularities of object-oriented
programming, the more the connotations of the term program be-
come limiting and misleading; as a result, many programmers prefer
to speak of an object-oriented environment, space, or even simulation.
While an understanding of object-oriented programming lan-
guages has helped contribute to a better appreciation of the ways in
which Maya’s own object-oriented interface structures the relation-
21. Kay, “Early History of Smalltalk” (above, n. 17), p. 513.
22. Ibid.

ships among Maya’s various media objects, there are obviously dis-
tinct phenomenological differences between writing object-oriented
programs in languages such as Smalltalk, C++, or Java and con-
structing a 3-D scene of interrelated objects in Maya. As one would
expect, object-orientation with programming languages consists of
nothing but programming code, and “objects” exist as abstract
classes of data objects and their corresponding methods. Maya’s ob-
jects, however, have a directly embodied presence as visual and hap-
tic 3-D objects that can be manipulated and interconnected within a
perceivable space. Thus, in the case of Maya’s object-oriented para-
digm, the interface demands that the user interact with space and
materiality through a very unusual perceptual strategy—an ap-
proach that requires the intended scenes to emerge not from a top-
down design concept, but rather through the literal building of var-
ious scene objects and the enabling of interactions between those
objects in order to produce the desired final behavior. While such an
altered design approach has undoubtedly reconfigured the way pro-
duction occurs, I will demonstrate in the next section how it has also
engendered new ways of embodying space.

Conceptualizing Object-Orientation
As the powerful design and simulation capabilities of Maya and
other 3-D modeling programs continue to permeate scientific, tech-
nical, and entertainment industries, the object-oriented paradigm
will increasingly become the default modality for production. As we
have already seen, such a framework for production requires users to
cease designing in terms of the top-down binarisms of procedural
models and instead to begin thinking laterally among the different
object nodes of the overall program space, in order to approximate a
desired design outcome. In this model, production involves the
ability to move through multiple nodes and draw connections so
that the manifold space of interactions can be locally navigated and
mapped rather than globally conceptualized. The emphasis on priv-
ileging local, heterogeneous, and situated perspectives over mono-
lithic, essentialist, and hegemonic ontologies and valuations paral-
lels much of the current discourse in critical studies. In the
remainder of this section, I will explore exactly how this conver-
gence between material and intellectual production has already be-
gun to encourage a new framework for thought and embodiment.
It is perhaps not surprising that one of the first contemporary dis-
ciplines to give expression to this new convergence is located at the
cusp of material design and critical discourse. For it is in architecture,
a fertile zone of intersection between critical studies and design
materialities, that designers and artists began to map the parallels be-
tween poststructuralist discourse and the powerful 3-D design pro-
grams in which the bulk of contemporary architectural practice
takes place. In seeking critical frameworks that depart from the neg-
atively defined approaches of deconstructivism and other post-
modern strategies which emphasize the constitutive difference of ar-
chitectural contexts, a new generation of architects has found
resonance in the works of Gilles Deleuze and Félix Guattari (D+G).23
Primary to D+G’s new experimental philosophy is an understanding
of spatial relationships and embodiment that mirrors the very mode
of interaction that Maya’s interface demands.
In describing their vision for such a program, D+G have adopted
Aloïs Riegel’s categories of haptic and optic to refer to these funda-
mentally different spaces of interaction:
The first aspect of the haptic, smooth space of close vision is that its orienta-
tions, landmarks, and linkages are in continuous variation; it operates step by
step. . . . Examples are the desert, steppe, ice, and sea, local spaces of pure con-
nection. Contrary to what is sometimes said, one never sees from a distance in
a space of this kind, nor does one see it from a distance; one is never “in front
of,” any more than one is “in” (one is “on” . . . ). Orientations are not con-
stant but change according to temporary vegetation, occupations, and precip-
itation. There is no visual model for points of reference that would make them
interchangeable and unite them in an inertial class assignable to an immobile
outside observer. On the contrary, they are tied to any number of observers,
who may be qualified as “monads” but are instead nomads entertaining tac-
tile relations among themselves. . . . the “monadological” points of view can
be interlinked only on a nomad space; the whole and the parts give the eye
that beholds them a function that is haptic rather than optical. . . . Striated [or
optical] space, on the contrary, is defined by the requirements of long-distance
vision: constancy of orientation, invariance of distance through an inter-
change of inertial points of reference, interlinkages by immersion in an ambi-
ent milieu, constitution of a central perspective.24

According to D+G, haptic space exists as a space of object-to-object
contact, in which space is mapped by moving nomadically through

23. For a more thorough discussion of the history of this connection, see Timothy
Lenoir and Casey Alt, “Flow, Process, Fold: Intersections in Bioinformatics and Con-
temporary Architecture,” in Science, Metaphor, and Architecture, ed. Antoine Picon and
Alessandra Ponte (Princeton, N.J.: Princeton University Press, forthcoming).
24. Gilles Deleuze and Félix Guattari, A Thousand Plateaus: Capitalism and Schizophre-
nia, trans. Brian Massumi (Minneapolis: University of Minnesota Press, 1987), pp.
493–494. For Riegel’s original use of haptic and optic, see Aloïs Riegel, Die Spätrömische
Kunstindustrie (Vienna: Staatdruckerei, 1927).

various points of occupation. Optic space, on the other hand, is a
space that exists in reference to a single, fixed, external, “objective”
perspective that orders the space according to a unified whole, as in the
case of the top-down main program space of procedural program-
ming. While an observer can occupy the perspective of only one of
many distinct and heterogeneous objects inside or on a haptic space, ob-
servers in an optic space always occupy a distant and externally fixed
perspective that exists as an ideal extension of homogeneous space.
To use yet another pair of Deleuzean terms, haptic spaces are con-
sidered affective spaces because changes in the emergent space of or-
ganization are reflected by internal state changes (affects) within the
various elements of the distributed field of objects—nothing external
to the elements ever changes; all the changes occur within the sepa-
rately encapsulated objects.25 Conversely, optic spaces can be con-
sidered effective spaces because changes are registered in the state of
an external, organizational whole by which everything is framed
and universally referenced. Considering the degree to which D+G’s
formulation of haptic-affective relationships to space is congruent
with the kind of perception and interaction required to navigate 3-D
modeling programs such as Maya, the affinity that new generations
of designers and artists have developed toward the work of D+G be-
comes increasingly meaningful.
One particularly salient example of the difference between affec-
tive and effective space in object-oriented programming is the way
that physics is handled by Maya’s dynamics-modeling engine. The
modeling of complex dynamic relationships among objects in a pro-
gram like Maya undoubtedly requires high-level calculus; however,
the concept of a single, monolithic calculus is somewhat misleading,
since two different approaches to “the calculus” were developed
around the same time by Isaac Newton and Gottfried Wilhelm Leib-
niz. While both systems tend to arrive at similar results, each is em-
bedded in a subtly but significantly different approach to time and
space. Newton’s calculus favors the concept of the derivative and cal-
culates the position and movement of objects as derivative of an ab-
solute reference point of time and space. Leibniz, on the other hand,
focused his calculus mainly on the integral, which calculates the po-
sition and vectorial movement of objects in reference to the other
objects represented within the modeled system—without any need
of an absolute and external time-space referent. Thus, one can say
that in D+G’s terms Newton’s calculus describes an effective space,
while Leibniz’s describes an affective space.

25. Gilles Deleuze, Cinema 1: The Movement Image, trans. Hugh Tomlinson and Barbara
Habberjam (Minneapolis: University of Minnesota Press, 1986), p. 109.
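The contrast between the two calculi can be sketched in code. The toy fragment below is plain Python, not Maya's API; `Body`, `step_absolute`, and `step_relational` are invented names, and the force law is deliberately simplified. The "effective" updater references every body to a single absolute external field, while the "affective" updater derives each body's change solely from its relations to the other bodies:

```python
# Toy contrast between an "effective" (Newtonian, absolute-frame) and an
# "affective" (Leibnizian, purely relational) physics update.
# 1-D positions; the force law is simplified for illustration only.

G = 1.0  # toy gravitational constant


class Body:
    def __init__(self, x, v, m=1.0):
        self.x, self.v, self.m = x, v, m  # position, velocity, mass


def step_absolute(bodies, g_field=-9.8, dt=0.01):
    """Effective space: one external field acts on every body, and all
    change is referenced to that absolute frame."""
    for b in bodies:
        b.v += g_field * dt
        b.x += b.v * dt


def step_relational(bodies, dt=0.01):
    """Affective space: each body's change is computed only from its
    relations to the other bodies; no external referent appears."""
    for b in bodies:
        # toy attraction proportional to 1/distance (sign gives direction)
        b.v += sum(G * o.m / ((o.x - b.x) or 1e-9)
                   for o in bodies if o is not b) * dt
    for b in bodies:
        b.x += b.v * dt


pair = [Body(0.0, 0.0), Body(10.0, 0.0)]
step_relational(pair, dt=1.0)  # the two bodies drift toward each other
```

Nothing in `step_relational` mentions an external frame: add or remove a body and the field of forces reorganizes itself, which is the affective behavior at issue here.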

In the object-oriented world of Maya, physics relationships are
mainly modeled affectively by attaching object fields to the objects in
a scene that require dynamics modeling. As explained in Maya’s
electronic documentation, “Object fields are owned by an object and
exert influence from the object.” Thus, within the affective space of
Maya, physics simulations are not a property of the abstract space of
the simulation—they do not exist according to some absolute and
dissociated background field that inherently permeates the space of
the scene; rather, physics and calculus exist dynamically, as a net-
work of distributed force relationships among various objects. For
example, in order to model a system of planetary gravitation, one
does not simply check a box that “turns on” the gravitation in the
scene, such that all the objects will start acting as though they are
now under the influence of a pervasive gravity; rather, one would as-
sign a gravity field to an object such as the floor plane of a house and
then click on all the objects that one wants to be affected by the
gravitational field. Additionally, within the same space, multiple
gravitational fields (or other force fields) can be assigned among any
combination of objects such that there is never an implicit default
physics that subsumes all objects under a uniform whole; instead,
there are multiple physics, each of which is locally owned by the ob-
jects to which it is attached. Thinking physics within such a space,
therefore, requires the thinking of a very specific approach to
physics and the thinking of a very particular type of space.
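The same logic can be mimicked outside Maya. The sketch below is an illustrative Python analogy of object-owned fields, not Maya's actual commands (in Maya itself one would create a gravity field and connect objects to it through the dynamics tools): a field belongs to an owner and acts only on the objects explicitly connected to it, so no scene-wide default physics ever exists.

```python
# Conceptual sketch of object-owned fields: a field is a property of the
# object that owns it and influences only explicitly connected objects.
# All names here are illustrative, not Maya's API.

class SceneObject:
    def __init__(self, name):
        self.name = name
        self.velocity = 0.0
        self.fields = []           # fields this object owns

    def add_field(self, field):
        self.fields.append(field)
        return field


class GravityField:
    def __init__(self, magnitude=-9.8):
        self.magnitude = magnitude
        self.connected = []        # only these objects feel the field

    def connect(self, obj):
        self.connected.append(obj)


def step(scene, dt=0.04):
    """Apply every object-owned field to its connected objects only;
    there is no implicit, scene-wide default physics."""
    for owner in scene:
        for field in owner.fields:
            for obj in field.connected:
                obj.velocity += field.magnitude * dt


floor = SceneObject("floorPlane")
ball = SceneObject("ball")
lamp = SceneObject("lamp")          # deliberately left unconnected

g = floor.add_field(GravityField())
g.connect(ball)                     # only the ball falls

step([floor, ball, lamp], dt=1.0)
```

Because the `lamp` was never connected, it simply does not participate in this gravity: multiple fields with different magnitudes and memberships can coexist in one scene without any subsuming whole.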
The interface space of a 3-D object-oriented application is there-
fore a thoroughly haptic and affective space that requires the user to
function as though he or she were an object in that space. Every
view into a Maya scene is mediated by a camera object that is im-
mersed within a complex web of interactions among all the other
objects of the scene. Though users can move laterally among various
cameras for different views of the object world, they are always im-
mersed in the haptic space of the objects themselves. In other words,
though somewhat to the chagrin of new Maya users who are used to
more conventional modes of top-down production, there is no single
“Default,” “Absolute,” or “Main” external camera from which the scene
can always be holistically “contained” and objectively viewed. Rather, a
user who wishes to view all the objects in a scene must construct a
camera at an appropriate distance and then choose to display the
2-D perspective from that particular camera. Fluctuations of various
object states in the scene are inflected by variations within the in-
ternal states of the cameras and, by extension, in the internal per-
ceptual states of the user.
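That arrangement, in which every view is itself an object's view, might be sketched as follows; the one-dimensional "projection" and all names are illustrative simplifications, not Maya's scene graph:

```python
# Sketch: views exist only through camera objects that live inside the
# scene, never through a privileged external "Main" viewpoint.
# Names and the 1-D "projection" are illustrative simplifications.

class Node:
    def __init__(self, name, x):
        self.name, self.x = name, x


class Camera(Node):
    def view(self, scene):
        """A camera's view is the scene re-expressed relative to the
        camera's own position: there is no absolute frame to fall
        back on."""
        return {n.name: n.x - self.x for n in scene if n is not self}


scene = [Node("cube", 2.0), Node("sphere", 5.0)]
cam = Camera("persp1", 10.0)
scene.append(cam)                  # the camera is itself a scene object

print(cam.view(scene))             # → {'cube': -8.0, 'sphere': -5.0}
```

To "see everything," a user must construct such a camera at a suitable position; the view it returns changes whenever the camera's own state changes, just as described above.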
Keep in mind that so-called "3-D modeling spaces" are almost never
experienced in three-dimensional space; rather, they are represented
on the 2-D surface of a computer screen. Therefore, it is precisely
this ability to relocate one's perspective through the various points
within a 2-D object-oriented network of objects that produces the affective
experience of navigating a 3-D space. What I wish to suggest in this ex-
amination of the programming materialities of Maya is that in using
and navigating the interface space of Maya, the internal perceptual
states of users are haptically and affectively inflected and altered so
as to engender the possibility to think and perceive according to an
object-oriented 3-D logic. As Nicholas Negroponte has noted in his
digital-era revision of Marshall McLuhan’s oft-quoted dictum, “The
medium is not the message in a digital world. It is an embodiment
of it.”26 Since Maya and similar 3-D modeling programs increasingly
have become a space for producing the material images and objects
that populate our world, such a haptic, object-oriented space opens
up new relationships for thought and action within the realms of
material production.
In a recent essay entitled “Visions Unfolding: Architecture in the
Age of Electronic Media,” the architect Peter Eisenman has argued
for an understanding of just such a transformation, particularly as it
pertains to his own discipline. Updating Walter Benjamin’s “The
Work of Art in the Age of Mechanical Reproduction,” Eisenman as-
serts that there is a necessary distinction to be drawn between the
mechanically reproduced and the electronically mediated. Accord-
ing to Eisenman, most forms of mechanical reproduction have oper-
ated according to an optical vision that is externally centered in a
single perspective that links “seeing to thinking, the eye to the
mind” and aligns the production of content with the desires of an
anthropomorphizing subject.27 The point of Eisenman’s essay is to
argue that electronic media, which function within a distributed
object-oriented space of “reciprocal subjectivity,” serve to haptically
disrupt the “mind-eye” connection by redistributing the unitary dis-
cursive subject into an emergent field of objects.28
As an extension of his thesis, Eisenman has argued that the build-
ings produced through such affective programming spaces exist as
affective spaces themselves that are intended to displace their
inhabitants' dominant models of optical vision. Since the distributed,
topological surfaces of the structures are not reducible to generalized
concepts or idealized forms, they resist their viewers' interpretation:
they are forms in and of themselves, rather than subroutines of a
grander programmatic vision. Eisenman describes this power of
digital media to resist the optical Whole:

Once the environment becomes affective, inscribed within another logic or an
ur-logic, one which is no longer translatable into the vision of the mind, then
reason becomes detached from vision. . . . This begins to produce an
environment that "looks back"—that is, the environment seems to have an
order that we can perceive, even though it does not seem to mean anything.29

26. Nicholas Negroponte, Being Digital (New York: Knopf, 1995), p. 18.
27. Peter Eisenman, "Visions Unfolding: Architecture in the Age of Electronic
Media," in Digital Eisenman: An Office of the Electronic Era, ed. Luca
Galofaro (Basel: Birkhäuser, 1999), p. 84.
28. Ibid., p. 88.

The ability to “look back” endows architecture with a new power to
deterritorialize (to use D+G’s term) the viewer—a haptic rather than
optic potential to induce an affective change in the spectator.
It is important to realize that such a transformation in perceptive
faculties does not limit itself only to what might more traditionally
be considered the disciplines of visual culture. Architecture is obvi-
ously not the only field to have employed 3-D modeling programs.
In fact, molecular biologists began using 3-D programs to model the
complex folding patterns of proteins as early as 1965.30 While the
use of programs such as Cyrus Levinthal’s CHEMGRAF was intended
to help produce a unified theory of molecular biology, they have ul-
timately had precisely the opposite effect by encouraging a more
process-driven, computational, distributed approach to modeling
biomolecular behavior. A 1985 NIH report sums up the difference
between a unified biology and the information-technology-infused,
heterogeneous, multiple, data-driven, and enfolded state of biology.
Contrasting this biology with theoretical physics, “which consists of
a small number of postulates and the procedures and apparatus for
deriving predictions from those postulates,” the authors of the NIH
report view contemporary biology as an interconnected assemblage
of different strata, from DNA to protein to organismic and behav-
ioral biology—each with its own unique set of laws and processes.31
Rather than with a unified theory, the field’s critical questions can
often be answered only by relating one biological level to another
through the techniques of informatics.
29. Ibid.
30. Timothy Lenoir, “Shaping Biomedicine as an Information Science,” in Proceedings
of the 1998 Conference on the History and Heritage of Science Information Systems, ed. Mary
Ellen Bowden, Trudi Bellardo Hahn, and Robert V. Williams, ASIS Monograph Series
(Medford, N.J.: Information Today, 1999), pp. 27–45.
31. Harold Morowitz, Models for Biomedical Research: A New Perspective (Washington,
D.C.: National Academy of Sciences Press, 1985), p. 21.

As in the case of the transformation of the unified field of biology
into the more dispersed field of bioinformatics, theory and thought
themselves have become reconfigured according to the inherent ma-
terialities of object-oriented 3-D media. Considering that biology is
by definition the discursive space devoted to the production of sci-
entific knowledge about life, such a reconfiguration implies nothing
less than a fundamental transformation of our own understanding
of biological embodiment. Deleuze and Guattari have subversively
termed such a reconfiguration of biological embodiment along lines
of distributed organization a Body without Organs. In D+G's usage,
the Body without Organs stands in direct contrast
to the normal "organismal" notion of embodiment in
which the body is viewed in terms of an organizational Whole that
subsumes the smaller compartmentalized organs within its own
hegemonic logic, just as a top-down procedural program delegates
functional tasks to its own compartmentalized subroutines. How-
ever, for D+G, a Body without Organs “reveals itself for what it is:
connection of desires, conjunction of flows, continuum of intensi-
ties . . . your own little machine, ready when needed to be plugged
into other collective machines.”32
Yet despite the important effects that the reorganization of built
space and biology are beginning to have in changing cultural per-
ceptions, I would argue that the medium that has most embodied
and popularized this new paradigm has been that of video games,
which have from their origin been object-oriented. As a current ex-
ample of how such a logic is expressed, consider Maxis’s wildly pop-
ular human-life simulation known as The Sims. In discussing the
game, its designer Will Wright revealed that he modeled his syn-
thetic human models as entities that are relationally defined by their
ability to interact with the material commodities that the player
chooses to purchase for them. Each material object has certain prop-
erties and behaviors that endow it with the ability to affect the basic
desires (as modeled on Maslow’s hierarchy of basic human needs) of
its Sim owner. In other words, the assortment of objects that the
player purchases for each of the Sim characters exerts influences on
the characters. For example, if the player buys a houseplant for his
or her Sim, then the houseplant will begin affecting the Sim’s be-
havior by sending messages that the plant needs to be watered—
whereas a car might send messages about maintenance or washing.
According to Wright, each Sim agent is literally defined as an embodied
site in which all of its purchased objects' behaviors and demands
converge. Each Sim agent exists therefore as a central organizational
node within a topological field of desiring objects. As object-oriented
digital media applications such as Maya continue to haptically
reconfigure our own notions of lived embodiment, we have become
increasingly enmeshed in a new ontology of material culture.

32. Gilles Deleuze and Félix Guattari, "November 18, 1947: How Do You Make
Yourself a Body without Organs?" in Thousand Plateaus (above, n. 24), pp.
149–166, on p. 161.
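Wright's "smart object" pattern, as described above, can be approximated in a few lines; the class and attribute names below are illustrative inventions, not Maxis's code. Each object advertises the need it affects and pushes its demand to its owner, and the Sim is simply the node where those messages converge:

```python
# Rough sketch of the "smart object" pattern: purchased objects push
# need-related demands to their owner; the Sim agent is defined as the
# point of convergence of those demands. Illustrative names only.

class Sim:
    def __init__(self, name):
        self.name = name
        self.inbox = []            # demands pushed by owned objects
        self.possessions = []

    def buy(self, obj):
        self.possessions.append(obj)
        obj.owner = self

    def tick(self):
        # the Sim does not decide what matters; its objects tell it
        for obj in self.possessions:
            obj.broadcast()


class Houseplant:
    need, demand = "comfort", "water me"

    def broadcast(self):
        self.owner.inbox.append((self.need, self.demand))


class Car:
    need, demand = "fun", "wash me"

    def broadcast(self):
        self.owner.inbox.append((self.need, self.demand))


sim = Sim("Bella")
sim.buy(Houseplant())
sim.buy(Car())
sim.tick()                         # the objects now exert their influence
```

The behavioral logic lives in the objects, not in the agent or in any top-down main loop: change the assortment of possessions and the Sim's field of demands changes with it.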

Making Sense
What has emerged from a practical engagement with the materi-
alities of Maya’s interface is that object-orientation is not just an in-
novative programming methodology but also an entirely new para-
digm for making sense of the world. Such an investigation into our
becoming object-oriented may on a certain level seem absurd, since
our own sensorimotor bodies are obviously object-oriented in their
own hodological navigation of local spaces. However, one cannot
say the same about the representations we have constructed to ex-
plain observed phenomena. At least within most Western cultures,
these representations traditionally have privileged a top-down, lin-
ear, procedural approach, largely because our media for instantiating
the representations have themselves allowed only such linear op-
tions. However, the introduction of object-oriented digital media
into the feedback loop has opened up new lines of flight for thought
and representation.
Hermann von Helmholtz, the nineteenth-century German physicist
and physiologist, was among the first to oppose a strictly Kantian
approach to sensation by arguing that our senses are learned
rather than innate. Against the popular theories of his time, Helm-
holtz deconstructed the assumption that perception was a process of
universal, immutable, and transcendental faculties. In his 1878 lec-
ture entitled “The Facts of Perception,” he directly opposed Kant’s
assertion that our ability to perceive space is a manifestation of a
geometric faculty that exists as a purely a priori condition of
thought.33 Rather, he argued for an empiricist or physical theory of vi-
sion derived from his own experimental research on human sense
perception. Updating Kant’s “nativistic” assertion that the proposi-
tions of geometry are “simply given” or “necessarily true” as an a
priori aspect of thought, Helmholtz argued that what appeared to be
an innate propensity toward a geometric understanding of space is
actually a learned, haptic process by which the mimetic movements of
the eyes in apprehending space and movement train our perceptive
faculties to effectively perceive geometry.34 In order to help prove his
assertion, he at one point went so far as to affix distortional lenses to
his eyes for long periods as a means of demonstrating that the visual
faculty could also eventually be reconfigured in accordance with
other, non-Euclidean spatial geometries. In conducting a wide variety
of similar experiments, Helmholtz demonstrated that the process
of perceptual mediation can itself be transformed in relation to other
intervening media.

33. Hermann Helmholtz, "The Facts of Perception," in Selected Writings of
Hermann von Helmholtz, ed. Russell Kahl (Middletown, Conn.: Wesleyan
University Press, 1971), pp. 366–408.
Friedrich Kittler has himself argued along similar lines in the past
by stating that “what we take for our sense perceptions has to be fab-
ricated first.”35 Yet in his continued critique of new digital media, he
has criticized the ways in which contemporary software GUIs have
occluded the “direct” interface to computers that the first generation
of machine-language programming ostensibly enabled: “Those good
old times are gone forever. In the meantime, through the use of key-
words like user-interface, user-friendliness or even data protection,
the industry has damned humanity to remain human.”36 In advanc-
ing this position, Kittler rests his argument on the assumption that
graphical user interfaces have merely provided the façade of older
media interfaces which are not native to computers, and therefore
have obscured the “essential” properties of the medium. However, as
I have hoped to demonstrate throughout this paper, the specific ma-
terialities of interfaces such as Maya have actually coevolved with
new design requirements to effect a radical transformation in both
interface design and human modes of perception and representa-
tion. The results of this human-computer collaboration are therefore
not diminished by Maya's GUI, but are rather materially amplified by
its ability to represent data objects as a world of haptic, 3-D nodes.
Such an embodied interface allows the user’s immediate sensory ex-
perience to function within an object-oriented paradigm, and re-
configures patterns of observing and interacting with perceptual
phenomena. Rather than merely functioning as an opaque surface
that "hides" the "real" inner processes of computers, graphical
interfaces are precisely what keep us in the computer-human feedback
loop that continues to reconfigure the ways in which digital-media
users embody space and time. Thus, as an antidote to Kittler's claim
that "There Is No Software,"37 I would substitute a new dictum for
our rapidly evolving digital world: There is only interface.

34. For a recent historical analysis of the scientific practices involved in
Helmholtz's philosophical position, see Timothy Lenoir, "The Eye as
Mathematician: Clinical Practice, Instrumentation, and Helmholtz's
Construction of an Empiricist Theory of Vision," in Hermann von Helmholtz
and the Foundations of Nineteenth-Century Science, ed. David Cahan
(Berkeley: University of California Press, 1993), pp. 109–153.
35. Friedrich A. Kittler, "Introduction," in Gramophone, Film, Typewriter
(above, n. 3), p. 34.
36. Friedrich A. Kittler, "Protected Mode," in Literature, Media, Information
Systems: Essays, ed. John Johnston (Amsterdam: G+B Arts International,
1997), p. 157.

37. See Friedrich A. Kittler, “There Is No Software,” in ibid., pp. 147–155.
