
Concepts of 3D and Animation

© 2013 Aptech Limited

All rights reserved.

No part of this book may be reproduced or copied in any form or by any means – graphic, electronic, or
mechanical, including photocopying, recording, taping, or storing in an information retrieval system – or sent
or transferred without the prior written permission of the copyright owner, Aptech Limited.

All trademarks acknowledged.

APTECH LIMITED

Contact E-mail: ov-support@onlinevarsity.com

Edition 1 – 2013

Disclaimer: Arena Multimedia is a registered brand of Aptech Ltd.


Preface

Today, computer technology has become so powerful that you can create a real or an imaginary world without much
hassle. Animations created on digital media today can easily be incorporated into a multimedia presentation
or an interactive game, combined with traditional drawing and photography.

The book, ‘Concepts of 3D and Animation’, conveys the excitement of three-dimensional computer animation and
imaging, and provides technical as well as creative information that is useful and inspiring. It presents the
concepts required to understand the steps and procedures that lead to the completion of a fully rendered
three-dimensional computer still image or animation.

With the help of this book, students will learn a wide range of topics related to animation.

The ARENA Design team has designed this courseware keeping in mind that motivation, coupled with relevant
training and methodology, can bring out the best. The team will be glad to receive your feedback, suggestions, and
recommendations for improving the book.

Please feel free to send your feedback to the ARENA Design team at the Head Office, Mumbai. (A feedback form is
attached at the end of the book.)

ARENA Design Team

Table of Contents

Session 1: Principles of Animation
    What is Animation
    Steps for Creating Animated Film
    Timing for Animation
    Digital Animation
    3D Animation Techniques
    Animation Production Terminology
    Summary
    Exercise

Session 2: 3D Animation Environment
    Views
    Projections
    Summary
    Exercise

Session 3: Concepts of Materials and Lights
    Mapping Objects
    Shaders
    Lighting in 3D Environment
    Summary
    Exercise

Session 4: Camera Concepts and Rendering Techniques
    Camera Concepts
    Rendering
    Z-buffer
    Raytracing
    Radiosity
    Network Rendering
    Summary
    Exercise

Glossary


Iconography

● Quick Test Questions
● Answers to Quick Tests
● Note
● Answers of Exercise
Session 1: Principles of Animation

Learning Outcomes
In this session, you will learn to -
● Discuss and describe animation.
● Discuss different types of 3D animation.
● Describe briefly the different stages of creating an animated film.
● Describe briefly the concept of Timing for Animation.
● Describe digital animation.
● Describe various 3D animation techniques.
● Define different animation production terminologies.

A long time ago, animation pioneers began experimenting with moving images to make their art come alive. The
techniques may seem primitive in today’s digital world, but the fundamentals remain the same. Animation has been
around, in one form or another, for centuries. Early animation pioneers created simple moving-picture devices with
a few images and a cardboard disc.

Animation can be a very influential tool for shaping the perception of learners. Some concepts need motion to
be illustrated, and in many cases, animating a drawing or object is the simplest way of getting a concept across
to the learner.

1.1 What is Animation


Animation is an illusion of motion. This illusion is created by a phenomenon called ‘Persistence of
Vision’ in the human eye. Persistence of Vision is a theory according to which an image that you see remains
in your eye for 1/16th of a second. This principle of vision is used to create any kind of animation: multiple
images, each slightly different in placement, shape, size, and lighting, are displayed one after the
other at a specified speed, which gives the illusion of motion.

But animation is not limited to persistence of vision. It is a blend of artistic creation, physical grounding, and
the laws of weight and momentum. Animations fall under two basic categories, namely 2D and 3D.
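The arithmetic behind the persistence-of-vision illusion described above can be sketched in a few lines. This is a toy model, not a precise vision formula: the 1/16th-of-a-second figure is the estimate quoted in the theory above, and the frame rates tested are common playback speeds chosen purely for illustration.

```python
# Persistence-of-vision estimate from the theory above: an image
# lingers in the eye for roughly 1/16th of a second.
PERSISTENCE = 1 / 16  # seconds

def frame_time(fps):
    """How long a single frame stays on screen at a given frame rate."""
    return 1 / fps

# Frames merge into continuous motion when each one is replaced
# before the previous image has faded from the eye.
for fps in (12, 16, 24, 30):
    smooth = frame_time(fps) <= PERSISTENCE
    print(f"{fps:>2} fps: {frame_time(fps) * 1000:5.1f} ms per frame, smooth: {smooth}")
```

At 12 frames per second, each frame outstays the 1/16-second window, so motion looks jerky; at 16 fps and above, the images merge.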

2D, or two-dimensional, animations are a series of drawings based on width and height. The effect is very much like
looking at, and interacting with, a painting that is alive. These drawings generally look flat, as they are based only
on the X (width) and Y (height) axes.

3D, or three-dimensional, animations are a series of images created taking height, width, and depth into
consideration, so that the objects look solid. It should feel as if you could walk around and behind them.

In a broader way, animation can be classified as classical animation or digital animation. Classical animation,
the oldest type, consists of 2D images created mainly with hand drawings; it includes many forms, such as sand,
clay, puppet, and cut-out animation.

Digital animations are created using computers or other digital media; both 2D and 3D animations can be created
digitally. In today’s world, digital animation is the most commonly used: it saves a lot of time and gives you the
ability to create effects that are not possible with the traditional methods of classical animation.

There are three distinct types of digital animation, and each type has unique characteristics and uses. These three
types are frame-based animation, morphing, and 3D character animation.
■ Frame-Based Animation
It is produced by creating a series of individual cels. Traditional animation, flipbook animation, tweened
animation, and static animation are all forms of frame-based animation. Figure 1.1 shows how individual frames
are created and animated in frame-based animation.

Figure 1.1: Individual frames are created and then animated in frame-based animation
● Traditional animation

In traditional animation, individual images are drawn on acetate, and the changes to the moving parts of the
drawing are made from frame to frame. The frames are then photographed and displayed one after the other,
superimposed on a background. When played back in sequence, the minor changes in the drawings from frame
to frame merge and appear to move. The smoothness of the motion depends on how gradually the images
change from frame to frame.
● Flipbook animation

Electronic tools can be used to create flipbook animation on screen. This digital animation imitates traditional
animation techniques: the animator draws a sequence of individual frames that change gradually from one
frame to the next, and when these frames are shown in sequence, the images appear to move.

An apt example is a flipbook of a coyote running from point A to point B as you flip through the pages of
the book.
● Tweening

Electronic tools can also be used to simplify the process of creating animation. The animator draws keyframes,
posing backgrounds and characters on the screen, and then defines the paths that the characters follow during
the scene. The software draws the in-between frames, using a process called tweening, to create the illusion
of movement. Tools like Macromedia Director and Flash can be used to create tweened animation. Refer
to figure 1.2.



Figure 1.2: First and last frames are created and in-between frames are tweened

Many animations involve only primary movement: an object moves from one place to another on the
screen, but the object itself does not change. For this type of animation, the starting point of the object is
defined, the object is moved along a path from one point to the next, and the end point is identified.
But many animations do more than move objects along a path. They add secondary movements to the
primary movement of a character, such as arms swinging, cloth flapping in a breeze, or facial expressions. In
these cases, you assign a series of pictures for each position of the element; these pictures are assigned
to a path, and the animation plays the images in sequence along that path. Some animation programs even
blur objects slightly when they are animated to enhance the impression of movement.
● Static animation

Static animation is a sequence of a single keyframe that does not change. It is used to create
background images, which serve as the context for moving images. Although there is no motion, static
animation is an important part of creating animation.

Digital animation can also be classified as fixed-path animation (pre-programmed by the software developer) or
data-driven animation (controlled by constantly changing data drawn from user input).
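The tweening process described above boils down to interpolation between keyframes. The sketch below uses simple linear interpolation of 2D positions; the function name and coordinates are illustrative, not taken from any particular animation tool.

```python
def tween(start, end, frames):
    """Generate the in-between positions from a start keyframe to an
    end keyframe by linear interpolation, one (x, y) pair per frame."""
    x0, y0 = start
    x1, y1 = end
    positions = []
    for i in range(frames):
        t = i / (frames - 1)  # 0.0 at the first frame, 1.0 at the last
        positions.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return positions

# A character moving from point A to point B over five frames; the
# animator draws only the first and last, the software fills in the rest.
path = tween(start=(0.0, 0.0), end=(100.0, 40.0), frames=5)
print(path[2])  # the middle in-between frame: (50.0, 20.0)
```

Real tools also offer non-linear interpolation (ease-in, ease-out) by reshaping how `t` varies across the frames, but the principle is the same.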

1.2 Steps for Creating Animated Film


With the several technologies now available, multimedia is becoming more popular than all other media. It
supports the use of animation, video, and audio to supplement the traditional media of text and images. These
new media provide more design options, but they also require design discipline.

The fine art of character animation has evolved from the classic days of Disney, Warner Brothers, and Fleischer.
The introduction of computer software into the animation studio has influenced the industry: it has changed the
way an animated character is created, and even watched. Animation was originally a craft whose structure was
based primarily on the animation cel (a 1/30th-of-a-second picture frame) as the main building block, a direct
outcome of the physical properties of movie film.

An animation film has to go through the following phases:


● Pre-production
● Production
● Post-production

These stages are then sub-divided into parts and the animators execute their work as decided.

The pre-production work starts with a story and ends with a plan that decides what production elements the video
should include. It helps you communicate your message properly. If the production stage starts with the storyboard
and the correct equipment is available, then you are on your way to creating a quality video.

At the post-production level, you have the basic footage. At this point, you will make the crucial decisions as to what
stays and what goes. You can also add graphics, which will give your video a polished and professional appearance.


Now, you will learn in brief about each stage mentioned earlier.

1.2.1 Pre-production
Pre-production is the period during which work is done on a show prior to the first rehearsal. During
pre-production, you make decisions that dictate how the rest of the production comes together: here you will decide
what equipment you will use and what story you will tell.

During pre-production, the following things are finalized so that all obstacles are removed and production goes
smoothly:
● Story
● Character development
● Storyboard
● Creating premises
● Scene planning
● Sound tracks and effects
● X sheet

Let us understand each one of them in detail.


■ Story

In any kind of 2D or 3D film making, the story should be noteworthy in order for the film to succeed. A good story
is the key to the success of any film. Stories are important to people, politics, and education. A good story can be
based on anything that appeals to people; if the base is strong, you can always enhance it with visually stunning
special effects and catchy sound tracks.
■ Character Development

The characters you develop should fit the story perfectly. You need to study a lot of things before developing
each character in the story. Even an insignificant character has precise characteristics, which show in its
movements. For example, if you want a strong and interesting character, you will have to know its physical,
psychological, and social make-up. Refer to figure 1.3.



Figure 1.3: Character development


■ Storyboard

A storyboard comprises a set of drawings outlining the plot and shot sequence of something that needs to
be filmed. It gives a comic-book feel to the movie’s scenes. Think of a video as a story; all good stories contain
certain elements. When creating a story, keep in mind the five W’s: Who, What, When, Where, and Why. This
helps create the main body of the story.
■ Creating Premises

A premise comprises the core statement of what the story is ‘about’. It is the assumption and line of action upon
which every successful dramatic story must be based.

There are two types of premises:


● Flushing premises: In this category, the roughly drawn premises are reviewed; one is chosen and
finalized, and the remaining ones are flushed out.
● Developing premises: Here, the chosen premise is developed and given its finishing touches.

The premise tells you the base of the story and where it is heading. A premise is rarely formed before writing
a story. A premise must consist of three parts: character, conflict, and conclusion. If you refine the story idea to
this essential premise, you will know exactly how the story will end and how you will get to that ending. The
premise need not always be true, but a good story must prove its premise.

Tom and Jerry cartoons had a premise of sorts: ‘a clever mouse outsmarts a larger and dumber opponent’. But
they were mainly just situations, with a series of attack-and-revenge gags, and thus were rarely true stories.
■ Scene Planning

Once the storyboard and premise are ready, scene planning comes into the picture. At this stage, you decide the
overall look of the story and the surroundings needed for it.
■ Sound Tracks and Effects

Music is an expressive medium; a single note can express more than most people express through words in a
single day. When music and animation are put together, it is a perfect fit, complementing, driving, and inspiring
each other and the audience. In an animated sequence, if you turn off the volume, you will see that although
the images may still be incredible to look at, something is missing from the animation.
● Onomatopoetics: These are the sounds of the world translated into understandable everyday speech.
A scream, growl, or grunt, the sound of a train or of flowing water, and many other movements are
expressed in onomatopoetics. To understand this better, take the example of rocket exhaust or the howl
of a wolf: such actions are expressed in animation with onomatopoetic sounds like ‘whoosh’.

In 3D comics, onomatopoetic sounds are sometimes placed within text balloons as well, though balloons are
generally not used. In animation, it sometimes becomes impossible to spell out onomatopoetic sounds,
such as ‘bang’, ‘punch’, ‘smooch’, and other event-related words used in comics.

While dialogue serves to make a movie understandable, the purpose of a sound effect is to draw the audience into
the action. It makes you believe that you are part of the movie experience.

Sound effects fall into four basic groups:


● Foley

The term Foley is derived from the name of Jack Foley, a film-sound pioneer from the earliest days of talking
pictures. He discovered that people talking on screen without any supporting sound effects look unnatural:
when you see people walking, you expect to hear the sound of their footsteps. He created a unique
environment called the Foley stage, in which artists can duplicate the sound of footsteps, prop handling, or body
movement in synchronization with the picture.
● Designed sounds

Frequently, sounds used in films do not exist in real life: an Imperial Walker, say, or the sound of a
laser pistol. Since state-of-the-art visual effects keep expanding to meet the director’s imagination, the art
of sound design has to keep up. In many instances, great sound design can even make a marginal visual
effect seem more realistic.
● Creature sounds

In many instances, alien life forms and even dinosaurs have become material for modern action films. Under
these circumstances, each animal must have an emotional language. The audience must know intuitively
when the creatures are sad, happy, or angry. To do this, the sound designer will record the voices of many
real animals. Then the sound designer will alter them individually and layer them to create an entirely new,
but believable creature voice.
● Ambience

Ambience is the sound created for the world of a movie. If the scene calls for a storm, you hear rain; if the scene
is in a cathedral, you experience the echoes of the characters’ voices and the sounds of their actions all around.
By recreating a scene’s acoustical environment in front of and all around you, the sound designer draws
you into the movie and makes you feel part of the action.
■ X Sheet

In an animated film, each frame is shot one after the other, which means there are many individual images as
the film proceeds. In cel animation, a single shot might consist of numerous cels stacked together. Since
several people may handle each image as it makes its way to the camera, there needs to be a systematic
way of keeping track of the individual pictures; the sheet numbers run into the thousands. That is what
Exposure Sheets (also called X Sheets or Dope Sheets) are for.


These sheets can also be used to keep track of sound recordings. Once the sound track is recorded, it
can be entered on the X Sheet, using the rows of empty frames as a timeline. Spoken words are spelled out
vertically along the page, which helps animators identify the frame for which they need to draw the mouth shape
based on the sound. Refer to figure 1.4 for a dialogue X sheet and figure 1.5 for an exposure sheet.

Figure 1.4: A Dialogue X Sheet

Figure 1.5: An Exposure Sheet
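An X sheet's frame-by-frame bookkeeping maps naturally onto a table keyed by frame number. The sketch below is a toy model of a dialogue X sheet; the frame rate, the word being spoken, and the sound timings are all invented for illustration.

```python
FPS = 24  # an assumed playback rate; one second of animation = 24 frames

# Start with one second of empty frames, like a blank exposure sheet.
x_sheet = {frame: None for frame in range(1, FPS + 1)}

# Spell the recorded word vertically down the frame column, so the
# animator knows which mouth shape to draw on which frame.
dialogue = [("H", 1, 6), ("I", 7, 18)]  # (sound, first frame, last frame)
for sound, first, last in dialogue:
    for frame in range(first, last + 1):
        x_sheet[frame] = sound

print(x_sheet[3], x_sheet[10], x_sheet[20])  # H I None
```

Looking up any frame tells the animator which mouth shape to draw there; frames after the word ends stay empty, just as on a paper sheet.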

Turning a script or storyboard into a video requires preparation. Video incorporates many different elements, such
as zooms, dissolves, crossfades, music, and titles. When used properly, these elements can aid storytelling. Putting
them together, however, requires some planning: you must first understand the production elements before you
can use them effectively.

1.2.2 Production Process


This is the most challenging stage of creating an animated film. At this stage, you get to see the actual result of the
treatment given to the story, and the visual achievement of the director’s imagination.
■ Layout Drawing
Layout refers to the relative positions occupied by the objects on a screen. The objects may be individual words
or groups of text: anything and everything from titles and headings to sentences or paragraphs of body text.
■ Backgrounds
A painting or other artwork that portrays the environment in which the character operates is referred to as the
background. Even good, visually appealing animation is enhanced by the use of a background: it conveys
the mood of the film and reminds the viewer where and how the story is set. David Lean’s ‘Lawrence of Arabia’
used one of the most intriguing backgrounds in live-action film.

After the background and layout are created, the animation goes through stages such as creating extreme key
positions. In this stage, the first and last keyframes of a scene are drawn; then the in-between artists draw the
middle frames, adhering to the principles of design. At this point, scene drawings with fine outlines are ready.
Next, the images are scanned and painted to fill in colors. When the drawing is ready with appropriate
colors and the required motion, the production stage ends and post-production begins.

1.2.3 Post-production
Post-production is the process of compositing and editing both the picture and sound elements of a motion picture
into an organized whole.
■ Compositing
Compositing is the technique, and the art, of assembling image parts collected from multiple sources into
a new single whole. A photo-realistic composite, meaning one that depicts a scene that could really have existed,
is called a photo-montage. Compositing is a key element in producing stunning visual effects such as Superman
flying and many other amazing scenes.

Regardless of whether a project is photo-montage or collage, the main elements of compositing consist of
selections, copy and paste operations, and positioning of image elements. The finer aspects require blending,
color matching, and general attention to detail.

In films, compositing is done when different elements of a shot are filmed separately and later put together
into one final composite shot. There are many reasons for filming the elements separately. For example, if the
script requires a complex shot with different things happening and something goes wrong partway through, the
whole shot is ruined and must be filmed all over again. Compositing is also necessary to create shots that would
be too difficult, dangerous, expensive, or impossible to film with all the elements together.

There are two types of compositing:


● Optical compositing

Optical compositing is a slow, tedious, stressful, and high-pressure job. It is done on an optical printer, which
passes the different pieces of film containing the elements to be composited past each other while they are
filmed by a camera. High-precision lenses must be used to keep all the composited elements in focus. Often,
many different elements must be composited in the same shot, which requires multiple passes through the
optical printer.

● Digital compositing

Digital compositing is the latest technology, in which the different elements are composited by a computer.
It is much faster and more versatile, because color corrections and other effects can easily be applied to the
different elements, ensuring that they fit together and are believable in the final composite shot.

A common example is the everyday weather forecast on TV. The weather map is a separate, computer-generated
shot onto which the announcer is superimposed, making it look as if that person is standing in front of a giant
TV screen flashing different weather images.

While this is a simple example, many spectacular shots can be achieved using compositing. In fact, it is
one of the most important and widely used effects techniques; no special-effects movie can be made without it.

Following are the common steps followed while compositing:


● Background plate

First, a background plate is shot. The background plate is the bottom-most element, in front of which the other
elements will be placed.
● Bluescreen

Next, the other elements are filmed against a solid-color backdrop. Usually blue is used, so the backdrop is
called a bluescreen. The solid blue background isolates the element so that it can be placed into the background
plate: in the final composite shot, anything in the shot that is blue is replaced by the background plate.
● Matte

Now, a black-and-white matte must be made of the element so that it can be properly exposed with the
background plate. The matte is created by photographing the shot of the object in front of the bluescreen
with a special filter. This filter turns blue to white and all other colors to black, leaving a silhouette of the
object. The matte defines what portion of the film is to be exposed onto the final composite: the white
region of the matte is transparent, allowing the background plate to show through, while the black region
leaves that portion of the film unexposed, so that the corresponding bluescreen element can be
exposed later.

Environment matting captures a foreground object and its traditional opacity matte from a
real-world scene. Additionally, it captures a description of how that object refracts and reflects light,
which is called an environment matte. The foreground object can then be placed in a new environment,
using environment compositing, where it will refract and reflect light from that scene. Objects captured in
this way exhibit specular, glossy, and translucent effects, and show wavelength-dependent attenuation and
scattering of light. Moreover, the environment compositing process, which can be performed largely with
texture-mapping operations, is fast enough to run at interactive speeds on a desktop PC.
● Final composite shot

Once all the elements and the background plate have been filmed and the mattes are ready, they are passed
through an optical printer or a computer. This produces the final composite shot that will appear on the big
screen.

Compositing is often the last step in the artistic process of film effects. Even so, viewers rarely realize that some
of the most amazing effects are filmed separately and combined later.
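The matte logic described in the steps above can be expressed per pixel: where the matte is white, the filmed element is exposed; where it is black, the background plate shows through. The sketch below works on flat lists of grayscale pixel values in the 0.0–1.0 range; real compositing operates on full-color images, and the sample values here are invented.

```python
def composite(element, plate, matte):
    """Combine a filmed element with a background plate using a
    black-and-white matte (1.0 = white = keep the element,
    0.0 = black = let the background plate show through)."""
    return [e * m + p * (1.0 - m) for e, p, m in zip(element, plate, matte)]

element = [0.9, 0.9, 0.9, 0.9]  # e.g. an actor filmed against bluescreen
plate   = [0.2, 0.2, 0.2, 0.2]  # the background plate
matte   = [1.0, 1.0, 0.0, 0.0]  # silhouette of the actor

print(composite(element, plate, matte))  # [0.9, 0.9, 0.2, 0.2]
```

The optical printer performs the same selection photochemically, in two exposures; a digital compositor does it arithmetically in one pass, which is why color corrections are so easy to slip in between.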
■ Editing
Editing is the selecting, arranging, and pacing of both images and sound in a particular order to tell the film’s
story. It is a very important element of television and film production. Editing can add many facets to the
final production, including developing the narrative, evoking emotion, controlling time, and indicating a
point of view. It also encourages viewers to identify with characters, and it controls the pace, the timing of
the film, and how the narrative is revealed. Editing controls the visual style as well; for example, the constant
use of close-ups rather than other shots. Now, let’s learn something about analog and digital film editing.

To put it in plain theoretical terms, editing is the step-by-step transformation of separate, non-linear footage
sources (paradigma) into linear chains of sequences (syntagma). In analog film editing, which means operating
with thin strips of film, cutting them with little knives, splicing, and refining sequences, the options available for
making further changes are reduced.

During the traditional work process, the cut becomes more rigid and exchanges become more difficult. Digital
editing, however, enables a permanent, open exchange between syntagma and paradigma. This transparency is
essentially a new definition of editing with computers, and it leads to more open and smoother working methods,
with more emphasis on the editing process.

The basic effects used in editing are -


● Fade Out

The image fades into a single color. This is often associated with the end of a particular scene in the
narrative, where the image traditionally fades to black.
● Fade In

This is the opposite of fade out: the image fades out of a single color, and it is often associated with the
beginning of a scene.
● Dissolve

The blending of one image into another. The blending can be of audio as well as video. Applications include
blending into dream sequences and large leaps in time.
● Wipe

A physical change between two images. For instance, a line going from one side of the screen to the other
pushes out the old image and introduces the new one.

Now, let’s take a look at continuity editing and its principles. Continuity editing is the method of providing
a seamless transition from one cut to the next, as well as the cut itself. It is achieved by similar framing,
similar setting, and similar rhythm.

Some of its principles are -


● Shot reverse shot

It is simply the reverse of a shot. A typical example is a conversation where two people are facing each
other. Cameras are placed on each person in close up. The first shot is person A asking a question. The
second shot is person B answering it, the reverse shot. Shot reverse shot is mostly used in conjunction with
the 180° rule.
● Eyeline match

Linking in with the idea described above, an eyeline match is simply two shots: if, in the first shot, someone
is looking at something (the subject), then the second shot shows what is being looked at (the object).
● Match on action

This is where movement occurs in the frame. If a character moves around or out of the frame, the second
shot must continue this movement. The editing matches the movement between shots.

10
Principles of Animation
Editing allows you to achieve different viewpoints. It can help you experience greater depth of emotion and
tension. You can be in a number of places and situations simultaneously. However, this is not to say that editing
is everything needed to create a decent and coherent style in film.
■ Mixing Audio-Video
The emotional coloring of music reinforces the mood of the scene and is often used to manipulate audience
emotion. Music can often change the entire meaning of what actors are saying and it is often thought of as the
equivalent of a narrator in a book. Music is one of the most powerful ingredients in any scene. When it is used
efficiently, it can dramatically affect all other elements, adding suspense, mystery, excitement, and drama.
Sometimes musical themes are used for individual characters and the audience identifies that character with
the music.

In films, post-production of sound generally refers to the process of sound editing, sound design, scoring, and
mixing. Once all the sound elements are assembled, they must be edited, cut, and spliced into the correct order
to match each scene. After these sounds are edited to match the scenes, they are pre-mixed. Since there can
be hundreds of individual sound elements in a scene, it is best to group them by content and mix them into
stems. These stems often follow the basic elements of film sound: dialogue, music, and sound effects.
■ Final Mixing
Once the sound has been designed, edited, and pre-mixed, it is brought together in a movie theatre environment
for the final mix. Here, the director, sound designer, dialogue mixer, and music mixer determine the overall
quality, character, and placement of each sound element.

The final mix of a film can take two weeks or more, as each scene is replayed over and over again allowing
for subtle changes to be noted and made. Here, the locations of sounds are matched to the picture. Sound
movement, or panning, is determined at this stage, as are the level and character of the ambience. Dialogue levels
and locations are set amidst the competition from sound effects and music. Everything comes together in this
controlled environment.

Quick Test 1
1. A wipe is the physical change between two images. (True/False)
2. The three stages of animation are pre-production, production, and __________.

1.3 Timing for Animation


Timing is an extremely important factor in any form of media. It determines the effect it will have on the audience.
Let’s take an in-depth look at timing for animation.
■ General Principles of Timing
The clarity of an idea depends on two factors: good staging and good timing. Both are briefly described
below.
● Good staging and layout are important because each scene and action should be conveyed in a clear and
effective manner.
● Good timing is essential so that enough time is spent preparing the audience for something to happen, on
the action itself, and then on the reaction to that action. If too much time is spent on any of these, the
audience's attention will wander; if too little, the audience may not have adequate time to react and the
action may be missed, wasting the idea. It is also important to know how a particular audience is likely to
react to a given action.

Animation has a very wide range of uses, from entertainment to advertising, from industry to education,
and from short films to features. Different types of animation require different approaches to timing. Refer
to figure 1.6.

Figure 1.6: Splash created when stone is thrown in water


■ Limited Animation
Limited animation allows as many repeats as possible within the 24 frames per second (fps). Usually, as a rule,
a maximum of six drawings is produced for each second of animation. Limited animation requires a skilled
animator, who must create the illusion of action with the greatest sense of economy. Refer to figures
1.7 and 1.8.

Figure 1.7: Frames to create illusion of waving object


Figure 1.8: Illusion of a running dog
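The arithmetic of limited animation can be sketched in a few lines. With six drawings per 24-frame second, each drawing is held for four consecutive frames, a practice traditionally called "shooting on fours". The function name and constants below are illustrative, not from any standard tool.

```python
FPS = 24               # standard projection rate
DRAWINGS_PER_SEC = 6   # the limited-animation ceiling mentioned above

def drawing_for_frame(frame_number, fps=FPS, drawings_per_sec=DRAWINGS_PER_SEC):
    """Map a projected frame number to the index of the drawing shown on it.

    With 6 drawings per 24-frame second, each drawing is held for
    24 // 6 = 4 consecutive frames ("shooting on fours").
    """
    hold = fps // drawings_per_sec  # frames each drawing is held
    return frame_number // hold
```

Frames 0-3 all show drawing 0, frames 4-7 show drawing 1, and so on, so one second of screen time needs only six drawings instead of twenty-four.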


■ Good Timing
In animation, timing gives meaning to movement. Movement can easily be achieved by drawing the same
thing in two different positions and inserting a number of drawings between the two. But this results in
mere movement, not animation.

A character can be animated convincingly from one point to another only if the forces producing the movement
are considered. First, gravity pulls the character towards the ground. Second, the character's body is built
and jointed in a certain way, and is acted on by a certain arrangement of muscles, which tend to work against
gravity. Third, there are psychological reasons for the action.
■ Timing a Slow Action
The closer the drawings are to each other, the slower the movement appears. Similarly, the wider the spacing
between drawings, the quicker it looks on screen.

Closely spaced drawings demand great accuracy and precise spacing; if the spacing is inaccurate, they tend
to jitter on screen. Slow motion is therefore used sparingly in cartoons, and when it is used, it should contain
sufficient rhythm, bounce, and flexibility. The drawings also need extremely careful tracing. Sometimes it is
preferable to advance the animation with short camera dissolves rather than laboriously filling in every
in-between. Figure 1.9 shows timing a slow action.


Figure 1.9: Timing a slow action


■ Timing a Fast Action
Fast action suits animation best. It gives the animator an opportunity to create an illusion of pace and energy.
This is more difficult to achieve with live action.

The important point to remember about fast action is that the audience should be able to follow the on-screen
proceedings. If it is too fast, the audience will fail to understand what’s happening on the screen. The character
must prepare for the action in a manner where the audience anticipates quick movement and can follow the
action to the end. Figure 1.10 shows timing a fast action.


Figure 1.10: Timing a fast action

1.4 Digital Animation


In the past, traditional animation techniques were limited, and required large production crews with expensive
pre/post production editing equipment. This resulted in long periods of time to produce a single minute of animation.
Today’s technology allows the power of animation to be integrated in almost any form of digital medium using a
variety of production tools to best suit our needs and budget restrictions.

Classical animation has been around for over 70 years and is achieved by taking hand-sketched line drawings.
These drawings are inked and painted onto transparent plastic cels (celluloids), which are then filmed one
frame at a time with a camera. This process is difficult, time consuming, and expensive. Hand-painted
animation cels have been replaced by scanned digital files painted with the use of a mouse or digital pen/drawing
tool. 3D animation is fully computer-generated imagery, also referred to as CGI.

Today, it is hard not to consider the advantages of three-dimensional, or even multi-dimensional, systems of
visualization and operation. They allow a higher level of representation and a more powerful view of spatial
relationships among objects.

Efforts have been made to take this one step further and utilize Virtual Reality (VR) systems for a
‘full immersion’ type of visualization. In this type of visualization, the operator participates in observing,
navigating, and interacting as a first person.
The disadvantage of higher dimensions is the inherently compute-intensive rendering and visualization involved,
which requires expensive, fast, and powerful hardware and software with a good operating system.

The gaming world has been keen to take full advantage of these developments. This can be seen in the fierce
competition to provide the most engaging gaming experience through ever greater graphics realism and
interactivity.

The concepts and principles of 2D animation continue to provide the base for 3D animation. To this day, cartoons
created in 2D animation remain well liked and appreciated.

The use of classical animation may have changed a lot over the years, but the general principles of planning,
layout designing, timing, and storyboarding have basically stayed the same. They serve as guidelines for creating
all other types of animation used today.

Digital animation refers to any type of animation saved as an electronic file on a computer or other digital
storage device. It can be entirely computer generated or mixed with other digital elements, which are then
assembled in an animation production program.

This type of animation is the most commonly used today. Individual elements (frames) of an animation can be
taken out and reused to produce other graphic image files, such as posters or ad materials. Digital animation
also makes editing easier; it can be done almost instantly with a few clicks of the mouse. Animation shot on
film would require visually inspecting the film for the cut location and then physically editing it by hand,
consuming hours of time and resulting in higher production costs. Most digital animations can be converted from
one media file format to another. Refer to figure 1.11 for a planar projection.

Figure 1.11: Planar projection

1.4.1 What is 3D
The dimension of an object is a physical attribute describing its extent in space. An object cannot be located
in physical space without reference to some other material object. The space in which the object is placed
determines whether the object is 2D or 3D.

You can understand the concept of three dimensions with a simple exercise. Draw a line. A line has a single
dimension: length. Refer to figure 1.12.


+Y

-X +X

-Y

Figure 1.12: Horizontal or vertical line indicating a single axis

Now, you can draw a square. Paint it with your desired color and pattern. It will look flat but contain two dimensions,
width and height. Refer to figure 1.13.

+Y

-X +X
-Y
Figure 1.13: Flat plane
Now, let’s draw a cube and paint it like a square. You will notice that it has width, height, and depth. Refer
to figure 1.14.

+Y

-Z

+X
-X
-Y
+Z
Figure 1.14: Height, width, and depth of 3D object
Three-dimensional objects are expressed in a 3D environment by their width, height, and depth, represented by
the X, Y, and Z axes respectively. Since a third axis is needed to indicate the accurate position of these
objects, they are referred to as 3D objects.

These three axes can be understood better with the right-hand rule. Hold your right hand up with the palm facing
towards the book and the fingers folded in. Project the thumb out towards the right so that it makes a right angle
to the palm. Point the index finger up and the middle finger towards the book at a right angle to the index
finger. The thumb represents the X-axis, the index finger represents the Y-axis, and the middle finger represents the
Z-axis. This rule gives the positive directions of the axes in the 3D environment, but note that an object can
also be expressed in negative values. The point where the three axes meet is called the origin.
Refer to figure 1.15.

Figure 1.15: Right hand rule
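The right-hand rule has a compact algebraic form: in a right-handed coordinate system the cross product of the X and Y unit vectors gives the Z unit vector. The following sketch (plain Python, no library assumed) verifies this.

```python
def cross(a, b):
    """Cross product of two 3D vectors given as (x, y, z) tuples."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)

X = (1, 0, 0)  # thumb
Y = (0, 1, 0)  # index finger
Z = (0, 0, 1)  # middle finger
```

In a right-handed system the axes cycle: X x Y = Z, Y x Z = X, and Z x X = Y, which is exactly what the thumb, index, and middle finger demonstrate.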


A 3D scene can be viewed from many different angles. These angles are called views. While creating a character
or a scene, it is essential to choose the view that best suits the requirements of the scene.

1.5 3D Animation Techniques


Animation techniques are so wide in variety that they are difficult to categorize, and they are often combined.
The following are some techniques used in animation:

1.5.1 Morphing
Morphing is a special effect, which is generally used in motion pictures and animations. It changes one image into
another in a seamless transition. This technique is useful in dream sequences where it can show a person changing
into some other form with the help of magic or technology. Such depictions are usually achieved through crossfading
techniques on film. Since the early 1990s, crossfading has been replaced by superior computer software to create
realistic transitions.

Earlier, a morph would be achieved by crossfading from one actor or object to another. But this technique had its
limitations. In this technique, the actors or objects would have to stay motionless in front of a background. It was not
possible to change the background or move about in the frame before and after shots. Later, advanced crossfading
techniques were employed. These advanced techniques faded different parts of an image to the other gradually
instead of fading the entire image at once. Many films have implemented the morphing effect. 3D software
provides various tools to create it, and morphing software continues to advance even today.

Many morphing programs can automatically morph images that correspond closely enough with relatively little
instruction from the user. This has led to the use of morphing techniques to create convincing slow-motion effects.
It has also developed as a transition technique between one scene and another in television shows, even if the
contents of the two images are entirely unrelated. In this case, the software attempts to find corresponding points
between the images and distort one into the other as they crossfade. In effect, morphing has replaced the use of
crossfading as a transition. Crossfading was originally used to produce morphing effects. Refer to figure 1.16.


Figure 1.16: Morphing effect
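The geometric half of a morph, moving each source control point toward its corresponding target point while the images cross-dissolve, can be sketched as simple linear interpolation. This is an illustrative sketch only: real morphing software also warps the pixel grids along these point trajectories.

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b at parameter t."""
    return a + (b - a) * t

def morph_points(src_points, dst_points, t):
    """Move each source control point toward its corresponding target point.

    t=0 gives the source shape, t=1 the target; intermediate t values
    give the in-between shapes shown during the morph.
    """
    return [(lerp(x0, x1, t), lerp(y0, y1, t))
            for (x0, y0), (x1, y1) in zip(src_points, dst_points)]
```

Stepping `t` across the transition's frames, and blending the warped images with the same weight, yields the seamless change from one image to the other described above.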

1.5.2 Rotoscoping
Rotoscoping is an animation technique that traces a projected image from a film using a rotoscope. A rotoscope is a
device used in animation to trace a projected image from a film to reproduce live-action movement. Originally, pre-
recorded live-action film images were projected onto a frosted glass panel and re-drawn by an animator. Animators
use rotoscoping to trace live-action film movement, frame by frame, for use in animated films. Eventually, rotoscopes
were replaced by computers. More recently, the rotoscoping technique has been referred to as interpolated
rotoscoping.

In live action movies, the rotoscoping tool has been successfully used for special effects. An object is traced and a
silhouette (matte) is made, which creates an empty space in a background scene. This allows the desired character
or object to be placed in the scene. However, blue screen techniques have largely replaced rotoscoping.

Rotoscoping has also been used as a special visual effect (such as a glow, for example) where it is guided by the
matte or rotoscoped line. An appropriate example would be the glowing lightsaber effect used in the original Star
Wars films. The visual effect was implemented by creating a matte based on sticks held by the actors.

The term rotoscoping is now used for the corresponding all-digital process of tracing outlines over digital film images
to produce digital mattes.

Walt Disney and his animators used rotoscoping in Snow White and the Seven Dwarfs in 1937. Rotoscoping was
also used in Cinderella and One Hundred and One Dalmatians. The rotoscope was mainly used for studying human
and animal motion, and not actual tracing.

1.5.3 Stop-Motion
It is an animation technique in which static objects are given the illusion of motion. The object is moved by
very small amounts between individually photographed frames, producing the effect of continuous motion when the
series of frames is played back at normal speed.

Clay animation is one of the many forms of stop-motion animation. In clay animation, each animated piece can be
shaped as per requirements. Clay animation is also known as claymation.

Clay animation can take the style of freeform clay animation where the shape of the clay changes radically as
the animation progresses. It can also adapt to the style of character clay animation where the clay maintains a
recognizable character throughout a shot.

A final clay animation technique is called clay painting where clay is placed on a flat surface and moved like wet oil
paints on a traditional artistic canvas. It helps produce any images, but with a clay appearance to them. Joan Gratz
was a pioneer of this technique. Refer to figure 1.17.


Figure 1.17: Clay model for stop-motion animation

Stop-motion animation is essential for model animation. It is the process of animating realistic models designed to
be combined with live action footage. This is done to create the illusion of a real-world fantasy sequence.

Stop-motion is used to produce the animated movements of non-drawn objects such as toys, blocks, and dolls.
An example is the Cartoon Network TV series Robot Chicken. Stop-motion is also the means of producing
pixilation, in which live actors are used as frame-by-frame subjects in an animated motion picture, their poses
recorded frame by frame.

A simplified variation of graphic animation is called direct manipulation animation. It involves altering or
adding to a single graphic image frame by frame. The stop-motion process here simply animates a series of
drawings, which is what most people associate with the generic term animation.

1.5.4 Anime
Anime is the Japanese abbreviation of the word Animation. Outside Japan, the term generally refers to animation
created in Japan.

Computer-assisted animation has taken a leap in the past few years, and anime has benefited from this leap.
The story lines in an anime represent most major genres of fiction. Anime is broadcast on television, distributed on
media, such as DVD and VHS, and included in video games. Additionally, some are produced as full length motion
pictures. Anime often draws influence from Manga (Japanese comic strips and cartoons), light novels, and other
cultures. Some anime storylines have been adapted into live action films and television series.

1.6 Animation Production Terminology


The mere knowledge of animation is not sufficient. The terminologies associated with animation are equally
important. They are universally used terms in the field of animation. These terms will prove extremely useful when
you are working on a project or giving interviews. Let’s take a look at the most important terminologies.
● Animation fps

A motion picture is projected at 24 frames per second (fps). Hence, in a two-hour movie, roughly 173,000 frames
are projected. This is also the minimum number of cels that need to be created.

● Background drawing

It is the area where action takes place. There are usually very few backgrounds in a film compared to
cels.
● Cel and cel setup

An image is drawn on a clear piece of plastic known as a cel. The general size of a cel is 12½ by 16½ inches.
The outline of the picture is drawn on the front of the cel and it is colored along the back. One or
more cels overlaid on a background is known as a cel setup.
● Character models

The first thing an animator does is create a model sheet of the character. The model sheet shows the
character in a variety of facial expressions and poses, and serves as the model each time the character is
drawn.
● Cinematography

It is the art and craft of photography in which moving objects are recorded on lengths of continuous film,
using various devices, procedures, and techniques. Cinematography can also be described as painting
motion with light.
● Clean ups

They are tracings that are made of rough drawings on which color and shading specifications are marked.
● Depth of field

Depth of field deals with the range of depths over which objects in a frame are in focus. This is easy to
accomplish in live-action photography, but in animation it becomes somewhat tricky. Objects other than the
ones you want the viewer to focus on are rendered blurry, which forces the eye onto the intended focal plane.
Depth of field is very important in computer animation.
● Key setup

It is the combination of the original production cel or cels and the original background to which they belong.
It completes the picture as seen in the film and is generally the rarest and most valuable of any studio's
art.
● Layout

It is the black-and-white rendering, done by a layout person, that determines the basic composition of the
scene.
● Lead animator

This animator is responsible for creating and animating one particular character in the film.
● Maquette

It is a statue based on the model sheet. It allows everyone to see the character in three dimensions.
● Model sheet

It is the drawing of a single character, in a variety of attitudes and expressions, created as a reference guide
for animators.
● Motion blur

Motion blur helps bring the frames together, eliminating the jittery images that can result from
animation.
● Rough animation drawings

They are the original first sketches of a character in action. In computer animation, this is done with
wireframes.

There are three types of roughs:

◦ Key drawings: These are done by the lead animator alone. A general rule is that one key drawing is
done for every five frames of film.

◦ In-betweens: These are the drawings done between the key drawings, completed by an animation
assistant.

◦ Breakdowns: The drawing at the center mark from one key drawing to the next is known as a breakdown.
For example, the scene is a car driving right to left across the screen. The key drawings are the car
entering and exiting the scene and the breakdown is the car at the center spot of the screen.
● Sequence

The created cels are put together to form a sequence.


● Timing Out

This is a key step in animation. It involves setting all the on-screen action to the proper beats, including
music, sound effects, and dialogue.
● Rendering

This is the final step in computer animation. During rendering, the computer takes each pixel that appears on
screen and processes the components. Then, it adds some motion blur before creating the final image.
● Rough sketch

It is the animator’s drawings used in the process of creating the finished image to be transferred to cel.
● Xerography

An electrostatic process adapted for transferring animators' pencil drawings to cels. It was tested in Sleeping
Beauty and first used throughout a feature in 101 Dalmatians. The process was used up to The Little
Mermaid, after which the computer eliminated the need for cels.
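The frame count quoted in the Animation fps entry above can be checked with simple arithmetic: 24 frames × 60 seconds × 120 minutes = 172,800, i.e. roughly 173,000 frames. A one-line helper (illustrative, not from any tool):

```python
FPS = 24  # standard motion-picture projection rate

def total_frames(minutes, fps=FPS):
    """Number of frames projected in a film of the given running time."""
    return minutes * 60 * fps

two_hour_frames = total_frames(120)  # 172,800 frames, roughly 173,000
```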

Quick Test 2
1. Name the three types of roughs.
2. Three-dimensional objects are expressed in 3D environment by their width, height, and _______.


1.7 Summary
● Animation is an illusion of motion. 2D and 3D animation are the most common types of animation.
● The three distinct types of animation are frame-based animation, morphing, and 3D character animation.
● Pre-production, production, and post-production are the three important steps of creating an animated
film.
● The important factors in good timing for an animated film are limited animation, timing slow and fast action,
and good timing itself.

● Compositing is the technique and the art of assembling image parts collected from multiple sources to
make a new single whole. Optical compositing and digital compositing are the two types of compositing.

● In ‘fade in’, the image fades out of a single color. In ‘fade out’, the image fades into a single color. Fade in is
generally associated with the beginning of a scene while fade out is associated with the end of a scene.

● Morphing changes one image into another in a seamless transition. It is a special effect, which is generally
used in motion pictures and animations.
● Rotoscoping is an animation technique that traces a projected image from a film using a rotoscope. A
rotoscope is a device used in animation to trace a projected image from a film to reproduce live-action
movement.

● Stop-motion is an animation technique where static objects are given the illusion of motion. Clay animation
(claymation) is a type of stop-motion.
● Anime is the Japanese abbreviation of the word animation. Outside Japan, the term generally refers to
animation created in Japan.


1.8 Exercise
1. __________ is the technique and the art of assembling image parts collected from multiple sources to make a
new single whole.

2. The electrostatic process adapted for transferring animators’ pencil drawings to cels is called __________
a. Xerography
b. Storyboarding
c. Motion blur
d. Compositing

3. Maquette is a statue based on a model sheet that allows everyone to see a character in three dimensions.
(True/False)

4. ________ helps bring the frames together, eliminating the jittery images that can come from animation.

5. The ________ is the bottom-most element, in front of which the other elements are placed.

6. Anaheim is the Korean abbreviation of the word animation. (True/False)

Quick Test 1
1. True.
2. Post-production.

Quick Test 2
1. Key drawings, in-betweens, and breakdowns.
2. Depth.

Exercise
1. Compositing.
2. Xerography.
3. True.
4. Motion blur.
5. Background plate.

6. False. Anime is the Japanese abbreviation of the word animation.


Session 2: 3D Animation Environment

Learning Outcomes

In this session, you will learn to -
● Describe the different animation perspectives.
● Describe different projections.

Today, it is hard to find any form of entertainment without any type of digital art in it. A good movie (2D or 3D
animation) is an interesting combination of smart skills, sharp artistic observation, and understanding.

2.1 Views
While creating any type of scene, you need to consider different ways by which it can be projected. At the same time,
it is also important to consider how the audience is going to view it. With the help of CGIs (Computer Generated
Images), a particular scene or a single object can be viewed in many ways. For example, a car moving on the
highway will look different from the front, side, and top views. Similarly, along with the car, the surroundings can be
seen from different angles. If the scene requires the movement of an object from all sides, a good study of views
becomes mandatory.

Views and projections are closely associated, as together they create an appropriate look for the scene. The
chances of a good scene getting ruined are high if the views and projections are incorrect, and correct views
are also important for good animation. Let's have a look at the different views and the terminology related
to them.
● Top view

A principal view of an object created by rotating the line of sight by 90 degrees around the horizontal axis
above the front view.
● Front view

A principal view of an object, positioning the object in such a manner that the majority of its features will be
located in the front, right side, and top views.
● Bottom view

It is created by rotating the object by 90 degrees around the horizontal axis below the front view. This view
is not typically included in a standard multiview drawing.
● Bird’s eye view

A detailed view of a scene seen by looking straight down from altitude is known as bird’s eye view. From
this viewpoint, the ground line is below the horizon line of the object. Figure 2.1 shows an image taken with
bird’s eye view.


Figure 2.1: Bird’s eye view


● Direct view

It is a descriptive geometry technique that places the observer at an infinite distance from the object with
the observer’s line of sight perpendicular to the geometry in question. In third-angle projection, a projection
plane is placed between the observer and the object. The geometry is projected onto the projection plane.
This method is also referred to as the natural method.
● Ground’s eye view

It is a perspective viewpoint looking up at the object from ground level. From this viewpoint, the horizon
line is level with the ground line of the object.
● Auxiliary view

The view derived from any image plane other than the frontal, horizontal, or profile planes is the auxiliary
view.
● Central view

The view from which related views are aligned in an orthographic drawing is known as the central view. The
distances and features projected or measured from the central view create adjacent views.

Quick Test 1
1. Apart from views, which other factor can give a proper look to the scene?
2. A brief view of a scene seen from ground level is known as bird’s eye view. (True/False)

2.2 Projections
The word ‘projection’ is used for representing objects and structures graphically on 2D media. The projection
surface is a plane, such as a picture plane or projection plane. A good knowledge of views and projections is an
absolute necessity for any 3D animator, because it is a prime concern while building a 3D scene. When these
scenes are rendered, each image file should show a graphically correct projection.

In 2D animation, sketching layouts and drawing within the given frame area is difficult, and accurate knowledge
of projections is essential. A wrong projection can ruin an entire layout or scene and make the output look
unrealistic. Now, let's focus on the various types of projection. Some of the primary projection methods include
orthographic, oblique, and perspective projections.

2.2.1 Orthographic Projection
A three-dimensional object can be represented in two dimensions by a method known as orthographic projection. It
utilizes multiple views of the object, from points of view (POV) rotated around the object’s center through increments
of 90°. In other words, it is a multiview drawing that shows every feature of an object in its true size and shape. In
a multiview drawing, front, side, and plan views are drawn so that a person looking at the drawing can see all
the object's important sides. The views are obtained by rotating the object about its center through increments of 90°.

Two conventions are used in orthographic projection for positioning the views relative to each other: first-angle
and third-angle. In both, the views are projected onto the planes of a six-sided box that surrounds the object,
but the two conventions differ in the positions of the plan, front, and side views.

Orthographic projections prove useful when a design is almost ready to be implemented. They are also extremely
useful for 2D applications and games. Refer to figure 2.2.

Figure 2.2: Orthographic projections of a 3D image
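Because an orthographic view has no vanishing point, the projection amounts to simply dropping one coordinate of each 3D point. The sketch below is illustrative; the axis-to-view mapping follows the convention used in this session (X = width, Y = height, Z = depth).

```python
def orthographic(point, view):
    """Project a 3D point (x, y, z) onto a 2D view plane by dropping one axis.

    'front' keeps (x, y), 'top' keeps (x, z), 'side' keeps (z, y).
    There is no division by depth, so parallel lines stay parallel and
    object sizes are preserved regardless of distance.
    """
    x, y, z = point
    if view == "front":
        return (x, y)
    if view == "top":
        return (x, z)
    if view == "side":
        return (z, y)
    raise ValueError(f"unknown view: {view}")
```

Note that two points differing only in depth project to the same spot in the front view, which is why a multiview drawing needs several views to describe an object fully.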

2.2.2 Perspective Projection


The type of projection that helps project 3D images on a planar (2D or two-dimensional) surface is called perspective
projection. A paper or painting canvas would be an appropriate example of a planar surface. It is a projection
technique in which some or all of the projectors converge at predefined points. It is also known as perspective view,
perspective drawing, or simply perspective.

Perspective projection approximates actual visual perception; that is, the technique approximately replicates
how humans perceive objects in the real world. A perspective drawing must be constructed in accordance with an
established geometric protocol.

The perspectives on a planar surface contain distortion, similar to the distortion created when portraying the earth’s
surface on a planar map.

This projection technique can be better understood with the help of a real life example. Take four bottles of the
same size and place them one behind the other in a straight line. Maintain a certain distance between each bottle.

Observe the bottles from a distance. Notice that though all the bottles are of the same size, from a distance the
furthermost bottle appears to be smaller, whereas the closest bottle looks comparatively bigger. This projection
appears to be in perspective because an angle is maintained between the eye and the placed objects. But, if these
bottles are observed in parallel projection, all the bottles appear of the same size because this projection does not
have a vanishing point. Refer to figure 2.3, 2.4, and 2.5.
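The bottle experiment can also be imitated numerically. In a simple pinhole-camera model, a point at distance z from the camera projects to an image of height f·h/z; the focal length and bottle size below are arbitrary illustrative values, not from the text.

```python
def projected_height(real_height, distance, focal_length=1.0):
    """Pinhole perspective: image size shrinks in proportion to distance."""
    return focal_length * real_height / distance

bottle_height = 0.3                  # four identical bottles (illustrative size)
for distance in (2.0, 3.0, 4.0, 5.0):
    print(round(projected_height(bottle_height, distance), 3))
# The farthest bottle projects smallest. In a parallel projection the
# projected height would be the same for all four, since there is no
# division by distance (and hence no vanishing point).
```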

Figure 2.3: One point perspective

Figure 2.4: Two point perspective


Figure 2.5: Three point perspective


Note:
Another example of perspective projection is a pair of railway tracks. Although the rails remain a constant
distance apart, in the distance they appear to meet at a vanishing point.

Following are some terms related to perspective projection:


■ Picture Plane
It is an imaginary flat surface usually located between the station point and the object being viewed. The
station point is the location from which the artist wants the observer to experience the artwork. Generally, the
picture plane is a vertical plane perpendicular to the horizontal projection of the line of sight to the object’s
center of interest.
■ Vanishing Point
The point in a perspective drawing where parallel lines appear to come together is called the vanishing point.
It is also referred to as the point in the distance where the two borders of a road appear to meet. The number
and placement of vanishing points determine the perspective technique being used.
A drawing with 1-3 vanishing points is known as a linear perspective.

Curvilinear perspective is a drawing that uses five vanishing points mapped into a circle. Four vanishing points
are placed around in a circle. These points are named as N, W, S, and E. The fifth vanishing point is placed at
the center of the circle.

Reverse perspective is a perspective drawing technique, in which the further an object is placed, the larger it is
drawn. In this technique, the vanishing points are placed outside the painting. This helps create an illusion that
these points are in front of the painting.
■ Perspective
Graphical perspective is based on the rays of light that travel from an object to the viewer’s eye through the
picture plane. In graphic arts, such as drawing, it is an approximate representation of the image perceived by
the eye. This representation is made on a flat surface. The prominent features of perspective are as follows:
● Objects are drawn smaller as their distance from the observer increases.
● Items are distorted when viewed at an angle (spatial foreshortening).

In art, the term ‘foreshortening’ can be used interchangeably with the term perspective. But foreshortening can
occur in other types of non-perspective drawing representations as well. An example of this would be oblique
parallel projection.

2.2.3 Axonometric or Parallel Perspective Projection


Axonometric or parallel perspective projection is the three-dimensional drawing of an object that helps create an
image true to scale. But this drawing could be incorrect in terms of perspective. For example, a building, in which
the floor plan is the basis for visible elevations would help create a diagram true to scale but it would have a skewed
perspective. Vertical lines are projected from the plan at the same scale; the usual angle of projection is 45°. An
isometric projection is a slightly flattened variation of the same. Usually, in an axonometric drawing, a single axis
of space is shown as vertical.

The three types of axonometric projections are isometric projection, dimetric projection, and trimetric projection.
■ Isometric Projection
Foreshortening is the shortening of lines in a drawing so that an illusion of depth is created. In an isometric
projection, the direction of viewing makes three axes of space appear equally foreshortened. Here, the displayed
angles among the three axes of space and the scale of foreshortening are universally known. However, while
creating the final isometric instrument drawing, a full-size scale (without using a foreshortening factor), can be
implemented to good effect as the resultant distortion is difficult to comprehend or perceive.
■ Dimetric Projection
Dimetric projection is a form of axonometric projection in which the direction of viewing is such that two of the
three axes of space appear equally foreshortened. Here, the attendant scale and angles of presentation are
determined in accordance with the angle of viewing. Also, the vertical scale of the third direction is determined
separately. In dimetric projections, approximations are common.
■ Trimetric Projection
In trimetric projections, the direction of viewing is such that all three axes of space appear unequally
foreshortened. The scale along each of the three axes and the angles among them are determined separately
as dictated by the angle of viewing. In trimetric drawings, approximations are common.
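The equal foreshortening of an isometric view can be verified with a short calculation. The axis layout below (the x and z axes drawn 30° above the horizontal, y straight up) is one common drafting convention, assumed here for illustration.

```python
import math

COS30 = math.cos(math.radians(30))
SIN30 = math.sin(math.radians(30))

def isometric(x, y, z):
    """Map a 3D point to 2D paper coordinates in an isometric projection."""
    paper_x = (x - z) * COS30
    paper_y = y + (x + z) * SIN30
    return (paper_x, paper_y)

# Unit steps along each of the three space axes all draw as segments of
# length 1 on paper -- the "full-size scale" isometric drawing mentioned
# above, with no foreshortening factor applied.
for step in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
    print(round(math.hypot(*isometric(*step)), 6))
```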

2.2.4 Oblique Projection


In oblique projections, parallel projectors come out from all points of an imaginary object and strike the projection
plane at an angle other than 90 degrees. This is in contrast to orthographic projectors, which strike the plane of
projection at 90 degrees. In both orthographic and oblique projections, parallel lines in space appear parallel in the
final projected image. Because of its simplicity, oblique projection is used exclusively for pictorial purposes rather
than for formal working drawings.
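An oblique projection is easy to sketch as well. The cabinet variant below draws the front face at true scale and lets depth recede at 45° at half scale; both the 45° angle and the 0.5 factor are conventional choices assumed for illustration.

```python
import math

def oblique(x, y, z, angle_deg=45.0, depth_scale=0.5):
    """Cabinet projection: projectors strike the plane at an oblique angle,
    so the front face is undistorted while depth recedes diagonally."""
    a = math.radians(angle_deg)
    return (x + depth_scale * z * math.cos(a),
            y + depth_scale * z * math.sin(a))

# Any point on the front face (z = 0) keeps its true coordinates:
print(oblique(2.0, 3.0, 0.0))  # (2.0, 3.0)
```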

Quick Test 2
1. In ________ projections, the direction of viewing is such that all three axes of space appear unequally
foreshortened.
2. A picture plane is a flat surface (imaginary) usually located between the station point and the point at which
the object is viewed. (True/False)


2.3 Summary
● With the help of Computer Generated Images (CGIs), a particular scene or a single object can be viewed
in many ways.
● The word ‘projection’ is used to represent objects and structures graphically on a 2D media.
● A three-dimensional object can be represented in two dimensions by a method known as orthographic
projection.
● The type of projection that helps project 3D object images on a planar (2D or two-dimensional) surface is
called perspective projection. It helps estimate actual visual perception.
● The point in a perspective drawing where parallel lines appear to come together is called the vanishing
point. A drawing with 1-3 vanishing points is known as a linear perspective.
● Axonometric or parallel perspective projection is the three-dimensional drawing of an object that helps
create an image true to scale.
● In oblique projections, parallel projectors come out from all points of an imaginary object. They strike the
projection plane at an angle other than 90 degrees.


2.4 Exercise
1. 3D animation is computer-generated imagery, also referred to as CGI. (True/False)

2. What is the point in a perspective drawing where parallel lines appear to come together?
a) Vanishing point
b) Appearing point
c) Axonometric projection
d) Dimetric projection

3. In oblique projections, parallel projectors come out from only one point of an imaginary object. (True/False)

4. A three-dimensional object can be represented in two dimensions by a method known as __________.

Quick Test 1
1. Projections.
2. False. A detailed view of a scene seen by looking
straight down from altitude is known as bird’s eye view.

Quick Test 2
1. Trimetric.
2. True.

Exercise
1. True.

2. Vanishing point.

3. False. In oblique projections, parallel projectors come out from all points of an imaginary object.

4. Orthographic projection.


Session 3: Concepts of Materials and Lights

Learning Outcomes

In this session, you will learn to -
● Discuss and bring out the contrast between maps and mapping.
● Define shading in 3D and discuss its function.
● Discuss various types of shading.
● Describe different types of lights and discuss its features.

Lighting systems play an important role in an animation film project, especially when the characters are 3D. The
animation movies would be incomplete or uninteresting without lights. Therefore, lights should be treated like a
character and productively used.
Texture mapping is a powerful technique for adding realism to a computer-generated scene. In its basic form,
texture mapping lays an image (the texture) onto an object in a scene.

3.1 Mapping Objects


The word ‘map’ originally referred to a bitmap image that is mapped onto the surface of an object. Later, through
common usage, the idea expanded to something as broad as texture: a map is normally anything that imparts a
particular attribute to a 3D object. Along with ‘map’, two related terms, Map Projection and Mapping, are used
very often.
■ Mapping
Mapping is the process by which texture is applied on the surface of an object. For example, if the object is
round or cylindrical, cylindrical mapping is applied, whereas if the surface is a box, cubic mapping is used. In
general, mapping can be done in a spherical, cylindrical, flat, cubic, or UV manner. For the mapping
effect, refer to figure 3.1.

Figure 3.1: Mapping effect

3.1.1 Map Projection


Map projection refers to a mathematical formula for converting points on a sphere to points on a plane. The
projection of the Earth onto a flat map is a prime example. Image maps can be projected in different ways onto
three-dimensional surfaces. The choice of projection method should be based on creative considerations, but
production concerns should also be kept in mind. Some of the most useful projection methods are flat and cubical
projection, cylindrical projection, and spherical projection.

■ Flat Projection
In flat projection, maps are applied to a surface in a flat way. The results in this method are predictable and
the potential for distortion can be kept to a minimum. This can be achieved if the three-dimensional surface is
parallel to the projection plane. Refer to figure 3.2.

Figure 3.2: Brick wall in flat projection


■ Cubical Projection
In cubical projection, the map is repeated on each of the six sides of a cube. This method is particularly useful
with cubes, but only as long as one of the planes of the cube is parallel to the projection plane. This method is
a variation of the flat projection method. Refer to figure 3.3.

Figure 3.3: Cubical projection of a brick map


■ Cylindrical Projection
In cylindrical projection, maps are applied onto surfaces by wrapping the sides of the map around the shape
until the two ends of the map meet behind the object. This method is used to do object mapping of elongated
objects like a carrot or a glass or a bottle. Refer to figure 3.4.

Figure 3.4: Effect of cylindrical mapping on three different surfaces


■ Spherical Projection
In spherical projection, a rectangular map is applied to a surface by wrapping it around the surface. It is
wrapped till the opposite sides meet. It is squeezed at the top and bottom, and stretched till the entire object is
covered. Refer to figure 3.5.

Figure 3.5: Different types of projections applied on same base, cylindrical and spherical
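Each of these projections can be thought of as a formula assigning a 2D map coordinate (u, v) to every surface point. A minimal sketch of the cylindrical and spherical cases follows; coordinate conventions vary between packages, so the ones below are illustrative assumptions.

```python
import math

def cylindrical_uv(x, y, z, height):
    """Wrap the map around the vertical axis of a cylinder of given height."""
    u = (math.atan2(z, x) / (2 * math.pi)) % 1.0  # fraction of the way around
    v = y / height                                 # fraction of the way up
    return (u, v)

def spherical_uv(x, y, z):
    """Wrap the map around a sphere; squeezed together at the poles."""
    r = math.sqrt(x * x + y * y + z * z)
    u = (math.atan2(z, x) / (2 * math.pi)) % 1.0
    v = math.acos(y / r) / math.pi                 # 0 at the top, 1 at the bottom
    return (u, v)

print(cylindrical_uv(1.0, 2.0, 0.0, height=4.0))  # (0.0, 0.5)
```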

3.1.2 Mapping Types


There are various types of maps and they are categorized according to the effect they generate on the texture.
■ Environmental Mapping
Environmental mapping generates a single texture image of a perfectly reflecting sphere in the environment, as
seen by a viewer infinitely far from the sphere. The image consists of a circle representing the hemisphere of the
environment behind the viewer, surrounded by an annulus (doughnut shape) representing the hemisphere in front
of the viewer.
■ Procedural Maps
Procedural maps are surface textures or values created by an algorithm or mathematical formula rather than
a bitmap image. Procedural maps are random and seamless. They do not possess the repetitive structure of
bitmap images that need to be tiled. Refer to figure 3.6.

Figure 3.6: Procedural maps applied on various surfaces

Procedural maps consume very little memory. They offer classic base textures, such as marble and wood,
with which designers are comfortable. Procedural maps are often referred to as shaders. Refer to figure 3.7.


Figure 3.7: Example of procedural map


■ Texture Map
A texture map is a way of controlling the diffuse color of a surface on a pixel-by-pixel basis, rather than
assigning a single overall value. This is commonly done by applying a color bitmap image to the surface. But
color patterns can also be generated by the application itself. This helps you to create procedural textures.
■ Reflection Map
A reflection map consists of a two-dimensional image that is applied to a three-dimensional surface. It is done
with the purpose of making the surface reflective. A surface with a reflection map reflects the image of three-
dimensional models that are placed in front of the surface. Reflection maps help users to add an element of
reality to their materials.

Among material maps, reflection maps are usually the most used. Reflection maps do not require mapping
coordinates. The Reflection Blur setting determines how sharp the reflection will be: the higher the setting, the
softer the reflected image. Reflection maps give the illusion that an object is shiny and reflective without actually
reflecting the scene.
■ Color Map
Color maps are used to calculate the color of light reflected by the three-dimensional surface, on which the color
map has been placed. These maps are mainly used to represent the images and labels found on packages and
containers, such as cardboard cereal boxes. These maps are also known as bitmaps or picture maps because
they often involve photographic images.
■ Bump Map
A bump map uses a bitmap image that gives visual impression of geometric change. But it only gives a visual
impression and does not actually change the structure of an object. It merely changes the interior shading of
an object to conform to the pattern of the bump map.
■ Displacement Map
A displacement map is an image used to shift the positions of points within an object, changing the object’s
shape. Because a displacement map moves actual points, it is difficult to control; this is easily noticed when
polygons are animated in close proximity to other polygons. If displacement maps are used to animate or form an
object’s geometry, enough polygons need to be created to match the detail in the map.
Displacement maps are often used to create three-dimensional terrain that includes mountains and valleys.
■ Transparency Map
A transparency map consists of a monochromatic two-dimensional surface with the purpose of making all
or some of the surface transparent. The transparency map usually resembles the texture map, except that
it consists of differing shades of gray (including black and white). The transparency map determines where
exactly the texture map should be applied. Areas in the map that are black will cause the corresponding region
of the model to become invisible (i.e. transparent). Areas that are white will be opaque and areas that are gray
will have varying degrees of transparency depending on the shade of gray.

The basic idea behind a transparency map is that the rendering program looks at the brightness values of the
pixels in the map to determine whether the surface will be transparent, opaque, or translucent.
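Two of the ideas above are simple enough to express directly: a transparency map reads each pixel's grayscale brightness as an opacity value, and a procedural map computes its pattern from a formula instead of a bitmap. Both snippets are illustrative sketches, not any package's actual API.

```python
def opacity_from_map(gray_value):
    """Transparency-map rule: 0 (black) -> invisible, 255 (white) -> opaque,
    grays in between -> partially transparent."""
    return gray_value / 255.0

def checker(u, v, squares=8):
    """A tiny procedural map: a checker pattern computed from (u, v),
    needing no bitmap and tiling seamlessly."""
    return 1.0 if (int(u * squares) + int(v * squares)) % 2 == 0 else 0.0

print(opacity_from_map(0))      # 0.0 -> that part of the surface vanishes
print(opacity_from_map(255))    # 1.0 -> fully opaque
print(checker(0.05, 0.05))      # 1.0 -> a light square
print(checker(0.05, 0.15))      # 0.0 -> a dark square
```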

3.1.3 Mapping Coordinates


Coordinates in the three-dimensional world act as directional guidelines. There are six directions available in three
pairs:
● Left and right - the horizontal directions.
● Up and down - the vertical directions.
● Forward and backward (or front and behind) - directions that have no general name.

In the abstract 3D space of a 3D computer graphics application, there is no particular meaning to up and down,
left and right, or forward and backward. It is simply an imaginary three-dimensional space defined by the X, Y, and Z axes.
These coordinates are used to indicate the placement, orientation, and scale of an object in the 3D world. Similarly,
the mapping coordinates help indicate the placement, orientation, and scale of a map on the geometry.

When applying materials to an object, the correct map projection is essential; it makes the object look accurate.
These coordinates specify how the map is projected onto the material and whether
it is projected as tiled or mirrored. Mapping coordinates are also known as UV or UVW coordinates. By adjusting
coordinate parameters, you can move a map relative to the surface of the object to which it is applied and achieve
other effects.
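The tiling and mirroring just mentioned amount to simple transformations of a UV coordinate before the map is sampled. A sketch follows; the function names are illustrative, not any package's API.

```python
def tile(u, repeats=2):
    """Tiled mapping: the map repeats 'repeats' times across the surface."""
    return (u * repeats) % 1.0

def mirror(u, repeats=2):
    """Mirrored mapping: the map repeats, flipping every other copy."""
    t = u * repeats
    frac = t % 1.0
    return frac if int(t) % 2 == 0 else 1.0 - frac

print(tile(0.875, 2))    # 0.75 -- second copy, three quarters across
print(mirror(0.875, 2))  # 0.25 -- second copy, flipped left-to-right
```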

3.2 Shaders
Shading means simulating light falling on a scene and the effect it has on the brightness and color of different areas
of objects in the rendering process. The shading value is calculated based on the relationship between the surface
normals and the light sources that reach the surface.

The surface normals are vectors or straight lines with a specific direction. They are located on the
vertices or corners of each polygon of the surface.

The shader defines how any material created in it will handle shininess, opacity, bumps, and so on. Objects in 3D
scenes are most commonly created by fitting together a large number of polygons (usually triangles), edge to edge.
This creates a wireframe that is similar to the object’s surface. There are a number of ways to make objects more
realistic. Texture mapping and shading are examples of the same.

There are different ways to shade a scene, but let’s look at those most commonly used for creating a 3D
animation.
■ Flat Shading
Flat shading means the same color is applied to an object in order to represent the effects of light. All pixels
inside a polygon are given the same shade. Here, a single color is assigned to each polygon in an object. It
results in objects getting a solid appearance as compared to wireframe models. This method is effective only if
the individual polygons that make up the object are extremely small and the graphics card’s color depth is
sufficiently high. Otherwise, the images appear blocky, because the junctions between polygons are still visible.

■ Interpolative Shading
In interpolative shading, the pixels inside a polygon are given a particular shade determined by interpolating
between the polygon’s vertices or edges.
■ Gouraud Shading
Gouraud shading is a method in which the triangle color is obtained by interpolating the vertex colors, located
at each corner of the triangle. It involves calculating the color at each polygon’s vertices (the corners) and
blending these colors into each other across the surface of the polygon. It hides the junctions between polygons
and produces a smooth appearance in curved object surfaces. By utilizing this technique, 3D objects appear
increasingly realistic due to the smooth and curved appearance of the surfaces. This occurs despite the fact
that they consist of many separate polygons.
■ Phong Shading
Phong shading introduces reflective highlights onto a surface. It is done by calculating the brightness of a
surface pixel by linearly interpolating points on a polygon and using the cosine of the viewing angle. The
resultant image is sharp, producing objects that are more realistic than usual, especially when the object is one
on which you are used to seeing sharply defined reflections from light sources. Phong shading
requires much more processing power than Gouraud shading. So, it is more likely to be confined to visualization
systems running on powerful equipment.
■ Metal Shading
Metal shading requires a great deal of real-time calculation power. This method is more complex than Phong
shading and should be reserved for objects that are metallic.
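All of these methods rest on the relationship mentioned at the start of this section: the shade at a point depends on the angle between the surface normal and the light direction. Flat shading evaluates the formula once per polygon; Gouraud evaluates it at each vertex and interpolates across the face. Below is a minimal sketch of the diffuse (Lambert) term only, with no specular highlight.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse(normal, light_dir):
    """Lambert diffuse shading: brightness = max(0, N . L)."""
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# Flat shading: one value for the entire polygon.
flat = diffuse((0, 0, 1), (0, 0, 1))       # surface facing the light -> 1.0

# Gouraud shading: shade each vertex, then blend across the surface.
def lerp(a, b, t):
    return a + (b - a) * t

v0 = diffuse((0, 0, 1), (0, 0, 1))         # 1.0
v1 = diffuse((1, 0, 1), (0, 0, 1))         # about 0.707 (normal tilted 45 deg)
midpoint_shade = lerp(v0, v1, 0.5)         # smoothly interpolated in between
```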

Quick Test 1
1. The cubical projection is useful when one of the planes of the cube is at right angles to the projection plane.
(True/False)
2. Different colors applied to an object in order to represent the effects of light is called flat shading. (True/
False)

3.3 Lighting in 3D Environment


Lighting in computer graphics refers to the placement of lights in a scene to achieve the desired effect. Image
synthesis and animation packages contain different types of lights that can be placed in different locations and
modified by changing their parameters. People creating images or animations often place too little importance on
lighting; however, lighting is a very important part of image synthesis and cannot be ignored. Refer to figure 3.8.


Figure 3.8: Lighting effects

3D applications usually use four different light sources - ambient lights, global lights, point lights, and spotlights.
■ Ambient Light
The natural light in a scene is known as ambient light. When you turn a light bulb on, the light emitted by the
bulb bounces off walls, tables, chairs, etc. As this light bounces, it illuminates other objects around it. Ambient
light prevents objects from being completely black in the shadows. 3D artists should not change global ambient
lighting from the default black unless necessary, because ambient light increases illumination evenly on
everything in the scene, washing out bump maps, shadows, and other effects based on light contrast. Instead,
artists place specific lights in the scene to achieve such effects.

The key features of ambient lighting are:


● Light rays point in all directions, filling an environment quickly and illuminating shapes evenly.
● Ambient lights are used to raise the overall light level.
● Low ambient light creates artificially high-contrast imagery, while high ambient light makes the image look washed out.

● Usually, single ambient light in the scene is appropriate.


■ Directional or Global Lights
Directional lights are parallel rays that light up a scene in one direction. These lights are like the Sun where the
source is far away but the light is directional. This light affects everything in the scene, so everything is
illuminated by it or casts shadows from it. The careful balance of ambient and direct light sources is the key to good
lighting. Direct light is often used for outdoor scenes.

The key features of directional light are:


● Light rays are parallel and aim in one direction.

● Directional lights are used to simulate lights in the distance, like the Sun.
■ Point Lights
Point lights originate from one location and emit light in all directions. An important thing about point lights
is that the render shows the result of light but does not actually render the source. These lights act like light
bulbs.

The key features of point light are:


● Light rays are radiated from a single focal point in all directions.

● Point lights are used to simulate lights, like light bulbs.


■ Spotlights
Spotlights are lights that produce a strong beam of light of controllable width. They give you control over everything
in the scene, from brightness and direction to color and volumetric effects.

The key features of spotlight are:
● Light rays emit radially from a point within a cone.
● The spread angle is varied to widen or narrow the cone.
● The concentration of the spotlight should be varied to focus the spotlight.
■ Lighting in Film and Video
Film stock has a greater dynamic range for light than videotape. Videotape has improved, but film still has the
greater range. This means that early video, and to a lesser extent current video, requires more light for proper
exposure. So, even though the techniques used were similar, early video had a flat, low-contrast look. This look
is still seen in some videos.

The basic lighting for film and video was a three-point system, consisting of a key light, a fill light, and a
backlight. Other extensive lighting systems include eye light, background light, and kicker light.
● Key light

Key lights are the brightest lights in a scene and focus on the most important aspect of the scene.
● Fill light

It is additional light used to brighten shadow areas. These lights are usually much dimmer and are present
to assist in illumination. These lights soften the shadows caused by the key lights. If the fill light is too
intense, then a low contrast, flat image is created.
● Backlight

Backlight is illumination from the rear view of a subject so that light does not directly enter the camera lens.
It helps outline the subject, especially the upper portion, and separates it from the background.
● Eye light

It is a small light that can be focused to reflect in the subject’s eye, giving it a reflective sparkle.
● Background light

This light is used to illuminate the background.


● Kicker light

The kicker light is similar to backlight as it helps separate the subject from the background. It is usually
placed low and behind the subject. This light may be opposite the key light. Different effects can be obtained
by varying the intensity, diffuseness, position, and number of lights.

Highly reflective surfaces, which reflect the light of a real light source, can be used as virtual lights.
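The interplay described above can be reduced to a per-point sum: a constant ambient term plus each light's contribution. Point lights are commonly attenuated by distance; the inverse-square falloff below is one conventional choice assumed for illustration, not something the text prescribes.

```python
import math

def brightness(ambient, light_intensity, light_pos, surface_pos):
    """Constant ambient term plus an inverse-square point-light term."""
    d = math.dist(light_pos, surface_pos)
    return ambient + light_intensity / (d * d)

near = brightness(0.1, 1.0, (0, 2, 0), (0, 1, 0))   # 1 unit from the bulb
far = brightness(0.1, 1.0, (0, 2, 0), (0, 1, 3))    # farther away -> dimmer
# Ambient light alone would give both points the same value (0.1), which
# is why raising it evenly washes out shadows and contrast.
```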

Quick Test 2
1. The natural light in a scene is known as __________.
2. _________ light originates from one location and emits light in all directions.


3.4 Summary
● Mapping is the process by which texture is applied on the surface of an object.
● Map projection can be referred to as a mathematical formula for converting points on a sphere.
● The different types of projection are flat, cubical, cylindrical, and spherical.
● The different types of maps are environmental, procedural, texture, reflection, color, bump, displacement,
and transparency maps.
● Shading is the simulation of light falling on a scene and its effect on the brightness and color of different
areas of the objects in the rendering process. The different methods of shading include flat, interpolative,
gouraud, phong, and metal shading.
● The four different lights used in 3D applications are ambient, global, point, and spotlight.
● The basic lighting for film and video was a three-point system, consisting of a key light, a fill light, and a
backlight. Other extensive lighting systems include eye light, background light, and kicker light.


3.5 Exercise
1. The process by which texture is applied on the surface of an object is called _____.
a. Shading
b. Lighting
c. Mapping
d. Projection

2. _______ are surface textures or values created by an algorithm or mathematical formula rather than a bitmap
image.

3. _________ means simulating light falling on a scene and the effect this has on the brightness and color of
different areas of objects in the rendering process.

4. _______ lights are parallel rays that light up a scene in one direction.

5. _______ light is additional light used to brighten shadow areas.

Quick Test 1
1. False. Cubical projection is useful with cubes as long as one of the planes of the
cube is parallel to the projection plane.
2. False. The same color applied to an object in order to represent the effects of light
is called flat shading.

Quick Test 2
1. Ambient light.
2. Point.

Exercise
1. Mapping.

2. Procedural maps.

3. Shading.

4. Directional.

5. Fill light.


Session 4: Camera Concepts and Rendering


Techniques
Learning Outcomes

In this session, you will learn to -
● Discuss types and different features of cameras.
● Define rendering and its steps.
● Discuss various rendering engines.
● Define and discuss Z-buffer.
● Discuss ray tracing.
● Discuss radiosity.
● Discuss network rendering.

In the previous session, you learnt about the concepts of lights, maps, and shaders, and the terminologies
related to them. After putting them into a scene, you need to output the scene for use in the desired media. Most
of the visual characteristics of a simulated three-dimensional environment are determined through the rendering
process.

In this session, concepts of cameras and process of rendering will be covered. You will also understand rendering
engines, the types of rendering, and when they are used.

4.1 Camera Concepts


The camera helps audience see events unfolding on the screen. The camera should be used in such a way that it
adds to whatever is happening in a given scene. It should not distract the viewer from it with confusing movements
or unnecessary motion.

Real-world scenes are captured directly if the movie has been shot with an actual camera. However, when you are
working with computer graphics, the camera should be treated like a real-life movie camera.

4.1.1 Types of Cameras


Generally, for user convenience, all three-dimensional rendering programs provide a default or standard camera.
This camera is aimed at the center (origin) of the imaginary three-dimensional world. It is usually equipped with an
imaginary lens of medium focal length. The lens represents the scene in front of it using perspective projection. It
projects all objects in the three-dimensional environment onto the image plane. Other views of the default camera
are commonly shown in front, top, and side orthographic projections.

There are other cameras that can be created in addition to the default camera. These additional cameras are
usually established on a target-based vision or free vision through cameras.
● Target camera

A target camera forms a two-part object: the camera is linked to an invisible target object that can be animated
separately to control where the camera is pointing.
● Free camera

A free camera sees what it is pointed at, which can change as objects move in the scene. Since the camera
will be stationary, you can avoid extra clutter in the viewports by using the free camera, which does not
have a target.

When multiple cameras are present in the three-dimensional space, only one camera can be active at a time.
Understanding the types of cameras is essential, but you also need in-depth knowledge of camera terminology,
which helps you manipulate the camera with a variety of options. Let’s begin to understand these terms.

4.1.2 Pyramid of Vision


The pyramid of vision is defined as the portion of the three-dimensional environment seen through the camera. It is
also known as the cone of vision. In cameras, the eye’s cone of vision is replaced by a pyramid, as the view
area is rectangular. Only the objects located inside this pyramid can be viewed by the camera.

4.1.3 Point of View (POV) and Point of Interest (POI)


The point of view is a subjective camera angle that becomes the perspective of a character. You look at the world
through the eyes of the character. In other words, it is the location in the scene where the camera is placed. The
point of interest or the center of interest is the location in space where the camera is focused.
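To make the two terms concrete, the camera's viewing direction is simply the vector from the POV to the POI, normalized to unit length. The following sketch illustrates this in plain Python (the function name is ours for illustration, not taken from any 3D package):

```python
import math

def view_direction(pov, poi):
    """Return the unit vector pointing from the camera position (POV)
    toward the point of interest (POI)."""
    dx, dy, dz = (poi[0] - pov[0], poi[1] - pov[1], poi[2] - pov[2])
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

# A camera at the origin looking at a point 10 units down the -Z axis.
print(view_direction((0, 0, 0), (0, 0, -10)))  # (0.0, 0.0, -1.0)
```

Animating either endpoint, as a target camera does with its target, changes this direction automatically.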

4.1.4 Clipping Planes


A clipping plane defines the portion of a 3D design displayed in a view. The clipping planes are perpendicular to
the line of sight, and polygons on the far side of a clipping plane are discarded. This can dramatically speed up the
rendering of a polygonal scene, since unneeded polygons no longer take up valuable processing power. Refer to
figure 4.1.

Figure 4.1: Far clipping plane


The far clipping plane defines the most distant area that can be seen by the camera, whereas the near clipping
plane represents the visible area closest to the camera. Refer to figure 4.2.

Figure 4.2: Near clipping plane
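The clipping test itself is a simple range check on a point's depth along the line of sight, as this minimal Python sketch (with illustrative names) shows:

```python
def is_within_clip_range(depth, near, far):
    """A point is rendered only if its distance along the line of sight
    lies between the near and far clipping planes."""
    return near <= depth <= far

# With near = 0.1 and far = 1000, geometry at depth 50 is visible,
# while geometry at depth 2000 lies beyond the far plane and is clipped.
print(is_within_clip_range(50, 0.1, 1000))    # True
print(is_within_clip_range(2000, 0.1, 1000))  # False
```

Real renderers apply the same test per polygon or per fragment in camera space.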

Camera Concepts and Rendering Techniques

4.1.5 Field of View (or Vision)


Field of view (FOV) is a traditional photography term, which can be defined as the area that the viewer can see
in the Real time 3D (RT3D) scene. The FOV is usually expressed as a width in degrees and is determined by the
camera's focal length: the shorter the focal length, the wider the FOV. The clipping planes shorten the pyramid of
vision and define the field of view on the image plane. Refer to figure 4.3.

Figure 4.3: The shorter the focal length, the wider the FOV

4.1.6 Focal Length


The focal length of a lens is the distance between the optical center of the lens and the place where it focuses its
image. The focal length of a camera controls the way in which three-dimensional objects are seen by the camera.
It is measured in millimeters. The lens on a digital camera is marked with its focal length, typically a very small
number such as 6 - 15mm. In traditional 35mm photography, the common focal lengths are 28mm, 50mm, 200mm,
and so on. In a virtual camera, the focal length determines the shape of the pyramid of vision that stretches between
the near clipping plane and the far clipping plane. Refer to figure 4.5.

Some common focal length lens sizes are:

< 20mm = Super wide angle

24mm - 35mm = Wide angle

50mm = Normal, the same picture angle as your eye (46 degrees)

80mm - 300mm = Telephoto

> 300mm = Super telephoto

The focal length has a direct effect on perspective.

Figure 4.5: Focal length
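The relationship between focal length and field of view quoted above follows the standard photographic formula FOV = 2 x atan(film size / (2 x focal length)). A small Python sketch, using the 43.3mm diagonal of a 35mm film frame (these are standard photography values, not tied to any particular package):

```python
import math

def fov_degrees(focal_length_mm, film_size_mm=43.3):
    """Field of view in degrees for a given focal length, measured across
    the diagonal of a 35mm film frame (43.3mm)."""
    return math.degrees(2 * math.atan(film_size_mm / (2 * focal_length_mm)))

print(round(fov_degrees(50), 1))   # 46.8 -- the "normal" lens, close to the eye
print(round(fov_degrees(24), 1))   # 84.1 -- wide angle, a much larger FOV
print(round(fov_degrees(200), 1))  # 12.4 -- telephoto, a narrow FOV
```

Halving the focal length roughly doubles the angle of view at the wide end, which is exactly the "shorter focal length, wider FOV" rule stated above.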

4.1.7 Depth of Field
The depth of field is the portion of the scene in front of the camera that appears focused. It is defined by the area
between the near and the far focal planes. These are the zones on either side of the principal focus point that will
remain in focus at a given setting.

3D packages are built around the metaphor of a film studio. The user looks through a camera into a space populated
with objects and lit by lights. Refer to figure 4.6.

Figure 4.6: Depth of field
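The near and far focal planes can be estimated with the standard hyperfocal-distance approximation from photography. This Python sketch uses that textbook formula, assuming a 0.03mm circle of confusion as is common for 35mm film (real 3D packages expose depth of field through their own camera settings):

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Approximate near and far focal planes (the zone that stays in focus)
    for a subject distance, using the hyperfocal-distance approximation."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + subject_mm)
    far = hyperfocal * subject_mm / (hyperfocal - subject_mm)
    return near, far

# A 50mm lens at f/2.8 focused on a subject 5 metres (5000mm) away:
near, far = depth_of_field(50, 2.8, 5000)
print(round(near), round(far))  # 4282 6008 -- roughly 4.3m to 6m stays in focus
```

Stopping down to a larger f-number widens this zone, which is why depth of field is a creative control as much as an optical one.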

4.1.8 Panning and Tilting


You can make almost any move using a virtual camera. However, it is still a good idea to stick to real-world moves.
Panning and tilting are the two basic ways in which a camera can move.

Panning indicates the horizontal movement and rotation of the camera, whereas tilting is the vertical movement.
Panning is used to follow a moving object or character, or to show more than can fit into a single frame, such as
panning across a landscape. It is also used as a transition between one camera position and another.

Panning and tilting are sometimes used to create suspense by delaying the revelation of off-screen happenings.
On the other hand, fast panning/tilting (sometimes called zip panning/tilting) suggests child-like exuberance or
wonderment.
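Mathematically, pan and tilt are just two rotations applied to the camera's viewing direction. A minimal Python sketch (the axis convention here, -Z forward and +Y up, is one common choice; packages differ):

```python
import math

def aim_camera(pan_deg, tilt_deg):
    """Convert pan (horizontal rotation) and tilt (vertical rotation)
    angles into a viewing direction; (0, 0) looks straight down -Z."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.sin(pan) * math.cos(tilt),   # left/right component
            math.sin(tilt),                   # up/down component
            -math.cos(pan) * math.cos(tilt))  # forward component

print([round(v, 3) for v in aim_camera(0, 0)])    # [0.0, 0.0, -1.0]
print([round(v, 3) for v in aim_camera(45, 30)])  # panned right and tilted up
```

Animating `pan_deg` over time produces a pan across the scene; animating `tilt_deg` produces a tilt.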

Quick Test 1
1. The depth of field is the portion of the scene in front of the camera that appears focused. (True/False)
2. The ___________ of a lens is the distance between the optical center of the lens, and the place where it
focuses its image.

4.2 Rendering
Rendering is the art and process of giving the final touch to a 3D scene. In this process, the vector data gets
converted to raster images or animation as per the instructed output settings.

3D rendering is a creative procedure similar to photography or cinematography, which includes lighting, staging
scenes, and producing images. The three-dimensional data depicted could be a complete scene, including geometric
models of different three-dimensional objects, buildings, landscapes, and animated characters. Artists need to
create this scene by modeling and animating before the rendering can be done.

In the rendering process, the camera takes a picture of the objects as seen from the camera’s viewpoint, direction,
and field of view. The rendering is achieved mathematically by tracing lines from the vertices of the polygons in all
objects in the scene back to the viewpoint of the camera. The color of each pixel can be determined only after
establishing which surfaces are visible to the camera and where they project onto the viewing plane. Depending on
the sophistication of the process, each pixel's color combines the object's surface properties (color, shininess, etc.)
with the lighting placed in the scene. An animation is a series of such renderings, each with a slightly changed scene.
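The tracing described above reduces, in its simplest form, to perspective projection: dividing each point's horizontal and vertical position by its distance from the camera. A minimal Python sketch (camera at the origin looking down -Z; this is the underlying mathematics, not any package's actual API):

```python
def project(point, focal=1.0):
    """Project a 3D point onto the camera's image plane using a simple
    perspective projection (camera at the origin, looking down -Z)."""
    x, y, z = point
    return (focal * x / -z, focal * y / -z)

# Two points at the same (x, y) but different depths: the farther one
# lands closer to the center of the image -- perspective foreshortening.
print(project((2, 1, -4)))  # (0.5, 0.25)
print(project((2, 1, -8)))  # (0.25, 0.125)
```

Repeating this for every vertex of every polygon gives the outline of the scene as seen from the camera's viewpoint; shading then fills in the pixels.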

The rendering process consists of six major steps regardless of the computer systems used. These steps need not
be followed in a fixed order. They can be changed as per project requirements.
● Getting models stored on the hard disk or creating models in the program.
● Placing the camera.
● Defining light sources.
● Specifying characteristics of the surfaces of objects, including color, texture, shininess, reflectivity, and
transparency.
● Shading techniques.
● Animation.

3D graphics model the way light interacts with objects and with our eyes. Therefore, objects do not have the type
of color seen in painting and 2D computer graphics. They have surface qualities that reveal themselves under the
particular lighting in a scene. If there is no lighting, the object is rendered black regardless of what color it would
appear in light.

Rendering time is an extremely important consideration in 3D animation. Rendering time is a function that is not
restricted to the power of the computer used. It also includes the number of polygons in the scene, the complexity of
the lighting, and the presence of computational intensive elements, such as transparency and reflective surfaces.

Rendering sometimes takes a long time, even on fast computers. This is because the software is
essentially photographing each pixel of the image, and calculating the color of just one pixel can involve
a great deal of computation, tracing rays of light as they bounce around the 3D scene.

4.2.1 Rendering Engines


There are different engines available to render 3D objects and scenes.
● Game engine

The core software component of a computer video game or other interactive application supported with
real-time graphics is called a game engine. A game engine enables the game to run on multiple platforms
such as game consoles. Game engine can run on different desktop operating systems, such as Linux, Mac
OS X, and Microsoft Windows.

The core functionality provided by a typical game engine includes:

◦ Rendering engine (renderer) for 2D or 3D graphics.

◦ A physics engine or collision detection and response.


◦ Sound, scripting, animation, artificial intelligence, networking, streaming, memory management,
threading, and a scene graph.

Game engines provide platform abstraction that allows the same game to be run on various platforms
including game consoles and personal computers.

A few game engines provide only real-time 3D rendering capabilities instead of the wide range of functionality
required by games. These engines rely on game developers to implement the remaining functionality or
assemble it from other game middleware components. Such engines are generally referred to as
a graphics engine, rendering engine, or 3D engine instead of the broader term game engine.

Modern game or graphics engines generally provide a scene graph, which is an object-oriented representation
of the 3D game world. It simplifies game design, and can be used for more efficient rendering of vast virtual
worlds.
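The scene-graph idea can be sketched with a minimal node class in Python. Real scene graphs store full transformation matrices per node; this illustration uses simple positional offsets, and all the names are hypothetical:

```python
class SceneNode:
    """A minimal scene-graph node: each node carries a local offset, and
    children inherit (accumulate) the positions of their ancestors."""
    def __init__(self, name, offset=(0, 0, 0)):
        self.name, self.offset, self.children = name, offset, []

    def add(self, child):
        self.children.append(child)
        return child

    def world_position(self, parent_pos=(0, 0, 0)):
        # The node's world position is its parent's position plus its offset.
        return tuple(p + o for p, o in zip(parent_pos, self.offset))

# A turret mounted on a tank: moving the tank moves the turret with it.
tank = SceneNode("tank", offset=(10, 0, 5))
turret = tank.add(SceneNode("turret", offset=(0, 2, 0)))
print(turret.world_position(tank.world_position()))  # (10, 2, 5)
```

This parent-child inheritance is what simplifies game design: animating the tank node automatically carries every attached child along with it.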

The rendering process is carried out by rendering engines. Most 3D packages have their own built-in
rendering engines, and in addition, many stand-alone rendering engines are available for external
rendering.
● RenderMan

It is a popular rendering solution for high-end CGI and special effects, well known for good-quality
post-production features such as anti-aliasing, motion blur, and depth of field. It also offers a
programmable and extensible shader language that provides the degree of flexibility and accuracy
necessary for high-volume production. With this package, you can manage large models and massive
scene files.
● RenderMax-lite

It provides the artist with the ability to complete complex, time consuming rendering tasks for a single image
or a full animation project in record-breaking time.
● Aqsis

It is a tools suite for producing photo realistic 3D images.


● Spider

It allows you to easily queue groups of frames to be processed across an entire network.
● 3Delight

It is a fast RenderMan compliant renderer designed to produce photo realistic images for important
production environments.
● Smedge

It is a distributed rendering tool that can control any program capable of rendering from a command line.

You can render a scene using various rendering methods. The most popular methods are Z-buffer,
raytracing, and radiosity.

4.3 Z-buffer
The Z-buffer technique, used in computer graphics programming, determines which objects are behind and in front
of each other in a 3D view. It is taken into consideration from the eye’s perspective.

Z-buffer rendering gets its name from the fact that all objects in the scene are sorted by their Z position, or depth,
in the scene. This depth information is kept in a buffer and made available to the rendering process as the hidden
surface removal calculations are performed. The method performs hidden surface removal one object and one
pixel at a time. Z-buffering involves pixel data comparisons, not polygon comparisons, so it does not require that
objects be polygon-based. As long as a shade and a Z value can be calculated for each projected point,
Z-buffering can be used, no matter how the objects are actually represented (e.g. RGB volumes). Z-buffering uses
a frame buffer (memory) with a color value for each pixel and a Z-buffer, with the same number of entries, in which
a Z value is stored for each pixel.

One of the main advantages of Z-buffering is that you do not need to sort the polygons in the scene, which is very
useful when scene complexity is high. However, a degree of sorting is sometimes used at the object level when
parts of the scene have transparent properties.
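The per-pixel comparison at the heart of Z-buffering can be sketched in a few lines of Python. The sketch assumes the polygons have already been projected into screen-space fragments, each an (x, y) pixel position plus a Z depth and a color:

```python
def zbuffer_render(width, height, fragments, far=float("inf")):
    """Minimal Z-buffer: for every pixel, keep the color of the fragment
    with the smallest Z (closest to the camera); farther fragments lose."""
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, col in fragments:
        if z < depth[y][x]:  # closer than what is stored for this pixel?
            depth[y][x] = z
            color[y][x] = col
    return color

# Two fragments fall on pixel (0, 0): the red one at depth 2 hides the
# blue one at depth 5, regardless of the order they were submitted in.
frame = zbuffer_render(2, 2, [(0, 0, 5, "blue"), (0, 0, 2, "red")])
print(frame[0][0])  # red
```

Notice that no polygon sorting happens anywhere: the depth comparison resolves visibility one pixel at a time, which is exactly the advantage described above.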

4.4 Raytracing
It is a popular rendering technique. As the name suggests, it involves tracing the path taken by rays of light within a 3D
scene. However, raytracing does not start from the scene's light source but from the observer's eye. Raytracing
a complex 3D model may take hours for a computer to render, so it is important to understand the technology behind
this technique to achieve the required results.

Although they can be effective at producing recognizable 3D scenes, Gouraud and Phong shading will never produce
anything that could be described as photo-realistic. To achieve this, it’s necessary to use rendering techniques that
imitate the physics of real-world lighting. The most commonly used of these are called raytracing and radiosity.

Raytracing is especially useful for rendering images that contain reflective surfaces and transparent or translucent
objects, and it is versatile because of the large range of lighting effects it can model. However, images produced
using raytracing alone tend to look artificial, with sharp-edged shadows. Refer to figure 4.7.

Figure 4.7: Raytracing
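The basic geometric query behind raytracing, whether a ray traced from the observer's eye hits an object, can be illustrated with the classic ray-sphere intersection test. This Python sketch only answers hit-or-miss; a full raytracer would also find the nearest hit point and spawn reflection and shadow rays from it:

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Test whether a ray (traced from the observer's eye) intersects a
    sphere, by checking the discriminant of the quadratic |o + t*d - c| = r."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    return b * b - 4 * a * c >= 0  # a real root means the ray hits the sphere

eye, forward = (0, 0, 0), (0, 0, -1)
print(ray_hits_sphere(eye, forward, (0, 0, -10), 2))  # True: sphere dead ahead
print(ray_hits_sphere(eye, forward, (5, 0, -10), 2))  # False: off to the side
```

Performing a test like this for every pixel's ray against every object in the scene is what makes raytracing so computationally expensive.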

4.5 Radiosity
Radiosity and raytracing algorithms are different and yet they are complementary in many ways. Radiosity is an
advanced form of raytracing. Radiosity rendering works by treating every surface as a light source. It calculates
the amount of light hitting each surface while taking the surface texture into consideration and the amount of light
reflected (or radiated) from it. This calculation includes both surfaces that are visible and invisible to the viewer.
Shadow edges become softer and parts of the scene not directly lit by a light source will be made visible.
The main advantages of radiosity are as follows:
● Calculates light intensity on each surface.
● Supports indirect lighting and penumbra shadows.
● Inter-object diffuse illumination possible.
● Extremely realistic looking output.
● The point from which a scene is viewed can be moved without having to redo all the calculations,
provided that the lighting has not been changed. This is possible because the calculation takes into
account surfaces hidden from the viewer.
● Ideal for rendering product visualizations and architectural walkthroughs.

The main disadvantages of radiosity are as follows:


● The amount of initial processing required is much greater than that demanded by raytracing.

● As its use tends to be restricted to high-end systems, often using multiple processors in parallel processing
environments, it turns out to be extremely expensive.
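The "every surface is a light source" idea can be sketched as an iterative solve of the radiosity equation, B_i = E_i + rho_i * sum_j(F_ij * B_j): each surface's brightness is its own emission plus the light it gathers from every other surface. The two patches and form factors below are made up purely for the illustration:

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Iteratively solve B_i = E_i + rho_i * sum_j(F_ij * B_j): every patch
    gathers the light radiated by every other patch, then re-radiates it."""
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(len(b)))
             for i in range(len(b))]
    return b

# One emitting patch (a light source) and one passive wall facing it.
# The wall ends up lit purely by light bounced over from the source.
emission = [1.0, 0.0]
reflectance = [0.0, 0.8]
form_factors = [[0.0, 0.5], [0.5, 0.0]]  # each patch sees half of the other
print([round(v, 2) for v in solve_radiosity(emission, reflectance, form_factors)])
# [1.0, 0.4]
```

The heavy initial processing mentioned above comes from computing the form factors F_ij for every pair of patches in a real scene; once solved, the per-patch brightness values can be reused from any viewpoint.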

4.6 Network Rendering


You can render a single file using computer networks, which offer many ways to increase the speed of rendering.
This process is called network rendering. The final performance depends on the network details, the network
rendering features offered by the software, and the computer hardware.

There are many strategies available for network rendering but the most common are distributed rendering and
remote rendering.

4.6.1 Distributed Rendering


In this method, a rendering job is assigned to different computers on the network. For example, say you have to
render a large number of frames. You can split the total number of frames into batches and assign a batch to
each machine. This strategy requires software that is able to split a rendering job into several sections and then
put the end results together.
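The frame-splitting strategy can be sketched as follows (illustrative Python; a real render-farm manager such as those described earlier also handles machine failures and load balancing):

```python
def split_frames(total_frames, machines):
    """Divide a frame range among render machines as evenly as possible;
    each machine gets a contiguous (start, end) chunk, inclusive."""
    base, extra = divmod(total_frames, machines)
    jobs, start = [], 1
    for m in range(machines):
        count = base + (1 if m < extra else 0)  # spread any remainder
        jobs.append((start, start + count - 1))
        start += count
    return jobs

# 100 frames across 3 machines: each renders its own section, and the
# finished frames are assembled back into one animation afterwards.
print(split_frames(100, 3))  # [(1, 34), (35, 67), (68, 100)]
```

Because each frame renders independently, the speed-up is close to linear in the number of machines, network bandwidth permitting.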

4.6.2 Remote Rendering


The remote rendering feature allows an application to run on a remote server machine, performing the 3D
rendering there while displaying the result on a local machine. This allows users to view and interact with
extremely large data sets from a client machine located almost anywhere. Performance with remote rendering
depends primarily on two factors: rendering speed on the remote server machine and the available network
bandwidth.

Quick Test 2
1. The remote rendering feature allows an application to run on a remote _________ machine.
2. State one advantage of Z-buffering.


4.7 Summary
● In addition to the default camera, two other types of camera are free camera and target camera.

● The point of view is a subjective camera angle that becomes the perspective of a character.

● Field of view is a traditional photography term, which can be defined as the area that the viewer can see in
RT3D scene.

● The focal length of a lens is the distance between the optical center of the lens and the place where it
focuses its image. The focal length of a camera controls the way in which three-dimensional objects are
seen.

● The depth of field is the portion of the scene in front of the camera that appears focused. It is defined by the
area between the near and the far focal planes. These are the zones on either side of the principal focus
point that will remain in focus at a given setting.

● The Z-buffer technique, used in computer graphics programming, determines which objects are behind and
in front of each other in a 3D view.
● Rendering is the art and process of giving the final touch to a 3D scene.

● Raytracing involves tracing the path taken by rays of light within a 3D scene. But, contrary to popular belief,
it doesn’t start working from the scene’s light source but from the observer’s eye.

● Radiosity renderings work by treating every surface as a light source. It calculates the amount of light
hitting each surface while taking the surface texture into consideration and the amount of light reflected (or
radiated) from it.

● The process of rendering a single file using computer networks, which offer many ways to increase the
speed of rendering, is called network rendering.


4.8 Exercise
1. The different types of camera are the default camera, _______ camera, and _______ camera.

2. What is the subjective camera angle that becomes the perspective of a character called?
a. Point of Interest
b. Point of View
c. Radiosity
d. Z-buffer

3. The core software component of a computer video game or other interactive application supported with real-
time graphics is called a __________.

4. RenderMan is a popular rendering solution for high-end CGI and special effects. (True/False)

5. The Z-buffer technique determines which objects are behind and in front of each other in a 3D view. (True/
False)
6. _______ involves tracing the path taken by rays of light within a 3D scene.

7. Radiosity calculates light intensity on each surface. (True/False)

8. The process of rendering a single file using computer networks, offering many ways to increase the speed of
rendering is called _____________.

Quick Test 1
1. True.
2. Focal length.

Quick Test 2
1. Server.
2. One of the main advantages of Z-buffering is that you do not need to sort the
polygons in the scene, which is very useful when scene complexity is high.

Exercise
1. Target, Free.

2. Point of View.

3. Game engine.

4. True.

5. True.

6. Raytracing.

7. True.

8. Network rendering.


Glossary

A
Animator

An artist who draws characters in motion.

Anime

It is the Japanese abbreviation of the word ‘Animation’.

B
Background light

This light is used to illuminate the background.

Backlight

Backlight is illumination from the rear of a subject so that light does not directly enter the camera lens.

C
CGI

A term meaning computer graphics imagery. It has been used by Disney, for example, in the crowd scenes in ‘The
Hunchback of Notre Dame’, to create thousands of individual characters moving simultaneously by computer.

Clean-up

The process of refining the lines of rough animation and adding minor details.

Clipping Planes

It is a plane that defines the portion of a 3D design displayed in a view.

Compositing

Compositing is the technique and the art of assembling the image parts collected from multiple sources to make a
new single whole.

D
Depth of Field

The depth of field is the portion of the scene in front of the camera that appears focused. It is defined by the area
between the near and the far focal planes.

Digital Animation

Digital animation refers to any type of animation saved as an electronic file stored in a memory of a computer, or
other digital file saving device.

Dimension

The height, width, or depth of an object.

E
Editing

Editing is a process where both images and sound are selected and arranged in a particular order to tell the story
of the film in an organized manner.

Eye Light

It is a small light that is used to put a reflective sparkle in the subject’s eye.

F
Fill Light

It is additional light used to brighten shadow areas.

Focal Length

A lens’ angle of view, most commonly indicated as wide-angle, normal or telephoto. Usually compared to a 35mm
camera’s lens.

I
In-betweening

Same as interpolation. In traditional animation, in-betweening is the tedious process of drawing the less important
frames between keyframes, done by a team of in-betweeners; the keyframes themselves are drawn by the animator.
In computer animation, in-betweening is handled entirely by the computer.

Interpolation

Change of state of an object over time between two consecutive keyframes.

K
Key Light

Key Lights are the brightest lights in a scene and focus on the most important aspect of the scene.

Key Set-up

The combination of an original production cel or cels and the original background to which these cels belong, making
the complete picture as seen in the film. Generally the rarest and most valuable of any studio’s art.

Kicker Light
The kicker light is similar to a back light as it helps separate the subject from the background.


L
Layout

The black and white rendering done by a layout person that determines the basic composition of the scene.

Light

An object that emits light. Point lights, Spotlights, Distant lights, and Ambient lights are certain types of light.

M
Mapping

It is the process by which texture is applied on the surface of an object.

Matte Painting

A special effect whereby an object or landscape, for example a castle or an island, is painted on glass and set in
front of the camera so that both the real setting and the painting are filmed as one.

Model Sheet

Drawings of a single character or grouping of characters, in a variety of attitudes and expressions, created as a
reference guide for animators.

Morphing

It is a type of animation that transforms the form of one object into another form smoothly.

P
Persistence of Vision

Persistence of Vision is a theory, according to which, the image that you are seeing remains in your eyes for 1/16th
of a second.

Picture Plane

It is an imaginary flat surface, usually located between the station point and the object being viewed.

Point of View

It is a subjective camera angle that becomes the perspective of a character.

Production Cel

Any cels used in the making of a production. This may include cels that were meant to be used, but were cut from
the film. That is why it is important to find the exact location of the production cel you are considering in the film of its
origin. Some cels are rarer because of their cut. Production cels are an essential part of any animation collection.

Projection

It is the method where objects and structures are represented graphically on 2-D medium.

Pyramid of Vision

The pyramid of vision is defined as the portion of the three-dimensional environment seen through the camera.


R
Radiosity

It is an advanced form of raytracing in which rendering works by treating every surface as a light source.

Raytracing

It involves tracing the path taken by rays of light within a 3D scene.

Rendering

It is the art and process where vector data gets converted to raster images or animation as per the instructed output
settings.

Rotoscoping

It is an animation technique that traces a projected image from a film using a rotoscope. A rotoscope is a device
used in animation to trace a projected image from a film to reproduce live-action movement.

Rough Sketch
The animator’s drawings used in the process of creating the finished image to be transferred to cel.

S
Shading

The technique of giving the illusion of depth to a surface by varying the color of the polygons which make up that
surface.

Shadow

The effect of a light source being blocked from illuminating an object due to another object being in the path between
the light and the object. Shadows can be calculated in Phong shading (Best) or ray tracing.

Stop-Motion

It is an animation technique where static objects are given the ‘illusion of motion’.

Storyboard

A storyboard is a set of drawings outlining the plot and shot sequence for something that needs to be filmed.

Storyboard Drawing

A sketch made for the storyboard, which conveys the original plot and action.

T
Tweening

It is the process where illusion of movement is created by creating in-between frames. The animator draws the
keyframe poses or actions.

V
Vanishing Point

The point in a perspective drawing where parallel lines appear to come together.


X
X-Sheet

The animation film is made up of a large number of frames. The X-sheet is used to keep track of the individual
frames. These sheets can also be used to keep track of the sound recordings.

Z
Z-Buffer

The Z-buffer technique, used in computer graphics programming, determines which objects are ‘behind’ and ‘in
front’ of each other in a 3D view.

