
Animation

Animation refers to movement on the screen of a display device created by
displaying a sequence of still images. Animation is the technique of designing,
drawing, making layouts and preparing photographic series that are
integrated into multimedia and gaming products. Animation combines the
exploitation and management of still images to generate the illusion of movement. A
person who creates animations is called an animator, who uses various computer
technologies to capture pictures and then animate them in the desired
sequence.

Animation includes all the visual changes on the screen of display devices. These
are:

1. Change of shape as shown in fig:

2. Change in size as shown in fig:

3. Change in color as shown in fig:


4. Change in structure as shown in fig:

Application Areas of Animation


1. Education and Training: Animation is used in schools, colleges and training
centers for educational purposes. Flight simulators for aircraft are also animation
based.
2. Entertainment: Animation methods are now commonly used in making motion
pictures, music videos and television shows, etc.
3. Computer Aided Design (CAD): One of the best applications of computer
animation is Computer Aided Design, generally referred to as CAD. One of
the earliest applications of CAD was automobile design, but now almost all types
of design are done using CAD applications, and without animation this work
would not be possible.
4. Advertising: This is one of the significant applications of computer animation. The
most important advantage of an animated advertisement is that it takes very little
space and captures people's attention.
5. Presentation: An animated presentation is the most effective way to present an
idea. It is used to describe financial, statistical, mathematical, scientific and economic
data.

Animation Functions
1. Morphing: Morphing is an animation function that transforms an object's
shape from one form to another. It is one of the most complicated
transformations and is commonly used in movies, cartoons,
advertisements and computer games.
The process of morphing involves three steps:
1. In the first step, the initial image and the final image are added to the morphing
application, as shown in fig: the 1st and 4th objects are considered key frames.
2. The second step involves the selection of key points on both images for a
smooth transition between the two images, as shown in the 2nd object.
3. In the third step, each key point of the first image transforms to a corresponding
key point of the second image, as shown in the 3rd object of the figure.
2. Warping: The warping function (sometimes written "wrapping") is similar to the
morphing function. It distorts only the initial image so that it matches the final
image, and no fade occurs in this function.
3. Tweening: Tweening is short for 'in-betweening': the process
of generating intermediate frames between the initial and final key images. This
function is popular in the film industry.

4. Panning: Usually panning refers to rotation of the camera in a horizontal plane. In
computer graphics, panning relates to the movement of a fixed-size window across
the objects in a scene. The objects appear to move in the direction opposite to the
window's motion, as shown in fig: if the window moves in a backward direction, the
objects appear to move in the forward direction, and if the window moves in a
forward direction, the objects appear to move in a backward direction.
5. Zooming: In zooming, the window is fixed on an object and changes its size, so the
object also appears to change in size. When the window is made smaller about a
fixed center, the objects inside the window appear more enlarged; this
feature is known as zooming in.
When we increase the size of the window about the fixed center, the objects
inside the window appear smaller; this feature is known as zooming out.
6. Fractals: The fractal function is used to generate complex pictures by
iteration. Iteration means repeating a single formula again and again with
slightly different values based on the previous iteration's result. These results are
displayed on the screen in the form of the final picture.
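As an illustration, the classic Mandelbrot rule z = z*z + c is one such repeated formula (the choice of this particular formula is an assumption for the sketch; the text does not name a specific fractal):

```python
# A minimal sketch of fractal generation by iteration, using the
# Mandelbrot rule z = z*z + c as the repeated formula.

def escape_iterations(c, max_iter=50):
    """Repeat z = z*z + c and return how many iterations it takes
    for |z| to exceed 2; points that never escape are 'inside'."""
    z = 0
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# Render a tiny text picture: '#' marks points inside the set.
for im in [0.5, 0.0, -0.5]:
    row = ""
    for re in [-1.5, -1.0, -0.5, 0.0, 0.5]:
        row += "#" if escape_iterations(complex(re, im)) == 50 else "."
    print(row)
```

A real fractal renderer repeats the same loop for every pixel of the screen, mapping the iteration count to a color.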

Common Steps of Designing the Animation Sequence

Common steps of designing the animation sequence are as given:
1) Layout of Storyboard: The storyboard layout is the action outline used to illustrate
the motion sequence as a set of basic events that are to take place. The kind
of animation to be produced determines the storyboard layout. The
storyboard comprises a set of rough sketches or a list of basic ideas for the
motion.
2) Definition of Objects: An object definition is specified for every participating object
in the action. Objects can be defined in terms of basic shapes, associated
movements, or movement together with shape.
3) Specification of Key Frames: A key frame is a detailed drawing of the scene at a
particular time in the animation sequence. Within each key frame, every object is
positioned according to the time for that frame. Some key frames are chosen at the
extreme positions in the action; others are spaced so that the time interval between
key frames is not too great. More key frames are specified for intricate motions
than for simple, slowly varying motions.
4) Generation of In-between Frames: In-betweens are the intermediate frames
between the key frames. The number of in-betweens depends on the medium to be
used to display the animation. Film requires 24 frames per second, and
graphics terminals are refreshed at a rate of 30 to 60 frames per second.
Typically the time intervals for the motion are set up so that there are three to five
in-betweens for each pair of key frames. Depending on the speed specified for the
motion, some key frames can be duplicated.
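The frame-budget arithmetic above can be sketched in code. This is an illustrative calculation only (the one-second duration and five key frames are made-up example values; the 24 fps film rate is from the text):

```python
# Sketch: how many frames, and in-betweens per key-frame pair, a motion
# needs for a given display medium.

def frame_budget(duration_s, fps, num_keyframes):
    """Return (total frames, in-betweens per key-frame pair)."""
    total = round(duration_s * fps)
    pairs = num_keyframes - 1
    inbetweens_per_pair = (total - num_keyframes) // pairs
    return total, inbetweens_per_pair

# A 1-second film shot (24 fps) with 5 key frames:
total, per_pair = frame_budget(1.0, 24, 5)
print(total, per_pair)  # -> 24 4  (four in-betweens per pair, in the 3-5 range)
```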
What is a keyframe?
A keyframe, also written as “key frame,” is something that defines the starting
and/or ending point of any smooth transition. That something can be a drawing in
animation or a particular frame of a shot when dealing with film or video. Any shot,
animated or live-action, is broken down into individual frames. You can think of
keyframes as the most important frames of a shot that set the parameters for the
other frames and indicate the changes that will occur throughout as transitions.

Keyframe Characteristics:
Important individual frames from within a shot
Keyframes exist in animation and live-action
Sets a start/stop point for a transition

The origin of keyframes


These days, the word keyframes is often associated with video editing, but they
originated in animation long before digital video editing.

In traditional animation, each frame is drawn by hand. Because of the heavy
demands and time-consuming nature of animation, those films are typically
made by a number of artists working together. One efficient way to both save
time and ensure quality is to have the lead animators draw the most important
frames and leave the transitional frames between them to the junior animators.

These important frames drawn by the lead animators became known as
keyframes. The transitional frames that connected the various keyframes
together became known as in-betweens.

Even if a single artist is drawing an entire scene, it is still common practice to
begin by drawing the keyframes, then go back and add the in-betweens.

BASICS OF ANIMATION

Traditional and historical methods for production of animation

In computer graphics, all transformations are related to space, not time.
Here lies the basic difference between animation and graphics: animation adds
to graphics the dimension of time, which vastly increases the amount of
information to be transmitted, so special methods are needed to handle this
information. These methods are known as animation methods; Figure 1 gives
a broad description of them.

First method: Here, an artist creates a succession of cartoon frames, which are then
combined into a film.

Second method: Here, physical models are positioned for the image to be
recorded. On completion, the model is moved to the next position for recording,
and the process is repeated. Thus, the historical approach to animation has
classified computer animation into two main categories:

a) Computer-assisted animation usually refers to 2D systems that computerise
the traditional animation process. Here the technique used is interpolation
between key shapes, which is the main algorithmic use of the computer in the
production of this type of animation: curve morphing (key frames,
interpolation, velocity control) and image morphing.

b) Computer-generated animation is animation presented via film or video.
It is again based on the concept of persistence of vision: the eye-brain
assembles a sequence of images and interprets them as continuous movement,
and if the rate of change of pictures is fast enough, it induces the sensation
of continuous motion.

The motion specification for computer-generated animation is further divided
into two categories:

Low-level techniques (motion specific): Techniques used to fully control the
motion of a graphic object in an animation scene. They are referred to as
motion-specific techniques because we specify the motion of each graphic
object in the scene; techniques such as interpolation and approximation are
used in the motion specification. Low-level techniques are used when the
animator has a fairly specific idea of the exact motion that he or she wants.

High-level techniques (motion generalized): Techniques used to describe the
general motion behavior of a graphic object. These techniques are algorithms
or models used to generate motion from a set of rules or constraints. The
animator sets up the rules of the model, or chooses an appropriate algorithm,
and selects initial or boundary values. The system is then set into motion,
and the motion of the objects is controlled by the algorithm or model. These
approaches often rely on fairly sophisticated computation such as vector
algebra and numerical techniques.

It is worth noting that computer animation has been around as long as
computer graphics, and is used to create realistic elements that are intermixed
with live action. The traditional way of animation forms the basis of
computer-generated animation systems and is widely used nowadays by
companies such as Disney, MGM and Warner Bros. to produce realistic 3D
animation using various animation tools. Since various tools are available for
different uses, the basic problem is to select or design animation tools
which are expressive enough for the animator to specify what s/he wants to
specify while at the same time are powerful or automatic enough that the
animator doesn't have to specify the details that s/he is not interested in.
Obviously, there is no single tool that is going to be right for every animator,
for every animation, or even for every scene in a single animation. The
appropriateness of a particular animation tool depends on the effect desired by
the animator: an artistic piece of animation will probably require different tools
than an animation intended to simulate reality. Some examples of animation
tools available in the market are Softimage (Microsoft), Alias/Wavefront (SGI),
3D Studio MAX (Autodesk), LightWave 3D (NewTek), PRISMS (Side Effects
Software), Houdini (Side Effects Software), Apple's toolkit for game
developers, Digimation, etc.

Morphing

Image morphing is a special form of image warping that produces a simple and
smooth transition between two or more images. In simple words, it is like a
transformation of one image into another. This is mostly seen in movies and
animations.

For example: Suppose we want to transform one face into another face. So, the
first step would be to select the corresponding features like eyes, nose, and
mouth in both images. Then, we would create a smooth transition between these
features to create a morphing effect. This is similar to an age filter.
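The simplest building block of such a transition is the cross-dissolve, which fades one image into the other frame by frame. A minimal pure-Python sketch (treating images as lists of grey-level rows; real morphing tools additionally warp the geometry toward the selected key points):

```python
def cross_dissolve(img_a, img_b, t):
    """Blend two same-sized greyscale images; t=0 gives img_a, t=1 gives img_b."""
    return [
        [round((1 - t) * a + t * b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

face_a = [[0, 0], [0, 0]]          # dark 2x2 "image"
face_b = [[200, 200], [200, 200]]  # bright 2x2 "image"
print(cross_dissolve(face_a, face_b, 0.5))  # -> [[100, 100], [100, 100]]
```

Sweeping t from 0 to 1 over successive frames produces the morphing effect described above.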

Pixel

A pixel is the smallest unit of an image. Each pixel is a single point in the image
and has attributes such as color and intensity.

Transformation Matrix

A transformation matrix is used to represent geometric transformations such as
rotation, scaling, and translation. This helps us control the image warping.
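As an illustration, a 2D rotation can be written as a 2x2 matrix applied to each pixel coordinate (a generic sketch of the idea, not code from any particular warping library):

```python
import math

def rotate_point(x, y, angle_deg):
    """Apply the 2D rotation matrix [[cos, -sin], [sin, cos]] to a point."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return (cos_a * x - sin_a * y, sin_a * x + cos_a * y)

# Rotating the point (1, 0) by 90 degrees moves it to (0, 1):
x, y = rotate_point(1.0, 0.0, 90)
print(round(x, 6), round(y, 6))  # -> 0.0 1.0
```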

TWEENING

What is tweening?

Tweening is the process of creating the in-betweens, which are the images that go
between keyframes. Also known as 'in-betweening,' it results in a smooth
transition between two keyframes that depict different points in an action.
Tweening is necessary to convey a sense of fluid movement with still images.
In-betweens are typically considered less imperative than keyframes: lead artists
draw keyframes, while in-betweens are often handled by junior artists or
assistants.
Tweening Characteristics:

The drawings between keyframes

Used to convey smooth motion

Typically made by junior artists or assistants
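In digital animation, a tween is usually just interpolation between key-frame values. A minimal linear-interpolation sketch (the key-frame positions and in-between count are made-up example values):

```python
def tween(start, end, num_inbetweens):
    """Generate the in-between values between two key-frame values."""
    steps = num_inbetweens + 1
    return [start + (end - start) * i / steps for i in range(1, steps)]

# Key frames place an object at x=0 and x=100; generate 4 in-betweens.
print(tween(0, 100, 4))  # -> [20.0, 40.0, 60.0, 80.0]
```

Easing curves replace the linear ramp with a non-linear one to give the motion acceleration and deceleration.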

Unit 6 color model


Basic Illumination Models

An illumination model, also known as a shading model or lighting model,
is used to calculate the intensity of light that is reflected at a given point
on a surface. The lighting effect depends on three factors:

Light Source:
A light source is a light-emitting source. There are three types of light
sources:

Point sources – The source emits rays in all directions (a bulb in a room).

Parallel sources – Can be considered as a point source that is far from
the surface (the sun).

Distributed sources – Rays originate from a finite area (a tube light).

Their position, electromagnetic spectrum and shape determine the
lighting effect.

Surface:
When light falls on a surface, part of it is reflected and part of it is
absorbed. The surface structure decides the amount of reflection
and absorption of light. The position of the surface and the positions of all
the nearby surfaces also determine the lighting effect.
Observer:
The observer's position and sensor spectrum sensitivities also affect the
lighting effect.

1. Ambient Illumination:
Assume you are standing on a road, facing a building with a glass exterior.
Sun rays falling on that building are reflected back from it and then fall
on the object under observation. This is ambient illumination. In simple
words, ambient illumination is illumination whose source of light is indirect.

The reflected intensity Iamb of any point on the surface is:

Iamb = Ka * Ia

where Ia is the intensity of the ambient light and Ka (0 <= Ka <= 1) is the
ambient reflection coefficient of the surface.

2. Diffuse Reflection:
Diffuse reflection occurs on surfaces that are rough or grainy. In
this reflection, the brightness of a point depends upon the angle between
the light source direction and the surface normal.

The reflected intensity Idiff of a point on the surface is:

Idiff = Kd * Il * (N . L)

where Il is the intensity of the light source, Kd is the diffuse reflection
coefficient, N is the unit surface normal and L is the unit vector toward
the light source.


3. Specular Reflection:
When light falls on a shiny or glossy surface, most of it is reflected
back; such reflection is known as specular reflection.

The Phong model is an empirical model for specular reflection which
provides the formula for calculating the reflected intensity Ispec:

Ispec = Ks * Il * (R . V)^n

where Ks is the specular reflection coefficient, R is the ideal reflection
direction, V is the direction toward the viewer, and n is the specular
(shininess) exponent.
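Putting the three terms together gives the usual total intensity I = Iamb + Idiff + Ispec. A small sketch of this sum (all coefficients and vectors below are made-up example values):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def phong_intensity(ka, kd, ks, shininess, ia, il, normal, to_light, to_view):
    """Ambient + diffuse + Phong specular intensity at a surface point."""
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_view)
    nl = dot(n, l)
    # Ideal reflection direction: R = 2(N.L)N - L
    r = tuple(2 * nl * nc - lc for nc, lc in zip(n, l))
    ambient = ka * ia
    diffuse = kd * il * max(nl, 0.0)
    specular = ks * il * max(dot(r, v), 0.0) ** shininess
    return ambient + diffuse + specular

# Light straight above a horizontal surface, viewer also straight above:
i = phong_intensity(0.1, 0.6, 0.3, 10, 1.0, 1.0,
                    normal=(0, 0, 1), to_light=(0, 0, 1), to_view=(0, 0, 1))
print(round(i, 3))  # -> 1.0  (ambient 0.1 + diffuse 0.6 + specular 0.3)
```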
Halftoning Technique:

Newspaper photographs simulate a grey-scale image that
can be printed using only black ink.

A newspaper picture is, in fact, made up of a pattern of tiny
black dots of varying size.

The human visual system has a tendency to average
brightness over small areas, so the black dots and their white
background merge and are perceived as an intermediate
shade of grey.

The process of generating a binary pattern of black and white
dots from an image is termed halftoning.

In traditional newspaper and magazine production, this
process is carried out photographically by projection of a
transparency through a 'halftone screen' onto film.

The screen is a glass plate with a grid etched into it.

Different screens can be used to control the size and shape of
the dots in the halftoned image.

In computer graphics, halftone reproductions are
approximated using rectangular pixel regions, say 2 x 2 pixels
or 3 x 3 pixels.

These regions are called "halftone patterns" or "pixel
patterns".

2 x 2 pixel patterns for creating five intensity levels are
shown in figure 43.
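The idea behind the patterns can be sketched in code: an n x n cell gives n^2 + 1 intensity levels by turning on that many dots. This is a generic illustration (the dot-fill order below is an arbitrary choice, not taken from figure 43):

```python
# Each intensity level 0..4 maps to a 2x2 binary pattern with that many
# black dots (1 = ink). The fill order below is one arbitrary choice.
FILL_ORDER = [(0, 0), (1, 1), (0, 1), (1, 0)]

def halftone_pattern(level):
    """Return a 2x2 pattern for intensity level 0 (white) to 4 (black)."""
    cell = [[0, 0], [0, 0]]
    for r, c in FILL_ORDER[:level]:
        cell[r][c] = 1
    return cell

for level in range(5):
    print(level, halftone_pattern(level))
```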

Dithering technique:

Another technique for digital halftoning is dithering.

It is a technique for approximating halftones without
reducing resolution, as pixel-grid patterns do.

Dithering can be accomplished by thresholding the image
against a dither matrix.

To obtain n^2 intensity levels, it is necessary to set up an n x n
dither matrix Dn whose elements are distinct positive
integers in the range 0 to n^2 - 1.

Matrices for 4 intensity levels and 9 intensity levels are shown
below.

The elements of a dither matrix are thresholds.

The matrix is laid like a tile over the entire image, and each
pixel value is compared with the corresponding threshold
from the matrix.
The pixel becomes white if its value exceeds the threshold, or
black otherwise.

This approach produces an output image with the same
dimensions as the input image, but with less detail visible.

Higher-order dither matrices can be obtained from lower-order
matrices with a recurrence relation.

Algorithm to halftone an image using a dither matrix
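A minimal sketch of this algorithm follows, using the standard 2 x 2 Bayer matrix D2 = [[0, 2], [3, 1]] (the input pixel values are assumed to already be scaled to the range 0..3):

```python
# Ordered dithering: tile the dither matrix over the image and threshold
# each pixel against the matrix element at (row mod n, col mod n).
D2 = [[0, 2],
      [3, 1]]  # standard 2x2 Bayer dither matrix, thresholds 0..n^2-1

def dither(image):
    """Threshold a greyscale image (values 0..3) to a binary image."""
    n = len(D2)
    return [
        [1 if pixel > D2[r % n][c % n] else 0
         for c, pixel in enumerate(row)]
        for r, row in enumerate(image)
    ]

# A flat mid-grey image (value 2) becomes a checker-like dot pattern:
flat = [[2, 2], [2, 2]]
print(dither(flat))  # -> [[1, 0], [0, 1]]
```

Note that the output has the same dimensions as the input, as the text describes: no resolution is sacrificed, only local detail.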

Computer Graphics | The RGB color model

The RGB color model is one of the most widely used color representation methods in
computer graphics. It uses a color coordinate system with three primary colors:
R(red), G(green), B(blue)
Each primary color can take an intensity value ranging from 0 (lowest) to 1 (highest).
Mixing these three primary colors at different intensity levels produces a variety of
colors. The collection of all the colors obtained by such a linear combination of red,
green and blue forms the cube-shaped RGB color space.

The corner of RGB color cube that is at the origin of the coordinate system corresponds
to black, whereas the corner of the cube that is diagonally opposite to the origin
represents white. The diagonal line connecting black and white corresponds to all the
gray colors between black and white, which is also known as the gray axis.
In the RGB color model, an arbitrary color within the cubic color space can be specified
by its color coordinates: (r, g, b).
Example:
(0, 0, 0) for black, (1, 1, 1) for white,
(1, 1, 0) for yellow, (0.7, 0.7, 0.7) for gray
Color specification using the RGB model is an additive process. We begin with black
and add the appropriate primary components to yield a desired color. The RGB
color model is used in display monitors. On the other hand, there is a
complementary color model known as the CMY color model. The CMY color model
uses a subtractive process, and this concept is used in printers.
In the CMY model, we begin with white and take away the appropriate primary
components to yield a desired color.
Example:
If we subtract red from white, what remains consists of green and blue, which is cyan.
The coordinate system of the CMY model uses the three primaries' complementary
colors:
C(cyan), M(magenta) and Y(yellow)

The corner of the CMY color cube that is at (0, 0, 0) corresponds to white, whereas the
corner of the cube that is at (1, 1, 1) represents black. The following formulas summarize
the conversion between the two color models:

C = 1 - R, M = 1 - G, Y = 1 - B

and, conversely,

R = 1 - C, G = 1 - M, B = 1 - Y

YIQ COLOR MODEL


During the early days of color television, black-and-white sets were still
expected to display what were originally color images. The YIQ model separated
chrominance from luminance. Luminance information is contained in the Y
channel, whereas color information is carried in the I and Q channels (in-phase
and in-quadrature); in short, YIQ (Luminance, In-phase, Quadrature). In
addition to providing a signal that could be displayed directly on black-and-
white TVs, the system provided easy coding and decoding of RGB signals,
which was not directly possible.
Because the Y channel carries most of the luminance information, it is assigned
a bandwidth of 4 MHz; the I channel is assigned a bandwidth of 1.5 MHz, and
the Q channel a bandwidth of 0.6 MHz.

The Y component is the grayscale signal used to drive old black-and-white TVs.
The I component goes from orange to blue, and the Q component goes from
purple to green.

To find the I channel and Q channel:

In-phase = Red - Yellow

Quadrature = Blue - Yellow

It is not possible to directly display a YIQ image while developing: the show
function only recognizes RGB colors. If you try to display an image in another
colorspace, the show function will display the wrong colors. To use show, we
have to use conversions.
 From YIQ to RGB conversion:

R = Y + 0.956 I + 0.621 Q
G = Y - 0.272 I - 0.647 Q
B = Y - 1.106 I + 1.703 Q

 From RGB to YIQ conversion:

Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R - 0.275 G - 0.321 B
Q = 0.212 R - 0.523 G + 0.311 B

(These are the standard NTSC conversion matrices; slightly different rounded
values appear in different sources.)
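Python's standard library ships these YIQ conversions in the colorsys module (its coefficients are one of the rounded variants of the NTSC matrix in circulation, so values differ slightly between sources):

```python
import colorsys

# Convert a pure-red RGB pixel (values in 0..1) to YIQ and back.
y, i, q = colorsys.rgb_to_yiq(1.0, 0.0, 0.0)
print(round(y, 3), round(i, 3), round(q, 3))  # -> 0.3 0.599 0.213

r, g, b = colorsys.yiq_to_rgb(y, i, q)
print(round(r, 3), round(g, 3), round(b, 3))  # recovers the original red
```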

Advantages:

 More bandwidth can be assigned to the Y component (luminance) because
the human visual system is more sensitive to changes in luminance than to
changes in hue or saturation.
 For display on a screen, color television sets map the Y, I, and Q channels
to R, G, and B.

Disadvantages:

 Due to the high implementation cost of true I and Q decoding, few
television sets perform it.
 To accommodate the bandwidth difference between I and Q, each of them
needs a distinct filter.
 Because of the different bandwidths, the I filter needs a time delay to match
the Q filter's longer delay.
 On a black-and-white TV, only Y can be noticed.

CMY Color Model

RGB and HSV, two commonly used color models, are discussed elsewhere in
this material. In this section, we introduce the CMY and CMYK color models.
Cyan, magenta and yellow are the secondary colors of light and the
primary colors of pigments. This means, if white light is shined on a
surface coated with cyan pigment, no red light is reflected from it. Cyan
subtracts red light from white light. Unlike the RGB color model, CMY
is subtractive, meaning higher values are associated with darker colors
rather than lighter ones.

Devices that deploy pigments to color paper or other surfaces use the
CMY color model, e.g. printers and copiers. The conversion from RGB
to CMY is a simple operation, as is illustrated in the Python program
below. It is important that all color values be normalized to [0, 1] before
converting.

C=1-R

M=1-G

Y=1-B
Below is the code to convert RGB to CMY color model.

# Formula to convert RGB to CMY.
def rgb_to_cmy(r, g, b):
    # RGB values are divided by 255
    # to bring them between 0 and 1.
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    return (c, m, y)

# Sample RGB values.
r = 0
g = 169
b = 86

# Print the result.
print(rgb_to_cmy(r, g, b))

Output:

(1.0, 0.33725490196078434, 0.6627450980392157)

According to the color wheel shown above, equal amounts of cyan,
magenta, and yellow should produce black. However, in real life,
combining these pigments produces a muddy-colored black. To
produce pure black, which is quite commonly used in printing, we
add a fourth color, black, to the pigment mixture. This is called four-
color printing. The addition of black results in the model being
referred to as the CMYK color model.
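The usual CMY-to-CMYK conversion extracts the black component K as the smallest of C, M and Y (a standard textbook formula; real printer drivers use more elaborate black-generation curves):

```python
def cmy_to_cmyk(c, m, y):
    """Extract the black component K and rescale C, M, Y."""
    k = min(c, m, y)
    if k == 1:  # pure black: avoid division by zero
        return (0.0, 0.0, 0.0, 1.0)
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)

print(cmy_to_cmyk(1.0, 0.6, 0.6))  # -> (1.0, 0.0, 0.0, 0.6)
```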

HSV Color Model

A color model is a multidimensional representation of the color
spectrum. The most relevant color models are RGB, HSV, HSL and
CMYK. A color model can be represented as a 3D volume (e.g.
for RGB) or go into higher dimensions (such as CMYK). By
adjusting the parameters of these models, we can obtain the different
colors that we see in the color spectrum around us.

HSV:

The HSV color model is the color model closest to the way humans
perceive colors. Humans do not perceive colors the way RGB or CMYK
build them; those are just primary colors fused to create the spectrum.
H stands for hue, S stands for saturation, and V stands for value.
Imagine a cone with the color spectrum running around its rim; from the
center to the edge, the color intensity (saturation) increases, and from
bottom to top, the brightness increases, resulting in white at the center
of the top layer. A pictographic representation is shown below.

Hue: Hue is the angle at which to look around the cylindrical disk. The hue
represents the color. The hue value ranges from 0 to 360 degrees.

Angle (in degrees)   Color
0-60                 Red
60-120               Yellow
120-180              Green
180-240              Cyan
240-300              Blue
300-360              Magenta

Saturation: The saturation value tells us how much of the respective
color must be added. 100% saturation means the complete pure color is
added, while 0% saturation means no color is added, resulting in
grayscale.

Value: The value represents the brightness of the color. A value of 0
represents total black darkness, while a value of 100 means full
brightness, with the perceived color depending on the saturation.
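Python's standard colorsys module converts between RGB and HSV (it reports hue in the range 0..1 rather than degrees, so multiply by 360 to match the table above):

```python
import colorsys

# Pure green in RGB (values in 0..1):
h, s, v = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)
print(round(h * 360, 1), s, v)  # -> 120.0 1.0 1.0  (green sits at hue 120)

# And back again:
r, g, b = colorsys.hsv_to_rgb(h, s, v)
print(round(r, 6), round(g, 6), round(b, 6))  # -> 0.0 1.0 0.0
```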

Advantages:

The advantage of HSV is that it generalizes how humans perceive color,
so it is a close depiction of how we perceive colors on the computer
screen. Also, the HSV color space separates the luma from the color
information; this allows us to perform the operations mentioned in the
applications section, since histogram equalization is only required on
the intensity values.

Applications:

The HSV model is used in histogram equalization.

Converting grayscale images to RGB color images.

Visualization of images is easy: by plotting the H and S components we
can vary the V component, or vice versa, and see the different
visualizations.
