
NOORUL ISLAM CENTRE FOR HIGHER EDUCATION, KUMARACOIL

DEPARTMENT OF INFORMATION TECHNOLOGY

GRAPHICS AND MULTIMEDIA (IT 2220)

CLASS: S6 IT

QUESTION BANK

Prepared By,

Y.Jeyasheela

ASP/IT
PART – A

UNIT I

1.Define Computer graphics.


Computer graphics are pictures and films created using computers. The term refers to image data generated by a computer with the help of specialized graphics hardware and software.

2.Mention some commonly used input and output devices.


Some of the commonly used input devices are: keyboard, mouse, trackball, space ball, joystick, digitizers, touch panels, data glove, image scanners and voice systems.
Some of the commonly used output devices are: printers, plotters and video display monitors.
Some of the commonly used output devices are: Printers, plotters, and video display monitors.

3. Differentiate impact and non-impact printers.


An impact printer creates an image on paper by striking the paper with a metal, plastic or rubber impression tool. Examples: typewriter, dot-matrix printer, letterpress or rubber stamp.
A non-impact printer creates an image on paper by spraying ink onto the paper (inkjet), by electrostatic transfer (laser printer or copier), or by offset printing, which transfers the image to paper from a rubber blanket.

4. Categorize the various video display devices.


Cathode Ray tube
Raster scan display
Random scan display

5. Paraphrase the concept of refreshing of the screen.


Some method is needed for maintaining the picture on the screen. Refreshing of the screen is done by keeping the phosphor glowing so as to redraw the picture repeatedly, i.e. by quickly directing the electron beam back over the same points.

6. Illustrate Bresenham's algorithm with end points (20,10) and (30,18). Initial point (x0, y0) = (20,10). Determine the successive pixel positions and the decision parameter pk+1.
Here Dx = 10 and Dy = 8, so p0 = 2Dy - Dx = 16 - 10 = 6.
Since p0 > 0, the next pixel is (21,11), and pk+1 = pk + 2Dy - 2Dx gives p1 = 6 + 16 - 20 = 2.
Continuing in the same way, the successive pixels are (22,12), (23,12), (24,13), (25,14), (26,15), (27,16), (28,16), (29,17) and finally (30,18).

7. Illustrate the polyline() with example.


Polyline – is used to draw a series of connected straight lines. A call polyline(n, wcPoints) draws the n-1 connected line segments joining the n points in wcPoints. To display a single straight line segment, set n=2 and list the x and y values of the two endpoint coordinates in wcPoints.
Example: generate two connected line segments, with endpoints at (50,100), (150,250) and (250,100):
wcPoints.x[1]=50;
wcPoints.y[1]=100;
wcPoints.x[2]=150;
wcPoints.y[2]=250;
wcPoints.x[3]=250;
wcPoints.y[3]=100;
polyline(3, wcPoints);

8. What are the various attributes of a line?


The line type, width and color are the attributes of a line. Line types include solid, dashed and dotted lines.

9.Demonstrate the concept of scan conversion.


A major task of the display processor is digitizing a picture definition given in an application
program into a set of pixel-intensity values for storage in the frame buffer. This digitization process
is known as scan conversion.

10.What is an output primitive?


Graphics programming packages provide functions to describe a scene in terms of basic geometric structures, referred to as output primitives.

11.Outline the concept of Aspect ratio.

The ratio of vertical points to horizontal points necessary to produce equal-length lines in both directions on the screen is called the aspect ratio. Usually the aspect ratio is 3/4.

12.What is a point in the computer graphics system?


A point is the most basic graphical element and is completely defined by a pair of user coordinates (x, y).

13. Summarize the features of Inkjet printers.


• They can print 2 to 4 pages per minute.
• Resolution is about 360 d.p.i., so better print quality is achieved.
• The operating cost is very low; the only part that requires replacement is the ink cartridge.
• Four colors are available: cyan, yellow, magenta and black.

14.Categorize the basic fill styles in area fill attributes.


• Hollow with a color border
• Filled with solid color
• Filled with a specified pattern

15.Define ellipse.
An ellipse is defined as the set of points p(x, y) for which the sum of the distances from two fixed points (the foci) is constant.

UNIT II

1.What is translation?
Translation is the process of changing the position of an object along a straight-line path from one coordinate location to another. Every point (x, y) in the object undergoes a displacement to (x', y'). The transformation is:
x' = x + tx
y' = y + ty
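As a minimal C sketch of these equations (the Point2D type and function name are illustrative, not from the source):

typedef struct { float x, y; } Point2D;

/* Displace a point by (tx, ty) along a straight-line path. */
Point2D translate(Point2D p, float tx, float ty)
{
    Point2D q;
    q.x = p.x + tx;   /* x' = x + tx */
    q.y = p.y + ty;   /* y' = y + ty */
    return q;
}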

2.Differentiate between window and view port.


A world-coordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport. The window defines what is to be viewed; the viewport defines where it is to be displayed.

3.Compare the concept of uniform scaling with differential scaling.


When the scaling factors sx and sy are assigned the same value, a uniform scaling is produced that maintains relative object proportions. Unequal values for sx and sy result in a differential scaling that is often used in design applications.

4.Categorize the various types of clipping.


Point clipping, line clipping, area clipping, text clipping and curve clipping.

5. Define clipping.
Clipping is the method of cutting a graphics display to neatly fit a predefined graphics region or the viewport.

6.Categorize the various types of Text clipping.


• All-or-none string clipping – if the entire string is inside the clip window, keep it; otherwise discard it.
• All-or-none character clipping – discard only those characters that are not completely inside the window. Any character that either overlaps or is outside a window boundary is clipped.
• Individual character clipping – if an individual character overlaps a clip window boundary, clip off the parts of the character that are outside the window.

7.Illustrate with example, clipping a point


Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for display if the following inequalities are satisfied:
xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax
where (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate window boundaries or the viewport boundaries. If any one of these inequalities is not satisfied, the point is clipped (not saved for display).
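A minimal C sketch of this test (the function name and parameter order are illustrative):

/* Returns 1 if the point (x, y) lies inside the clip window, 0 otherwise. */
int clipPoint(float x, float y,
              float xwmin, float xwmax, float ywmin, float ywmax)
{
    return xwmin <= x && x <= xwmax &&
           ywmin <= y && y <= ywmax;
}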

8.Interpret the meaning of fixed point scaling.


The location of a scaled object can be controlled by a position called the fixed point that is to
remain unchanged after the scaling transformation.

9.What is the need of homogeneous coordinates?


To perform more than one transformation at a time, homogeneous coordinates (matrices) are used. They reduce unwanted calculations and intermediate steps, save time and memory, and allow a sequence of transformations to be composed into a single matrix.
10.Write the transformation matrix for shearing relative to x-axis
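Assuming the standard 3 x 3 homogeneous form, an x-direction shear with shear parameter shx is:

| 1  shx  0 |
| 0   1   0 |
| 0   0   1 |

so that x' = x + shx · y and y' = y.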

11.Write the transformation matrix to translate 2 units right.
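Again assuming the standard homogeneous form, a translation with tx = 2 and ty = 0 is:

| 1  0  2 |
| 0  1  0 |
| 0  0  1 |

which maps (x, y, 1) to (x + 2, y, 1).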

12.What is scaling?
The scaling transformation changes the size of an object and can be carried out by multiplying each vertex (x, y) by the scaling factors Sx and Sy, where Sx is the scaling factor in x and Sy is the scaling factor in y.

13.Paraphrase the concept of covering (exterior clipping).


This is just the opposite of clipping: it removes the lines lying inside the window and displays the remainder. Covering is mainly used to place labels on complex pictures.

14.Compare All-or-none string clipping with All-or-none character clipping.


All-or-none string clipping – if the entire string is inside the clip window, keep it; otherwise discard it.
All-or-none character clipping – discard only those characters that are not completely inside the window. Any character that either overlaps or is outside a window boundary is clipped.

15.Define viewing transformation


The mapping of a part of a world-coordinate scene to device coordinates is referred to as the viewing transformation.

UNIT III

1. Categorize the various representation schemes used in three dimensional objects.


Boundary representation (B-rep) – describes a three-dimensional object as a set of surfaces that separate the object interior from the environment.
Space-partitioning representation – describes interior properties by partitioning the spatial region containing an object into a set of small, non-overlapping, contiguous solids.

2. What is a Polygon mesh?


A polygon mesh is a method of representing polygon surfaces: when the object surfaces are tiled, it is more convenient to specify the surface facets with a mesh function. The common meshes are:
Triangle strip – n vertices generate (n-2) connected triangles.
Quadrilateral mesh – an n x m array of vertices generates (n-1)(m-1) quadrilaterals.

3. Illustrate XYZ color model


The set of CIE primaries is generally referred to as the XYZ, or (X, Y, Z), color model, where X, Y and Z represent vectors in a three-dimensional, additive color space. Any color C is expressed as
C = X·X + Y·Y + Z·Z
where the first factor of each term is the amount of the primary and the second is the primary vector.
4.Differentiate between interpolation spline and approximation spline.
When the spline curve passes through all the control points, it is called an interpolation spline. When the curve does not pass through all the control points, it is called an approximation spline.

5.List out a few advantages of rendering polygons by scan line method?


i. The maximum and minimum values of the scan lines are easily found.
ii. The intersection of scan lines with edges is easily calculated by a simple incremental method.
iii. The depth of the polygon at each pixel is easily calculated by an incremental method.

6.Compare parallel projection with perspective projection.


Parallel projection is one in which the z coordinate is discarded and parallel lines from each vertex on the object are extended until they intersect the view plane.
Perspective projection is one in which the lines of projection are not parallel; instead, they all converge at a single point called the center of projection.

7.What is animation?
Computer animation refers to any time sequence of visual changes in a scene. In addition to
changing object position with translations or rotations, a computer generated animation could
display time variations in object size, color, transparency, or surface texture.

8.Paraphrase the term chromaticity


The term chromaticity is used to refer collectively to the two properties describing color characteristics: purity and dominant frequency.

9.Write down the steps involved in 3D transformation.


• Modeling Transformation
• Viewing Transformation
• Projection Transformation
• Workstation Transformation

10.Define Illumination model.


An illumination model, also called a lighting model and sometimes referred to as a shading
model, is used to calculate the intensity of light that we should see at a given point on the surface
of an object.
11.Summarize surface rendering algorithm
Surface rendering algorithm uses the intensity calculations from an illumination model to
determine the light intensity for all projected pixel positions for the various surfaces in a scene.
Surface rendering can be performed by applying the illumination model to every visible surface
point, or the rendering can be accomplished by interpolating intensities across the surfaces from a
small set of illumination model calculations.
12. State the properties of light
• Light is a narrow frequency band within the electromagnetic spectrum.
• Other frequency bands within this spectrum are called radio waves, microwaves, infrared waves and X-rays.
• Each frequency value within the visible band corresponds to a distinct color.

13.Interpret the term diffuse reflection


Surfaces that are rough or grainy tend to scatter the reflected light in all directions. This scattered light is called diffuse reflection.

14.Differentiate between CMY and HSV color models


The HSV (Hue, Saturation, Value) model is a color model which uses color descriptions that have a more intuitive appeal to a user. To give a color specification, a user selects a spectral color and the amounts of white and black that are to be added to obtain different shades, tints and tones.
A color model defined with the primary colors cyan, magenta, and yellow is useful for
describing color output to hard-copy devices.

15.Define color gamut.


Color models that are used to describe combinations of light in terms of dominant frequency use three primary colors; the range of colors that can be obtained from them is called the color gamut.

UNIT IV
1.List out the data elements of Multimedia systems
• Facsimile
• Document Images
• Photographic Images
• Geographic Information System Maps (GIS)
• Voice Commands and Voice Synthesis
• Audio Messages
• Video Messages
• Full-motion stored and live video
• Holographic images
• Fractals

2. Summarize the data element Facsimile


Facsimile transmissions were the first practical means of transmitting document images over a telephone line. The basic technology, now widely used, has evolved to allow higher scanning density for better-quality fax.

3.Categorize the various types of compression.


Lossless compression
Lossy compression

4.Illustrate Run length encoding with example.


Run-length encoding is the simplest and earliest of the data compression schemes developed. It is so simple that it has no need for a standard. It has primarily been used to compress black and white images, and it has formed the basis for other types of compression schemes.
For example, the string
0000000000000000000001111111000000000000000001111
consists of 21 consecutive zeros (hex 15), 7 consecutive ones (hex 07), 17 consecutive zeros (hex 11) and 4 consecutive ones (hex 04). Encoded as (count, value) byte pairs it becomes
0x15,0x00, 0x07,0x01, 0x11,0x00, 0x04,0x01
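A minimal C sketch of this count-value pairing (the function name is illustrative; counts are capped at 255 so each fits in one byte):

#include <stdio.h>

/* Emit (count, value) byte pairs for each run in the input. */
void rleEncode(const unsigned char *data, int n)
{
    int i = 0;
    while (i < n) {
        unsigned char value = data[i];
        int count = 0;
        while (i < n && data[i] == value && count < 255) {
            count++;
            i++;
        }
        printf("0x%02X,0x%02X ", (unsigned)count, (unsigned)value);
    }
    printf("\n");
}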

5. Mention a few advantages of the CCITT Group 3 2D scheme


The following are the key advantages of the CCITT Group 3 2D scheme:
• The implementation of the K factor allows error-free transmission.
• It is a worldwide facsimile standard, also accepted for document-imaging applications.
• Due to its two-dimensional nature, the compression ratios achieved with this scheme are better than those of CCITT Group 3 1D.

6. Write the data format for CCITT Group 4 2D compression


Data Line 1 | Data Line 2 | Data Line 3 | ... | Data Line n-1 | Data Line n | EOL | EOL | PAD bits

7. Differentiate between the color characteristics hue and saturation


Hue: This is the color sensation produced in an observer due to the presence of certain wavelengths
of color. Each wavelength represents a different hue. For example, the eye can discriminate
between red and blue colors of equal intensity because of the difference in hue.
Saturation: This is a measure of color intensity, for example, the difference between red and pink. Although two colors may have the same predominant wavelength, one may have more white mixed in with it and hence appear less saturated.

8. Define Multimedia.
Multimedia is defined as a computer-based interactive communication process that incorporates text, numeric data, record-based data, graphic art, video and audio elements, animation, etc. It is used to describe sophisticated systems that support moving images and audio, e.g. a multimedia personal computer.

9. State the resolution of Facsimile, Document Images and Photographic Images.


• Facsimile-100 to 200 dpi
• Document images – 300 dpi (dots/pixels per inch)
• Photographic images – 600 dpi

10. Categorize the two technologies used for storage and display of GIS systems.
• Raster storage of the raster image (the raster image has a basic color map)
• Vector overlay and text display

11. What are the applications of Photographic Images?


Photographic images are used in imaging systems for identification, such as security badges, fingerprint cards, photo identification systems, bank signature cards and patient medical histories.

12. Paraphrase the term Holography.


Holography is defined as a means of creating a unique photographic image without the use of a lens.
13.State the applications of Document Imaging.
Document Imaging is used in organizations such as
• Insurance agencies
• Law offices
• County and state governments
• Federal Government
• Department of Defence (DOD)

14. What is a Binary Image?


Binary images contain black and white pixels and are generated when a document is scanned in binary mode.

15. Compare Busy Image with Continuous-tone Images.


In a busy image, adjacent pixels or groups of adjacent pixels change rapidly. Grayscale or color images are known as continuous-tone images.

UNIT V
1. What is Hypermedia?
The linking of media for easy access is called hypermedia. The media may be of any type, such as text, audio and video. A hypermedia document contains text and any other sub-objects such as images, sound and full-motion video.

2. Paraphrase the term Hypertext.


The linking of associated data for easy access is called Hypertext. It is an application of
indexing text to provide a rapid search of specific text strings in one or more documents. It is an
integral component of Hypermedia. A hypermedia document is the basic object, and text is a sub-object.

3. Categorize the various types of Multimedia System Architecture.


• Multimedia workstation architecture
• The IMA architectural framework
• Network architecture for multimedia systems

4. Summarize the causes that lead to Network Congestion.


• Increasing number of users accessing the network
• Increasing computing power of desktop systems, workstations and PCs
• Business needs for complex networks
• Increased traffic loads
• Use of client/server architectures
• Graphics-intensive applications
• Voice and voice-based multimedia applications

5. What are the evolving technologies of Multimedia?


Hypermedia documents, hypertext, hyperspeech, HDTV and UDTV, 3D technologies and holography, fuzzy logic, and digital signal processing.

6.Compare the feature of Visible Images with Non-visible Images


Visible images include drawings (such as blueprints, engineering drawings, space maps for offices, town layouts), paintings, photographs, documents and still frames.
Non-visible images are not stored as images but are displayed as images. Examples include pressure gauges, temperature gauges and other metering displays.

7.Write the steps needed for good hypermedia design


• Determining the type of hypermedia application
• Structuring the information
• Determining the navigation throughout the application
• Methodologies for accessing the information
• Designing the user interface

8.Summarize structuring the information


The goal of information structuring is to identify the information objects and to develop an
information model to define the relationships among these objects. A good information structure
consists of the following modeling primitives:
• Object type and object hierarchies
• Object representations
• Object connections
• Derived connections and representations

9.Classify the different kind of user interface development tools


• Media editors
• An authoring application
• Hypermedia object creation
• Multimedia object locator and browser

10.Define mobile messaging


Mobile messaging represents a major new dimension in the user's interaction with the messaging system. Remote access by mobile users with personal digital assistants and notebook computers, made possible by wireless communications developments supporting wide-ranging access through wireless modems and cellular telephone links, has significantly influenced messaging paradigms.

11.Outline the steps required for creating hypermedia report


Planning
Creating each document
Integrating components

12.Paraphrase the term document store


A document store is essential for applications that require storage of large volumes of documents.
For example, applications such as electronic mail, information repositories and hypertext require
storage of large volumes of documents in document databases.

13.What are the objects of Multimedia?


Text
Images
Audio and Voice
Full-motion and Live video

14.What is Data Duplication?


Data Duplication is the process of duplicating an object that the user can manipulate. It does not
require any synchronization of the duplicate object with the master object.

15.What is Negative or Reverse Compression?


If run-length encoding produces more bytes than the original image, i.e. if the number of bytes increases during compression, the result is called negative (or reverse) compression.

PART – B

UNIT I
1. Derive the decision parameter and write the procedure for Bresenham’s Line
drawing algorithm. (16)

• Derivation (10)
• Procedure (6)
procedure lineBres(xa, ya, xb, yb: integer);
var
  dx, dy, x, y, xEnd, p: integer;
begin
  dx := abs(xa - xb);
  dy := abs(ya - yb);
  p := 2 * dy - dx;
  { determine which point to use as start, and which as end }
  if xa > xb then
  begin
    x := xb;
    y := yb;
    xEnd := xa
  end
  else
  begin
    x := xa;
    y := ya;
    xEnd := xb
  end;
  setPixel(x, y, 1);
  while x < xEnd do
  begin
    x := x + 1;  { step one unit in x }
    if p < 0 then
      p := p + 2 * dy
    else
    begin
      y := y + 1;
      p := p + 2 * (dy - dx)
    end;
    setPixel(x, y, 1)
  end
end;

2. i)Define circle. (02)


ii)Derive the decision parameter and write the procedure for Midpoint circle
algorithm. (14)
i) A circle is defined as the set of points that are all at a given distance r from a centre position (xc, yc). (02)
• Derivation (08)
• Procedure (06)
• Enter the centre coordinates (xc, yc).
• Enter the radius r.
• Set p0 = 1 - r and
x = 0
y = r
• Introduce the function plotCircle() with arguments xc, yc, x, y; it plots the symmetric points of the circle.
• Loop while x < y, setting x = x + 1 on each step:
If pk < 0,
pk+1 = pk + 2xk+1 + 1
If pk ≥ 0,
yk+1 = yk - 1
pk+1 = pk + 2xk+1 + 1 - 2yk+1
• Plot the circle
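A minimal C sketch of the procedure (setPixel is assumed to be supplied by the graphics package; plotCirclePoints exploits the eight-way symmetry of the circle):

void setPixel(int x, int y);   /* assumed drawing primitive, not defined here */

/* Plot the eight symmetric points of (x, y) about the centre (xc, yc). */
static void plotCirclePoints(int xc, int yc, int x, int y)
{
    setPixel(xc + x, yc + y);  setPixel(xc - x, yc + y);
    setPixel(xc + x, yc - y);  setPixel(xc - x, yc - y);
    setPixel(xc + y, yc + x);  setPixel(xc - y, yc + x);
    setPixel(xc + y, yc - x);  setPixel(xc - y, yc - x);
}

void midpointCircle(int xc, int yc, int r)
{
    int x = 0, y = r;
    int p = 1 - r;                   /* initial decision parameter p0 */
    plotCirclePoints(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;          /* midpoint inside: keep y */
        else {
            y--;
            p += 2 * x + 1 - 2 * y;  /* midpoint outside: decrement y */
        }
        plotCirclePoints(xc, yc, x, y);
    }
}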

3.Explain the various video display devices in detail. (16)

Computer graphics is an art of drawing pictures on computer screens with the help of
programming. It involves computations, creation, and manipulation of data. In other words, we
can say that computer graphics is a rendering tool for the generation and manipulation of images.
Cathode Ray Tube (06)
The primary output device in a graphical system is the video monitor. The main element of a video monitor is the cathode ray tube (CRT), shown in the following illustration.
The operation of a CRT is very simple:
The electron gun emits a beam of electrons (cathode rays). The electron beam passes through focusing and deflection systems that direct it towards specified positions on the phosphor-coated screen.
When the beam hits the screen, the phosphor emits a small spot of light at each position contacted by the electron beam.
The picture is redrawn by directing the electron beam back over the same screen points quickly.

There are two ways, random scan and raster scan, by which we can display an object on the screen.
Raster Scan (06)
In a raster-scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots.
Picture definition is stored in a memory area called the refresh buffer or frame buffer. This memory area holds the set of intensity values for all the screen points. Stored intensity values are then retrieved from the refresh buffer and "painted" on the screen one row (scan line) at a time, as shown in the following illustration.
Each screen point is referred to as a pixel (picture element, or pel). At the end of each scan line, the electron beam returns to the left side of the screen to begin displaying the next scan line.
Random Scan (04)

In this technique, the electron beam is directed only to the part of the screen where the picture is
to be drawn rather than scanning from left to right and top to bottom as in raster scan. It is also
called vector display, stroke-writing display, or calligraphic display.
Picture definition is stored as a set of line-drawing commands in an area of memory referred to
as the refresh display file. To display a specified picture, the system cycles through the set of
commands in the display file, drawing each component line in turn. After all the line-drawing
commands are processed, the system cycles back to the first line command in the list.
Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times
each second.

4.i)List out a few applications of Computer Graphics. (02)


ii)Discuss the applications of Graphics in detail. (14)
i) Application of Computer Graphics (02)
Computer Graphics has numerous applications, some of which are listed below −
Computer graphics user interfaces (GUIs)
Business presentation graphics
Cartography
Weather maps
Satellite imaging
Photo enhancement
Medical imaging
Engineering drawings
Typography
Architecture
Art
Training
Entertainment
Simulation and modeling

ii)Application of Computer Graphics (14)


Computer Graphics has numerous applications, some of which are listed below −
Computer graphics user interfaces (GUIs) − A graphic, mouse-oriented paradigm which allows the user to interact with a computer.
Business presentation graphics − "A picture is worth a thousand words".
Cartography − Drawing maps.
Weather Maps − Real-time mapping, symbolic representations.
Satellite Imaging − Geodesic images
Photo Enhancement − Sharpening blurred photos.
Medical imaging − MRIs, CAT scans, etc. - Non-invasive internal examination.
Engineering drawings − mechanical, electrical, civil, etc. - Replacing the blueprints of
the past.
Typography − The use of character images in publishing - replacing the hard type of the
past.
Architecture − Construction plans, exterior sketches - replacing the blueprints and hand
drawings of the past.
Art − Computers provide a new medium for artists.
Training − Flight simulators, computer aided instruction, etc.
Entertainment − Movies and games.
Simulation and modeling − Replacing physical modeling and enactments

5.i)Define line. (02)


ii)Write the procedure for drawing a line using DDA Line drawing algorithm in
detail. (14)
i)Line: A line connects two points. It is a basic element in graphics. To draw a line, two
endpoints are needed. (02)
ii)Procedure: (14)
• The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either Dy or Dx increments.
• The line is sampled at unit intervals in one coordinate, and the corresponding integer value nearest the line is determined for the other coordinate.
• Consider first a line with positive slope.
Step : 1
If the slope is less than or equal to 1, sample at unit x intervals (Dx = 1) and compute each successive y value.
Dx=1

m = Dy / Dx
m = ( y2-y1 ) / 1
m = ( yk+1 – yk ) /1

yk+1 = yk + m
• The subscript k takes integer values starting from 1 for the first point and increases by 1 until the final endpoint is reached.
• m can be any real number between 0 and 1.
• The calculated y values must be rounded to the nearest integer.
Step : 2
If the slope is greater than 1, reverse the roles of x and y: sample at unit y intervals (Dy = 1) and compute each successive x value.
Dy=1

m= Dy / Dx
m= 1/ ( x2-x1 )
m = 1 / ( xk+1 – xk )

xk+1 = xk + ( 1 / m )

• Equations 6 and 7 assume that lines are to be processed from the left endpoint to the right endpoint.
Step : 3
If the processing is reversed so that the line starts at the right endpoint (slope ≤ 1), then
Dx=-1
m= Dy / Dx
m = ( y2 – y1 ) / -1
yk+1 = yk - m

Step : 4
If the slope is greater than 1 and the processing is reversed, sample at unit y intervals (Dy = -1) and compute each successive x value:
m = Dy / Dx
m = -1 / ( x2 - x1 )
m = -1 / ( xk+1 - xk )
xk+1 = xk - ( 1 / m )
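A minimal C sketch covering all slopes and directions (setPixel is again an assumed drawing primitive):

#include <stdlib.h>

void setPixel(int x, int y);   /* assumed drawing primitive, not defined here */

void lineDDA(int xa, int ya, int xb, int yb)
{
    int dx = xb - xa, dy = yb - ya;
    /* Number of unit steps: the larger of |Dx| and |Dy|. */
    int steps = (abs(dx) > abs(dy)) ? abs(dx) : abs(dy);
    float x = (float)xa, y = (float)ya;
    setPixel(xa, ya);
    if (steps == 0) return;                   /* degenerate: endpoints coincide */
    float xIncr = (float)dx / (float)steps;   /* per-step increment in x */
    float yIncr = (float)dy / (float)steps;   /* per-step increment in y */
    for (int k = 0; k < steps; k++) {
        x += xIncr;
        y += yIncr;
        setPixel((int)(x + 0.5f), (int)(y + 0.5f));  /* round to nearest pixel */
    }
}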

6.Describe the various input and output devices in detail. (16)

Various input and output devices are : (16)


Keyboards
Mouse
Trackball and Space ball
Joysticks
Data glove
Digitizers
Image scanner
Touch panels
Light pens
Voice systems
Hard copy devices
UNIT II
1. i)Define Region code. (02)
ii)Illustrate Cohen Sutherland Line Clipping algorithm with example. (14)
i)Region code (02)
Every line end point in a picture is assigned a four-digit binary code, called a region
code, that identifies the location of the point relative to the boundaries of the clipping
rectangle.
ii) The algorithm includes, excludes or partially includes the line based on the following cases:
• Both endpoints are in the viewport region (bitwise OR of the endpoint codes == 0): trivial accept.
• Both endpoints are in the same non-visible region (bitwise AND of the endpoint codes != 0):
trivial reject.
• Both endpoints are in different regions: In case of this non trivial situation the algorithm
finds one of the two points that is outside the viewport region (there will be at least one
point outside). The intersection of the outpoint and extended viewport border is then
calculated (i.e. with the parametric equation for the line) and this new point replaces the
outpoint. The algorithm repeats until a trivial accept or reject occurs.
The numbers in the figure below are called outcodes. An outcode is computed for each of
the two points in the line. The first bit is set to 1 if the point is above the viewport. The
bits in the outcode represent: Top, Bottom, Right, Left. For example the outcode 1010
represents a point that is top-right of the viewport. Note that the outcodes for endpoints
must be recalculated on each iteration after the clipping occurs.
1001 1000 1010
0001 0000 0010
0101 0100 0110
(14)
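A minimal C sketch of the outcode computation (the bit values follow the Top, Bottom, Right, Left ordering described above; the names are illustrative):

enum { LEFT_EDGE = 1, RIGHT_EDGE = 2, BOTTOM_EDGE = 4, TOP_EDGE = 8 };

/* Compute the 4-bit region code of (x, y) relative to the clip window. */
int outcode(float x, float y,
            float xwmin, float xwmax, float ywmin, float ywmax)
{
    int code = 0;
    if (x < xwmin)      code |= LEFT_EDGE;
    else if (x > xwmax) code |= RIGHT_EDGE;
    if (y < ywmin)      code |= BOTTOM_EDGE;
    else if (y > ywmax) code |= TOP_EDGE;
    return code;
}

/* Trivial accept: (outcode(P1) | outcode(P2)) == 0
   Trivial reject: (outcode(P1) & outcode(P2)) != 0 */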
2.Demonstrate the concept of 2D viewing in detail.(16)
2D viewing-transformation pipeline (08)
The mapping of a 2D world coordinate system to device coordinates is called
a two-dimensional viewing transformation. Usually, in 2D, viewing coordinates and world
coordinates are the same.
The pipeline proceeds through the following coordinate systems:
MC (modeling coordinates) → construct the world-coordinate scene by transforming modeling coordinates → WC (world coordinates) → convert world coordinates to viewing coordinates → VC (viewing coordinates) → transform viewing coordinates to normalized coordinates → NC (normalized coordinates) → map normalized coordinates to device coordinates → DC (device coordinates)
• Viewing coordinate reference frame (03)
o This coordinate system provides the reference for specifying the world
coordinate window.
• Window-to-viewport coordinate transformation (05)
o Perform a scaling transformation using the fixed-point position (xwmin, ywmin) that scales the window area to the size of the viewport.
o Translate the scaled window area to the position of the viewport.
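A minimal C sketch of this window-to-viewport mapping in its scale-then-translate form (the names are illustrative):

/* Map a world-coordinate point inside the window to the viewport. */
void windowToViewport(float xw, float yw,
                      float xwmin, float xwmax, float ywmin, float ywmax,
                      float xvmin, float xvmax, float yvmin, float yvmax,
                      float *xv, float *yv)
{
    float sx = (xvmax - xvmin) / (xwmax - xwmin);  /* x scaling factor */
    float sy = (yvmax - yvmin) / (ywmax - ywmin);  /* y scaling factor */
    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}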

3. Explain Point and Line Clipping with example in detail. (16)


• Clipping (02)
Remove points outside a region of interest.
• Discard (parts of) primitives outside our window...
▪ Point clipping (07)
Remove points outside window.
• A point is either entirely inside the region or not.
▪ Line clipping (07)
Remove portion of line segment outside window.
• Line segments can straddle the region boundary.
• Liang-Barsky algorithm efficiently clips line segments to a
halfspace.
• Halfspaces can be combined to bound a convex region.
• Use outcodes to better organize combinations of halfspaces.
• Can use some of the ideas in Liang-Barsky to clip points.

4.Illustrate Lian Barsky Line Clipping algorithm in detail. (16)


o Algorithm (03)
The Liang–Barsky algorithm uses the parametric equation of a line and inequalities
describing the range of the clipping box to determine the intersections between the line and
the clipping box. With these intersections it knows which portion of the line should be
drawn. This algorithm is significantly more efficient than Cohen–Sutherland, but Cohen–Sutherland does trivial accepts and rejects much faster, so it should be considered instead
if most of the lines you need to clip would be completely in or out of the clip window.
Let the line segment to be clipped run from (x1, y1) to (x2, y2). The parametric equations of the line segment give x-values and y-values for every point in terms of a parameter u that ranges from 0 to 1. The equations are

x = x1 + u ( x2 - x1 )

and

y = y1 + u ( y2 - y1 )
o Conditions (08)
▪ xwmin<=x<=xwmax
▪ ywmin<=y<=ywmax
o Derivation of decision parameters (03)
o Advantages (02)
▪ Can be extended for 3D clipping.
▪ It eliminates the intersection calculations.
▪ Faster than the Cohen–Sutherland line clipping algorithm.
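A minimal C sketch of the algorithm under the parametric form above (p is the denominator and q the numerator of each boundary test; the names are illustrative):

/* One boundary test; u1/u2 are the running entry/exit parameters.
   Returns 0 if the segment is rejected at this boundary. */
int clipTest(float p, float q, float *u1, float *u2)
{
    float r;
    if (p < 0.0f) {              /* line proceeds from outside to inside */
        r = q / p;
        if (r > *u2) return 0;
        if (r > *u1) *u1 = r;
    } else if (p > 0.0f) {       /* line proceeds from inside to outside */
        r = q / p;
        if (r < *u1) return 0;
        if (r < *u2) *u2 = r;
    } else if (q < 0.0f) {
        return 0;                /* line parallel to and outside the boundary */
    }
    return 1;
}

/* Clip (x1,y1)-(x2,y2); returns 1 and updates the endpoints if visible. */
int clipLine(float *x1, float *y1, float *x2, float *y2,
             float xwmin, float xwmax, float ywmin, float ywmax)
{
    float u1 = 0.0f, u2 = 1.0f;
    float dx = *x2 - *x1, dy = *y2 - *y1;
    if (clipTest(-dx, *x1 - xwmin, &u1, &u2) &&
        clipTest( dx, xwmax - *x1, &u1, &u2) &&
        clipTest(-dy, *y1 - ywmin, &u1, &u2) &&
        clipTest( dy, ywmax - *y1, &u1, &u2)) {
        if (u2 < 1.0f) { *x2 = *x1 + u2 * dx; *y2 = *y1 + u2 * dy; }
        if (u1 > 0.0f) { *x1 += u1 * dx;      *y1 += u1 * dy; }
        return 1;
    }
    return 0;
}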

5.Explain 2D Geometric transformations in detail (16)


• Translation (06)

• Rotation (05)

• Scaling (05)
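A minimal C sketch of the three transformations as 3x3 homogeneous matrices acting on column vectors (x, y, 1) (the Mat3 type and function names are illustrative):

#include <math.h>

typedef struct { float m[3][3]; } Mat3;

Mat3 translation(float tx, float ty)
{
    Mat3 t = {{{1, 0, tx}, {0, 1, ty}, {0, 0, 1}}};
    return t;
}

Mat3 rotation(float theta)       /* counter-clockwise about the origin */
{
    float c = cosf(theta), s = sinf(theta);
    Mat3 r = {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
    return r;
}

Mat3 scaling(float sx, float sy)
{
    Mat3 s = {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}};
    return s;
}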
6.i) Define polygon. (02)
ii)Illustrate Sutherland-Hodgeman polygon clipping algorithm in detail. (14)
o Polygon (02)
▪ A polygon is a closed shape having many sides or faces.
o Four cases (10)
▪ The line moves from outside to inside (v1 to v2).
▪ If the line is completely present inside.
▪ If the line is moving from inside to outside.
▪ If the line is present completely outside.
o Advantage (02)
▪ It is suited to convex polygons.
o Disadvantage (02)
▪ It is not suited to concave polygons.
▪ If a concave polygon is clipped, extraneous connecting lines may appear in the output.

UNIT III

1. Illustrate 3D Transformations in detail. (16)

• Transformations are an important part of every 3D application.


A number of specific applications of transformations:
• Aligning an object with the camera (object is always in the same relative position
from the camera)
• A hierarchy of transformations can be used to transform objects with parent child
relationships (for example the rotation of a steering wheel in a moving car)
Translation (05)
The first example is the translation of a point. To translate a point we multiply the homogeneous representation of the point by a 4x4 matrix. With the row-vector convention used here, the translation vector [tx, ty, tz] is placed in the last row of the transformation matrix:

| 1   0   0   0 |
| 0   1   0   0 |
| 0   0   1   0 |
| tx  ty  tz  1 |

Every point that is multiplied by this matrix is translated by the amount [tx, ty, tz]. To multiply a homogeneous coordinate by a matrix we multiply the vector with each column of the matrix.
The final result of the transformation is:

[x y z 1] M = [x + tx, y + ty, z + tz, 1]

This result is as expected but it is important to note that this transformation would not work
without homogeneous coordinates. It could come across as overkill to use matrices for this
simple transformation. However, the next section will demonstrate the usage of matrices
to rotate a vertex round the origin.
Rotation matrices for Euler angles (rotation about the X, Y and Z axes) (06)
Euler angles can be used to describe the rotation of an object in 3D space. It is customary in a 3D application to split the rotation into three parts: yaw (rotation about the Y axis), pitch (rotation about the X axis) and roll (rotation about the Z axis).

Rotation about the Z axis

We start with this rotation matrix because it corresponds to a rotation in 2D. The rotation matrix only changes the x and y components of the vertex:

|  cos θ   sin θ   0   0 |
| -sin θ   cos θ   0   0 |
|    0       0     1   0 |
|    0       0     0   1 |

The formula for this rotation is:

x' = x cos θ - y sin θ
y' = x sin θ + y cos θ

OpenGL defines the matrix in column-major order, so for OpenGL the same matrix is stored column by column in memory.
Scaling (05)
o It alters the size of the object.
o Equations: x' = x · sx, y' = y · sy, z' = z · sz.
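A minimal C sketch applying the rotation about the Z axis to a homogeneous point, using the row-vector convention of the text (the function name is illustrative):

#include <math.h>

/* Rotate the homogeneous point v = [x y z 1] about the Z axis by theta. */
void rotateZ(float v[4], float theta)
{
    float c = cosf(theta), s = sinf(theta);
    float x = v[0], y = v[1];
    v[0] = x * c - y * s;   /* x' = x cos(theta) - y sin(theta) */
    v[1] = x * s + y * c;   /* y' = x sin(theta) + y cos(theta) */
    /* z and w components are unchanged by a rotation about Z */
}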

2.Compare and contrast HLS and HSV color models in detail (16)
• HLS color model (08)
o This model is based on intuitive color parameters.
o This model has the double-cone representation.
o The three color parameters in this model are hue (H), lightness (L), and
saturation (S).
• HSV color model (08)
o HSV model uses color descriptions that have a more intuitive appeal to a user.
o To give a color specification, a user selects a spectral color and the amounts of
white and black that is to be added to obtain different shades, tints and tones.
o Color parameters are hue (H), saturation (S), and value (V).

3.Explain the basic illumination models in detail. (16)


• Ambient light (02)
o Ambient light or background light is a simple way to model the
combination of light reflections from various surfaces to produce a
uniform illumination.
• Diffuse reflection (02)
o Ambient-light reflection is an approximation of global diffuse
lighting effects.
o Diffuse reflections are constant over each surface in a scene,
independent of the viewing direction.
• Specular reflection and the Phong model (02)
o Specular reflection is the result of total, or near total, reflection of the incident light in a concentrated region around the specular-reflection angle.
o An empirical model for calculating the specular-reflection range, developed by Phong Bui Tuong, is called the Phong specular-reflection model.
• Warn model (02)
o The Warn model provides a method for simulating studio lighting effects by controlling light intensity in different directions.
• Intensity attenuation (02)
o It is an effective method for limiting intensity values when a single
light source is used to illuminate a scene.
• Color considerations (02)
o Most graphics displays of realistic scenes are in color.
o The illumination model considers only monochromatic lighting
effects.
o To incorporate color, the intensity equations are used as a function
of the color properties of the light sources and object surfaces.
• Transparency (02)
o A transparent surface produces both reflected and transmitted light.
o The relative contribution of the transmitted light depends on the
degree of transparency of the surface and whether any light sources
or illuminated surfaces are behind the transparent surface.
• Shadows (02)
o Hidden- surface methods can be used to locate areas where light
sources produce shadows.
o By applying a hidden-surface method with a light source at the view
position, determine which surface sections cannot be seen from the
light source. These are the shadow areas.

4.Explain standard primaries and the chromaticity diagram in detail. (16)


• XYZ color model (08)
o The set of CIE primaries is generally referred to as the XYZ or (X, Y, Z) color
model, where X, Y, Z represents vectors in a three-dimensional, additive color
space.
• CIE chromaticity diagram (08)
o The normalized amounts x and y for colors are plotted in the visible spectrum,
the tongue shaped curve is obtained known as CIE chromaticity diagram.

5.Illustrate depth-sorting method in detail (16)


• Basic functions (02)
• Surfaces are sorted in order of decreasing depth.
• Surfaces are scan converted in order, starting with the surface of greatest
depth.
• Painter’s algorithm (06)
▪ Used for solving the hidden-surface problem.
• Tests (08)
▪ The bounding rectangles in the xy plane for the two surfaces do not
overlap.
▪ Surface S is completely behind the overlapping surface relative to
the viewing position.
▪ The overlapping surface is completely in front of S relative to the
viewing position.
▪ The projections of the two surfaces onto the view plane do not
overlap.

6. Describe quadratic surfaces with the parametric representations in detail. (16)


• Quadric surfaces (04)
o These are a frequently used class of objects, described by second-degree (quadratic) equations.
• Sphere (04)
o It is the collection of points at a common distance r from the centre; a point on the surface of the sphere is denoted p(x, y, z).
o Parametric equation representation.
• Ellipsoid (04)
• An ellipsoid surface is an extension of a spherical surface.
• Parametric equation representation.
• Torus (04)
• A torus is a doughnut-shaped object generated by rotating a circle or other
conic about a specified axis.
• Parametric equation representation.

UNIT IV
1.i) What are the advantages and disadvantages of lossless compression?(06)
ii)Compare and contrast these with lossy compression schemes. (10)

i) Lossless compression refers to compression in which the image is reduced without any quality loss: the original data can be reconstructed exactly.
Advantages:
No quality is lost, no matter how many times the file is compressed and restored, and lots of tools, plugins, and software support it.
Disadvantages:
Compression ratios are modest, so file sizes remain comparatively large; lossless methods cannot approach the small file sizes of lossy compression.
(06)
ii) Lossless vs. lossy compression:
The advantage of lossy methods over lossless methods is that in some cases a lossy method can produce a much smaller compressed file than any known lossless method, while still meeting the requirements of the application. Lossy methods are most often used for compressing sound, images or videos. The compression ratio (that is, the size of the compressed file compared to that of the uncompressed file) of lossy video codecs is nearly always far superior to those of the audio and still-image equivalents. Audio can be compressed at 10:1 with no noticeable loss of quality; video can be compressed immensely, e.g. 300:1, with little visible quality loss. Lossily compressed still images are often compressed to 1/10th their original size, as with audio, but the quality loss is more noticeable, especially on closer inspection.

When a user acquires a lossily compressed file (for example, to reduce download time), the retrieved file can be quite different from the original at the bit level while being indistinguishable to the human ear or eye for most practical purposes. Many methods focus on the idiosyncrasies of human anatomy, taking into account, for example, that the human eye can see only certain frequencies of light. The psycho-acoustic model describes how sound can be highly compressed without degrading the perceived quality of the sound. Flaws caused by lossy compression that are noticeable to the human eye or ear are known as compression artifacts.

Lossless compression algorithms usually exploit statistical redundancy in such a way as to represent the sender's data more concisely, but nevertheless perfectly. Lossless compression is possible because most real-world data has statistical redundancy. For example, in English text, the letter 'e' is much more common than the letter 'z', and the probability that the letter 'q' will be followed by the letter 'z' is very small. Another kind of compression, called lossy data compression, is possible if some loss of fidelity is acceptable. For example, a person viewing a picture or television video scene might not notice if some of its finest details are removed or not represented perfectly. Similarly, two clips of audio may be perceived as the same to a listener even though one is missing details found in the other. Lossy data compression algorithms introduce relatively minor differences and represent the picture, video, or audio using fewer bits.

Lossless compression schemes are reversible so that the original data can be reconstructed, while lossy schemes accept some loss of data in order to achieve higher compression. However, lossless data compression algorithms will always fail to compress some files; indeed, any compression algorithm will necessarily fail to compress any data containing no discernible patterns. Attempts to compress data that has been compressed already will therefore usually result in an expansion, as will attempts to compress encrypted data. In practice, lossy data compression will also come to a point where compressing again does not work, although an extremely lossy algorithm, which for example always removes the last byte of a file, will always compress a file up to the point where it is empty. (10)
2. With neat diagram explain the multimedia system architecture. (16)
Multimedia encompasses a large variety of technologies and integration of multiple architectures
interacting in real time. All of these multimedia capabilities must integrate with the standard user
interfaces such as Microsoft Windows. The following figure describes the architecture of a
multimedia workstation environment.

For each special device, such as scanners, video cameras, VCRs and sound equipment, a software device driver is needed to provide the interface from an application to the device. The GUI requires control extensions to support applications such as full-motion video.
High Resolution Graphics Display
The various graphics standards, such as CGA, VGA and XGA, have demonstrated the increasing demand for higher resolutions for GUIs.
Combined graphics and imaging applications require functionality at three levels. They are
provided by three classes of single-monitor architecture.
(i) VGA mixing: In VGA mixing, the image acquisition memory serves as the display source memory, thereby fixing its position and size on screen.
(ii) VGA mixing with scaling: The use of scaler ICs allows sizing and positioning of images in predefined windows. Resizing the window causes the image to be retrieved again.
(iii) Dual-buffered VGA mixing/scaling: Double-buffer schemes maintain the original image in a decompression buffer and the resized image in a display buffer.
The IMA Architectural Framework
The Interactive Multimedia Association has a task group to define the architectural framework for multimedia to provide interoperability. The task group has concentrated on the desktops and the servers. The desktop focus is to define the interchange formats; this format allows multimedia objects to be displayed on any workstation.
The architectural approach taken by IMA is based on defining interfaces to a multimedia interface
bus. This bus would be the interface between systems and multimedia sources. It provides
streaming I/O services, including filters and translators.
Network Architecture for Multimedia Systems:
Multimedia systems need special networks, because large volumes of images and video messages are being transmitted. Asynchronous Transfer Mode (ATM) technology simplifies transfers across LANs and WANs. (16)

3. Describe the algorithms for the CCITT Group 3 standard. How does CCITT Group 4
differ from CCITT Group 3? (16)
Many facsimile and document imaging file formats support a form of lossless data compression
often described as CCITT encoding. The CCITT (International Telegraph and Telephone
Consultative Committee) is a standards organization that has developed a series of
communications protocols for the facsimile transmission of black-and-white images over
telephone lines and data networks. These protocols are known officially as the CCITT T.4 and T.6
standards but are more commonly referred to as CCITT Group 3 and Group 4 compression,
respectively. Sometimes CCITT encoding is referred to, not entirely accurately, as Huffman
encoding. Huffman encoding is a simple compression algorithm introduced by David Huffman in
1952. CCITT 1-dimensional encoding is a specific type of Huffman encoding. The other types of CCITT encodings are not, however, implementations of the Huffman scheme.

Group 3 and
Group 4 encodings are compression algorithms that are specifically designed for encoding 1-bit
image data. Many document and FAX file formats support Group 3 compression, and several,
including TIFF, also support Group 4. Group 3 encoding was designed specifically for bi-level,
black-and-white image data telecommunications. All modern FAX machines and FAX modems
support Group 3 facsimile transmissions. Group 3 encoding and decoding is fast, maintains a good
compression ratio for a wide variety of document data, and contains information that aids a Group
3 decoder in detecting and correcting errors without special hardware. Group 4 is a more efficient
form of bi-level compression that has almost entirely replaced the use of Group 3 in many
conventional document image storage systems. (An exception is facsimile document storage
systems where original Group 3 images are required to be stored in an unaltered state.) Group 4
encoded data is approximately half the size of 1-dimensional Group 3-encoded data. Although
Group 4 is fairly difficult to implement efficiently, it encodes at least as fast as Group 3 and in
some implementations decodes even faster. Also, Group 4 was designed for use on data networks,
so it does not contain the synchronization codes used for error detection that Group 3 does, making
it a poor choice for an image transfer protocol. Group 4 is sometimes confused with the IBM MMR
(Modified Modified READ) compression method. In fact, Group 4 and MMR are almost exactly
the same algorithm and achieve almost identical compression results. IBM released MMR in 1979
with the introduction of its Scanmaster product before Group 4 was standardized. MMR became
IBM's own document compression standard and is still used in many IBM imaging systems today.

Document-imaging systems that store large amounts of facsimile data have adopted these CCITT
compression schemes to conserve disk space. CCITT-encoded data can be decompressed quickly
for printing or viewing (assuming that enough memory and CPU resources are available). The
same data can also be transmitted using modem or facsimile protocol technology without needing
to be encoded first.

The CCITT algorithms are non-adaptive. That is, they do not adjust the
encoding algorithm to encode each bitmap with optimal efficiency. They use a fixed table of code
values that were selected according to a reference set of documents containing both text and
graphics. The reference set of documents were considered to be representative of documents that
would be transmitted by facsimile. Group 3 normally achieves a compression ratio of 5:1 to 8:1
on a standard 200-dpi (204x196 dpi), A4-sized document. Group 4 results are roughly twice as
efficient as Group 3, achieving compression ratios upwards of 15:1 with the same document.

Claims that the CCITT algorithms are capable of far better compression on standard business
documents are exaggerated--largely by hardware vendors. Because the CCITT algorithms have
been optimized for typed and handwritten documents, it stands to reason that images radically
different in composition will not compress very well. This is all too true. Bi-level bitmaps that
contain a high frequency of short runs, as typically found in digitally halftoned continuous-tone
images, do not compress as well using the CCITT algorithms. Such images will usually result in a
compression ratio of 3:1 or even lower, and many will actually compress to a size larger than the
original.

The CCITT actually defines three algorithms for the encoding of bi-level image data:
• Group 3 One-Dimensional (G31D)
• Group 3 Two-Dimensional (G32D)
• Group 4 Two-Dimensional (G42D)
G31D is the simplest of the algorithms and the easiest to implement. G32D and G42D are much
more complex in their design and operation and are described only in general terms below. The
Group 3 and Group 4 algorithms are standards and therefore produce the same compression results
for everybody. If you have heard any claims made to the contrary, it is for one of these reasons:
• Non-CCITT test images are being used as benchmarks.
• Proprietary modifications have been made to the algorithm.
• Pre- or post-processing is being applied to the encoded image data.
• You have been listening to a misinformed salesperson.
Group 3 One-Dimensional (G31D)
Group 3 One-Dimensional encoding (G31D) is a variation of
the Huffman keyed compression scheme. A bi-level image is composed of a series of black-and-
white 1-bit pixel runs of various lengths (1 = black and 0 = white). A Group 3 encoder determines
the length of a pixel run in a scan line and outputs a variable-length binary code word representing
the length and color of the run. Because the code word output is shorter than the input, pixel data
compression is achieved. The run-length code words are taken from a predefined table of values
representing runs of black or white pixels. This table is part of the T.4 specification and is used to
encode and decode all Group 3 data. The sizes of the code words were originally determined by the
CCITT, based statistically on the average frequency of black-and-white runs occurring in typical
type and handwritten documents. The documents included line art and were written in several
different languages. Run lengths that occur more frequently are assigned smaller code words while
run lengths that occur less frequently are assigned larger code words. In printed and handwritten
documents, short runs occur more frequently than long runs. Two- to 4-pixel black runs are the
most frequent in occurrence. The maximum size of a run length is bounded by the maximum width
of a Group 3 scan line. Run lengths are represented by two types of code words: makeup and
terminating. An encoded pixel run is made up of zero or more makeup code words and a
terminating code word. Terminating code words represent shorter runs, and makeup codes
represent longer runs. There are separate terminating and makeup code words for both black and
white runs. Pixel runs with a length of 0 to 63 are encoded using a single terminating code. Runs
of 64 to 2623 pixels are encoded by a single makeup code and a terminating code. Run lengths
greater than 2623 pixels are encoded using one or more makeup codes and a terminating code. The
run length is the sum of the length values represented by each code word. Here are some examples
of several different encoded runs:
• A run of 20 black pixels would be represented by the terminating code for a black run length of
20. This reduces a 20-bit run to the size of an 11-bit code word, a compression ratio of nearly 2:1.
• A white run of 100 pixels would be encoded using the makeup code for a white run length of 64
pixels followed by the terminating code for a white run length of 36 pixels (64 + 36 = 100). This
encoding reduces 100 bits to 13 bits, or a compression ratio of over 7:1.
• A run of 8800 black pixels would be encoded as three makeup codes of 2560 black pixels (7680
pixels), a makeup code of 1088 black pixels, followed by the terminating code for 32 black pixels
(2560 + 2560 + 2560 + 1088 + 32 = 8800).
In this case, we will have encoded 8800 run-length bits into five code words with a total length of
61 bits, for an approximate compression ratio of 144:1. This is illustrated below. (16)
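As a sketch of the decomposition described above in C (the printed labels stand in for the actual variable-length bit codes from the T.4 tables; the function name is illustrative):

#include <stdio.h>

/* Decompose a pixel-run length into CCITT G31D makeup and terminating
   lengths: makeup codes cover multiples of 64 (2560 at most per code),
   the terminating code covers the remaining 0..63 pixels. */
void decomposeRun(int runLength)
{
    while (runLength > 2623) {       /* 2623 = 2560 makeup + 63 terminating */
        printf("makeup(2560) ");
        runLength -= 2560;
    }
    if (runLength >= 64) {
        int makeup = (runLength / 64) * 64;
        printf("makeup(%d) ", makeup);
        runLength -= makeup;
    }
    printf("terminating(%d)\n", runLength);
}

For example, decomposeRun(8800) prints makeup(2560) three times, then makeup(1088) and terminating(32), matching the third example above.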

4. Examine the TIFF file format along with the RIFF file format. (16)
Also Known As: Tag Image File Format
Usage:
Used for data storage and interchange. The general nature of TIFF allows it to be used in any
operating environment, and it is found on most platforms requiring image data storage. File
Organization TIFF files are organized into three sections: the Image File Header (IFH), the Image
File Directory (IFD), and the bitmap data. Of these three sections, only the IFH and IFD are
required. It is therefore quite possible to have a TIFF file that contains no bitmapped data at all,
although such a file would be highly unusual. A TIFF file that contains multiple images has one
IFD and one bitmap per image stored. TIFF has a reputation for being a complicated format in part
because the location of each Image File Directory and the data the IFD points to--including the
bitmapped data--may vary. In fact, the only part of a TIFF file that has a fixed location is the Image
File Header, which is always the first eight bytes of every TIFF file. All other data in a TIFF file
is found by using information found in the IFD. Each IFD and its associated bitmap are known as
a TIFF subfile. There is no limit to the number of subfiles a TIFF image file may contain. Each
IFD contains one or more data structures called tags. Each tag is a 12-byte record that contains a
specific piece of information about the bitmapped data. A tag may contain any type of data, and
the TIFF specification defines over 70 tags that are used to represent specific information. Tags
are always found in contiguous groups within each IFD. Tags that are defined by the TIFF
specification are called public tags and may not be modified outside of the parameters given in the
latest TIFF specification. User-definable tags, called private tags, are assigned for proprietary use
by software developers through the Aldus Developer's Desk. See the TIFF 6.0 specification for
more information on private tags. Note that the TIFF 6.0 specification has replaced the term tag
with the term field. Field now refers to the entire 12-byte data record, while the term tag has been
redefined to refer only to a field's identifying number. Because so many programmers are familiar
with the older definition of the term tag, the authors have chosen to continue using tag, rather than
field, in this description of TIFF to avoid confusion.
Also Known As: RIFF, Resource Interchange File Format, RIFX, .WAV, .AVI, .BND, .RMI,
.RDI.
Usage:
RIFF is a device control interface and common file format native to the Microsoft Windows
system. It is used to store audio, video, and graphics information used in multimedia applications.
RIFF is a binary file format containing multiple nested data structures. Each data structure within a
RIFF file is called a chunk. Chunks do not have fixed positions within a RIFF file, and therefore
standard offset values cannot be used to locate their fields. A chunk contains data such as a data
structure, a data stream, or another chunk called a subchunk. Every RIFF chunk has the following
basic structure:
typedef struct _Chunk
{
    DWORD ChunkId;                /* Chunk ID marker */
    DWORD ChunkSize;              /* Size of the chunk data in bytes */
    BYTE  ChunkData[ChunkSize];   /* The chunk data (ChunkSize bytes; a
                                     pseudo-declaration, not literal C) */
} CHUNK;
ChunkId contains four ASCII characters that identify the data the chunk contains. For example,
the characters RIFF are used to identify chunks containing RIFF data. If an ID is smaller than four
characters, it is padded on the right using spaces (ASCII 32). Note that RIFF files are written in
little-endian byte order. Files written using the big-endian byte ordering scheme have the identifier
RIFX. ChunkSize is the length of the data stored in the ChunkData field, not including any padding
added to the data. The size of the ChunkId and ChunkSize fields are not themselves included in
this value. ChunkData contains data that is WORD-aligned within the RIFF file. If the data is an
odd length in size, an extra byte of NULL padding is added to the end of the data. The ChunkSize
value does not include the length of the padding. Subchunks also have the same structure as
chunks. A subchunk is simply any chunk that is contained within another chunk. The only chunks
that may contain subchunks are the RIFF file chunk RIFF and the list chunk, LIST (explained in
the next section). All other chunks may contain only data. A RIFF file itself is one entire RIFF
chunk. All other chunks and subchunks in the file are contained within this chunk. If you are decoding, your RIFF reader should skip any chunks that it does not recognize or cannot use. If you are encoding, your RIFF writer should write out all unknown and unused chunks that were read; do not discard them. (16)
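To make these reading rules concrete, the following is a minimal sketch (an illustration, not a normative implementation) of a loop that walks the chunks of an already-opened, little-endian RIFF file, honoring the WORD-alignment rule and skipping chunks it does not recognize:

#include <stdio.h>
#include <stdint.h>

typedef uint32_t DWORD;

/* Walk the chunks of an open RIFF file, skipping each one's data.
   Assumes the 12-byte outer header (the "RIFF" id, the file size, and
   the form type such as "WAVE") has already been consumed, and that
   the file uses little-endian byte order (id "RIFF", not "RIFX").    */
static void walk_chunks(FILE *fp)
{
    unsigned char hdr[8];

    while (fread(hdr, 1, 8, fp) == 8)
    {
        /* Bytes 0-3: ChunkId as four ASCII characters.
           Bytes 4-7: ChunkSize, stored little-endian.                */
        DWORD size = (DWORD)hdr[4]         | ((DWORD)hdr[5] << 8) |
                     ((DWORD)hdr[6] << 16) | ((DWORD)hdr[7] << 24);

        printf("chunk %.4s, %lu data bytes\n",
               (const char *)hdr, (unsigned long)size);

        /* ChunkData is WORD-aligned: an odd-sized chunk carries one
           NULL pad byte that ChunkSize does not count, so round the
           skip distance up to an even number of bytes.               */
        if (fseek(fp, (long)(size + (size & 1)), SEEK_CUR) != 0)
            break;
    }
}

A complete reader would additionally descend into RIFF and LIST chunks rather than skipping them, since those are the only chunk types that may contain subchunks.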
5. i) Why is an electronic pen a more natural means of input? Analyze. (06)
ii) Describe the operation of a pen system. (10)
i) A pen is a natural device used to write or draw on a document. It is highly suitable for unskilled or partly skilled keyboard operators who would rather use a pen than type on a keyboard. Pen input requires no training for these operators, since it emulates the common pen, the instrument they are already used to writing or drawing with. A pen also allows the user to add annotations to forms and documents called up on the screen of a subnotebook pen-based computer. An electronic pen does not intimidate people. (06)
ii)Operation of a pen system:
The pen computing system is also known as the Pen Extension for Microsoft Windows. The Pen Extension includes a set of dynamic link libraries (DLLs) and drivers that make applications pen-enabled. The DLLs allow pen-based input and handwriting recognition. The Microsoft Windows for Pen Computing system consists of the following components:
Electronic pen and digitizer
Pen driver
Recognition context manager (RC manager)
Recognizer
Dictionary
Display driver (10)
6. Compare and contrast JPEG and MPEG. How would Motion JPEG differ from MPEG? (16)
Both JPEG and MPEG are two different types of compressing formats. The main difference
between the two is that JPEG is mainly used for image compression, while MPEG has various
standards for audio and video compression.
JPEG stands for Joint Photographic Expert Group. The file name for a JPEG image is .jpg or .jpeg.
JPEG is the most commonly used format for photographs. It is specifically good for color
photographs or for images with many blends or gradients. However, it is not the best with sharp
edges and might lead to a little blurring. This is mainly because JPEG is a method of lossy
compression for digital photography.
This means that every time the image is saved in JPEG format, there is a slight loss of quality due to compression. Hence, JPEG is not the best format when one needs to make numerous edits and re-saves to an image, as each re-save introduces a further slight loss of quality. Still, if one makes only a few edits and the image is saved at a high-quality setting, the loss of quality is largely negligible. An advantage of the JPEG format is that, thanks to compression, a JPEG image takes up only a few MB of data or less.
MPEG, on the other hand, stands for the Moving Picture Experts Group. It is a working group of
experts that was formed in 1988 by ISO and IEC. It was a joint initiative between Hiroshi Yasuda
of the Nippon Telegraph and Telephone and Leonardo Chiariglione. Chiariglione has served as
the group’s Chair since the group’s inception.
The aim of MPEG was to set standards for audio and video compression and transmission. By 2005, the group had grown to include approximately 350 members per meeting from various industries, universities, and research institutions.
The standards as set by MPEG consist of different Parts. Each part covers a certain aspect of the
whole specification. MPEG has standardized the following compression formats and ancillary
standards:
• MPEG-1 (1993): Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s (ISO/IEC 11172), a rate chosen to match single-speed CD-ROM throughput. It includes the popular MPEG-1 Audio Layer III (MP3) audio compression format.
• MPEG-2 (1995): Generic coding of moving pictures and associated audio information
(ISO/IEC 13818).
• MPEG-3: MPEG-3 dealt with standardizing scalable and multi-resolution compression and
was intended for HDTV compression but was found to be redundant and was merged with
MPEG-2.
• MPEG-4 (1998): Coding of audio-visual objects. It includes MPEG-4 Part 14 (MP4).
Motion JPEG (M-JPEG) differs from MPEG in that it simply compresses each video frame independently as a separate JPEG image, using intra-frame compression only. MPEG, in contrast, also exploits the redundancy between successive frames (inter-frame prediction), so it achieves far higher compression ratios for the same perceived quality, at the cost of more complex encoding and decoding. The frame independence of M-JPEG, however, makes frame-accurate editing much easier. (16)
UNIT V
1. Explain the different types of authoring systems available. (16)
A multimedia presentation differs from a normal presentation in that it contains some form of animation or special media. It includes video, graphics, audio, music, etc., along with text for the presentation.
• A multimedia authoring tool is a development environment where one can merge a number of media into a single application.
• Typically a multimedia presentation contains at least one of the following elements:
Video or movie clip
Animation
Sound (this could be a voice-over, background music or sound clips)
Image
• Authoring can also be defined as the process of creating a multimedia application.
• Multimedia authoring tools provide the framework for organizing and editing the elements of a
multimedia project.
• Authoring software provides an integrated environment for combining the content and
functions of a project.
• It enables the developer to combine text, graphics, audio, video and animation to create an
interactive presentation/project. E.g. MS PowerPoint
• Authoring tools are used for designing interactivity and the user interface, for presenting your project on screen, and for assembling multimedia elements into a single project.
Features of Authoring Tools:
• Editing and organizing features
• Programming features
• Interactivity features
• Playback features
• Cross-platform feature
Design issues of Authoring Systems:
• Display resolution- Display resolution or display modes of a digital television, computer
monitor or display device is the number of distinct pixels in each dimension that can be
displayed.
• File format and compression issues: Authoring systems should be capable of handling different
file formats.
File format is a standard way that information is encoded for storage in a computer file.
E.g.
Text formats:
RTF (Rich Text Format) is a proprietary file format introduced in 1987 by Microsoft for cross-platform document interchange.
Plain Text: plain text files can be opened, read, and edited with most text editors. Commonly used editors are Notepad (Windows) and Gedit or nano (Linux/Unix). Plain text is the original and still popular way of conveying e-mail.
Image formats:
TIFF (Tagged Image File Format), BMP (Bitmap), DIB (Device Independent Bitmap), GIF (Graphics Interchange Format), JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics).
• The first (and hardest) part is to choose the technology for your presentation. The choice comes down to two main contenders:
Adobe Flash
• Flash allows you to create presentations where you can build in powerful animation. It also has
very good video compression technology.
• Perhaps the best part of Flash is that it also allows you to put presentations directly onto your
web site.
• The biggest problem, though, is that Flash is a difficult system to learn to use.
Microsoft PowerPoint.
• The easiest way to create a multimedia presentation is in Microsoft PowerPoint. You can add in
video, a soundtrack and also a reasonable degree of animation.
• By far the biggest advantage of making multimedia presentations in PowerPoint is that it is
easy for anyone to be able to edit the presentation.
Types of Authoring Systems
Icon based authoring system
• Each part is represented by an icon (a symbolic picture).
• Each icon does a specific task, e.g. plays a sound
• Icons are then linked together to form complete applications
• Can easily visualize the structure and navigation of the final application
Dedicated authoring system
• Dedicated authoring systems are designed for a single user, consisting of a single track for playback.
• In the case of a dedicated authoring system, users need not be experts in multimedia or professional artists.
• Dedicated authoring systems are extremely simple, since they provide a drag-and-drop concept.
• Authoring is done on objects captured by a video camera or image scanner, or on objects stored in a multimedia library.
• It does not provide an effective presentation, due to the single stream.
• Examples of dedicated authoring systems are Paint, MS PowerPoint, etc.
Advantage:
Very simple
Disadvantage:
Designing an authoring system capable of combining even two objects is complex.

Telephone Authoring Systems
• There are applications where the phone is linked into a multimedia electronic mail application.
• The telephone can be used as a reading device by providing full text-to-speech synthesis capability.
• The phone can be used for voice command input for setting up and managing voice mail
messages.
• Digitized voice clips are captured via the phone and embedded in electronic mail messages.
• As the capability to recognize continuous speech is deployed, phones can be used to create
electronic mail.
Programmable authoring system
• Structured authoring tools did not allow authors to express automatic functions for handling certain routine tasks.
• Programmable authoring systems improve on this by providing powerful functions based on image processing and analysis, and by embedding program interpreters so that these functions can be scripted. E.g. Visual Basic, NetBeans, Visual Studio.
Timeline Based Authoring
• It has an ability to develop an application like movie.
• It can create complex animations and transitions.
• All the tracks can be played simultaneously carrying different data.
• Best to use when you have a message with a beginning and an end.
• Played back at a speed that you can set.
• Other elements (such as audio events) are triggered at a given time or location in the sequence
of events.
• Can jump to any location in a sequence. (16)
2. How does video conferencing relate to hypermedia messaging? What are the implications of building a system where the user starts with video conferencing and switches to integrated stored messaging?
Hypermedia is used as a logical extension of the term hypertext in which graphics, audio, video,
plain text and hyperlinks intertwine to create a generally non-linear medium of information. This
contrasts with the broader term multimedia, which may be used to describe non-interactive linear
presentations as well as hypermedia. It is also related to the field of electronic literature. The World
Wide Web is a classic example of hypermedia, whereas a non-interactive cinema presentation is
an example of standard multimedia due to the absence of hyperlinks. The first hypermedia work
was, arguably, the Aspen Movie Map. Atkinson's HyperCard popularized hypermedia writing,
while a variety of literary hypertext and hypertext works, fiction and nonfiction, demonstrated the
promise of links. Most modern hypermedia is delivered via electronic pages from a variety of
systems including media players, web browsers, and stand-alone applications (i.e., software that
does not require network access). Audio hypermedia is emerging with voice command devices
and voice browsing. Video conferencing, in turn, is the conducting of a conference between two or more participants at different sites by using computer networks to transmit audio and video data. For example, a point-to-point (two
people) video conferencing system works much like a video telephone. Each participant has a
video camera, microphone, and speakers mounted on his or her computer. As the two participants
speak to one another, their voices are carried over the network and delivered to the other's speakers,
and whatever images appear in front of the video camera appear in a window on the other
participant's monitor. Multipoint videoconferencing allows three or more participants to sit in a
virtual conference room and communicate as if they were sitting right next to each other. Until the mid-1990s, hardware costs made videoconferencing prohibitively expensive for most organizations, but that situation changed rapidly, and videoconferencing became one of the fastest-growing segments of the computer industry. Building a system in which the user starts with live video conferencing and then switches to integrated stored messaging implies that the same capture, compression, and rendering components must serve both real-time streams and stored hypermedia messages, and that a live session must be convertible into stored message objects that can be mailed, annotated, and replayed later. (16)
3. Discuss the concepts involved in distributed multimedia systems. (16)
If a multimedia system is supported by a multiuser system, we call it a distributed multimedia system.
A multi user system designed to support multimedia applications for a large number of users
consists of a number of system components. A typical multimedia application environment
consists of the following components:
1. Application software.
2. Container object store.
3. Image and still video store.
4. Audio and video component store.
5. Object directory service agent.
6. Component service agent.
7. User interface and service agent.
8. Networks (LAN and WAN).
Application Software
The application software performs a number of tasks related to a specific business process. A business process consists of a series of actions that may be performed by one or more users.
The basic tasks combined to form an application include the following:
(1) Object Selection - The user selects a database record or a hypermedia document from a file
system, database management system, or document server.
(2) Object Retrieval- The application retrieves the base object.
(3) Object Component Display - Some document components are displayed automatically when
the user moves the pointer to the field or button associated with the multimedia object.
(4) User Initiated Display - Some document components require user action before
playback/display.
(5) Object Display Management and Editing: Component selection may invoke a component-control sub-application which allows a user to control playback or edit the component object.
Document store
A document store is necessary for applications that require storage of large volumes of documents. The following are some characteristics of document stores:
1. Primary Document Storage: A file system or database that contains primary document objects (container objects). Other attached or embedded documents and multimedia objects may be stored in the document server along with the container object.
2. Linked Object Storage: Embedded components, such as text and formatting information, and linked components, such as pointers to the image, audio, and video components contained in a document, may be stored on separate servers.
3. Linked Object Management: Link information contains the name of the component, its service class or type, general attributes such as size and duration of play (for isochronous objects), and the hardware and software requirements for rendering.
Image and still video store
An image and still video store is a database system optimized for the storage of images. Most systems employ optical disk libraries. Optical disk libraries consist of multiple optical disk platters that are played back by automatically loading the appropriate platter into the drive under device driver control.
The characteristics of image and still video stores are as follows:
(i) Compressed information
(ii) Multi-image documents
(iii) Related annotations
(iv) Large volumes
(v) Migration between high-volume media, such as an optical disk library, and high-speed media, such as magnetic cache storage
(vi) Shared access: the server software managing the store has to be able to manage the different requirements.
4. How does the telephone metaphor differ from the VCR metaphor for voice capture?
Explain. (16)
Telephone metaphor
The telephone was considered an independent office appliance. The advent of voice mail
systems was the first step in changing the role of the telephone. Voice mail servers convert the
analog voice and store it in digital form. With the standards for voice mail file formats and digital
storage of sound for computer systems coming closer together, use of a computer system to manage
the phone system was a natural extension of the user’s desktop. The telephone metaphor combines
normal Windows user-interface ideas with the telephone keypad. The telephone metaphor on a computer screen allows the computer interface to be used in the same way as a telephone keypad.
VCR Metaphor:
The VCR metaphor is used for video capture and playback applications. Its user interface shows all the functions one would find on a video camera when it is in capture mode. (16)
5. Summarize the design issues for multimedia authoring. (16)
Enterprise-wide standards should be set up to ensure that user requirements are fulfilled with good quality and that objects are transferable from one system to another. Standards must therefore be set for a number of design issues:
• Display resolution
• Data formats for capturing data
• Compression algorithms
• Network interfaces
• Storage formats (16)
6. Describe User interface design in detail. (16)
User interface design for multimedia applications is more involved than for other applications due
to the number of types of interactions with the user. Consequently, rather than a simple user
interface dialogue editor, multimedia applications need to use four different kinds of user interface
development tools. We can classify these as follows:
• Media editors
• An authoring application
• Hypermedia object creation
• Multimedia object locator and browser (16)