
Downloaded by Khushboo Mohta (khushbomohta123@gmail.com)

lOMoARcPSD|3112409

www.ignousite.blogspot.com

Course Title : Computer Graphics and Multimedia

Assignment Number : MCA(V)-053/Assignment/2018-19

Maximum Marks : 100

Weightage : 25%

Last Date of Submission : 15th October, 2018 (For July Session)

15th April, 2019 (For January Session)

Question1: Write the Midpoint Circle Generation Algorithm. Compute the coordinate points of a circle drawn with centre at (0, 0) and radius 5, using the midpoint circle algorithm.

Ans. Drawing a circle on the screen is a little more complex than drawing a line. There are two popular algorithms for generating a circle − Bresenham's Algorithm and the Midpoint Circle Algorithm. These algorithms are based on the idea of determining the subsequent points required to draw the circle. Let us discuss the algorithms in detail. The equation of a circle is x² + y² = r², where r is the radius.

Bresenham’s Algorithm

We cannot display a continuous arc on the raster display. Instead, we have to choose the nearest pixel

position to complete the arc.

From the following illustration, you can see that we have put the pixel at (X, Y) location and now

need to decide where to put the next pixel − at N (X+1, Y) or at S (X+1, Y-1).


• If d <= 0, then N(X+1, Y) is chosen as the next pixel; otherwise S(X+1, Y−1) is chosen.

Algorithm

Step 1 − Get the coordinates of the centre of the circle and the radius, and store them in X, Y, and R respectively. Set P = 0 and Q = R.

Step 2 − Set decision parameter D = 3 − 2R.

Step 3 − Repeat through Step 8 while P ≤ Q.

Step 4 − Call Draw Circle (X, Y, P, Q).

Step 5 − Increment the value of P.

Step 6 − If D < 0, then D = D + 4P + 6.

Step 7 − Else set Q = Q − 1 and D = D + 4(P − Q) + 10.

Step 8 − Call Draw Circle (X, Y, P, Q).

Find the midpoint p of the two possible pixels, i.e. (x − 0.5, y + 1). If p lies inside or on the circle perimeter, we plot the pixel (x, y + 1); otherwise, if it is outside, we plot the pixel (x − 1, y + 1).

Boundary condition: whether the midpoint lies inside or outside the circle can be decided by using the formula:

Given a circle centred at (0, 0) with radius r and a point p(x, y),

F(p) = x² + y² − r²

if F(p) < 0, the point is inside the circle;
if F(p) = 0, the point is on the perimeter;
if F(p) > 0, the point is outside the circle.

Example


In our program we denote F(p) by P. The value of P is calculated at the midpoint of the two contending pixels, i.e. (x − 0.5, y + 1). Each pixel is described with a subscript k:

Pk = (xk − 0.5)² + (yk + 1)² − r²

Now, xk+1 = xk or xk − 1, and yk+1 = yk + 1, so

∴ Pk+1 = (xk+1 − 0.5)² + (yk+1 + 1)² − r²
       = (xk+1 − 0.5)² + [(yk + 1) + 1]² − r²
       = (xk+1 − 0.5)² + (yk + 1)² + 2(yk + 1) + 1 − r²
       = (xk+1 − 0.5)² + [− (xk − 0.5)² + (xk − 0.5)²] + (yk + 1)² − r² + 2(yk + 1) + 1
       = Pk + (xk+1 − 0.5)² − (xk − 0.5)² + 2(yk + 1) + 1

Hence:

Pk+1 = Pk + 2(yk + 1) + 1, when Pk ≤ 0, i.e. the midpoint is inside the circle (xk+1 = xk)

Pk+1 = Pk + 2(yk + 1) − 2(xk − 1) + 1, when Pk > 0, i.e. the midpoint is outside the circle (xk+1 = xk − 1)

The first point to be plotted is (r, 0) on the x-axis. The initial value of P is calculated as follows:

P1 = (r − 0.5)² + (0 + 1)² − r²
   = 1.25 − r
   ≈ 1 − r (when rounded off)

Example:

Input: Centre → (0, 0), Radius → 5

Output:

(5, 0) (5, 0) (0, 5) (0, 5)
(5, 1) (-5, 1) (5, -1) (-5, -1)
(1, 5) (-1, 5) (1, -5) (-1, -5)
(5, 2) (-5, 2) (5, -2) (-5, -2)
(2, 5) (-2, 5) (2, -5) (-2, -5)
(4, 3) (-4, 3) (4, -3) (-4, -3)
(3, 4) (-3, 4) (3, -4) (-3, -4)
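The steps above can be sketched in code. This is a minimal illustration (the function name is my own): it walks one octant with the decision parameter and mirrors each point into the other seven octants.

```python
def midpoint_circle(xc, yc, r):
    """Midpoint circle algorithm: walk one octant, mirror into all eight."""
    points = set()
    x, y = r, 0
    p = 1 - r  # initial decision parameter, P1 = 1.25 - r rounded off
    while x >= y:
        # the octant point (x, y), its swap (y, x), and their sign reflections
        for dx, dy in ((x, y), (y, x)):
            points.update({(xc + dx, yc + dy), (xc - dx, yc + dy),
                           (xc + dx, yc - dy), (xc - dx, yc - dy)})
        y += 1
        if p <= 0:
            p += 2 * y + 1          # midpoint inside: keep x
        else:
            x -= 1
            p += 2 * y - 2 * x + 1  # midpoint outside: step x inward
    return points
```

For centre (0, 0) and radius 5 this yields the first-octant pixels (5, 0), (5, 1), (5, 2), (4, 3), (3, 4) plus their reflections, matching the output listed above.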

Question2: Discuss Shear Transformation with suitable example, write Shear transformation

matrix for Shear along X- axis, Y-axis and Generalized Shear. Show that the simultaneous

shearing shxy (a, b), is not same as the shearing in x-direction, shx(a) followed by a shearing

in y-direction, shy(b).

(old coordinates are (x, y) and the new coordinates are (x', y'))

X-Direction Shear is given by the following matrix:

                      (  1  0 0)
(x' y' 1) = (x y 1) * (SHx 1 0)
                      (  0  0 1)

Which produces a shearing along x that is proportional to y:

x' = x + SHx * y
y' = y
1 = 1


Y-Direction Shear is given by the following matrix:

                      (1 SHy 0)
(x' y' 1) = (x y 1) * (0  1  0)
                      (0  0  1)

Which produces a shearing along y that is proportional to x:

x' = x
y' = x * SHy + y
1 = 1

xy-shear about the origin

Let an object point P(x, y) be moved to P'(x', y') as a result of a shear transformation in both the x- and y-directions with shearing factors a and b, respectively:

x' = x + ay
y' = bx + y

where 'ay' and 'bx' are the shear terms in the x and y directions, respectively. The xy-shear is also called simultaneous shearing, or shearing for short.

                    (1 b)
(x', y') = (x, y) * (a 1)    …………………(2)


                          (1 b 0)
(x', y', 1) = (x, y, 1) * (a 1 0)
                          (0 0 1)

That is, P’h = Ph.Shxy(a,b) …………………………….(3)

where Ph and P'h represent the object point, before and after the required transformation, in homogeneous coordinates, and Shxy(a, b) is called the homogeneous transformation matrix for xy-shear in both the x- and y-directions with shearing factors a and b, respectively.

Special case: when we put b = 0 in equation (2), we have shearing in the x-direction, and when a = 0, we have shearing in the y-direction, respectively.
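The required non-commutativity can be checked numerically. Below is a sketch using NumPy with the row-vector convention used above, so matrices compose left to right; the function names are mine:

```python
import numpy as np

def sh_x(a):      # shear in x: x' = x + a*y
    return np.array([[1, 0], [a, 1]], dtype=float)

def sh_y(b):      # shear in y: y' = b*x + y
    return np.array([[1, b], [0, 1]], dtype=float)

def sh_xy(a, b):  # simultaneous xy-shear: x' = x + a*y, y' = b*x + y
    return np.array([[1, b], [a, 1]], dtype=float)

a, b = 2.0, 3.0
composed = sh_x(a) @ sh_y(b)   # shear in x followed by shear in y
# The composition picks up an extra a*b term in the lower-right entry:
# sh_x(a) . sh_y(b) = [[1, b], [a, 1 + a*b]]  !=  sh_xy(a, b) = [[1, b], [a, 1]]
assert not np.allclose(composed, sh_xy(a, b))
```

Since the (2, 2) entry of the product is 1 + ab rather than 1, shx(a) followed by shy(b) is not the same transformation as the simultaneous shear shxy(a, b) unless ab = 0.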

Question3: Explain the scan line polygon filling algorithm with the help of suitable diagram.

Ans. A polygon is an ordered list of vertices, as shown in the following figure. To fill a polygon with a particular colour, you need to determine the pixels falling on the border of the polygon and those which fall inside it. Here we will see how polygons can be filled using different techniques.

This algorithm works by intersecting the scan line with the polygon edges and filling the polygon between pairs of intersections. The following steps depict how the algorithm works.

Step 1 − Find Ymin and Ymax from the given polygon.

Step 2 − The scan line intersects each edge of the polygon as it sweeps from Ymin to Ymax. Name each intersection point of the polygon. As per the figure shown above, they are named p0, p1, p2, p3.

Step 3 − Sort the intersection points in increasing order of the X coordinate, i.e. (p0, p1), (p1, p2), and (p2, p3).

Step 4 − Fill all those pairs of coordinates that are inside the polygon and ignore the alternate pairs.
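The steps above can be sketched as follows. This is a simplified illustration (the function name is mine) using even-odd pairing of intersections, sampling each scan line at the pixel-row centre to sidestep intersections exactly at vertices:

```python
def scanline_fill(vertices):
    """Return {y: [(x_start, x_end), ...]}: the filled spans for each scan line."""
    ys = [y for _, y in vertices]
    ymin, ymax = min(ys), max(ys)
    spans = {}
    n = len(vertices)
    for y in range(ymin, ymax):
        scan = y + 0.5  # sample between pixel rows to avoid vertex double-counting
        xs = []
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            # edge crosses this scan line (horizontal edges never match)
            if (y1 <= scan < y2) or (y2 <= scan < y1):
                xs.append(x1 + (scan - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        # pair the sorted intersections: inside between each (even, odd) pair
        spans[y] = [(xs[i], xs[i + 1]) for i in range(0, len(xs), 2)]
    return spans
```

For an axis-aligned square with corners (0, 0) and (4, 4), every scan line inside it yields the single span (0, 4).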


Question4: What is the role of light in computer graphics? Discuss Lambert's Cosine Law. Explain ambient, diffuse and specular reflection, and give the general mathematical expression of each. Also give the mathematical expression to determine the intensity when all three types of reflection are present.

Ans. Lighting in computer graphics refers to the placement of lights in a scene to achieve some desired effect. Image synthesis and animation packages all contain different types of lights that can be placed in different locations and modified by changing the parameters. Too often, people who are creating images or animations ignore or place little emphasis on lighting. This is unfortunate, since lighting is a very important part of image synthesis. The proper use of lights in a scene is one of the things that differentiates the talented CG people from the untalented. This is not a new topic, as a large amount of work has been done on lighting issues in photography, film, and video. Since image synthesis is trying to emulate reality, we can learn much from this previous work.

Lighting can be used to create more of a 3D effect by separating the foreground from the background, or it can merge the two to create a flat 2D effect. It can be used to set an emotional mood and to influence the viewer.

In optics ( Physics ), Lambert's cosine law states that the radiant intensity or luminous intensity

observed from an ideal diffusely reflecting surface is directly proportional to the cosine of the angle

θ formed between the direction of the incident light and the surface normal.

It states that when light falls obliquely on a surface, the illumination of the surface is directly proportional to the cosine of the angle θ between the direction of the incident light and the surface normal; that is, I = Ii cos θ = Ii (N · L) for unit vectors N and L. The law is also known as the cosine emission law or Lambert's emission law. It is used to find the illumination of a surface when light falls on the surface along an oblique direction.

When light strikes a surface, some of it will be reflected. Exactly how it reflects depends in a

complicated way on the nature of the surface, what I am calling the material properties of the

surface. In OpenGL (and in many other computer graphics systems), the complexity is approximated

by two general types of reflection, specular reflection and diffuse reflection.


In perfect specular ("mirror-like") reflection, an incoming ray of light is reflected from the surface

intact. The reflected ray makes the same angle with the surface as the incoming ray. A viewer can

see the reflected ray only if the viewer is in exactly the right position, somewhere along the path of

the reflected ray. Even if the entire surface is illuminated by the light source, the viewer will only

see the reflection of the light source at those points on the surface where the geometry is right.

Such reflections are referred to as specular highlights. In practice, we think of a ray of light as being

reflected not as a single perfect ray, but as a cone of light, which can be more or less narrow.
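When ambient, diffuse, and specular reflection are all present, the intensity is commonly modelled as I = ka·Ia + Ii·[kd·(N·L) + ks·(R·V)^n]. A minimal sketch of this combined expression (the parameter names are mine):

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(ka, kd, ks, shininess, Ia, Ii, N, L, V):
    """I = ka*Ia + Ii*(kd*(N.L) + ks*(R.V)^shininess)."""
    N, L, V = normalize(N), normalize(L), normalize(V)
    nl = max(dot(N, L), 0.0)                         # Lambert cosine term
    R = tuple(2 * nl * n - l for n, l in zip(N, L))  # reflection of L about N
    rv = max(dot(R, V), 0.0)
    return ka * Ia + Ii * (kd * nl + ks * rv ** shininess)
```

With the light, normal, and viewer all aligned, the result is simply ka·Ia + Ii·(kd + ks); tilting the light away reduces the diffuse term by the cosine of the tilt, exactly as Lambert's law states.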

Question5: What is frame buffer? How it is different from the display buffer? How a frame

buffer is used for putting colour and controlling intensity of any display device?

Ans. A frame buffer is a large, contiguous piece of computer memory. At a minimum there is one memory bit for each pixel in the raster; this amount of memory is called a bit plane. The picture is built up in the frame buffer one bit at a time.

You know that a memory bit has only two states; therefore a single bit plane yields a black-and-white display. You know that a frame buffer is a digital device, while the CRT is an analog device. Therefore, a conversion from a digital representation to an analog signal must take place when information is read from the frame buffer and displayed on the raster CRT graphics device. For this you can use a digital-to-analog converter (DAC). Each pixel in the frame buffer must be accessed and converted before it is visible on the raster CRT.

N-bit colour Frame buffer

Color or gray scales are incorporated into a frame buffer raster graphics device by using additional bit planes. The intensity of each pixel on the CRT is controlled by a corresponding pixel location in each of the N bit planes. The binary value from each of the N bit planes is loaded into corresponding positions in a register. The resulting binary number is interpreted as an intensity level between 0 (dark) and 2^N − 1 (full intensity).

This is converted into an analog voltage between 0 and the maximum voltage of the electron gun by the DAC. A total of 2^N intensity levels are possible. The figure given below illustrates a system with 3 bit planes for a total of 8 (2³) intensity levels. Each bit plane requires the full complement of memory for a given raster resolution; e.g., a 3-bit-plane frame buffer for a 1024 × 1024 raster requires 3,145,728 (3 × 1024 × 1024) memory bits.

An increase in the number of available intensity levels is achieved for a modest increase in required memory by using a lookup table. Upon reading the bit planes in the frame buffer, the resulting number is used as an index into the lookup table. The lookup table must contain 2^N entries. Each entry in the lookup table is W bits wide, and W may be greater than N. When this occurs, 2^W intensities are available, but only 2^N different intensities are available at one time. To get additional intensities, the lookup table must be changed.
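The bit-plane and lookup-table indexing can be illustrated as follows (a sketch with assumed sizes N = 3 bit planes and W = 8-bit-wide entries; the names are mine):

```python
# N = 3 bit planes -> 2^N = 8 table entries; each entry W = 8 bits wide,
# so intensities 0..255 are available but only 8 at a time.
N, W = 3, 8
lut = [round(i * (2**W - 1) / (2**N - 1)) for i in range(2**N)]

def pixel_intensity(bitplane_bits):
    """bitplane_bits: tuple of N bits read from the bit planes (MSB first)."""
    index = 0
    for b in bitplane_bits:
        index = (index << 1) | b   # assemble the N-bit index
    return lut[index]              # W-bit value sent to the DAC
```

Reading bits (1, 0, 0) from the three planes forms index 4 and returns the fifth of the eight currently loaded intensities; changing the table contents swaps in a different set of 8 out of the 256 possible levels.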


Because there are three primary colours, a simple color frame buffer is implemented with three bit

planes, one for each primary color. Each bit plane drives an individual color gun for each of the

three primary colors used in color video. These three primaries (red, green, and blue) are combined

at the CRT to yield eight colors.

Question6: Discuss the taxonomy of projection with a suitable diagram. How does perspective projection differ from parallel projection? Derive a transformation matrix for a perspective projection of a point P(x, y, z) onto the x = 4 plane as viewed from E(6, 0, 0).

Ans. In the 2D system, we use only two coordinates X and Y but in 3D, an extra coordinate Z is

added. 3D graphics techniques and their application are fundamental to the entertainment, games,

and computer-aided design industries. It is a continuing area of research in scientific visualization.

Furthermore, 3D graphics components are now a part of almost every personal computer and,

although traditionally intended for graphics-intensive software such as games, they are increasingly

being used by other applications.


Drawing is a visual art that has been used by man for self-expression throughout history. It uses

pencils, pens, colored pencils, charcoal, pastels, markers, and ink brushes to mark different types of

medium such as canvas, wood, plastic, and paper.

It involves the portrayal of objects on a flat surface such as the case in drawing on a piece of paper or

a canvas and involves several methods and materials. It is the most common and easiest way of

recreating objects and scenes on a two-dimensional medium.

Perspective projection is seeing things larger when they’re up close and smaller at a distance. It is a

three-dimensional projection of objects on a two-dimensional medium such as paper. It allows an

artist to produce a visual reproduction of an object which resembles the real one.

Parallel projection, on the other hand, resembles seeing objects which are located far from the viewer

through a telescope. It works by making light rays entering the eyes parallel, thus, doing away with

the effect of depth in the drawing. Objects produced using parallel projection do not appear larger

when they are near or smaller when they are far. It is very useful in architecture. However, when

measurements are involved, perspective projection is best.

Solution: Plane of projection: x = 4 (given).

Let P(x, y, z) be any point in space. We know that the parametric equation of a line AB, starting from A and passing through B, is

P(t) = A + t · (B − A), 0 < t < ∞

so that the parametric equation of a line starting from E(6, 0, 0) and passing through P(x, y, z) is:

E + t · (P − E), 0 < t < ∞
= (6, 0, 0) + t · [(x, y, z) − (6, 0, 0)]
= (6, 0, 0) + [t · (x − 6), t · y, t · z]
= [t · (x − 6) + 6, t · y, t · z]

Assume point P' is obtained when t = t*:

∴ P' = (x', y', z') = [t* · (x − 6) + 6, t* · y, t* · z]

Since P' lies on the x = 4 plane,

t* · (x − 6) + 6 = 4 must be true, so t* = −2 / (x − 6)


Substituting t* into P':

P' = (t* · (x − 6) + 6, t* · y, t* · z)
   = (4, −2y/(x − 6), −2z/(x − 6))
   = ((4x − 24)/(x − 6), −2y/(x − 6), −2z/(x − 6))

In the homogeneous coordinate system,

P' = (x', y', z', 1) = ((4x − 24)/(x − 6), −2y/(x − 6), −2z/(x − 6), 1)
   = (4x − 24, −2y, −2z, x − 6) ………………..(1)

                 |  4    0    0    1 |
= (x, y, z, 1) · |  0   −2    0    0 |  ………………..(2)
                 |  0    0   −2    0 |
                 | −24   0    0   −6 |

Thus, equation (2) is the required transformation matrix for the perspective view from E(6, 0, 0).
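The matrix can be verified numerically. The sketch below (matrix and function names are mine) projects a sample point and checks that it lands on the x = 4 plane:

```python
import numpy as np

# Homogeneous row-vector form of the perspective projection onto x = 4 from E(6, 0, 0)
M = np.array([[  4,  0,  0,  1],
              [  0, -2,  0,  0],
              [  0,  0, -2,  0],
              [-24,  0,  0, -6]], dtype=float)

def project(p):
    x, y, z, w = np.array([*p, 1.0]) @ M
    return (x / w, y / w, z / w)   # divide through by the homogeneous coordinate

# Example: for P(0, 2, 2), t* = -2/(0 - 6) = 1/3, so P' should be (4, 2/3, 2/3)
```

Every projected point has x' = 4, confirming that the image lies on the plane of projection.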

Question7: Write Bresenham line drawing algorithm and DDA algorithm? Compare both

algorithms and identify which one is better and why? Draw a line segment joining (4, 8) and (8,

10) using both algorithms i.e. Bresenham line drawing algorithm and DDA algorithm.

Ans. Comparison of the DDA and Bresenham line drawing algorithms:

• Arithmetic − The DDA algorithm uses floating-point, i.e. real, arithmetic; Bresenham's algorithm uses fixed-point, i.e. integer, arithmetic.

• Operations − The DDA algorithm uses multiplication and division in its operations; Bresenham's algorithm uses only subtraction and addition.

• Speed − The DDA algorithm is slower than Bresenham's algorithm in line drawing because it uses real (floating-point) arithmetic; Bresenham's algorithm performs only integer additions and subtractions, so it runs significantly faster.

• Accuracy & efficiency − The DDA algorithm is not as accurate and efficient as Bresenham's algorithm.

• Drawing − The DDA algorithm can draw circles and curves, but not as accurately as Bresenham's algorithm.

• Round off − The DDA algorithm rounds off the coordinates to the integer nearest to the line; Bresenham's algorithm does not round off but takes the incremental value in its operation.

• Cost − The DDA algorithm uses an enormous number of floating-point multiplications, so it is expensive; Bresenham's algorithm is less expensive, as it uses only addition and subtraction.


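Both algorithms can be sketched for the segment joining (4, 8) and (8, 10). This is a minimal illustration (function names are mine); since the slope is 1/2, the integer Bresenham form below assumes 0 ≤ slope ≤ 1 and x1 < x2:

```python
def dda_line(x1, y1, x2, y2):
    """DDA: step in equal floating-point increments and round each sample."""
    steps = max(abs(x2 - x1), abs(y2 - y1))
    xinc, yinc = (x2 - x1) / steps, (y2 - y1) / steps
    x, y = float(x1), float(y1)
    pts = []
    for _ in range(steps + 1):
        pts.append((int(x + 0.5), int(y + 0.5)))  # round half up
        x += xinc
        y += yinc
    return pts

def bresenham_line(x1, y1, x2, y2):
    """Bresenham: integer-only decision parameter, 0 <= slope <= 1 assumed."""
    dx, dy = x2 - x1, y2 - y1
    p = 2 * dy - dx          # initial decision parameter
    y = y1
    pts = []
    for x in range(x1, x2 + 1):
        pts.append((x, y))
        if p < 0:
            p += 2 * dy                 # keep y
        else:
            y += 1
            p += 2 * dy - 2 * dx        # step y up
    return pts
```

Both produce the pixels (4, 8), (5, 9), (6, 9), (7, 10), (8, 10); Bresenham reaches the same result using integer arithmetic only, which is the point of the comparison above.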


Question8: Explain the Bezier Curve. How do Bezier curves contribute to Bezier surfaces? Prove the following properties of the Bezier curve:

(i) P(u = 1) = Pn    (ii) P'(0) = n (P1 − P0)

Given four control points P0(2, 2), P1(3, 4), P2(5, 4) and P3(4, 2) as vertices of a Bezier curve, determine four points of the Bezier curve.

Ans. Bezier Curves

The Bezier curve was discovered by the French engineer Pierre Bézier. These curves can be generated under the control of other points. Approximate tangents formed by the control points are used to generate the curve. The Bezier curve can be represented mathematically as

P(t) = Σ (i = 0 to n) Pi Bi,n(t)

where Pi is the set of control points and Bi,n(t) represents the Bernstein polynomials, which are given by

Bi,n(t) = C(n, i) (1 − t)^(n−i) t^i

where n is the polynomial degree, i is the index, and t is the parameter.

The simplest Bézier curve is the straight line from the point P0 to P1. A quadratic Bezier curve is determined by three control points. A cubic Bezier curve is determined by four control points.

Bezier curves have the following properties −

• They generally follow the shape of the control polygon, which consists of

the segments joining the control points.

• They always pass through the first and last control points.

• They are contained in the convex hull of their defining control points.

• The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore, for 4 control points, the degree of the polynomial is 3, i.e. a cubic polynomial.

• A Bezier curve generally follows the shape of the defining polygon.

• The direction of the tangent vector at the end points is same as that of

the vector determined by first and last segments.

• The convex hull property for a Bezier curve ensures that the polynomial

smoothly follows the control points.

• No straight line intersects a Bezier curve more times than it intersects its

control polygon.

• They are invariant under an affine transformation.
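For the given control points P0(2, 2), P1(3, 4), P2(5, 4), P3(4, 2), four points on the cubic Bezier curve can be computed at t = 0, 1/3, 2/3, 1. The sketch below (function name mine) also spot-checks the two properties P(1) = Pn and P'(0) = n(P1 − P0) numerically:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3."""
    s = 1 - t
    coef = (s**3, 3 * s * s * t, 3 * s * t * t, t**3)
    return tuple(sum(c * p[i] for c, p in zip(coef, (p0, p1, p2, p3)))
                 for i in range(2))

P0, P1, P2, P3 = (2, 2), (3, 4), (5, 4), (4, 2)
pts = [cubic_bezier(P0, P1, P2, P3, t) for t in (0, 1/3, 2/3, 1)]
# pts[0] = (2, 2) = P0 and pts[3] = (4, 2) = P3: the curve passes through the
# first and last control points, i.e. P(u = 1) = Pn.

# Property (ii): P'(0) = n (P1 - P0) = 3 * (1, 2) = (3, 6), checked here by a
# finite-difference approximation of the derivative at t = 0.
h = 1e-7
d0 = tuple((b - a) / h for a, b in zip(cubic_bezier(P0, P1, P2, P3, 0),
                                       cubic_bezier(P0, P1, P2, P3, h)))
```

The four curve points come out as (2, 2), (86/27, 10/3), (112/27, 10/3) and (4, 2).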


Question9: What are homogeneous coordinates, and how are they different from the Euclidean coordinate system? Consider the square ABCD with vertices A(0, 0), B(0, 2), C(2, 0), D(2, 2). Perform a composite transformation of the square by performing the following steps (give the coordinates of the square at each step):

(i) Scale by using sx = 2 and sy = 3
(ii) Rotate by 45° in the anticlockwise direction
(iii) Translate by using tx = 3 and ty = 5

Ans. Two-dimensional coordinates are represented using three-element vectors, and a transformation operation is represented by a 3 × 3 matrix.

On the basis of the matrix product of the individual transformations, we can set up a matrix for any sequence of transformations, known as the composite transformation matrix. For the row-matrix representation we form composite transformations by multiplying matrices in order from left to right, whereas in the column-matrix representation we form composite transformations by multiplying matrices in order from right to left.
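Applied to the square A(0, 0), B(0, 2), C(2, 0), D(2, 2) with the steps from the question (scale sx = 2, sy = 3; rotate 45° anticlockwise; translate tx = 3, ty = 5), the row-vector composition reads left to right. A sketch (function names mine):

```python
import math
import numpy as np

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotate(deg):  # anticlockwise, row-vector convention
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def translate(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1]], dtype=float)

# A, B, C, D in homogeneous row-vector form
square = np.array([[0, 0, 1], [0, 2, 1], [2, 0, 1], [2, 2, 1]], dtype=float)

M = scale(2, 3) @ rotate(45) @ translate(3, 5)   # composite, applied left to right
result = square @ M
# A(0,0) -> (3, 5); C(2,0) -> (2*sqrt(2) + 3, 2*sqrt(2) + 5), and so on.
```

Printing the intermediate products square @ scale(2, 3) and square @ scale(2, 3) @ rotate(45) gives the coordinates of the square after each step, as the question asks.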

A perspective transformation is a transformation from one three-dimensional space into another three-dimensional space. In contrast to the parallel transformation, in perspective transformations parallel lines converge, object size is reduced with increasing distance from the center of projection, and non-uniform foreshortening of lines in the object occurs as a function of the orientation and distance of the object from the center of projection. All of these effects aid the depth perception of the human visual system, but the shape of the object is not preserved. Perspective drawings are characterized by perspective foreshortening and vanishing points. Perspective foreshortening is the illusion that objects and lengths appear smaller as their distance from the center of projection increases. The illusion that certain sets of parallel lines appear to meet at a point is another feature of perspective drawings. These points are called vanishing points. Principal vanishing points are formed by the apparent intersection of lines parallel to one of the three x, y or z axes. The number of principal vanishing points is determined by the number of principal axes intersected by the view plane.

Perspective Anomalies

1. Perspective foreshortening − The farther an object is from the center of projection, the smaller it appears.

2. Vanishing points − Projections of lines that are not parallel to the view plane (i.e. lines that are not perpendicular to the view plane normal) appear to meet at some point on the view plane. This point is called the vanishing point. A vanishing point corresponds to every set of parallel lines. Vanishing points corresponding to the three principal directions are referred to as "principal vanishing points" (PVPs). We can thus have at most three PVPs. If one or more of these are at infinity (that is, parallel lines in that direction continue to appear parallel on the projection plane), we get a 1- or 2-PVP perspective projection.



Question10: Derive the 2D transformation matrix for reflection about the line y = c, where c is a constant. Use this transformation matrix to reflect the triangle A(0, 0), B(1, 1), C(2, 0) about the line y = 2.

Ans.
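The worked answer is not preserved here. A sketch under the assumption that the line is y = c: reflection maps (x, y) to (x, 2c − y), which in homogeneous row-vector form is the matrix [[1, 0, 0], [0, −1, 0], [0, 2c, 1]]. Applied to the triangle about y = 2 (function name mine):

```python
import numpy as np

def reflect_about_y_eq(c):
    """Reflection about the horizontal line y = c: (x, y) -> (x, 2c - y)."""
    return np.array([[1,     0, 0],
                     [0,    -1, 0],
                     [0, 2 * c, 1]], dtype=float)

# A, B, C in homogeneous row-vector form
triangle = np.array([[0, 0, 1], [1, 1, 1], [2, 0, 1]], dtype=float)
reflected = triangle @ reflect_about_y_eq(2)
# A(0, 0) -> (0, 4), B(1, 1) -> (1, 3), C(2, 0) -> (2, 4)
```

This matrix is the composition translate(0, −c) · reflect-about-x-axis · translate(0, c), which is the standard way such a reflection is derived.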


Question11: Why is shading required in computer graphics? Briefly discuss the role of interpolation techniques in shading. Compare intensity interpolation and normal interpolation. Which interpolation technique contributes to which type of shading? Which shading technique is better, Phong shading or Gouraud shading, and why?

Ans. Shading is used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas. There are various techniques of shading, including cross-hatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears. Likewise, the farther apart the lines are, the lighter the area appears.

Light patterns, such as objects having light and shaded areas, help when creating the illusion of depth on paper.

Powder shading is a sketching shading method. In this style, stumping powder and paper stumps are used to draw a picture. This can be in colour. The stumping powder is smooth and doesn't have any shiny particles. The poster created with powder shading looks more beautiful than the original. The paper to be used should have small grains on it so that the powder remains on the paper.

Interpolation techniques

When calculating the brightness of a surface during rendering, our illumination model requires that we know the surface normal. However, a 3D model is usually described by a polygon mesh, which may only store the surface normal at a limited number of points, usually either in the vertices, in the polygon faces, or in both. To get around this problem, one of a number of interpolation techniques can be used.

Flat shading

Here, a color is calculated for one point on each polygon (usually for the first vertex in the polygon, but

sometimes for the centroid for triangle meshes), based on the polygon's surface normal and on the

assumption that all polygons are flat. The color everywhere else is then interpolated by coloring all

points on a polygon the same as the point for which the color was calculated, giving each polygon a

uniform color (similar to nearest-neighbor interpolation). It is usually used for high-speed rendering

where more advanced shading techniques are too computationally expensive. As a result of flat shading

all of the polygon's vertices are colored with one color, allowing differentiation between adjacent

polygons. Specular highlights are rendered poorly with flat shading: If there happens to be a large

specular component at the representative vertex, that brightness is drawn uniformly over the entire

face. If a specular highlight doesn’t fall on the representative point, it is missed entirely. Consequently,

the specular reflection component is usually not included in flat shading computation.

Smooth shading

In contrast to flat shading, where the colors change discontinuously at polygon borders, with smooth shading the color changes from pixel to pixel, resulting in a smooth color transition between two adjacent polygons. Usually, values are first calculated in the vertices, and bilinear interpolation is then used to calculate the values of the pixels between the vertices of the polygons.

Types of smooth shading include:


• Gouraud shading

• Phong shading

Gouraud shading

• Determine the normal at each polygon vertex.

• Apply an illumination model to each vertex to calculate the light intensity from the vertex

normal.

• Interpolate the vertex intensities using bilinear interpolation over the surface polygon.

Data structures

• Sometimes vertex normals can be computed directly (e.g. height field with uniform mesh)

• More generally, need data structure for mesh

• Key: which polygons meet at each vertex.

Advantages

• Polygons, more complex than triangles, can also have different colors specified for each vertex.

In these instances, the underlying logic for shading can become more intricate.

Problems

• Even the smoothness introduced by Gouraud shading may not prevent the appearance of the

shading differences between adjacent polygons.

• Gouraud shading is more CPU intensive and can become a problem when rendering real time

environments with many polygons.

• T-Junctions with adjoining polygons can sometimes result in visual anomalies. In general, T-

Junctions should be avoided.

Phong shading

Phong shading is similar to Gouraud shading, except that instead of interpolating the light intensities,

the normals are interpolated between the vertices. Thus, the specular highlights are computed much

more precisely than in the Gouraud shading model:

1. Compute a normal N for each vertex of the polygon.

2. From bilinear interpolation compute a normal, N, for each pixel. (This must be renormalized

each time.)

3. Apply an illumination model to each pixel to calculate the light intensity from N.
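The difference between interpolating intensities (Gouraud) and interpolating normals (Phong) shows up at a highlight that falls between two vertices. A small sketch, diffuse term only, with vectors and names of my own choosing:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def lambert(n, light):
    """Diffuse (Lambert) term for a unit light direction; renormalizes n."""
    return max(sum(a * b for a, b in zip(normalize(n), light)), 0.0)

light = (0.0, 0.0, 1.0)
n1, n2 = (1.0, 0.0, 1.0), (-1.0, 0.0, 1.0)   # vertex normals tilted apart
t = 0.5                                       # midpoint of the scan line

# Gouraud: compute intensity at the vertices, then interpolate the intensities
gouraud = (1 - t) * lambert(n1, light) + t * lambert(n2, light)

# Phong: interpolate (and renormalize) the normal, then apply the model per pixel
n_mid = tuple((1 - t) * a + t * b for a, b in zip(n1, n2))
phong = lambert(n_mid, light)
# Here phong = 1.0 (the highlight facing the light) while gouraud stays near
# 0.707: Gouraud misses highlights that fall inside a polygon.
```

This is exactly why specular highlights render poorly under intensity interpolation but correctly under normal interpolation.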

Other approaches

Both Gouraud shading and Phong shading can be implemented using bilinear interpolation. Bishop and

Weimer proposed to use a Taylor series expansion of the resulting expression from applying

an illumination model and bilinear interpolation of the normals. Hence, second degree polynomial

interpolation was used. This type of biquadratic interpolation was further elaborated by Barrera et

al., where one second order polynomial was used to interpolate the diffuse light of the Phong reflection

model and another second order polynomial was used for the specular light.

Spherical Linear Interpolation (Slerp) was used by Kuij and Blake for computing both the normal over the

polygon as well as the vector in the direction to the light source. A similar approach was proposed by

Hast, which uses Quaternion interpolation of the normals with the advantage that the normal will

always have unit length and the computationally heavy normalization is avoided.


Intensity interpolation


Normal interpolation


Phong Shading: Phong Shading overcomes some of the disadvantages of Gouraud Shading

and specular reflection can be successfully incorporated in the scheme. The first stage in the

process is the same as for the Gouraud Shading - for any polygon we evaluate the vertex

normals. For each scan line in the polygon we evaluate, by linear interpolation, the normal vectors at the end of each line. These two vectors Na and Nb are then used to interpolate Ns. We thus

derive a normal vector for each point or pixel on the polygon that is an approximation to the real

normal on the curved surface approximated by the polygon. Ns , the interpolated normal vector,

is then used in the intensity calculation. The vector interpolation tends to restore the curvature of

the original surface that has been approximated by a polygon mesh. We have :

(2.5)

These are vector equations that would each be implemented as a set of three equations, one for

each of the components of the vectors in world space. This makes the Phong Shading

interpolation phase three times as expensive as Gouraud Shading. In addition there is an

application of the Phong model intensity equation at every pixel. The incremental computation is

also used for the intensity interpolation:

(2.6)

for (xx = x1; xx < x2; xx++)
{
    if (z < CScene.zBuf[offset])        /* depth test */
    {
        CScene.zBuf[offset] = z;
        pt = face.findPtInWC(u, v);     /* point on the face in world coordinates */
        /* ... evaluate the Phong intensity equation at pt using normal n1 ... */
    }
    u += deltaU;                        /* incremental updates along the scan line */
    z += deltaZ;
    p1.add(deltaPt);
    n1.add(deltaN);
}

So in Phong Shading the attribute interpolated are the vertex

normals, rather than vertex intensities. Interpolation of normal

allows highlights smaller than a polygon.

Gouraud Shading : In Gouraud Shading, the intensity at each vertex of the polygon is first

calculated by applying equation 1.7. The normal N used in this equation is the vertex normal which is

calculated as the average of the normals of the polygons that share the vertex. This is an important feature

of the Gouraud Shading and the vertex normal is an approximation to the true normal of the surface at

that point. The intensities at the ends of each scan line are calculated from the vertex intensities, and the intensities along a scan line are interpolated from these. The interpolation equations are as follows:

Is = (1 - t) * Ia + t * Ib,  where t = (xs - xa) / (xb - xa)        (2.2)

where Ia and Ib are the intensities at the two ends of the scan line, themselves interpolated from the vertex intensities.

For computational efficiency these equations are often implemented as incremental calculations. The

intensity of one pixel can be calculated from the previous pixel according to the increment of intensity:

deltaI = (I2 - I1) / (x2 - x1),  I(x+1) = I(x) + deltaI        (2.3)

deltaI = (i2 - i1) / (x2 - x1);
for (xx = x1; xx < x2; xx++)
{
    if (z < CScene.zBuf[offset])
    {
        CScene.zBuf[offset] = z;
        CScene.frameBuf[offset] = i1;
    }
    z += deltaZ;
    i1 += deltaI;
}

Where CScene.zBuf is the data structure that stores the depth of each pixel for hidden-surface removal (I will discuss this later), and CScene.frameBuf is the buffer that stores the pixel value. The above code is the implementation for one active scan line. In Gouraud Shading, anomalies can appear in animated sequences because the intensity interpolation is carried out in screen coordinates from vertex normals calculated in world coordinates. No highlight smaller than a polygon can be produced.
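The incremental scan-line interpolation above can be sketched as a small, self-contained Python function; the function name and the sample endpoint intensities are my own assumptions for illustration.

```python
def gouraud_scanline(i1, i2, x1, x2):
    """Interpolated intensity at each pixel of one scan line,
    using the incremental form i(x+1) = i(x) + deltaI."""
    deltaI = (i2 - i1) / (x2 - x1)
    intensities = []
    i = i1
    for x in range(x1, x2):
        intensities.append(i)
        i += deltaI          # one addition per pixel, no re-interpolation
    return intensities

row = gouraud_scanline(0.2, 1.0, 0, 8)   # intensity ramps up from 0.2
```

The incremental form replaces a full interpolation per pixel with a single addition, which is why it is preferred in the inner loop.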

Question12: Write Z-Buffer Algorithm for hidden surface detection. Explain how this

algorithm is applied to determine the hidden surfaces.

Ans.

When viewing a picture containing non-transparent objects and surfaces, it is not possible to see the objects that lie behind other objects closer to the eye. To get a realistic screen image, removal of these hidden surfaces is a must. The identification and removal of these surfaces is called the hidden-surface problem.

Z-buffer, which is also known as the depth-buffer method, is one of the most commonly used methods for hidden surface detection. It is an image-space method. Image-space methods are based on the pixels to be drawn in 2D. For these methods, the running time complexity is the number of pixels times the number of objects, and the space complexity is two times the number of pixels, because two arrays of pixels are required: one for the frame buffer and the other for the depth buffer.

The Z-buffer method compares surface depths at each pixel position on the projection plane. Normally the z-axis is represented as the depth. The algorithm for the Z-buffer method is given below:

Algorithm:

Initialize the depth of each pixel,
i.e., d(i, j) = infinity (maximum depth)
Initialize the color value for each pixel
as c(i, j) = background color
for each polygon, do the following steps:
{
    find the depth, i.e., z of the polygon
    at (x, y) corresponding to pixel (i, j)
    if (z < d(i, j))
    {
        d(i, j) = z;
        c(i, j) = color;
    }
}
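The algorithm above can be sketched as runnable Python. For simplicity, this assumed version uses axis-aligned patches of constant depth instead of general polygons; the function and variable names are my own.

```python
INF = float('inf')

def zbuffer(width, height, polygons, background=' '):
    """polygons: list of (x0, y0, x1, y1, depth, color) axis-aligned
    patches, each with a constant depth for simplicity."""
    d = [[INF] * width for _ in range(height)]         # depth buffer
    c = [[background] * width for _ in range(height)]  # frame buffer
    for (x0, y0, x1, y1, z, color) in polygons:
        for j in range(y0, y1):
            for i in range(x0, x1):
                if z < d[j][i]:        # the closer surface wins the pixel
                    d[j][i] = z
                    c[j][i] = color
    return c

# Two overlapping patches; the nearer one (depth 1) covers the overlap.
frame = zbuffer(4, 4, [(0, 0, 3, 3, 3, 'A'), (1, 1, 4, 4, 1, 'B')])
```

Note that the processing order of the patches does not matter: swapping the two entries in the list produces the same frame, which is exactly the property the depth test provides.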

Let’s consider an example to understand the algorithm in a better way. Assume the polygon given is as below:

As the z value, i.e., the depth value, at every place in the given polygon is 3, on applying the algorithm the result is:


Now, the z values generated for the pixels will be different, as shown below:

Therefore, in the Z buffer method, each surface is processed separately one position at a time across the

surface. After that the depth values i.e, the z values for a pixel are compared and the closest i.e,

(smallest z) surface determines the color to be displayed in frame buffer. The z values, i.e, the depth

values are usually normalized to the range [0, 1]. When the z = 0, it is known as Back Clipping Pane and

when z = 1, it is called as the Front Clipping Pane.

In this method, 2 buffers are used :

1. Frame buffer

2. Depth buffer

Calculation of depth :

As we know, the equation of the plane is:

ax + by + cz + d = 0, which implies z = -(ax + by + d) / c

Calculation of each depth could be very expensive, but the computation can be reduced to a single add per pixel by using an incremental method:

AX + BY + CZ + D = 0 implies that, moving one pixel along the scan line, Z(x+1, y) = Z(x, y) - A/C

Hence, calculation of depth can be done by recording the plane equation of each polygon in the

(normalized) viewing coordinate system and then using the incremental method to find the depth Z.
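A small numeric check of this incremental method, with plane coefficients chosen arbitrarily for illustration:

```python
def depth(A, B, C, D, x, y):
    """Depth from the plane equation Ax + By + Cz + D = 0 (requires C != 0)."""
    return -(A * x + B * y + D) / C

A, B, C, D = 2.0, 1.0, 4.0, -8.0
z = depth(A, B, C, D, 0, 0)   # depth evaluated directly at pixel (0, 0)
for x in range(1, 5):
    z -= A / C                # incremental: one subtraction per pixel
# z now equals the directly computed depth at (4, 0)
```

Four subtractions reproduce the directly evaluated depth, which is the "single add per pixel" saving described above.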

So, to summarize, it can be said that this approach compares surface depths at each pixel position on

the projection plane. Object depth is usually measured from the view plane along the z-axis of a viewing

system.

Example :


Let S1, S2, S3 be the surfaces. The surface closest to the projection plane is called the visible surface. The computer would start (arbitrarily) with surface S1 and put its value into the buffer. It would do the same for the next surface. It would then check each overlapping pixel to see which surface is closer to the viewer and then display the appropriate color. As at view-plane position (x, y) surface S1 has the smallest depth from the view plane, it is visible at that position.

Question13: What is animation? How it is different from Graphics? Explain how acceleration is

simulated in animation? Discuss all the cases i.e. zero acceleration, Positive acceleration,

Negative acceleration and combination of positive and negative acceleration.

Ans.

Animation

Many Web pages use animation, which is the appearance of motion created by displaying a series of still

images in sequence. Animation can make Web pages more visually interesting or draw attention to

important information or links. You can create animations using any of a variety of animation software packages. A simple animation can be an animated GIF file, while a complex animation can be the face of a human or an alien in a movie or game.

Graphics

A graphic, or graphical image, is a digital representation of non-text information such as a drawing,

chart, or photo. Many Web pages use colorful graphical designs and images to convey messages. Of the

graphics formats that exist on the Web, the two more common are JPEG and GIF formats. JPEG

(pronounced JAY-peg) is a format that compresses graphics to reduce their file size, which means the file

takes up less storage space.

The goal with JPEG graphics is to reach a balance between image quality and file size. Digital photos

often use the JPEG format. GIF (pronounced jiff) graphics also use compression techniques to reduce file

sizes. The GIF format works best for images that have only a few distinct colors, such as company logos.

Some Web sites use thumb nails on their pages because graphics can be time-consuming to display. A


thumbnail is a small version of a larger graphic. You usually can click a thumbnail to display a larger

image.

Acceleration is the change in velocity per unit of time. An object is in a state of acceleration if it shows any of these three changes: first, if it changes its speed, which changes the magnitude of the velocity; second, if it changes its direction; and third, when it changes both. The acceleration of a moving object can be negative or positive. Acceleration is positive when it acts in the direction of motion of the object. Negative acceleration can be of two types: first, when an object in motion slows down and the acceleration acts opposite to the direction of motion; second, when the acceleration acts along the (negative) direction of the velocity and the object increases its speed in that direction. Let’s discuss negative acceleration, its graph representation, and some more examples based on it.

Positive and Negative Acceleration

Positive Acceleration :

If the velocity of an object increases, then the object is said to be moving with positive acceleration.

Example:

1. A ball rolling down on an inclined plane.

2. When you are driving and you find the road clear, you increase the speed of your car to save time. This is called positive acceleration.

In other words, positive acceleration means increasing speed within a time interval, usually a very short interval of time.

Negative Acceleration :

If the velocity of an object decreases, then the object is said to be moving with negative

acceleration. Negative acceleration is also known as retardation or deceleration.

Example:

1. A ball moving up an inclined plane.

2. A ball thrown vertically upwards is moving with a negative acceleration as the velocity decreases

with time.
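In animation, acceleration is simulated by the spacing of the in-between frames: equal spacing for zero acceleration, growing spacing for positive acceleration, and shrinking spacing for negative acceleration. A hypothetical sketch using the standard kinematics formula (all names and sample values are my own):

```python
def frame_positions(x0, v0, a, frames, dt=1.0):
    """Positions of an animated object at successive frames under
    constant acceleration a: x = x0 + v0*t + 0.5*a*t**2."""
    return [x0 + v0 * (t * dt) + 0.5 * a * (t * dt) ** 2 for t in range(frames)]

uniform = frame_positions(0, 2, 0, 5)    # zero acceleration: equal spacing
speedup = frame_positions(0, 0, 1, 5)    # positive: spacing grows (ease-in)
slowdown = frame_positions(0, 4, -1, 5)  # negative: spacing shrinks (ease-out)
```

A combination of positive and negative acceleration is produced by concatenating a speed-up segment with a slow-down segment, giving the familiar ease-in/ease-out motion.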


Question14: What is windowing transformation? Discuss the real life example where you can

apply the windowing transformation? Explain the concept of window to view port

transformation with the help of suitable diagram and calculations.

Ans. Window:

1. A world-coordinate area selected for display is called a window.

2. In computer graphics, a window is a graphical control element.

3. It consists of a visual area containing some of the graphical user interface of the program it

belongs to and is framed by a window decoration.

4. A window defines a rectangular area in world coordinates. You define a window with a

GWINDOW statement. You can define the window to be larger than, the same size as, or smaller

than the actual range of data values, depending on whether you want to show all of the data or only

part of the data.

Viewport:

1. An area on a display device to which a window is mapped is called a viewport.

2. A viewport is a polygon viewing region in computer graphics. The viewport is an area expressed

in rendering-device-specific coordinates, e.g. pixels for screen coordinates, in which the objects of

interest are going to be rendered.

3. A viewport defines in normalized coordinates a rectangular area on the display device where the

image of the data appears. You define a viewport with the GPORT command.

You can have your graph take up the entire display device or show it in only a portion, say the upper-

right part.

Window to viewport transformation:

1. Window-to-Viewport transformation is the process of transforming a two-dimensional, world-

coordinate scene to device coordinates.

2. In particular, objects inside the world or clipping window are mapped to the viewport. The

viewport is displayed in the interface window on the screen.

3. In other words, the clipping window is used to select the part of the scene that is to be displayed.

The viewport then positions the scene on the output device.

Example:

1. This transformation involves developing formulas that start with a point in the world window, say

(xw, yw).


2. The formula is used to produce a corresponding point in viewport coordinates, say (xv, yv). We

would like for this mapping to be "proportional" in the sense that if xw is 30% of the way from the

left edge of the world window, then xv is 30% of the way from the left edge of the viewport.

3. Similarly, if yw is 30% of the way from the bottom edge of the world window, then yv is 30% of

the way from the bottom edge of the viewport. The picture below shows this proportionality.
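The proportional mapping can be written directly from these ratios. This is a sketch; the function name and the sample rectangles are my own assumptions.

```python
def window_to_viewport(xw, yw, win, vp):
    """Map a world-window point to the viewport, preserving proportions.
    win and vp are (xmin, ymin, xmax, ymax) rectangles."""
    wxmin, wymin, wxmax, wymax = win
    vxmin, vymin, vxmax, vymax = vp
    sx = (vxmax - vxmin) / (wxmax - wxmin)   # horizontal scale factor
    sy = (vymax - vymin) / (wymax - wymin)   # vertical scale factor
    xv = vxmin + (xw - wxmin) * sx
    yv = vymin + (yw - wymin) * sy
    return xv, yv

# A point 30% of the way across the window maps 30% across the viewport.
xv, yv = window_to_viewport(3.0, 3.0, (0, 0, 10, 10), (100, 50, 300, 150))
```

If the two aspect ratios differ, sx != sy and the mapped image is distorted.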

1. The position of the viewport can be changed allowing objects to be viewed at different positions

on the Interface Window.

2. Multiple viewports can also be used to display different sections of a scene at different screen

positions. Also, by changing the dimensions of the viewport, the size and proportions of the objects

being displayed can be manipulated.

3. Thus, a zooming affect can be achieved by successively mapping different dimensioned clipping

windows on a fixed sized viewport.

4. If the aspect ratio of the world window and the viewport are different, then the image may look

distorted.

Question15: Write the Sutherland-Hodgman polygon clipping algorithm. Using this algorithm clip the following polygon against the rectangular window ABCD as given below.

Ans


Pseudo code

Given a list of edges in a clip polygon, and a list of vertices in a subject polygon, the following procedure clips the subject polygon against the clip polygon.

List outputList = subjectPolygon;
for (Edge clipEdge in clipPolygon) do
    List inputList = outputList;
    outputList.clear();
    Point S = inputList.last;
    for (Point E in inputList) do
        if (E inside clipEdge) then
            if (S not inside clipEdge) then
                outputList.add(ComputeIntersection(S,E,clipEdge));
            end if
            outputList.add(E);
        else if (S inside clipEdge) then
            outputList.add(ComputeIntersection(S,E,clipEdge));
        end if
        S = E;
    done
done

The Sutherland-Hodgman algorithm performs a clipping of a polygon against each window edge in turn. It accepts an ordered sequence of vertices v1, v2, v3, ..., vn and outputs a set of vertices defining the clipped polygon.
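A runnable Python version of this procedure, specialized to an axis-aligned rectangular clip window; the helper names are my own, and the example triangle is arbitrary.

```python
def clip_polygon(subject, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman: clip 'subject' (list of (x, y) vertices)
    against an axis-aligned rectangular window."""
    def inter_x(s, e, x):   # intersection with a vertical boundary line x
        t = (x - s[0]) / (e[0] - s[0])
        return (x, s[1] + t * (e[1] - s[1]))

    def inter_y(s, e, y):   # intersection with a horizontal boundary line y
        t = (y - s[1]) / (e[1] - s[1])
        return (s[0] + t * (e[0] - s[0]), y)

    # Each clip edge: (inside test, intersection computation)
    edges = [
        (lambda p: p[0] >= xmin, lambda s, e: inter_x(s, e, xmin)),  # left
        (lambda p: p[1] <= ymax, lambda s, e: inter_y(s, e, ymax)),  # top
        (lambda p: p[0] <= xmax, lambda s, e: inter_x(s, e, xmax)),  # right
        (lambda p: p[1] >= ymin, lambda s, e: inter_y(s, e, ymin)),  # bottom
    ]
    output = list(subject)
    for inside, intersect in edges:
        input_list, output = output, []
        if not input_list:
            break                      # polygon entirely clipped away
        s = input_list[-1]
        for e in input_list:
            if inside(e):
                if not inside(s):      # entering edge: intersection + endpoint
                    output.append(intersect(s, e))
                output.append(e)
            elif inside(s):            # leaving edge: intersection only
                output.append(intersect(s, e))
            s = e
    return output

# A triangle poking out of the right side of a unit window gets its tip cut off.
clipped = clip_polygon([(0.2, 0.2), (1.6, 0.5), (0.2, 0.8)], 0, 0, 1, 1)
```

The four lambda pairs correspond exactly to the four edge types listed below: inside-inside adds the endpoint, leaving adds the intersection, outside-outside adds nothing, and entering adds both.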

This figure represents a polygon (the large, solid, upward pointing arrow) before clipping has

occurred.

The following figures show how this algorithm works at each edge, clipping the polygon.


a. Clipping against the left side of the clip window.

b. Clipping against the top side of the clip window.

c. Clipping against the right side of the clip window.

d. Clipping against the bottom side of the clip window.

Four Types of Edges

As the algorithm goes around the edges of the window, clipping the polygon, it encounters four types

of edges. All four edge types are illustrated by the polygon in the following figure. For each edge

type, zero, one, or two vertices are added to the output list of vertices that define the clipped polygon.

1. Edges that are totally inside the clip window. - add the second inside vertex point

2. Edges that are leaving the clip window. - add the intersection point as a vertex

3. Edges that are entirely outside the clip window. - add nothing to the vertex output list

4. Edges that are entering the clip window. - save the intersection and inside points as vertices

Assume that we're clipping a polygon's edge with vertices at (x1, y1) and (x2, y2) against a clip window with vertices at (xmin, ymin) and (xmax, ymax).

The location (IX, IY) of the intersection of the edge with the left side of the window is:

i. IX = xmin

ii. IY = slope*(xmin-x1) + y1, where the slope = (y2-y1)/(x2-x1)

The location of the intersection of the edge with the right side of the window is:

i. IX = xmax

ii. IY = slope*(xmax-x1) + y1, where the slope = (y2-y1)/(x2-x1)


The intersection of the polygon's edge with the top side of the window is:

i. IX = x1 + (ymax - y1) / slope

ii. IY = ymax

Finally, the intersection of the edge with the bottom side of the window is:

i. IX = x1 + (ymin - y1) / slope

ii. IY = ymin
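Two of the four intersection formulas, transcribed directly from the equations above; the example edge is arbitrary.

```python
def left_intersection(x1, y1, x2, y2, xmin):
    """Intersection of edge (x1,y1)-(x2,y2) with the window's left side:
    IX = xmin, IY = slope*(xmin - x1) + y1."""
    slope = (y2 - y1) / (x2 - x1)
    return xmin, slope * (xmin - x1) + y1

def top_intersection(x1, y1, x2, y2, ymax):
    """Intersection with the window's top side:
    IX = x1 + (ymax - y1)/slope, IY = ymax."""
    slope = (y2 - y1) / (x2 - x1)
    return x1 + (ymax - y1) / slope, ymax

# Edge from (2, 2) to (-2, 6) crossing the left side x = 0 of a window.
ix, iy = left_intersection(2, 2, -2, 6, 0)
```

The right and bottom cases follow the same pattern with xmax and ymin substituted. Note that a vertical edge (x1 == x2) makes the slope undefined, so real code must special-case it.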

Question16: Explain any five of the following terms with the help of suitable diagram/example,

if needed.

(a) Ray Tracing (b)Ray Casting.

(c) Object-space approach in Visible-surface detection.

(d) Audio file formats (e) Video file formats

(f) Image filtering

(g) Authoring tools

(h) Animation and its types

Ans.

(a) Ray Tracing

Ray Tracing is a global illumination based rendering method. It traces rays of light from the eye back

through the image plane into the scene. Then the rays are tested against all objects in the scene to

determine if they intersect any objects. If the ray misses all objects, then that pixel is shaded the

background color. Ray tracing handles shadows, multiple specular reflections, and texture mapping

in a very easy straight-forward manner.

Note that ray tracing, like scan-line graphics, is a point sampling algorithm.
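The ray-object intersection test at the heart of ray tracing can be illustrated with the simplest case, a sphere. This is a standard quadratic-formula derivation, not code from this assignment; the function name is my own.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None if the ray
    misses the sphere (solves |o + t*d - c|^2 = r^2 for t)."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None          # ray misses: the pixel keeps the background color
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Eye ray straight down the z-axis toward a sphere centered at z = 5.
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)   # hits at t = 4
```

A full tracer runs this test against every object per ray and shades using the nearest hit, recursing for reflections.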

(b)Ray Casting.

Ray casting is a rendering technique used in computer graphics and computational geometry. It is

capable of creating a three-dimensional perspective in a two-dimensional map. Developed by

scientists at the Mathematical Applications Group in the 1960s, it is considered one of the most basic

graphics-rendering algorithms. Ray casting makes use of the same geometric algorithm as ray

tracing.

(c) Object-space approach in Visible-surface detection.

An object-space method compares objects and parts of objects to each other within the 3D scene definition to determine which surfaces are visible. The depth-buffer method described below is, by contrast, an image-space approach.

Depth Buffer (Z-Buffer) Method

This method was developed by Catmull. It is an image-space approach. The basic idea is to test the Z-depth of each surface to determine the closest (visible) surface.

In this method each surface is processed separately one pixel position at a time across the surface.

The depth values for a pixel are compared and the closest (smallest z) surface determines the color to

be displayed in the frame buffer.

It is applied very efficiently on surfaces of polygon. Surfaces can be processed in any order. To

override the closer polygons from the far ones, two buffers named frame buffer and depth buffer,

are used.

Depth buffer is used to store depth values for (x, y) position, as surfaces are processed (0 ≤ depth ≤

1).

The frame buffer is used to store the intensity value of color value at each position (x, y).


The z-coordinates are usually normalized to the range [0, 1]. The value 0 for the z-coordinate indicates the back clipping plane and the value 1 indicates the front clipping plane.

(d) Audio file formats An audio file format is a file format for storing digital audio data on a

computer system. The bit layout of the audio data (excluding metadata) is called the audio coding

format and can be uncompressed, or compressed to reduce the file size, often using lossy

compression. The data can be a raw bitstream in an audio coding format, but it is usually embedded

in a container format or an audio data format with defined storage layer.

(e) Video file formats A video file format is a type of file format for storing digital video data

on a computer system. Video is almost always stored in compressed form to reduce the file size.

A video file normally consists of a container (e.g. in the Matroska format) containing video data in a

video coding format (e.g. VP9) alongside audio data in an audio coding format (e.g. Opus). The

container can also contain synchronization information, subtitles, and metadata such as title. A

standardized (or in some cases de facto standard) video file type such as .webm is a profile specified

by a restriction on which container format and which video and audio compression formats are

allowed.

(f) Image filtering

An image filter is a technique through which size, colors, shading and other characteristics of an

image are altered. An image filter is used to transform the image using different graphical editing

techniques. Image filters are usually done through graphic design and editing software.
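As a concrete example of an image filter, here is a 3x3 mean (box blur) filter on a tiny grayscale image, written as a pure-Python sketch; real editing software uses optimized convolution routines.

```python
def box_blur(img):
    """3x3 mean filter on a 2D grayscale image (list of lists),
    shrinking the window at the borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Collect the pixel's neighborhood, clipped to the image bounds.
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A single bright pixel gets spread over its 3x3 neighborhood.
blurred = box_blur([[0, 0, 0], [0, 9, 0], [0, 0, 0]])
```

Swapping the mean for other kernels (sharpen, edge detect) changes the effect while keeping the same sliding-window structure.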

(g) Authoring tools

"A content authoring tool is a software application used to create multimedia content typically for delivery on the World Wide Web. Content-authoring tools may also create content in other file formats so the training can be delivered on a CD (compact disc) or in other formats for various different uses. The category of content-authoring tools includes HTML, Flash, and various types of e-learning authoring tools."

(h) Animation and its types

ANIMATION is nothing more than an optical illusion – a way of tricking our eyes into thinking that

lots of static pictures are one moving image. Since the success of sites such as YouTube, simple

shorts can be attempted by anyone, and stop-motion animations with everyday objects are some of

the most popular and artistic videos. If you have tried some simple animation already, an animation

course will develop this with more sophisticated materials. The basic processes and techniques are

the same for all animation, and because of the wide range of applications, animation graduates are in

high demand.

Simple animations

Before film was invented, there were early forms of animated pictures. The zoetrope, for example, is

a wheel with a number of static pictures around the inside so that they appear to move when the

wheel spins. Flipbook animation is very similar, and places pictures on every page of a book so that

it creates an optical illusion when the pages are flipped quickly. Whilst both of these don’t need a

camera, object animation and chuckimation involve filming regular inanimate objects, such as

Lego or action figures, and animating them using stop-motion or off-camera hand-movement.

Pixilation uses people as stop-motion characters in a similar way.

Traditional animation

Traditional animation is sometimes called hand-drawn animation or cel animation and, for most of

the 20th Century, many popular animated films were created this way. It was a lengthy process.

Thousands of pictures were drawn entirely by hand on acetate sheets, or cels, with each cel being

slightly different from the one before it. Each cel was photographed onto a separate frame of film

so that when the film reel was played, the animation moved. This form of animation could also be


combined with live-action video by placing the cels on top of the film. This technique was popular in

the late 80s and early 90s, and was used in films such as Space Jam and Who Framed Roger Rabbit.
