
VIVEKANAND EDUCATION SOCIETY’S

INSTITUTE OF TECHNOLOGY

Department of Computer Engineering

Computer Graphics
Mini Project Report on Aviation of a Helicopter

By
Daksh Ramchandani (D7C-51)
Ian Sequeira (D7C-58)
Ayush Raj Singh (D7C-60)

Supervisor:
Mrs. Suvarna Bhatsangave
TITLE: Aviation game of a Helicopter using OpenGL functions.

ABSTRACT:
In this project, we have used OpenGL functions to create a visual scene
of a Helicopter and its aviation, presented as a simple game.

CONTENTS:

1. Introduction
2. Objective
3. Hardware and Software Requirements
4. Implementation Details
I. Theory
i. Open Graphics Library (OpenGL)
ii. Transformation
a) Translation
b) Rotation
c) Scaling
iii. Clipping
iv. Projection
a) Perspective projection
b) Parallel Projection
v. Rasterization
5. Source code
6. Output
7. Conclusion
8. References
INTRODUCTION:
We have used the “stdlib.h” header file for standard library functions, the “GL/glut.h”
header file for the Open Graphics Library functions, and the “stdio.h” header file for
standard input-output. This project builds on several graphics technologies, which are
described below.

OpenGL: Silicon Graphics Inc., (SGI) began developing OpenGL in 1991 and released
it on June 30, 1992. Applications use it extensively in the fields of computer-aided
design (CAD), virtual reality, scientific visualization, information visualization, flight
simulation, and video games. Since 2006, OpenGL has been managed by the non-profit
technology consortium Khronos Group.

DirectX 2D: 2D graphics are a subset of 3D graphics that deal with 2D primitives or
bitmaps. More generally, they don't use a z-coordinate in the way a 3D game might,
since the gameplay is usually confined to the x-y plane. They sometimes use 3D
graphics techniques to create their visual components, and they are generally simpler to
develop.

DirectX 3D: Direct3D is a graphics application programming interface (API) for
Microsoft Windows. Part of DirectX, Direct3D is used to render three-dimensional
graphics in applications where performance is important, such as games. Direct3D uses
hardware acceleration if it is available on the graphics card, allowing for hardware
acceleration of the entire 3D rendering pipeline or even only partial acceleration.
Direct3D exposes the advanced graphics capabilities of 3D graphics hardware, including
Z-buffering, W-buffering, stencil buffering, spatial anti-aliasing, alpha blending, color
blending, mipmapping, texture blending, clipping, culling, atmospheric effects,
perspective-correct texture mapping, programmable HLSL shaders and effects.
Integration with other DirectX technologies enables Direct3D to deliver such features as
video mapping, hardware 3D rendering in 2D overlay planes, and even sprites,
providing the use of 2D and 3D graphics in interactive media titles.

freeglut: freeglut is a free-software/open-source alternative to the OpenGL Utility
Toolkit (GLUT) library. GLUT was originally written by Mark Kilgard to support the
sample programs in the second edition of the OpenGL 'Red Book'. Since then, GLUT has been
used in a wide variety of practical applications because it is simple, widely available and
highly portable. GLUT (and hence freeglut) takes care of all the system-specific chores
required for creating windows, initializing OpenGL contexts, and handling input events,
to allow for truly portable OpenGL programs.
freeglut is released under the X-Consortium license.
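As an illustration of how little code those chores require, here is a minimal sketch of a GLUT program that opens a window and clears it; the window title and callback name are our own, not part of this project:

#include <GL/glut.h>

/* Minimal display callback: clear the screen and flush the commands. */
void displayMinimal(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);                       /* system-specific setup       */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); /* framebuffer configuration   */
    glutInitWindowSize(300, 300);
    glutCreateWindow("Minimal GLUT Window");     /* creates the OpenGL context  */
    glutDisplayFunc(displayMinimal);
    glutMainLoop();                              /* hand control to GLUT's loop */
    return 0;
}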
OBJECTIVE:
Implementing OpenGL to show the aviation of a graphically designed helicopter, and
building a simple game around it.

HARDWARE AND SOFTWARE REQUIREMENTS:

Code::Blocks, Turbo C/C++ or Visual Studio 2019 (IDE); Open Graphics Library (OpenGL)

IMPLEMENTATION DETAILS:

THEORY:

OPEN GRAPHICS LIBRARY:

Open Graphics Library (OpenGL) is a cross-language (language-independent), cross-
platform (platform-independent) API for rendering 2D and 3D vector graphics (the use
of polygons to represent images). The OpenGL API is mostly implemented in hardware.

• Design: This API is defined as a set of functions which may be called by the client
program. Although the functions are similar to those of the C language, the API itself
is language independent.

• Development: It is an evolving API, and the Khronos Group regularly releases
new versions, each having some extended features compared to the previous one. GPU
vendors may also provide additional functionality in the form of extensions.

• Associated Libraries: The earliest version was released with a companion library
called the OpenGL Utility Library (GLU). But since using OpenGL directly is quite
complex, other libraries such as the OpenGL Utility Toolkit (GLUT) were added to
make it easier; GLUT was later superseded by freeglut. Later added libraries include
GLEE, GLEW and glbinding.

• Implementation: Mesa 3D is an open source implementation of OpenGL. It can
do pure software rendering, and it may also use hardware acceleration on BSD,
Linux, and other platforms by taking advantage of the Direct Rendering
Infrastructure.

OpenGL is an evolving API. New versions of the OpenGL specifications are regularly
released by the Khronos Group, each of which extends the API to support various new
features. The details of each version are decided by consensus between the Group's
members, including graphics card manufacturers, operating system designers, and
general technology companies such as Mozilla and Google.
In addition to the features required by the core API, graphics processing unit (GPU)
vendors may provide additional functionality in the form of extensions. Extensions may
introduce new functions and new constants, and may relax or remove restrictions on
existing OpenGL functions. Vendors can use extensions to expose custom APIs without
needing support from other vendors or the Khronos Group as a whole, which greatly
increases the flexibility of OpenGL. All extensions are collected in, and defined by, the
OpenGL Registry.

Each extension is associated with a short identifier, based on the name of the company
which developed it.

If multiple vendors agree to implement the same functionality using the same API, a
shared extension may be released, using the identifier EXT. In such cases, it could also
happen that the Khronos Group's Architecture Review Board gives the extension their
explicit approval, in which case the identifier ARB is used.

The features introduced by each new version of OpenGL are typically formed from the
combined features of several widely implemented extensions, especially extensions of
type ARB or EXT.
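As a concrete illustration, an application can query at run time which vendor, version and extensions the current context exposes. Below is a minimal sketch for legacy contexts; it assumes a context already exists (for example, it would be called after glutCreateWindow):

#include <stdio.h>
#include <GL/glut.h>

/* Prints what the current OpenGL context exposes. The extension string is
   a space-separated list; ARB_/EXT_/vendor prefixes identify each source. */
void printGLInfo(void)
{
    printf("Vendor:     %s\n", (const char*)glGetString(GL_VENDOR));
    printf("Renderer:   %s\n", (const char*)glGetString(GL_RENDERER));
    printf("Version:    %s\n", (const char*)glGetString(GL_VERSION));
    printf("Extensions: %s\n", (const char*)glGetString(GL_EXTENSIONS));
}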
OpenGL RENDERING PIPELINE:
Rendering Pipeline is the sequence of steps that OpenGL takes when rendering objects.
Vertex attributes and other data go through a sequence of steps to generate the final
image on the screen. There are usually nine steps in this pipeline, most of which are
optional and many of which are programmable.

Sequence of steps taken by OpenGL to generate an image:

1. Vertex Specification: In vertex specification, the programmer provides an ordered
list of vertices that define the boundaries of the primitive. Along with this, one can
define other vertex attributes like color, texture coordinates etc. Later this data is
sent down the pipeline and manipulated by it.
2. Vertex Shader: The vertex specification defined above now passes through the
Vertex Shader. A vertex shader is a program written in GLSL that manipulates the
vertex data; its main job is to calculate the final position of each vertex (a minimal
sketch of both programmable shader stages appears after this list). Vertex shaders are
executed once for every vertex the GPU processes (for a triangle it will execute three
times), so if the scene consists of one million vertices, the vertex shader will execute
one million times, once for each vertex.
3. Tessellation: This is an optional stage. In this stage primitives are tessellated, i.e.
divided into a smoother mesh of triangles.
4. Geometry Shader: This shader stage is also optional. The work of the Geometry
Shader is to take an input primitive and generate zero or more output primitives. If
a triangle strip is sent as a single primitive, the geometry shader sees it as a series
of triangles. A geometry shader is able to remove primitives, or tessellate them by
outputting many primitives for a single input. Geometry shaders can also convert
primitives to different types; for example, a point primitive can become triangles.
5. Vertex Post Processing: This is a fixed-function stage, i.e. the user has very limited
to no control over it. Its most important part is clipping, which discards the area of
primitives that lie outside the viewing volume.
6. Primitive Assembly: This stage collects the vertex data into an ordered sequence
of simple primitives (lines, points or triangles).
7. Rasterization: This is an important step in the pipeline. The output of
rasterization is fragments.
8. Fragment Shader: Although not a required stage, it is used almost always. This
user-written GLSL program runs once for each fragment in the geometry, and its job
is to determine the final color of each fragment that the user sees on the screen.
9. Per-sample Operations: A few tests are performed depending on whether the user
has activated them, for example the pixel ownership test, scissor test, stencil test and
depth test.
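As an illustration of the two programmable stages named above, here is a minimal sketch of a vertex and a fragment shader, written as GLSL source held in C string literals. The GLSL version and all variable names are illustrative assumptions, not part of this project's code:

/* Stage 2 (vertex shader): computes the final position of each vertex. */
const char* vertexShaderSrc =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "uniform mat4 mvp;   /* combined model-view-projection matrix */\n"
    "void main() {\n"
    "    gl_Position = mvp * vec4(position, 1.0);\n"
    "}\n";

/* Stage 8 (fragment shader): decides the final color of each fragment. */
const char* fragmentShaderSrc =
    "#version 330 core\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = vec4(0.3, 0.7, 0.2, 1.0); /* solid green */\n"
    "}\n";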
TRANSFORMATION:
Transformation means changing some graphics into something else by applying rules.
We can have various types of transformations such as translation, scaling up or down,
rotation, shearing, etc. When a transformation takes place on a 2D plane, it is called 2D
transformation.
Transformations play an important role in computer graphics to reposition the graphics
on the screen and change their size or orientation.

Homogeneous Coordinates
To perform a sequence of transformations such as translation followed by rotation and
scaling, we need to follow a sequential process −

• Translate the coordinates,
• Rotate the translated coordinates, and then
• Scale the rotated coordinates to complete the composite transformation.

To shorten this process, we use a 3×3 transformation matrix instead of a 2×2
transformation matrix. To convert a 2×2 matrix into a 3×3 matrix, we add an extra
dummy coordinate W.

In this way, a point can be represented by 3 numbers instead of 2, which is called the
Homogeneous Coordinate system. In this system, all the transformation equations can
be represented as matrix multiplications. Any Cartesian point P(X, Y) can be
converted to homogeneous coordinates as P'(Xh, Yh, h).
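In this homogeneous form (with h = 1), each basic 2D transformation becomes a single 3×3 matrix, and a composite transformation is just a matrix product. In LaTeX notation, with tx, ty the translation offsets, θ the rotation angle and sx, sy the scale factors:

T = \begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}, \quad
R = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
S = \begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix}

so that translating, then rotating, then scaling a point P = (X, Y, 1)^T gives P' = S\,R\,T\,P.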

Translation:
Translation is the most frequently used of all transformations and is almost always
used in combination with other transformations. Translating an object means moving it
from one part of the space to another. Translation is always with respect to the world
coordinates, unlike rotation and scaling, where the transformation is done with respect
to the object itself. Hence the statement 'Translation is moving the object to another
part of space w.r.t. the world coordinates' is more understandable now.
Let us consider a point U; with the help of matrices we can easily represent
translation as
U' = U + T
where T is the translation matrix and U' is the new transformed matrix notation for U.
Rotation:
Rotation, as the name suggests, is the rotation of an object or shape with respect to an
axis in space. For example, a triangle ABC can be rotated with respect to a point P by a
certain angle to obtain the transformed triangle A'B'C'.

The rotation of objects is normally explained best with matrices. Let us consider a
point U; with the help of matrices we can easily represent rotation as
U' = R U
where R is the rotation matrix and U' is the new transformed matrix notation for U.
Scaling:
Scaling is the resizing of an object or figure by a scaling factor w.r.t. any of the three
axes (x, y and z). Scaling has various uses; for example, a simple square can be resized
to make different larger objects like cubes, windows for homes, etc.
Let us consider a point U; with the help of matrices we can easily represent scaling
as
U' = S U
where S is the scaling matrix and U' is the new transformed matrix notation for U.
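In legacy OpenGL, these three transformations map directly onto glTranslatef, glRotatef and glScalef, which multiply the current matrix. A minimal sketch follows; the numeric values are illustrative and drawObject() is a hypothetical drawing routine:

void drawObject(void); /* hypothetical: draws the object at the origin */

/* The calls multiply the current modelview matrix, so the transformation
   written last is applied to the object first. */
void placeObject(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(10.0f, 5.0f, 0.0f);    /* U' = T U : move the object           */
    glRotatef(30.0f, 0.0f, 0.0f, 1.0f); /* U' = R U : rotate 30 degrees about z */
    glScalef(2.0f, 2.0f, 1.0f);         /* U' = S U : double x and y            */
    drawObject();
}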

CLIPPING:
When we have to display a large picture, scaling and translation alone are not enough:
the visible part of the picture must also be identified. This process is not easy, because
certain parts of the image are fully inside the display area while others are only
partially inside. The lines or elements which are only partially visible must be cut at
the boundary.
To decide the visible and invisible portions, a particular process called clipping is
used. Clipping divides each element into a visible and an invisible portion; the visible
portion is kept and the invisible portion is discarded.

Types of Lines
With respect to a clip window, lines are of three types:
1. Visible: the line lies entirely inside the window.
2. Invisible: the line lies entirely outside the window.
3. Clipped (partially visible): the line lies partly inside the window and is cut at the
boundary.
Clipping can be applied through hardware as well as software. In some computers,
hardware devices automatically perform the work of clipping; in systems where
hardware clipping is not available, software clipping is applied.
(Figure: a scene before and after clipping.)

The window against which an object is clipped is called a clip window. It can be
curved or rectangular in shape.
Applications of clipping:
1. It extracts the part of a scene we desire.
2. It identifies the visible and invisible areas of a 3D object.
3. It is used for creating objects using solid modeling.
4. It is used for drawing operations.
5. It supports operations related to pointing at an object.
6. It is used for deleting, copying and moving parts of an object.

Clipping can be applied in world coordinates, with the contents inside the window then
mapped to device coordinates. Alternatively, the complete world-coordinate picture is
first mapped to device coordinates, and clipping is then done against the viewport
boundaries.
Types of Clipping:
1. Point Clipping
2. Line Clipping
3. Area Clipping (Polygon)
4. Curve Clipping
5. Text Clipping
6. Exterior Clipping
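As a minimal illustration of the first and simplest of these, point clipping just tests a point against the clip-window boundaries. A sketch, assuming a rectangular clip window (the function name is ours):

/* Point clipping: a point is kept only if it lies inside the rectangular
   clip window [xmin, xmax] x [ymin, ymax]; otherwise it is discarded. */
int clipPoint(float x, float y,
              float xmin, float ymin, float xmax, float ymax)
{
    return (x >= xmin && x <= xmax && y >= ymin && y <= ymax);
}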
PROJECTION:
It is the process of converting a 3D object into a 2D object. It is also defined as the
mapping or transformation of the object onto a projection plane, or view plane. The
view plane is the display surface.

Perspective Projection:

In perspective projection, the farther an object is from the viewer, the smaller it
appears. This property of the projection gives an idea of depth. Artists use perspective
projection for drawing three-dimensional scenes.
Two main characteristics of perspective projection are vanishing points and perspective
foreshortening. Due to foreshortening, objects and lengths appear smaller the farther
they are from the center of projection.
Important terms related to perspective projection:
1. View plane: the plane onto which the world-coordinate scene is projected.
2. Center of Projection: the location of the eye, at which the projected light rays
converge.
3. Projectors: also called projection vectors. These are rays that start from points on
the object in the scene and are used to create the image of the object on the view
plane.

Parallel Projection:

Parallel projection is used to display a picture in its true shape and size. When the
projectors are perpendicular to the view plane, it is called orthographic projection. A
parallel projection is formed by extending parallel lines from each vertex of the object
until they intersect the plane of the screen; the point of intersection is the projection
of the vertex.
Parallel projections are used by architects and engineers for creating working drawings
of an object; complete representations require two or more views of the object using
different planes.

1. Isometric Projection: All projectors make equal angles with the three principal
axes; the angle is generally 30°.
2. Dimetric: The projectors make equal angles with respect to two of the principal
axes.
3. Trimetric: The direction of projection makes unequal angles with the three
principal axes.
4. Cavalier: All lines perpendicular to the projection plane are projected with no
change in length.
5. Cabinet: All lines perpendicular to the projection plane are projected to one half of
their length. This gives a more realistic appearance of the object.
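In legacy OpenGL, the two families of projection described above correspond to two ways of setting up the projection matrix. A minimal sketch, with all numeric values illustrative (gluPerspective comes from GL/glu.h):

#include <GL/glu.h> /* for gluPerspective */

/* Sketch: choosing between perspective and parallel projection. */
void setupProjection(int usePerspective)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (usePerspective)
        /* Perspective: distant objects appear smaller (fovy, aspect, near, far). */
        gluPerspective(60.0, 4.0 / 3.0, 1.0, 100.0);
    else
        /* Parallel (orthographic): objects keep their true shape and size. */
        glOrtho(-50.0, 50.0, -50.0, 50.0, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
}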

RASTERISATION:
Rasterisation (or rasterization) is the task of taking an image described in a vector
graphics format (shapes) and converting it into a raster image (a series of pixels, dots or
lines, which, when displayed together, create the image which was represented via
shapes). The rasterized image may then be displayed on a computer display, video
display or printer, or stored in a bitmap file format. Rasterisation may refer to either the
conversion of models into raster files, or the conversion of 2D rendering primitives such
as polygons or line segments into a rasterized format.
Rasterisation of 3D images:
Compared with other rendering techniques such as ray tracing, rasterisation is extremely
fast. However, rasterisation is simply the process of computing the mapping from scene
geometry to pixels and does not prescribe a particular way to compute the color of those
pixels. Shading, including programmable shading, may be based on physical light
transport, or artistic intent.
The process of rasterising 3D models onto a 2D plane for display on a computer screen
("screen space") is often carried out by fixed-function hardware within the graphics
pipeline. This is because there is little reason to modify the rasterisation techniques
used at render time, and a special-purpose system allows for high efficiency.
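As a classic illustration of converting a 2D primitive into pixels, here is a sketch of Bresenham's line algorithm. It is not the method used inside this project's code, where the OpenGL hardware rasterises for us, and setPixel() is a hypothetical plotting routine:

#include <stdlib.h> /* abs */

void setPixel(int x, int y); /* hypothetical: plots one pixel */

/* Bresenham's line algorithm: rasterises the segment (x0,y0)-(x1,y1)
   into discrete pixels using only integer arithmetic. */
void rasterizeLine(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy; /* error term */

    for (;;) {
        setPixel(x0, y0); /* plot the current pixel */
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; } /* step in y */
    }
}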

SOURCE CODE:
#include <stdlib.h>
#include <GL/glut.h>
#include <stdio.h>
#include <time.h>                    /* for seeding rand() */

float helispeed = 0.02f;             /* horizontal speed of the obstacle       */
float heli_x = 50.0f, heli_y = 0.0f; /* position of the obstacle column        */
float heli = 0.0f;                   /* vertical offset of the helicopter      */
int i = 0;                           /* net up/down steps; bounds the flight   */
char scs[20];                        /* text buffer for the on-screen score    */
int wflag = 1;                       /* 1 until the first frame has been shown */
char gameover[10] = "GAME OVER";
int sc = 0;                          /* score                                  */

void init(void)
{
    srand((unsigned)time(NULL));     /* randomize the obstacle height */
    heli_y = (rand() % 45) + 10;
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glLoadIdentity();
    glOrtho(0.0, 100.0, 0.0, 100.0, -1.0, 0.0);
}
/* Draws the helicopter as five light-blue rectangles
   (body, tail boom, tail fin, rotor mast and main rotor). */
void Helicopter()
{
    glColor3f(0.7, 1.0, 1.0);
    glRectf(10, 49.8, 19.8, 44.8);   /* body       */
    glRectf(2, 46, 10, 48);          /* tail boom  */
    glRectf(2, 46, 4, 51);           /* tail fin   */
    glRectf(14, 49.8, 15.8, 52.2);   /* rotor mast */
    glRectf(7, 53.6, 22.8, 52.2);    /* main rotor */
}
/* Renders a null-terminated string as GLUT bitmap characters
   starting at the given raster position. */
void renderBitmapString(float x, float y, float z, void* font, char* string)
{
    char* c;
    glRasterPos3f(x, y, z);
    for (c = string; *c != '\0'; c++)
    {
        glutBitmapCharacter(font, *c);
    }
}
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    /* Game over when the helicopter has drifted too far up or down (i),
       or when its body overlaps the obstacle column at the x positions
       where the two can collide. */
    if ((i == 360 || i == -350)
        || (((int)heli_x == 10 || (int)heli_x == 7 || (int)heli_x == 4 || (int)heli_x == 1)
            && (int)heli_y < 53 + (int)heli && (int)heli_y + 35 > 53 + (int)heli)
        || (((int)heli_x == 9 || (int)heli_x == 3 || (int)heli_x == 6)
            && (int)heli_y < 45 + (int)heli && (int)heli_y + 35 > 45 + (int)heli)
        || (((int)heli_x == 0)
            && (int)heli_y < 46 + (int)heli && (int)heli_y + 35 > 46 + (int)heli))
    {
        /* Game-over screen. */
        glColor3f(0.8, 0.8, 1.0);
        glRectf(0.0, 0.0, 100.0, 100.0);
        glColor3f(0.0, 0.0, 0.0);
        renderBitmapString(10, 70, 0, GLUT_BITMAP_TIMES_ROMAN_24, gameover);
        glutSwapBuffers();
        printf("GAME OVER\nYour score was %d\n", sc);
        exit(0);
    }
    else if (wflag == 1)
    {
        /* First frame: draw the helicopter and print the instructions;
           the game starts on the first mouse click. */
        wflag = 0;
        glColor3f(0.3, 0.7, 0.2);
        printf("GAME BY IAN, AYUSH and DAKSH\n");
        printf("CLICK SCREEN TO START HELICOPTER\n");
        printf("CLICK AND HOLD LEFT MOUSE BUTTON TO GO UP\n");
        printf("RELEASE MOUSE BUTTON TO GO DOWN\n");
        Helicopter();
        glutSwapBuffers();
    }
    else
    {
        glPushMatrix();
        /* Bottom and top walls. */
        glColor3f(0.3, 0.7, 0.2);
        glRectf(0.0, 0.0, 100.0, 10.0);
        glRectf(0.0, 100.0, 100.0, 90.0);
        /* Update the score and render it. */
        sc++;
        sprintf(scs, "Score: %d", sc);
        renderBitmapString(20, 3, 0, GLUT_BITMAP_TIMES_ROMAN_24, scs);
        /* Draw the helicopter at its current height. */
        glTranslatef(0.0, heli, 0.0);
        Helicopter();
        /* Slide the obstacle left; respawn it at a random height
           once it has left the screen. */
        if (heli_x < -10)
        {
            heli_x = 50;
            heli_y = (rand() % 25) + 20;
        }
        else heli_x -= helispeed;
        glTranslatef(heli_x, -heli, 0.0);
        glColor3f(1.0, 0.0, 0.0);
        glRectf(heli_x, heli_y, heli_x + 5, heli_y + 35);
        glPopMatrix();
        glutSwapBuffers();
    }
}
/* Idle callbacks: while the left button is held the helicopter climbs,
   otherwise it descends; i tracks the net movement for the bounds check. */
void Heliup()
{
    heli += 0.1f;
    i++;
    glutPostRedisplay();
}
void Helidown()
{
    heli -= 0.1f;
    i--;
    glutPostRedisplay();
}
/* Adjusts the projection to preserve the aspect ratio when the window is resized. */
void Reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (w <= h)
        glOrtho(-2.0, 2.0, -2.0 * (GLfloat)h / (GLfloat)w,
                2.0 * (GLfloat)h / (GLfloat)w, -10.0, 20.0);
    else
        glOrtho(-2.0 * (GLfloat)w / (GLfloat)h, 2.0 * (GLfloat)w / (GLfloat)h,
                -2.0, 2.0, -10.0, 20.0);
    glMatrixMode(GL_MODELVIEW);
    glutPostRedisplay();
}
/* Mouse handler: hold the left button to climb, release to descend. */
void mouse(int btn, int state, int x, int y)
{
    if (btn == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        glutIdleFunc(Heliup);
    if (btn == GLUT_LEFT_BUTTON && state == GLUT_UP)
        glutIdleFunc(Helidown);
    glutPostRedisplay();
}
/* Entry point: create the window, register callbacks, start the event loop. */
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); /* double-buffered RGB window */
    glutInitWindowSize(500, 400);
    glutInitWindowPosition(200, 300);
    glutCreateWindow("Helicopter Game Project");
    init();
    glutReshapeFunc(Reshape);
    glutDisplayFunc(display);
    glutMouseFunc(mouse);
    glutMainLoop();
    return 0; /* not reached: glutMainLoop() does not return */
}
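Assuming a Linux system with the freeglut development package installed, the listing can be compiled with a command along these lines (the source file name is our own):

gcc helicopter.c -o helicopter -lGL -lglut

On Windows, the same source builds in Code::Blocks or Visual Studio once the freeglut headers and import library are configured.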

OUTPUT:
Screenshots of the game during play:
CONCLUSION:
Through the in-depth use of OpenGL functions and of operations such as transformation,
clipping, projection and rasterisation, we were able to carry out this project, and the
inner workings and course outcomes of Computer Graphics have been understood.

REFERENCES:
http://freeglut.sourceforge.net/
https://en.wikipedia.org/wiki/Main_Page
https://en.wikipedia.org/wiki/OpenGL
https://www.geeksforgeeks.org/opengl-rendering-pipeline-overview/
https://www.tutorialspoint.com/index.htm
https://www.javatpoint.com/
https://www.ntu.edu.sg/home/ehchua/programming/opengl/CG_BasicsTheory.html
