INSTITUTE OF TECHNOLOGY
Computer Graphics
Mini Project Report on Aviation of a Helicopter
By
Daksh Ramchandani (D7C-51)
Ian Sequeira (D7C-58)
Ayush Raj Singh (D7C-60)
Supervisor:
Mrs. Suvarna Bhatsangave
TITLE: Aviation game of a Helicopter using OpenGL functions.
ABSTRACT:
In this project, we have used OpenGL functions to create a visual program depicting
a helicopter and its flight.
CONTENTS:
1. Introduction
2. Objective
3. Hardware and Software Requirements
4. Implementation Details
I. Theory
i. Open Graphics Library (OpenGL)
ii. Transformation
a) Translation
b) Rotation
c) Scaling
iii. Clipping
iv. Projection
a) Perspective projection
b) Parallel Projection
v. Rasterization
5. Source code
6. Output
7. Conclusion
8. References
INTRODUCTION:
We have used the “stdlib.h” header file for standard library functions, the “GL/glut.h”
header file for the Open Graphics Library (GLUT) functions, and the “stdio.h” header
file for standard input-output. The following concepts and tools were used to make the
project work.
OpenGL: Silicon Graphics Inc., (SGI) began developing OpenGL in 1991 and released
it on June 30, 1992. Applications use it extensively in the fields of computer-aided
design (CAD), virtual reality, scientific visualization, information visualization, flight
simulation, and video games. Since 2006, OpenGL has been managed by the non-profit
technology consortium Khronos Group.
2D graphics: 2D graphics are a subset of 3D graphics that deal with 2D primitives or
bitmaps. More generally, they do not use a z-coordinate the way a 3D game might,
since gameplay is usually confined to the x-y plane. They sometimes use 3D graphics
techniques to create their visual components, and they are generally simpler to
develop.
IMPLEMENTATION DETAILS:
THEORY:
• Design: The OpenGL API is defined as a set of functions which may be called by the
client program. Although the functions resemble those of the C language, OpenGL
itself is language-independent.
OpenGL is an evolving API. New versions of the OpenGL specifications are regularly
released by the Khronos Group, each of which extends the API to support various new
features. The details of each version are decided by consensus between the Group's
members, including graphics card manufacturers, operating system designers, and
general technology companies such as Mozilla and Google.
In addition to the features required by the core API, graphics processing unit (GPU)
vendors may provide additional functionality in the form of extensions. Extensions may
introduce new functions and new constants, and may relax or remove restrictions on
existing OpenGL functions. Vendors can use extensions to expose custom APIs without
needing support from other vendors or the Khronos Group as a whole, which greatly
increases the flexibility of OpenGL. All extensions are collected in, and defined by, the
OpenGL Registry.
Each extension is associated with a short identifier, based on the name of the company
which developed it.
If multiple vendors agree to implement the same functionality using the same API, a
shared extension may be released, using the identifier EXT. In such cases, it could also
happen that the Khronos Group's Architecture Review Board gives the extension their
explicit approval, in which case the identifier ARB is used.
The features introduced by each new version of OpenGL are typically formed from the
combined features of several widely implemented extensions, especially extensions of
type ARB or EXT.
OpenGL RENDERING PIPELINE:
The rendering pipeline is the sequence of steps that OpenGL takes when rendering
objects. Vertex attributes and other data go through a sequence of steps to generate the
final image on the screen. The pipeline usually consists of about nine steps, most of
which are optional and many of which are programmable.
Homogeneous Coordinates
To perform a sequence of transformations such as translation followed by rotation and
scaling, we would otherwise have to apply each step one after another.
To shorten this process, we use a 3×3 transformation matrix instead of a 2×2
transformation matrix. To convert a 2×2 matrix into a 3×3 matrix, we add an extra
dummy coordinate W.
In this way, we represent a point by 3 numbers instead of 2, which is called the
homogeneous coordinate system. In this system, all the transformation equations can be
expressed as matrix multiplications. Any Cartesian point P(X, Y) can be converted to
homogeneous coordinates as P’(Xh, Yh, h), where X = Xh/h and Y = Yh/h.
Translation:
Translation is the most frequently used of all transformations and is almost always
applied after any other transformation. Translating an object moves it from one part of
space to another. Translation is always performed with respect to the world coordinates,
unlike rotation and scaling, where the transformation is done with respect to the object
itself.
Let us consider a point U; with the help of matrices we can easily represent translation
as
U’ = U + T
where T is the translation matrix and U’ is the transformed point U.
Rotation:
Rotation, as the name suggests, is the turning of an object or shape about an axis in
space. Below is an example of the rotation of a triangle ABC, which rotates about the
point P by a certain angle to obtain the transformed triangle A’B’C’.
CLIPPING:
When only a portion of a picture has to be displayed, scaling and translation alone are
not enough; the visible part of the picture must also be identified. This is not trivial:
certain parts of the image are fully inside the view, while others are only partially
inside, and the lines or elements that are not visible must be omitted.
The process used to decide the visible and invisible portions is called clipping. Clipping
divides each element into a visible and an invisible portion; the visible portion is
retained and the invisible portion is discarded.
Types of Lines
With respect to the clip window, lines fall into three types: completely visible,
completely invisible, and partially visible (the clipping candidates).
Clipping can be performed in hardware as well as in software. In some computers,
hardware devices automatically perform the clipping; in systems where hardware
clipping is not available, software clipping is applied.
The following figure shows a picture before and after clipping.
The window against which an object is clipped is called the clip window; it can be
rectangular or curved in shape.
Applications of clipping:
1. It will extract part we desire.
2. For identifying the visible and invisible area in the 3D object.
3. For creating objects using solid modeling.
4. For drawing operations.
5. Operations related to the pointing of an object.
6. For deleting, copying, moving part of an object.
Clipping can be applied in world coordinates, with the contents inside the window then
mapped to device coordinates. Alternatively, the complete world-coordinate picture is
first mapped to device coordinates, and clipping is then performed against the viewport
boundaries.
Types of Clipping:
1. Point Clipping
2. Line Clipping
3. Area Clipping (Polygon)
4. Curve Clipping
5. Text Clipping
6. Exterior Clipping
PROJECTION:
Projection is the process of converting a 3D object into a 2D representation. It is also
defined as the mapping or transformation of the object onto the projection plane, or
view plane, which is the surface on which the object is displayed.
Perspective Projection:
In perspective projection, the farther an object is from the viewer, the smaller it
appears. This property of the projection gives an idea of depth, and artists use
perspective projection when drawing three-dimensional scenes.
The two main characteristics of perspective projection are vanishing points and
perspective foreshortening. Due to foreshortening, objects and lengths appear smaller
as their distance from the center of projection increases.
Important terms related to perspective projection:
1. View plane: the area of the world coordinate system onto which the scene is
projected.
2. Center of projection: the location of the eye, at which the projected light rays
converge.
3. Projectors: also called projection vectors; rays that start from points in the object
scene and are used to create the image of the object on the view plane.
Parallel Projection:
Parallel projection is used to display a picture in its true shape and size. When the
projectors are perpendicular to the view plane, it is called an orthographic projection. A
parallel projection is formed by extending parallel lines from each vertex of the object
until they intersect the plane of the screen; each point of intersection is the projection
of a vertex.
Parallel projections are used by architects and engineers to create working drawings of
an object; complete representations require two or more views of the object using
different planes.
1. Isometric projection: the direction of projection makes equal angles with all three
principal axes; the angle is generally 30°.
2. Dimetric projection: the direction of projection makes equal angles with two of the
principal axes.
3. Trimetric projection: the direction of projection makes unequal angles with the
principal axes.
4. Cavalier projection: lines perpendicular to the projection plane are projected with no
change in length.
5. Cabinet projection: lines perpendicular to the projection plane are projected at one
half of their length, which gives a more realistic appearance of the object.
RASTERISATION:
Rasterisation (or rasterization) is the task of taking an image described in a vector
graphics format (shapes) and converting it into a raster image (a series of pixels, dots or
lines, which, when displayed together, create the image which was represented via
shapes). The rasterized image may then be displayed on a computer display, video
display or printer, or stored in a bitmap file format. Rasterisation may refer to either the
conversion of models into raster files, or the conversion of 2D rendering primitives such
as polygons or line segments into a rasterized format.
Rasterisation of 3D images:
Compared with other rendering techniques such as ray tracing, rasterisation is extremely
fast. However, rasterisation is simply the process of computing the mapping from scene
geometry to pixels and does not prescribe a particular way to compute the color of those
pixels. Shading, including programmable shading, may be based on physical light
transport, or artistic intent.
The process of rasterising 3D models onto a 2D plane for display on a computer screen
("screen space") is often carried out by fixed-function hardware within the graphics
pipeline. This is because there is little motivation to modify the rasterisation
techniques at render time, and a special-purpose system allows for high efficiency.
SOURCE CODE:
#include<stdlib.h>
#include<GL/glut.h>
#include<stdio.h>
float helispeed = 0.02; /* horizontal speed of the obstacle column */
float heli_x = 50.0, heli_y = 0; /* obstacle position */
float heli = 0.0; /* vertical offset of the helicopter */
int i = 0, sci = 0;
float scf = 1;
char scs[20]; /* score string drawn on screen */
int wflag = 1; /* 1 until the first mouse click starts the game */
char gameover[10] = "GAME OVER";
int sc = 0; /* current score */
void init(void)
{
heli_y = (rand() % 45) + 10; /* random starting height for the obstacle */
glClearColor(0.0, 0.0, 0.0, 0.0);
glShadeModel(GL_SMOOTH);
glLoadIdentity();
glOrtho(0.0, 100.0, 0.0, 100.0, -1.0, 0.0); /* map the 0-100 game area */
}
void Helicopter() /* draws the helicopter from axis-aligned rectangles */
{
glColor3f(0.7, 1.0, 1.0);
glRectf(10, 49.8, 19.8, 44.8);
glRectf(2, 46, 10, 48);
glRectf(2, 46, 4, 51);
glRectf(14, 49.8, 15.8, 52.2);
glRectf(7, 53.6, 22.8, 52.2);
}
void renderBitmapString(float x, float y, float z, void* font, char* string)
{
char* c;
glRasterPos3f(x, y, z);
for (c = string; *c != '\0'; c++)
{
glutBitmapCharacter(font, *c);
}
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
/* The game-over condition was missing from the listing; the test below is a
reconstruction: end the game once the helicopter body (y from 44.8 to 53.6,
shifted by heli) leaves the band between the ground and ceiling strips. The
original may also have tested collision with the obstacle column. */
if (heli + 44.8 < 10.0 || heli + 53.6 > 90.0)
{
glutSwapBuffers();
glFlush();
printf("GAME OVER\nYour score was %d\n", sc);
exit(0);
}
else if (wflag == 1)
{
wflag = 0;
glColor3f(0.3, 0.7, 0.2);
printf("GAME BY IAN, AYUSH and DAKSH\n");
printf("CLICK SCREEN TO START HELICOPTER\n");
printf("CLICK AND HOLD LEFT MOUSE BUTTON TO GO UP\n");
printf("RELEASE MOUSE BUTTON TO GO DOWN\n");
Helicopter();
glutSwapBuffers();
glFlush();
}
else
{
glPushMatrix();
glColor3f(0.3, 0.7, 0.2);
glRectf(0.0, 0.0, 100.0, 10.0);
glRectf(0.0, 100.0, 100.0, 90.0);
sc++;
sci = (int)scf;
sprintf(scs, "Score: %d", sc); /* the listing never filled scs; format it here */
renderBitmapString(20, 3, 0, GLUT_BITMAP_TIMES_ROMAN_24, scs);
glTranslatef(0.0, heli, 0.0);
Helicopter();
if (heli_x < -10)
{
heli_x = 50;
heli_y = (rand() % 25) + 20;
}
else heli_x -= helispeed;
glTranslatef(heli_x, -heli, 0.0);
glColor3f(1.0, 0.0, 0.0);
glRectf(heli_x, heli_y, heli_x + 5, heli_y + 35);
glPopMatrix();
glutSwapBuffers();
glFlush();
}
}
void Heliup()
{
heli += 0.1;
i++;
glutPostRedisplay();
}
void Helidown()
{
heli -= 0.1;
i--;
glutPostRedisplay();
}
void Reshape(int w, int h)
{
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if (w <= h)
glOrtho(-2.0, 2.0, -2.0 * (GLfloat)h / (GLfloat)w, 2.0 * (GLfloat)h /
(GLfloat)w, -10.0, 20.0);
else
glOrtho(-2.0 * (GLfloat)w / (GLfloat)h, 2.0 * (GLfloat)w / (GLfloat)h, -2.0,
2.0, -10.0, 20.0);
glMatrixMode(GL_MODELVIEW);
glutPostRedisplay();
}
void mouse(int btn, int state, int x, int y)
{
if (btn == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
glutIdleFunc(Heliup);
if (btn == GLUT_LEFT_BUTTON && state == GLUT_UP)
glutIdleFunc(Helidown);
glutPostRedisplay();
}
int main(int argc, char** argv) /* main must return int in standard C */
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(500, 400);
glutInitWindowPosition(200, 300);
glutCreateWindow("Helicopter Game Project");
init();
glutReshapeFunc(Reshape);
glutDisplayFunc(display);
glutMouseFunc(mouse);
glutMainLoop();
return 0;
}
OUTPUT:
Screenshots taken while the game was running:
CONCLUSION:
Through the in-depth use of OpenGL functions and of operations such as transformation,
clipping, projection and rasterisation, we were able to complete this project, and the
inner workings and course outcomes of Computer Graphics have been understood.
REFERENCES:
http://freeglut.sourceforge.net/
https://en.wikipedia.org/wiki/Main_Page
https://en.wikipedia.org/wiki/OpenGL
https://www.geeksforgeeks.org/opengl-rendering-pipeline-overview/
https://www.tutorialspoint.com/index.htm
https://www.javatpoint.com/
https://www.ntu.edu.sg/home/ehchua/programming/opengl/CG_BasicsTheory.html