
PROJECT WORK

ON

“Virtual Reality Simulation Training System for Substation”


BY

Kushal Singhal

Summer Intern, IISc Bangalore


B.Tech- 2nd year, EE
Indian Institute of Technology, Ropar.

Under the guidance of

Dr. Gurunath Gurrala

Assistant Professor, Department of Electrical Engineering


Indian Institute of Science, Bangalore.

Content
1 Introduction
2 CHAPTER 1: What is Virtual Reality?
2.1 Basic Understanding of Virtual Reality
2.2 Types of Virtual Reality
2.3 Example of Virtual Reality
3 CHAPTER 2: How Virtual Reality Works?
3.1 Introduction
3.2 Basic Requirements
3.3 A Detailed Understanding of Working
3.3.1 Introduction
3.3.2 Goal
3.3.3 How do we see?
3.3.4 How do we see in a VR Headset?
3.3.5 Immersion in VR after Display
4 CHAPTER 3: Geometry of Virtual World
4.1 Introduction
4.2 3D Modelling – Basics
4.3 Primitives – Why triangles?
4.4 Stationary vs. Movable Objects
4.5 Changing Position & Orientation
4.6 Rotations
5 CHAPTER 4: Light and Camera
5.1 Introduction
5.2 Camera
5.2.1 Basic Concepts
5.2.2 The Shape of the View Region
5.2.3 Using More Than One Camera
5.3 Lights
5.3.1 Basic Concepts
5.3.2 Types of Lights
5.3.2.1 Point Light
5.3.2.2 Spot Light
5.3.2.3 Directional Light
6 CHAPTER 5: Software for Project and Working
6.1 Basic Terms and Concepts of Project
6.2 3D Modelling Software: Autodesk 123D, Blender
6.3 Virtual World Generator – Unity3D
6.3.1 Unity3D
6.3.2 Scripting in Unity
6.4 Build Visualization and Hardware
6.4.1 Scripts / Codes
7 References

Introduction
This project, titled “Virtual Reality Simulation Training System for
Substation”, aims at developing a virtual environment of a substation in a
power system to enhance operators’ operating skills and their ability to
deal with accidents.
The substation is an important part of the power system, so it is important to
improve the professional skills of power system operators. A substation is a
high-risk site where accidents may occur at any time. Moreover, it is
practically impossible to artificially create accidents or faults on the real
equipment in order to observe and practise handling them. Therefore, to
improve the operator's ability to solve problems and deal with accidents, it is
particularly important to establish a virtual simulation environment that
simulates the operation of a substation. [8] & [9]
So a 3D training system is developed to train operators' operating skills
and their ability to deal with accidents by means of a virtual reality
environment.

2. CHAPTER 1
What is Virtual Reality?
2.1 Basic Understanding of Virtual Reality –
Virtual Reality is a computer-generated simulation environment created using software and
physically interacted with using hardware like VR headsets, helmets, data gloves, haptic
devices, etc. [2]
Working with virtual reality generally involves a computer screen, a VR headset (for physical
interaction, an advanced version like the Oculus Rift), and VR environment software.
A person using virtual reality equipment is able to "look around" the artificial world and, with
high-quality VR, move about in it and interact with virtual features or items.
The experience of a virtual world is similar to a real-world scenario, but with some constraints.
Virtual Reality allows someone to do the following:
 Walk around a 3D building.
 Perform a virtual operation.
 Make changes in the architecture of the virtual world by moving objects.
Therefore, we can say a virtual reality environment is a believable computer-generated
environment that you can explore, feeling present in it both physically and mentally, and in
which you can experience things that do not even exist in the real world.

2.2 Types of Virtual Reality [17] & [18] -

The type of virtual reality depends upon the level of immersion in the virtual world, as
well as the hardware used for that level of immersion. On this basis there are three
types of virtual reality -
1. Non-Immersive Systems –
Non-immersive systems are the entry point of virtual reality; the best examples are a 3D game
or a 3D video on YouTube. If you have played a game like NFS or Counter-Strike, or any 3D
game, then you know the level of immersion in that environment.
This type of system uses a computer screen as the visual display, with a keyboard and a mouse
or a joystick for interacting with the virtual 3D world.
Example - In the picture shown below, the user is watching the 3D world on the computer
screen and interacting with it using the keyboard.

2. Hybrid Systems –
This type of system combines the real world with devices or equipment that allow you to see
simulated objects in the real world. You stay in the real world, see simulated objects, and
interact with them.
If you have seen any part of an Iron Man movie, then you have a good sense of this level of
immersion in the real world.
Please watch this video before moving further –
https://www.youtube.com/watch?v=Y1TEK2Wf_e8
(You can also find this video in the folder shared with you, named videos for VR ->
augmented reality.)

Example –

The image shown above illustrates hybrid-system virtual reality (augmented reality).
The display, or 3D world, is continuously generated by a device, and the users interact with
the 3D world using their hands and some haptic devices. The users remain in the real world,
with all the simulated objects placed in the real world and interacted with there.
3. Immersive Systems –
This is the type of system we are creating in our project: an immersive 3D world is rendered
in virtual reality using software, and the immersion is achieved using a VR headset and data
gloves for physical interaction.
If you have used a VR headset like the Oculus Rift, HTC Vive, or even the Samsung Gear VR,
you can easily appreciate the level of immersion.
Moreover, you can watch the video linked below -
https://www.youtube.com/watch?v=i4Zt3JZejbg
(You can also find this video in the folder shared with you, named videos for VR -> Virtual
Reality explained!.)

Example –
In the above image you can see a person (the user) wearing a VR headset, next to a computer
display. Whatever the person is watching in the VR headset is also visible on the computer
screen: in the virtual world the user watches through the headset, while in the real world we
watch the same thing on the computer screen.

2.3 Example of Virtual Reality –


This example of Virtual Reality gives a brief understanding of how a person sees a VR
world.

[20]
1. Real World – the room in which the two users are standing in the image above.
2. Haptic Device – the device, visible in the image, that the user holds in his hand so that
he can interact with the virtual world.
3. User – the person using the VR technology.
4. VR Headset – the device the user wears so that he can see the VR world and feel
immersed in it.

5. User in Virtual World – a virtual copy of the user placed in the VR world, so that he
can feel himself to be there.

6. Virtual World – finally, the 3D simulated VR world rendered by a computer using
software, so that a person can immerse himself in a place where he can virtually interact
with things that do not really exist in the real world.

3. CHAPTER 2
How Virtual Reality Works?
3.1 Introduction –
Virtual Reality can be experienced in many ways. For example, if you watch a 360° video on
YouTube on your Android phone using a Google Cardboard (explained in section 3.2), you can
see the whole video, a full 360° view, by rotating your phone or your head. Your phone has a
gyroscope sensor in it which senses the phone's movements.
The next experience comes with an advanced VR headset like the Oculus Rift or HTC Vive.
With this technology you use a computer screen and a VR headset, together with software
which allows your computer to render a VR world for you.
A more advanced setup involves data gloves along with the VR headset. Then you also need
software for tracking your body movements, alongside the VR world rendering software.
In the next section, 3.2, the detailed working of a VR experience is explained along with the
hardware and software specifications.

3.2 Basic Requirements –


For a proper VR experience, the hardware and software requirements are as follows –
A) Hardware -
- If you just want to watch a 360° video from YouTube as a VR experience, you can use a
Google Cardboard.
- If you want to see a VR world from an app on the Play Store, or you want better
visualization, you have to use a VR headset like an Oculus Rift or HTC Vive.
- If you want to interact with the VR world as well as see it, you need a haptic device like
data gloves or a joystick, with embedded sensors and controllers that sense your movements
and connect you directly with the virtual world.

Data Gloves

[Figure: the organism interacts with the VR hardware through tracking and stimulation, and
also interacts with the surrounding physical world.]

A continuous interaction of the organism with the hardware is as important as visualising it.
Moreover, a similar interaction with the physical world also takes place. [21]

B) Software –
i) 3D model creator – This software is used to create the 3D models or meshes which are
further used to create the 3D world.
Example – Autodesk 123D, Blender, etc.
ii) World-building toolkit – This software is used to create the virtual world using the 3D
models and meshes.
Example – Unity Engine, Unreal Engine, etc.
iii) Hardware-based software – Once you are ready with your virtual world, you need to
visualise it using some hardware, so software is needed that will render your VR world on
that hardware.
Example – OSVR, etc.

3.3 A Detailed Understanding of Working –
3.3.1 Introduction –
Here you have a computer screen – a camera for tracking purposes –

a VR headset – data gloves as interactive sensors –

 Working –
With all of the above components set up, you can enter the virtual world.
[21]
3.3.2 Goal –
The main goal of the whole system is to create a life-size, 3D virtual environment without the
boundaries we usually associate with TV or computer screens. So whichever way you look,
the screen mounted to your face follows you.
Display ->
The display of a VR headset works on the principle of stereoscopy.
Stereoscopy – the production of the illusion of depth in a photograph, movie, or other
two-dimensional image by presenting a slightly different image to each eye. The two images
are then combined in the brain to give the perception of depth. [12]

[11]
In the above image, two nearly identical images are placed side by side so that, when seen
through a stereoscope, they create a perception of depth.
The same concept is used in virtual reality.
To better understand how we see in the VR world, it helps to first look at how we see in the
real world.
3.3.3 How do we see? [13]

[14]

In the above image you can see an object in front of the eye, with rays emerging from it. The
lens of the eye adjusts its focus according to the object's distance so as to form the image on
the retina. The sensory receptors on the retina sense the image and send it to the brain.
The same procedure is followed for each eye: whatever each eye sees in the real world is sent
directly to the brain. Our brain fuses the two images formed in the two eyes into a single
image, and hence we see!
3.3.4 How do we see in a VR Headset? [16]
The same concept is used in virtual reality; the main task is creating a sense of depth in
whatever we see.
This is done using stereoscopy (see section 3.3.2).
The headsets we use also create two images side by side, but with good graphics quality and
frame rate.
Now, when you switch on the power source and wear the headset:
Video is sent from the console or computer to the headset via an HDMI cable in headsets
such as the HTC Vive and the Oculus Rift.
VR headsets use either two feeds sent to one display, or two LCD displays, one per eye. There
are also lenses placed between your eyes and the pixels, which is why the devices are often
called goggles. In some instances, these can be adjusted to match the distance between your
eyes, which varies from person to person.
These lenses focus and reshape the picture for each eye and create a stereoscopic 3D image by
angling the two 2D images to mimic how each of our two eyes views the world ever so slightly
differently. Try closing one eye and then the other to see individual objects dance about from
side to side, and you get the idea behind this.

[15]
This is what you will see when you look into the headset from some distance.

3.3.5 Immersion in VR after display
The level of immersion depends on many factors, like head tracking, the level of interaction
with the VR world, smell, 3D audio, etc., but we generally need tracking (head, motion, eye),
audio, and interaction with the display.
Tracking –
i) Head Tracking -
This is done using a head tracker, a device mounted on the headset, so that
whatever you see in the VR world follows you.
Head tracking means that when you wear a VR headset, the picture in front of you shifts as
you look up, down, and side to side, or angle your head.
A 6 DoF (degrees of freedom) system plots your head in terms of the X, Y, and Z axes
to measure head movements forward and backward, side to side, and shoulder to shoulder,
also known as pitch, yaw, and roll.
There are a few different internal components which can be used in a head-tracking system,
such as a gyroscope, an accelerometer, and a magnetometer (this is generally the case when
you use a Google Cardboard).
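As an illustration (not part of the project code), here is a minimal Unity C# sketch of gyroscope-based head tracking of the kind a Google Cardboard viewer relies on; the 90° pre-rotation and sign flips are the usual conversion from the gyroscope's right-handed frame to Unity's left-handed frame:

using UnityEngine;

// Minimal sketch: rotate the camera with the phone's gyroscope.
public class GyroHeadTracking : MonoBehaviour
{
    void Start()
    {
        Input.gyro.enabled = true; // switch the gyroscope on
    }

    void Update()
    {
        // Convert the right-handed gyro attitude to Unity's left-handed frame.
        Quaternion att = Input.gyro.attitude;
        transform.localRotation = Quaternion.Euler(90f, 0f, 0f)
            * new Quaternion(att.x, att.y, -att.z, -att.w);
    }
}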
ii) Motion Tracking –
When you look down with a VR headset on, the first thing you want to do is see your hands
in the virtual space. And when you move in the real world, or move your hands, you want to
make, see, and feel the same in the VR world.
Therefore, a set of wireless controllers is used to make you feel like you are using your own
hands in VR. You grab each controller and use its buttons and triggers.
The basic concept is that a camera placed in front of us tracks our movements, somewhat like
GPS tracks position. This camera tracks every movement of dots or markers (these are placed
on the VR headset), and the data is passed to the computation part, which computes our
movement and replies to the headset, so we finally see the movement in the VR world.
This process takes only a few milliseconds, so that frame rendering does not lag and cause
motion sickness!
In the coming future, eye tracking may also be possible, so that when we rotate our eyes
without moving our head, the tracker follows and our display changes.

iii) Interaction –
This part of the VR experience is most important, because when you touch things and make
actions that affect the world around you, you feel like you are immersed in that world: the
feeling of sensation. Giving people the ability to handle virtual objects has always been a big
part of VR.
Usually this is done using data gloves, which are ordinary gloves with sensors wired to the
outside to detect hand and finger motions. [15] & [16]

[15] [16]
iv) Audio Rendering –
Finally, headphones are used to increase the sense of immersion. App and game developers
use binaural or 3D audio to tap into VR headsets' head-tracking technology, giving the
wearer the sense that sound is coming from behind them, to the side, or in the distance.

4. CHAPTER 3
Geometry of Virtual World
4.1 Introduction –
It is the Virtual World Generator (VWG) which maintains the geometry and physics of the
virtual world. This chapter covers the geometry part, which is needed to make models and
move them around. The models could include the walls of a building, furniture, our power
system equipment, or clouds in the sky.
Section 4.2 explains the basic working of the virtual world along with some concepts in 3D
modelling. Section 4.3 explains the basic structure of a 3D model, called a primitive, and
possible ways of generating it.
Objects or models in a 3D world can be either moving or stationary; the geometry behind
this is explained in section 4.4.
It is very important to continuously track the orientation and position of models in the VR
world, and the geometry to apply for this is explained in section 4.5.
In section 4.6 rotations are explained in the local and global axis systems, using the roll, yaw,
and pitch concept.

4.2 3D Modelling – Basics [22] –

We first need a virtual world to contain the geometric models. For our purposes, it is enough
to have a 3D Euclidean space with Cartesian coordinates. Therefore, let R3 denote the virtual
world, in which every point is represented as a triple (x, y, z). The coordinate axes of our
virtual world are shown in Figure 4.2.1.

A 3D triangle is defined by its three vertices, each of which is a point in R3.
When you import or create a 3D model in your VWG, each imported model has its own 3D
axis system, generated while modelling it. So when you apply a rotation or position transform
to that model, you need to specify whether the transform is made about the model axes or the
global axes.
You can see this in Figure 4.2.1, in which two different axis systems are shown: (X1, Y1, Z1)
is the global axis system and (X2, Y2, Z2) is the local or model axis system.
This concept is used when you want to change a rotation from a code file; there you specify
which axes you want to use, self or global.
Basically, any 3D model can be designed in one of two ways: from primitives like cubes,
cylinders, spheres, hemispheres, cones, and capsules, or from the mesh of a primitive.
In the first way of modelling, you add or subtract primitives to create the 3D model; in the
second, you take one primitive's mesh and distort it in some way to create the required 3D
model.
Examples of the first way are software like SolidWorks and Autodesk 123D; of the second,
Blender, 3ds Max, Maya, etc.
Any 3D model is made up of a basic primitive, which is explained in the next section.

4.3 Primitives – Why triangles?

Geometric models are made of surfaces or solid regions in R3 and contain an infinite number
of points.
Because representations in a computer must be finite, models are defined in terms of
primitives, each of which represents an infinite set of points. The simplest and most useful
primitive is a 3D triangle, as shown in Figure 4.2.1. A planar surface patch that corresponds
to all points "inside" and on the boundary of the triangle is fully specified by the coordinates
of the triangle vertices: (x1, y1, z1), (x2, y2, z2), (x3, y3, z3).
To model a complicated object or body in the virtual world, numerous triangles can be
arranged into a mesh, as shown in Figure 4.3.1.

The basic question that may come to mind after seeing Figure 4.3.1 is: do I really need to
place each and every triangle in 3D space to design my model?

The answer is: not exactly, but you do need to design, rearrange, and create a mesh like the
one above starting from a primitive mesh.

In practice, if you are using 3D modelling software like Blender
and you select a cube mesh, it looks like Figure 4.3.2.

The software then allows you to move each vertex, and each square face is further divided
into two triangles.
If you want deeper detailing of the 3D model, Blender has another option, the level of
subdivision: as you increase the level, finer triangle primitives are generated and you can
edit them.
But if you want to create simple shapes with fewer modifications and less detailing, you can
use Autodesk 123D or SolidWorks, which use primitives as explained previously.
Some more concepts on 3D modelling, meshing, and converting files for use in the VWG are
given in the next sections.

4.4 Stationary vs. Movable Objects –


There are two kinds of models in the virtual world, on the basis of their dynamic nature:
• Stationary models, which keep the same coordinates forever. Examples are streets, floors,
buildings, transformers, transmission towers, wires, etc.
• Movable models, which can be transformed into various positions and orientations.
Examples include vehicles and small furniture; in the power system we have the multimeter
and components that need to be replaced, like a fuse.
The most interesting thing is that the motion of a model can be produced in several ways:
1) Using a tracking system, the model might move to match the user's motions.
2) Alternatively, the user might operate a controller to move objects in the virtual world,
including a representation of himself.
3) Finally, objects might move on their own according to the laws of physics in the virtual
world (the VWG implements almost all the laws of physics for motion): velocity,
acceleration, gravity, kinematics, torque, force, momentum, etc.
If the virtual world is supposed to correspond to familiar environments from the real world,
then the axis scaling should match common units. For example, (1, 0, 0) should mean one
meter to the right of (0, 0, 0). It is also wise to put the origin (0, 0, 0) in a convenient location.
Commonly, y = 0 corresponds to the floor. The locations x = 0 and z = 0 could be in the
centre of the virtual world, so that it divides nicely into quadrants based on sign.
Viewing the models – Of course, one of the most important aspects of VR is how the models
will "look" when viewed on a display. This problem is divided into two parts. The first part
involves determining where the points in the virtual world should appear on the display.
This is accomplished by viewing transformations, which are combined with other
transformations to produce the final result. The second part involves how each part of the
model should appear after taking into account the lighting sources and surface properties
that are defined in the virtual world.
Moving objects, when viewed in the virtual world, should not cause motion sickness;
otherwise you cannot feel immersion and instead feel sick in the VR world.
So the speed, or rather the frame rate, for each object should be defined properly.

4.5 Changing Position and Orientation –

As explained in section 4.2 about Figure 4.2.1, there are two axis systems: the local (or self)
axis system and the global axis system.
Suppose that a movable model has been defined by its position in the global axis system. To
move it, we apply a single transformation to its current position. By combining translation
and rotation, the model can be placed anywhere, and at any orientation, in the virtual world R3.

For translations, consider the following 3D triangle, with its vertices in R3 given as
(x1, y1, z1), (x2, y2, z2), (x3, y3, z3).
Let xt, yt, and zt be the amounts by which we would like to change the triangle's position
along the x, y, and z axes, respectively. The operation of changing position is called
translation, and in the technical terms of the VWG it is transform.position. It is given by
(x1, y1, z1) → (x1 + xt, y1 + yt, z1 + zt)
(x2, y2, z2) → (x2 + xt, y2 + yt, z2 + zt)
(x3, y3, z3) → (x3 + xt, y3 + yt, z3 + zt).
Applying this transformation to the triangle translates it to the desired location; a code
sketch follows.
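As a minimal sketch (illustrative, not the project's code), this is how the same translation is expressed in Unity C#: moving the model's transform shifts every vertex of every triangle in its mesh by (xt, yt, zt):

using UnityEngine;

// Minimal sketch: translate a model by (xt, yt, zt) = (1, 0, 2) in world space.
public class TranslateExample : MonoBehaviour
{
    void Start()
    {
        transform.position += new Vector3(1f, 0f, 2f);
        // Equivalent: transform.Translate(1f, 0f, 2f, Space.World);
    }
}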

4.6 Rotations –
For rotation of an object, consider Figure 4.5.1.

In the above figure, say we want to rotate our object by:
x° about the x axis,
y° about the y axis,
z° about the z axis.

Then you have to use the transform.rotation concept in the VWG (explained in the next
sections). Moreover, for rotation the concept of yaw, pitch, and roll is used, as shown in
Figure 4.6.2.

The axis system used for the above model is the local axis system, because if you want to
rotate your model while keeping it in place, you have to use its local axes, which are the roll,
yaw, and pitch axes.
You can simply give a rotation command and see which axis is which; a sketch of both local
and global rotation follows.
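A minimal Unity C# sketch (illustrative) of the two choices of axes; Space.Self rotates about the model's own roll/yaw/pitch axes, while Space.World uses the global axes:

using UnityEngine;

// Minimal sketch: rotate 30 degrees per second about the y axis.
public class RotateExample : MonoBehaviour
{
    void Update()
    {
        // About the model's own (local) y axis:
        transform.Rotate(0f, 30f * Time.deltaTime, 0f, Space.Self);
        // Or about the global y axis instead:
        // transform.Rotate(0f, 30f * Time.deltaTime, 0f, Space.World);
    }
}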

5. CHAPTER 4
Lights and Camera
5.1 Introduction –
As you know, without light it is impossible for us to see anything; similarly, if the virtual
world has no light, you will not be able to see around you. In the real world we have the sun
as our natural source of light, while bulbs and tube lights are artificial and man-made.
In the virtual world you can use the same concept to create shadows and lighting effects,
which is explained in this chapter.
In the real world, a camera decides what we want to keep for the future as memories of
today's life; in the virtual world, it acts as our eye.
In the real world we see whatever comes into the field of view of our eye, but what about the
virtual world? The camera in the virtual world acts as our eye, whose field of view can be
adjusted along with its relative position, as explained in this chapter.

5.2 Camera [23] –

5.2.1 Basic Concepts -
The virtual world is made up of scenes, and each scene is created by arranging and moving
objects in a three-dimensional space. Since the viewer's screen is two-dimensional, there
needs to be a way to capture a view and "flatten" it for display. This work is done by the
camera.
A camera is an object that defines a view in scene space. The object's position defines the
viewpoint, while the forward (Z) and upward (Y) axes of the object define the view direction
and the top of the screen, respectively. The camera component also defines the size and shape
of the region that falls within the view, i.e., the field of view.
With these parameters set up, the camera can display what it currently "sees" to the screen. As
the camera object moves and rotates, the displayed view moves and rotates accordingly.
Our VWG camera offers two views of the same scene -
Perspective and orthographic cameras:

The same scene shown in perspective mode (left) and orthographic mode (right).
A camera in the real world, or a human eye, sees the world in a way that makes objects look
smaller the farther they are from the point of view. This is known as perspective, and it is
very important for making a scene realistic.
A camera that does not diminish the size of objects with distance is referred to as orthographic.
The perspective and orthographic modes of viewing a scene are known as camera projections;
a short sketch of choosing the projection from code follows.
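As a minimal sketch (illustrative; the values are assumptions), the projection can be chosen from a Unity C# script on the camera object:

using UnityEngine;

// Minimal sketch: switch a camera between perspective and orthographic.
public class ProjectionExample : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.orthographic = false; // perspective projection
        cam.fieldOfView = 60f;    // vertical field of view in degrees
        // For orthographic mode instead:
        // cam.orthographic = true;
        // cam.orthographicSize = 5f; // half the vertical view size
    }
}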
5.2.2 The shape of the viewed region -
Both perspective and orthographic cameras have a limit on how far they can “see” from their
current position. The limit is defined by a plane that is perpendicular to the camera’s forward
(Z) direction. This is known as the Far Clipping Plane since objects at a greater distance from
the camera are “clipped” (i.e., excluded from rendering). There is also a corresponding near
clipping plane close to the camera - the viewable range of distance is that between the two
planes.
When perspective is used, objects appear to diminish in size as the distance from the camera
increases. This means that the width and height of the viewable part of the scene grow with
increasing distance.
5.2.3 Using more than one camera -
When created, a scene contains just a single camera and this is all you need for most situations.
However, you can have as many cameras in a scene as you like and their views can be
combined in different ways, as described below.

Switching cameras -
By default, a camera renders its view to cover the whole screen and so only one camera view
can be seen at a time.

23 | P a g e
By disabling one camera and enabling another from a script, you can "cut" from one camera
to another to give different views of a scene. You might do this, for example, to switch between
an overhead map view and a first-person view, as in the sketch below.
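A minimal Unity C# sketch of such a cut (illustrative; the two Camera fields are assigned in the Inspector, and the key choice is arbitrary):

using UnityEngine;

// Minimal sketch: cut between two cameras when C is pressed.
public class CameraSwitcher : MonoBehaviour
{
    public Camera firstPersonCamera;
    public Camera overheadCamera;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.C))
        {
            bool showingFirstPerson = firstPersonCamera.enabled;
            firstPersonCamera.enabled = !showingFirstPerson; // disable one ...
            overheadCamera.enabled = showingFirstPerson;     // ... enable the other
        }
    }
}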

5.3 Lights –
5.3.1 Basic Concepts -
Lights are an essential part of every scene. While meshes and textures define the shape and
look of a scene, lights define the colour and mood of your 3D environment. You will likely
work with more than one light in each scene. Making them work together requires a little
practice, but the results can be quite amazing and effective.
Figure 5.3.1 shows a scene in which two types of light source are used: a directional light
and a point light.
Their effects and working are explained in the next section.

5.3.2 Types of lights -
This section explains the types of lights I used in my project; these are also the basic types
generally used for lighting -
A) Point lights -
A point light is located at a point in space and sends light out in all directions equally. The
direction of light hitting a surface is the line from the point of contact back to the centre of the
light object. The intensity diminishes with distance from the light, reaching zero at a specified
range. Light intensity is inversely proportional to the square of the distance from the source.
This is known as ‘inverse square law’ and is similar to how light behaves in the real world.
Point lights are useful for making lamps and other local sources of light in a scene. You can
also use them to make a spark or explosion illuminate its surroundings in a convincing way.

Effect of a Point Light in the scene


B) Spot lights -
Like a point light, a spot light has a specified location and range over which the light falls off.
However, the spot light is constrained to an angle, resulting in a cone-shaped region of
illumination. The centre of the cone points in the forward (Z) direction of the light object.
Light also diminishes at the edges of the spot light's cone. Widening the angle increases the
width of the cone, and with it the size of this fade.
Spot lights are generally used for artificial light sources such as flashlights, car headlights and
searchlights. With the direction controlled from a script or animation, a moving spot light will
illuminate just a small area of the scene and create dramatic lighting effects.

This type of light can also be used to focus on a particular area of an object. For example, I
created a spot light in front of the multimeter display and focused it on the screen so that the
display can be seen clearly; a sketch of setting up such a light from code follows.
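A minimal Unity C# sketch of such a spot light (illustrative; in the project the light was set up in the editor, and the values here are assumptions):

using UnityEngine;

// Minimal sketch: create a spot light aimed at this object, e.g. a display.
public class SpotLightSetup : MonoBehaviour
{
    void Start()
    {
        GameObject lightObj = new GameObject("Display Spot Light");
        Light lt = lightObj.AddComponent<Light>();
        lt.type = LightType.Spot;
        lt.spotAngle = 30f; // cone angle in degrees
        lt.range = 2f;      // intensity reaches zero at this distance
        // Place the light in front of the object and aim its forward (Z)
        // axis, i.e. the centre of the cone, at the object.
        lightObj.transform.position = transform.position + Vector3.up;
        lightObj.transform.LookAt(transform);
    }
}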

Effect of a Spot Light in the scene


C) Directional lights -
Directional lights are very useful for creating effects such as sunlight in your scenes. Behaving
in many ways like the sun, directional lights can be thought of as distant light sources which
exist infinitely far away. A directional light does not have any identifiable source position and
so the light object can be placed anywhere in the scene. All objects in the scene are illuminated
as if the light is always from the same direction. The distance of the light from the target object
is not defined and so the light does not diminish.
Directional lights represent large, distant sources that come from a position outside the range
of the game world. In a realistic scene, they can be used to simulate the sun.

Effect of a Directional Light in the scene
With the light angled to the side, parallel to the ground, sunset effects can be achieved; with
the light angled from above, the sky will resemble daylight.
Also, when creating a directional light, by changing its transform.rotation you can create
shadows of objects.
I used a directional light as the sun in my project.

6. CHAPTER 5
Software for Project Work and Working
6.1 Basic Terms and Concepts of Project –
So finally, in this chapter you will find the application of all the previous chapters and the
project work done so far.
Moreover, you will also get to know how I proceeded, which software we used, and how it
works.
The title of the project is “Virtual Reality Simulation Training System for Substation”, which
means that we have to create a power system substation with all the equipment that we use
in the real world.
Training for the substation means that we need to train the operator on the various faults that
can occur there and that need to be corrected in time. Most of these faults cannot be created
on real equipment, hence this project.
In previous chapters you learnt that we need a Virtual World Generator to create a virtual
environment. In this project that VWG is the Unity3D game engine, software that provides a
platform to create your own virtual world!
Now that you know what has to be built, it is important to know everything this project
needs for completion –
 Identify appropriate software and language for the project work.
 Build the virtual substation model in Unity.
 Create a first-person character, with movement using mouse and keyboard.
 Identify faults in the power system and their correction.
 Model and include power system components according to the faults.
 Develop controllers according to the faults, or use data gloves.
 Enable virtual interaction with components using controllers or data gloves.
 Create audio and set the frame rate for a real feel of the virtual world.
The first three tasks needed for the project have been accomplished by me; the rest are with
you.
In the following sub-sections I explain 3D modelling, Unity, and its scripting.
6.2 3D Modelling Software: Autodesk 123D, Blender –
Now you have Unity3D as the VWG, but without an environment and 3D models it is
useless.
We need 3D models, and for this task I chose Autodesk 123D as my 3D modelling software;
it lets you create your own 3D models.
You can learn how to make 3D models in this software from the tutorials on their website
and on YouTube.
The installation file is placed in the Softwares folder given to you.
(https://www.autodesk.com/solutions/3d-modeling-software)
For many models you may need to visit the substation at IISc.
For more detailed 3D models you may have to use Blender as your 3D modelling software.
While working in 123D, make sure you export your file to Autodesk Meshmixer, which will
create a mesh file of your 3D model.
You can then import your 3D model into Unity by converting the .obj file into a .fbx file
using the FBX Converter.
All the software mentioned above is in the Softwares folder; install and use it!

6.3 Virtual World Generator – Unity3D –

6.3.1 Unity3D –
Now that you know about 3D modelling, it is time to start importing 3D models into Unity
and interacting with them.
Create a new project – File -> New Project.
A scene window then opens; save the scene from File -> Save Scene.
Now you have a window in front of you as the 3D world.
Unity works on two principles. The first is that you create a 3D model in some other software
with colour, texture, and shaders on it; when you import that model into Unity, it
automatically creates a folder named Materials which contains all the details about the
colour, texture, and shaders of that file.
The second is that you create a 3D model using Unity's primitives and apply materials,
textures, and shaders in Unity; then you need to create a folder named Materials, add
materials to that folder, and apply them to the model.
Moreover, interaction in Unity means how your models behave: whether they are rigid
bodies, act as colliders, are stationary objects, or move with keyboard commands.
All these modifications to the models are made through scripting in Unity.

This means you have to write code for the interaction.
The languages used for scripting in Unity are C# and UnityScript (a JavaScript-like language).
To learn how to work in Unity and about its scripting, refer to the links below -
https://unity3d.com/learn/tutorials/topics/interface-essentials
https://unity3d.com/learn/tutorials/s/graphics
https://unity3d.com/learn/tutorials/topics/user-interface-ui
https://docs.unity3d.com/Manual/index.html?_ga=2.253256649.1189927596.1499864463-
1507710294.1499256764

6.3.2 Scripting in Unity –

For detailed information about how to code in Unity for different types of interaction, use
this link –
https://unity3d.com/learn/tutorials/s/scripting
Whenever you want to create a game object, or want a game object to act as a rigid body,
you can go to the Rigidbody section in the link above and study it.
Then apply your own logic about its variation and behaviour and write the code for it; a
minimal sketch is given below.
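For example, a minimal sketch (illustrative; the mass and force values are assumptions) of giving a game object rigid-body behaviour so the VWG's physics acts on it:

using UnityEngine;

// Minimal sketch: make this game object a rigid body and push it.
public class MakeRigidBody : MonoBehaviour
{
    void Start()
    {
        Rigidbody rb = gameObject.AddComponent<Rigidbody>();
        rb.mass = 1f;         // kilograms
        rb.useGravity = true; // the object now falls and collides
        rb.AddForce(Vector3.forward * 5f, ForceMode.Impulse); // initial push
    }
}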

6.4 Build, Visualization and Hardware –

Now you know everything you need for the project and how to work with it. Regarding the
project work I did: you can find a folder named substation, and in it an icon.

Click on this icon, then click on the play button.

You can also open this file in the Unity editor by opening the project and selecting the same
icon. There you will find a scene named substation; double-click on that icon and our
substation window will pop up –

And you can see the substation –

1) Front View -

2) Side View –

3) Top View-

Moving around the scene, you can find a motor, a multimeter, and a table in the substation.
1) Motor –

2) Multimeter –

3) Table -

The screen will look like this –

In the Hierarchy window you will find all the game objects created, including the
first-person controller. In the Assets folder in the Project window you will find all the
folders created for the scene, materials, and different objects. In the Inspector window you
will find all the components used, such as transforms, colliders, mesh renderers, and
attached scripts.
The scene of the substation is in the folder named Substation Environment_WE.

6.4.1 Scripts / Codes -


The scripts written are placed in the folder –
Standard Assets -> Character Controllers -> Scripts

Here you will find five scripts –

All code is commented, and a text file explaining how it works is attached to every script.
A brief explanation of each is given below –
1) FPSInput controller –
When you press the play icon in the scene window, you can move around the 3D world
thanks to the first-person controller, created using a capsule primitive, a camera, a pointer,
and a script.
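The actual script ships with Unity's Standard Assets; a simplified sketch of the same idea is:

using UnityEngine;

// Simplified sketch: WASD/arrow-key movement via a CharacterController.
[RequireComponent(typeof(CharacterController))]
public class SimpleFPSInput : MonoBehaviour
{
    public float speed = 5f; // metres per second

    void Update()
    {
        CharacterController cc = GetComponent<CharacterController>();
        Vector3 move = transform.right * Input.GetAxis("Horizontal")
                     + transform.forward * Input.GetAxis("Vertical");
        cc.SimpleMove(move * speed); // SimpleMove applies gravity itself
    }
}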
2) Mouse Look -
This script gives you a 360-degree view while staying in one place: you move the mouse to
look all around the substation.
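Again, the real script comes from Unity's Standard Assets; a simplified sketch (assuming the camera is a child of the controller object):

using UnityEngine;

// Simplified sketch: yaw the body with mouse X, pitch the camera with mouse Y.
public class SimpleMouseLook : MonoBehaviour
{
    public float sensitivity = 2f;
    private float pitch = 0f;

    void Update()
    {
        // Turn the whole controller left/right (yaw).
        transform.Rotate(0f, Input.GetAxis("Mouse X") * sensitivity, 0f);

        // Tilt only the camera up/down (pitch), clamped so the view
        // cannot flip over.
        pitch = Mathf.Clamp(pitch - Input.GetAxis("Mouse Y") * sensitivity,
                            -80f, 80f);
        Camera.main.transform.localEulerAngles = new Vector3(pitch, 0f, 0f);
    }
}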
3) Multimeter –
The multimeter has a variable display: initially it reads 00.0, and when you click the play
icon the value becomes about 220 (using Random.value).
It has two independent testers, one red and one black, whose movement will be handled
through the data gloves.
The display is created as a GUI Text driven by the script.
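A hedged sketch of the display logic (names and exact values are illustrative; the project's actual script may differ, and GUIText is Unity's legacy text component):

using UnityEngine;

// Sketch: a multimeter display that starts at 00.0 and then shows ~220 V.
public class MultimeterDisplay : MonoBehaviour
{
    public GUIText displayText; // legacy GUI Text on the multimeter screen

    void Start()
    {
        displayText.text = "00.0";
    }

    void Update()
    {
        // Random.value lies in [0, 1]; jitter the reading around 220 V.
        displayText.text = (220f + Random.value).ToString("F1");
    }
}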
4) Ray Casting -
The heart and soul of object movement, this script is very useful when you move objects
using controllers or data gloves. A pointer is defined in the FPS controller object; using it, a
ray is cast in the forward direction, and the script returns the name and distance of the object
in the virtual world that the ray falls on.
By using and editing this script you can define which objects can be moved independently
and which remain stationary once you have data gloves.
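A hedged sketch of the ray-casting idea (illustrative; the project's script has more logic around it):

using UnityEngine;

// Sketch: cast a ray forward from the pointer and report what it hits.
public class PointerRaycast : MonoBehaviour
{
    public float maxDistance = 10f;

    void Update()
    {
        Ray ray = new Ray(transform.position, transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, maxDistance))
        {
            // Name and distance of the object the ray falls on.
            Debug.Log(hit.collider.name + " at " + hit.distance + " m");
        }
    }
}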

5) Rotating Fan –
This is a small script for a rotating fan or motor.
You can change the speed of the fan by clicking on setaxismotor in the Hierarchy window
and then editing the Speed field in the Inspector window.
This motor can be used while designing various equipment of a power system substation.
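A hedged sketch of the idea (the public speed field is what appears in the Inspector, as described above; the spin axis is an assumption):

using UnityEngine;

// Sketch: spin a fan/motor blade at an editable speed.
public class RotatingFan : MonoBehaviour
{
    public float speed = 180f; // degrees per second, editable in the Inspector

    void Update()
    {
        transform.Rotate(0f, speed * Time.deltaTime, 0f, Space.Self);
    }
}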
As for the hardware, you have to decide which controllers you want to use, such as data
gloves, or whether you want to design your own.
Based on that hardware, you have to develop the hardware-based software for virtual world
rendering.

References
1. “Tracking Systems in Virtual Reality”, https://www.vrs.org.uk/virtual-reality-gear/tracking.html
2. “Virtual Reality”, https://en.wikipedia.org/wiki/Virtual_reality
3. “Totally new in Virtual Reality”, http://www.businessinsider.in/My-brain-did-something-crazy-and-totally-new-in-virtual-reality/articleshow/51720413.cms
4. “Substation Technician”, https://www.trico.coop/files/Journeyman-Substation-Technician-2-16-17.pdf
5. “Oculus Rift”, https://www.extremetech.com/gaming/174661-valve-announces-steamvr-an-oculus-rift-mode-for-steam
6. Zongzhan, D. U. “Development of Virtual Reality Simulation Training System for Substation.” (2016).
7. Ribeiro, Tiago Ramos, et al. “Agito: Virtual Reality Environment for Power Systems Substations Operators Training.” International Conference on Augmented and Virtual Reality. Springer International Publishing, 2014.
8. “Unity3D website”, https://unity3d.com/
9. “Stereoscopy Image File”, https://en.wikipedia.org/wiki/Stereoscopy#/media/File:Charles_Street_Mall,_Boston_Common,_by_Soule,_John_P.,_1827-1904_3.jpg
10. “Stereoscopy”, https://en.wikipedia.org/wiki/Stereoscopy
11. “Eye Vision”, https://www.sightsavers.org/eye/
12. “Eye Image File”, https://www.google.co.in/search?q=image+of+eye+with+a+object+in+front+of+it&source=lnms&tbm=isch&sa=X&ved=0ahUKEwj0nNyE9oXVAhXLN48KHYY0DpEQ_AUICigB&biw=1366&bih=613#imgrc=qvzcbQERPRdLlM:
13. “VR Headset Image File”, https://www.google.co.in/search?q=vr+headset&source=lnms&tbm=isch&sa=X&ved=0ahUKEwi4vbSe9oXVAhULOI8KHebMDgwQ_AUICigB&biw=1366&bih=613#tbm=isch&q=how+we+see+in+vr+headset
14. “How VR Works”, https://www.wareable.com/vr/how-does-vr-work-explained
15. “Types of Virtual Reality”, https://appreal-vr.com/blog/virtual-reality-and-its-kinds/
16. “What is VR”, http://vr.isdale.com/WhatIsVR/frames/WhatIsVR4.1-Types.html
17. https://www.youtube.com/watch?v=-aLoNt1j02M
18. https://www.youtube.com/watch?v=dq2RSlslQcU
19. “Bird’s Eye View”, Steven M. LaValle, http://vr.cs.uiuc.edu/vrch2.pdf
20. “Geometry of VR World”, Steven M. LaValle, http://vr.cs.uiuc.edu/vrch3.pdf
21. https://unity3d.com/learn/tutorials/projects/space-shooter-tutorial/camera-and-lighting
22. “Interface Essentials in Unity”, https://unity3d.com/learn/tutorials/topics/interface-essentials
23. “Graphics in Unity”, https://unity3d.com/learn/tutorials/s/graphics
24. “GUI Text in Unity”, https://unity3d.com/learn/tutorials/topics/user-interface-ui
25. https://docs.unity3d.com/Manual/index.html?_ga=2.253256649.1189927596.1499864463-1507710294.1499256764
26. “Scripting in Unity”, https://unity3d.com/learn/tutorials/s/scripting