
Chapter 1

A History of Virtual Reality

1.1 Definition of Virtual Reality

The term virtual reality (VR) was coined by the American computer scientist Jaron Lanier in the late 1980s. It refers to a virtual environment: a visualization of complex data representing an imagined or stylized place. The popular media commonly use the term to describe imaginary worlds that exist only in computers and in our minds. However, let us define the term more precisely. Sherman and Craig [2003] point out in their book Understanding Virtual Reality that Webster’s New Universal Unabridged Dictionary [1989] defines virtual as “being in essence or effect, but not in fact” and reality as “the state or quality of being real. Something that exists independently of ideas concerning it. Something that constitutes a real or actual thing as distinguished from something that is merely apparent.”
Thus, virtual reality is a term that contradicts itself: an oxymoron! Fortunately, merriam-webster.com [Merriam-Webster 2015] has more recently defined the full term virtual reality as “an artificial environment which is experienced through sensory stimuli (as sights and sounds) provided by a computer and in which one’s actions partially determine what happens in the environment.”
Virtual reality can thus be defined as a computer-generated digital environment that can be experienced and interacted with as if it were real. It uses computer modelling and simulation to let a person interact with an artificial three-dimensional environment. This 3D environment conveys a sense of reality through interactive devices that send and receive information and are worn as goggles, headsets, gloves, or body suits. In other words, virtual reality uses computer graphics to simulate physical presence in an artificial or virtual environment and to create a realistic-looking world. Virtual reality is a real-time, interactive technology: the computer detects the inputs given by the user and modifies the virtual world instantaneously.

1.2 Need for Virtual Reality


Owing to increasing advances in technology and the growing needs of users, virtual reality is nowadays considered one of the most promising and efficient technologies; it has not only overcome limitations of augmented reality but has also made human life simpler and easier. Some of the growing needs that virtual reality addresses are as follows:
1) It simulates the real world dynamically through the integration of computer software, hardware, and virtual-world technologies.
2) It can simulate physical presence in places in the real world as well as in imaginary worlds.
3) It lets us take part in the action in a safe virtual environment, without any real danger.
4) It helps us visualize working environments where people cannot go, such as Mars or extremely low-temperature environments, by recreating the same atmospheric conditions with computer graphics software, headsets, gloves, and so on, and giving users the same sense of physical presence.

1.3 History of VR
Today’s virtual reality technologies build upon ideas that date back to the 1830s, almost to the very beginning of practical photography. In 1838, the first stereoscope was invented, using twin mirrors to reflect a separate image into each eye. That eventually developed into the View-Master, patented in 1939 and still produced today.

In 1987 Jaron Lanier, founder of VPL Research (the Visual Programming Lab), coined (or at least popularized) the term “virtual reality” while developing the gear, including goggles and gloves, needed to experience what he called virtual reality. Lanier is often considered the founding father of VR.

The concept of virtual reality has been around for decades, even though the public only really became aware of it in the early 1990s. In the mid-1950s, a cinematographer named Morton Heilig envisioned a theatre experience that would stimulate all of his audience’s senses, drawing them into the stories more effectively. In 1960 he built a single-user console called the Sensorama that included a stereoscopic display, fans, emitters, stereo speakers, and a moving chair. He also invented a head-mounted television display designed to let a user watch television in 3D. Users were passive audiences for the films, but many of Heilig’s concepts would find their way into the VR field. Morton Heilig has been called the “Father of Virtual Reality” in several books and articles. He was one of the great visionaries of his time: a philosopher, inventor, and filmmaker, and in general a man who looked toward the future and was far ahead of his time.
Philco Corporation engineers developed the first HMD, called the Headsight, in 1961. Developed by Comeau and Bryan, the Headsight placed a video screen in front of the viewer's eyes. A magnetic tracking system kept track of head movement, which was used to steer a remote camera. Although this was not virtual reality, since it lacked a computer simulation, it was the first step in the evolution of the VR HMD. The helmet included a video screen and a tracking system, which the engineers linked to a closed-circuit camera system. They intended the HMD for use in dangerous situations: a user could observe a real environment remotely, adjusting the camera angle by turning his head. Bell Laboratories used a similar HMD for helicopter pilots. They linked HMDs to infrared cameras attached to the bottoms of helicopters, which allowed pilots to have a clear field of view while flying in the dark. The term cyberspace was coined by William Gibson in his 1984 science-fiction novel Neuromancer.

Cyberspace is thought of as the ultimate virtual reality environment. It is an alternative computer universe where data exists like cities of light. Information workers use a special virtual reality system to enter cyberspace and to travel its data highways. This gives them the experience of being physically free to go anywhere.

Nowadays computer graphics is used in many domains of our lives. It is difficult to imagine an architect, engineer, or interior designer working without a graphics workstation. In recent years the rapid development of microprocessor technology has brought faster and faster computers to the market, equipped with better and faster graphics boards whose prices fall rapidly. It has become possible even for an average user to move into the world of computer graphics. This fascination with a new reality often starts with computer games and lasts forever. It allows us to see the surrounding world in another dimension and to experience things that are not accessible in real life or not yet created. Moreover, the world of three-dimensional graphics has neither borders nor constraints and can be created and manipulated by ourselves as we wish; we can enhance it by a fourth dimension: the dimension of our imagination. But that is not enough: people always want more. They want to step into this world and interact with it, instead of just watching a picture on the monitor. This technology, which has become overwhelmingly popular and fashionable in recent decades, is called Virtual Reality (VR).

1.4 Evolution of Virtual Reality:


In 1950, flight simulators were built by the US Air Force to train student pilots. In 1965, a research program for computer graphics called “The Ultimate Display” was laid out. Until that time VR was just a concept and was not very popular. In 1988, commercial development of VR began, and in 1991 the first commercial entertainment VR system was released.

The very first idea of it was presented by Ivan Sutherland in 1965: “make that (virtual) world in the window look real, sound real, feel real, and respond realistically to the viewer’s actions” [Suth65]. A long time has passed since then and a lot of research has been done, but the status quo remains: “the Sutherland’s challenge of the Promised Land has not been reached yet but we are at least in sight of it” [Broo95].
Let us take a short glimpse at the last few decades of research in virtual reality and its highlights:
i. Sensorama – The Sensorama machine was invented in 1957 and patented in 1962 under patent #3,050,870. Morton Heilig created this multi-sensory simulator: a pre-recorded film in colour and stereo was augmented by binaural sound, scent, wind, and vibration experiences. This was the first approach to creating a virtual reality system, and it had all the features of such an environment, but it was not interactive. The Sensorama simulated the experience of a motorcycle ride through Brooklyn, characterized by several sensory impressions, such as audio, olfactory, and haptic stimuli, including wind, to provide a realistic experience (Heilig, 1962).

Figure 1.1 Morton Heilig’s Sensorama created the experience of being fully immersed in
film.

ii. The Ultimate Display – In 1965 Ivan Sutherland proposed the ultimate virtual reality solution: an artificial world construction concept that included interactive graphics, force feedback, sound, smell, and taste.

iii. “The Sword of Damocles” – The first virtual reality system realized in hardware, not just in concept. Ivan Sutherland constructed a device considered the first Head-Mounted Display (HMD), with appropriate head tracking. It supported a stereo view that was updated correctly according to the user’s head position and orientation.

iv. VIDEOPLACE – In 1969 Myron Krueger, a VR artist, developed a series of “artificial reality” experiences called GLOWFLOW, METAPLAY, PSYCHIC SPACE, and VIDEOPLACE. Krueger coined the term Artificial Reality in 1975 for “a conceptual environment, with no existence”. In VIDEOPLACE the computer had control over the relationship between the participant’s image and the objects in the graphic scene.

v. VCASS – In 1982 Thomas Furness at the US Air Force’s Armstrong Medical Research Laboratories developed the Visually Coupled Airborne Systems Simulator, an advanced flight simulator. The fighter pilot wore an HMD that augmented the out-the-window view with graphics describing targeting or optimal flight-path information.

Figure 1.2 The Sword of Damocles

vi. VIVED – Virtual Visual Environment Display – constructed at NASA Ames in 1984 from off-the-shelf technology: a stereoscopic monochrome HMD.

Figure 1.3 VIVED By NASA


vii. VPL – The VPL company manufactured the popular DataGlove (1985) and the Eyephone HMD (1988), the first commercially available VR devices.

viii. BOOM – commercialized in 1989 by Fake Space Labs. The BOOM is a small box containing two CRT monitors that can be viewed through eye holes. The user can grab the box, hold it up to the eyes, and move through the virtual world as the mechanical arm measures the position and orientation of the box.

Figure 1.4 Mechanical tracking device-BOOM from Fake Space Labs

ix. CAVE – presented in 1992. The CAVE (CAVE Automatic Virtual Environment) is a virtual reality and scientific visualization system. Instead of using an HMD, it projects stereoscopic images onto the walls of a room (the user must wear LCD shutter glasses). This approach assures superior quality and resolution of the viewed images, and a wider field of view, in comparison to HMD-based systems.
x. SEGA VR (1993) – The Sega VR was a virtual reality headset created in the early 1990s. It was an adaptation of a similar headset that Sega was using in arcades, but this device was marketed as a portable home gaming system. The headset itself, an HMD, was outfitted with LCD screens in the visor coupled with stereo headphones; motion was captured by inertial sensors in the headset. Marketed at $200, the final product never saw a public release because of development difficulties, including the device causing motion sickness and severe headaches in users.

Figure 1.5 CAVE Figure 1.6 Sega VR


xi. Virtual Boy – In 1995, Nintendo released the Virtual Boy, which was marketed as the first console capable of displaying stereoscopic 3D. Stereoscopic refers to a pair of images which, when viewed through a specific medium (in this case a head-mounted display), combine to give the brain the illusion of 3D depth. The Virtual Boy did not see a long shelf life: it was available in the U.S. for only seven months before being discontinued, largely due to its high price.

xii. Street View – In 2007, Google introduced Street View, a service that shows panoramic views of a growing number of locations worldwide, such as roads, building interiors, and rural areas. It also features a stereoscopic 3D mode, introduced in 2010.

xiii. Oculus Rift Prototype: In 2010, Palmer Luckey created a prototype of his modern, lightweight
VR headset, the Oculus Rift.

xiv. Virtual Reality Viewer: In 2011 Apple released their iPhone ‘Virtual Reality’ Viewer. This device works in tandem with an iPhone to create immersive, 3D viewing experiences. Basically, you hold the iPhone Virtual Reality Viewer just like a pair of binoculars, and you can manipulate the iPhone’s touchscreen display via a couple of finger holes in the bottom.

xv. Tactile Haptics: In 2013 the Tactile Haptics VR motion controller was released.

xvi. Google Cardboard (2014) – Google Cardboard is a device developed to serve as a VR platform for smartphones. It was created as a low-cost system to introduce and encourage interest in the VR platform.

Figure 1.7 Google Cardboard Figure 1.8 Samsung Gear VR

xvii. Samsung Gear VR: In 2015, Samsung released the Samsung Gear VR, a headset compatible only with Samsung smartphones. Microsoft also announced its development of the HoloLens.

xviii. Oculus Rift VR headset: In 2016 Facebook made the Oculus Rift VR headset commercially available, HTC released its top-of-the-line Vive VR headset, and Sony released its PlayStation VR headset for use with the PS4.
In 2016, HTC shipped the first units of the HTC Vive SteamVR headset. This marked the first major commercial release of sensor-based room-scale tracking, allowing free movement of users within a defined space.
A patent filed by Sony in 2017 showed they were developing location-tracking technology similar to the Vive’s for PlayStation VR, with the potential for the development of a wireless headset. Rather than cheap LCD shades, the headset has a pair of 0.7-inch 1280 x 720 HD OLED screens inside. It takes a single HDMI connection for audio and video, rather than several component or composite video cords.
The Oculus Rift S was announced on 20 March 2019.

Chapter 2
Introduction to Virtual Reality

2.1 Forms of Reality

Reality takes many forms and can be considered to range along a virtuality continuum from the real environment to virtual environments [Milgram and Kishino 1994]. Figure 2.1 shows various forms along that continuum. The forms that lie between the real and virtual environments are broadly defined as “mixed reality,” which can be further broken down into “augmented reality” and “augmented virtuality.”
There are several different forms along this continuum, some of which are listed below:
Augmented reality - Digital information overlaid on or consolidated with the actual world, and vice versa. It superimposes virtual elements on reality, forming possible scenarios that behave as if real while still maintaining their virtual character.
Virtual reality - A full digital rendering/representation of the real world, our reality.
Mixed reality - A combination of virtual reality and reality.
Virtuality - A virtual portrayal of contingent or non-contingent possibilities.

Figure 2.1 The virtuality continuum.

The real environment is the real world that we live in. Although creating real-world experiences is not always the goal of VR, it is still important to understand the real world and how we perceive and interact with it, in order to replicate relevant functionality in VR experiences.
Instead of replacing reality, augmented reality (AR) adds cues onto the already existing real
world, and ideally the human mind would not be able to distinguish between computer-
generated stimuli and the real world. Augmented virtuality (AV) is the result of capturing
real-world content and bringing that content into VR. Immersive film is an example of
augmented virtuality. In the simplest case, the capture is taken from a single viewpoint, but in other cases real-world capture can consist of light fields or geometry, where users can freely move about the environment, perceiving it from any perspective.
True virtual environments are artificially created without capturing any content from the real world. The goal of virtual environments is to engage a user so completely in an experience that she feels as if she is present in another world, such that the real world is temporarily forgotten, while minimizing any adverse effects.
What’s the Difference Between Virtual Reality and Augmented Reality?
Virtual Reality and Augmented Reality are two sides of the same coin. You could think of
Augmented Reality as VR with one foot in the real world: Augmented Reality simulates
artificial objects in the real environment; Virtual Reality creates an artificial environment to
inhabit.
In Augmented Reality, the computer uses sensors and algorithms to determine the position and
orientation of a camera. AR technology then renders the 3D graphics as they would appear
from the viewpoint of the camera, superimposing the computer-generated images over a user’s
view of the real world.
In Virtual Reality, the computer uses similar sensors and math. However, rather than locating a real camera within a physical environment, the position of the user’s eyes is located within the simulated environment. If the user’s head turns, the graphics react accordingly. Rather than compositing virtual objects into a real scene, VR technology creates a convincing, interactive world for the user.
Benford, Greenhalgh, Reynard, Brown & Koleva (1998) propose a classification linked to the
artificiality and transportation perceived by the user (Fig. 2.2).

Figure 2.2 Classification of shared spaces according to transportation and artificiality (Benford et al., 1998)

2.2 VR Is Communication
Normally, communication is thought of as interaction between two or more people. But communication can be defined more abstractly as the transfer of energy between two entities, even if it is just the cause and effect of one object colliding with another. Communication can also
be between human and technology, an essential component and basis of VR. VR design is
concerned with the communication of how the virtual world works, how that world and its
objects are controlled, and the relationship between user and content: ideally where users are
focused on the experience rather than the technology. Well-designed VR experiences can be
thought of as collaboration between human and machine where both software and hardware
work harmoniously together to provide intuitive communication with the human. Developers
write complex software to create, if designed well, seemingly simple transfer functions to
provide effective interactions and engaging experiences.
Communication can be broken down into direct communication and indirect communication
as discussed below.
2.2.1 Direct Communication
Direct communication is the direct transfer of energy between two entities, with no intermediary and no interpretation attached. In the real world, pure direct communication between entities doesn’t represent anything, since its purpose is not communication; any meaning is a side effect. However, in VR, developers insert an artificial intermediary (the VR system that is
ideally unperceivable) between the user and carefully controlled sensory stimuli (e.g., shapes,
motions, sounds). When the goal is direct communication, VR creators should focus on making
the intermediary transparent so users feel like they have direct access to those entities. If that
can be achieved, then users will perceive, interpret, and interact with stimuli as if they are
directly communicating with the virtual world and its entities.
Direct communication consists of structural communication and visceral communication.
2.2.1.1 Structural Communication
Structural communication is the physics of the world, not the description or the mathematical
representation but the thing-in-itself. An example of structural communication is the bouncing
of a ball off of the hand. We are always in relationship to objects, which help to define our
state; e.g., the shape of our hand around a controller. The world, as well as our own bodies,
directly tells us what the structure is through our senses. Although thinking and feeling do not
exist within structural communication, such communication does provide the starting point for
perception, interpretation, thinking, and feeling.

2.2.1.2 Visceral Communication
Visceral communication is the language of automatic emotion and primal behaviour, not the
rational representation of the emotions and behaviour. Visceral communication is always
present for humans and is the in-between of structural communication and indirect
communication. Presence is the act of being fully engaged via direct communication (albeit
primarily one way). Examples of visceral communication are the feeling of awe while sitting
on a mountaintop, looking down at the earth from space, or being with someone via solid eye
contact (whether in the real world or via avatars in VR).
2.2.2 Indirect Communication
Indirect communication connects two or more entities through some intermediary. The
intermediary need not be physical; in fact, the intermediary is often our mind’s interpretation
that sits between the world and behaviour/action. Once we interpret and give something
meaning, then we have transformed the direct communication into indirect communication.
Indirect communication includes what we normally think of as language, such as spoken and
written language, as well as sign languages and our internal thoughts (i.e., communicating with
oneself). Indirect communication consists of talking, understanding, creating stories/histories,
giving meaning, comparing, negating, fantasizing, lying, and romancing.

2.3 Types of virtual reality

There are several types of virtual reality, including the following: Immersive Virtual Reality,

Non-Immersive Virtual Reality, and Hybrid Virtual Reality.

2.3.1 Immersive Virtual Reality


An immersive system replaces our real-world view with computer-generated images that respond to the position and orientation of the user’s head. A Head-Mounted Display (HMD) is used to view such an environment. In a completely immersive system, the user actually feels part of the environment (experiences a feeling of presence); here, the user has no visual contact with the physical world.

Figure 2.3 Immersive virtual reality Figure 2.4 Non-immersive Virtual Reality
2.3.2 Non-Immersive Virtual Reality
A non-immersive system, on the other hand, leaves the user visually aware of the real world while able to observe the virtual world through some display device such as a graphics workstation. It is also called a semi-immersive system. Advanced flight, ship, and vehicle simulators are semi-immersive forms of virtual reality: the cockpit, bridge, or driving seat is a physical model, whereas the view of the world outside is computer-generated (typically projected).

2.3.3 Hybrid Virtual Reality


It allows the user to see the real world with virtual images superimposed over this view. Such systems are also called “augmented virtual reality” systems.

Figure 2.5 Hybrid Virtual Reality

2.4 3I’s of Virtual Reality: Imagination, Immersion, Interaction


Burdea and Coiffet proposed the 3 I’s of virtual reality: immersion, imagination, and interaction. In a virtual reality environment, a user experiences immersion, the feeling of being inside and a part of that world. The user is also able to interact with the environment in meaningful ways. The combination of a sense of immersion and interactivity is called telepresence. Computer scientist Jonathan Steuer defined it as “the extent to which one feels present in the mediated environment, rather than in the immediate physical environment.” In other words, an effective VR experience causes you to become unaware of your real surroundings and to focus on your existence inside the virtual environment.

Figure 2.6 The 3 I’s of Virtual Reality


Chapter 3

Technical Aspects of Virtual Reality

3.1 Working Principle of VR


Virtual reality is a way to create a computer-generated environment that immerses the user in a virtual world. When we put on a VR headset, it takes us to a simulated set-up, cutting us off completely from our actual surroundings.
The primary subject of virtual reality is simulating vision. Every headset aims to perfect its approach to creating an immersive 3D environment. Blink 3D Builder, for example, is a proprietary authoring tool for creating immersive 3D environments, which can then be viewed using the Blink 3D Viewer on the Web or locally.
The virtual reality system works on the following principle:

It first tracks the user’s physical movements in the real world; then a computer redraws the virtual world to reflect those movements; finally, the updated virtual world is sent to the output (to the user in the real world).

In this case, the output is sent back to a head-mounted display. The user therefore feels “immersed” in the virtual world, as if actually inside it, since all they can see is the virtual environment rendered in response to their movements.
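This track/update/render cycle can be sketched as a simple loop. The function names below are hypothetical placeholders standing in for the tracker, simulation, renderer, and head-mounted display of a real system, not any actual VR API:

```python
def run_vr_loop(read_pose, update_world, render, present, frames):
    """One VR cycle per frame: track movement, update the world, render, output.

    The four callables are placeholders for the tracker, the world
    simulation, the renderer, and the head-mounted display.
    """
    for _ in range(frames):
        pose = read_pose()            # 1. track physical movement in the real world
        scene = update_world(pose)    # 2. redraw the virtual world to reflect it
        image = render(scene, pose)   # 3. render from the user's viewpoint
        present(image)                # 4. send the result to the HMD
```

At a typical 90 Hz display rate, the entire cycle must complete in roughly 11 ms for the illusion of immersion to hold.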

Figure 3.1 Virtual Reality hardware structure

The idea behind VR is to deliver a sense of being there by giving at least the eye what it would have received if it were there and, more importantly, to have the image change instantly as the point of view changes (Smith & Lee, 2004). The perception of spatial reality is driven by various visual cues, like relative size, brightness, and angular movement. One of the strongest is perspective, which is particularly powerful in its binocular form, in that the right and left eyes see different images. Fusing these images into one 3D perception is the basis of stereo vision.

The perception of depth provided by each eye seeing a slightly different image, binocular parallax, is most effective for objects very near you; objects farther away cast essentially the same image on each eye. The typical dress code for VR is a helmet with goggle-like displays, one for each eye. Each display delivers a slightly different perspective image of what you would see if you were there. As you move your head, the image rapidly updates so that you feel you are making these changes by moving your head (versus the computer actually following your movement, which it is). You feel you are the cause, not the effect.

3.2 Reality systems


A reality system is the hardware and operating system upon which full sensory experiences are built. The reality system’s job is to communicate the application content to and from the user effectively and intuitively, as if the user were interacting with the real world. Humans and computers do not speak the same language, so the reality system must act as a translator or intermediary between them (note that the reality system also includes the computer). It is the VR creator’s obligation to integrate content with the system so that the intermediary is transparent, and to ensure that objects and system behaviours are consistent with the intended experience.
Communication between the human and the system is achieved via hardware devices. These devices serve as input and/or output. A transfer function, as it relates to interaction, is a conversion from human output to digital input or from digital output to human input. What counts as output and what counts as input depends on whether it is seen from the point of view of the system or the human. For consistency, input is considered information traveling from the user into the system, and output is feedback that goes from the system back to the user. This forms a cycle of input/output that occurs continuously for as long as the VR experience lasts. This loop can be thought of as occurring between the action and distal-stimulus stages of the perceptual process, where the user is the perceptual process.
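As a concrete illustration of a transfer function from human output to digital input, consider mapping a thumbstick deflection to a movement speed. This is a hypothetical sketch; the dead zone and quadratic response curve are common design choices rather than a fixed standard:

```python
def stick_to_speed(deflection, max_speed=3.0, dead_zone=0.1):
    """Transfer function: thumbstick deflection in [-1, 1] -> speed in m/s.

    A dead zone ignores sensor noise near the centre position; a quadratic
    response curve gives finer control at small deflections.
    """
    if abs(deflection) < dead_zone:
        return 0.0
    sign = 1.0 if deflection > 0 else -1.0
    scaled = (abs(deflection) - dead_zone) / (1.0 - dead_zone)
    return sign * max_speed * scaled ** 2
```

A full deflection maps to the maximum speed, while small deflections near the centre are ignored entirely.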
Figure 3.2 shows a user and a VR system divided into their primary components of input,
application, rendering, and output. Input collects data from the user such as where the user’s
eyes are located, where the hands are located, button presses, etc. The application includes
non-rendering aspects of the virtual world including updating dynamic geometry, user
interaction, physics simulation, etc. Rendering is the transformation of a computer-friendly format to a user-friendly format that gives the illusion of some form of reality; it includes visual rendering, auditory rendering (called auralization), and haptic (the sense of touch) rendering. An example of rendering is drawing a sphere. Rendering is already well defined, and other than high-level descriptions and elements that directly affect the user experience, the technical details are not the focus of this book. Output is the physical representation directly perceived by the user (e.g., a display with pixels or headphones with sound waves).

Figure 3.2 A VR system consists of input from the user, the application, rendering, and output to the user.
The primary output devices used for VR are visual displays, speakers, haptics, and motion
platforms. More exotic displays include olfactory (smell), wind, heat, and even taste displays.
Selecting appropriate hardware is an essential part of designing VR experiences. Some
hardware may be more appropriate for some designs than others.

3.3 Technical aspects of VR


Various technical aspects of virtual reality technology are the Input Process, the Simulation Process, and the Rendering Process.
3.3.1 Input Process
This process controls the input devices such as the keyboard, joystick, 3D position trackers (glove, wand, body suit), and voice recognition systems. Some glove systems can also add gesture recognition. The objective is to get the coordinate data from the input devices to the rest of the system.
3.3.2 Simulation Process
This process is the core of a virtual reality program. It handles the interactions, simulates physical laws, and determines the world status. It is a discrete process that is iterated once for each time step or frame, and it ultimately decides what actions take place in the virtual world.
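A discrete per-frame simulation step of this kind can be sketched as follows. This is a minimal illustrative example with one object obeying a simple physical law; the event format and state fields are invented for the sketch:

```python
def simulation_step(state, events, dt):
    """One discrete simulation step: apply user interactions, then physics.

    state  - the world status for a single object: position and velocity
    events - interactions collected by the input process this frame
    dt     - the time step, e.g. 1/90 s for a 90 Hz system
    """
    for event in events:
        state["velocity"] += event.get("impulse", 0.0)  # apply interactions
    state["position"] += state["velocity"] * dt         # integrate motion
    return state
```

The simulation process calls a step like this once per frame, feeding its updated world status to the rendering process.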
3.3.3 Rendering Process
This process creates the sensations that are output to the user or to other network processes. There are separate rendering processes for the different senses: Visual Rendering, Auditory Rendering, and Haptic Rendering.
i. Visual Rendering
Visual rendering is related to computer graphics and animation. This process is also referred to as the rendering pipeline. It consists of a series of sub-processes involved in generating each frame. It begins with information about the world: the objects, the lighting, and the camera (eye) location in world space. The objects’ geometries are transformed into the eye coordinate system, and then the shading algorithms and actual pixel rendering are applied.
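The pipeline stages just described can be illustrated with a minimal world-to-pixel transformation. This is a simplified sketch: the eye transform is translation-only and the projection is a bare pinhole model, whereas a real pipeline also handles rotation, clipping, and shading:

```python
def world_to_eye(point, eye_position):
    """Transform a world-space point into the eye coordinate system (translation only)."""
    return tuple(p - e for p, e in zip(point, eye_position))

def eye_to_pixel(eye_point, focal, width, height):
    """Perspective-project an eye-space point and map it to pixel coordinates."""
    x, y, z = eye_point
    ndc_x = focal * x / z                    # perspective divide
    ndc_y = focal * y / z
    px = int((ndc_x + 1.0) / 2.0 * width)    # map [-1, 1] to [0, width)
    py = int((1.0 - ndc_y) / 2.0 * height)   # flip y: screen origin is top-left
    return (px, py)
```

A point directly ahead of the eye lands at the centre of the image, as expected.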
ii. Auditory Rendering
Auditory rendering generates mono, stereo, or 3D audio. Many aspects of our head and ear shape affect the perception of 3D sound; hence a head-related transfer function (HRTF) is applied to the sound.
iii. Haptic Rendering
Haptic rendering is a newly growing science, and there is much more to be learned about it. Haptics is the generation of touch and force-feedback information. Almost all systems today aim to include force feedback.

3.4 Architecture of a VR system:

The main components in the architecture of a virtual reality system are the input processor, simulation processor, rendering processor, world database, input devices, and output devices.

Figure 3.3 The Architecture of a VR System


As we can see in the figure above, the input processor controls the devices used to input
information to the computer (mouse, trackers, the voice recognition system) and sends the
coordinate data to the rest of the system with minimal lag time. The simulation processor is
the core of the VR system. It takes the user input along with any programmed tasks and
determines the actions that will take place in the virtual world. The rendering processor
creates the sensations that are output to the user; different rendering processes are used for
haptic, visual, auditory and other sensory systems. A VR system also has a world database,
which stores the objects of the virtual world and the scripts that describe the actions of
those objects.

A VR system is made up of two major subsystems: the hardware and the software. The
hardware can be further divided into the computer (VR engine) and I/O devices, while the
software can be divided into application software and the database.

In general, input devices are responsible for interaction, output devices for the feeling of
immersion, and software for proper control and synchronization of the whole environment.

3.5 Virtual Reality system Hardware


The major components of the hardware are the VR engine or computer system, input devices
and output devices.

3.5.1 VR Engine/Computer Workstation

In VR systems, the VR engine or computer system has to be selected according to the
requirements of the application. Graphic display and image generation are among the most
important and time-consuming tasks in a VR system. The choice of the VR engine depends
on the application field, the user, the I/O devices, the level of immersion and the graphic
output required, since it is responsible for calculating and generating graphical models, object
rendering, lighting, mapping, texturing, simulation and display in real time. The computer also
handles the interaction with users and serves as an interface with the I/O devices.

A major factor to consider when selecting the VR engine is the processing power of the
computer, i.e., the amount of sensory output (graphical, sound, haptic, etc.) that can be
rendered in a particular time frame. The VR engine is required to recalculate the virtual
environment approximately every 33 ms, producing a real-time simulation of more than
24 frames per second; furthermore, the associated graphics engine should be capable of
producing stereoscopic vision. The VR engine could be a standard PC with more processing
power and a powerful graphics accelerator, or distributed computer systems interconnected
through a high-speed communication network. The computer workstation is used to control
the several sensory display devices that immerse the user in the 3D virtual environment.
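The update-rate requirement above reduces to simple arithmetic, sketched here as a back-of-the-envelope check (the function names are placeholders for the example):

```python
# Back-of-the-envelope frame-budget check for a VR engine.

def frame_rate(frame_time_ms):
    # Frames per second achievable if each frame takes frame_time_ms.
    return 1000.0 / frame_time_ms

def meets_requirement(frame_time_ms, min_fps=24.0):
    # A 33 ms frame budget yields roughly 30 fps, above the 24 fps floor.
    return frame_rate(frame_time_ms) >= min_fps
```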

3.5.2 VR Input Devices

3.5.2.1 Position and orientation tracking devices

Tracking devices are essential components in any VR system. They communicate with the
system's processing unit, telling it the orientation of the user's point of view. In systems that
allow a user to move around within a physical space, trackers detect where the user is, the
direction of movement and the speed.

Figure 3.4 Objects Orientation


The absolute minimum of information that immersive VR requires is the position and
orientation of the viewer's head, needed for the proper rendering of images. Additionally,
other parts of the body may be tracked, e.g., the hands to allow interaction, or the chest or
legs to allow a graphical user representation. Three-dimensional objects have six degrees of
freedom (DOF): position coordinates (x, y and z offsets) and orientation (for example, yaw,
pitch and roll angles). Each tracker must deliver this data or a subset of it. In general there
are two kinds of trackers: those that deliver absolute data (total position/orientation values)
and those that deliver relative data (i.e., the change since the last state).
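The difference between absolute and relative trackers can be sketched as follows. This is a hypothetical illustration showing only the x, y, z position portion of the six DOF; the function names are placeholders:

```python
# Absolute trackers report the full position; relative trackers report
# deltas that must be accumulated to recover the absolute position.

def apply_absolute(_pose, reading):
    # An absolute reading replaces the current pose outright.
    return reading

def apply_relative(pose, delta):
    # A relative reading is added to the last known pose.
    return tuple(p + d for p, d in zip(pose, delta))

def integrate(initial, deltas):
    # Accumulate a stream of relative readings into an absolute pose.
    pose = initial
    for d in deltas:
        pose = apply_relative(pose, d)
    return pose
```

Note that a relative tracker accumulates any measurement error over time, which is one practical reason absolute trackers are preferred for head tracking.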

From a user's perspective, this means that when you wear an HMD, the view shifts as you look
up, down, left and right. It also changes if you tilt your head at an angle or move your head
forward or backward without changing the angle of your gaze. The trackers on the HMD tell
the CPU where you are looking, and the CPU sends the right images to your HMD's screens.

Every tracking system has a device that generates a signal, a sensor that detects the signal and
a control unit that processes the signal and sends information to the CPU. The signals sent from
emitters to sensors can take many forms, including electromagnetic signals, acoustic signals,
optical signals and mechanical signals. Each technology has its own set of advantages and
disadvantages.

Such a system tracks the position and orientation of the user in the virtual environment.
Tracking systems can be divided into mechanical, electromagnetic, ultrasonic (acoustic) and
optical (infrared) trackers.

Electromagnetic tracking systems: Magnetic trackers are the most often used tracking
devices in immersive applications. They typically consist of a static part (the emitter,
sometimes called the source), a number of movable parts (the receivers, sometimes called
sensors) and a control unit. The emitter and receiver have a very similar construction: both
consist of three mutually perpendicular antennae. As the antennae of the emitter are driven
with current, they generate magnetic fields that are picked up by the antennae of the
receiver. The receiver sends its measurements (nine values) to the control unit, which
calculates the position and orientation of the given sensor. A good electromagnetic tracking
system is very responsive, with low levels of latency. One disadvantage is that anything that
can generate a magnetic field can interfere with the signals sent to the sensors.

Figure 3.5 Magnetic Tracker Figure 3.6 Acoustic Tracker

Acoustic tracking systems: These systems emit and sense ultrasonic sound waves to
determine the position and orientation of a target. Most measure the time it takes for the
ultrasonic sound to reach a sensor. Usually the sensors are stationary in the environment and
the user wears the ultrasonic emitters. The system calculates the position and orientation of
the target based on the time it took for the sound to reach the sensors.

Acoustic trackers use ultrasonic waves (above 20 kHz) to determine the position and
orientation of objects in space. As sound only allows the determination of the relative
distance between two points, multiple emitters (typically three) and multiple receivers
(typically three) with known geometry are used to acquire a set of distances from which
position and orientation are calculated. There are two kinds of acoustic trackers: they use
either time-of-flight (TOF) or phase-coherent (PC) measurements to determine the distance
between a pair of points. TOF trackers (e.g., the Logitech 6DOF Ultrasonic Head Tracker,
the Mattel PowerGlove) measure the flight time of short ultrasonic pulses from the source to
the sensor. PC trackers compare the phase of a reference signal with the phase of the signal
received by the sensors; a phase difference of 360° is equivalent to a distance of one
wavelength.
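Both measurement principles reduce to simple formulas, sketched below. The 343 m/s figure is an assumed speed of sound in air at roughly room temperature (and, as noted later, it varies with temperature, humidity and pressure):

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed value for air at about 20 degrees C

def tof_distance(flight_time_s, c=SPEED_OF_SOUND):
    # Time-of-flight: distance = speed of sound * travel time.
    return c * flight_time_s

def phase_distance(phase_deg, wavelength_m):
    # Phase-coherent: a 360 degree phase difference corresponds to one
    # wavelength of extra path length.
    return (phase_deg / 360.0) * wavelength_m
```

For example, a 10 ms flight time corresponds to about 3.43 m, and a 180° phase shift at a 2 cm wavelength corresponds to 1 cm.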

Acoustic tracking systems have many disadvantages. Sound travels relatively slowly, so the
rate of updates on a target's position is similarly slow. The environment can also adversely
affect the system's efficiency because the speed of sound through air can change depending on
the temperature, humidity or barometric pressure in the environment.

Optical tracking devices: They use light to measure a target's position and orientation. The
signal emitter in an optical device typically consists of a set of infrared LEDs. The sensors are
cameras that can sense the emitted infrared light. The LEDs light up in sequential pulses. The
cameras record the pulsed signals and send information to the system's processing unit. The
unit can then extrapolate the data to determine the position and orientation of the target.

Figure 3.7 Optical Tracker a) outside-looking-in; b) inside-looking-out


There are many different kinds and configurations of optical trackers. Generally, we can divide
them into three categories:
• beacon trackers – this approach uses a group of beacons (e.g., LEDs) and a set of cameras
capturing images of beacons’ pattern. Since the geometries of beacons and detectors are known,
position and orientation of the tracked body can be derived. There are two tracking paradigms:
outside-in and inside-out

• pattern recognition – these systems do not use any beacons; they determine position and
orientation by comparing known patterns to the sensed ones. No fully functioning systems
have been developed so far. A through-the-lens method of tracking may become a challenge
for developers.

• laser ranging – these systems project laser light, passed through a diffraction grating, onto
the object. A sensor analyzes the diffraction pattern on the body's surface to calculate its
position and orientation.

Optical systems have a fast update rate, meaning latency issues are minimized. Their
disadvantages are that the line of sight between a camera and an LED can be obscured,
interfering with the tracking process, and that ambient light or infrared radiation can also
make the system less effective.

Mechanical tracking systems: These systems rely on a physical connection between the
target and a fixed reference point. A common example of a mechanical tracking system in the
VR field is the BOOM (Binocular Omni-Orientation Monitor) developed by Fakespace Labs. A
BOOM display is an HMD mounted on the end of a mechanical arm that has two points of
articulation. The system detects position and orientation through the arm. The update rate
of mechanical tracking systems is very high, but their disadvantage is that they limit the
user's range of motion.

Figure 3.8 Mechanical Tracker

3.5.2.2 3D Input devices
Besides trackers that capture the user's movements, many other input devices have been
developed to make human-computer interaction easier and more intuitive. For full freedom of
movement, three-dimensional input devices seem the most natural. Attached to our body or
hand-held, they are generally used to select, move and modify virtual objects.
There are two types of 3D input devices: navigation and gesture input devices. Navigation
interfaces allow relative position control of virtual objects, while gesture interfaces allow
dexterous control of virtual objects and interaction through gesture recognition. Navigation
input devices include the Cubic Mouse, the trackball and the 3D probe. They perform relative
position/velocity control of virtual objects and allow "fly-by" applications by controlling a
virtual camera. Gesture input devices are sensing gloves such as the Fakespace Pinch Glove,
the 5DT Data Glove, the DidjiGlove and the Immersion CyberGlove. They have a larger work
envelope than trackballs and 3D probes, but need calibration for the user's hand.

Cubic Mouse: an input device that allows users to intuitively specify three-dimensional
coordinates in graphics applications. The device consists of a cube-shaped box with three
perpendicular rods passing through its center and buttons on top for additional control.
The rods represent the X, Y and Z axes of a given coordinate system; pushing and pulling the
rods specifies constrained motion along the corresponding axes. Embedded within the device
is a six-degree-of-freedom tracking sensor, which allows the rods to be continually aligned
with a coordinate system located in the virtual world.

Figure 3.9 Cubic Mouse Figure 3.10 Trackball

Trackball: A virtual trackball is a tool for controlling 3D rotations by moving a 2D mouse; it
works by simulating a physical trackball. Rotation is controlled by projecting the mouse
movement onto a virtual trackball, which in turn controls the actual rotation of the object.
Virtual trackballs allow rotation along several dimensions simultaneously and integrate the
controller and the object controlled, as in direct manipulation. Their main drawback is the
lack of a thorough mathematical description of the projection from mouse movement onto a
rotation.

Gloves: Gloves are 3D input devices that can detect the joint angles of the fingers. The
measurement of finger flexion is done with fiber-optic sensors (e.g., the VPL DataGlove),
foil-strain technology (e.g., the Virtex CyberGlove) or resistive sensors (e.g., the Mattel
PowerGlove). Gloves allow the user richer interaction than a 3D mouse, because hand
gestures may be recognized and translated into appropriate actions [Mine95a]. Additionally,
gloves are equipped with a tracker attached to the user's wrist to measure its position
and orientation.
Gloves have played a role in the VR craze from the very beginning, even though the original
designers didn't necessarily intend for them to be used in VR systems. Using a wired glove,
you can interact with virtual objects by making various hand gestures. Many people call the
gloves DataGloves or Power Gloves, though both those terms specifically refer to particular
models of gloves and are not generic terms. Not all gloves work the same way, though all share
the same purpose: allowing the user to manipulate computer data in an intuitive way.
Some gloves measure finger extension through a series of fiber-optic cables. Light passes
through the cables from an emitter to a sensor. The amount of light that makes it to the sensor
changes depending on how the user holds his fingers -- if he curls his fingers into a fist, less
light will make it to the sensor, which in turn sends this data to the VR system's CPU. In general,
this sort of glove needs to be calibrated for each user in order to work properly. The official
Data Glove is a fiber-optic glove.
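The per-user calibration described above can be sketched as a linear mapping from each user's measured light range to a joint angle. This is a toy model with made-up readings; real gloves use per-sensor calibration curves:

```python
# Toy flex-sensor calibration: map a raw light reading to a joint angle.

def calibrate(open_reading, fist_reading):
    # Record the sensor value with the hand flat (open_reading) and
    # curled into a fist (fist_reading); return a mapping function.
    def to_angle(reading, max_angle=90.0):
        span = open_reading - fist_reading
        bend = (open_reading - reading) / span  # 0.0 = open, 1.0 = fist
        bend = min(max(bend, 0.0), 1.0)         # clamp out-of-range readings
        return bend * max_angle
    return to_angle
```

For instance, after calibrating with readings of 1.0 (open) and 0.2 (fist), a reading of 0.6 maps to a 45° joint bend.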

Figure 3.11 Pinch Gloves Figure 3.12 Data Glove (Pair)

Other gloves use strips of flexible material coated in an electrically conductive ink to measure
a user's finger position. As the user bends or straightens his fingers, the electrical resistance
along the strips changes. The CPU interprets the changes in resistance and responds
accordingly. These gloves are less accurate than fiber-optic gloves, but they also tend to be
much less expensive.
Of course, if you want a really accurate and responsive glove, you should use a dexterous hand
master (DHM). The DHM uses sensors attached to each finger joint. You attach the sensors to
your joints with mechanical links, which means the glove is like an exoskeleton. These gloves
are more accurate than either fiber-optic gloves or those using electrically conductive material,
but they are also cumbersome and clunky.
Pinch glove enables natural interaction with objects. It uses hand-signs to execute actions. It
continuously tracks the motion of the user’s hand & limb and accordingly gives signal to the
transmitter.
Gloves in virtual reality allow the user to interact with the virtual world. For example, the user
may pick up a virtual block, turn it over in a virtual hand, and set it on a virtual table.
Wired with thin fibre optic cables, some gloves use light-emitting diodes (LEDs) to detect the
amount of light passing through the cable in relation to the movement of the hand or joint.
The computer then analyses the corresponding information and projects this moving hand into
the virtual reality. Magnetic tracking systems also are used to determine where the hand is in
space in relation to the virtual scene. People who experience VR are often seen wearing gloves
made with fibreoptic cables or have light-emitting diodes. These materials allow the system to
record hand or joint movement, and then the computer will project the movement to a virtual
scene. In some cases, gesture-sensors are also used in gloves.

3.5.3 VR output devices


3.5.3.1 Visual (Sensory) displays
Sensory displays are used to display the simulated virtual worlds to the user. The most common
sensory displays are the computer visual display unit, the head-mounted display (HMD) for 3D
visual and headphones for 3D audio.

Two display technologies are currently available on the market:

CRT – cathode ray tube displays are based on conventional television technology. They offer
relatively good image quality: high resolution (up to 1600x1280), a sharp image and high
contrast. Their disadvantages are high weight and high power consumption. They also
generate strong, high-frequency magnetic fields that may be hazardous to the user's eyes
and may degrade the quality of measurements of magnetic trackers.

LCD – liquid crystal display screens are a relatively new alternative to standard CRT displays.
LCDs are flat and lightweight, and have low power consumption and lower emissions than
CRTs. Their biggest disadvantage is poorer image quality: lower contrast, brightness and
resolution (typically up to 720x480).

Beyond CRTs and LCDs, the virtual retinal display (VRD) has been proposed. A VRD prototype
developed at the HITLab uses modulated laser light to project the image directly onto the
user's retina.

Today's virtual reality systems implement displays in one of three ways: head-mounted
displays, world-fixed displays and hand-held displays.
i. Head mounted displays:
Head-mounted display (HMD) units use a small screen or screens (one for each eye) worn in
a helmet or a pair of glasses. Unlike a movie, where the director controls what the viewer
sees, an HMD allows viewers to look at an image from various angles or change their field of
view by simply moving their heads. HMD units usually employ cathode-ray tube (CRT) or
liquid crystal display (LCD) technology.
CRTs incorporate optic systems that reflect an image onto the viewer's eye. Although bulkier
and heavier than LCD displays, CRT systems create images that have extremely high
resolutions, making a scene seem that much more realistic. Head-mounted displays place a
screen in front of each of the viewer's eyes at all times. As the head moves, a new view (the
segment of the virtual environment generated and displayed by the computer) is generated
from the new perspective. In most cases, a set of optical lenses and mirrors is used to
enlarge the view to fill the field of view and to direct the scene to the eyes (Lane, 1993).
(HMD) is a visual display that is more or less rigidly attached to the head. Position and
orientation tracking of HMDs is essential for VR because the display and earphones move with
the head.

HMDs can be further broken down into three types: non-see-through HMDs, video-see-through
HMDs, and optical-see-through HMDs. Non-see-through HMDs block out all cues from the
real world and provide optimal full-immersion conditions for VR. Optical-see-through HMDs
enable computer-generated cues to be overlaid onto the visual field and promise the ideal
augmented reality experience. Conveying that ideal experience with optical-see-through
head-mounted displays is extremely challenging, however, due to various requirements
(extremely low latency, extremely accurate tracking, optics, etc.). Because of these
challenges, video-see-through HMDs are sometimes used instead. Video-see-through HMDs
are often considered to be augmented virtuality, and share some advantages and
disadvantages of both augmented reality and virtual reality.

ii. World-fixed displays render graphics onto surfaces and audio through speakers that do not
move with the head. Displays take many forms, ranging from a standard monitor (also known
as fish-tank VR) to displays that completely surround the user (e.g., CAVEs and CAVE-like
displays). Head tracking is important for world-fixed displays, but accuracy and latency
requirements are typically not as critical as they are for head-mounted displays because stimuli
are not as dependent upon head motion. High-end world-fixed displays with multiple surfaces
and projectors can be highly immersive but are more expensive in dollars and space.
World-fixed displays typically are considered to be part virtual reality and part augmented
reality, because real-world objects, such as a physical chair, are easily integrated into the
experience. However, it is often the intent that the user's body is the only visible real-world
cue.

Figure 3.13 CAVE (Cave Automatic Virtual Environment)

A CAVE is a small room or cubicle where at least three walls (and sometimes the floor and
ceiling) act as giant monitors. The display gives the user a very wide field of view -- something
that most head-mounted displays can't do. Users can also move around in a CAVE system
without being tethered to a computer, though they still must wear a pair of funky goggles that
are similar to 3-D glasses.
The active walls are actually rear-projection screens. A computer provides the images projected
on each screen, creating a cohesive virtual environment. The projected images are in a
stereoscopic format and are projected in a fast alternating pattern. The lenses in the user's
goggles have shutters that open and shut in synchronization with the alternating images,
providing the user with the illusion of depth.
Tracking devices attached to the glasses tell the computer how to adjust the projected images
as you walk around the environment. Users normally carry a controller wand in order to interact
with virtual objects or navigate through parts of the environment. More than one user can be in
a CAVE at the same time, though only the user wearing the tracking device will be able to
adjust the point of view -- all other users will be passive observers.
iii. Hand-held displays are output devices that can be held with the hand(s) and do not require
precise tracking or alignment with the head/eyes (in fact the head is rarely tracked for hand-
held displays). Hand-held augmented reality, also called indirect augmented reality, has
recently become popular due to the ease of access and improvements in smartphones/tablets.
In addition, system requirements are much less since viewing is indirect—rendering is
independent of the user’s head and eyes.
The Binocular Omni-Orientation Monitor (BOOM) is mounted on a jointed mechanical arm
with tracking sensors located at the joints. A counterbalance is used to stabilize the monitor,
so that when the user releases it, it remains in place. To view the virtual environment, the
user must take hold of the monitor and put her face up to it. Developed and commercialized
by Fakespace Labs, BOOMs are complex devices supporting both mechanical tracking and
stereoscopic display technology. Two visual displays (for stereo viewing) are placed in a box
mounted on a mechanical arm; the user grabs the box and looks at the monitors through two
holes. Because the mechanical construction usually includes a counterbalance, the displays
used in BOOMs need be neither small nor lightweight, so CRT technology can be used for
better resolution and image quality.

3.5.3.2 Haptic displays


Haptic sensations perceived by humans can be divided into two main groups:
• kinesthetic (force) feedback – forces sensed by the muscles, joints and tendons.
• tactile feedback – includes feedback through the skin, like sense of touch, temperature,
texture or pressure on the skin surface.
Haptics are artificial forces between virtual objects and the user’s body. "Haptics" refers to the
sense of touch, so a haptic system is one that provides the user with physical feedback. A
joystick with force-feedback technology is one example of a haptic interface device. Passive
haptics are a little different in that they don't actively exert force against a user. Instead, passive
haptics are objects that physically represent virtual elements in a VR environment. For instance,
a real folding table might double as a virtual kitchen counter. Having something real to touch
in a virtual environment enhances the user's sense of immersion and helps him navigate through
the simulation. Many haptic systems also serve as input devices.
Haptics can be classified as passive (static physical objects) or active (physical feedback
controlled by the computer), tactile (through skin) or proprioceptive force (through
joints/muscles), and self-grounded (worn) or world-grounded (attached to real world).
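A common way active force feedback is computed is a penalty-based spring model: when the haptic probe penetrates a virtual surface, the rendered force is proportional to the penetration depth. The sketch below is a minimal illustration; the stiffness value is an assumption for the example:

```python
# Minimal penalty-based haptic rendering: force = stiffness * penetration
# depth, pushing the probe back out of the virtual surface.

def haptic_force(probe_depth, stiffness=500.0):
    # probe_depth: how far the probe has penetrated the surface, in metres.
    # Returns the restoring force magnitude in newtons; zero outside
    # the surface.
    if probe_depth <= 0.0:
        return 0.0
    return stiffness * probe_depth
```

With a stiffness of 500 N/m, a 2 mm penetration produces a 1 N restoring force; higher stiffness makes surfaces feel harder but demands a faster update loop for stability.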

3.5.3.3 Motion Platforms: A motion platform is a hardware device that moves the entire
body resulting in a sense of physical motion and gravity. Such motions can help to convey a
sense of orientation, vibration, acceleration, and jerking. Common uses of platforms are for
racing games, flight simulation, and location-based entertainment. When integrated well with
the rest of a VR application, motion sickness can be reduced by decreasing the conflict between
visual motion and felt motion.

Motion platforms can be active or passive. An active motion platform is controlled by the
computer simulation; an example is a platform that moves its base via hydraulic actuators. A
passive motion platform is controlled by the user; for example, the tilting of a passive motion
platform might be achieved by leaning forward.

3.5.3.4 Treadmills: Treadmills provide a sense that one is walking or running while actually
staying in one place. Variable-incline treadmills, individual foot platforms, and mechanical
tethers providing restraint can convey hills by manipulating the physical effort required to
travel forward. A treadmill is useful because the user remains stationary with respect to the real
world, but feels as if he is actually walking through the virtual environment.
Some companies have developed omni-directional treadmills. These devices allow a user to
step in any direction. Normal treadmills use a single motor, which exerts force either forward
or backward relative to the user. Omni-directional treadmills use two motors -- from the user's
perspective the treadmill can exert force forward, backward, left or right. With both motors
working together, the treadmill can allow a user to walk in any direction he chooses on a
walking surface wrapped around a complex system of belts and cables. Omnidirectional
treadmills enable simulation of physical travel in any direction and can be active or passive.
Active omnidirectional treadmills have computer-controlled mechanically moving parts.
These treadmills move the treadmill surface in order to recenter the user on the
treadmill (e.g., Darken et al. 1997 and Iwata 1999). Unfortunately, such recentering can cause
the user to lose balance. Passive omnidirectional treadmills contain no computer-controlled
mechanically moving parts. For example, the feet might slide along a low-friction surface.
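Recentering on an active omnidirectional treadmill can be sketched as a proportional controller that drives the belt surface to cancel the user's offset from the platform center. This is a hypothetical illustration, not any particular treadmill's control law; the gain and time step are assumed values:

```python
# Toy recentering controller for an active omnidirectional treadmill:
# the belt velocity opposes the user's offset from the platform center.

def belt_velocity(user_x, user_y, gain=0.5):
    # Proportional control: drive the surface so the user drifts back
    # toward (0, 0). Too high a gain risks upsetting the user's balance.
    return (-gain * user_x, -gain * user_y)

def step(user_pos, dt=0.1, gain=0.5):
    # Advance the user's position by one control step of duration dt.
    vx, vy = belt_velocity(*user_pos, gain=gain)
    return (user_pos[0] + vx * dt, user_pos[1] + vy * dt)
```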

3.6 Virtual Reality System Software and Tools

The Virtual Reality Modelling Language (VRML), first introduced in 1994, was intended for
the development of "virtual worlds" without dependency on headsets. The Web3D consortium
was subsequently founded in 1997 for the development of industry standards for web-based
3D graphics. The consortium subsequently developed X3D from the VRML framework as an
archival, open-source standard for web-based distribution of VR content. WebVR is an
experimental JavaScript application programming interface (API) that provides support for
various virtual reality devices, such as the HTC Vive, Oculus Rift, Google Cardboard or
OSVR, in a web browser.

VRML allows users to create "virtual worlds" networked via the Internet and hyperlinked with
the World Wide Web. Aspects of virtual-world display, interaction and internetworking can be
specified using VRML without dependence on special gear such as head-mounted displays
(HMDs). It was the intention of its designers to develop VRML as the standard language for
interactive simulation within the World Wide Web.

Virtual reality system software is a collection of tools and software for designing, developing
and maintaining virtual environments, together with the database where the information is
stored. The software components are divided into four sub-components: 3D modelling
software, 2D graphics software, digital sound editing software and VR simulation software.
3.6.1 3D modelling software

3D modelling software is used in constructing the geometry of the objects in a virtual world
and specifies the visual properties of these objects.

VR Modelling Software: VR is a promising new medium that's changing the way we approach
3D modelling. It allows artists to immerse themselves in a virtual workspace where they can
construct, carve, and paint their visions like never before.

While interest in VR as a place for games and experiences has waned over the past year, there's
still a lot of interest in it as a medium for creativity. Here we've brought together the best VR
apps for artists – whether you want to paint and sculpt full artworks, or model 3D characters,
vehicles and props for use in other applications.

Google paved the way with the launch of its much-hyped Tilt Brush for the HTC Vive headset
in 2016. But beyond the novelty of painting into thin air, Facebook's Quill was created to propel
illustration filmmaking. Oculus has tapped into character art with its VR 3D modelling
experience, Medium, and even Mozilla is jumping in by creating a basic web-based painting
tool. And Gravity Sketch has taken tools designed for carmakers and made them accessible to
3D artists. There are several options when it comes to VR Modelling software, and the software
you need will ultimately depend on what you want out of the application. Some applications
are made for realistic modelling while others are better suited for stylized art.

i. Oculus Medium: Oculus VR released a VR sculpting program called Medium, and this
software allows the artist to add to, take away from, and manipulate the shape of 3D objects.
The final result can be saved and exported for later use. Medium also supports the use of
standard .obj files, which can be used to create custom brushes, and the versatility of Medium
has already allowed artists to create amazing, ornate 3D creations.
ii. Google Tilt Brush: Tilt Brush is Google's current flagship VR art program. While not
necessarily a true 3D modelling program, Tilt Brush allows users to create objects by painting
strokes in 3D space; the final object is composed of the painted strokes, making this more
of a 3D painting application.

There was a lot of hype around the release of Google's Tilt Brush app for the HTC Vive in
2016 (through Valve's Steam platform). Artists, painters, cartoonists, dancers and designers
were commissioned by Google for their Artist in Residence program. It's worth checking out
the impressive designs posted on their Virtual Arts Experiments blog.

Tilt Brush allows professional artists or amateurs to paint in 3D space inside a VR world, using
a variety of brushes (such as ink, smoke, snow and fire) to create artwork that you can interact
with, walk around in, and share as room-scale VR masterpieces or animated GIFs.The best part
about Tilt Brush is that it works in both HTC Vive and Oculus Rift headsets.

iii. Google Blocks: This application takes a different approach to creating 3D objects. It
requires users to build 3D objects onto a 2D screen, something the developers decided would
make creating in VR easier to do. Rather than mimicking general 3D modelling software,
Blocks aims at a more user-friendly experience that's designed to make you feel as if you
were creating with building blocks.

iv. Gravity Sketch: Gravity Sketch focuses on a professional creative workflow built around
non-destructive parametric modelling, allowing users to explore infinite iterations of their
idea. The aim is to change the workflow by using Gravity Sketch to concept 3D modelling
ideas that can then be imported into CAD software for further refinement. Recently refreshed
with update 1.5, Gravity Sketch started out as a VR sculpting tool for car and shoe designers,
but it's just as usable for VR sketching and modelling, as there's a potential for grace and
solidity in the models you create that's lacking from Google's and Oculus's tools.

You can draw freehand in 3D space using smooth curves, then extrude surfaces into 3D space,
or extrude as you draw around a central axis. You can grab and move points to adjust splines,
and you can use both controllers together to create a surface, like pulling a ribbon through
the air.

With Gravity Sketch 1.5, users can make use of new features such as taper mode, which lets you
draw a stroke of any length and always have tapered ends. Other new tools include a
group/ungroup option, depth-of-field and square snapshots, and an option to draw and edit with
surface normals.

3.6.2 3D graphics software

3D graphics software is used to model the objects that populate the virtual environment, while
2D graphics software is used to manipulate the textures applied to those objects to enhance
their visual detail.

Croquet: Croquet is an open-source 3D graphical platform used by experienced software
developers to create and deploy deeply collaborative, multi-user online virtual-world
applications on and across multiple operating systems and devices. Croquet is a next-generation
virtual operating system (OS) written in Squeak, a modern variant of Smalltalk that runs
mathematically identically on all machines. The Croquet system features a peer-based messaging
protocol that dramatically reduces the need for server infrastructure to support virtual-world
deployment and makes it easy for software developers to create deeply collaborative
applications. Croquet provides rich tutorials, resources and videos as educational material for
developers.
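Croquet's combination of identical computation on every machine and a peer-based message stream is what lets it avoid a central world server. The gist can be sketched as follows (a generic illustration of replicated deterministic computation, not Croquet's actual protocol): every peer starts from the same state and applies the same ordered messages with deterministic logic, so all replicas stay in sync.

```python
# Sketch of the replicated-computation idea behind peer-based virtual
# worlds (generic illustration, not Croquet's protocol): identical state
# plus an identical ordered message stream keeps all peers in agreement
# without a central world server.
class Replica:
    def __init__(self):
        self.state = {"x": 0}

    def apply(self, message):
        # Deterministic update logic: same message -> same state change.
        op, value = message
        if op == "add":
            self.state["x"] += value
        elif op == "mul":
            self.state["x"] *= value

messages = [("add", 2), ("mul", 5), ("add", 1)]   # broadcast to every peer
peers = [Replica() for _ in range(3)]
for msg in messages:          # every peer sees the same messages in order
    for peer in peers:
        peer.apply(msg)

print([p.state["x"] for p in peers])   # [11, 11, 11] - replicas agree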

Ogoglio: Ogoglio is an open-source 3D graphical platform like Croquet. The main goal of
Ogoglio is to build an online, urban-style space for creative collaboration. The Ogoglio
platform is built from the languages and protocols of the web: its scripting language is
JavaScript; its main data-transfer protocol is the hypertext transfer protocol (HTTP); its 2D
layout uses hypertext markup language (HTML) and cascading style sheets (CSS); and it uses the
LightWave object geometry format for its 3D content. Ogoglio differs from other virtual-world
development platforms in that it runs on the Windows, Linux and Solaris operating systems and
in web browsers such as Internet Explorer, Firefox, and Safari.

3.6.3 Digital editing software

Digital editing software is used to mix and edit the sounds that objects make within the
virtual environment.

3.6.4 VR simulation software

Simulation software brings the components together. It is used to program how these objects
behave and set the rules that the virtual world follows. Virtual reality simulation is the use of
3D objects and environments to create immersive and engaging learning experiences.

CyberSession / CS-Research: CyberSession / CS-Research is VR-simulation software designed for
interactive virtual experiments. The software controls VR simulations, processes measurement
and interface data, and enables complete control over the presented content. The output of the
VR simulation is delivered via a head-mounted display or a projection system, using one or more
distributed rendering computers. To interact with the virtual world, CS-Research processes the
input of tracking systems and input devices, and the processed data can be stored for later
analysis. Additionally, external devices can be controlled, e.g. to stimulate or record
physiological reactions. CS-Research can be controlled via scripts (ECMAScript/JavaScript), so
that automatic scenarios as well as standardized experimental procedures can be constructed.
For optimal functioning of the simulation software, the use of certified simulation hardware
(simulation and rendering computers) is recommended.

OpenSimulator (OpenSim): OpenSimulator is a 3D application server. It can be used to create a
virtual environment (world) that can be accessed through a variety of clients over multiple
protocols, and it lets you develop your environment using the technologies you feel work best.
OpenSimulator has numerous advantages, among them: it is released under the BSD licence,
making it both open source and commercially friendly to embed in products; it can be extended
via modules to build a completely custom configuration; it provides world-building tools for
creating content in real time in the environment; and it supports several programming languages
for application development, such as Linden Scripting Language / OpenSimulator Scripting
Language (LSL/OSSL).

Chapter 4
Devices and Applications of VR

4.1 How is Virtual Reality being used today?

Virtual reality is nowadays used in a wide range of applications. Some of the most important
areas of use are: business; training; engineering and design; medicine; entertainment;
education and conferencing; architectural design and prototyping; competitive sports; virtual
manufacturing systems; military applications; mobile and gaming applications; the defence
industry; ergonomics and human-factor analysis; museums and art design; design evaluation
(virtual prototyping); planning and maintenance; and many more.

1. Business
Virtual reality is being used in a number of ways by the business community, including virtual
tours of a business environment, training of new employees, and 360-degree views of products.

2. Training and Simulation

Virtual reality environments have long been used for training simulators. VR is used in
training to let professionals practise in a virtual environment where they can improve their
skills without the consequences of failing the operation. Examples include flight simulators,
battlefield simulators, paratrooping, and combat training for the military.

3. Engineering and Design

Virtual reality is widely used in the engineering and design process. It gives a better
understanding of the design and helps facilitate changes wherever necessary, reducing both time
and cost. Examples are building construction, car design, etc.

4. Entertainment
The entertainment industry is one of the most enthusiastic advocates of virtual reality, most
notably in games and virtual worlds. Examples include virtual museums, gaming, virtual theme
parks, interactive exhibitions, etc.

5. Education and Conferencing
Education is another area where virtual reality has been adopted for teaching and learning.
Its advantage is that it enables large groups of students to interact with each other as well
as with a three-dimensional environment, presenting complex data in an accessible way that is
both easy to learn from and fun. Students can also interact with the objects in that
environment in order to discover more about them. A good example is medical education, where
surgery simulations and 3D models of the human body let students explore safely, without
danger to patients. This type of technology is used in the UK and abroad.

6. Virtual Reality and data visualization

Scientific and engineering data visualization has benefited from virtual reality for years,
and recent innovations in display technology have generated interest in everything from
molecular visualization to architecture to weather models.

7. VR for aviation, medicine, and the military

In aviation, medicine, and the military, virtual reality training is an attractive alternative
to live training with expensive equipment, dangerous situations, or sensitive technology.
Commercial pilots can use realistic cockpits with VR technology in holistic training programs
that combine virtual flight with live instruction. Surgeons can train with virtual tools and
patients and transfer their virtual skills into the operating room; studies have already begun
to show that such training produces surgeons who work faster and make fewer mistakes. Police
and soldiers are able to conduct virtual raids without putting lives at risk.

8. Virtual Reality and the treatment of mental illness

Speaking of medicine, the treatment of mental illness, including post-traumatic stress
disorder, stands to benefit from the application of virtual reality technology to ongoing
therapy programs. Whether it is allowing veterans to confront challenges in a controlled
environment or helping patients overcome phobias in combination with behavioural therapy, VR
has a potential beyond gaming, industrial and marketing applications: to help people heal
from, reconcile with, and understand real-world experiences.

4.2 Advantages
Virtual reality has also been used extensively to treat phobias (such as a fear of heights, flying
and spiders) and post-traumatic stress disorder. This type of therapy has been shown to be
effective in the academic setting, and several commercial entities now offer it to patients.
Although it was found that using standardized patients for such training was more realistic, the
computer-based simulations afforded a number of advantages over the live training. Their
objective was to increase exposure to life-like emergency situations to improve decision
making and performance and reduce psychological distress in a real health emergency.
Researchers in the field have generally agreed that VR technology is exciting and can provide
a unique and effective way to learn and that VR projects are highly motivating to learners
(Mantovani et al., 2003). From research, several specific situations have emerged in which VR
has strong benefits or advantages. For example, VR has great value in situations where
exploration of environments or interactions with objects or people is impossible or
inconvenient, or where an environment can only exist in computer-generated form. VR is also
valuable when the experience of actually creating a simulated environment is important to
learning. Creating their own virtual worlds has been shown to enable some students to master
content and to project their understanding of what they have learned (Ausburn & Ausburn,
2004).
One of the beneficial uses of VR occurs when visualization, manipulation, and interaction with
information are critical for its understanding; it is, in fact, its capacity for allowing learners to
display and interact with information and environment that some believe is VR’s greatest
advantage. Finally, VR is a very valuable instructional and practice alternative when the real
thing is hazardous to learners, instructors, equipment, or the environment. This advantage of
the technology has been cited by developers and researchers from such diverse fields as
firefighting, anti-terrorism training, nuclear decommissioning, crane driving and safety,
aircraft inspection and maintenance, automotive spray painting and pedestrian safety for
children (Ausburn & Ausburn, 2004).

4.3 Disadvantages
Some psychologists are concerned that immersion in virtual environments could psychologically
affect a user. They suggest that VE systems that place a user in violent situations,
particularly as the perpetrator of violence, could result in the user becoming desensitized.
In effect, there is a fear that VE entertainment systems could breed a generation of
sociopaths. Engaging virtual environments could also prove more addictive.
Another emerging concern involves criminal acts. In the virtual world, defining acts such as
murder or sex crimes has been problematic. At what point can authorities charge a person with
a real crime for actions within a virtual environment? Studies indicate that people can have
real physical and emotional reactions to stimuli within a virtual environment, so it is quite
possible that a victim of a virtual attack could feel real emotional trauma.
One important issue in the use of VR is the high level of skill and cost required to develop and
implement VR, particularly immersive systems. Very high levels of programming and graphics
expertise and very expensive hardware and software are necessary to develop immersive VR,
and considerable skill is needed to use it effectively in instruction. While desktop VR
technology has dramatically reduced the skill and cost requirement of virtual environments, it
still demands some investment of money and time.
Another set of limitations of VR environments stems from the nature of the equipment they
require. A long-standing problem with immersive VR has been health and safety concerns for
its users. The early literature was top-heavy with studies of headaches, nausea, balance upsets,
and other physical effects of HMD systems. While these problems have largely disappeared
from current VR research as the equipment has improved, and appear to be completely absent
in the new desktop systems, little is known about long-term physical or psychological effects
of VR usage. A second equipment limitation of VR arises from the fact that it is computer-
based and requires high-end hardware for successful presentation.

4.4 Challenges
Like many advantageous technologies, virtual reality brings not only opportunities and
applications, such as Second Life, but also unavoidable challenges and disadvantages. In fact,
the use of virtual reality technologies poses both technical and cultural challenges. We can
try our best to minimize these challenges rather than trying to avoid them completely. The
sources of these unavoidable challenges are described below.
4.4.1 Technical Challenges
All features and functions of the virtual environment are delivered by streaming all data to
the user live over the Internet, with minimal local caching of frequently used data. This means
the user must have a minimum of 300 kbit/s of Internet bandwidth for basic functionality, and
1 Mbit/s for better performance.
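To put those bandwidth figures in perspective, a back-of-the-envelope calculation helps; the 3 MB asset size below is a made-up example value, while the two bandwidth tiers come from the figures above:

```python
# Rough download-time estimate for a streamed asset at the two bandwidth
# tiers quoted above. The 3 MB asset size is a hypothetical example.
def download_seconds(asset_bytes, bandwidth_bits_per_s):
    return asset_bytes * 8 / bandwidth_bits_per_s   # bytes -> bits, then divide

asset = 3 * 10**6                           # 3 MB texture (hypothetical)
print(download_seconds(asset, 300_000))     # 80.0 s at 300 kbit/s
print(download_seconds(asset, 1_000_000))   # 24.0 s at 1 Mbit/s
```

Even one modest texture ties up a minimal 300 kbit/s link for over a minute, which is why streaming-only worlds feel sluggish on baseline connections and why the caching restrictions discussed next matter.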

Due to proprietary communications protocols, it is impossible to use a network proxy or caching
service to minimize network load when many people are all using the same location, for example
during group activities in companies or schools.
Cost is another challenge: because these technologies are new, they are costly, so many small
and medium-sized organizations cannot afford them.
As technologies grow at a rapid rate, many people are still unaware of such new technologies,
along with their advantages, disadvantages and applications. Awareness therefore has to be
created among people by conducting free seminars and demonstrations.
In addition to appropriate Internet bandwidth and interfacing charges, there are also several
membership charges.
4.4.2 Cultural Challenges
Liability remains an open question in virtual worlds. Private land must be purchased for
virtual learning, and that land is restricted to authorized users; users in public areas,
however, may suffer violence or disruptive behaviour.
Many legal issues surrounding virtual violence, sexual harassment and virtual assault remain
unsolved. Every day, millions of people connect to these worlds to socialize, shop and learn.
Unfortunately, many lawbreakers have also joined these virtual worlds, and criminal activities
take place; the most common cases are money laundering, sexual harassment, the exchange of
child-abuse material, and terrorist activity.
Inventory loss is still an issue: items in a user's inventory, including items that have been
paid for, can disappear without warning or enter a state where they fail to appear in the world
when requested (giving an "object is missing" database error). Such losses are much rarer than
in past years, but they still occur.

4.5 Future Aspects

In the future we will see rapid advances in creating a truly immersive digital experience.
With major players like Google, Microsoft, Oculus, and HTC making tremendous efforts to improve
current capabilities, we are not far from virtual reality experiences that rival the real
world. VR technology also presents countless opportunities for brands to create breathtaking
marketing content; using this approach, they can not only win over customers but also
establish themselves as leaders in innovation.

REFERENCES
[1]. Ronak Dipakkumar Gandhi, Dipam S. Patel; "Virtual Reality – Opportunities and Challenges";
International Research Journal of Engineering and Technology (IRJET); e-ISSN: 2395-0056;
Volume 5, Issue 1; January 2018.

[2]. Bharath V G, Dr. Rajashekar Patil; "Importance & Applications of Virtual Reality in
Engineering Sector"; International Journal of Scientific Research and Development (IJSRD);
Volume 3, Issue 2; 2016.

[3]. R. Radharamanan; "A Survey of Virtual Reality Technologies, Applications and Limitations";
International Journal of Virtual Reality (IJVR); Volume 14(2); 2015.

[4]. Sharmistha Mandal; "Brief Introduction of Virtual Reality & its Challenges"; International
Journal of Scientific & Engineering Research; Volume 4, Issue 4; April 2013; ISSN 2229-5518.

[5]. Maryam Vafadar; "Virtual Reality: Opportunities and Challenges"; International Journal of
Modern Engineering Research (IJMER); Volume 3, Issue 2; March–April 2013; pp. 1139–1145.

[6]. Jorge Martín-Gutiérrez, Carlos Efrén Mora, Beatriz Añorbe-Díaz, Antonio González-Marrero;
"Virtual Technologies Trends in Education"; EURASIA Journal of Mathematics, Science and
Technology Education; ISSN 1305-8223 (online), 1305-8215 (print); DOI:
10.12973/eurasia.2017.00626a.

[7]. Moses Okechukwu Onyesolu and Felista Udoka Eze; "Understanding Virtual Reality
Technology: Advances and Applications"; DOI: 10.5772/15529.

[8]. Jason Jerald (NextGen Interactions); "The VR Book: Human-Centered Design for Virtual
Reality"; 2016.

[9]. Zhang Hui; "Head-mounted display-based intuitive virtual reality training system for the
mining industry"; International Journal of Mining Science and Technology;
http://dx.doi.org/10.1016/j.ijmst.2017.05.005.
