
Unit 1

Virtual Reality And Its Application


Prof. Shalini Nigam
Immersive Technology
• Immersive technology is an integration of virtual content with the
physical environment in a way that allows the user to engage
naturally with the blended reality.
• In an immersive experience, the user accepts virtual elements of their
environment as part of the whole, potentially becoming less
conscious that those elements are not part of physical reality.
Immersive technologies include:

• Virtual reality (VR) – a digital environment that replaces the user’s physical
surroundings.

• Augmented reality (AR) – digital content that is superimposed over a live
stream of the physical environment.

• Mixed reality (MR) – an integration of virtual content and the real-world
environment that enables interaction among elements of both.

• Holography – the creation of a 3D image in space that can be explored
from all angles.
• Tele-Presence – a form of robotic remote control in which a human
operator has a sense of being in another location. The user could, for
example, guide the robot through a party or an office, stopping and
chatting with people throughout the environment.

• Digital twin – a virtual replication of some real-world object that
connects to the object for information so that it can display its
current status.

• FPV drone flight – use of an unmanned aerial vehicle (UAV) with a
camera that wirelessly transmits video feed to goggles, a headset, a
mobile device or another display so that the user has a first-person
view (FPV) of the environment where the drone flies.
• Supporting technologies for immersive experiences include AR, MR
and VR headsets, 3D displays, 3D audio, gesture recognition, spatial
sensing, speech recognition, haptics, drones, cameras and
omnidirectional treadmills.

• Immersive technologies exist at various points on what is sometimes
referred to as the virtuality continuum, a range that has the
unadulterated physical environment at one extreme and a fully
immersive virtual reality at the other.
Virtual Reality
• Virtual Reality (VR) is the use of computer technology to create a
simulated environment.
• Virtual Reality’s most immediately-recognizable component is the
head-mounted display (HMD). Human beings are visual creatures,
and display technology is often the single biggest difference between
immersive Virtual Reality systems and traditional user interfaces.
• Major players in Virtual Reality include the HTC Vive, Oculus Rift and
PlayStation VR (PSVR).
What is Virtual Reality?
• Virtual Reality (VR) is the use of computer technology to create a
simulated environment. Unlike traditional user interfaces, VR places
the user inside an experience.
• Instead of viewing a screen in front of them, users are immersed and
able to interact with 3D worlds. By simulating as many senses as
possible, such as vision, hearing, touch, even smell, the computer is
transformed into a gatekeeper to this artificial world.
• The only limits to near-real VR experiences are the availability of
content and cheap computing power.
Virtual Reality technology
• Virtual Reality’s most immediately-recognizable component is the
head-mounted display (HMD).
• Human beings are visual creatures, and display technology is often
the single biggest difference between immersive Virtual Reality
systems and traditional user interfaces.
• For instance, CAVE automatic virtual environments actively display
virtual content onto room-sized screens. While they are fun for
people in universities and big labs, consumer and industrial
wearables are the wild west.
Virtual Reality and the importance of audio
• Convincing Virtual Reality applications require more than just
graphics. Both hearing and vision are central to a person’s sense of
space. In fact, human beings react more quickly to audio cues than to
visual cues.
• In order to create truly immersive Virtual Reality experiences,
accurate environmental sounds and spatial characteristics are a must.
These lend a powerful sense of presence to a virtual world.
• To experience the binaural audio details that go into a Virtual Reality
experience, put on some headphones and tinker with this audio
infographic published by The Verge.
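• As a rough illustration of the binaural cues mentioned above, the sketch
below computes an interaural time difference with Woodworth’s classic
approximation and uses a simple constant-power pan as a crude stand-in for
level differences. The head radius and formulas are textbook values, not
taken from any particular VR audio engine.

```python
# Minimal sketch of two binaural localization cues (not any engine's API).
import math

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
HEAD_RADIUS = 0.0875     # m, a commonly used average head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth's approximation: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def simple_pan_gains(azimuth_deg):
    """Constant-power panning as a crude stand-in for level differences."""
    theta = math.radians(azimuth_deg)        # -90 (left) .. +90 (right)
    pan = (theta + math.pi / 2) / math.pi    # map to 0 .. 1
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)

# A source 45 degrees to the right arrives ~0.4 ms earlier at the near ear.
print(f"ITD at 45 deg: {interaural_time_difference(45) * 1000:.2f} ms")
print("L/R gains at 45 deg:", simple_pan_gains(45))
```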
How Virtual Reality is being used today
• Unsurprisingly, the video games industry is one of the largest
proponents of Virtual Reality.
• Support for the Oculus Rift headsets has already been jerry-rigged
into games like Skyrim and Grand Theft Auto, but newer games like
Elite: Dangerous come with headset support built right in.
• Many tried-and-true user interface metaphors in gaming have to be
adjusted for VR (after all, who wants to have to pick items out of a
menu that takes up your entire field of vision?), but the industry has
been quick to adapt as the hardware for true Virtual Reality gaming
has become more widely available.
Virtual Reality and data visualization
• Scientific and engineering data visualization has benefited for years
from Virtual Reality, and recent innovation in display technology
has generated interest in everything from molecular visualization to
architecture to weather models.
VR for aviation, medicine, and the military:

• In aviation, medicine, and the military, Virtual Reality training is an
attractive alternative to live training with expensive equipment,
dangerous situations, or sensitive technology.
• Commercial pilots can use realistic cockpits with VR technology in
holistic training programs that incorporate virtual flight and live
instruction.
• Surgeons can train with virtual tools and patients, and transfer their
virtual skills into the operating room; studies have already begun
to show that such training leads to faster doctors who make fewer
mistakes. Police and soldiers are able to conduct virtual raids that
avoid putting lives at risk.
Virtual Reality and the treatment of mental
illness
• Speaking of medicine, the treatment of mental illness, including
post-traumatic stress disorder, stands to benefit from the application
of Virtual Reality technology to ongoing therapy programs.
• Whether it’s allowing veterans to confront challenges in a controlled
environment, or overcoming phobias in combination with behavioral
therapy, VR has a potential beyond gaming, industrial and marketing
applications to help people heal from, reconcile and understand
real-world experiences.
Types of Virtual Reality Systems
• VR systems can be classified into 3 major categories:
• Non-Immersive
• Immersive
• Semi-Immersive
Non-Immersive
• It allows users to interact with a 3D environment through a stereo display
monitor and glasses; other common components include a spaceball,
keyboard and data gloves. Its application areas include modelling and
CAD systems.

• Immersive:
• The immersive VR system, on the other hand, is the most expensive and gives
the highest level of immersion. Its components include an HMD, tracking
devices, data gloves and others, which surround the user with computer-
generated 3D animation that gives the user the feeling of being part of the
virtual environment. One of its applications is in virtual walk-throughs of
buildings.
• The semi-immersive VR system, also called a hybrid system [1] or
augmented reality system, provides a high level of immersion, while
keeping the simplicity of the desktop VR or utilizing some physical
model. An example of such a system is the CAVE (Cave Automatic
Virtual Environment), and one application is the driving simulator.
• Distributed VR, also called networked VR, is a new category of VR
system, which exists as a result of the rapid development of the
internet. Its goal is to remove the problem of distance, allowing
people from many different locations to participate and interact in
the same virtual world through the help of the internet and other
networks. A traditional application of this is SIMNET, a real-time
distributed simulation developed by the US military and used for
combat training [3, 4].
Components of Virtual Reality Systems
• A VR system is made up of 2 major subsystems, the hardware and the
software.
• The hardware can be further divided into the computer or VR engine and
I/O devices, while the software can be divided into application
software and the database, as described in the following sections.
Virtual Reality System Hardware
• The major components of the hardware are the VR engine or
computer system, the input devices and the output devices.
Input Devices
• The input devices are the means by which the user interacts with the
virtual world.
• They send signals to the system about the action of the user, so as to
provide appropriate reactions back to the user through the output
devices in real time.
• They can be classified into tracking devices, point-input devices,
bio-controllers and voice devices.
• Tracking devices, sometimes referred to as position sensors, are used
in tracking the position of the user. They include
electromagnetic, ultrasonic, optical, mechanical and gyroscopic
sensors, data gloves, and neural or bio-muscular controllers.
• Examples of point-input devices include the 6DOF mouse and the force
ball or space ball. Their technology is an adaptation of the normal
mouse, with extended functions and 3D capability.
• Voice communication is a common way of interaction among
humans. So it feels natural to incorporate it into a VR system. Voice
recognition or processing software can be used in accomplishing this.
VR Engine
• In VR systems, the VR engine or computer system has to be selected
according to the requirement of the application.
• Graphic display and image generation are some of the most
important and time-consuming tasks in a VR system.
• The choice of the VR engine depends on the application field, user,
I/O devices, level of immersion and the graphic output required, since
it is responsible for calculating and generating graphical models,
object rendering, lighting, mapping, texturing, simulation and display
in real-time.
• The computer also handles the interaction with users and serves as
an interface with the I/O devices.
• A major factor to consider when selecting the VR engine is the
processing power of the computer, which determines how many
sensory channels (graphics, sound, haptics, etc.) can be
rendered in a particular time frame.
• The VR engine is required to recalculate the virtual environment
approximately every 33 ms and produce a real-time simulation of more
than 24 fps; a quick check of these figures is sketched below.
• Furthermore, the associated graphics engine should be capable of
producing stereoscopic vision.
• The VR engine could be a standard PC with more processing power
and a powerful graphics accelerator, or distributed computer systems
interconnected through a high-speed communication network.
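• A quick arithmetic check of the frame-time figures quoted above: this is
just the reciprocal relationship between frame rate and frame time, not
any particular engine’s code.

```python
# A ~33 ms refresh interval corresponds to roughly 30 frames per second,
# comfortably above the 24 fps floor mentioned for real-time simulation.
def frame_budget_ms(fps):
    return 1000.0 / fps

for fps in (24, 30, 60, 90):
    print(f"{fps:3d} fps -> {frame_budget_ms(fps):5.1f} ms per frame")
# 24 fps -> 41.7 ms, 30 fps -> 33.3 ms, 60 fps -> 16.7 ms, 90 fps -> 11.1 ms
```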
Output Devices
• The output devices receive feedback from the VR engine and present it
to the users, stimulating their senses. The possible classifications of
output devices based on the senses are: graphics (visual), audio
(aural), haptic (contact or force), smell and taste. Of these, the
first 3 are frequently used in VR systems, while smell and taste are
still uncommon.
• Two common options for the graphics are the stereo display
monitor, and the HMD, which provides a higher level of immersion. In
the HMD, the two independent views produced are interpreted by
the brain to provide a 3D view of the virtual world (a small sketch of
the two-view setup follows below).
• Audio or sound is an important channel in VR; its importance is only
surpassed by that of vision.
• 3D sound can be used in producing different sounds from different
locations to make the VR application more realistic.
• Haptics are used to allow the user to feel virtual objects. This can be
achieved through electronic signals or mechanical devices.
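• To make the two-view idea concrete, here is a minimal sketch of how an
eye pair can be derived from a head pose by offsetting along the head’s
right axis. The IPD value and function names are illustrative assumptions,
not any vendor’s SDK.

```python
# Each eye's camera is the head pose shifted sideways by half the
# interpupillary distance (IPD); the renderer draws the scene once from
# each position, and the brain fuses the two images into a 3D percept.
import numpy as np

IPD = 0.064  # m, a typical adult interpupillary distance

def eye_positions(head_pos, head_right_axis):
    """Return left/right eye positions from the head position and right axis."""
    right = np.asarray(head_right_axis, dtype=float)
    right /= np.linalg.norm(right)          # normalize the right axis
    offset = right * (IPD / 2.0)
    head = np.asarray(head_pos, dtype=float)
    return head - offset, head + offset

left, right = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print("left eye:", left, " right eye:", right)
```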
Virtual Reality System Software and Tools

• Virtual reality system software is a collection of tools and software for
designing, developing and maintaining virtual environments and the
database where the information is stored. The tools can be classified
into modelling tools and development tools.
VR Modelling Tools
• There are many modelling tools available for VR design; the most
common ones are 3ds Max, Maya and Creator. Engineering-specific
applications might use software like CATIA, Pro/E, SolidWorks, UG, etc.
• VR Development Tools. VR is a complex and integrative technology that
borrows from many other technologies, such as real-time 3D computer
graphics, tracking technology, sound processing, and haptic technology,
among others; therefore software development flexibility and real-time
interaction are needed.
• Starting the development of a VR system from basic code in C/C++,
Java, OpenGL, etc. requires a large amount of work, and such a system’s
reliability is usually low; therefore VR development tools are used.
• Careful consideration is needed in choosing VR development tools
due to the difference in flexibility provided by different software
packages as related to model input available, interface compatibility,
file format, animation ease, collision detection, supported I/O devices
and support community available to the users.
• VR development tools used in VR content creation include virtual
world authoring tools, VR toolkits/software development kits (SDKs)
and application program interfaces (APIs). It is not uncommon to
find that some APIs are also toolkits, like OpenGL Optimizer and the
Java 3D API [1, 6].
Applications of Virtual Reality
• VR has found vast applications in many fields due to its characteristics and
the benefits it provides in solving complex real-world problems. Some of the
application areas include: Architecture, Arts, Business, Design and Planning,
Education and Training, Entertainment, Manufacturing, Medical and
Scientific Visualization. In manufacturing, VR is used to remove limitations
in visualization and interaction associated with traditional 3D CAD/CAM
systems through virtual manufacturing.
• Virtual manufacturing is virtual product design, modelling, simulation,
assembly, testing and analysis for errors before physical prototypes are
built, to reduce development time and avoid wasteful costs.
Virtual Reality Input Model : Sensor Gloves
• Sensor gloves are hand worn devices with inbuilt sensors that can
capture information about the movements and positioning of the
user’s hands.
• Some of the most widely known sensor glove technologies are the (i)
DataEntryGlove, (ii) DataGlove, (iii) CyberGlove, and (iv)
AcceleGlove.
• The DataEntryGlove was originally devised as an alternative to the
keyboard, and made it possible to generate 96 printable ASCII
characters from 80 different finger positions.
• The glove was made out of cloth and had flex sensors along the
fingers, tactile sensors on the fingertips, and inertial sensors
positioned on the knuckle side of the hands
• The distribution of the sensors was specified with the aim of
recognizing the Single Hand Manual Alphabet for the American Deaf.
The DataEntryGlove was researched but was never commercially
developed.
• Thomas Zimmerman developed the DataGlove in 1987. This glove
consisted of a lightweight fabric glove equipped with optical
sensors on each finger, and magnetic sensors on the back of the
glove.
• The optical sensors were constructed of optical cables with a small
light at one end and a photodiode at the other. When the fingers
were bent, the light was reduced in strength before it reached the
photodiode.
• The bending of the fingers could therefore be determined by
measuring how much light the photodiode detected (a small sketch of
this mapping follows at the end of this slide).
• The magnetic sensor measured the rotations of the hand in relation
to a fixed reference point. The DataGlove was commercialized by
VPL Research and could be purchased at a reasonable price, which
led to widespread use of this glove.
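• A minimal sketch of the light-to-bend mapping described above, assuming a
simple two-point calibration; the raw readings are invented for the example
and are not measurements from the actual DataGlove.

```python
# The more a finger bends, the less light reaches the photodiode, so a
# calibrated mapping from light level to bend angle recovers the pose.
def calibrate(straight_reading, bent_reading, bent_angle_deg=90.0):
    """Return a function mapping a raw photodiode reading to a bend angle."""
    span = straight_reading - bent_reading
    def reading_to_angle(reading):
        # Clamp, then interpolate linearly between the two calibration points.
        reading = max(min(reading, straight_reading), bent_reading)
        return (straight_reading - reading) / span * bent_angle_deg
    return reading_to_angle

to_angle = calibrate(straight_reading=1000, bent_reading=200)
print(to_angle(1000), to_angle(600), to_angle(200))  # 0.0, 45.0, 90.0
```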
• The CyberGlove was developed at Stanford University in 1988 and
was specifically designed for the Talking Glove Project, which focused
on translating American sign language into spoken English.
• It consisted of a cloth glove with the fingertips and the
palm areas removed. This made it possible for users to easily grasp
objects and for deaf-blind users to conduct manual finger spelling
while wearing the gloves [8].
• The gloves were equipped with a total of 22 flex sensors, which were
made out of thin foil mounted onto plastic modules. These sensors
were sewn into pockets running over each joint, and could measure
flexing of the fingers and wrists.
• The maximum flex that could be detected by a sensor was regulated by
adjusting the thickness and elasticity of the plastic modules. The plastic
modules were selected in such a way that they maximized the output
signal while at the same time minimizing fatigue of the sensors.
• Informal experiments have shown that this glove performs in a smooth and
stable way, and that it is accurate enough to capture complex and detailed
finger and hand gestures.
• However, according to Sturman, one must calibrate the sensors to each
user in order to accurately capture gestures from different hand sizes and
hand shapes. The CyberGlove is commercially available from VRLogic.
• The AcceleGlove uses accelerometers and potentiometers to capture
finger and hand poses. The accelerometers are placed on the fingers, the
wrist, and the upper arm, and are used to provide orientation and
acceleration information.
• The potentiometers are located on the elbow and the shoulder, and
provide information about the hand’s absolute position with respect to the
body.
• The AcceleGlove also incorporates a wrist button, which allows the user to
easily activate and deactivate the glove. To activate the glove the user
simply presses the wrist button, and to deactivate it, the user presses the
button a second time. This process is repeated for each sentence, in order
to assist the system in interpreting the signals correctly.
Virtual reality Tracking system
• Tracking systems:
• In optical systems, emitted light is captured by cameras in various formats.
Electromagnetic tracking uses small electrified coils whose fields are picked
up by electromagnetic sensors; measuring how the magnetic field affects
each sensor positions it in space.
• Acoustic systems use ultrasonic sound waves to identify the position and
orientation of target objects.
• Mechanical tracking can use articulated arms/limbs/joysticks/sensors
connected to headsets or built into them, while the inertial tracking in
phones is often made possible by accelerometers and gyroscopes.

• The world of virtual reality has successfully utilised several of these
methods for virtual reality systems.
Inertial Tracking in Phones and Standalone Headsets

• Phones are one of the more basic viewing options for VR content and
have been providing a super cheap yet still interactive solution for
several years now.
• The phone is put inside a headset, and the screen is split to give
output for each eye, to give the illusion of depth.
• The accelerometer and gyroscope systems in the smartphones and
headsets of the last two decades give virtual reality applications a
true sense of motion
How is it that the VR environment being viewed moves when
your head is moved?

• The accelerometer works by measuring the rate of change in
your movement. Tiny sensors translate the forces into
readable electrical signals.
• Imagine you are in a car going around a corner at speed and
being pushed sideways against the door panel.
• The same situation occurs in the sensors but on a much
smaller scale. The accelerometer, however, cannot detect
the phone’s position relative to everything else.
• This is where the gyroscope comes in. This can measure
rotational movements relative to a neutral position.
• If you ever played one of those ball-bearing maze games
where you must tilt the maze to move the ball, this is a
comparable principle.
• These more mechanically oriented systems are also used in
the higher-end headsets to improve the fluidity of the
experience (a small sensor-fusion sketch follows below).
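• A hedged single-axis sketch of how such accelerometer and gyroscope
readings can be fused with a complementary filter: the gyroscope gives
smooth but drifting rotation rates, the accelerometer gives a noisy but
drift-free gravity reference, and the filter blends them. The blend factor,
rates and function names are illustrative assumptions, not any headset
vendor’s algorithm.

```python
# Single-axis (pitch) complementary filter; real IMU fusion is 3D.
import math

ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

def fuse_pitch(pitch_deg, gyro_rate_dps, accel_y, accel_z, dt):
    gyro_pitch = pitch_deg + gyro_rate_dps * dt               # integrate gyro
    accel_pitch = math.degrees(math.atan2(accel_y, accel_z))  # gravity tilt
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

# Simulate tilting the head up at 10 deg/s for one second at 100 Hz.
pitch = 0.0
for step in range(100):
    true_pitch = 10.0 * (step + 1) / 100.0
    ay = math.sin(math.radians(true_pitch))  # gravity components the
    az = math.cos(math.radians(true_pitch))  # accelerometer would report
    pitch = fuse_pitch(pitch, 10.0, ay, az, dt=0.01)
print(f"estimated pitch after 1 s: {pitch:.2f} deg (true pitch: 10 deg)")
```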
HTC Lighthouse systems
• When ships at sea used to navigate into ports, they didn’t have
fancy positional GPS systems and radars. Instead they used
lighthouses or towers in order to judge their position relative to them.
• This is the same principle behind Valve’s Lighthouse base stations
used for the HTC Vive, Vive Cosmos and their own Index.
• The technology has developed into an incredibly accurate version
of actual lighthouses, emitting non-visible light to flood a room
and using the reference points on the HMDs and handhelds to
give positional information back to the computer.
• This is done using both stationary LEDs and two active laser
emitters spinning at 60 Hz. The LEDs flash, and then one of the
lasers sweeps a beam of light across the room.
• The receivers, which are covered in photosensors (32 on the
HMD and 24 per controller), pick up the beams while the devices
have been “counting” since the flash.
• Using the hit times across multiple sensors, and the position of
the base station each sweep came from, the system creates a “pose”.
• This pose can then be analysed to give both an exact position
and the direction the device is facing. This is a cheap way to get
incredibly accurate tracking (a timing-to-angle sketch follows below).
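• The timing-to-angle relationship behind the sweep can be sketched in a few
lines; the 60 Hz rotor speed comes from the text above, while the example
hit time is invented for illustration.

```python
# The base station's sync flash starts a clock; the time until the laser
# sweeps across a photosensor encodes that sensor's angle from the station.
SWEEP_HZ = 60.0  # rotor revolutions per second

def hit_time_to_angle_deg(dt_seconds):
    """Angle of the laser plane (from its zero position) at the hit time."""
    return 360.0 * SWEEP_HZ * dt_seconds

# A sensor hit 2.0 ms after the sync flash sits 43.2 degrees into the sweep.
print(hit_time_to_angle_deg(0.002))  # 43.2
# Collecting such angles for many sensors with known positions on the HMD,
# from two base stations, constrains the headset's full 6-DOF "pose".
```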
Oculus Tracking

• The Oculus Rift and Rift S, as well as other consumer
products such as the Nintendo Wii controllers, use a similar
“pose” recognition principle but achieve it in a slightly
different manner.
• The headset for the Rift uses constellations of infrared LEDs
built into the HMD and controllers. These are subsequently
picked up by the two desktop sensors designed to recognise
the LEDs’ specific glow and convert their placement into
positional data.
• There is also a magnetometer, a gyroscope and an
accelerometer in the headset. Combined, these allow for
accurate tracking across all three dimensions of the 3D
world.
• Oculus released another headset called the Quest.
• This wireless and computer-less system uses a radically different tracking
method in the form of scanning a room for any objects within your space
(curtains, windows, coffee tables etc.), and creating a 3D map to play in.
• It combines data from a gyroscope and accelerometer with the map to give
the HMD’s position at 1000 Hz (once every millisecond).
• The Guardian system is then there to stop you colliding with real-world
objects, and allows the saving of multiple rooms for quicker set-up times
as well.
• Similarly, the Microsoft HoloLens headset uses two front-mounted
cameras to scan the surrounding area and, in conjunction with an inertial
measurement unit (IMU), gives precise positions. It essentially builds up a
more and more precise map as you look around.
Mechatech Tracking

• Mechatech has developed a tracking system based on direct measurement
of the human body. An exoskeleton frame is worn over the body (the
AgileVR product is worn over the knee only), and direct measurements are
taken.

• The direct physical tracking of the human body has advantages over
camera-based systems – it doesn’t need a camera! It’s not trapped in room
scale, and it also does not suffer from occlusion – imagine putting your arm
behind your back – the camera cannot see it.

• The direct measurement also means that pose data does not have to be
calculated; it can be read straight from the device in real time, reducing
lag, which helps with immersive Virtual Reality.
Haptic Technology
• Haptic technology aims to simulate the sensation of touch with
various mechanisms.
• One of them is using touch as a feedback system to communicate
information to and from the user.
• As a visually oriented species, we usually don’t stop to think how
incredible our sense of touch really is. With our hands we can
determine hardness, geometry, temperature, texture and weight only
by handling something.
• Even though you might not know it, there is a good chance you are
already using haptic technology in your daily life.
• Many smartphones with touch screens use vibration as a form of
feedback.
• Unlike keypads, touchscreens are just flat plates of glass, so the
vibration function of the phone is used to simulate the tactile feel of
buttons.
• What is more, some Android smartphones detect when you pick
them up and vibrate if there are any unread notifications for you.
That is exactly what haptic technology is. How does it translate to
virtual reality?
How do haptics work
• Different technologies are used to give sensations that feel like solid
objects and resistance. Devices apply force, pressure or resistance by
using electric actuators, pneumatics and hydraulics.
• For example, gamepads use electric motors to produce force-feedback
vibrations. What is more interesting, some data gloves both track
hand motion and use air bladders to harden and restrict your grip, so
you can feel an object in virtual reality.
• Recently, Facebook CEO Mark Zuckerberg revealed a new prototype
for VR gloves, and companies like Manus VR and the makers of the Dexmo
exoskeleton glove are working on delivering gloves that would work with
available VR headsets.
• Haptic suits are a high-end technology. “Suit up to feel every
explosion, gunshot, and sword fight in VR,” NullSpace writes on its VR
suit’s Kickstarter page. The NullSpace suit relies on the tracking systems
of headsets like the Vive and Oculus Rift to figure out where your limbs
and torso are in relation to the headset.
Haptic interfaces are divided into two main categories:

• Force feedback
• Tactile feedback

• Force feedback interfaces are used to explore and modify remote/virtual
objects in three physical dimensions in applications including computer-aided
design, computer-assisted surgery, and computer-aided assembly.

• Tactile feedback interfaces deal with surface properties such as roughness,
smoothness and temperature.
Basically, a haptic system consists of two parts:

• The human part, which controls the position of the hand
• The machine part, which exerts forces on the hand to
simulate contact with a virtual object

• Both systems are provided with the necessary sensors,
processors and actuators. In the case of the human system, nerve
receptors perform sensing, the brain performs processing and the
muscles perform actuation of the motion of the hand, while in the
machine system, the above-mentioned functions are performed by the
encoders, computer and motors respectively.
Haptic Concepts
• Tactile cues include textures, vibrations, and bumps; kinesthetic cues
include weight and impact. In the following section, we present some
crucial concepts and terminology related to haptics:
• Haptic: The science of applying tactile, kinesthetic, or both
sensations to human–computer interactions. It refers to the ability to
sense and/or manipulate objects in a natural or synthetic
environment using a haptic interface.
• Cutaneous: Relating to or involving the skin. It includes sensations of
pressure, temperature, and pain.
• Tactile: Pertaining to the cutaneous sense, but more specifically the
sensation of pressure rather than temperature or pain.
• Kinesthetic: Relates to the feeling of motion. It is related to
sensations originating in muscles, tendons, and joints.
• Force Feedback: Relates to the mechanical production of information
that can be sensed by the human kinaesthetic system.
• Haptics or Haptic Technology: An emerging interdisciplinary field that
deals with the understanding of human touch (human haptics), motor
characteristics (machine haptics), and the development of
computer-controlled systems (computer haptics) that allow physical
interactions with real or virtual environments through touch.
• Haptic Communication: The means by which humans and machines
communicate via touch. It mostly concerns networking issues.
Haptic devices
• Haptic devices (or haptic interfaces) are mechanical devices that act as
mediators in the communication between the user and the computer.
• Haptic devices allow users to touch, feel and manipulate three-dimensional
objects in virtual environments and tele-operated systems.
• Typically, a haptics system includes:
• Sensor(s)
• Actuator (motor) control circuitry
• One or more actuators that either vibrate or exert force
• Real-time algorithms (actuator control software, which we call a
“player”) and a haptic effect library
• An application programming interface (API), and often a haptic effect
authoring tool
• The Immersion API is used to program calls to the actuator into your
product’s operating system (OS); a hypothetical sketch of such an effect
API follows below.
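• The Immersion API itself is proprietary and its actual calls are not shown
here, so the following is a purely hypothetical mini-API, invented only to
illustrate how the components listed above (effect library, “player”,
actuator) might fit together; every class, name and value in it is an
assumption.

```python
# Hypothetical haptic effect API sketch -- not the real Immersion API.
import time

EFFECT_LIBRARY = {
    # name: (amplitude 0..1, duration in seconds) -- invented values
    "click": (0.8, 0.02),
    "buzz":  (0.5, 0.30),
}

class ActuatorPlayer:
    """Stand-in for the real-time actuator control software (the "player")."""
    def play(self, name):
        amplitude, duration = EFFECT_LIBRARY[name]
        # A real player would drive motor control circuitry; we just log.
        print(f"driving actuator: {name} "
              f"(amp={amplitude}, {duration * 1000:.0f} ms)")
        time.sleep(duration)

player = ActuatorPlayer()
player.play("click")  # e.g. confirm a virtual button press
player.play("buzz")   # e.g. signal a collision in the virtual scene
```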
Haptic Interface
• This consists of a haptic device and software-based computer control
mechanisms. It enables human–machine communication through the
sense of touch.
• By using a haptic interface, someone can not only feed
information to the computer but can also receive information or
feedback from the computer in the form of a physical sensation on
some part of the body.
• Haptic Perception: This is the process of perceiving the
characteristics of objects through touch
Haptic Rendering
• This is the process of calculating the sense of touch, especially force. It
involves sampling the position sensors at the haptic device to obtain the
user’s position within the virtual environment.
• The position information received is used to check whether there are any
collisions between the user and any objects in the virtual environment.
• In case a collision is detected, the haptic rendering module will compute
the appropriate feedback forces that will finally be applied onto the user
through the actuators.
• Haptic rendering is, therefore, a system that consists of three parts: a
collision detection algorithm, a collision response algorithm, and a control
algorithm (a minimal sketch follows below).
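• A minimal sketch of this three-part loop for a single virtual wall, using
a spring “penalty” force (F = k·d) as the collision response; the stiffness
value and the loop are illustrative assumptions, not a real device driver.

```python
# Collision detection (is the device past the wall plane?), collision
# response (spring penalty force F = k * d), and a control step that
# would send the force to the actuator.
WALL_X = 0.0        # wall plane at x = 0; free space is x > 0
STIFFNESS = 800.0   # N/m, illustrative spring constant

def compute_force(device_x):
    penetration = WALL_X - device_x       # collision detection
    if penetration <= 0.0:
        return 0.0                        # no contact, no force
    return STIFFNESS * penetration        # collision response: F = k * d

# A few iterations of the (normally ~1 kHz) control loop:
for x in (0.01, 0.0, -0.005):             # device positions in metres
    print(f"x = {x:+.3f} m -> force {compute_force(x):6.1f} N toward the user")
```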
Sensors and Actuators:
• A sensor is responsible for sensing the haptic information exerted by
the user on a certain object and sending these force readings to the
haptic rendering module.
• The actuator will read the haptic data sent by the haptic rendering
module and transform this information into a form perceivable by
human beings
Tele-Haptics
• This is the science of transmitting haptic sensations from a remotely
explored object/environment, using a network such as the Internet,
to a human operator. In other words, it is an extension of the human
touch sensation/capability beyond physical distance limits (a minimal
transport sketch follows below).
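• As a hedged illustration of this idea, the sketch below streams 3-axis
force samples over plain UDP; the address, packet format and function
names are assumptions for the example, and a real tele-haptic system
would also have to manage latency, jitter and control-loop stability.

```python
# Force samples measured at a remote site are serialized and sent over a
# network to the operator's device, which drives its local actuators.
import socket
import struct

OPERATOR_ADDR = ("127.0.0.1", 9999)  # placeholder address for the example

def send_force_sample(sock, fx, fy, fz):
    """Pack a 3-axis force reading (newtons) and send it to the operator."""
    sock.sendto(struct.pack("!fff", fx, fy, fz), OPERATOR_ADDR)

def receive_force_sample(sock):
    data, _ = sock.recvfrom(12)          # one 12-byte sample per datagram
    return struct.unpack("!fff", data)

# Remote side sends; operator side receives and would drive actuators.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(OPERATOR_ADDR)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_force_sample(tx, 0.0, 1.5, -0.2)
print("operator received force:", receive_force_sample(rx))
```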
Tele-Presence:
• This is the situation of sensing sufficient information about the
remote task environment and communicating this to the human
operator in a way that is sufficient for the operator to feel physically
present at the remote site.
• The user’s voice, movements, actions, etc. may be sensed,
transmitted, and duplicated in the remote location. Information may
travel in both directions between the user and the remote
location.
