
Virtual Reality and Artificial Intelligence in Motion Simulators
1) What are motion simulators?

Motion simulators attempt to reproduce the physical sensations you experience when driving a
vehicle, and everything you feel in a vehicle comes from forces. At any moment you can feel the "G-
forces" and vibration forces through your muscles, joints, organs, and inner ear. Motion simulators
tremendously increase immersion in games, and allow finer control of virtual planes and cars in
flight simulators and racing simulators.

A motion simulator or motion platform is a mechanism that creates the feelings of being in a real motion
environment. In a simulator, the movement is synchronized with a visual display of the outside world
scene.
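
To make that synchronization concrete: a motion platform has only a short range of travel, so it cannot reproduce sustained accelerations directly. A standard technique is a "washout" filter that passes acceleration onsets through to the platform and then bleeds the command back toward neutral. Below is a minimal illustrative sketch in Python; the time constant and the toy braking profile are assumptions for demonstration, not values from any particular simulator.

```python
import numpy as np

def washout_highpass(accel, dt, tau=2.0):
    """First-order high-pass 'washout' filter: passes acceleration
    onsets to the platform, then decays the command back to neutral
    so the actuators stay within their limited travel."""
    alpha = tau / (tau + dt)
    out = np.zeros_like(accel)
    for i in range(1, len(accel)):
        out[i] = alpha * (out[i - 1] + accel[i] - accel[i - 1])
    return out

# Toy input: a braking pulse of -4 m/s^2 between t = 1 s and t = 3 s.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
accel = np.where((t > 1.0) & (t < 3.0), -4.0, 0.0)
platform_cmd = washout_highpass(accel, dt)
# platform_cmd spikes at the onset of braking, then decays to zero:
# the platform cues the transient and quietly returns to center.
```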

2) How Artificial Intelligence can be used to improve simulations.

Artificial Intelligence and System Simulation are two areas in computer science that are being used
to model complex systems. Artificial intelligence programming methods permit more realistic and
robust simulation models and help the user develop, run, and interpret simulation experiments.
When designing a highly complex, systemically connected smart system, artificial intelligence can
reduce time, cost, and errors.

AI techniques can play a variety of roles in the simulation process for smart systems, including
knowledge representation in a model, decision making within a simulation, rapid prototyping of
models, data analysis of simulator-generated outputs, and model modification and maintenance.

Although simulation and Artificial Intelligence (AI) are two different technology
paradigms, these technologies are related to each other in their primary forms. In
computer engineering, simulation imitates an environment or a machine, while AI
effectively simulates human intelligence.

While they may be related, simulation and AI have historically been used very differently,
with different mathematical and engineering approaches. However, in recent years, the
development of AI-based simulations has experienced rapid growth in various
industries.

For instance, the now-infamous Cyberpunk 2077 used AI to simulate facial expressions
and lip-syncing. On the other hand, Microsoft Flight Simulator 2020 used AI to
generate realistic terrain and air traffic.

The power of AI to enable rapid development of faster, more optimized, and less
resource-hungry simulations, even at large scale, would open up simulation
technology to far more industries and platforms.

However, to understand the benefits of using AI in simulations and its development,
we first need to understand the traditional simulation development approach.

Traditional Simulation vs. AI-based Simulations

The basic idea behind simulation development is to gather data about the machine,
environment, or other subject under different inputs and conditions. These data are then
collected, analyzed, and studied to understand how the simulated subject functions
and behaves under different conditions and situations.

This understanding is then used to build a basic mathematical model that can
govern and imitate the actual object under different conditions, which in turn is used
to construct a simulation model that can replicate the real thing.
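
As a toy illustration of this traditional pipeline, the sketch below hand-derives a mathematical model (a point-mass vehicle with quadratic aerodynamic drag) and steps it forward in time. The constants and the drive-force input are illustrative assumptions, not real vehicle data.

```python
# Traditional approach: a hand-derived physics model stepped with
# explicit Euler integration. All constants are illustrative.
MASS = 1200.0   # vehicle mass in kg (assumed)
DRAG = 0.4      # lumped quadratic drag coefficient (assumed)

def step(v, drive_force, dt=0.01):
    """Advance vehicle speed v (m/s) by one time step of dt seconds."""
    accel = (drive_force - DRAG * v * v) / MASS
    return v + accel * dt

v = 0.0
for _ in range(30000):              # simulate 300 s at 100 Hz
    v = step(v, drive_force=2000.0)
print(f"speed approaches {v:.1f} m/s")  # near the point where drag balances drive
```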

When AI is used to build these simulation models, however, the AI has to be fed
data about the object's or environment's behavior and how the subject functions
under different conditions and settings. During this process, the AI model requires
relevant data that can be considered a representative sample of the simulation subject.

Generally, Neural Networks (NNs) would be used as the AI model to be trained.
After training, the network simulates the subject and its behavior.
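
A minimal sketch of that idea, assuming scikit-learn is available: a small neural network is fit to logged samples of the subject's behavior, after which the network stands in for the subject. The toy "hidden physics" below reuses the drag model from the previous sketch and is purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend these samples were logged from the real subject:
# inputs (speed, drive force) -> output (measured acceleration).
X = rng.uniform([0.0, 0.0], [70.0, 4000.0], size=(5000, 2))
y = (X[:, 1] - 0.4 * X[:, 0] ** 2) / 1200.0   # the "real" physics, hidden from us

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(X, y)

# The trained network now replaces the subject inside the simulator.
print(surrogate.predict([[20.0, 2000.0]]))    # predicted acceleration at 20 m/s
```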

Both approaches, traditional and AI-based, have their advantages and
disadvantages. One significant advantage of the conventional simulation method is
that the mathematical model defined after studying the simulation subject can be
reused and reconstructed easily.

This allows other development teams to verify or reuse the same mathematical
principles or models to generate the simulation. A traditional approach also enables
developers to expand the simulation based on their understanding of the subject,
without explicit testing or proof.

One significant disadvantage of the traditional approach is its complex and
resource-hungry process for generating the simulation. Everything has to be done by
the simulation developers, who must also be experts in the relevant domains and
understand the subject very closely.

Meanwhile, in AI-based simulation development, data is one of the essential
components. The subject's information needs to be abundant and deterministic, so
that the data represents the subject very closely.

This type of data may not be readily available and may need to be collected or
generated. But once accurate and abundant data has been collected, an AI-based
approach is very advantageous, since the developers do not need to understand the
subject deeply themselves.

Another significant advantage of AI-based simulation is the power of AI to discover
patterns or behaviors in subjects that the developers never considered or found.
Apart from this, training an AI model usually takes a lot of time, but it may not be as
resource-hungry, complex, and costly as the traditional approach.

One significant disadvantage of the AI approach is that the model it builds cannot
be inspected or understood by developers in any way, so it usually cannot be
reconstructed unless similar data is fed in again to train the model.

Apart from this, because of the data required to train the model, expanding the model
is generally impossible without sufficient additional data.

Combining Simulations and AI

Using AI in simulation generation or development enables data-powered
development with rapid changeability and minimal human involvement. Even
simulation traits considered too complex for humans to develop can be
reconstructed easily by AI if sufficient data is provided.

Because of this, AI can be used to simulate something too hard, complex, or time-
consuming for humans, in a short time and without too much effort. Thus, not only
does AI make the development of simulations faster, more productive, and easier, it
also enables rapid iteration and tweaking of simulations that would otherwise be far
less feasible, especially at large scale.
We can open new doors by combining the power of AI and simulation for product
design and development. Without AI simulations, developers generally have to
design a product or model that must be intensively tested before production; if
changes are needed after testing, the same process has to be repeated.

This process is very resource-intensive. With AI, however, design changes can be
easily tested and validated through simulation, enabling rapid iteration and
development.

The development and adoption of AI for simulation is especially needed in industries
like Augmented Reality (AR) and Virtual Reality (VR), where the sheer complexity
of building large-scale models, environments, and graphics with traditional
methods would be infeasible compared to AI's data-driven approach to
development. The opportunities in AR and VR could be explored and matured
much further through AI-generated simulations.

Alongside this, subjects like fluids (air and water) are extremely difficult to simulate
with a traditional approach alone, and the results are still often far from reality.
With the help of AI, such simulations can be closer to reality and more refined.

One significant advantage of AI-based simulation over the traditional approach is
that the conventional system is significantly resource-heavy, since it usually
calculates each simulation particle individually.

AI-based simulation, however, enables such complex simulations easily, since AI
can perform these calculations and predictions much faster and with fewer resources.
Alongside this, generative simulations like the generation of models, terrains in
games, and product designs would also be possible with AI.
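
The cost argument can be made concrete. In a brute-force particle simulation every particle interacts with every other, so each step costs O(N²); a learned surrogate replaces that inner loop with a fixed-cost network evaluation. The sketch below shows the brute-force side of the comparison, with illustrative numbers.

```python
import numpy as np

N = 2000
pos = np.random.rand(N, 3)   # random particle positions (illustrative)

def pairwise_forces(pos, softening=1e-3):
    """Brute-force gravitational-style forces between all particle pairs."""
    diff = pos[:, None, :] - pos[None, :, :]       # (N, N, 3) pair offsets
    dist2 = (diff ** 2).sum(-1) + softening        # avoid division by zero
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                  # no self-interaction
    return (diff * inv_d3[..., None]).sum(axis=1)  # (N, 3) net force per particle

forces = pairwise_forces(pos)
# N = 2000 already allocates a 2000 x 2000 interaction grid;
# doubling N quadruples both the work and the memory.
```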

For instance, take Microsoft Flight Simulator 2020 as an example. This game allows
gamers to experience realistic flights worldwide without compromising the quality
of models, terrain, and environment.

With a traditional approach, the game developers would have had to model and
build all the terrain in 3D, along with matching landscapes and backgrounds, to give
the simulation a realistic feel.

This would have cost the developers a massive amount of time and resources, plus a
considerable number of experts to deal with the complex problems lying ahead in
such an enormous project. Realistically, such a project would not have been feasible
or even practically possible to complete.
Instead, through the use of AI, the developers took massive amounts of already-
available data and combined it with vast amounts of computation in the cloud to
train an AI model that could build realistic 3D models of terrain and environments,
along with grasses, trees, and water, based upon the real world.

The results produced were pretty spectacular and received substantial critical acclaim
from game developers and gamers alike.

Conclusion

By combining simulations and AI, we can unfold new opportunities and endless
possibilities in different industries.

Along with technologies like Machine Learning and Deep Learning, AI-enabled
simulations will be propelled by a data-driven backend. By overcoming the
disadvantages of the traditional approach, AI-based simulations will push the
boundaries of what simulations can do.

Even the most complex simulations, which would be next to impossible when
developed with traditional methods, will be attainable by combining simulations and
AI.

Moreover, with AI enabling the rapid development of more optimized, higher-quality
simulations, the industry may experience a revolution empowering next-level
simulations with realism and detail never seen before.

3) Simulator sickness and ways to avoid it in motion simulators.


Simulator sickness is a subset of motion sickness that is typically experienced while
playing video games from a first-person perspective. It was first identified in aircraft
pilots who train for extended periods in flight simulators.
Due to the spatial limitations imposed on these simulators, perceived discrepancies
between the motion of the simulator and that of the vehicle can occur and lead to
simulator sickness. It is similar to motion sickness in many ways, but occurs
in simulated environments and can be induced without actual motion. Symptoms of
simulator sickness include discomfort, apathy, drowsiness, disorientation, fatigue, and
nausea. These symptoms can reduce the effectiveness of simulators in flight training
and lead to systemic consequences such as decreased simulator use and compromised
training, ground safety, and flight safety. Pilots who have suffered from simulator
sickness are less likely to want to repeat the experience, which can reduce the number
of potential users. Simulator sickness can also compromise training in two safety-
critical ways:
1. It can distract the pilot during training sessions.

2. It can cause the pilot to adopt certain counterproductive behaviors to prevent
symptoms from occurring.

Simulator sickness can also have post-training effects that can compromise safety after
the simulator session, such as when the pilots drive away from the facility or fly while
experiencing symptoms of simulator sickness.

Motion sickness has been a significant issue for automotive driving simulators since
they were first introduced. This is because the very simulator systems that can provide
a driver with useful motion and visual feedback also have the potential to violate a
driver’s expectations, causing disorientation and discomfort. While motion sickness can
be excused as a mere annoyance with entertainment class driving simulators, it is
generally regarded as unacceptable for engineering class Driver-in-the-Loop (DIL)
simulators, such as those used by professionals to evaluate vehicle and automotive
subsystem designs.

Eliminate motion sickness

Working backwards, we can assert that a driver who is successfully immersed
in a high-quality, convincing virtual driving experience is not, by default, being
subjected to the triggers that cause motion sickness in the first place. But
getting to this level of simulator performance and refinement involves some
technical challenges. For example, it is certainly important to be able to
efficiently execute all the required physics models and to communicate
synchronously - and with extremely low latency - amongst all the real-time
hardware and software systems in play. This is an all-encompassing
statement, so it may be useful to drill down and discuss some specific
guidelines that are vital for meeting all the cause-and-effect expectations of
DIL simulator drivers.

Ways to help eliminate DIL simulator motion sickness:
1. Start with first principles.
A DIL simulator should be a pleasing symphony of various elements with a solid
engineering foundation. For example, if other things are held constant, the dynamic
performance of any DIL simulator will decrease as the size and mass of the motion
machinery and payloads increase. If the fundamental DIL simulator design itself
restricts its dynamic performance, there will be no way to avoid mis-cues and prevent
driver discomfort. Luckily, there are next-generation DIL simulator solutions that can be
adopted to avoid this.

2. Avoid software Band-Aids.
Complex software strategies can be applied to ease simulator motion sickness
triggers. Common examples are predictive and/or adaptive control strategies that might be
used for simulator motion systems. However, when such strategies are required rather
than optional, it is usually an indicator that the underlying motion systems are not
capable of repeatably, confidently delivering correct driver cues.

3. Abandon flight simulator strategies.
Aircraft flight is not restricted to planar motion, so a pilot can experience a large visual
tilt in the horizon line without sensing lateral acceleration. In a ground vehicle, this is not
the case, and a driver expects to feel accelerations in a reference frame that is aligned
with the earth. If this expectation is violated, a driver’s sense of postural stability can be
disturbed, causing disorientation and nausea. This is why it is advisable to avoid
simulator “tilt compensation” and similar schemes.
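
For context, the flight-simulator trick being warned against works by tilting the cab so a component of gravity, g·sin(θ), appears along the driver's lateral axis. A quick illustrative calculation shows why this is so noticeable in a ground-vehicle setting:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def tilt_angle_for(lateral_accel):
    """Cab tilt (degrees) needed to fake a sustained lateral
    acceleration using the gravity component g * sin(theta)."""
    return math.degrees(math.asin(lateral_accel / g))

# Faking a modest 0.3 g corner requires roughly 17.5 degrees of tilt,
# a rotation the driver's inner ear and posture will register as a
# mis-cue, which is exactly what the guideline above warns about.
print(tilt_angle_for(0.3 * g))
```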

4. Deploy synchronous systems.
As mentioned above, this is crucial for accurately replicating real vehicle control and
response sensations. For example, motion and vision systems must certainly be fast,
with robust, low latency pipelines. But perhaps even more important to DIL simulator
drivers is having the visual and inertial worlds well-aligned in time. This requires a
simulator computation architecture that can smoothly manage multiple applications.
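
A minimal sketch of the synchronization idea, under assumed interfaces: motion and vision commands are derived from the same physics state and stamped with the same simulation time, so the two cue pipelines cannot drift apart. The step_physics, send_motion, and send_vision callables are stand-ins for real subsystem interfaces, not a specific product API.

```python
import time

DT = 0.001   # 1 kHz physics step (illustrative)

def run(sim_state, step_physics, send_motion, send_vision):
    """Fixed-timestep loop that keeps inertial and visual cues aligned."""
    sim_time = 0.0
    next_deadline = time.perf_counter()
    while True:
        sim_state = step_physics(sim_state, DT)
        sim_time += DT
        # Both cueing systems receive the same state and timestamp,
        # so the visual and inertial worlds stay aligned in time.
        send_motion(sim_state, sim_time)
        send_vision(sim_state, sim_time)
        # Hold the loop to real time.
        next_deadline += DT
        time.sleep(max(0.0, next_deadline - time.perf_counter()))
```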

5. Stay flexible.
A DIL simulator driver receives a continuous mix of information and is therefore
unable to discern between sensations created by the vehicle model and sensations
created by motion and/or vision cueing. Therefore, a flexible simulator
tuning tool set needs to be in place on the operator/engineer side in order to objectively
identify and address issues that might be disturbing for the driver. In addition, having an
open, modular simulator architecture will facilitate explorations of other cueing systems
(audio, tactile, etc.) that may improve overall driver immersion.

4) Developing virtual reality for vehicle simulation purposes.


Market costs and technology have so far obstructed the development of small
commercial driving simulators with good performance levels, while a small set of
expensive simulators exist in car manufacturing companies and traffic research
institutes. In order to obtain the proper immersion sensation for the driver within the
synthetic scene, simulator developers have resorted to large panoramic screens and
high-resolution video projectors. For many applications it is also required to simulate
the accelerations (centrifugal forces, braking, etc.) by using a mobile platform (see
[10] for a summary of conventional simulation technology). These platforms must be
able to lift the huge weight of the vehicle cockpit, viewing screen, and projection
system, which is costly and requires an elaborate hydraulic system.

The development of smaller and portable simulators for training applications has made
it necessary to substitute this equipment with single flat projection screens or multiple
video monitors [9], and very simple motion or vibration systems. Virtual Reality (VR)
devices would offer a natural solution for inexpensive and compact driving simulation
while maintaining a high degree of immersion. Head-mounted displays (HMDs)
reproduce the role of panoramic screens with no limitation of viewing area (since
drivers can turn their heads in all directions), restrict sight to the synthetic images
only, and provide quality 3D sound, at much lower cost, size, and weight. Some
companies (Volvo in Sweden, for example) are exploring the application of VR in
limited driving simulation for design and demonstration purposes.

Despite some rough years, the automotive industry is still one of the most important
economic sectors. Car manufacturers are continuously trying to put current
technologies to use in order to deliver the best vehicles. Virtual Reality (VR) and
Augmented Reality (AR) technology is advancing rapidly as computers become more
powerful, and the AR/VR market has already become a billion-dollar market.

VR Adoption in the Automotive Industry

North America is the biggest shareholder of the global automotive VR market.
While Tesla advertises its cars by offering customers virtual rides, Ford Motor
Company has developed a VR training tool for polishing the skills of its technicians.
Germany continues to be the leading automotive VR adopter in Europe,
with Volkswagen reducing design costs via virtual prototyping, BMW increasing
sales thanks to immersive showrooms, and Audi enhancing the safety of self-driving
car manufacturing by performing testing in virtual reality.

Automotive VR Software Architecture

The architecture of VR applications in the automotive industry varies
depending on the application type. Yet in all cases, the VR solution
architecture includes the following three components (a data-model sketch
follows the list):

A client VR application
with modules responsible for processing input, generating output, building 3D
models of cars and applying the laws of physics to their surfaces, as well as
running interaction scenarios.

A database
that stores 3D car models, VR interaction scenarios, and user data.

A web administration panel
which allows access to the content in the database and thus data modification
in the client app.

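As promised above, here is a sketch of the data contract that ties the three components together; all names are assumptions for illustration, not a specific product schema. The web administration panel writes these records, the database stores them, and the client VR application reads them to build its scene.

```python
from dataclasses import dataclass, field

@dataclass
class CarModel3D:
    model_id: str
    mesh_uri: str                  # where the client app fetches geometry
    options: list[str] = field(default_factory=list)   # e.g. paint colors

@dataclass
class InteractionScenario:
    scenario_id: str
    model_id: str                  # which car the scenario applies to
    steps: list[str] = field(default_factory=list)     # ordered user actions

@dataclass
class UserRecord:
    user_id: str
    completed_scenarios: list[str] = field(default_factory=list)

# Editing a record from the admin panel (say, adding a paint option)
# changes what the client app renders on its next fetch.
```
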
Use Cases of VR in the Automotive Industry

VR for Vehicle Prototyping

With VR, there’s no need to spend time and effort on the complicated process
of building bulky physical prototypes. Virtual prototyping also simplifies
research and development, speeds up the design process, and reduces the
number of adjustment rounds, thus significantly cutting the costs on the entire
pre-manufacturing cycle.

How VR for vehicle prototyping works: When full-scale 3D samples of
vehicle parts or complete vehicle models are placed in virtual reality,
designers can easily manipulate them and quickly tweak them according to
safety regulations, design requirements, or stakeholders' requests.

Use cases:

- Pre-manufacturing design and approval.

- Virtual testing of self-driving car prototypes.

VR for Training in the Automotive Industry

Virtual reality training software is useful for both manufacturing and after-
sales. It can be part of original equipment manufacturers' corporate training
programs to ensure quick onboarding and proficiency that matches the brand
image. VR training can also be offered to potential car buyers, who would
appreciate effective and safe VR driving lessons on specific car models (e.g.,
autonomous cars).

How VR for training works: A user can drive or perform maintenance on a
virtual vehicle just like they would with a physical one, without the risk of
causing any damage to either humans or vehicles. To ensure better results,
the VR app can include textual, visual, and audio guidance.

Use cases:

- Technician training.

- Vehicle operation learning.

VR for Auto Showrooms

Virtual reality technology can become a sales-boosting mechanism for car
manufacturers and dealers. They have the opportunity to raise online sales via
virtual showrooms, where potential customers can inspect a 3D car's exterior
and interior and take a test drive, all without leaving their homes.

How VR immersive showrooms work: A showcased virtual car is a complete
and detailed replica of the physical model. Still, a customer can easily modify
certain elements, e.g., the color of the exterior or the material of the interior,
to understand which combination or variant they like best.

Use cases:

- Remote immersive tours.

- Virtual test drive.

1. Unity Virtual Reality Engine


Unity is one of the most widely used game development engines globally for VR headsets. Game
developers create apps, games, and even industrial applications compatible with Oculus, HTC Vive,
and PlayStation VR.

Along with native game-dev tools for PC, consoles, and other software platforms, Unity also offers
an extensive collection of Unity assets for use with its AR and VR developer tools to help creators
succeed.

Interact is one tool that creates advanced VR applications directly from CAD or point-cloud data.
VisualLive is another popular Unity tool that uses AR in real time, overlaying large BIM and CAD
files onto job sites.

These system tools and others like Unity mod manager are excellent for ensuring accurate motion
control for VR headsets on PC, building finely crafted VR games, and natural VR mobile experiences
on Android and iOS platforms with the Unity API.

2. Unreal Engine For Extended Reality (XR): AR, VR & MR


The powerful Unreal Engine is another complete suite of developer tools, including VR
compatibility. Unreal Engine suits many industries: gaming, film, architecture, automotive
and transportation, broadcasting, and AR/VR simulation.

Creators are given total freedom to deliver state-of-the-art visuals, rich entertainment
experiences, and immersive virtual worlds. Like Unity, Unreal Engine has a variety of tried-
and-true virtual reality and augmented reality developer tools designed to handle any task. As a
result, the VR game engine provides game developers with an advanced real-time 3D creation tool
for immersive experiences.

3. Blender 3D Computer Graphics Software Toolset


Blender has been a titan in the 3D modeling and animation industry since 1994. It's free and
open-source software built to design 3D-printed models, animate 3D models, and use those assets
in 3D applications like animated films and VR games.

One of the best features for virtual reality developers is the built-in rendering tool. Blender has
an unbiased path-tracer engine that offers stunning, ultra-realistic rendering. This powerful
rendering tool has a real-time preview, CPU and GPU rendering, PBR shaders, HDR lighting
support, and, of course, VR rendering support. Blender's modeling, rendering, animation, rigging,
sculpting, and simulation toolsets run on many systems, including Linux, macOS, Windows,
Android, FreeBSD, OpenBSD, NetBSD, DragonFly BSD, and Haiku.

4. OpenVR SDK to target SteamVR


SteamVR hardware uses the OpenVR developer tool to deliver VR content on almost any VR
headset for PC. OpenVR is an API implemented in SteamVR that allows access to a wide range
of VR hardware.

Game developers can access the OpenVR SDK to build integrations with the SteamVR platform.
Supported hardware includes Valve Index, HTC Vive, Oculus Rift, and Windows Mixed Reality
headsets. OpenVR achieves this breadth by not requiring applications to have specific knowledge
of the hardware they target.

Another significant advantage is that OpenVR is natively supported by Unreal Engine 4 and Unity
5.4+. It is worth mentioning that Valve's Steamworks SDK also allows software developers to
integrate Steam's matchmaking, achievements, and Steam wallet.

5. Autodesk Maya 3D Computer Graphics Toolset


VR developers use Autodesk Maya 3D software for crafting realistic characters and professional
assets. Maya includes a free tool called Create VR. What does Autodesk Maya do?

This innovative VR developer tool empowers artists and designers to work directly inside their
3D design process. Create VR uses a simple curve system and surface tools so creators can explore
3D space and assets while fully immersed in virtual reality alongside their design.

Composite sketches and modeled assets can also be exported to Maya or other content creation
applications. We recommend Autodesk Maya for larger studio productions rather than indie
developers because of its steep initial learning curve.

6. Autodesk 3ds Max® Modeling and Rendering Software


Autodesk 3ds Max and Autodesk Maya are both paid software services Autodesk, Inc. provides for
the video game industry. Both are capable of modeling, animation, rigging, keyframing, rendering,
and lighting.

Both packages have been used to make VR games, television, and movies, and each has a complete
3D toolset with unlimited creative potential. The main difference is that Maya focuses on
character realism, while Autodesk 3ds Max is an all-purpose tool designed for faster modeling
and quick editing, especially as a virtual reality developer tool.

3ds Max is an ideal game development tool for novices to 3D animation, with an abundance
of online courses and YouTube tutorials making the software easier to learn.

7. Eyeware Beam Head and Eye Tracking Software Development Kit


The Eyeware Beam all-in-one head and eye tracker SDK integrates with its API so that VR game
developers can create richer gaming experiences. The Eyeware Beam SDK lets developers create
head- and eye-tracking-based apps to complement the VR development tools mentioned in this
top-ten list.
The SDK provides the capabilities to develop head and eye-tracking-enabled PC solutions with
access to tracking data in real-time. The SDK offers APIs for C++ and Python with support for Unity
in the works. Application integrators and developers previously depended on dedicated hardware to
enable these functionalities for end-users.

When the SDK is integrated into a PC game, gamers can use head and eye tracking to control the
in-game camera with real-life head movements. Indie game developers can put the tech to use in
VR games, letting eye tracking manipulate gameplay just as head tracking does, for an experience
similar to virtual reality using only an iPhone.

Interactive and social games can benefit from the ability to live-stream with an eye-tracker
overlay, accurately showing viewers where the player is looking on the screen. This is an early-
access API for indie game devs to integrate the tech into PC games, mods, controllers, or whatever
the developer can imagine. Gamers need only a PC and an iPhone or iPad, with no wearables at all.

Game developers can try out the head and eye tracking software for free. The Eyeware Beam app can
be downloaded on the app store for use in PC games.

The app turns a Face ID-supported iPhone or iPad, with a built-in TrueDepth camera, into a
precise, multi-purpose, six-degrees-of-freedom (6DoF) head and eye tracking device. This means
anyone can download the app to turn their iOS device into a head and eye-tracking camera.

It works with over 190 games, including DCS, Microsoft Flight Simulator, and Star Citizen, all
through a simple app download. Like other software-based trackers, it requires OpenTrack.

8. Cara VR™ Virtual Reality Plug-in Toolset for Nuke


Another paid application is Nuke from The Foundry. Nuke operates a little differently, using
node-based digital compositing and visual effects. Cara VR is an ingenious virtual reality
developer tool with a specialized toolset for creating excellent live-action virtual reality content.

Compositing 360° VR footage takes quite a lot of time. This tool will speed up the challenging
process for game artists, so creators have more time to focus on other essential aspects of their
envisioned VR developer experience.

The NukeX environment also now integrates the latest version of Cara VR for a powerful developer
experience using clean-up, set extensions, 3D element insertion, and more.

9. Autodesk Forge AR And VR Toolkit


A third ingenious VR developer tool among Autodesk's paid services is Forge, which connects to
data streams inside the Unity engine. The software is a cloud development platform that uses
web-service APIs to let developers build innovative, cloud-powered applications. 2D and 3D
designs can be viewed in a safe 3D environment, and Forge integrates seamlessly with other AR
and VR developer application tools.

10. Google Cardboard – Cardboard VR Developer Tool


The Cardboard VR development tool is affordable, lightweight hardware designed for fun and quick
experiences on a mobile platform. Make use of the Cardboard SDK to turn a smartphone into a VR
software developer tool.

An Android smartphone can display 3D scenes with stereoscopic rendering, track and react to head
movements, and interact with apps by detecting when the user presses the viewer button. The
Cardboard Design Lab is a free app that helps creators understand how to craft a virtual reality
experience using their virtual reality development tool.

Which of the ten VR tools is best for your game idea? Much of the answer depends on a game
developer's skill with languages such as C#, which is often used for creating desktop, mobile, and
VR/AR apps.

Programming languages are a foundation for AR/VR developers, who will likely encounter the
Unity and Unreal video game engines. Other companies, such as Facebook with its virtual reality
platforms, offer powerful VR developer tools not mentioned in this list.

Do you have an idea for a virtual reality game? We encourage you to integrate head and eye tracking
into your VR games for added realism to the virtual reality gaming experience.
