
METAVERSE GAMING

A Project Report

Submitted by

Kaustubh Kundan 190120101053


Kishlay Sharma 190120101056
Kajal Singh 190120101050
Jayant Rajan 190120101049

Under the Supervision of


Dr. Anand K. Gupta
Prof. & HOD
Department of Computer Science and Engineering

In Partial Fulfillment of the Requirements


for the Degree of
Bachelor of Technology

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


TULA’S INSTITUTE, DEHRADUN
(Affiliated to VMSB Uttarakhand Technical University, Dehradun)
DECLARATION

We declare that the work embodied in this Project report is our own original work, carried out by
us under the supervision of Ms. Ritu Pal and the co-supervision of ………… for the session 2022-
2023 at Tula’s Institute, Dehradun. The matter embodied in this Project report has not been
submitted elsewhere for the award of any other degree or diploma. We declare that we have
faithfully acknowledged, given credit to and referred to the researchers wherever their work has
been cited in the text and the body of this report. We further certify that we have not willfully
copied any other person’s work (paragraphs, text, data, results, etc.) reported in journals, books,
magazines, reports, dissertations, theses, etc., or available on websites, and included it in this
Project report and cited it as our own work.

Date: Kaustubh Kundan (190120101053)


Kishlay Sharma (190120101056)
Kajal Singh (190120101050)
Jayant Rajan (190120101049)
Place:
Certificate from the Supervisor/Co-supervisor

This is to certify that the Project Report entitled:


“Metaverse Gaming”
Submitted by
Kajal Singh (190120101050)
Kishlay Sharma (190120101056)
Kaustubh Kundan (190120101053)
Jayant Rajan (190120101049)
at Tula’s Institute, Dehradun for the degree of Bachelor of Technology in Computer Science and
Engineering is their original work, carried out by them under my guidance and supervision. This
work, in full or in part, has not been submitted for the award of any other degree or diploma. The
assistance and help taken during the course of the study have been duly acknowledged and the
sources of literature amply recorded.

Supervisor Signature :

Supervisor Name : Ms. Ritu Pal

Supervisor Designation :

Date :
ACKNOWLEDGEMENT
Whenever a module of work is completed, there is always a source of inspiration. We always find
our parents to be our torch-bearers. While completing this project, we realized from our inner core
that Rome was not built in a day. The completion of this project could not have been possible
without the participation and assistance of many individuals who contributed to it. We would like
to express our deep appreciation and indebtedness to our faculties and supervisors for their endless
support, kindness, and understanding throughout the project.

We found a stack of minor project reports in the library of Tula’s Institute of Engineering and
Management. Those reports served as landmarks for us on the way through this task. The presented
report is the result of days and nights of work.

We are sincerely thankful to Dr. R. B. Singh, HOD (CSE), and the Minor Project Coordinator, Ms.
Ritu Pal, for their support. We express our gratitude and thanks to all the faculty and staff members
of the Computer Science and Engineering department for their sincere cooperation in furnishing
relevant information to complete this minor project successfully and well in time.

Finally, we owe a debt of gratitude to our parents and families for their enduring love and support.

Kajal Singh (190120101050)

Kaustubh Kundan (190120101053)

Kishlay Sharma (190120101056)

Jayant Rajan (190120101049)


TABLE OF CONTENTS

CHAPTER NO. TITLE PAGE NO.


ABSTRACT v

LIST OF TABLES vi

LIST OF FIGURES vii

LIST OF SYMBOLS, ABBREVIATIONS vii

1. INTRODUCTION 01

2. LITERATURE REVIEW 08

3. PROBLEM FORMULATION 09

4. IMPLEMENTATION 11

5. RESULTS & DISCUSSION 29

6. CONCLUSION AND FUTURE SCOPE 30

7. REFERENCES 31
LIST OF SYMBOLS/ABBREVIATIONS USED, FIGURES AND
TABLES
LIST OF FIGURES PAGE NO.

FIG.2 2

FIG.3 7

FIG.4 9

FIG.5 11

FIG.6 14

FIG.7 15

FIG.8 19

FIG.9 24

FIG.10 25

FIG.11 26

Keywords: Metaverse, a-frame, virtual reality, three.js, axie infinity


ABSTRACT

The Metaverse is the post-reality universe, a perpetual and persistent multiuser


environment merging physical reality with digital virtuality. It is based on the convergence
of technologies that enable multisensory interactions with virtual environments, digital
objects and people such as virtual reality (VR) and augmented reality (AR). Hence, the
Metaverse is an interconnected web of social, networked immersive environments in
persistent multiuser platforms. It enables seamless embodied user communication in real-
time and dynamic interactions with digital artifacts. Its first iteration was a web of virtual
worlds where avatars were able to teleport among them. The contemporary iteration of the
Metaverse features social, immersive VR platforms compatible with massive multiplayer
online video games, open game worlds and AR collaborative spaces.
CHAPTER-1

INTRODUCTION

Computer Science innovations play a major role in everyday life as they change and enrich human
interaction, communication and social transactions. From the standpoint of end users, three major
technological innovation waves have been recorded centered around the introduction of personal
computers, the Internet and mobile devices, respectively. Currently, the fourth wave of computing
innovation is unfolding around spatial, immersive technologies such as Virtual Reality (VR) and
Augmented Reality (AR). This wave is expected to form the next ubiquitous computing paradigm
that has the potential to transform (online) education, business, remote work and entertainment.
This new paradigm is the Metaverse. The word Metaverse is a closed compound word with two
components: Meta (Greek prefix meaning post, after or beyond) and universe. In other words, the
Metaverse is a post-reality universe, a perpetual and persistent multiuser environment merging
physical reality with digital virtuality. Regarding online distance education, the Metaverse has the
potential to remedy the fundamental limitations of web-based 2D e-learning tools. Education is
one crucial field for society and the economy where core implementation methods remain unchanged,
orbiting around content transmission, classrooms and textbooks despite numerous
technological innovations. Currently, there is an intense race to construct the infrastructure,
protocols and standards that will govern the Metaverse. Large corporations are striving to construct
their closed, proprietary hardware and software ecosystems so as to attract users and become the
de facto Metaverse destination. Different systemic approaches and diverging strategies collide
around concepts such as openness and privacy. The outcome of this race will determine the level
of users’ privacy rights as well as whether the Metaverse will be inclusive to students and school
pupils. Both issues have important implications for education as they will determine if the
Metaverse can become mainstream in e-learning. The aim of this article is to raise awareness about
the origin and the affordances of the Metaverse, so as to formulate a unified vision for meta-
education, Metaverse-powered online distance education. For this purpose, this article is structured
as follows: Definitions of the key concepts are presented in Section 2, and the limitations of two-
dimensional learning environments are analyzed thereafter.

XR systems immerse users through passive sensory inputs, chiefly visual and auditory immersion,
which create a sense of presence and support attention attraction.
In addition to the above passive sensory inputs, XR systems allow active interaction with virtual
elements through the use of motion controllers. These are handheld input devices with a grip,
buttons, triggers and thumb sticks. Using the controllers, users can touch, grab, manipulate and
operate virtual objects. This capability renders them active agents in any educational experience.
On this front, the development of full hand tracking will further improve the user experience
toward a more natural interface. Research is also being conducted towards wearable devices such
as haptics suits and gloves that respond to touch. Further sensory research efforts are concentrated
in the direction of smell digitalization and simulation. Interaction in XR environments does not
require users to be stationary. Users can activate their entire bodies. Physical movement is being
transferred into XR environments through positional and rotational tracking. Movement can be
tracked with either external, permanently mounted cameras (outside-in) or through inherent
headset sensors and cameras that monitor position changes in relation to the physical environment
(inside out). The latter is used in stand-alone, wireless headsets. The supported degrees of freedom
(DoF) of an XR headset is an essential specification that reflects its motion tracking capabilities.
Early and simpler headsets support three rotational head movement DoFs. Contemporary and high-
fidelity headsets support all six DoFs, adding lateral body movement along the x, y and z axes. One
frontier pertaining to occluded VR spaces is perpetual movement translation through omnidirectional
treadmills.

Limitations of 2D Learning Environments
Online distance education has a long
history associated with the movement and philosophy of Open Education. The Open Education
movement led to the creation of Open Universities worldwide, mainly after the 1960s. Later,
Computer Science advancements and the Internet enabled the emergence of Open Courseware,
Open Educational Resources and Open Educational Practices. More recently, it triggered the
explosion of Massive Open Online Courses (MOOCs). MOOCs are openly accessible online
courses that are attended by hundreds or thousands of people. Most of the time, they have a
duration of a few weeks and are free of charge. Online learning is becoming increasingly
mainstream especially in higher and adult, continuous education. The COVID-19 pandemic
accelerated this trend by disrupting attendance-based activities in all levels of education. Remote
emergency teaching was enforced worldwide due to health-related physical distancing measures.
Ever since its conception, online education mainly relies on two main system types: Asynchronous
and synchronous e-learning. Both types depend on software or web applications in two-
dimensional digital environments, spanning in-plane digital windows with width and height but
without any depth. Standard asynchronous online learning tools include learning management
systems (e.g., Moodle, Blackboard), and sometimes also collaborative web applications and social
networks. Asynchronous tools serve the flexible, in other words, anytime, anywhere
communication and interaction among educators, students and content. Synchronous e-learning
systems enable the online meeting of educators and students at the same time in a digital, virtual
space. Synchronous online learning is implemented through web conferencing platforms (e.g.,
Zoom, WebEx, Microsoft Teams, Adobe Connect, Skype). However, applications operating in 2D,
web-based environments have well-documented limitations and inefficiencies. The daily extended
use of synchronous online platforms leads to phenomena such as Zoom fatigue. Asynchronous
platforms are often plagued by emotional isolation, a detrimental emotion for participation
motivation. Consequently, e-learning courses in the above-mentioned platforms face high drop-
out rates. This phenomenon reaches its extreme in MOOCs where typical completion rates have
been fluctuating around or below 10%. The use of social media and collaborative applications
(e.g., blogs, wikis) can improve active engagement but does not necessarily address these underlying issues.
CHAPTER-2

LITERATURE REVIEW

The Metaverse is the post-reality universe, a perpetual and persistent multiuser environ-
ment merging physical reality with digital virtuality. It is based on the convergence of technologies
that enable multisensory interactions with virtual environments, digital objects and people such as
virtual reality (VR) and augmented reality (AR). Hence, the Metaverse is an interconnected web
of social, networked immersive environments in persistent multiuser platforms. It enables seamless
embodied user communication in real-time and dynamic interactions with digital artifacts. Its first
iteration was a web of virtual worlds where avatars were able to teleport among them. The
contemporary iteration of the Metaverse features social, immersive VR platforms compatible with
massive multiplayer online video games, open game worlds and AR collaborative spaces.

The definition of the Metaverse varies, depending on point of view and purpose. However, the
commonly discussed metaverse is a virtual world that is like the real world: it is a space for
interacting with other users. The metaverse began with Snow Crash in 1992 (Stephenson, 1992b),
and it was generally studied as the Second Life environment in 2006 (Park & Kim, 2022a).
Recently, various applications based on the metaverse (e.g., Roblox and ZEPETO) have attracted
considerable attention. There are four major differences between the current metaverse and the
previous Second Life metaverse. 1) The new metaverse is more natural and offers greater
immersion than did the previous one; it offers high recognition performance and a natural
generation model due to the development of deep learning. 2) Unlike the previous PC-based
metaverse, the current metaverse uses mobile devices to increase accessibility and continuity. 3)
With the development of security technologies such as blockchain and virtual currency (e.g., Dime,
Bitcoin), the economic efficiency and stability of metaverse services have improved. 4) Due to the
limitations of offline social activity (e.g., Covid-19), interest in the virtual world has grown.
We classify the definitions of the metaverse into four types—environment, interface, interaction,
and social value—by summarizing each characteristic of the metaverse. Similarity to the real world
is a representative example of classifications that distinguish the types of metaverse. There is a
realistic environment that faithfully reflects realistic constraints, and an unrealistic environment
that gives many degrees of freedom without realistic constraints. The metaverse is also classified
according to the degree of immersion (e.g., 3D, virtual reality (VR)) in terms of the interface.
Although the metaverse of a 3D environment that uses VR devices offers users a lot of immersion,
the metaverse offers more than the operation of VR devices in a 3D environment. In addition to
environments and interfaces, metaverse definitions focus on interactions beyond simple
conversations for users and non-player characters (NPCs). Recently, the metaverse has focused on
the redefinition of the social meaning of the metaverse itself, and not simply a replica of real-world
society. The four aspects of the metaverse are as follows.

Metaverse environments include realistic, unrealistic, and fused environments. The fused
environment reflects some unrealistic elements based on a realistic environment. The realistic
metaverse faithfully reflects geography and physical elements according to the designer's purpose
and interpretation (Schroeder et al., 2001). In the realistic metaverse, avatars cannot exist in two
places, and the speed of movement is limited in the same way as in the real world. This method
has the advantage of being able to deliver experiences in a way that is similar to reality (e.g., library
orientation, museum visits). However, although sound and visual modalities are relatively realistic,
there are limitations in the atmosphere, smell, and tactile sensations felt in the field.
The metaverse of the unrealistic environment deceives the user's senses and removes the barriers
of realistic time and space (Papagiannidis & Bourlakis, 2010). The unrealistic metaverse has the
advantage that it can be relatively freely utilized without physical constraints (e.g., gravity). It has
the advantage of being able to freely create unrealistic objects and allow users to experience things
that cannot be experienced in reality (e.g., Mars exploration). On the other hand, a consistent
worldview and exquisite environment are required because the unrealistic world has a lesser sense
of reality.
There is a fused method that comprises the advantages of both methods (Choi & Kim, 2017). The
metaverse of the fused environment includes an augmented method that adds virtual elements
based on reality, and a virtual method for composing a new world with the laws of reality. In the
augmented method, it is important to show how well virtual objects are combined with real objects.
The virtual method is more complicated, but it has the advantage of being able to offer user
experiences that were not possible with an unrealistic method. However, reconstructing a novel
world based on reality is difficult and complex because it is not easy to redefine the rules for and
reconstruct the real world.

From the interface point of view, there are 3D, immersive, and physical methods. Although 3D is
not an essential element of the metaverse, many definitions of the metaverse use the expression
“3D virtual world” (González et al., 2013). In fact, most metaverse environments are composed to
have 3D form, although there are differences in the degree of detail. The 3D method has the
advantage of increasing realism, but it has a disadvantage in terms of service continuity. For
example, there is a large deviation between 2D and 3D screen rendering, and this method requires
relatively high-performance hardware.
Immersion is an essential element for inducing user participation in the metaverse and maintaining
a continuous world (Jaynes et al., 2003). To create immersion, a physical tool (e.g., VR) is used to
substitute the user's real visual sense. Rather than simply sending a textual “Happy Birthday”
message to a distant friend, an avatar's face-to-face conversation in the metaverse immerses the
user. However, excessive immersion leads to psychological problems (e.g., separation from
reality). In addition, negative feelings and emotions that occur in the metaverse extend to the real
world, which can lead to social problems (e.g., identity confusion and addiction).
Physical elements (e.g., inertia) are also mentioned as features of a realistic metaverse. Reflecting
physical elements in the interface is a good way to provide realism, but current technology cannot
adequately provide realism (Amorim et al., 2014). There are tactile and visual methods for
reflecting physical elements. For example, direct stimulation of touch using VR suits and gloves
assists with physical sensations. Furthermore, visually, realism is reinforced by effects such as
bouncing a ball and the realistic rippling of water. However, it is difficult to convey tactile
emotions (e.g., handshakes, hugs) using avatars, and the application of physical laws to a large
space during rendering places a burden on software.

Interaction in the metaverse is classified as social networking, collaboration, and persona dialog.
Effectively redefining and utilizing the experience of social networking in the metaverse is
difficult. Furthermore, interest in value creation through collaboration beyond individual VR
experiences is increasing. Persona dialog maintains a natural conversation by reflecting the
characteristics of NPCs (Zhang et al., 2018).
Because it is the interaction between users that supports the world of the metaverse, many studies
have described the importance of networks (Nevelsteen, 2018). Some explain that the Internet and
social networking service (SNS) expand to become a virtual environment. This network service is
a good medium for expanding the metaverse and is a backbone that connects people's interactions.
Most metaverses consider the relationship between users online, but it is also necessary to pay
attention to an offline metaverse and an individual metaverse for privacy.
Collaboration and communication are important values for the metaverse (Zackery et al., 2016).
User avatars can collaborate and share experiences. They create new value through such
collaboration and sharing. Unlike in the real world, this collaboration makes it possible to
transcend time and space. It also gives users a common purpose and allows the metaverse to
continue as a society. However, because communication is based on sensor information that is
limited relative to reality, it is possible to misunderstand or make erroneous judgments about
hidden intentions.
It is also important to converse with the metaverse NPCs that have a personality (e.g., preference,
hobby) (Kwanya et al., 2015). The conversation is used to continuously convey and extend people's
experiences in the metaverse. We have to consider not only user-to-user conversations but also
user-to-NPC conversations. In addition to human-type NPCs, conversations with animals and
objects are possible in the metaverse. Conversations in the metaverse can be more exaggerated
than they are in reality. User expressions include violent words, so a safer control device is needed.
Sustainability is an essential factor for the metaverse (Papagiannidis & Bourlakis, 2010). The
metaverse serves as a tool to complement the real world and serves as the target of the metaverse
itself. The metaverse enables exchanges of various experiences and new knowledge among users.
This way, users build financial wealth, create new things, and have an opportunity to show a
different side of themselves. However, there are platform restrictions in these social activities, and
consistency is needed (e.g., worldview).
Interdisciplinary research is a medium that allows the metaverse to develop as a society beyond
the level of simply being a 3D environment and physical applications (Rehm et al., 2015). Beyond
simple games and social media, the metaverse requires a variety of values and novel concepts. The
philosophy, psychology, sociology, culture, economics, and politics of the metaverse require a
new perspective. It is necessary to consider an advanced perspective rather than simply substituting
formulas from the real world.
The taxonomy of the metaverse has as its basic components environment, interface, interaction,
and security (Park & Kim, 2022a). First, it is necessary to recognize the sights and sounds
constituting the world and to design an environment capable of rendering them. To compose a
visual environment, it is necessary to recognize and render scenes and objects. To compose a sound
environment, recognizing and synthesizing sound and voice is required. Moreover, motion
rendering for the natural movement of avatars and NPCs is important. In the metaverse, the
physical interface enhances the immersion of the user. Head-mounted displays (HMDs) and hand-
based input devices are commonly used as representative devices. Furthermore, non-hand-based
input devices and motion input devices are also becoming axes of input. Multimodal interaction is
basic because people do not communicate in only one mode when they have a conversation within
the metaverse. In addition, because the avatar performs various tasks simultaneously beyond the
scope of a conversation, multi-tasking is also an important factor. The embodied agent allows 3D
interaction as a means of shaping the agent. In NPC interactions, the persona is a factor that
enriches the metaverse. It is important to provide sustainable services to users by hierarchically
organizing events and scenarios using these components.
CHAPTER-3
PROBLEM FORMULATION
This time last year, the metaverse was a relatively unfamiliar term. A quick scan of Google’s
search trends shows a flatline in January 2021, before a slow rumble of interest around summer
2021 and a huge upward spike towards the end of last year (no doubt in part due to Facebook’s
landmark rebrand.) Thanks to a recent explosion of the metaverse within popular culture, mass
interest has been piqued and, for many, this buzzword du jour has become synonymous with
gaming. And you can see why – all the major metaverse activations so far have, indeed, played
out across the big gaming platforms: Gucci, Nike and Vans have all dipped their trendy toes into
the world of Roblox; Balenciaga (pictured) and Ariana Grande made unexpected appearances on
Fortnite; and even non-profits such as the Ad Council and Reporters Without Borders have
experimented with STEM-themed challenges and an Uncensored Library on Minecraft. With this
in mind, it can be easy for some to dismiss the metaverse as a gimmicky "gaming thing" – simply
the latest tech trend. And while the metaverse and gaming are inextricably linked, there is much
more to it under the surface.

A dystopian reality
The vision of the metaverse that we currently hold
as sacrosanct is that as defined and popularised by science-fiction works such as Snow Crash and
Ready Player One: dystopian science fiction, where the tangible reality of the world is so bleak
that the metaverse offers a dose of respite and escapism from the drudgery and horror of actual
life. Say what you want about the world in 2022, but we are far from a dystopian hellscape that
warrants plugging in and escaping into a new digital reality. We should look at the evolution of
this trend forking into two paths. The first being this idea of totally immersive persistent and
embodied worlds – this will be akin to the evolution of entertainment, the future of cinema, gaming
and socialising. Engaging by appointment in dedicated portions of time, this is where the current
gaming platforms will flourish and lead. But the other path looks not to plug us into virtual worlds,
but to make the real world more virtual, the natural evolution of how humans interact with
information and the physical world around them: the post-mobile era. Think augmented layers of
the world, with a wealth of information, content and interactivity enhancing our reality. A
controllable, customizable contextualization of the spaces around us. Opportunity to marketers
Both these paths will blur at times and will be underpinned by the decentralized backbone of Web3,
but it’s the second route that offers the scale and opportunity to marketers. Path one seems tangible,
because the modern gaming platforms have already started this journey and the worlds of Fortnite
and Roblox are analogous to these always-on virtual worlds, whilst path two seems like a fantasy
right now. But with Apple, Snap, Meta, Google and Niantic working to achieve this vision, we are
just a consumer product release (headset, glasses… contact lens?) away from the journey properly
beginning and quickly accelerating over the next few years. Think about mobile pre-iPhone and
that is where we are. So while the metaverse may (understandably) seem indiscernible from
gaming right now, over the coming months and years, we will be living through every twist and
turn of its exciting evolution into an all-encompassing platform that will totally revolutionise brand
and consumer engagement. Dismissing the metaverse as a shiny, new gaming trend may be
foolhardy – now is the time for careful planning, internal alignment and the building of a strong,
strategic roadmap to ensure your brand is future-proofed for where the metaverse takes us.

The SLAM industry has been progressing at an astounding level over the last 30 years. This has
enabled large-scale, real-world applications of this technology. For example, Tesla’s Autopilot uses
SLAM: sensors translate data from the outside world for the car’s onboard computer, and this
information is compiled into a virtual projection of the surroundings, helping to avoid crashes and
keeping the driver informed about the journey.

Loop Closure Algorithm – A loop closure algorithm is the act of correctly asserting that a device
has returned to a previously visited location. This method involves maintaining a list of prior
orientations and comparing the user’s present view with a complete set, or a subset, of previously
explored views. Based on the comparison, the spatial map of the environment is updated and the
drift error is reduced. Using this algorithm, NavVis stated that an accuracy of 20 mm can be achieved
at 95% confidence, which is high compared with the 0.6-1.3% drift of stereoscopic cameras; i.e., the
estimated location could be 60-130 cm away from the object’s real location.
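To make the loop-closure idea concrete, the following is a minimal, illustrative JavaScript sketch and
not NavVis’s actual algorithm: the device keeps descriptors of previously visited views and compares
the current view against them, treating a sufficiently close match as a revisit that can be used to
correct drift. The descriptor format, the cosine-similarity comparison and the threshold are
simplifying assumptions.

// Illustrative loop-closure check: keep descriptors of past views and
// flag when the current view closely matches one seen before.
const visitedViews = [];        // prior keyframes: { pose, descriptor }
const MATCH_THRESHOLD = 0.9;    // hypothetical similarity cutoff

// Cosine similarity between two equal-length feature vectors.
function compareViews(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Returns a previously visited view if the current one closely matches it
// (a loop closure); otherwise stores the current view as a new keyframe.
function detectLoopClosure(currentPose, currentDescriptor) {
  for (const view of visitedViews) {
    if (compareViews(view.descriptor, currentDescriptor) > MATCH_THRESHOLD) {
      return view;  // revisited location: correct accumulated drift against view.pose
    }
  }
  visitedViews.push({ pose: currentPose, descriptor: currentDescriptor });
  return null;
}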

Precision SLAM Technology – NavVis is also working on another algorithm, i.e. Precision
SLAM Technology, which significantly reduces drift error and improves the SLAM
accuracy. NavVis points out that the Precision SLAM Technology is especially evident when
the loop closure technique has little effect or cannot be used. They are also seeking a patent for
this technology.
CHAPTER-4
IMPLEMENTATION

METAVERSE
The metaverse is a concept of a persistent, online, 3D universe that combines multiple different
virtual spaces. You can think of it as a future iteration of the internet. The metaverse will allow
users to work, meet, game, and socialize together in these 3D spaces.
The metaverse isn’t fully in existence, but some platforms contain metaverse-like elements. Video
games currently provide the closest metaverse experience on offer. Developers have pushed the
boundaries of what a game is through hosting in-game events and creating virtual economies.
Although not required, cryptocurrencies can be a great fit for a metaverse. They allow for creating
a digital economy with different types of utility tokens and virtual collectibles (NFTs). The
metaverse would also benefit from the use of crypto wallets, such as Trust Wallet and MetaMask.
Also, blockchain technology can provide transparent and reliable governance systems.
Blockchain-based, metaverse-like applications already exist and provide people with liveable incomes.
Axie Infinity is one play-to-earn game that many users play to support their income. SecondLive
and Decentraland are other examples of successfully mixing the blockchain world and virtual
reality apps.
When we look to the future, big tech giants are trying to lead the way. However, the decentralized
aspects of the blockchain industry are letting smaller players participate in the metaverse’s
development as well.
The metaverse is a concept of an online, 3D, virtual space connecting users in all aspects of their
lives. It would connect multiple platforms, similar to the internet containing different websites
accessible through a single browser.
The concept was developed in the science-fiction novel Snow Crash by Neal Stephenson.
However, while the idea of a metaverse was once fiction, it now looks like it could be a reality in
the future.
The metaverse will be driven by augmented reality, with each user controlling a character or avatar.
For example, you might take a mixed reality meeting with an Oculus VR headset in your virtual
office, finish work and relax in a blockchain-based game, and then manage your crypto portfolio
and finances all inside the metaverse.
You can already see some aspects of the metaverse in existing virtual video game worlds. Games
like Second Life and Fortnite or work socialization tools like Gather.town bring together multiple
elements of our lives into online worlds. While these applications are not the metaverse, they are
somewhat similar, but the metaverse itself doesn’t exist yet.
Besides supporting gaming or social media, the metaverse will combine economies, digital
identity, decentralized governance, and other applications. Even today, user creation and
ownership of valuable items and currencies help develop a single, united metaverse. All these
features give blockchain the potential to power this future technology.

Because of the emphasis on 3D virtual reality, video games offer the closest metaverse experience
currently. This point isn’t just because they are 3D, though. Video games now offer services and
features that cross over into other aspects of our lives. The video game Roblox even hosts virtual
events like concerts and meetups. Players don't just play the game anymore; they also use it for
other activities and parts of their lives in "cyberspace". For example, in the multiplayer game
Fortnite, 12.3 million players took part in Travis Scott's virtual in-game music tour.

How does crypto fit into the metaverse?


Gaming provides the 3D aspect of the metaverse but doesn’t cover everything needed in a virtual
world that can cover all aspects of life. Crypto can offer the other key parts required, such as digital
proof of ownership, transfer of value, governance, and accessibility. But what do these mean
exactly?
If, in the future, we work, socialize, and even purchase virtual items in the metaverse, we need a
secure way of showing ownership. We also need to feel safe transferring these items and money
around the metaverse. Finally, we will also want to play a role in the decision-making taking place
in the metaverse if it will be such a large part of our lives.
Some video games contain some basic solutions already, but many developers use crypto and
blockchain instead as a better option. Blockchain provides a decentralized and transparent way of
dealing with these topics, while video-game development is more centralized.
Blockchain developers also draw influence from the video game world. Gamification is
common in Decentralized Finance (DeFi) and GameFi. It seems there will be enough similarities
in the future that the two worlds may become even more integrated. The key aspects of blockchain
suited to the metaverse are:
1. Digital proof of ownership: By owning a wallet with access to your private keys, you can
instantly prove ownership of activity or an asset on the blockchain. For example, you could show
an exact transcript of your transactions on the blockchain while at work to show accountability. A
wallet is one of the most secure and robust methods for establishing a digital identity and proof of
ownership (a brief signature-based sketch follows after this list).
2. Digital collectibility: Just as we can establish who owns something, we can also show that an
item is original and unique. For a metaverse looking to incorporate more real-life activities, this is
important. Through NFTs, we can create objects that are 100% unique and can never be copied
exactly or forged. A blockchain can also represent ownership of physical items.
3. Transfer of value: A metaverse will need a way to transfer value securely that users trust. In-
game currencies in multiplayer games are less secure than crypto on a blockchain. If users spend
large amounts of time in the metaverse and even earn money there, they will need a reliable
currency.
4. Governance: The ability to control the rules of your interaction with the metaverse should also
be important for users. In real life, we can have voting rights in companies and elect leaders and
governments. The metaverse will also need ways to implement fair governance, and blockchain is
already a proven way of doing this.
5. Accessibility: Creating a wallet is open to anyone around the world on public blockchains.
Unlike a bank account, you don't need to pay any money or provide any details. This makes it one
of the most accessible ways to manage finances and an online, digital identity.
6. Interoperability: Blockchain technology is continuously improving compatibility between
different platforms. Projects like Polkadot (DOT) and Avalanche (AVAX) allow for creating
custom blockchains that can interact with each other. A single metaverse will need to connect
multiple projects, and blockchain technology already has solutions for this.
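To illustrate the “digital proof of ownership” point above, one common pattern is signing a challenge
message with the wallet’s private key so that anyone can verify which address produced it. The sketch
below is only an example, assuming the ethers.js library (v5 API) and a made-up message; it is not
tied to any particular metaverse platform.

// Sketch: proving control of an address by signing a message (ethers.js v5 assumed).
import { ethers } from "ethers";

async function proveOwnership() {
  const wallet = ethers.Wallet.createRandom();          // stands in for the user's real wallet
  const message = "I control this address";             // hypothetical challenge text

  // Sign with the private key; the key itself is never revealed.
  const signature = await wallet.signMessage(message);

  // Anyone can recover the signer's address from the message and signature
  // and compare it with the claimed address.
  const recovered = ethers.utils.verifyMessage(message, signature);
  console.log(recovered === wallet.address);            // true: ownership proven
}

proveOwnership();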

What is a metaverse job?


As we mentioned, the metaverse will combine all aspects of life in one place. While many people
already work at home, in the metaverse, you will be able to enter a 3D office and interact with
your colleagues’ avatars. Your job may also be metaverse related and provide you with income
directly usable in the metaverse. In fact, these kinds of jobs already exist in a similar form.
GameFi and play-to-earn models now provide steady income streams for people worldwide. These
online jobs are great candidates for metaverse implementation in the future, as they show that
people are willing to spend their time living and earning in virtual worlds. Play-to-earn games like
Axie Infinity and Gods Unchained don’t even have 3D worlds or avatars. However, the principle
is that such games could become part of the metaverse as a way to earn money entirely in the online
world.

Metaverse examples
While we don't yet have a single, linked metaverse, we have plenty of platforms and projects
similar to the metaverse. Typically, these also incorporate NFTs and other blockchain elements.
Let's look at three examples:
SecondLive
SecondLive is a 3D virtual environment where users control avatars for socializing, learning, and
business. The project also has an NFT marketplace for swapping collectibles. In September 2020,
SecondLive hosted BNB Smart Chain's Harvest Festival as part of its first anniversary. The virtual
expo showcased different projects in the BSC ecosystem for users to explore and interact with.

Axie Infinity
Axie Infinity is a play-to-earn game that’s provided players in developing countries an opportunity
to earn consistent income. By purchasing or being gifted three creatures known as Axies, a player
can start farming the Smooth Love Potion (SLP) token. When sold on the open market, someone
could make roughly $200 to $1000 (USD) depending on how much they play and the market price.
While Axie Infinity doesn't provide a singular 3D character or avatar, it gives users the opportunity
for a metaverse-like job. You might have already heard the famous story of Filipinos using it as
an alternative to full-time employment or welfare.
Decentraland
Decentraland is an online, digital world that combines social elements with cryptocurrencies,
NFTs, and virtual real estate. On top of this, players also take an active role in the governance of
the platform. Like other blockchain games, NFTs are used to represent cosmetic collectibles.
They're also used for LAND, 16x16 meter land parcels that users can purchase in the game with
the cryptocurrency MANA. The combination of all of these creates a complex crypto-economy.

What's the future of the metaverse?


Facebook is one of the loudest voices for the creation of a unified metaverse. This is particularly
interesting for a crypto-powered metaverse due to Facebook's Diem stablecoin project. Mark
Zuckerberg has explicitly mentioned his plans to use a metaverse project to support remote work
and improve financial opportunities for people in developing countries. Facebook’s ownership of
social media, communication, and crypto platforms gives it a good start at combining all these worlds
into one. Other large tech companies are also targeting the creation of a metaverse, including
Microsoft, Apple, and Google.
When it comes to a crypto-powered metaverse, further integration between NFT marketplaces and
3D virtual universes seems like the next step. NFT holders can already sell their goods from
multiple sources on marketplaces like OpenSea and BakerySwap, but there isn’t yet a popular 3D
platform for this. At a bigger scale, blockchain developers might develop popular metaverse-like
applications with more organic users than a large tech giant.

Closing thoughts
While a single, united metaverse is likely a long way off, we already can see developments that
may lead to its creation. It looks to be yet another sci-fi use case for blockchain technology and
cryptocurrencies. Whether we will ever really reach the point of a metaverse is uncertain. But in the
meantime, we can already experience metaverse-like projects and continue to integrate blockchain
more into our daily lives.
The metaverse is a vast network where individuals, via their avatars, can interact socially and
professionally, invest in currency, take classes, work, and travel in 3-D virtual reality.

As the metaverse grows, it will likely create online spaces where user interactions are more
multidimensional than current technology supports. In simple terms, the metaverse will allow
users to go beyond just viewing digital content; users in the metaverse will be able to immerse
themselves in a space where the digital and physical worlds converge.
LIBRARY USED:
THREE.JS
Three.js is a cross-browser JavaScript library and application programming interface (API) used
to create and display animated 3D computer graphics in a web browser using WebGL. The source
code is hosted in a repository on GitHub.
Three.js allows the creation of graphical processing unit (GPU)-accelerated 3D animations using
the JavaScript language as part of a website without relying on proprietary browser plugins. This
is possible due to the advent of WebGL, a low-level graphics API created specifically for the web.
High-level libraries such as Three.js or GLGE, SceneJS, PhiloGL, and many more make it possible
to author complex 3D computer animations for display in the browser without the effort required
for a traditional standalone application or a plugin.
Features
Three.js includes the following features:[13]

• Effects: Anaglyph, cross-eyed, and parallax barrier.


• Scenes: add and remove objects at run-time; fog
• Cameras: perspective and orthographic; controllers: trackball, FPS, path and more
• Animation: armatures, forward kinematics, inverse kinematics, morph, and keyframe
• Lights: ambient, direction, point, and spot lights; shadows: cast and receive
• Materials: Lambert, Phong, smooth shading, textures, and more
• Shaders: access to full OpenGL Shading Language (GLSL) capabilities: lens
flare, depth pass, and extensive post-processing library
• Objects: meshes, particles, sprites, lines, ribbons, bones, and more - all with Level of
detail
• Geometry: plane, cube, sphere, torus, 3D text, and more; modifiers: lathe, extrude, and
tube
• Data loaders: binary, image, JSON, and scene
• Utilities: full set of time and 3D math functions including frustum,
matrix, quaternion, UVs, and more
• Export and import: utilities to create Three.js-compatible JSON files from
within: Blender, openCTM, FBX, Max, and OBJ
• Support: API documentation is under construction. A public forum and wiki are in full
operation.
• Examples: Over 150 files of coding examples plus fonts, models, textures, sounds, and
other support files
• Debugging: Stats.js,[14] WebGL Inspector,[15] Three.js Inspector[16]
• Virtual and Augmented Reality via WebXR[17]
Three.js runs in all browsers that support WebGL.

Three.js is often confused with WebGL since more often than not, but not always, three.js uses
WebGL to draw 3D. WebGL is a very low-level system that only draws points, lines, and triangles.
To do anything useful with WebGL generally requires quite a bit of code and that is where three.js
comes in. It handles stuff like scenes, lights, shadows, materials, textures, 3d math, all things that
you'd have to write yourself if you were to use WebGL directly.

These tutorials assume you already know JavaScript and, for the most part they will use ES6
style. See here for a terse list of things you're expected to already know. Most browsers that support
three.js are auto-updated so most users should be able to run this code. If you'd like to make this
code run on really old browsers look into a transpiler like Babel. Of course users running really
old browsers probably have machines that can't run three.js.

When learning most programming languages the first thing people do is make the computer
print "Hello World!". For 3D one of the most common first things to do is to make a 3D cube. So
let's start with "Hello Cube!"

Before we get started let's try to give you an idea of the structure of a three.js app. A three.js app
requires you to create a bunch of objects and connect them together. Here's a diagram that
represents a small three.js app

Things to notice about the diagram above.


• There is a Renderer. This is arguably the main object of three.js. You pass a Scene and
a Camera to a Renderer and it renders (draws) the portion of the 3D scene that is inside
the frustum of the camera as a 2D image to a canvas.

• There is a scenegraph which is a tree like structure, consisting of various objects like
a Scene object, multiple Mesh objects, Light objects, Group, Object3D,
and Camera objects. A Scene object defines the root of the scenegraph and contains
properties like the background color and fog. These objects define a hierarchical
parent/child tree like structure and represent where objects appear and how they are
oriented. Children are positioned and oriented relative to their parent. For example the
wheels on a car might be children of the car so that moving and orienting the car's object
automatically moves the wheels (a short parenting sketch follows after these notes). You can read
more about this in the article on scenegraphs.

Note in the diagram Camera is half in half out of the scenegraph. This is to represent that
in three.js, unlike the other objects, a Camera does not have to be in the scenegraph to
function. Just like other objects, a Camera, as a child of some other object, will move and
orient relative to its parent object. There is an example of putting multiple Camera objects
in a scenegraph at the end of the article on scenegraphs.

• Mesh objects represent drawing a specific Geometry with a specific Material.


Both Material objects and Geometry objects can be used by multiple Mesh objects. For
example to draw two blue cubes in different locations we could need two Mesh objects to
represent the position and orientation of each cube. We would only need one Geometry to
hold the vertex data for a cube and we would only need one Material to specify the color
blue. Both Mesh objects could reference the same Geometry object and the
same Material object.

• Geometry objects represent the vertex data of some piece of geometry like a sphere, cube,
plane, dog, cat, human, tree, building, etc... Three.js provides many kinds of built
in geometry primitives. You can also create custom geometry as well as load geometry
from files.

• Material objects represent the surface properties used to draw geometry including things
like the color to use and how shiny it is. A Material can also reference one or
more Texture objects which can be used, for example, to wrap an image onto the surface
of a geometry.

• Texture objects generally represent images either loaded from image files, generated from
a canvas or rendered from another scene.

• Light objects represent different kinds of lights.
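
As a small illustration of the parent/child relationship described in the scenegraph note above (the
car-and-wheels example), the sketch below groups two hypothetical wheel meshes under one parent;
moving the parent moves its children with it. It assumes a scene object like the one created later in
this walkthrough.

// Parent/child transforms: children are positioned relative to their parent.
const car = new THREE.Group();                              // parent node

const wheelGeometry = new THREE.CylinderGeometry(0.5, 0.5, 0.2, 16);
const wheelMaterial = new THREE.MeshBasicMaterial({color: 0x333333});

const frontWheel = new THREE.Mesh(wheelGeometry, wheelMaterial);
frontWheel.position.set(1, 0, 0);                           // relative to the car
const rearWheel = new THREE.Mesh(wheelGeometry, wheelMaterial);
rearWheel.position.set(-1, 0, 0);

car.add(frontWheel);
car.add(rearWheel);
scene.add(car);                                             // assumes an existing Scene

car.position.x = 5;                                         // moves the car and both wheels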

Given all of that we're going to make the smallest "Hello Cube" setup that looks like this
First let's load three.js

• <script type="module">
• import * as THREE from 'three';
• </script>

It's important you put type="module" in the script tag. This enables us to use the import keyword
to load three.js. As of r147, this is the only way to load three.js properly. Modules have the
advantage that they can easily import other modules they need. That saves us from having to
manually load extra scripts they are dependent on.

Next, we need a <canvas> tag, so...

• <body>
• <canvas id="c"></canvas>
• </body>

We will ask three.js to draw into that canvas so we need to look it up.

• <script type="module">
• import * as THREE from 'three';

• function main() {
• const canvas = document.querySelector('#c');
• const renderer = new THREE.WebGLRenderer({canvas});
• ...
• </script>

After we look up the canvas we create a WebGLRenderer. The renderer is the thing responsible
for actually taking all the data you provide and rendering it to the canvas. In the past there have
been other renderers like CSSRenderer, a CanvasRenderer and in the future there may be
a WebGL2Renderer or WebGPURenderer. For now there's the WebGLRenderer that uses
WebGL to render 3D to the canvas.
Note there are some esoteric details here. If you don't pass a canvas into three.js it will create one
for you but then you have to add it to your document. Where to add it may change depending on
your use case and you'll have to change your code so I find that passing a canvas to three.js feels
a little more flexible. I can put the canvas anywhere and the code will find it whereas if I had code
to insert the canvas into the document I'd likely have to change that code if my use case changed.

Next up we need a camera. We'll create a PerspectiveCamera.

• const fov = 75;


• const aspect = 2; // the canvas default
• const near = 0.1;
• const far = 5;
• const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);

fov is short for field of view. In this case 75 degrees in the vertical dimension. Note that most
angles in three.js are in radians but for some reason the perspective camera takes degrees.

aspect is the display aspect of the canvas. We'll go over the details in another article but by default
a canvas is 300x150 pixels which makes the aspect 300/150 or 2.

near and far represent the space in front of the camera that will be rendered. Anything before that
range or after that range will be clipped (not drawn).

Those four settings define a "frustum". A frustum is the name of a 3d shape that is like a pyramid
with the tip sliced off. In other words think of the word "frustum" as another 3D shape like sphere,
cube, prism, frustum.

The height of the near and far planes is determined by the field of view. The width of both planes
is determined by the field of view and the aspect.

Anything inside the defined frustum will be drawn. Anything outside will not.

The camera defaults to looking down the -Z axis with +Y up. We'll put our cube at the origin so
we need to move the camera back a little from the origin in order to see anything.

• camera.position.z = 2;

Here's what we're aiming for.

In the diagram above we can see our camera is at z = 2. It's looking down the -Z axis. Our frustum
starts 0.1 units from the front of the camera and goes to 5 units in front of the camera. Because in
this diagram we are looking down, the field of view is affected by the aspect. Our canvas is twice
as wide as it is tall so across the canvas the field of view will be much wider than our specified 75
degrees which is the vertical field of view.
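
Because the default canvas is 300x150 pixels, real pages usually resize the drawing buffer and keep
the camera aspect in sync with the size the canvas is actually displayed at. Below is a minimal sketch
of this common pattern, assuming the renderer and camera created above; it can be called before each
render (for example at the top of a render loop).

// Keep the canvas resolution and the camera's aspect ratio in sync
// with the displayed size of the canvas.
function resizeToDisplaySize(renderer, camera) {
  const canvas = renderer.domElement;            // the canvas we passed in
  const width = canvas.clientWidth;              // CSS display size
  const height = canvas.clientHeight;
  if (canvas.width !== width || canvas.height !== height) {
    renderer.setSize(width, height, false);      // false: don't change the CSS size
    camera.aspect = width / height;
    camera.updateProjectionMatrix();             // apply the new aspect
  }
}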

Next we make a Scene. A Scene in three.js is the root of a form of scene graph. Anything you want
three.js to draw needs to be added to the scene. We'll cover more details of how scenes work in a
future article.

• const scene = new THREE.Scene();


Next up we create a BoxGeometry which contains the data for a box. Almost anything we want to
display in Three.js needs geometry which defines the vertices that make up our 3D object.

• const boxWidth = 1;
• const boxHeight = 1;
• const boxDepth = 1;
• const geometry = new THREE.BoxGeometry(boxWidth, boxHeight, boxDepth);

We then create a basic material and set its color. Colors can be specified using standard CSS style
6 digit hex color values.

• const material = new THREE.MeshBasicMaterial({color: 0x44aa88});

We then create a Mesh. A Mesh in three.js represents the combination of three things:

1. A Geometry (the shape of the object)


2. A Material (how to draw the object: shiny or flat, what color, what texture(s) to apply, etc.)
3. The position, orientation, and scale of that object in the scene relative to its parent. In the
code below that parent is the scene.

• const cube = new THREE.Mesh(geometry, material);
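
The walkthrough stops here, so the remaining steps are only sketched below, following the standard
three.js pattern: add the mesh to the scene, render once, and optionally re-render every frame with
requestAnimationFrame. The variable names match the snippets above.

// Add the cube to the scene and draw it with the camera.
scene.add(cube);
renderer.render(scene, camera);

// Optional: animate by spinning the cube and re-rendering every frame.
function render(time) {
  time *= 0.001;                    // convert milliseconds to seconds
  cube.rotation.x = time;
  cube.rotation.y = time;
  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);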


FRAMEWORK USED:

A-FRAME

A-Frame experiences can be used to view and manipulate three-dimensional models and view 360
degree images and videos. The framework is also used to design and implement responsive user
interfaces and hand tracking in VR and AR applications.
A-Frame is a web framework for building virtual reality (VR) experiences. A-Frame is based on
top of HTML, making it simple to get started. But A-Frame is not just a 3D scene graph or a
markup language; the core is a powerful entity-component framework that provides a declarative,
extensible, and composable structure to three.js.
Originally conceived within Mozilla and now maintained by the co-creators of A-Frame
within Supermedium, A-Frame was developed to be an easy yet powerful way to develop VR
content. As an independent open source project, A-Frame has grown to be one of the largest VR
communities.

A-Frame supports most VR headsets such as Vive, Rift, Windows Mixed Reality, Daydream,
GearVR, Cardboard, Oculus Go, and can even be used for augmented reality. Although A-Frame
supports the whole spectrum, A-Frame aims to define fully immersive interactive VR experiences
that go beyond basic 360° content, making full use of positional tracking and controllers.
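
Because A-Frame scenes are written directly in HTML, a complete example page can be very small.
The sketch below is a minimal illustration; the script URL points at an A-Frame release, and the exact
version is an assumption, so the current release from the official site should be used.

<!DOCTYPE html>
<html>
  <head>
    <!-- Version in this URL is an assumption; use the current A-Frame release. -->
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- Entities are plain HTML elements; attributes are components. -->
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>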
Features
VR Made Simple: Just drop in a <script> tag and <a-scene>. A-Frame will handle 3D
boilerplate, VR setup, and default controls. Nothing to install, no build steps.
Declarative HTML: HTML is easy to read, understand, and copy-and-paste. Being based on top
of HTML, A-Frame is accessible to everyone: web developers, VR enthusiasts, artists, designers,
educators, makers, kids.
Entity-Component Architecture: A-Frame is a powerful three.js framework, providing a
declarative, composable, reusable entity-component structure. HTML is just the tip of the iceberg;
developers have unlimited access to JavaScript, DOM APIs, three.js, WebVR, and WebGL (a small
component sketch is shown after this feature list).
Cross-Platform VR: Build VR applications for Vive, Rift, Windows Mixed Reality, Daydream,
GearVR, and Cardboard with support for all respective controllers. Don’t have a headset or
controllers? No problem! A-Frame still works on standard desktop and smartphones.
Performance: A-Frame is optimized from the ground up for WebVR. While A-Frame uses the
DOM, its elements don’t touch the browser layout engine. 3D object updates are all done in
memory with little garbage and overhead. The most interactive and large scale WebVR
applications have been done in A-Frame running smoothly at 90fps.
Visual Inspector: A-Frame provides a handy built-in visual 3D inspector. Open up any A-Frame
scene, hit <ctrl> + <alt> + i, and fly around to peek under the hood!

Components: Hit the ground running with A-Frame’s core components such as geometries,
materials, lights, animations, models, raycasters, shadows, positional audio, text, and controls for
most major headsets. Get even further from the hundreds of community components
including environment, state, particle systems, physics, multiuser, oceans, teleportation, super
hands, and augmented reality.
Proven and Scalable: A-Frame has been used by companies such as Google, Disney, Samsung,
Toyota, Ford, Chevrolet, Amnesty International, CERN, NPR, Al Jazeera, The Washington Post,
NASA. Companies such as Google, Microsoft, Oculus, and Samsung have made contributions to
A-Frame.
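
To make the "VR Made Simple" and "Declarative HTML" points above concrete, here is a minimal, self-contained A-Frame page. The release version in the <script> URL is illustrative; any recent A-Frame release follows the same pattern.

<html>
  <head>
    <!-- Load A-Frame from the official CDN (version number is an example). -->
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- A few primitive entities placed in front of the default camera. -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>

Opening this file in a browser yields a scene that can be viewed on desktop, on a smartphone, or in a headset via the VR button that A-Frame adds automatically.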
Off You Go!

If it’s your first time here, here’s a plan for success for getting into A-Frame:

1. Subscribe to the Newsletter for updates and tips on A-Frame and to see featured
community projects.
2. Read through the documentation to get a grasp. Glitch is used as a recommended coding
playground and for examples.
3. Join us on Discord and Slack and if you have any questions, search and ask on
StackOverflow, and someone will try to get to you!
4. When you build something, share your project online and we’ll try to feature it on
the newsletter and the blog!
It also really helps to dig into the fundamentals of JavaScript and three.js. Have fun!
Visual Inspector & Dev Tools

This section goes over several useful tools that will improve the VR development experience:

• A-Frame Inspector - Inspector tool to get a different view of the scene and see the visual
effect of tweaking entities. The VR analog to the browser’s DOM inspector. Can be
opened on any A-Frame scene with <ctrl> + <alt> + i.
• Keyboard shortcuts.
• Motion Capture - A tool to record and replay headset and controller pose and events. Hit
record, move around inside the VR headset, interact with objects with the controller.
Then replay that recording back on any computer for rapid development and testing.
Reduce the amount of time going in and out of the headset.

We’ll also go over GUI tools built on top of A-Frame that can be used without code, and touch on
other tools that can ease development across multiple machines.

A-Frame Inspector
The A-Frame Inspector is a visual tool for inspecting and tweaking scenes. With the Inspector, we
can:
• Drag, rotate, and scale entities using handles and helpers
• Tweak an entity’s components and their properties using widgets
• Immediately see results from changing values without having to go back and forth between
code and the browser

The Inspector is similar to the browser’s DOM inspector but tailored for 3D and A-Frame. We can
toggle the Inspector on any A-Frame scene in the wild; it is the 3D equivalent of “view source”.
Opening the Inspector
The easiest way to open the Inspector is to press the <ctrl> + <alt> + i shortcut on our keyboard. This will fetch
the Inspector code via CDN and open up our scene in the Inspector. The same shortcut toggles the
Inspector closed.

Not only can we open our local scenes inside the Inspector, we can open any A-Frame scene in
the wild using the Inspector (as long as the author has not explicitly disabled it).

See the Inspector README for details on serving local, development, or custom builds of the
Inspector.
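
As a sketch of what a custom setup might look like, the Inspector can be pointed at a specific build by configuring the inspector component on <a-scene>; the CDN URL below is an illustrative assumption and should be checked against the Inspector README.

<!-- Illustrative only: replace the url with the build you actually want to serve. -->
<a-scene inspector="url: https://cdn.jsdelivr.net/gh/aframevr/aframe-inspector@master/dist/aframe-inspector.min.js">
  <!-- scene contents -->
</a-scene>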
Using the Inspector
Scene Graph
The Inspector’s scene graph is a tree-based representation of the scene. We can use the scene graph
to select, search, delete, clone, and add entities, or to export HTML.

The scene graph lists A-Frame entities rather than internal three.js objects. Given HTML is also a
representation of the scene graph, the Inspector’s scene graph mirrors the underlying HTML
closely. Entities are displayed using their HTML ID or HTML tag name.
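
For illustration, a scene written like the hypothetical snippet below would appear in the scene graph as three nodes, labelled by ID where one is set and by tag name otherwise:

<a-scene>
  <a-box id="platform" color="#7BC8A4"></a-box>  <!-- listed as "platform" -->
  <a-sphere color="#EF2D5E"></a-sphere>          <!-- listed as "a-sphere" -->
  <a-entity id="player"></a-entity>              <!-- listed as "player" -->
</a-scene>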

Viewport

The viewport displays the scene from the Inspector’s point of view. We can rotate, pan, or
zoom the viewport to change the view of the scene:

• Rotate: hold down left mouse button (or one finger down on a trackpad) and drag
• Pan: hold down right mouse button (or two fingers down on a trackpad) and drag
• Zoom: scroll up and down (or two-finger scroll on a trackpad)

From the viewport, we can also select entities and transform them:

• Select: left-click on an entity; double-click to focus the camera on it


• Transform: select a helper tool on the upper-right corner of the viewport, drag the
red/blue/green helpers surrounding an entity to transform it
Components Panel
The components panel displays the selected entity’s components and properties. We can modify
values of common components (e.g., position, rotation, scale), modify values of attached
components, add and remove mixins, and add and remove components.

The type of widget for each property depends on the property type. For example, booleans use a
checkbox, numbers use a value slider, and colors use a color picker.

We can copy the HTML output of individual components. This is useful for visually tweaking and
finding the desired value of a component and then syncing it back to source code.
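
As a hypothetical example of that workflow, after dragging a box into place and tweaking its material in the Inspector, the copied HTML might look roughly like the snippet below, ready to be pasted back into the source file (the id and the values are illustrative):

<a-box id="crate"
       position="1.25 0.5 -3.2"
       rotation="0 38.5 0"
       material="color: #8B5A2B; roughness: 0.9">
</a-box>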

CHAPTER-5
RESULTS & DISCUSSION
Several things that can drive success in the development of smart tourism are innovation,
leadership, social capital, and human capital. The use of data is also an important element in the
development of smart tourism. Khan et al., in their research in Dubai, concluded that Dubai has four
important elements that make it a smart destination, namely big data, shared data, open data, and
rich data. These data can be exploited to the fullest by using 5G and AI. With open data and rich
data, government organizations can collaborate effectively with each other. Furthermore, the four
elements also make it easier to map what should or needs to be developed. However, the use of big
data and other advanced technologies comes at a large cost. Wijayanti et al., in their research at the
Smart Park of Yogyakarta, stated that one of the biggest challenges in developing smart tourism is
cost. In addition, developing smart tourism also requires considering how to meet the needs of
tourists, maintain tourist privacy, and present smart experiences to them. Smart service experiences
provide empowerment, seamless experiences, accurate service delivery, enjoyment, security, and
privacy.

Apart from big data, artificial intelligence (AI) also cannot be separated from the development of
smart tourism. Despite its many benefits, AI also brings challenges, such as public acceptance of
the new technology, a reduced need for human resources, and threats to the privacy of users.

Smart tourism should be able to promote the environment, the economy, socio-culture, and politics.
However, many of the technologies that have been developed pay little attention to their impacts on
the environment; in other words, sustainability solutions are underrepresented. Examined more
deeply, smart tourism acts as an attraction for destinations that aim at image enhancement, reaching
new market segments, and employing smart solutions for urban renewal and resource efficiency, in
the hope that these technologies will address global problems such as environmental degradation.
In their research on marine tourism, Bhaduri & Pandey argued that the use of ICT in tourism not
only increases the number of tourist visits and economic growth but also threatens environmental
sustainability because of the resulting CO2 emissions. In addition, smart tourism must implement
not only the concept of geographically equal growth but also social inclusiveness for the community
as a whole, a principle of human rights that should be granted to everyone without exception,
because technology has made the tourism experience more available and beneficial to everyone.
The use of smart devices can also result in inconvenience and a lack of interaction. Those who have
long used smart devices appear to rely on them more and are accustomed to advanced functionality
and high technological demands. As a result, when they visit a tourism destination that does not
provide smart facilities, they find it difficult to enjoy the attraction because they are used to relying
on smart devices. Husain added that, in the development process, smart tourism involves many
parties with different interests, which sometimes causes conflicts between them. In several countries,
especially developing ones, many residents and tourists are not very aware of the existence of smart
technology, which is another obstacle to developing smart tourism. Furthermore, Amir et al.
specifically detail several challenges in developing smart tourism, such as difficulties with
application systems, limited and slow internet networks, the inability to afford digital devices, scarce
and limited digital technology, adoption across all tourism businesses, and low awareness.
CHAPTER-6
CONCLUSIONS AND FUTURE SCOPE
Based on the explanation above, it can be concluded that apart from having many benefits, the
development of smart tourism also has many challenges. For instance, it requires a lot of money,
threatens environmental sustainability, and reduces the need for human resources. These
challenges must be considered by stakeholders so that the development of smart tourism can run
smoothly and be accepted by the community.
This research details some of the challenges that occur in the development of smart tourism in
various regions. In the future, further research is needed to discuss strategies to address these
challenges.
Tourism activity will recover, but this will require a technological transformation of the entire
sector. Customers now handle a large amount of information and are more demanding when it
comes to assessing the user experience from the moment they start their search until the end of
their trip, so a digital transformation focused on personalizing the traveller’s needs is crucial.

The answer to how to remain competitive and revive tourism is to make the most of technologies
such as Artificial Intelligence, Big Data, Augmented Reality, and mobile apps.

Smart tourism and recommendation systems still have a long way to go. Future work will focus on
improving the existing system in order to provide a more efficient one. Although research on
recommendation systems is growing rapidly, a major open issue is how to implement
recommendation techniques in the real world and how to handle large, dynamic input datasets. An
algorithm that works well offline on small datasets may become inefficient when tested online with
large datasets.
APPENDIX –I
APPENDIX -II
