
Volume 10, Issue X, October 2022
https://doi.org/10.22214/ijraset.2022.46953
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue X Oct 2022- Available at www.ijraset.com

Metaverse Simulation Based on VR, Blockchain, and Reinforcement Learning Model

Aryan Bagade1, Rutvij Wamanse2, Prof. Rupesh Jaiswal Chandrakant3, Prof. Girish Shrikisan Mundada4
1,2 BE Final Year, Department of Electronics and Telecommunication, SCTR's Pune Institute of Computer Technology, Pune
3 Associate Professor, Department of Electronics and Telecommunication, SCTR's Pune Institute of Computer Technology, Pune
4 Professor, Department of Electronics and Telecommunication, SCTR's Pune Institute of Computer Technology, Pune
Savitribai Phule Pune University

Abstract: With advancements in virtual reality and the boom of the Metaverse, the technology underlying modern games is progressing rapidly. Virtual reality and current artificial intelligence algorithms have produced impressive results and immersive user experiences. This proposed work aims to create a first-person VR environment with blockchain-based figures, incorporating reinforcement learning and imitation learning methods. Supervised and reinforcement learning both work on input-output mappings, but the critical difference is that reinforcement learning uses reward and penalty, which are positive and negative action indicators, respectively. Based on these indicators, the blockchain-based non-fungible agent learns which actions are favourable. For a better experience, these profitable actions are further mapped with imitation learning in a state-action format. In this environment, reinforcement learning is trained with parameters such as sensory complexity, logic complexity for solving tasks, social complexity, and physical complexity. The simulation enables users to shoot enemy agents with NFT guns. These agents have been trained, using reinforcement and imitation learning, to navigate the map and shoot the user whenever they sense the user's presence. The simulation is complete if the user kills every agent before the user's health is depleted. This proposed work highlights the potential of VR, blockchain, and reinforcement learning in making the Metaverse more interactive and impressive.
Keywords: Metaverse, Blockchain, Non-Fungible Token, Reinforcement Learning, Unity ML-Agents, Virtual Reality.

I. INTRODUCTION
Simulating reality has always been a widely recognised problem. We start with a basic experience and fine-tune it until it becomes more and more difficult to distinguish between what is virtual and what is real. That, in essence, is the idea of Virtual Reality. Virtual Reality (VR) is a giant leap forward in closely synthesising an authentic experience because it changes the dynamic from the user interacting with the system to the user interacting with the surroundings as a part of the system. While this idea has been implemented in other mediums too, the first-person implementation in VR is still a huge step forward for the Metaverse, because it completely shuts the user off from the "real" surroundings so that the user is fully immersed in the virtual experience. The crux of this paper is to present an AI-powered VR verse in which the user interacts with subjects in the environment, which in turn react to the user's presence using their underlying rules. In this proposed work, the implementation used to achieve this task is a first-person shooting game in which the player is an agent on a map and can freely roam around and shoot down the other subjects. These subjects, too, react to the user's presence by firing back or even initiating an attack once they have discovered the user. Both the user and the subjects have been supplied with limited health, which, when exhausted, leads to the end for either. While the user's actions are entirely treated as input, that is not the case with the subjects: the subjects follow pre-written rules to navigate their way on the map and react to events accordingly.

II. LITERATURE SURVEY


With the advent of the Metaverse and the latest developments in the field of Virtual Reality, several modern VR games leverage immersive technology and intelligent systems to enrich the user experience. Artificial intelligence has been on the rise, and while much research focuses on supervised and unsupervised learning, developments in reinforcement learning have been crucial to modern games and VR applications [1]. Unity is a platform popularly used for making games and virtual reality applications. This section discusses some papers that utilize VR and reinforcement learning. To motivate users to make physical movements, a Virtual Reality based exercise game was proposed that promises to make users physically active [2].

©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 67

The Unity Gaming Engine, Unity ML-Agents, and the HTC Vive head-mounted display are used to provide a new artificial-intelligence-driven game mechanism that visibly assists users in their motions. The work also discusses how Proximal Policy Optimization and Generative Adversarial Imitation Learning may be used with deep reinforcement learning to accomplish physical activities in the same immersive virtual reality game. The authors tested their mechanisms with four users, guarding a virtual butterfly with a cooperative "ghost arm" and an autonomous adversary while visibly aiding them.
According to the findings, deep learning agents may be excellent at learning gaming activities and may supply users with new insights. Learning from demonstration (LfD) is a paradigm in which people demonstrate how to do complicated tasks so autonomous entities can be trained. The performance of LfD, on the other hand, is strongly dependent on the quality of demonstrations, which is, in turn, dependent on the user interface. Virtual reality was utilized to create an intuitive interface that allows users to provide good demonstrations [3]. The deep neural network learned attention tactics efficiently with just a few minutes of interaction with the system. To examine the trade-off in performance between reinforcement learning and imitation learning, Upadhyay et al. propose a learning methodology that combines both approaches [4]. This project aims to develop an experiment in which agents in a virtual environment learn to play Pong to obtain human-level proficiency with little training time and effort. S. Göllner et al. describe the integration of reinforcement learning into a game development scenario by creating a competitive volleyball game using the Unity ML-Agents Toolkit [5]. Chaichanawirote, C. et al. focus on the notion of cooperation to improve robotics and artificial intelligence [6]. In a virtual reality (VR) game, a robot agent was used to play Roundness. Players' electrodermal activity (EDA) skin-sensor data trains the agent via reinforcement learning. Records of the player's feelings throughout the process are used to evaluate the system and compared against agents who have not been trained with emotional data. The role of ML and ESPs [13-69] is also becoming important in recent applications and control.

III. IMPLEMENTATION
This section discusses the methodology proposed in the paper in detail. The proposed work consists of five main components: a) prefab model preparation; b) adjusting the ML agent's environment properties; c) scripting ML agents; d) basic multi-agent reward schemes; e) integration of the XR Interaction Toolkit with the Oculus Quest 2.

A. Prefab Model Preparation


There are three basic steps in model preparation: prefab extraction, material creation, and texture addition. Prefab extraction is done in Blender using 3D modelling. A prefab is a component that allows fully set-up Game Objects to be saved and reused. To draw anything in Unity, information regarding both shape and surface appearance must be provided: prefabs/meshes describe shape, whereas materials define the appearance of surfaces. Material files are saved with the .mat file extension, and materials and shaders are closely linked to each other. To apply a created material file to a prefab/mesh, Unity adds a Mesh Renderer inside the prefab model; dragging the material file onto the material property then adds the .mat file and applies the material's characteristic values. '.mat' files in Unity record only which shader to use and which textures/values to assign to the shader's inputs; they do not store the textures themselves, which are saved separately as image files [7]. Textures are image files that lay over or wrap around Game Objects to give them visual effects. Unity recognises any image file in a 3D project's Assets folder as a texture, generally saved as a Sprite.

B. Adjusting ML Agent’s Environment Properties


Various complexities arise while adjusting the agent's environmental attributes:
1) Physical Complexity: The Nvidia PhysX or Havok Physics engines can be used to mimic physical occurrences in Unity
environments. This allows researchers to investigate rigid body, soft body, particle, and fluid dynamics, as well as ragdoll
physics, in various scenarios. Furthermore, the platform's expandable design allows for the usage of third-party physics engines
if required. For example, as an alternative to PhysX 2, Unity plugins support both the Bullet and MuJoCo physics engines.
2) Task Logic Complexity: C# is used by the Unity Engine to provide a comprehensive and flexible scripting system. Any gaming
or simulation can be developed and dynamically controlled with this framework. The GameObject and component system, in
addition to the scripting language, allows for managing numerous instances of agents, policies, and environments, defining
complicated hierarchical tasks or tasks that would require meta-learning to solve.


C. Basic Multi-Agent Reward Schemes (BMARSs)


Non-learnable BMaRSs ($\mathcal{F}_{\varnothing}$) are a set of joint reward functions as follows:

$$\mathcal{F}_{\varnothing} = \{\, R : \forall i \in \mathcal{K},\ \forall \pi \in \Pi,\ \nabla_{\theta_i} J_i(\pi) = 0 \,\} \quad [9]$$

Here $0$ is a zero matrix with the same dimension and form as the parameter space $\theta_i$ that defines $\pi_i$. Intuitively, $\mathcal{F}_{\varnothing}$ states that altering the policy of any agent $i \in \mathcal{K}$ will not improve its episode reward $J_i$.

Isolated BMaRSs ($\mathcal{F}_{iso}$) are a set of joint reward functions as follows:

$$\mathcal{F}_{iso} = \{\, R : R \notin \mathcal{F}_{\varnothing},\ \forall i \in \mathcal{K},\ \forall j \in \mathcal{K} \setminus \{i\},\ \forall \pi_i \in \Pi_i,\ \forall \pi_j \in \Pi_j,\ \nabla_{\theta_j} J_i(\pi) = 0 \,\} \quad [9]$$

$\mathcal{F}_{iso}$ denotes that the episode reward obtained by any agent $i \in \mathcal{K}$ has nothing to do with any policy taken by any other agent $j \in \mathcal{K} \setminus \{i\}$.

Competitive BMaRSs ($\mathcal{F}_{c}$) are defined as:

$$\mathcal{F}_{c} = \{\, R : R \notin \mathcal{F}_{\varnothing} \cup \mathcal{F}_{iso},\ \forall \pi \in \Pi,\ \sum_{i \in \mathcal{K}} J_i(\pi) = 0 \,\} \quad [9]$$

i.e., the episode rewards of all agents sum to zero, so one agent can gain only at the expense of the others.
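As an illustration only, not code from the paper's Unity project, a shooter-style joint reward that satisfies the competitive condition above (episode rewards of all agents summing to zero) can be sketched in Python:

```python
def competitive_rewards(hits):
    """Zero-sum joint reward: every hit transfers one unit of reward
    from the victim to the shooter, so rewards always sum to zero."""
    rewards = {agent: 0.0 for pair in hits for agent in pair}
    for shooter, victim in hits:
        rewards[shooter] += 1.0
        rewards[victim] -= 1.0
    return rewards

# Example episode: A shoots B and C, then C shoots A.
r = competitive_rewards([("A", "B"), ("A", "C"), ("C", "A")])
# The zero-sum property of a competitive BMaRS holds for any episode.
assert abs(sum(r.values())) < 1e-9
```

Because the rewards cancel pairwise, no policy change can raise the population's total return; it can only redistribute return among agents, which is exactly what makes the scheme competitive.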

Fig. 1: Training the Unity agent to traverse the map in a specified area using reinforcement and imitation learning. The blue area marks the bounds within which the agent receives a positive reward, whereas the unmarked regions are where the agent receives a negative reward.
D. Scripting ML Agents
It can be difficult to construct responsive and intelligent non-playable game characters and virtual players. With Unity's Machine Learning Agents (ML-Agents), it is now possible to have intelligent agents "learn" using a hybrid technique that combines reinforcement learning and imitation learning. From the Unity navigation menu, one can construct agents and modify their parameters as needed. The agent's height and radius have been modified accordingly. Maximum slope is the steepest surface angle an agent can follow along a path, while step height refers to the vertical distance that an agent is capable of traversing [10]. With the aid of a reinforcement learning algorithm, a Unity agent receives an appropriate reward signal when it encounters a barrier or impenetrable object. Imitation learning then helps the agent construct a set of rule pairs that may be utilised as state-action trajectories, with each rule pair indicating the action that should be performed in the visited state.

Algorithm 1: Reinforcement Learning Algorithm


1 Start
2 Specify boundaries on the map
3 The agent traverses the map
4 if the agent crosses the boundary then
5     return reward -1
6 else
7     return reward +1
8 if the player is within radius and the agent runs towards the player then
9     return reward +2
10 End
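Algorithm 1's reward logic can be sketched as a simple step function. The function and parameter names here are illustrative, not taken from the actual C# agent scripts, and the "runs towards the player" condition is simplified to a distance check:

```python
def step_reward(agent_pos, player_pos, bounds, sense_radius=40.0):
    """Reward scheme of Algorithm 1: -1 for leaving the marked bounds,
    +2 when the player is sensed nearby, +1 otherwise."""
    (xmin, xmax), (zmin, zmax) = bounds
    x, z = agent_pos
    if not (xmin <= x <= xmax and zmin <= z <= zmax):
        return -1.0                      # agent crossed the boundary
    px, pz = player_pos
    dist = ((x - px) ** 2 + (z - pz) ** 2) ** 0.5
    if dist <= sense_radius:
        return 2.0                       # player in radius: chasing is rewarded more
    return 1.0                           # inside bounds, no player nearby
```

A reward of this shape encourages the agent to stay inside the marked training area while making pursuit of a nearby player strictly more valuable than idle wandering.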


After the Unity agents are trained, additional features are introduced, such as patrolling along a trained path, pursuing the player when they are within line of sight, and attacking the player when certain conditions are satisfied.
A NavMesh agent is utilised to implement the patrolling operation across the map. As shown in Fig. 2, the NavMesh Agent is a component that assists with character avoidance, movement around the area towards a common objective, and any other scenario requiring spatial reasoning or pathfinding [11]. After the level's NavMesh has been built, a character capable of moving freely across the world can be designed. Pathfinding is performed with the aid of the A* algorithm. Updating the path with straight-line segments and minimising corner points while connecting the start point to the objective is an effective A* optimisation and the essence of its effectiveness [12].
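The A* pathfinding described above can be sketched on a plain 2-D grid. This is an illustrative stand-in for Unity's navigation mesh, not the engine's own implementation:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D grid of 0 (free) / 1 (blocked) cells, using the
    Manhattan distance as an admissible heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nx, ny)), g + 1, (nx, ny), path + [(nx, ny)]))
    return None  # goal unreachable
```

On a navigation mesh the nodes are mesh polygons rather than grid cells, but the search and heuristic structure are the same; the corner-minimising step the text mentions is then applied to the returned path.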

Fig. 2: Pathfinding with a waypoint graph and a navigation mesh [13]. The navigation-mesh movement is the optimised one, providing better results.

Algorithm 2: Imitation Learning Algorithm


1 Start
2 if state = "chase" and player in sight range then
3     move towards the player
4 if state = "chase" and player not in sight range then
5     switch state to "patrolling"
6 if state = "patrolling" then
7     perform the patrolling action
8 End
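Algorithm 2 amounts to a small finite-state machine. A Python sketch with illustrative state and action names (the actual logic lives in the C# agent scripts) could be:

```python
def agent_step(state, player_in_sight):
    """FSM of Algorithm 2: chase while the player is in sight range,
    otherwise fall back to (or stay in) the patrolling state."""
    if player_in_sight:
        return ("chase", "move_towards_player")   # lines 2-3
    if state == "chase":
        return ("patrolling", "patrol")           # lines 4-5: lost sight of player
    return ("patrolling", "patrol")               # lines 6-7: keep patrolling
```

Returning both the next state and the chosen action keeps the demonstration data in the state-action pair format that the imitation-learning trajectories described earlier require.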

Added to the capabilities of the Unity agent is the ability to track players. This functionality requires the agent to compute the distance between its transform and the player's transform. If the player is within the given range, the line-of-sight script assumes control. This script determines whether or not the agent can see the player. If the player cannot be seen, the agent simply continues to patrol the area. If the player is visible to the agent, the agent races towards the player until it is close enough to attack. As soon as the player enters the agent's firing range, the agent begins firing at them. When utilising three-dimensional degrees of freedom (3D DOF), line of sight requires that the player be inside a 40-meter radius with a 70-degree angle of sight. A ray-cast hit mechanism is used to detect whether or not the player is within the sight radius. While the attack-player function is activated, the agent shoots the player if they are inside the attack range.
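The 40-meter / 70-degree sight check can be sketched with plain vector math. The ray-cast occlusion test that Unity performs is omitted here, and the 70 degrees is assumed to be the maximum deviation from the agent's forward direction:

```python
import math

def in_line_of_sight(agent_pos, agent_forward, player_pos,
                     sight_range=40.0, fov_deg=70.0):
    """True if the player is within sight_range meters of the agent and
    within fov_deg degrees of the agent's forward direction."""
    dx, dz = player_pos[0] - agent_pos[0], player_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dz)
    if dist == 0:
        return True                      # player standing on the agent
    if dist > sight_range:
        return False                     # beyond the 40 m sight radius
    fx, fz = agent_forward
    cos_angle = (dx * fx + dz * fz) / (dist * math.hypot(fx, fz))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= fov_deg              # inside the viewing cone
```

In the actual game a ray cast from the agent to the player would additionally reject positions hidden behind obstacles; this sketch covers only the range and angle tests.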

E. Integration of the XR Interaction Toolkit with the Oculus Quest 2


The Unity registry makes the XR Interaction Toolkit available to virtual reality developers as a package. This package permits a link between the Unity application and the Oculus Quest 2 hardware. For the developer to utilise this interaction, developer mode must be enabled on the headset [14]. After activating developer mode, the developer is able to link the Unity project to the Oculus Quest 2. AR and VR developers require a framework, which the XR Interaction Toolkit provides. This toolkit contains all the fundamental interactor and interactable components, as well as the interaction manager required for connecting the Oculus controllers. The XR Interaction Toolkit enables the management of numerous events, including object interactions, object placements, user-interface interactions, and locomotion. In the proposed work, locomotion is utilised for the player's mobility [15].


A continuous, action-based locomotion component has been added to the player's agent, so the player's actions on the Oculus controllers can be used to drive the same activities in-game. "Grab interactable" refers to game objects, such as firearms and other equipment, with which the user can interact during actual gameplay. To interface with the UI canvas system, a ray interactor and an interactable canvas must both be present where the interactive buttons are placed. Play and Quit buttons have been added to the UI, and the appropriate Oculus controller is necessary to interact with the user interface; pressing a button produces the anticipated result.

IV. RESULTS AND DISCUSSIONS


Using the NavMesh Agent component attached to them, Unity agents in the virtual reality first-person shooter game are able to navigate the entire game map. The hostile agents are capable of patrolling, pursuing, and attacking. Before the agents launch an assault, the player needs to eliminate all opponents while remaining out of the agents' line of sight. Because the agents' attack damage is pre-set to 10, the game ends and the player is judged to have lost if they are shot ten times by hostile agents. The player must eliminate all active enemy agents in order to win the game. The entire map is accessible to both the player and the enemy agents; the player, however, must use the Oculus controllers to traverse it. With the left Oculus controller, the player can navigate to different locations around the whole game environment, while the right Oculus controller is used to interact with the gun, such as by holding the rifle and firing bullets. The XR Interaction Toolkit is utilised for managing all Oculus events, including UI events, selection events, and action events for the controller buttons. This game is enjoyable for players because it contains numerous elements and functionalities comparable to those found in the computer game Counter-Strike.
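The win/lose bookkeeping described above can be sketched as follows, assuming (as the text implies) 100 starting health so that ten 10-damage hits end the game; the names are illustrative, not taken from the game's scripts:

```python
def game_outcome(player_hits_taken, enemies_alive,
                 player_health=100, agent_damage=10):
    """Returns 'lost' once the player has absorbed enough hits to
    exhaust their health, 'won' once every enemy agent is eliminated,
    and 'ongoing' otherwise."""
    if player_hits_taken * agent_damage >= player_health:
        return "lost"    # shot ten times by 10-damage agents
    if enemies_alive == 0:
        return "won"     # all active enemy agents eliminated
    return "ongoing"
```

Checking the loss condition before the win condition ensures that a simultaneous final hit and final kill still counts against the player, matching the rule that the game ends as soon as health is depleted.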
The basic prefab of the game map is displayed in Fig. 5. The building prefabs and road structures are added in such a way that they create the view of an industrial area where the player and enemy agents can traverse and roam around.

Fig. 3: Enemy agents used in the game and their configurations. Five agents trained using reinforcement learning have been utilized in the game.
Fig. 3 shows the enemy agent, with its prefab mesh and textures rendered in the Unity scene. The NavMesh Agent component added to the enemy agent is also displayed in this image.

Fig. 4: Enemy agent attacking when the player is within its radius. The enemies start shooting the player; their shot power is set to 15.


Imitation learning is also used to implement the attacking state; the enemy agent attacking the player can be seen in Fig. 4. Unity ML agents are trained with reinforcement learning using the basic multi-agent reward schemes. Some non-learnable ML agents are present in the scene; their rewards are displayed in Fig. 5. The different colours indicate the parameters required at the training stage of an ML agent: the blue line indicates logic complexity for solving tasks, the red line sensory complexity, the green line physical complexity, and the purple line social complexity. All the lines stay around 0.4, which indicates that the non-learnable agents are not able to distinguish between other agents, obstacles, and the player; hence they are called non-learnable agents.

Fig 5: ML Agents Non-Learnable BMaRSs

When all the agents are deployed in the scene together, they gradually start helping each other for a better survival rate. Fig. 6 shows that the agents initially were not social, but they gradually learn together and after some time show a very competitive nature, which helps them survive for a longer game. The graph again shows the various parameters used for training the ML agents; the performance of the agents increases from 0.3 to 0.7.

Fig. 6: Population performance when all ML agents are added to the scene

V. CONCLUSIONS
This study aims to create a first-person shooting virtual reality game in which the player is an agent on a map who can freely travel around and shoot down other subjects. Users can shoot enemy agents using guns in this game. These agents, too, respond to the presence of the user by shooting back or even initiating an attack once they have detected the user. The agents have been taught, using reinforcement and imitation learning, to travel the map and shoot the user whenever they detect the user's presence. The reinforcement learning parameters (sensory complexity, logic complexity for solving tasks, social complexity, and physical complexity) make the agents smart, which helps them traverse the entire map and track the player. Both the user and the subjects have been given a certain amount of health, which, when depleted, results in death for either of them.
The non-learnable ML agents are not able to distinguish between other agents, obstacles, and the player, as their performance lies around 0.4, whereas the competitive agents can make this distinction, with performance lying around 0.8 to 1. When all the agents are deployed on the map at the same time, they show a gradual shift from non-learnable behaviour to competitive behaviour, with the performance value going from 0.3 to 0.7.
This work can be further extended by adding a variety of guns and maps for the user to choose from. The ML agents would then be trained on the new maps with updated parameters.


VI. ACKNOWLEDGMENT
We would like to thank all the members who have helped in completing this research work and paper. We express our heartiest gratitude to Dr. Rupesh Jaiswal for providing all the guidance needed to complete this research work. Finally, we would like to thank all our friends, family members, and everyone else who has directly or indirectly contributed to the successful completion of this research work.

REFERENCES
[1] S. M. Metev and V. P. Veiko, Laser Assisted Microtechnology, 2nd ed., R. M. Osgood, Jr., Ed. Berlin, Germany: Springer-Verlag, 1998.
[2] J. Breckling, Ed., The Analysis of Directional Time Series: Applications to Wind Speed and Direction, ser. Lecture Notes in Statistics. Berlin, Germany:
Springer, 1989, vol. 61.
[3] S. Zhang, C. Zhu, J. K. O. Sin, and P. K. T. Mok, “A novel ultrathin elevated channel low-temperature poly-Si TFT,” IEEE Electron Device Lett., vol. 20, pp.
569–571, Nov. 1999.
[4] M. Wegmuller, J. P. von der Weid, P. Oberson, and N. Gisin, “High resolution fiber distributed measurements with coherent OFDR,” in Proc. ECOC’00, 2000,
paper 11.3.4, p. 109.
[5] R. E. Sorace, V. S. Reinhardt, and S. A. Vaughn, “High-speed digital-to-RF converter,” U.S. Patent 5 668 842, Sept. 16, 1997.
[6] (2002) The IEEE website. [Online]. Available: http://www.ieee.org/
[7] M. Shell. (2002) IEEEtran homepage on CTAN. [Online]. Available: http://www.ctan.org/tex-archive/macros/latex/contrib/supported/IEEEtran/
[8] FLEXChip Signal Processor (MC68175/D), Motorola, 1996.
[9] “PDCA12-70 data sheet,” Opto Speed SA, Mezzovico, Switzerland.
[10] A. Karnik, “Performance of TCP congestion control with rate feedback: TCP/ABR and rate adaptive TCP/IP,” M. Eng. thesis, Indian Institute of Science,
Bangalore, India, Jan. 1999.
[11] J. Padhye, V. Firoiu, and D. Towsley, “A stochastic model of TCP Reno congestion avoidance and control,” Univ. of Massachusetts, Amherst, MA, CMPSCI
Tech. Rep. 99-02, 1999.
[12] Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specification, IEEE Std. 802.11, 1997.
[13] Jaiswal R. C. and Apoorva Ushire, "Real Time Water Monitoring System Using NodeMCU ESMP8266", Journal of Emerging Technologies and Innovative Research (JETIR), ISSN 2349-5162, vol. 9, issue 9, pp. c1-c8, September 2022.
[14] Jaiswal R. C. and Firoz Saherawala, "Smart Glasses", JETIR, ISSN 2349-5162, vol. 9, issue 8, pp. f393-f401, August 2022.
[15] Jaiswal R. C. and Asawari Walkade, "Denial of Service Detection and Mitigation", JETIR, ISSN 2349-5162, vol. 9, issue 5, pp. f108-f116, May 2022.
[16] Jaiswal R. C. and Fiza Shaikh, "Augmented Reality based Car Manual System", JETIR, ISSN 2349-5162, vol. 9, issue 5, pp. c326-c332, May 2022.
[17] Jaiswal R. C. and Tejveer Pratap, "Multiparametric Monitoring of Vital Signs in Clinical and Home Settings for Patients", JETIR, ISSN 2349-5162, vol. 9, issue 5, pp. a701-a705, May 2022.
[18] Jaiswal R. C. and Sahil Nahar, "Recognition and Selection of Learning Styles to Personalize Courses for Students", JETIR, ISSN 2349-5162, vol. 9, issue 2, pp. b235-b252, February 2022.
[19] Jaiswal R. C. and Rushikesh Karwankar, "Demand Forecasting for Inventory Optimization", JETIR, ISSN 2349-5162, vol. 8, issue 12, pp. 121-131, January 2022.
[20] Jaiswal R. C. and P. Khore, "Exo-skeleton Arm", JETIR, ISSN 2349-5162, vol. 8, issue 12, pp. 731-734, December 2021.
[21] Jaiswal R. C. and Shreyas Nazare, "IoT Based Home Automation System", JETIR, ISSN 2349-5162, vol. 8, issue 11, pp. 151-153, November 2021.
[22] Jaiswal R. C. and Prajwal Pitlehra, "Credit Analysis Using K-Nearest Neighbours' Model", JETIR, ISSN 2349-5162, vol. 8, issue 5, pp. 504-511, May 2021.
[23] Jaiswal R. C. and Rohit Barve, "Energy Harvesting System Using Dynamo", JETIR, ISSN 2349-5162, vol. 8, issue 5, pp. 278-280, May 2021.
[24] Jaiswal R. C. and Sharvari Doifode, "Virtual Assistant", JETIR, ISSN 2349-5162, vol. 7, issue 10, pp. 3527-3532, October 2020.
[25] Jaiswal R. C. and Akshat Kaushik, "Automated Attendance Monitoring System Using Discriminative Local Binary Histograms and PostgreSQL", JETIR, ISSN 2349-5162, vol. 7, issue 11, pp. 80-86, November 2020.

