
Plagiarism Checker X Originality Report

Similarity Found: 8%

Date: Thursday, February 13, 2020


Statistics: 344 words Plagiarized / 4415 Total words
Remarks: Low Plagiarism Detected - Your Document needs Optional Improvement.
-------------------------------------------------------------------------------------------

Mechatronics Robot Navigation using Machine Learning Through Prolog Programming

Shubhangee K. Varma, Arunkumar B. Patki, P. W. Wani

Abstract: This paper builds a basic decision-making system based on a neural network to navigate a robot in an unknown environment. Using the neural network model, the robot can move out of specific mazes successfully by continuously adjusting its direction and speed.

A BP neural network, which includes three input nodes and nine output nodes, is designed for the navigation system. Information about the surrounding environment is returned by six ultrasonic sensors on the front and lateral sides of the robot. After a large number of training iterations, the robot learns the navigation knowledge from the samples and moves out of the mazes autonomously.

The performance of the robot is validated with simulation results and two physical experiments. The results show that the robot can navigate autonomously in unknown environments.

Keywords: Intelligent, matching information, image, Python, Prolog, AI.

INTRODUCTION

Autonomous robot navigation is an interesting research field in which a robot is required to reach a predefined goal location while avoiding obstacles in an unknown environment. Navigation technology has been widely used in space exploration, service robots, and military robots.

For instance, Mars exploration has applied this technology: a robot is required to find a suitable way to explore the unknown terrain of Mars and collect valuable data for humans. In addition, autonomous robots are expected to work in some military projects; for instance, robots can replace soldiers to execute dangerous tasks in war. Consequently, autonomous robot navigation will certainly become an essential tool in the future. Although autonomous robot navigation is already widely used, there are still many challenges in designing a good autonomous navigation system. In general, there are three main challenges. The first challenge is the unknown and dynamic environment.

Due to the limitations of its sensors, it is hard for a robot to know the global environment during its movement. In other words, the robot usually does not have a global map. As a result, it chooses its actions according to the sensor data of its local environment. The second challenge is the effect of sensor noise.

In general, sensors can easily return inaccurate measurements in a complex environment, which may cause the robot to select a wrong action. The last challenge is the limited computational capacity of the robot. Under such conditions, an efficient navigation algorithm is essential for navigating the robot in a complex environment.

Machine learning can be a good solution to the above challenges [1]. Through machine learning, a robot can learn from human knowledge and from the experience of successful and failed cases, and continuously improve its navigation skills. Eventually, the robot can navigate successfully in a complex and unknown environment.

LITERATURE SURVEY

Vision-based human activity recognition (HAR) finds application in many fields, such as video surveillance, robot navigation, telecare, and ambient intelligence. Most recent research in automated HAR based on skeleton data uses depth devices such as Kinect to obtain 3D skeleton information directly from the camera.

Although these approaches achieve high accuracy, they are strictly device dependent and cannot be used for videos other than those from specific cameras. The work in [1] focuses on the use of only 2D skeletal data, extracted from videos obtained through any standard camera, for activity recognition.

Appearance and motion features were extracted using the 2D positions of human skeletal joints obtained through the OpenPose library. The approach was trained and tested on publicly available datasets. Supervised machine learning was applied to recognize four activity classes: sit, stand, walk, and fall. The performance of five classifiers, namely K-nearest neighbors (KNN), support vector machine, Naive Bayes, linear discriminant, and feed-forward backpropagation neural network, was compared to find the best classifier for the proposed technique. All methods performed well, with the best results obtained by the KNN classifier [1].
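As a rough illustration of how such a classifier comparison can be set up, the sketch below evaluates the five classifier families named above with cross-validation; the feature matrix and label coding are placeholders, and scikit-learn is assumed, so this is not the original pipeline of [1].

# Minimal sketch: comparing the five classifier families mentioned in [1]
# on hypothetical 2D-skeleton features. X has one row of joint-based
# features per video clip; y holds the labels sit/stand/walk/fall.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 36))          # placeholder appearance/motion features
y = rng.integers(0, 4, size=200)        # 0=sit, 1=stand, 2=walk, 3=fall

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
    "Linear discriminant": LinearDiscriminantAnalysis(),
    "Feed-forward BP network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f}")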

Mobile robotic platforms navigating in unstructured and dynamic environments benefit greatly from unconstrained omnidirectional motion. Ground robots with spherical wheels (ball-driven robots) can achieve agile omnidirectional mobility over a wide range of ground terrains. Slip occurring at the drive and ground contact surfaces reduces actuation performance, especially during fast vehicle acceleration and navigation on graded terrains.

In [2], the design of a new magnetically coupled ball drive that uses controllable magnetic forces to increase the transmittable actuation torque and improve traction performance is described. The design uses an internal support structure to magnetically couple the spherical wheel to the chassis, enabling it to act as an omnidirectional axle.

Using a model of the magnetically coupled ball drive, the slip/no-slip operational window of the new design is evaluated. A support vector classification machine is used to train on and classify the slip/no-slip regions and to identify the relative importance scores of the feature parameters, ordered by sensitivity. The classification gave insight into appropriate ranges of the critical parameters that can improve traction performance.
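A minimal sketch, assuming scikit-learn and purely synthetic operating points, of how a support vector classifier could separate slip from no-slip regions and rank the feature parameters by importance (permutation importance is used here as a stand-in for the sensitivity scores of [2]):

# Sketch: support vector classification of slip / no-slip operating points.
# Columns of X are illustrative design parameters (e.g. coupling force,
# acceleration, grade); y is 1 for slip, 0 for no slip.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                  # placeholder operating points
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # synthetic slip boundary

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)

# Rank features by how much shuffling each one degrades accuracy,
# analogous to the importance scores discussed in [2].
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
print(result.importances_mean)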

Based on the classification of the design space, several design and operational points were obtained to guide the design process further. Magnetostatic simulations are then used to design space-efficient magnetic arrays capable of generating coupling forces in the desired range. A prototype of the new ball drive design is developed, and the claim that the magnetically coupled ball drive can improve slip performance is experimentally tested.

The results demonstrate that it is possible to control the traction forces at both the drive and ground surfaces using the magnetic coupling force and to substantially improve the slip performance of the ball drive with the new design [2]. Surface electromyography (sEMG) signals play a significant role in hand function rehabilitation training.

In [3], an IoT-enabled stroke rehabilitation system was introduced, based on a smart wearable armband, machine learning algorithms, and a 3D-printed dexterous robot hand. User comfort is one of the key issues that should be addressed for wearable devices. The smart wearable armband was developed by integrating a low-power, small-sized IoT sensing device with textile electrodes, which can measure, pre-process, and wirelessly transmit bio-potential signals.

By evenly distributing the surface electrodes over the user's forearm, the drawback of poor classification accuracy can be alleviated. A new method was put forward to find the optimal feature set. Machine learning algorithms were used to analyze and discriminate the features of different hand movements, and their performance was assessed by classification complexity estimating algorithms and principal component analysis.
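A hedged sketch of this kind of pipeline: simple time-domain features over sEMG windows, a principal component projection, and a classifier. The channel count, window length and feature choices below are assumptions, not the values used in [3].

# Sketch: windowed time-domain features from multi-channel sEMG, followed
# by PCA and a classifier. Signal shapes and the feature set are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_features(emg, win=200):
    """emg: (channels, samples). Returns one feature vector per window:
    mean absolute value and waveform length for each channel."""
    feats = []
    for start in range(0, emg.shape[1] - win + 1, win):
        w = emg[:, start:start + win]
        mav = np.mean(np.abs(w), axis=1)
        wl = np.sum(np.abs(np.diff(w, axis=1)), axis=1)
        feats.append(np.concatenate([mav, wl]))
    return np.array(feats)

rng = np.random.default_rng(2)
emg = rng.normal(size=(8, 20000))         # 8 electrodes, placeholder signal
X = window_features(emg)
y = rng.integers(0, 9, size=len(X))       # nine gesture classes, as in [3]

X_reduced = PCA(n_components=5).fit_transform(X)   # feature-set reduction
clf = LinearDiscriminantAnalysis().fit(X_reduced, y)
print(clf.score(X_reduced, y))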

According to the verification results, each of the nine gestures can be successfully recognized with an average accuracy of up to 96.20%. In addition, a 3D-printed five-finger robot hand was implemented for hand rehabilitation training. Correspondingly, the user's hand movement intentions were extracted and converted into a series of commands used to drive the motors assembled inside the dexterous robot hand.

As a result, the dexterous robot hand can mimic the user's gestures in real time, which shows that the proposed system can be used as a training tool to facilitate the rehabilitation process for patients after stroke [3]. Smart devices using interconnected sensors for feedback and control are being adopted rapidly.

Many useful applications for these devices are in markets that demand cost-conscious solutions. Conventional machine-learning-based control systems often rely on diverse measurements from multiple sensors to achieve their performance targets. An alternative method is presented in [4] that uses the time-series output generated by a single sensor.

Using domain expert knowledge, the time-series output is discretized into finite intervals that correspond to the physical events occurring in the system. Statistical measures are taken over these intervals to serve as the features of the machine learning system. Additional features that decouple key physical measurements are identified, improving the performance of the system.
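The interval idea of [4] can be sketched as follows; the interval boundaries and the particular statistics (mean, standard deviation, range) are illustrative assumptions:

# Sketch: discretize a single-sensor time series into event intervals using
# domain knowledge, then take statistical measures over each interval as
# machine-learning features.
import numpy as np

def interval_features(signal, boundaries):
    """signal: 1-D sensor output; boundaries: indices where physical
    events split the series. Returns per-interval statistics."""
    feats = []
    edges = [0] + list(boundaries) + [len(signal)]
    for a, b in zip(edges[:-1], edges[1:]):
        seg = signal[a:b]
        feats.extend([seg.mean(), seg.std(), seg.max() - seg.min()])
    return np.array(feats)

rng = np.random.default_rng(3)
signal = rng.normal(size=1000)
boundaries = [250, 600, 800]   # hypothetical event boundaries from domain expertise
print(interval_features(signal, boundaries))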

This novel approach requires a considerably more modest data set and does not compromise performance. The resulting development effort is significantly more cost-effective than conventional sensor-fusion systems, not only because of the reduced sensor count but also because of a substantially simplified and more robust algorithm development and testing stage. Results are given for the case study of a media-type classification system within a printing system, which was deployed to the field as a commercial product [4]. In addition to controlling the vehicle, drivers often perform secondary tasks that impede driving. Reducing driver distraction is an important challenge for the safety of intelligent transportation systems.

In [5], a framework for the detection and evaluation of driver distraction while performing secondary tasks is described, and suitable hardware and a software environment are proposed and studied. The framework includes a model of normal driving, a subsystem for measuring the errors caused by the secondary tasks, and a module for total distraction evaluation.

A machine learning algorithm characterizes driver performance in lane keeping and speed maintenance on a particular road segment. To recognize the errors, a method is proposed that compares normal driving parameters with those obtained while conducting a secondary task. To evaluate distraction, an effective fuzzy logic algorithm is used.

To verify the proposed approach, a case study with driver-in-the-loop experiments was carried out, in which participants performed a secondary task, namely chatting on a mobile phone. The results presented in that work confirm its ability to detect, and to precisely quantify, the degree of abnormal driver performance [5].
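One hedged way to picture such a fuzzy distraction module is a toy inference over normalized lane-keeping and speed-maintenance errors; the membership functions and the single rule below are purely illustrative and not taken from [5]:

# Sketch: a toy fuzzy-logic distraction estimate from two normalized error
# signals (lane-keeping error and speed-maintenance error), each in 0..1.
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def distraction(lane_err, speed_err):
    # Fuzzify: degree to which each error is "large" (illustrative shapes).
    lane_large = tri(lane_err, 0.3, 1.0, 1.7)
    speed_large = tri(speed_err, 0.3, 1.0, 1.7)
    # Single rule: IF lane error is large OR speed error is large
    # THEN distraction is high (max used as the OR operator).
    return max(lane_large, speed_large)

print(distraction(0.8, 0.2))   # moderately distracted driver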

The condition of a machine can be detected automatically by creating and classifying features that summarize the characteristics of measured signals. At present, experts in their respective fields devise these features based on their knowledge. Hence, the performance and usefulness depend on the expert's knowledge of the underlying physics or statistics.

Furthermore, if new and additional conditions are to be made detectable, experts need to implement new feature extraction methods. To mitigate the drawbacks of feature engineering, a method from the sub-field of feature learning, namely deep learning, and more specifically convolutional neural networks, is investigated in [6].

The objective of that article is to examine whether and how deep learning can be applied to infrared thermal video to determine the condition of the machine automatically. By applying this technique to infrared thermal data in two use cases, namely machine fault detection and oil level prediction, the authors show that the proposed system can detect numerous conditions in rotating machinery accurately without requiring any detailed knowledge of the underlying physics, and therefore has the capacity to significantly improve condition monitoring using complex sensor data.

Furthermore, they show that by using the trained neural networks, relevant regions in the infrared thermal images can be identified that are related to specific conditions, which can potentially lead to new physical insights [6].
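A minimal sketch, with PyTorch as an assumed framework, of the kind of convolutional network that could classify machine condition from single-channel infrared thermal frames; the layer sizes, input resolution and number of condition classes are illustrative, not those of [6]:

# Sketch: small CNN classifying machine condition from single-channel
# infrared thermal frames. Input size and number of conditions are
# illustrative assumptions.
import torch
import torch.nn as nn

class ThermalConditionNet(nn.Module):
    def __init__(self, num_conditions=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_conditions)

    def forward(self, x):              # x: (batch, 1, 64, 64) thermal frame
        x = self.features(x)
        return self.classifier(x.flatten(1))

net = ThermalConditionNet()
frames = torch.randn(8, 1, 64, 64)     # placeholder thermal frames
print(net(frames).shape)               # (8, num_conditions) logits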

Parallel robots are known for their strong load-bearing capacity and high kinematic accuracy, but they are relatively hard to design and to teach. The work in [7] addresses this difficulty by presenting an intelligent computer-aided instruction (ICAI) model, a method for parallel robot instruction. The paper analyzes, with respect to their incoming educational profile, the cognitive processes of Mechatronics students while acquiring knowledge of parallel robots; it also considers the educational benefits of various methods of teaching this subject.

The ICAI model for teaching parallel robots is grounded in machine learning, using data fusion techniques based on an artificial neural network (ANN). Two terms of using the ICAI model have validated the method's effectiveness in teaching parallel robots, providing a standard study framework and improving the students' learning process [7].

Existing diagnosis of autism spectrum disorder (ASD) relies heavily on the clinician's assessment of the patient's behavior, which is both time-consuming and labor-intensive. In order to develop a fast diagnostic instrument with high accuracy, machine learning (ML) approaches have been proposed to investigate the possibility of identifying ASD with a limited number of features extracted from behavioral assessment, neuroimaging, and kinematic data.

Although restricted and repetitive behavior is one of the cardinal symptoms of ASD, no study had investigated whether restricted kinematic features (RKF) could be used to identify ASD. The study in [8] aimed to address this question. Twenty children with high-functioning autism and twenty-three children with typical development were recruited.

They were instructed to perform a motor task that required the execution of highly varied movements. Entropy and the 95% range of the movement amplitude, velocity, and acceleration were computed as indices of RKF. Five ML classifiers were trained and tested, including support vector machine, linear discriminant analysis, decision tree, random forest, and K-nearest neighbor.

The results showed that the KNN algorithm (k = 1) yielded the highest classification accuracy with four kinematic features. The study demonstrated that RKF could help identify ASD robustly. It is inferred that the application of ML to genetic, neuroimaging, psychological, and kinematic features may pose a significant challenge to the current diagnostic criteria of ASD, and might possibly lead to an automated and objective diagnosis of ASD [8].
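As a hedged illustration of pairing a small set of kinematic indices with a 1-nearest-neighbour classifier, the sketch below computes a histogram entropy and a 95% amplitude range per trial and fits KNN with k = 1; the data, bin count and group coding are placeholders rather than the protocol of [8]:

# Sketch: simple restricted-kinematic-feature style indices (entropy and
# 95% range of movement amplitude) per trial, classified with 1-NN.
import numpy as np
from scipy.stats import entropy
from sklearn.neighbors import KNeighborsClassifier

def kinematic_indices(amplitude):
    """amplitude: 1-D movement amplitude trace for one trial."""
    hist, _ = np.histogram(amplitude, bins=20, density=True)
    h = entropy(hist + 1e-12)                       # histogram entropy
    r95 = np.percentile(amplitude, 97.5) - np.percentile(amplitude, 2.5)
    return [h, r95]

rng = np.random.default_rng(4)
trials = [rng.normal(scale=1.0 + 0.5 * (i % 2), size=500) for i in range(43)]
X = np.array([kinematic_indices(t) for t in trials])
y = np.array([i % 2 for i in range(43)])            # 0 = typical, 1 = ASD (placeholder)

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.score(X, y))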
Robots are playing an increasingly important role in modern surgery.

However, conventional human-computer interaction methods, such as joystick control and voice control, have several drawbacks, and medical staff have to be trained specifically to operate the robot. The authors of [9] propose a human-computer interaction model based on eye movement, with which medical staff can conveniently use their eye movements to control the robot.

Their algorithm requires only an RGB camera to perform the tasks, without requiring expensive eye-tracking devices. Two types of eye control modes are designed. The first is pick-and-place movement, with which the user employs eye gaze to specify where the robotic arm is required to move. The second is user-directed movement, with which the user can use eye gaze to choose the direction in which the robot should move.

The experimental results show the feasibility and convenience of these two modes of movement [9]. The work in [10] presents a robust vision-based mobile control framework for wheeled mobile robots (WMRs). In particular, that paper addresses keeping the visual features within the field of view of the camera, which is an important robustness issue in visual servoing.

First, the classical approach of image-based visual servoing (IBVS) for fixed-base manipulators is extended to WMRs, and a control law with Lyapunov stability is derived. Second, in order to guarantee the visibility of the visual features, an innovative controller with machine learning based on Q-learning is proposed, which can learn its behavior policy and autonomously improve its performance.
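The Q-learning ingredient can be sketched in its standard tabular form: discrete states describing where the visual features lie in the image, a few corrective actions, and the usual temporal-difference update. The state coding and reward below are assumptions for illustration; only the update rule itself is standard.

# Sketch: tabular Q-learning update of the kind used to keep visual
# features inside the camera's field of view. States and rewards are
# illustrative; the update rule is the standard one.
import numpy as np

n_states, n_actions = 9, 3    # e.g. coarse feature-position cells x turn left/straight/right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_action(state, rng):
    if rng.random() < epsilon:                     # explore
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))                # exploit learned policy

def update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

rng = np.random.default_rng(5)
state = 4
for _ in range(1000):                              # placeholder interaction loop
    action = choose_action(state, rng)
    next_state = int(rng.integers(n_states))       # stand-in for the servoing dynamics
    reward = 1.0 if next_state == 4 else -0.1      # reward keeping features centred
    update(state, action, reward, next_state)
    state = next_state
print(Q)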

Third, a hybrid controller for robust mobile manipulation is developed to integrate the IBVS controller and the Q-learning controller through a rule-based arbitrator. This is believed to be the first paper that integrates reinforcement learning (Q-learning) with visual servoing to achieve robust operation. Experiments were carried out to validate the developed approaches.

The experimental results show that the new hybrid controller has the capabilities of self-learning and fast response, and that it provides a balanced performance with respect to robustness and accuracy [10]. The complexity of integrated circuits is increasing while the reliability of the components is decreasing as a result of small gates and junction transistors.

One of the effects of technology scaling is greater sensitivity to transient and permanent faults. Therefore, fault-tolerant design plays a vital role in critical applications wherever a quick repair is not possible. For reliable and practical operation of a system, the detection of transient faults is important, and it is very difficult to detect these faults offline.

Fault-tolerant circuits can detect faults and tolerate the detected faults. Accordingly, a cost-effective fault detection design for a full adder is proposed in [11], which can detect faults together with their actual location and can also tolerate them. The proposed designs can detect faults in single and multiple nets.

Inductive Logic Programming (ILP) is the logic programming discipline used in that work to design the system. ILP is a relatively new discipline that investigates the inductive construction of first-order clausal theories from examples and background knowledge. First, various problem specifications of ILP are formalized in semantic settings for ILP, yielding a "model-theory" for ILP. Second, a generic ILP algorithm is presented.

Third, the practical inference rules and corresponding operators used in ILP are introduced, leading to a "proof-theory" for ILP. Fourth, since inductive logical inference does not produce statements that are guaranteed to follow from what is given, inductive inferences require an alternative form of justification. This can take the form of either probabilistic support or logical constraints on the hypothesis language [11].

Autonomous Navigation using Neural Networks

Autonomous Navigation Problem. Using the obstacle information from the sonar sensors, the robot is required to move out of a specified spiral maze from the home position as quickly as possible, as shown in Figure 1. Furthermore, it is assumed that the robot does not have a global map of the maze. / Fig. 1.

The spiral maze with the home position and initial orientation. In Figure 1, the robot is initially placed at the center of the maze (the home position) with an initial orientation. The robot then needs to choose good actions to move out of the maze as quickly as possible, based on the feedback from its sonar sensors.

Backpropagation Neural Network. In order to help the robot navigate successfully in the maze, a BP neural network is used to select correct actions for the robot. Through the feedback of the errors in each training step, the BP neural network, a kind of error back-propagation feed-forward network, keeps updating the weights and thresholds until the error converges to the expected value and lies within acceptable limits [1].

Topology architecture of the Neural Network.

The BP neural network, as shown in Figure 2, has 3 input nodes (or units), which represent the shortest distances between the robot and the obstacles measured by the sonar sensors in three different directions. In addition, there are 10 nodes in the hidden layer and 9 nodes in the output layer. Each of the 9 nodes in the output layer corresponds to a specific action of the robot.

The values of the output nodes determine which action is most strongly recommended. / Fig. 2. The architecture of the BP Neural Network. In the input layer, the three nodes represent the shortest distances between the robot and the obstacles on the front side, the left side and the right side. Accordingly, the sonar sensors are set to scan three specific sectors: -70 degrees to -30 degrees for the left side, -15 degrees to +15 degrees for the front side, and +30 degrees to +70 degrees for the right side.
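A small sketch of how the six sonar readings might be grouped into these three angular sectors and reduced to the three shortest distances; the transducer mounting angles used here are assumptions, not the AmigoBot's documented geometry:

# Sketch: reduce sonar readings to the three shortest distances (left,
# front, right) over the angular sectors described in the text. The
# mounting angles below are illustrative assumptions.
SENSOR_ANGLES_DEG = [-60, -30, -10, 10, 30, 60]   # hypothetical transducer directions

def sector_distances(readings_mm):
    """readings_mm: one range reading (mm) per transducer, same order as angles."""
    sectors = {"left": [], "front": [], "right": []}
    for angle, dist in zip(SENSOR_ANGLES_DEG, readings_mm):
        if -70 <= angle <= -30:
            sectors["left"].append(dist)
        elif -15 <= angle <= 15:
            sectors["front"].append(dist)
        elif 30 <= angle <= 70:
            sectors["right"].append(dist)
    return {name: min(vals) for name, vals in sectors.items() if vals}

print(sector_distances([1500, 900, 2000, 1800, 1100, 2500]))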

When a distance reading is returned, the data needs to be converted to one of three discrete values (1, 2 or 3) before being input to the nodes. Specifically, the value "1" corresponds to a distance of less than 1200 millimeters, a "close" distance. The value "2" corresponds to a distance between 1200 millimeters and 1600 millimeters, a "middle" distance.

The value "3" corresponds to a distance greater than 1600 millimeters, a "far" distance. Based on the values of the three input nodes, the trained neural network is required to choose a correct navigation action for the robot. The number of units in the hidden layer has a significant effect on the performance of the network.
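Putting the two preceding paragraphs together, the following sketch discretizes the three sector distances into the values 1, 2 and 3 and feeds them through a 3-10-9 feed-forward network, taking the largest output node as the recommended action; the random weights merely stand in for a trained network:

# Sketch: discretize the three distances and select an action with a
# 3-10-9 feed-forward network (random weights stand in for trained ones).
import numpy as np

def discretize(distance_mm):
    """Map a sonar distance to the discrete input value 1, 2 or 3."""
    if distance_mm < 1200:
        return 1          # close
    if distance_mm <= 1600:
        return 2          # middle
    return 3              # far

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(6)
W1, b1 = rng.normal(size=(10, 3)), np.zeros(10)   # input -> hidden (10 units)
W2, b2 = rng.normal(size=(9, 10)), np.zeros(9)    # hidden -> output (9 actions)

def select_action(left_mm, front_mm, right_mm):
    x = np.array([discretize(front_mm), discretize(left_mm), discretize(right_mm)])
    hidden = sigmoid(W1 @ x + b1)
    output = sigmoid(W2 @ hidden + b2)
    return int(np.argmax(output))     # index of the recommended action

print(select_action(900, 1800, 1400))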

SIMULATIONS AND EXPERIMENTS

In order to validate the performance of the proposed BP neural network, an AmigoBot™ mobile robot is used to build the test platform. The robot is a two-wheel-driven mobile robot, which includes an on-board PC inside its case and six ultrasonic sensors on the front and lateral sides.

The ultrasonic transducers scan the environment with a 100-millisecond sampling period, and the Prolog program processes the data and controls the motion of the robot. The Prolog program relies on the ARIA software (the mobile robot APIs) to control the speed and heading of the robot.

Training Results. In the training stage, the twenty-seven samples in Table 1 are used to train the neural network, and the training results are shown in Figure 3. / (a) Some weights of the BP Neural Network / (b) The training error of the BP Neural Network. Fig. 3.

The training results of the BP Neural Network (after 2755 loops of training, the weights converged; the final training error is 0.0009996245, while the training goal is 0.001).
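For completeness, a hedged sketch of the kind of training loop implied by Figure 3: batch gradient descent on 27 input patterns until the mean squared error falls below the 0.001 goal. The sample targets below are random placeholders for Table 1, and plain backpropagation is used rather than the authors' exact routine.

# Sketch: train the 3-10-9 BP network with plain batch gradient descent
# until the mean squared error falls below the 0.001 goal. The 27 training
# targets below are random placeholders for Table 1.
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
X = np.array(list(product([1, 2, 3], repeat=3)), dtype=float)   # all 27 input patterns
targets = np.eye(9)[rng.integers(0, 9, size=27)]                # one-hot recommended actions

W1, b1 = rng.normal(scale=0.5, size=(3, 10)), np.zeros(10)
W2, b2 = rng.normal(scale=0.5, size=(10, 9)), np.zeros(9)
lr, goal = 0.5, 0.001

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100000):
    hidden = sigmoid(X @ W1 + b1)                      # forward pass
    output = sigmoid(hidden @ W2 + b2)
    error = output - targets
    mse = np.mean(error ** 2)
    if mse < goal:                                     # training goal reached
        print(f"converged after {epoch} loops, error {mse:.10f}")
        break
    # Backpropagation of the error through both layers.
    delta_out = error * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ delta_out / len(X)
    b2 -= lr * delta_out.mean(axis=0)
    W1 -= lr * X.T @ delta_hid / len(X)
    b1 -= lr * delta_hid.mean(axis=0)
else:
    print(f"stopped at error {mse:.6f} without reaching the goal")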
Simulation Results. A physical spiral maze is built in the lab to represent the unknown environment for the mobile robot, as shown in Figure 4.

The maze has a diameter of 8 m. The images obtained from the real experiment are displayed in Figure 4. / Fig. 4. The experimental results of the proposed autonomous navigation system in a spiral maze. Some statistical data from the experiments are shown in Figure 5.

Specifically, the probability distribution of the navigation time over 30 experiments is shown in Figure 5(a). It shows that the robot completes the navigation task in 42 seconds in most cases. Figures 5(b) and 5(c) show the rotational velocity and the translational velocity of the robot in a navigation run that took 47 seconds.

From Figure 5(b), it can be concluded that the robot makes more right turns than left turns. This result matches the characteristic of the environment, a clockwise maze. Figure 5(d) shows the history of the three measured distances returned by the ultrasonic transducers during the navigation. / (a) The probability density distribution of the navigation time of the robot through the maze (30 runs) / (b) The rotational velocity of the robot (deg/sec).

/ (c) The translational velocity of the robot (mm/sec) / (d) The left, right and front distances measured by the sonar sensors. Fig. 5. The experimental results of autonomous navigation in the square maze. Comparing Figure 4 with Figure 5, it can be concluded that the robot has a relatively higher forward speed in the square maze.

The reason is that the robot does not have to adjust its heading frequently when running in the square maze. From Figures 4 and 5, it is clear that the neural-network-based autonomous navigation technique is effective in helping the robot move out of the mazes quickly. The Prolog clauses implementing the shape-handling decision logic of the robot program are listed below:

predicates
  % nondeterm robothold(s,s,s,s,s,s).  % nondeterm robotput(x,x,x,x,x,s,s).
  nondeterm check(s,s).
  nondeterm robot(s,s,s,s,s,s,s,y).
  % nondeterm call(y).  % nondeterm valid(x).

clauses
  check("triangle","square").          % which shape may rest on which
  check("semisquare","square").
  check("circle","semisquare").

  % Main menu: print the commands, read a number and dispatch on it.
  robot(Square,SemiSquare,Triangle,Circle,Hand,Source,Target,0):-
    nl, write("Press 1 to hold"), nl, write("Press 2 to put"), nl,
    write("Press 3 to free"), nl, write("Press 4 to exit"), nl,
    write("Press 7 to print"), nl, write("Enter Number: "), readint(N),
    robot(Square,SemiSquare,Triangle,Circle,Hand,Source,Target,N).
  robot(Square,SemiSquare,Triangle,Circle,Hand,Source,Target,N):-
    N<1, write("Enter valid input 1,2,3,4,7: "), readint(M),
    robot(Square,SemiSquare,Triangle,Circle,Hand,Source,Target,M).
  robot(Square,SemiSquare,Triangle,Circle,Hand,Source,Target,N):-
    N>7, write("Enter valid input 1,2,3,4,7: "), readint(M),
    robot(Square,SemiSquare,Triangle,Circle,Hand,Source,Target,M).
  robot(_,_,_,_,_,_,_,4):- write("Good Bye"), nl.

  % Command 1: if the gripper is free, ask which shape to hold.
  robot(Square,SemiSquare,Triangle,Circle,Hand,_,Target,1):-
    nl, Hand="free",
    write("Enter shape to hold 'square,triangle,semisquare,circle': "),
    readln(Shape), robot(Square,SemiSquare,Triangle,Circle,"full",Shape,Target,6).

  % State 6: mark the selected shape as held, then return to the menu.
  robot(Square,SemiSquare,Triangle,Circle,"full",Shape,Target,6):-
    Square="free", Shape="square",
    robot("hold",SemiSquare,Triangle,Circle,"full",Shape,Target,0).
  robot(Square,SemiSquare,Triangle,Circle,"full",Shape,Target,6):-
    SemiSquare="free", Shape="semisquare",
    robot(Square,"hold",Triangle,Circle,"full",Shape,Target,0).
  robot(Square,SemiSquare,Triangle,Circle,"full",Shape,Target,6):-
    Triangle="free", Shape="triangle",
    robot(Square,SemiSquare,"hold",Circle,"full",Shape,Target,0).
  robot(Square,SemiSquare,Triangle,Circle,"full",Shape,Target,6).
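Read this way, the check/2 facts can be understood as declaring which shape may rest on which, while robot/8 forms a menu-driven loop over the hold, put, free, exit and print commands, with the internal state 6 used to record which shape the gripper is currently holding.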

CONCLUSION

An autonomous navigation approach using neural networks was developed so that a mobile robot could use its on-board sonar sensors to navigate autonomously through an unknown environment. First, the architecture of the BP neural network was established for autonomous navigation. Second, by training the network with 27 samples, the robot learned the correct navigation skills in the unknown environment.

Finally, the simulation and experimental results were presented to validate the proposed approach. This paper is believed to be a good step towards improving the autonomous navigation capabilities of mobile robots using machine learning. The experimental results validated the effectiveness of the approach.

REFERENCES
[1] Sumaira Ghazal, Umar S. Khan, Muhammad Mubasher Saleem, Nasir Rashid, Javaid Iqbal, "Human activity recognition using 2D skeleton data and supervised machine learning", IET Image Processing, Vol. 13, Iss. 13, pp. 2572-2578, 2019.
[2] Biruk A. Gebre and Kishore Pochiraju, "Machine Learning Aided Design and Analysis of a Novel Magnetically Coupled Ball Drive", IEEE/ASME Transactions on Mechatronics, DOI 10.1109/TMECH.2019.2929956.
[3] Geng Yang, Jia Deng, Gaoyang Pang, Hao Zhang, Jiayi Li, Bin Deng, Zhibo Pang, "An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning", IEEE Journal of Translational Engineering in Health and Medicine, DOI 10.1109/JTEHM.2018.2822681.
[4] Niko Murrell, Ryan Bradley, Nikhil Bajaj, "A Method for Sensor Reduction in a Supervised Machine Learning Classification System", IEEE/ASME Transactions on Mechatronics, DOI 10.1109/TMECH.2018.2881889.
[5] Andrei Aksjonov, Pavel Nedoma, Valery Vodovozov, "Detection and Evaluation of Driver Distraction Using Machine Learning and Fuzzy Logic", 1524-9050, © 2018 IEEE.
[6] Olivier Janssens, Rik Van de Walle, Mia Loccufier, Sofie Van Hoecke, "Deep Learning for Infrared Thermal Image Based Machine Health Monitoring", IEEE/ASME Transactions on Mechatronics, DOI 10.1109/TMECH.2017.2722479.
[7] Da-Peng Tan, Shi-Ming Ji, Ming-Sheng Jin, "Intelligent Computer-Aided Instruction Modeling and a Method to Optimize Study Strategies for Parallel Robot Instruction", IEEE Transactions on Education, Vol. 56, No. 3, August 2013.
[8] "Applying Machine Learning to Identify Autism With Restricted Kinematic Features", received October 10, 2019, accepted October 24, 2019.
[9] Peng Li, Xuebin Hou, Xingguang Duan, Hiuman Yip, Guoli Song, Yunhui Liu, "Appearance-Based Gaze Estimator for Natural Interaction Control of Surgical Robots", 2169-3536, © 2019 IEEE.
[10] Ying Wang, Haoxiang Lang, Clarence W. de Silva, "A Hybrid Visual Servo Controller for Robust Grasping by Wheeled Mobile Robots", IEEE/ASME Transactions on Mechatronics, Vol. 15, No. 5, October 2010.
[11] Shubhangee Kishan Varma, Arunkumar B. Patki, "Fault Detection in Combinational Circuit (Full Adder) Using Prolog", www.ijircce.com, Vol. 7, Issue 11, November 2019.

AUTHORS PROFILE
Shubhangee K. Varma completed her B.E. in E & TC from Cummins College of Engineering, Pune. She is pursuing her M.Tech at College of Engineering, Pune. She works as a Lecturer in Electronics and Telecommunication at Cusrow Wadia Institute of Technology, Pune.
Her areas of interest include VLSI and embedded system design, and artificial intelligence. She has 9 years of teaching experience.


INTERNET SOURCES:
-------------------------------------------------------------------------------------------
<1% - https://www.researchgate.net/publication/319858722_Multi-
step_Reinforcement_Learning_Algorithm_of_Mobile_Robot_Path_Planning_Based_on_Virt
ual_Potential_Field
<1% - https://id.123dok.com/document/qm0g104y-information-and-communication-
technology-for-development-for-africa-pdf-pdf.html
<1% - https://qz.com/is/the-world-in-50-years/themes/
<1% - https://www.therobotreport.com/robot-grippers-advance/
<1% - http://www.cl-christianlouboutin.co.uk/page/2/
<1% -
https://www.researchgate.net/profile/Najmuddin_Aamer/publication/288511811_NEURA
L_NETWORKS_BASED_ADAPTIVE_APPROACH_FOR_PATH_PLANNING_AND_OBSTACLE_A
VOIDANCE_FOR_AUTONOMOUS_MOBILE_ROBOT_AMR/links/5681b94408ae051f9aec58
f2/NEURAL-NETWORKS-BASED-ADAPTIVE-APPROACH-FOR-PATH-PLANNING-AND-
OBSTACLE-AVOIDANCE-FOR-AUTONOMOUS-MOBILE-ROBOT-AMR.pdf
<1% - https://akinternationalnews.blogspot.com/p/science.html
<1% -
https://www.researchgate.net/publication/245161193_Failure_analysis_of_a_ball_transfer
_unit
<1% - http://repository.kulib.kyoto-u.ac.jp/dspace/search-export?
query=principle&target=/&count=2940&format=csv
<1% - https://www.pinterest.com/healthybuddys/fitness-devices/
<1% - https://canceroncologyresearch.wordpress.com/sessions/
<1% - http://www.ijstr.org/research-paper-publishing.php?month=nov2019
<1% - https://www.researchgate.net/scientific-contributions/10711777_Ken_Satoh
<1% -
https://www.researchgate.net/publication/222399424_Unveiling_the_hidden_bride_Deep
_annotation_for_mapping_and_migrating_legacy_data_to_the_Semantic_Web
<1% - https://www.science.gov/topicpages/w/waste+retrieval+challenges.html
<1% - https://en.wikipedia.org/wiki/Surfactant
<1% - https://www.sciencedirect.com/science/article/pii/S016816991931748X
<1% - http://citeseerx.ist.psu.edu/viewdoc/download?
doi=10.1.1.901.7501&rep=rep1&type=pdf
<1% - https://static.sdcpublications.com/multimedia/978-1-58503-767-
4/files/krb/krb_ic_ep3.htm
<1% - http://www.ijlll.org/IJLLL_template.doc
<1% - http://joebm.com/JOEBM_template.doc
<1% - https://web.stevens.edu/facultyprofile/?id=2146
1% - https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?
punumber=3516&isnumber=8692661
<1% - https://health.embs.org/author/zhibopang/
<1% - https://www.researchgate.net/journal/2168-
2372_IEEE_Journal_of_Translational_Engineering_in_Health_and_Medicine
<1% - https://www.hindawi.com/journals/sv/citations/29/
<1% - https://dblp.uni-trier.de/db/journals/te/te56
<1% - http://mre.faculty.asu.edu/CirKit.pdf
<1% - https://www.researchgate.net/publication/337367854_Machine_Learning-
Based_Models_for_Early_Stage_Detection_of_Autism_Spectrum_Disorders
<1% - https://link.springer.com/chapter/10.1007%2F978-3-319-00369-6_10
<1% - https://www.scribd.com/document/253749839/Excellence-Through-Autonomy-
Transformation-of-College-of-Engineering-Pune-into-an-IIT-like-Institution
<1% - https://www.design-reuse.com/articles/43288/wind-turbine-fault-detection-
machine-learning-neural-networks.html
1% - http://www.ijrte.org/wp-content/uploads/IJRTE_Paper_Template.doc
