
The Report is Generated by DrillBit Plagiarism Detection Software

Submission Information

Author Name Venkata Rao Yanamadni


Title Model for Recognizing Human Behavior via Feature and Classifier Selection
Paper/Submission ID 1447253
Submitted by kgr_plagiarism@kgr.ac.in
Submission Date 2024-02-19 16:28:35
Total Pages 7
Document type Research Paper

Result Information

Similarity: 25%

Sources Type Report Content:
  Internet             12.54%
  Journal/Publication  13.6%
  Quotes               0.16%
  Words < 14           12.46%
  Ref/Bib              17.41%

Exclude Information                                Database Selection

Quotes                          Excluded           Language                 English
References/Bibliography         Excluded           Student Papers           Yes
Sources: Less than 14 Words %   Not Excluded       Journals & publishers    Yes
Excluded Source                 0%                 Internet or Web          Yes
Excluded Phrases                Not Excluded       Institution Repository   Yes



DrillBit Similarity Report

Similarity: 25%    Matched Sources: 29    Grade: B

Grade scale:
A - Satisfactory (0-10%)
B - Upgrade (11-40%)
C - Poor (41-60%)
D - Unacceptable (61-100%)

LOCATION  MATCHED DOMAIN  %  SOURCE TYPE

1   MDLdroidLite a release-and-inhibit control approach to resource-efficient deep by Zhang-2020   3   Publication
2   www.mdpi.com   3   Internet Data
3   ieeeindiacouncil.org   3   Publication
5   arxiv.org   1   Publication
7   www.mdpi.com   1   Internet Data
8   hcis-journal.springeropen.com   1   Publication
10  www.ncbi.nlm.nih.gov   1   Internet Data
11  Ultrahigh resolution mapping of peatland microform using ground-based by Mercer-2016   1   Publication
12  aaltodoc2.org.aalto.fi   1   Internet Data
13  arxiv.org   1   Internet Data
14  nature.com   1   Internet Data
15  www.mdpi.com   1   Internet Data
16  Advances in preparation methods and mechanism analysis of layered double hydroxi by Yu-2020   1   Publication
18  www.researchgate.net   1   Internet Data
19  spandidos-publications.com   1   Internet Data
20  acemap.info   1   Internet Data
21  coek.info   1   Internet Data
22  nature.com   1   Internet Data
23  www.linkedin.com   1   Internet Data
24  asbmr.onlinelibrary.wiley.com   <1   Internet Data
25  Deep networks for predicting direction of change in foreign exchange rates by Galeshchuk-2017   <1   Publication
26  IEEE 2014 13th Workshop on Information Optics (WIO) - Neuchatel, Swi by   <1   Publication
27  ijircce.com   <1   Publication
28  Modeling of corrosion product migration in the secondary circuit of n, by Kritskii, V. G. Be- 2016   <1   Publication
29  Thesis submitted to dspace.mit.edu   <1   Publication
30  The probabilistic constraints in the support vector machine by Had-2007   <1   Publication
31  Unusual dimerization of N-protected bromomethylindoles using phenylmagnesium chl by Arasambatt-2005   <1   Publication
32  www.ncbi.nlm.nih.gov   <1   Internet Data
33  www.science.gov   <1   Internet Data
Model for Recognizing Human Behavior via Feature
and Classifier Selection
Mr. Venkata Rao Yanamadni¹, Telakapalli Karthikeya², Rajamuri Shiva Reddy³, Somarouthu Gowtham⁴, Shivaghoni Kiran Goud⁵

¹Assistant Professor, Department of Computer Science and Engineering, KG Reddy College of Engineering & Technology, Moinabad, Telangana, India
²,³,⁴,⁵4th Year, Department of Computer Science and Engineering, KG Reddy College of Engineering and Technology, Hyderabad, India.

Abstract

Motion or inertial sensors, like the accelerometer and gyroscope frequently found in smartphones and smartwatches, can measure the acceleration and angular velocity of bodily movements and be used to train models that recognize human activities. These models can then be applied in a variety of fields, including biometrics and remote patient health monitoring. Deep learning-based methods have recently gained popularity for human activity recognition because they employ representation learning techniques, which can automatically identify hidden patterns in data and generate optimal features from raw sensor input without human intervention. This paper proposes a novel hybrid deep neural network model for recognizing human activity, called CNN-GRU, which combines convolutional and gated recurrent units. The model was successfully verified on the WISDM dataset and exhibited accuracy significantly better than other state-of-the-art deep neural network models developed using AutoML, such as InceptionTime and DeepConvLSTM.

Corresponding author: venkataraoyanamadni@gmail.com


I. Introduction

The study of human activity recognition (HAR) focuses on the automatic identification of people's everyday routines based on time series recordings made using sensors. Over the past ten years, numerous developments have been achieved in the field of interconnected sensing technologies, including cloud and edge computing, IoT, and sensors. Since sensors are inexpensive and simple to integrate into both portable and non-portable systems, the majority of HAR research has shifted its focus to sensor applications. One common IoT use is wearable technology with sensors, which makes it simple to record different body movements for human activity recognition. This has enabled the development of a wide range of applications in fields like healthcare (Kushwaha, Kar, & Dwivedi, 2021; Woznowski, King, Harwin, & Caddock, 2016), biometrics (Weiss, Yoneda, & Hayajneh, 2019), sports analytics (Ramasamy Ramamurthy & Roy, 2018), personal fitness trackers (Ramasamy Ramamurthy & Roy, 2018), elderly care (Ranasinghe, Torres, & Wickramasinghe, 2013), and security and surveillance (Chen, Hoey, Nugent, Cook, & Yu, 2012; Ranasinghe, Al MacHot, & Mayr, 2016). The significance of HAR based on wearable sensors is clear from the fact that it is not restricted to exercise-related behaviours but can also be used for identifying and logging a wide range of everyday activities, such as eating, drinking, and brushing, and for detecting sleep irregularities.

2. Literature Survey

Enhanced Human Activity Recognition Based on Smartphone Sensor Data Using a Hybrid Feature Selection Model
Authors: Ahmed, N., Rafiq, J. I., & Islam, M. R. (2020).
Techniques for human activity recognition (HAR) are becoming increasingly important in the monitoring of daily human activities, including sports, healthcare, elder care, and smart home activities. Inertial sensors commonly used to detect various human physical situations include the gyroscopes and accelerometers integrated into smartphones. Numerous studies have recently been conducted on the recognition of human activity. Smartphone sensor data creates high-dimensional feature vectors that are used to identify human activity. Not all of these vectors, however, contribute equally to the identification process, and including every feature vector gives rise to a phenomenon called the "curse of dimensionality". A hybrid feature selection technique, consisting of a filter and a wrapper method, has been proposed in this research. Sequential floating forward search (SFFS) is used to extract the desired features for improved activity detection. The selected features are then supplied to a multiclass support vector machine (SVM), which uses the kernel method for training and testing to produce nonlinear classifiers. The model was verified using a reference dataset. The suggested method offers sufficient activity identification while operating effectively with limited hardware resources.
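The filter-plus-wrapper pipeline surveyed above can be sketched with scikit-learn on synthetic data. Note this is an illustrative approximation, not the authors' code: scikit-learn's SequentialFeatureSelector performs plain forward selection, whereas SFFS adds a backward "floating" step (available, for example, in the mlxtend package), and the dataset sizes and parameters here are all assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for smartphone sensor feature vectors:
# 300 windows, 20 candidate features, 3 activity classes.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)

# Wrapper step: forward sequential selection driven by an SVM's
# cross-validated score (a non-floating stand-in for SFFS).
selector = SequentialFeatureSelector(SVC(kernel="rbf"),
                                     n_features_to_select=6,
                                     direction="forward", cv=3)
selector.fit(X, y)
X_sel = selector.transform(X)

# Kernel SVM on the selected features, giving a nonlinear multiclass
# classifier as in the surveyed approach.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_sel, y)
print(X_sel.shape)   # (300, 6)
```

Reducing 20 candidate features to 6 before training is exactly the countermeasure to the "curse of dimensionality" the surveyed paper motivates.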
Deep Learning for Sensor-Based Human Activity Recognition: Overview, Challenges and Opportunities
Authors: Kaixuan Chen, Dalin Zhang, Lina Yao, Bin Guo, Zhiwen Yu, Yunhao Liu
Applications of sensor-based activity recognition are made possible by the Internet of Things and the massive proliferation of sensor devices. Nonetheless, there are significant obstacles that may impact the recognition system's performance in real-world situations. As deep learning has proven beneficial in many domains, many deep approaches have recently been studied to overcome the difficulties in activity recognition. We give a survey of the most advanced deep learning techniques for sensor-based human activity recognition in this work. Initially, we present the multimodal nature of the sensory data and offer details on publicly available datasets that can be utilised for assessment across various challenge activities. Next, we provide a novel taxonomy that organises the deep techniques according to the obstacles they address. Summarising and analysing these challenges and the related deep approaches forms an overview of the present state of research. We conclude our study with a discussion of the outstanding problems and some suggestions for future work.

3 METHODOLOGY
3.1 Proposed system
The research methodology employed in this study includes the key techniques shown in Fig. 1: collection and acquisition of data from sensor-based human activity recognition, pre-processing of the obtained data, segmenting the raw sensor data using a sliding window of appropriate length, dividing the dataset into train, validation, and test sets, developing models using different deep learning algorithms, adjusting hyper-parameters, and assessing the results. The effectiveness of the models is measured using several performance metrics. These stages ultimately result in the supervised classification and identification of human activities from sensor-collected data, which can help with the remote monitoring of elderly or severely ill patients based on their body movements. Figure 1 below depicts the flow chart of the proposed system.

Fig. 1. Flow chart of the proposed system: basic action data passes through preprocessing (median filtering and Butterworth low-pass filtering) and unified feature extraction before the model is trained and tested.
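The preprocessing stages named in Fig. 1 (median filtering followed by Butterworth low-pass filtering) might look like this with SciPy; the sampling rate, cutoff frequency, and filter order below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

fs = 20.0        # Hz, assumed accelerometer sampling rate
cutoff = 5.0     # Hz, assumed low-pass cutoff for body motion

# Synthetic noisy accelerometer axis: slow motion + spikes + sensor noise.
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 0.5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
noisy[::50] += 3.0   # occasional impulsive spikes

# 1) Median filter removes the impulsive spikes.
despiked = medfilt(noisy, kernel_size=5)

# 2) 3rd-order Butterworth low-pass removes residual high-frequency noise
#    (Wn is the cutoff normalised by the Nyquist frequency; filtfilt gives
#    zero-phase filtering so the signal is not shifted in time).
b, a = butter(N=3, Wn=cutoff / (fs / 2), btype="low")
smoothed = filtfilt(b, a, despiked)
print(smoothed.shape)
```

In practice each axis of each sensor would be filtered this way before windowing.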


4 IMPLEMENTATION
Visual sensor-based systems use cameras and video to record and recognise the behaviour of the study subjects. The use of visual sensors, however, has several restrictions and drawbacks. The primary risk is to privacy, since cameras cannot be installed at all sites due to compliance and regulatory requirements (Hussain et al., 2019). Moreover, computer vision-based methods for handling pictures and videos require a lot of computation.
Wearable sensors are one kind of sensor modality that can be used to find and detect human behaviours. These sensors can be incorporated into bands, smartwatches, clothing, and cellphones, or worn directly on the body. There are many IoT-based sensor devices on the market, including accelerometers, gyroscopes, magnetometers, electromyography (EMG), electrocardiography (ECG), and many others. These sensors are safe and can even be worn on the body to record movements (Seshadri et al., 2019). The activity data gathered from individuals is directly affected by the positioning of the wearable sensors. The most common placements are the waist, lower back, and breastbone: the closer the sensors are positioned to the body's centre of mass, the better the representation of body movements. This is where the gyroscopes and accelerometers built into smartphones and smartwatches come in rather handy.
Smartphones are convenient to carry around in one's pocket, and smartwatches, which may be worn on the dominant hand, are excellent tools for tracking intricate hand-based human activities like eating soup or brushing one's teeth.
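A minimal Keras sketch of a CNN-GRU hybrid of the kind proposed here, operating on windows of tri-axial accelerometer data, might look as follows. The window length, layer sizes, and class count are illustrative assumptions, not the paper's actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumed input: windows of 128 time steps x 3 accelerometer axes,
# classified into 6 activity classes (both values illustrative).
WINDOW, CHANNELS, CLASSES = 128, 3, 6

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, CHANNELS)),
    # Convolutional block: extracts local motion patterns per window.
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    # GRU block: models temporal dependencies across the convolved sequence.
    layers.GRU(64),
    layers.Dropout(0.5),
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The convolutional front end learns spatial features from the raw signal while the GRU captures their temporal ordering, which is the division of labour attributed to the hybrid model in this paper.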

Fig. 2. Smartwatch Confusion Matrix (CNN-GRU)


Fig. 3. Smartphone Confusion Matrix (CNN-GRU)

5. Results
Below are the implementation results after the successful execution of the code.

Fig. 4. Human Activity
Fig. 5. Human Activity Prediction (Yoga)
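Confusion matrices like those in Figs. 2 and 3 can be computed with scikit-learn. The labels below are hypothetical stand-ins for three activity classes, not the paper's results.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical ground-truth and predicted activity labels (3 classes).
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])

# Rows are true classes, columns are predicted classes; the diagonal
# counts correct predictions per activity.
cm = confusion_matrix(y_true, y_pred)
acc = accuracy_score(y_true, y_pred)
print(cm)
print(acc)   # 0.75: 6 of 8 windows classified correctly
```

Per-class recall (diagonal divided by row sums) is often more informative than overall accuracy for HAR, since activity classes are usually imbalanced.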

6. Conclusion
To categorise complicated human activities, this research presented a novel hybrid deep learning model, CNN-GRU. In this investigation, raw sensor data from the WISDM dataset was employed; the original dataset was divided into distinct smartphone and smartwatch datasets. The sliding window method was used to segment the data during preprocessing, and no manual feature engineering was involved. Additionally, the open-source McFly package served as the foundation for creating baseline models such as DeepConvLSTM and InceptionTime, which were produced using AutoML, greatly lowering the effort required to build these intricate deep neural network models. It can be concluded from the study's results that smartwatches are more accurate than smartphones at identifying complex human behaviours. The results were further validated using the train, test, and validation datasets. In summary, the findings showed that hybrid deep learning models outperform other deep learning models with relatively complex architectures in terms of accuracy when it comes to efficiently and automatically extracting spatial-temporal features from raw sensor data for classifying complex human actions. Going forward, we plan to include more intricate deep neural network models in our analysis, in addition to CNN and GRU. Future research may also involve classifying human activity time series from the WISDM dataset using deep Transformer models. Transformers are self-attention-based neural networks that can be used to discover and understand dependencies in the raw sensor input sequence.

To classify a broader set of activities, an enlarged WISDM dataset with more participants and activities can be used when it becomes available.

7. References
1. Ahmed, N., Rafiq, J. I., & Islam, M. R. (2020). Enhanced human activity recognition based on smartphone sensor data using hybrid feature selection model. Sensors, 20(1). doi:10.3390/s20010317.
2. Chen, K., Zhang, D., Yao, L., Guo, B., Yu, Z., & Liu, Y. (2020a). Deep learning for sensor-based human activity recognition: overview, challenges and opportunities. arXiv:2001.07416.
3. Chen, K., Zhang, D., Yao, L., Guo, B., Yu, Z., & Liu, Y. (2020b). Deep learning for sensor-based human activity recognition: overview, challenges and opportunities. 37(4). http://arxiv.org/abs/2001.07416.
4. Chen, K., Zhang, D., Yao, L., Guo, B., Yu, Z., & Liu, Y. (2021). Deep learning for sensor-based human activity recognition: Overview, challenges, and opportunities. ACM Computing Surveys, 54(4). doi:10.1145/3447744.
5. Chen, L., Hoey, J., Nugent, C. D., Cook, D. J., & Yu, Z. (2012). Sensor-based activity recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 42(6), 790-808. doi:10.1109/TSMCC.2012.2198883.
6. Gupta, S. (2021). International Journal of Information Management Data Insights, 1, 100046.
7. Chen, Y., Zhong, K., Zhang, J., Sun, Q., & Zhao, X. (2016). LSTM networks for mobile human activity recognition. In 2016 International Conference on Artificial Intelligence: Technologies and Applications (pp. 50-53). Atlantis Press.
8. Gani, M. O. (2017). A novel approach to complex human activity recognition.
9. Gao, J., Yang, J., Wang, G., & Li, M. (2016). A novel feature extraction method for scene recognition based on Centered Convolutional Restricted Boltzmann Machines. Neurocomputing, 214(100), 708-717. doi:10.1016/j.neucom.2016.06.055.
10. Gao, X., Luo, H., Wang, Q., Zhao, F., Ye, L., & Zhang, Y. (2019). A human activity recognition algorithm based on stacking denoising autoencoder and LightGBM. Sensors, 19(4), 1-20. doi:10.3390/s19040947.
11. Garg, R., Kiwelekar, A. W., Netak, L. D., & Bhate, S. S. (2021). Potential use-cases of natural language processing for a logistics organization. In Modern Approaches in Machine Learning and Cognitive Science: A Walkthrough (pp. 157-191). Springer.
12. Hussain, Z., Sheng, M., & Zhang, W. E. (2019). Different approaches for human activity recognition: A survey (pp. 1-28). http://arxiv.org/abs/1906.05074.
