¹Assistant Professor, Department of Computer Science and Engineering, KG Reddy College of Engineering & Technology, Moinabad, Telangana, India
²,³,⁴,⁵4th Year, Department of Computer Science and Engineering, KG Reddy College of
Abstract
The study of human activity recognition (HAR) focuses on the automatic identification of
people's everyday routines from time-series recordings made using sensors. Over the
past ten years, numerous developments have been achieved in the field of
interconnected sensing technologies, including cloud and edge computing, the IoT, and sensors.
Since sensors are inexpensive and easy to integrate into both portable and non-portable
systems, the majority of HAR research has shifted its focus to sensor applications. One
common IoT use is wearable technology equipped with sensors, which makes it simple to record
different body movements for human activity recognition. This has driven the development of a wide
range of applications in fields such as healthcare (Kushwaha, Kar, & Dwivedi, 2021;
Woznowski, King, Harwin, & Craddock, 2016), biometrics (Weiss, Yoneda, & Hayajneh,
2019), sports analytics (Ramasamy Ramamurthy & Roy, 2018), personal fitness trackers
(Ramasamy Ramamurthy & Roy, 2018), elderly care (Ranasinghe, Torres, &
Wickramasinghe, 2013), and security and surveillance (Chen, Hoey, Nugent, Cook,
& Yu, 2012; Ranasinghe, Al MacHot, & Mayr, 2016). The significance of HAR based on
wearable sensors is clear from the fact that it is not restricted to exercise-related behaviours
but can also identify and log a wide range of everyday activities, such as eating, drinking,
and brushing, and detect sleep irregularities.
2. Literature Survey
Enhanced Human Activity Recognition Based On Smartphone Sensor Data Using
Hybrid Feature Selection Model
Authors: Ahmed, N., Rafiq, J. I., & Islam, M. R. (2020).
Techniques for human activity recognition (HAR) are becoming increasingly important
in the monitoring of daily human activities in domains such as sports, healthcare, elder care, and
smart homes. Inertial sensors commonly used to detect various human
physical states include the gyroscopes and accelerometers integrated into
smartphones. Numerous studies have recently been conducted on the recognition of
human activity. Smartphone sensor data yields high-dimensional feature vectors that are
used to identify human activity. Not all of these vectors, however, contribute equally to
the identification process; including every feature vector gives rise to a phenomenon
called the "curse of dimensionality". This research proposes a hybrid feature selection
technique combining a filter and a wrapper method. Sequential
floating forward search (SFFS) is used in the process to extract the desired features for
improved activity detection. The selected features are then fed into a multiclass support
vector machine (SVM), which uses the kernel trick during training and testing to produce
nonlinear classifiers. The model was verified on a reference dataset. The proposed
method offers sufficient activity identification while operating effectively with limited
hardware resources.
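The filter-then-wrapper pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: scikit-learn's `SequentialFeatureSelector` performs plain greedy forward selection, a simplification of the floating (SFFS) variant used in the paper, and the variance filter, feature counts, and labels are all assumed for the example.

```python
# Hybrid feature selection sketch: a filter stage (variance threshold)
# followed by a wrapper stage (sequential forward selection), feeding
# a kernel SVM. Dataset and all sizes are synthetic/illustrative.
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))             # 200 windows x 30 candidate features
y = (X[:, 0] + X[:, 3] > 0).astype(int)    # label depends on only two features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = Pipeline([
    ("filter", VarianceThreshold(threshold=0.0)),           # filter method
    ("wrapper", SequentialFeatureSelector(                  # wrapper method
        SVC(kernel="rbf"), n_features_to_select=5, cv=3)),  # greedy, not floating
    ("svm", SVC(kernel="rbf")),                             # nonlinear classifier
])
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```

Because the wrapper scores candidate subsets with the classifier itself, the two informative features are likely to be retained while most noise features are dropped.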
Deep Learning For Sensor-Based Human Activity Recognition: Overview,
Challenges And Opportunities
Authors: Kaixuan Chen, Dalin Zhang, Lina Yao, Bin Guo, Zhiwen Yu, Yunhao Liu
Applications of sensor-based activity recognition are made possible by the Internet of
Things and the massive proliferation of sensor devices. Nonetheless, there are significant
obstacles that may impair a recognition system's performance in real-world situations.
As deep learning has proven beneficial in many domains, many deep approaches have
recently been studied to overcome the difficulties in activity recognition. We
give a survey of the most advanced deep learning techniques for sensor-based human
activity recognition in this work. Initially, we present the multimodal nature of the sensory
data and offer details on publicly available datasets that can be utilised for assessment in
various challenge activities. Next, we provide a novel taxonomy that organises the deep
techniques by the obstacles they address. Summarising and analysing these challenges
and the challenge-related deep approaches forms an overview of the present state of
research. We conclude our study with a discussion of the outstanding problems and some
suggestions for future work.
3. Methodology
3.1 Proposed system
The research methodology employed in this study comprises the key stages shown in
Fig. 4: collection and acquisition of sensor-based human activity data, pre-processing
of the obtained data, segmenting the raw sensor data using a sliding window of
appropriate length, dividing the dataset into training, validation, and test sets, model
development using different deep learning algorithms, hyper-parameter tuning, and
assessment of the results. The effectiveness of the models is measured by several
performance metrics. These stages ultimately result in the supervised classification and
identification of human activities from sensor-collected data, which can support remote
monitoring of elderly or severely ill patients based on their body movements. Figure 1
below depicts the flow chart of the proposed system.
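The segmentation stage above can be illustrated with a short sketch. The window length of 128 samples and the 50% overlap are illustrative assumptions, not the values used in the study, and the signal is synthetic.

```python
# Sliding-window segmentation of raw tri-axial accelerometer data.
# Window length (128 samples) and 50% overlap (step of 64) are
# illustrative choices; real HAR pipelines tune both.
import numpy as np

def sliding_windows(signal, window=128, step=64):
    """Split a (n_samples, n_channels) array into (n_windows, window, n_channels)."""
    n = (len(signal) - window) // step + 1
    return np.stack([signal[i * step : i * step + window] for i in range(n)])

raw = np.random.default_rng(1).normal(size=(1000, 3))  # x, y, z axes
segments = sliding_windows(raw)
print(segments.shape)  # (14, 128, 3)
```

Each resulting window becomes one training example, which is then assigned to the training, validation, or test split before model development.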
6. Conclusion
In order to classify complicated human activities, this research presented a novel hybrid
deep learning model, CNN-GRU. The investigation employed raw sensor data from the WISDM
dataset. The original dataset was divided into separate smartphone and smartwatch
datasets. The sliding window method was used to segment the data during preprocessing,
and no manual feature engineering was involved.
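A hybrid CNN-GRU of the kind described can be sketched as below. This is a minimal PyTorch illustration: the layer sizes, kernel width, and six-class output are assumptions for the example, not the exact architecture evaluated in the study.

```python
# Minimal CNN-GRU hybrid for windowed sensor data, sketched in PyTorch.
# All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    def __init__(self, n_channels=3, n_classes=6):
        super().__init__()
        # Convolution + pooling extract local (spatial) features per window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The GRU models temporal dependencies across the pooled time steps.
        self.gru = nn.GRU(input_size=64, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                    # x: (batch, time, channels)
        x = self.conv(x.transpose(1, 2))     # -> (batch, 64, time/2)
        out, _ = self.gru(x.transpose(1, 2)) # -> (batch, time/2, 32)
        return self.fc(out[:, -1])           # last hidden step -> class logits

model = CNNGRU()
logits = model(torch.randn(8, 128, 3))  # 8 windows of 128 samples, 3 axes
print(logits.shape)  # torch.Size([8, 6])
```

The convolutional front end handles the spatial structure of each window while the GRU captures the temporal ordering, which is the division of labour that motivates the hybrid design.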
Additionally, the open-source McFly package served as the foundation for creating
baseline models such as DeepConvLSTM and InceptionTime, which were produced using
AutoML, greatly reducing the effort required to build these intricate deep neural
network models. It can be concluded from the study's results that smartwatches are more
accurate than smartphones at identifying complex human behaviours. The results were
further validated using the training, test, and validation datasets. In summary, the findings
showed that hybrid deep learning models, by efficiently and automatically extracting
spatial-temporal features from raw sensor data, outperform other deep learning models
with relatively complex architectures in terms of accuracy for classifying complex human
actions. We plan to include more intricate deep neural network models in our analysis
going forward, in addition to CNN and GRU. Future research may also involve classifying
human activity time series from the WISDM dataset using deep Transformer models:
self-attention-based neural networks that can discover and learn dependencies in the raw
sensor input sequence.
To further classify more activities, an enlarged WISDM dataset with more participants and
activities can be used when it becomes available.
7. References
1. Ahmed, N., Rafiq, J. I., & Islam, M. R. (2020). Enhanced human activity
recognition based on smartphone sensor data using hybrid feature selection model.
Sensors, 20(1). 10.3390/s20010317.
2. Chen, K., Zhang, D., Yao, L., Guo, B., Yu, Z., & Liu, Y. (2020a). Deep learning
for sensor-based human activity recognition: overview, challenges and
opportunities. arXiv:2001.07416.
3. Chen, K., Zhang, D., Yao, L., Guo, B., Yu, Z., & Liu, Y. (2020b). Deep learning
for sensor-based human activity recognition: overview, challenges and
opportunities, 37(4). http://arxiv.org/abs/2001.07416.
4. Chen, K., Zhang, D., Yao, L., Guo, B., Yu, Z., & Liu, Y. (2021). Deep learning for
sensor-based human activity recognition: Overview, challenges, and opportunities.
ACM Computing Surveys, 54(4). 10.1145/3447744.
5. Chen, L., Hoey, J., Nugent, C. D., Cook, D. J., & Yu, Z. (2012). Sensor-based
activity recognition. IEEE Transactions on Systems, Man and Cybernetics Part C:
Applications and Reviews, 42(6), 790–808. 10.1109/TSMCC.2012.2198883.
6. Gupta, S. (2021). International Journal of Information Management Data Insights,
1, 100046.
7. Chen, Y., Zhong, K., Zhang, J., Sun, Q., & Zhao, X. (2016). LSTM networks for
mobile human activity recognition. In 2016 international conference on artificial
intelligence: Technologies and applications (pp. 50–53). Atlantis Press.
8. Gani, M. O. (2017). A novel approach to complex human activity recognition.
9. Gao, J., Yang, J., Wang, G., & Li, M. (2016). A novel feature extraction method
for scene recognition based on Centered Convolutional Restricted Boltzmann
Machines. Neurocomputing, 214(100), 708–717. 10.1016/j.neucom.2016.06.055.
10. Gao, X., Luo, H., Wang, Q., Zhao, F., Ye, L., & Zhang, Y. (2019). A human
activity recognition algorithm based on stacking denoising autoencoder and
lightGBM. Sensors, 19(4), 1–20. 10.3390/s19040947.
11. Garg, R., Kiwelekar, A. W., Netak, L. D., & Bhate, S. S. (2021). Potential use-
cases of natural language processing for a logistics organization. In Modern
approaches in machine learning and cognitive science: A walkthrough (pp. 157–
191). Springer.
12. Hussain, Z., Sheng, M., & Zhang, W. E. (2019). Different approaches for human
activity recognition: A survey (pp. 1–28). http://arxiv.org/abs/1906.05074. van