Journal of Sensors
Volume 2023, Article ID 8276738, 15 pages
https://doi.org/10.1155/2023/8276738
Research Article
Enhance-Net: An Approach to Boost the Performance of Deep
Learning Model Based on Real-Time Medical Images
Received 17 August 2022; Revised 28 September 2022; Accepted 12 October 2022; Published 2 May 2023
Copyright © 2023 Vipul Narayan et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Real-time medical image classification is a complex problem. Using IoT technology in medical applications helps the healthcare sector improve the quality of treatment while lowering costs via automation and resource optimization. Deep learning is critical in categorizing medical images. Deep learning algorithms make radiologists' and orthopaedic surgeons' work easier by providing quicker and more accurate findings in real time. Despite this, the classic deep learning technique has hit its performance limits. For these reasons, this research examines enhancement strategies to raise the performance of deep neural networks and provide an optimized solution known as Enhance-Net. The experiment can be divided into six distinct stages. First, Champion-Net was chosen as the deep learning model from a pool of benchmark deep learning models (EfficientNet: B0, MobileNet, ResNet-18, and VGG-19); this stage selects the optimal model. In the second stage, Champion-Net was tested with various resolutions; this stage fixes the dataset resolution and improves Champion-Net performance. The next stage extracts the green channel data. In the fourth stage, Champion-Net is combined with the image enhancement algorithms CLAHE, HEF, and UM; this phase serves to improve Enhance-Net performance. The next stage compares the Enhance-Net findings against the lightness order error (LoE); in the Enhance-Net models, the current study combines image enhancement and the green channel with Champion-Net. In the final stage, radiologists and orthopaedic surgeons use the trained model for real-time medical image prediction. The study uses the musculoskeletal radiograph-bone classification (MURA-BC) dataset. The classification accuracy of Enhance-Net was determined for the train and test datasets; the models obtained 98.02 percent, 94.79 percent, and 94.61 percent accuracy, respectively. A 96.74% accuracy was achieved during real-time testing with the unseen dataset.
come by. Deep learning models (DLMs) have aided in the development of a number of advancements in the field of image classification [9-13].

According to a recent study [14], the depth of a network-oriented model is crucial for challenging datasets. The debate about shallow vs. deep nets has raged in DLMs for a long time. The issue of decreased feature reuse arises while training extremely deep learning models [15-17], which makes the training procedure for these models difficult. In [18], to obtain higher accuracy and minimal training loss, a deep learning model is created with the ideal combination of resolution, depth, and width. Several approaches have been proposed throughout the literature, but most of them do not consider the following system requirements: high detection rate, real-time constraint, and robustness under dynamic environments.

A DLM technique called Enhance-Net is proposed to enhance the overall classification performance on musculoskeletal radiographs and X-ray pictures in a clinical setting. The most interesting aspect of this study is the evaluation of the influence of three distinct image enhancement algorithms (CLAHE, UM, and HEF) on green channel grayscale musculoskeletal radiography X-ray images for DLMs.

The remainder of the paper is organized as follows: Section 3 describes the materials and processes that were utilized to create the proposed model. Section 4 goes into further detail about the suggested model. Section 5 covers the simulation, the results, and the validation process in depth. Section 6 discusses real-time verification and its outcome. Section 7 is dedicated to the conclusion and future work.

2. Related Work

In [19], the study tested the model's efficiency by extending the depth (16-20 layers) on different datasets in the ImageNet Challenge held in 2014. The deep learning model's error rate reaches saturation at the 19th layer. The authors of [20] created a residual learning model to make training deeper networks easier; on the ImageNet test dataset, the model has a 3.57 percent error rate. The model designer may create the optimal size model constraint in the design [5, 21-23]. In [18], the study offers a scaling technique that uses a compound coefficient to scale three key parameters (width, depth, and resolution) evenly and simultaneously.

When IoT technology is used in healthcare applications, it helps improve the quality of care and lower costs through automation and better use of resources. IoT in medical imaging makes it easy to find out what is wrong and take steps to fix it in real time, which is made possible by the auto-analysis of the imaging equipment's parameters [24]. The Internet of things (IoT) lets people develop systems that use sensors, connected devices, and the Internet. In the ICU, monitoring patients is essential: even a slight delay in treatment decisions could cause permanent disability or even death. Most ICU devices have sensors that can measure different health parameters, but it is still hard to keep an eye on them all the time. A system based on the IoT can help speed up communication, find emergencies, get in touch with medical staff, and start quick, proactive treatment. Such a healthcare system makes mistakes and communication delays less likely, and it gives doctors more time to make decisions based on accurate observations [25].

Academic institutions and the commercial healthcare sector are devoting significant attention to the development of intelligent medical sensors, gadgets, cloud computing, and other healthcare-related technology. As a result, the IoT has been identified as one of the most promising research subjects in healthcare, particularly in medical image processing. When analysing medical images, researchers have used a wide variety of machine learning, semisupervised learning, deep learning, and artificial intelligence techniques. These methods are used for illness detection, with the goal of assisting medical professionals in the early phases of disease diagnosis and of delivering findings that are accurate, consistent, efficient, and quick, thereby lowering the mortality rate. The coronavirus (COVID-19) has emerged as one of the most challenging and serious illnesses, and it is rapidly spreading around the globe [26]. The authors conducted an exhaustive study of the applications of WMSs as well as their advancements, and compared the performance of WMSs with that of other platforms. They went through the benefits these devices bring when monitoring the health of people with illnesses such as cardiac arrest and Alzheimer's disease [27].

The Internet of things offers potential solutions that might reduce the load placed on healthcare institutions. For instance, RFID systems are used in medical institutions to both lower overall medical costs and improve the quality of treatment provided. Patients' cardiac impulses may be conveniently monitored by physicians thanks to healthcare monitoring schemes, which help physicians deliver an accurate diagnosis and improve patient care [28]. The authors of [29] have shown a semisupervised learning model for collecting the best possible set of picture attributes. The primary focus of the presented ensemble learning model is that the framework is taught to acquire various degrees of semantic representation of pictures in order to extract features of better quality. In this way, the new set of features may be learnt with a bigger dataset by constructing a more refined model. The proposed fine-tuned CNN design makes use of a standard set of medical picture attributes taken from a variety of modalities.

Cloud computing (CC) is a model of distributed computing that makes it possible for companies and individual users to access virtualized computing, storage, and networking resources via the Internet. At the moment, it is more economical, less difficult to manage, and more elastic to use these resources than a collection of local, physical ones. Cloud services are often kept in data centres, which typically consist of thousands upon thousands of individual computers [30]. A significant amount of research has been conducted to enhance the effectiveness of monitoring in IoT-based healthcare systems. In [31], the architecture employed in the Internet of things, particularly cloud-integrated systems, is studied. Accuracy and power consumption are two of the most essential concerns in the Internet of things; accordingly, that article discusses the research activities now underway to improve the functionality of IoT-based healthcare systems. An expert application system has been developed [32] using the Internet of medical things (IoM). One of its primary functions is collecting and analysing patients' physiological data through analysis of the medical sensor nodes implanted in the patient's body; in turn, it detects the medical information of the patient using sophisticated portable gadgets. The security, protection, and privacy of medical data are becoming more complex issues for the IoM, because patient information is very sensitive and should not be disclosed to anybody other than a medical expert. For this reason, a user authentication mechanism based on anonymity is recommended as the best solution to the privacy preservation problems in the IoM.

The authors of [33] used transfer learning to train both DenseNet-161 and ResNet-50 without a fully connected layer to get the desired results. Their study used the Kimia Path24 dataset, which was available in grayscale and colour. The grayscale dataset was used with the DenseNet-161 algorithm, while the ResNet-50 algorithm used the colour dataset, obtaining a classification accuracy of 98.87 percent. An updated ResNet model was presented in [34]: instead of global average pooling, the authors added adaptive dropout. It achieved 87.71 percent classification accuracy on Montgomery County, 62.9 percent on NIH, and 81.8 percent on Shenzhen.

The STARE dataset was used in the research by the authors of [35]. The dataset was resized into three different datasets with resolutions of 31 × 35, 46 × 53, and 61 × 70 pixels and was classified into 15 dissimilar eye diseases. The investigations found that the datasets with sizes of 32 × 36 and 62 × 72 pixels had the maximum accuracy during training, and the input test dataset with size 32 × 36 had the highest test accuracy of 80.93 percent. Mahbod et al. [36] looked at dermoscopic picture collections with dimensions ranging from 64 × 64 pixels to 768 × 768 pixels. The authors conclude that the classification performance on the small-sized 64 × 64 pixel dataset was significantly reduced, while the classification performance on the larger 128 × 128 pixel dataset showed significant improvement.

The X-ray picture has improved clarity and contrast as a result of the enhancement. Using the Gaussian high-pass filter as a starting point, this filter has been optimized to have
[Figure: Workflow of the proposed model. Step 1: the benchmark deep learning models (EfficientNet: B0, MobileNet, ResNet-18, and VGG-19) are trained on 21035 MURA-BC X-ray training images (elbow, finger, forearm, hand, humerus, shoulder, and wrist) and the winner, EfficientNet: B0, is selected as Champion-Net. Steps 2-3: Champion-Net is trained on datasets of different resolutions. Step 4: the green channel information is extracted. Step 5: Champion-Net is combined with the image enhancement techniques CLAHE, high-frequency filtering (HEF), and unsharp masking (UM) to form Enhance-Net. Steps 6-7: Enhance-Net is evaluated through classification accuracy and verified against the lightness order error (LoE).]

[Figure: Original, CLAHE, HEF, and UM enhanced X-ray samples for each study type (elbow, finger, forearm, hand, humerus, shoulder, and wrist).]
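The winner-selection step of the workflow amounts to a simple comparison loop. A minimal Python sketch of Step 1; `train_and_evaluate` is a hypothetical stand-in that returns the maximum training accuracies reported later in Section 5.3 instead of running a real training loop:

```python
# Sketch of the Champion-Net selection step (Step 1 of the workflow).
# train_and_evaluate is a placeholder: it returns the maximum training
# accuracies reported in Section 5.3 rather than training anything.

BENCHMARKS = ["EfficientNet-B0", "MobileNet", "ResNet-18", "VGG-19"]

def train_and_evaluate(model_name, resolution=32):
    """Stand-in for a 20-epoch training run on MURA-BC at the given resolution."""
    reported_max_accuracy = {"EfficientNet-B0": 92.12, "MobileNet": 91.64,
                             "ResNet-18": 92.05, "VGG-19": 91.96}
    return reported_max_accuracy[model_name]

def select_champion(models):
    """Evaluate every benchmark model and keep the most accurate one."""
    scores = {m: train_and_evaluate(m) for m in models}
    champion = max(scores, key=scores.get)
    return champion, scores

champion, scores = select_champion(BENCHMARKS)
```

With the reported numbers, `champion` comes out as EfficientNet-B0, matching the paper's Champion-Net choice.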
The distributed pixel is calculated as shown in equation (2).

[Figure: original elbow image and its HEF, UM, and green channel with CLAHE enhanced versions.]
Ramponi and Mathews, 2000). The sharp details of the image are computed as the difference between the original image and the image with Gaussian blur applied to it. The input image is blurred using a Gaussian blur filter at the beginning of this technique. The radius of the blur and the amount of blur are the two key parameters for Gaussian-blurred images (Ramponi, 1998). The radius affects the size of the edge that needs to be enlarged. According to equation (7), the amount of darkness, lightness, and contrast is included in the edges of the images:

G(x, y) = (1 / (2πσ^2)) exp(−(x^2 + y^2) / (2σ^2)),   (7)

where x and y are the horizontal and vertical distances from the source and σ is the standard deviation of the Gaussian distribution. I_enhanced is the image obtained after enhancement, as shown in

I_enhanced = I_original + contrast_value × I_blur,   (8)

where I_original is the original image and I_blur is the unsharp mask (the sharp details computed above).

4.1. Research Environment. A virtual environment was used to conduct the research. The Ubuntu server has 12 GB of RAM and six virtual CPU (Intel Xeon 2.2 GHz) processors installed on the host virtual machine. The proposed experiment takes place in a CPU-based setting, and Python 3 is used for the simulation process.

4.1.1. Image Preprocessing Stage. In this stage, the preprocessing of X-ray images improves the raw image's important information. Dataset creation and transformation are the two steps in the image preprocessing process.

4.1.2. Image Dataset Formation. In this stage, the MURA-BC X-ray data is used to form datasets at different resolutions: 32 × 32, 40 × 40, 48 × 48, 56 × 56, 64 × 64, 72 × 72, 80 × 80, and 88 × 88 pixels. The dataset is arranged in two packages: train and test. The training package holds around 21935 images, and the test package contains 650 X-ray image samples from seven different groups.

4.1.3. Data Transformation. This step involves randomly cropping both the training and testing datasets with four-pixel padding and horizontally flipping the training dataset. This strategy gives an advantage over using the dataset as-is. A normalizing procedure is used to minimize signals that include undesired noise or distortion, because the X-ray picture acquired by the imaging modality system may be partial and lacking in important features, such as the patient's position, due to uneven staining and insufficient contrast.

4.4. Green Channel Extraction. The suggested model is designed to extract the green channel details and convert the image into grayscale. In [50, 51], the authors used only the green channel of the RGB colour image for conversion to a grayscale image. The green channel image keeps the most information. The green channel of fundus pictures is often utilized, since several authors' findings have shown that it retains the most information.
Training report: EfficientNet: B0 at each resolution, top-5 epochs by training accuracy (%).

32 × 32:  (20, 92.12) (19, 92.01) (17, 91.84) (16, 91.67) (18, 91.65)   Max 92.12%
40 × 40:  (16, 93.07) (19, 92.94) (20, 92.74) (18, 92.65) (13, 92.63)   Max 93.07%
48 × 48:  (19, 92.54) (17, 92.33) (20, 92.33) (14, 92.29) (18, 91.99)   Max 92.54%
56 × 56:  (20, 92.55) (18, 92.30) (19, 92.30) (16, 92.18) (17, 92.14)   Max 92.55%
64 × 64:  (20, 93.10) (16, 93.03) (12, 93.00) (19, 92.94) (18, 92.89)   Max 93.10%
72 × 72:  (12, 93.89) (15, 93.86) (14, 93.77) (18, 93.71) (20, 93.70)   Max 93.89%
80 × 80:  (19, 94.64) (18, 94.55) (17, 94.53) (12, 94.39) (13, 94.39)   Max 94.64%
88 × 88:  (18, 94.86) (19, 94.50) (16, 94.47) (13, 94.45) (11, 94.44)   Max 94.86%
Testing report: EfficientNet: B0 at each resolution, top-5 epochs by test accuracy (%).

32 × 32:  (19, 91.30) (17, 90.94) (16, 90.16) (9, 89.02) (11, 89.02)   Max 91.30%
40 × 40:  (16, 93.07) (19, 92.94) (20, 92.74) (18, 92.65) (13, 92.63)   Max 93.07%
48 × 48:  (10, 89.74) (7, 89.68) (6, 89.20) (14, 89.08) (8, 88.66)   Max 89.74%
56 × 56:  (16, 90.76) (14, 90.46) (18, 89.92) (20, 89.92) (19, 89.38)   Max 90.76%
64 × 64:  (18, 91.96) (17, 91.30) (15, 90.94) (11, 90.58) (9, 89.80)   Max 91.96%
72 × 72:  (10, 92.02) (14, 91.90) (8, 91.84) (13, 91.78) (17, 91.42)   Max 92.02%
80 × 80:  (16, 94.24) (20, 93.46) (19, 92.80) (17, 92.68) (14, 91.96)   Max 94.24%
88 × 88:  (8, 92.86) (4, 92.38) (13, 92.32) (19, 92.14) (14, 91.84)   Max 92.86%
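For illustration, the per-resolution maxima of the two reports can be compared programmatically; the dictionaries below simply restate the Max values from the tables:

```python
# Maximum training/testing accuracy (%) per square resolution, copied
# from the two EfficientNet: B0 reports above.
train_max = {32: 92.12, 40: 93.07, 48: 92.54, 56: 92.55,
             64: 93.10, 72: 93.89, 80: 94.64, 88: 94.86}
test_max = {32: 91.30, 40: 93.07, 48: 89.74, 56: 90.76,
            64: 91.96, 72: 92.02, 80: 94.24, 88: 92.86}

def best_resolution(report):
    """Return the resolution whose maximum accuracy is highest."""
    return max(report, key=report.get)
```

Accuracy alone favours 88 × 88 for training and 80 × 80 for testing; the paper's final choice of 64 × 64 also weighs training time.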
Acc = (TP + TN) / (total number of samples).   (9)

The total amount of the dataset for the images is computed as shown in (10).

[Figure 6: Champion-Net accuracy (%) by enhancement technique. Green channel + CLAHE: 97.66, 98.02, 95.78; HEF: 95.57, 94.79, 40.13; UM: 95.69, 94.61, 84.58.]

Table 11: Champion-Net training and testing accuracy of various enhancement techniques.
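Equation (9) is the standard accuracy measure. A minimal sketch that also covers the seven-class MURA-BC case by taking the trace of the confusion matrix (the numbers in the usage note are illustrative, not the paper's):

```python
import numpy as np

def accuracy(confusion_matrix):
    """Equation (9): Acc = (TP + TN) / total.  For a multi-class
    confusion matrix this is the trace (all correct predictions)
    divided by the total number of samples."""
    cm = np.asarray(confusion_matrix, dtype=float)
    return np.trace(cm) / cm.sum()
```

For a binary matrix laid out as [[TP, FN], [FP, TN]], this reduces exactly to (TP + TN) / total.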
5.3. Experiment Result for Champion-Net Selection as Winner. To conduct this experiment, we started with a clean slate and trained the benchmark deep learning models from scratch on the MURA-BC 32 × 32 X-ray picture dataset. The model training was carried out for a total of 20 epochs. A detailed report on the training accuracy of the DLMs (EfficientNet: B0, MobileNet, ResNet-18, and VGG-19) is provided in Table 4. In terms of maximum training accuracy, we achieved 92.12 percent, 91.64 percent, 92.05 percent, and 91.96 percent in the experiments.

The test accuracy of the EfficientNet: B0, MobileNet, ResNet-18, and VGG-19 benchmark models is depicted in Table 5. The maximum test accuracies were calculated at 92.120%, 91.640%, 92.5%, and 91.960%.

A comparison of training error rates for the benchmark deep learning models EfficientNet: B0, MobileNet, ResNet-18, and VGG-19 is presented in Table 6. We acquire minimum training errors of 0.281, 0.4026, 0.2760, and 0.35249 for the four training scenarios.

The test error of the EfficientNet: B0, MobileNet, ResNet-18, and VGG-19 benchmark models is shown in Table 7. The minimum test error rates were obtained at 0.2765270, 0.2915930, 0.2934660, and 0.3245850.

As illustrated in Figures 4 and 5, EfficientNet: B0 is selected as the Champion-Net based on the accuracy and error of training and testing. For training and testing, the highest accuracies (EfficientNet: B0) are 92.15 and 91.3, and the minimum training and testing error rates are 0.24 and 0.27.

[Figure 7: LoE score achieved in different image enhancement techniques. Average LoE of different X-ray studies: CLAHE 88.62, HEF 403.68, UM 115.68 (lower is better).]

Table 12: Different X-ray study datasets and their corresponding LoE score.

LoE score            CLAHE + green channel    HEF     UM
Elbow                      106.87            407.9   138.8
Finger                      75.92            613.3   203.2
Forearm                    115.21            324.6    54.1
Hand                        32.87            487.8   131.1
Humerus                     85.29            585.7   104.1
Shoulder                   150.87              1.5    95.3
Wrist                       53.35            403.6    83.8
Average LoE score           88.62            402.6   115.6
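The LoE scores in Table 12 measure how well each technique preserves the relative order of pixel lightness, where lightness is the channel maximum of equation (13) and order is compared with the unit step of equation (14). A minimal NumPy sketch; the exhaustive pairwise summation used here is the standard LoE formulation and is an assumption insofar as the paper reproduces only the two component equations:

```python
import numpy as np

def lightness(rgb):
    """Equation (13): L(x, y) = max of the r, g, b channels per pixel."""
    return np.asarray(rgb, dtype=float).max(axis=-1)

def loe(original, enhanced):
    """Lightness order error between an original and an enhanced RGB image:
    the average number of pixel pairs whose relative lightness order (the
    unit step U of equation (14)) differs after enhancement.  Lower is
    better; an order-preserving enhancement scores 0.  This is the
    exhaustive O(n^2) version, fine for small images."""
    L = lightness(original).ravel()
    Le = lightness(enhanced).ravel()
    u_orig = L[:, None] >= L[None, :]    # U(L(x), L(y)) for all pixel pairs
    u_enh = Le[:, None] >= Le[None, :]
    return (u_orig ^ u_enh).sum(axis=1).mean()
```

Any monotone brightness change (e.g. doubling every pixel) keeps the order intact and scores 0, while an enhancement that inverts the lightness ordering scores high, which is why the naturalness-preserving green channel + CLAHE variant wins in Table 12.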
technique is depicted:

L(x, y) = max over c ∈ {r, g, b} of I_c(x, y),   (13)

U(x, y) = 1 if x ≥ y, and 0 otherwise.   (14)

[Figure 9: Real-time verification accuracy (n = 92).]

forms better in terms of accuracy and training time, while the 32 × 32 pixel resolution dataset performs the worst. As a result, we have chosen the 64 × 64 pixel X-ray dataset.

5.5. Experimental Result for Champion-Net with Different Image Enhancement Techniques. The primary goal of this phase of the research is to improve the overall performance of the DLMs. We processed the dataset that was finalized in the previous phase with several image enhancement algorithms, including HEF, CLAHE, and UM. The findings in Table 10 and Figure 6 demonstrate the performance of Champion-Net along with the enhancement approaches on the datasets, in the presence and absence of enhancement strategies, respectively. On the training dataset, the performance of all three image-enhancing algorithms is about the same, in the range of 97.66 to 95.69 percent. The difference between the test datasets with and without enhancing methods was examined during the testing phase. It is evident that the green channel + CLAHE strategy surpasses the other two procedures, according to the results in Table 11. The green channel + CLAHE methodology obtained a 95.78 percent accuracy using enhancement techniques. HEF obtained an accuracy of 94.79 percent, while UM obtained an accuracy of 94.61 percent on the test dataset. Without the use of enhancement techniques, HEF obtained just 40.15 percent and UM scored 84.5 percent on the test data.

5.6. Result Validation Using LoE. The lightness order error (LoE) technique is used for result validation of the preceding phase of the experiment in the final step, as represented in Figure 7. Table 12 displays the LoE-based scores of several bone X-ray studies. The lowest LoE score indicates the optimal option [54], which keeps the naturalness of the photos even after they have been altered. The green channel + CLAHE technique has a LoE of 88.62, which is the lowest of the three techniques tested. The HEF-based LoE score is around 403.6, whereas the UM LoE score is around 115.6. The LoE of CLAHE thus verifies the outcome of the preceding phase.

6. Real-Time Verification

In this section, radiologists and orthopaedic surgeons perform real-time testing of the verified model, using the trained model for real-time medical image prediction. They tested a total of 92 X-ray images. Figure 8 provides the details of the confusion matrix. A 96.74% accuracy is achieved during real-time testing, as shown in Figure 9. The suggested model produced the best result and improved classification performance during the experiments.

7. Conclusion

This paper has proposed a methodology for improving deep learning models in the medical imaging field. The importance of the Internet of things in medical imaging technology is shown by how it is used in healthcare applications. We have also carried out a number of tests to enhance the overall accuracy of the proposed model and validated the findings through the LoE technique. Champion-Net was chosen from among the benchmark DLMs based on accuracy and error rate. Various resolutions of the dataset are evaluated by Champion-Net in order to achieve the maximum possible Champion-Net performance. As part of the improvement phase, the green channel details are retrieved, the model is retrained, and the best model is tested using image-enhancing techniques such as CLAHE, HEF, and UM. This phase contributes to the production of Enhance-Net with improved performance. At the conclusion of the trial, we compare the findings of Enhance-Net with the lightness order error (LoE). Using the suggested Enhance-Net architecture, the performance of the DLMs on the MURA-BC dataset is improved significantly. Enhance-Net makes a significant contribution to IoT-based real-time prediction models on X-ray datasets in a dynamic manner. Results of the research include X-ray pictures processed with green channel + CLAHE enhancement techniques in order to increase the performance of the DLM. In the future, Enhance-Net can also be implemented for various other medical imaging problems. This approach offers medical practitioners an instant, comprehensive tool to help them through the treatment process in a variety of medical disciplines. Although this study is confined to the X-ray image modality, the same work may be expanded to include other medical acquisition methods in the future.

Data Availability

The dataset will be made available on request.

Ethical Approval

This paper does not contain any studies with human or animal subjects.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] S. Balhara, N. Gupta, A. Alkhayyat et al., "A survey on deep reinforcement learning architectures, applications and emerging trends," IET Communications, vol. 16, no. 20, 2022.
[2] B. Preveze, A. Alkhayyat, F. Abedi, A. M. Jawad, and A. S. Abosinnee, "SDN-driven Internet of health things: a novel adaptive switching technique for hospital healthcare monitoring system," Wireless Communications and Mobile Computing, vol. 2022, Article ID 3150756, 11 pages, 2022.
[3] S. Srivastava and S. Sharma, "Analysis of cyber related issues by implementing data mining algorithm," in 2019 9th International Conference on Cloud Computing, Data Science & Engineering (Confluence), pp. 606-610, Noida, India, 2019.
[4] D. S. Al-Dulaimi, A. G. Mahmoud, N. M. Hassan, A. Alkhayyat, and S. A. Majeed, "Development of pneumonia disease detection model based on deep learning algorithm," Wireless Communications and Mobile Computing, vol. 2022, Article ID 2951168, 10 pages, 2022.
[5] V. Narayan and A. K. Daniel, "IOT based sensor monitoring system for smart complex and shopping malls," in International Conference on Mobile Networks and Management, pp. 344-354, Springer, 2021.
[6] V. Narayan, A. K. Daniel, and A. K. Rai, "Energy efficient two tier cluster based protocol for wireless sensor network," in 2020 International Conference on Electrical and Electronics Engineering (ICE3), pp. 574-579, Gorakhpur, India, 2020.
[7] N. M. Mamani, "Machine learning techniques and polygenic risk score application to prediction genetic diseases," ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal, vol. 9, no. 1, pp. 5-14, 2020.
[8] V. Narayan, R. K. Mehta, M. Rai et al., "To implement a web page using thread in Java," International Journal of Current Engineering and Technology, vol. 7, no. 3, pp. 926-934, 2017.
[9] A. Kumar, K. Abhishek, C. Chakraborty, and N. Kryvinska, "Deep learning and Internet of things based lung ailment recognition through coughing spectrograms," IEEE Access, vol. 9, pp. 95938-95948, 2021.
[10] A. Krizhevsky and G. Hinton, "Learning multiple layers of features from tiny images," University of Toronto, Toronto, Ontario, 2009.
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, pp. 1097-1105, 2012.
[12] V. Ravi, H. Narasimhan, C. Chakraborty, and T. D. Pham, "Deep learning-based meta-classifier approach for COVID-19 classification using CT scan and chest X-ray images," Multimedia Systems, vol. 28, no. 4, pp. 1401-1415, 2022.
[13] V. Narayan and A. K. Daniel, "A research protocol for detection and optimization of overlapping coverage in wireless sensor networks," International Journal of Engineering and Advanced Technology, vol. 8, Supplement 6, pp. 1-6, 2019.
[14] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014, https://arxiv.org/abs/1409.1556.
[15] S. Zagoruyko and N. Komodakis, "DiracNets: training very deep neural networks without skip-connections," 2017, https://arxiv.org/abs/1706.00388.
[16] V. Narayan and A. K. Daniel, "Design consideration and issues in wireless sensor network deployment," Invertis Journal of Science & Technology, vol. 13, no. 3, p. 101, 2020.
[17] D. Irfan, X. Tang, V. Narayan, P. K. Mall, S. Srivastava, and V. Saravanan, "Prediction of quality food sale in mart using the AI-based TOR method," Journal of Food Quality, vol. 2022, Article ID 6877520, 9 pages, 2022.
[18] M. Tan and Q. Le, "EfficientNet: rethinking model scaling for convolutional neural networks," in Proceedings of the 36th International Conference on Machine Learning, pp. 6105-6114, Long Beach, California, 2019.
[19] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu, "Spatial transformer networks," 2015, https://arxiv.org/abs/1506.02025.
[20] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, Las Vegas, NV, USA, 2016.
[21] A. G. Howard, M. Zhu, B. Chen et al., "MobileNets: efficient convolutional neural networks for mobile vision applications," 2017, https://arxiv.org/abs/1704.04861.
[22] V. Narayan and A. K. Daniel, "CHHP: coverage optimization and hole healing protocol using sleep and wake-up concept for wireless sensor network," International Journal of Systems Assurance Engineering and Management, vol. 13, Supplement 1, pp. 546-556, 2022.
[23] V. Narayan and A. K. Daniel, "A novel approach for cluster head selection using trust function in WSN," Scalable Computing: Practice and Experience, vol. 22, no. 1, pp. 1-13, 2021.
[24] A. Chandy, "A review on IoT based medical imaging technology for healthcare applications," Journal of Innovative Image Processing, vol. 1, no. 1, pp. 51-60, 2019.
[25] B. Prajapati, S. Parikh, and J. Patel, "An intelligent real time IoT based system (IRTBS) for monitoring ICU patient," in International Conference on Information and Communication Technology for Intelligent Systems, pp. 390-396, Springer, 2017.
[26] I. Ahmed, G. Jeon, and A. Chehri, "An IoT-enabled smart health care system for screening of COVID-19 with multi layers features fusion and selection," Computing, vol. 105, no. 4, pp. 743-760, 2023.
[27] R. Miotto, F. Wang, S. Wang, X. Jiang, and J. T. Dudley, "Deep learning for healthcare: review, opportunities and challenges," Briefings in Bioinformatics, vol. 19, no. 6, pp. 1236-1246, 2018.
[28] M. N. Birje and S. S. Hanji, "Internet of things based distributed healthcare systems: a review," Journal of Data, Information and Management, vol. 2, no. 3, pp. 149-165, 2020.
[29] A. Kumar, J. Kim, D. Lyndon, M. Fulham, and D. Feng, "An ensemble of fine-tuned convolutional neural networks for medical image classification," IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 1, pp. 31-40, 2017.
[30] A. Darwish, A. E. Hassanien, M. Elhoseny, A. K. Sangaiah, and K. Muhammad, "The impact of the hybrid platform of Internet of things and cloud computing on healthcare systems: opportunities, challenges, and open problems," Journal of Ambient Intelligence and Humanized Computing, vol. 10, no. 10, pp. 4151-4166, 2019.
[31] S. Selvaraj and S. Sundaravaradhan, "Challenges and opportunities in IoT healthcare systems: a systematic review," SN Applied Sciences, vol. 2, no. 1, 2020.
[32] B. D. Deebak, F. Al-Turjman, M. Aloqaily, and O. Alfandi, "An authentic-based privacy preservation protocol for smart e-healthcare systems in IoT," IEEE Access, vol. 7, pp. 135632-135649, 2019.
[33] M. Talo, "Automated classification of histopathology images using transfer learning," Artificial Intelligence in Medicine, vol. 101, article 101743, 2019.
[34] Y. Zhang, X. Pan, C. Li, and T. Wu, "3D liver and tumor segmentation with CNNs based on region and distance metrics," Applied Sciences, vol. 10, no. 11, p. 3794, 2020.
[35] B. K. Triwijoyo, B. S. Sabarguna, W. Budiharto, and E. Abdurachman, "Deep learning approach for classification of eye diseases based on color fundus images," in Diabetes and Fundus OCT, pp. 25-57, Elsevier, 2020.
[36] A. Mahbod, G. Schaefer, C. Wang, R. Ecker, G. Dorffner, and I. Ellinger, "Investigating and exploiting image resolution for transfer learning-based skin lesion classification," 2020, https://arxiv.org/abs/2006.14715.
[37] K. Panetta, Y. Zhou, S. Agaian, and H. Jia, "Nonlinear unsharp masking for mammogram enhancement," IEEE Transactions on Information Technology in Biomedicine, vol. 15, no. 6, pp. 918-928, 2011.
[38] U. Kuran and E. C. Kuran, "Parameter selection for CLAHE using multi-objective cuckoo search algorithm for image contrast enhancement," Intelligent Systems with Applications, vol. 12, article 200051, 2021.
[39] H. Raj and D. K. Vishwakarma, "Detection of COVID-19 in chest X-ray image using convolutional neural network," in 2021 2nd Global Conference for Advancement in Technology (GCAT), pp. 1-5, Bangalore, India, 2021.
[40] B. K. Umri, E. Utami, and M. P. Kurniawan, "Comparative analysis of CLAHE and AHE on application of CNN algorithm in the detection of COVID-19 patients," in 2021 4th