Machine Learning and the Internet of Things in Education
Models and Applications
Studies in Computational Intelligence
Volume 1115
Series Editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Computational Intelligence” (SCI) publishes new develop-
ments and advances in the various areas of computational intelligence—quickly and
with a high quality. The intent is to cover the theory, applications, and design methods
of computational intelligence, as embedded in the fields of engineering, computer
science, physics and life sciences, as well as the methodologies behind them. The
series contains monographs, lecture notes and edited volumes in computational
intelligence spanning the areas of neural networks, connectionist systems, genetic
algorithms, evolutionary computation, artificial intelligence, cellular automata, self-
organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems.
Of particular value to both the contributors and the readership are the short publica-
tion timeframe and the world-wide distribution, which enable both wide and rapid
dissemination of research output.
Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago.
All books published in the series are submitted for consideration in Web of Science.
John Bush Idoko · Rahib Abiyev
Editors
Machine Learning
and the Internet of Things
in Education
Models and Applications
Editors

John Bush Idoko
Department of Computer Engineering
Near East University
Nicosia, Cyprus

Rahib Abiyev
Department of Computer Engineering
Near East University
Nicosia, Cyprus
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
This book showcases several machine learning techniques and Internet of Things
technologies, particularly for learning purposes. The techniques and ideas demonstrated
in this book can be explored by researchers, teachers, and students to
ease their learning and research goals in the fields of Artificial
Intelligence (AI) and the Internet of Things (IoT).
The AI and IoT technologies enumerated and demonstrated in this book will
enable systems to be simulative, predictive, prescriptive, and autonomous, and, inter-
estingly, the integration of these technologies can further enhance emerging appli-
cations from being assisted to augmented, and ultimately to self-operating smart
systems. This book focuses on the design and implementation of algorithmic appli-
cations in the field of artificial intelligence and Internet of Things with pertinent
applications. The book further depicts the challenges and understanding of the role
of technology in the fields of machine learning and Internet of Things for teaching,
learning, and research purposes.
The theoretical and practical applications of AI techniques and IoT technologies
featured in the book include, but are not limited to: different algorithmic and practical
aspects of AI techniques and IoT technologies for scientific problem diagnosis and
recognition, medical diagnosis, e-health, e-learning, e-governance, blockchain technologies,
optimization and prediction, industrial and smart office/home automation,
supervised and unsupervised machine learning for IoT data and devices, etc.
John Bush Idoko graduated from Benue State University, Makurdi, Nigeria, where
he obtained a B.Sc. degree in Computer Science in 2010. He then started the M.Sc. program in
Computer Engineering at Near East University, North Cyprus. After receiving the M.Sc.
degree in 2017, he started the Ph.D. program in the same department in the same year.
During his postgraduate programs at Near East University, he worked as a Research
Assistant in the Applied Artificial Intelligence Research Centre. He obtained his
Ph.D. in 2020 and is currently an Assistant Professor in the Department of Computer
Engineering, Near East University, Cyprus. His research interests include, but are not
limited to: AI, machine learning, deep learning, computer vision, data analysis, soft
computing, advanced image processing, and bioinformatics.
Rahib Abiyev received the B.Sc. and M.Sc. degrees (First Class Hons.) in Electrical
and Electronic Engineering from Azerbaijan State Oil Academy, Baku, in
1989, and the Ph.D. degree in Electrical and Electronic Engineering from the Computer-Aided
Control System Department of the same university, in 1997. He was a Senior
Researcher with the research laboratory "Industrial Intelligent Control Systems" of the
Computer-Aided Control System Department. In 1999, he joined the Department
of Computer Engineering, Near East University, Nicosia, North Cyprus, where he
is currently a Full Professor and the Chair of the Computer Engineering Department.
In 2001, he founded the Applied Artificial Intelligence Research Centre, and in 2008,
he created the "Robotics" research group. He is currently the Director of the Research
Centre. He has published over 300 papers on related subjects. His current research
interests include soft computing, control systems, robotics, and signal processing.
Introduction to Machine Learning
and IoT
In 1997, the reigning world chess champion was defeated by IBM's chess computer Deep Blue in two out of six games,
with the champion winning one and the other three games ending in draws [2]. Apple
unveiled Siri as a digital assistant in 2011 [2]. OpenAI was launched in 2015 by Elon
Musk and associates [3, 4].
According to John McCarthy, one of the founding fathers of AI, the science
and engineering of artificial intelligence is the development of intelligent devices,
particularly intelligent computer programs. Artificial intelligence is a method for making
a computer, a computer-controlled robot, or a piece of software think intelligently,
much as an intelligent human might. To develop intelligent software and systems, it is
essential first to comprehend how the human brain functions, as well as how individuals
learn, make decisions, and collaborate to solve problems [5].
Some of the objectives of AI are: (1) to develop expert systems that behave intelligently,
learn, demonstrate, explain, and provide their users with guidance; and (2)
to add human intelligence to machines so that they comprehend, think, learn,
and act like people. Artificial intelligence is a science and technology based
on fields such as computer science, engineering, mathematics, biology, linguistics, and
psychology. One of its major focuses is the development of computer abilities akin to
human intelligence, such as problem-solving, learning, and reasoning.
Machine learning, deep learning, and other areas are among the many subfields
of AI, which is a broad and expanding field [6–24]. Figure 1 illustrates the nested
subsets of artificial intelligence.
In a nutshell, machine learning is the idea that computers can use algorithms to
improve their creativity and predictions such that they more closely mimic human
thought processes [7]. Figure 2 shows a typical machine learning model learning
process.
Machine learning involves a number of learning processes such as:
a. Supervised learning: Machines/robots are made to learn through supervised
learning, which involves feeding them with labelled data. By providing machines
with access to a vast amount of data and training them to interpret it, machines are
being trained in this process [8–14]. For example, the computer is presented with
a variety of images of dogs shot from numerous perspectives with various color
variations, breeds, and many other varieties. As the machine learns to analyze the data
from these various dog images, its "insight" grows. Eventually, the machine will be
able to predict whether a given image shows a dog, even from a completely different
image that was not included in the labelled dataset of dog images it was fed earlier.
b. Unsupervised learning: Unsupervised learning algorithms, in contrast to supervised
learning, evaluate data that has not been assigned a label. In this scenario, we are
teaching the computer to interpret and learn from a series of data whose meaning is
not apparent to the human eye. The computer searches for patterns in the data and
makes its own decisions based on those patterns. It is important to note that the
findings reached here are generated by the computer from an unlabelled dataset.
c. Reinforcement learning: Reinforcement learning is a machine learning approach
that depends on feedback. In this method, the machine is fed a set of data and
asked to predict what it might be. If it draws an incorrect conclusion from the
incoming data, the machine receives feedback about its error. For example, if it
is given an image of a basketball and erroneously identifies it as a tennis ball
or something else, this feedback helps it learn, so that it eventually recognizes
an image of a basketball even in a completely different picture.
d. On the other hand, deep learning is the idea that computers can mimic the steps a
human brain takes to reason, evaluate, and learn. A neural network is used in the
deep learning process as a component of an AI’s thought process. Deep learning
requires a significant amount of data to be trained, as well as a very powerful
processing system.
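The supervised-learning idea above can be sketched with a minimal example: a toy nearest-neighbour classifier that learns from labelled points and predicts the label of an unseen one. This is a hypothetical illustration with made-up data, not code from the chapter:

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# The training data and labels below are hypothetical toy values.

def predict(train, point):
    """Return the label of the labelled example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

# Labelled training data: (features, label)
train = [((1.0, 1.0), "dog"), ((1.2, 0.8), "dog"),
         ((8.0, 9.0), "cat"), ((9.1, 8.5), "cat")]

print(predict(train, (1.1, 0.9)))  # a point near the "dog" cluster -> dog
```

The unlabelled variant of the same data would be the starting point for unsupervised learning: the algorithm would have to discover the two clusters on its own.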
Application areas of AI:
4 J. B. Idoko and R. Abiyev
2 Internet of Things
In the Internet of Things, computing can be done whenever and wherever you want.
In other terms, the Internet of Things (IoT) is a network of interconnected objects
("things") that are embedded with sensors, actuators, software, and other technologies
in order to connect and exchange data with other objects over the internet
[15]. IoT, as seen in Fig. 3, is the nexus of the internet, devices, and data.
In 2020, there were 16.5 billion connected things globally, excluding computers
and portable electronic devices (such as smartphones and tablets). IoT gathers such
information from the numerous sensors embedded in vehicles, refrigerators, space-
craft, etc. There is enormous potential for creative IoT applications across a wide
range of sectors as sensors become more ubiquitous.
Components of IoT system:
a. Sensor: a linked device that enables the sensing of the scenario’s or controlled
environment’s physical properties, whose values are converted to digital data.
b. Actuator: a linked gadget that makes it possible to take action within a given
environment.
c. Controller: a connected device implementing an algorithm to transform input
data into actions.
d. Smart things: Sensors, actuators, and controllers work together to create
digital devices that provide service functions (potentially implemented by local/
distributed execution platforms and M2M/Internet communications).
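The interaction of these components can be sketched as a simple sensor → controller → actuator loop. This is a hypothetical illustration with made-up threshold values; a real IoT controller would read hardware sensors rather than a Python function:

```python
# Toy sensor -> controller -> actuator loop (hypothetical values).

def sensor_read(temperature_c):
    """Sensor: convert a physical property to digital data."""
    return round(temperature_c, 1)

def controller(reading, threshold=25.0):
    """Controller: map input data to an action via a simple rule."""
    return "cooling_on" if reading > threshold else "cooling_off"

def actuator(action):
    """Actuator: carry out the chosen action in the environment."""
    return f"actuator executed: {action}"

reading = sensor_read(27.36)
print(actuator(controller(reading)))  # -> actuator executed: cooling_on
```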
Application areas of IoT include: automated transport systems, smart security
cameras, smart farming, thermostats, smart televisions, baby monitors, children’s
toys, refrigerators, automatic light bulbs, and many more.
IoT and AI have recently experienced exponential growth. These fields are going
to be so significant and influential that they will significantly alter and improve the
society we live in. We cannot even begin to fathom how enormous and influential
they will be in the near future. With AI and its rapidly expanding applications in
our daily lives, there is still a lot to learn. It would be wise to adjust to this rapidly
changing world and acquire AI- and IoT-related skills. To improve this world, we
should learn and grow in the same ways that AI does.
The use of AI and IoT in education can be very beneficial. They could be used to
analyze data on individuals' perspectives, capabilities, preferences, and shortcomings
in order to create curricula, tactics, and schedules that are appealing, well suited,
and inclusive of the majority, if not all, adults and children. Future modes of
transportation will also change as a result of AI applications. In addition to
self-driving automobiles, self-flying planes and drones that conveniently deliver your
meals faster and better are being developed. The fear of automation replacing
jobs is one of the main AI-related worries. However, it is possible that AI will create
more employment opportunities than it replaces; by creating new job categories, it will
alter how people work.
References
20. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convo-
lutional neural network for people with disabilities. In 13th international conference on theory
and application of fuzzy systems and soft computing—ICAFS-2018 13 (pp. 239–248). Springer
International Publishing.
21. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney
diseases. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Trans-
formation: Proceedings of the INFUS 2021 Conference, held August 24–26, 2021 (Vol. 2,
pp. 273–280). Springer International Publishing.
22. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020, August). Traffic
warning system for wildlife road crossing accidents using artificial intelligence. In International
Conference on Transportation and Development 2020 (pp. 194–203). Reston, VA: American
Society of Civil Engineers.
23. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022,
November). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th Information
Technology for Education and Development (ITED) (pp. 1–5). IEEE.
24. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis
of erythemato-squamous diseases. In Proceedings of the 13th International Conference on
Theory and Application of Fuzzy Systems and Soft Computing (ICAFS-2018) (Vol. 10, pp. 978–
3).
Deep Convolutional Network for Food
Image Identification
Abstract Food plays an integral role in human survival, and it is crucial to monitor
our food intake to maintain good health and well-being. As mobile applications
for tracking food consumption become increasingly popular, having a precise and
efficient food classification system is more important than ever. This study presents
an optimized food image recognition model known as FRCNN, which employs a
convolutional neural network implemented in Python’s Keras library without relying
on transfer learning architecture. The FRCNN model underwent training on the Food-
101 dataset, comprising 101,000 images of 101 food classes, with a 75:25 training-
validation split. The results indicate that the model achieved a testing accuracy of
92.33% and a training accuracy of 96.40%, outperforming the baseline model that
used transfer learning on the same dataset by 8.12%. To further evaluate the model’s
performance, we randomly selected 15 images from 15 different food classes in the
Food-101 dataset and achieved an overall accuracy of 94.11% on these previously
unseen images. Additionally, we tested the model on the MA Food dataset, consisting
of 121 food classes, and obtained a training accuracy of 95.11%. These findings
demonstrate that the FRCNN model is highly precise and capable of generalizing
well to unseen images, making it a promising tool for food image classification.
1 Introduction
Food is a vital component of our daily lives as it provides the body with essen-
tial nutrients and energy to perform basic functions, such as maintaining a healthy
immune system and repairing cells and tissues. Given its significance in health-related
issues, food monitoring has become increasingly important [5]. Unhealthy eating
habits may lead to the development of chronic diseases such as obesity, diabetes,
and hypercholesterolemia. According to the World Health Organization (WHO), the
global prevalence of obesity more than doubled between 1980 and 2014, with 13% of
individuals being obese and 39% of adults overweight. Obesity may also contribute
to other conditions, such as osteoarthritis, asthma, cancer, diabetes mellitus type 2,
obstructive sleep apnea, and cardiovascular disorders [4]. This is why experts have
stressed the importance of accurately assessing food intake in reducing the risks
associated with developing chronic illnesses. Hence, there is a need for a highly accu-
rate and optimized food image recognition system. This system involves training a
computer to recognize and classify food items using one or more combinations of
machine learning algorithms.
Food image recognition is a complex problem that has attracted much interest
from the scientific community, prompting researchers to devise various models
and methods to tackle it. Although food recognition is still considered challenging
due to the need for models that can handle visual data and higher-level semantics,
researchers have made progress in developing effective techniques for food image
classification. One of the earliest methods used for this task was Fisher Vector, which
employs the Fisher kernel to analyse the visual characteristics of food images at a local
level. The Fisher kernel uses a generative model, such as the Gaussian Mixture Model,
to encode the deviation of a sample from the model into a unique Fisher Vector that
can be used for classification. Another technique is the bag of visual words (BOW)
representation, which uses vector quantization of affine invariant descriptors of image
patches. Additionally, Matsuda et al. [15] proposed a comprehensive approach for
identifying and classifying food items in an image that involves using multiple tech-
niques to identify potential food regions, extract features, and apply multiple-kernel
learning with non-linear kernels for image classification. Bossard et al. [6] introduced
a new benchmark dataset called Food-101 and proposed a method called random
forest mining that learns across multiple food classes. Their approach outperformed
other methods such as BOW, IFV, RF, and RCF, except for CNN, according to their
experimentation results.
Over the years, these techniques have been successful in food image classifica-
tion and identification tasks. However, with the progress in computer vision, machine
learning, and enhanced processing speed, image recognition has undergone a trans-
formation [7, 14, 18]. In current literature, deep learning algorithms, especially CNN,
have been extensively used for this task due to their unique properties, such as sparse
interaction, parameter sharing, and equivariant representation. As a result, CNN has
become a popular method for analysing large image datasets, including food images,
and has demonstrated exceptional accuracy [9, 10, 13, 16, 17].
The use of CNN in food image classification has shown significant progress in
recent years [1–3, 18, 20]. Researchers have achieved high accuracy rates using pre-
trained models, such as AlexNet and EfficientNetB0, as well as through the develop-
ment of novel deep CNN algorithms. These approaches have been tested on various
datasets, including the UEC-Food100, UEC-Food256, and Food-101 datasets. DeepFood,
developed by Liu et al. [13], achieved a 76.30% accuracy rate on the UEC-Food100
dataset, while Hassannejad et al. [9] outperformed this with an accuracy of 81.45% on the
same dataset using Google's Inception V3 architecture. Mezgec and Koroušić Seljak [16]
modified the well-known AlexNet structure to create NutriNet, which achieved a
classification accuracy of 86.72% on over 520 food and beverage categories. Similarly,
Kawano and Yanai [11] used a pre-trained model similar to the AlexNet architecture
and were able to achieve an accuracy of 72.26%, while Christodoulidis et al. [8]
introduced a novel deep CNN algorithm that obtained 84.90% accuracy on a custom
dataset. Finally, VijayaKumari et al. [19] achieved the best accuracy of 80.16% using
the pre-trained EfficientNetB0 model, which was trained on the Food-101 dataset.
The studies mentioned above have shown that CNN has immense potential for
accurately classifying food images, which could have numerous practical applica-
tions, such as dietary monitoring and meal tracking. However, while these findings
are promising, there is still ample room for improvement, and the primary aim of this
research is to propose a highly accurate and optimized model for food image recog-
nition and classification. In this paper, we introduce a new CNN architecture called
FRCNN that is specifically designed for food recognition. Our proposed system
boasts high precision and greater robustness for different food databases, making it
a valuable tool for real-world applications in the field of food image recognition.
Here is how this paper is structured: in Sect. 2, we describe the methodology we
used to develop our food recognition system. In Sect. 3, we provide the details of the
FRCNN design and architecture, including the dataset and proposed model structure.
We also provide an overview of the simulation results. Finally, in Sect. 4, we present
the conclusion of our work.
2 CNN Architecture
Convolutional neural networks (CNNs) are a type of deep artificial neural network
used for tasks like object detection and identification in grid-patterned input such as
images. CNNs have a similar structure to ANNs, with a feedforward architecture that
splits nodes into layers, and the output is passed on to the next layer. They use back-
propagation to learn and update weights, which reduces the loss function and error
margin. CNNs see images as a grid-like layout of pixels, and their layers detect basic
patterns like lines and curves before advancing to more complex ones. CNNs are
commonly used in computer vision research due to features like sparse interaction,
parameter sharing, and equivariant representation. Convolution, pooling, and fully
connected layers make up most CNNs, with feature extraction typically taking place in
the convolution and pooling layers, and the outcome mapped into the fully-connected layer (Fig. 1).
One of the most essential layers in CNNs is the convolution layer, which applies
filters to the input image to extract features like edges and corners. The output of this
layer is a feature map that is passed to the next layer for further processing. Also, the
pooling layer is a layer in a convolutional neural network, which is responsible for
reducing the spatial dimensions of the feature maps generated by the convolutional
layer, thus reducing the computational complexity of the network. Pooling can be
12 R. Abiyev and J. Adepoju
performed using different techniques such as max pooling, sum pooling or average
pooling. The final layer in a typical CNN is the fully connected or dense layer, which
takes the output of the convolution and pooling layers and performs classification
using an activation function such as the SoftMax to generate the probability distribu-
tion over the different classes. The dense layer connects all the nodes in the previous
layer to every node in the current layer, making it a computationally intensive part
of the network. By combining these layers, CNNs can extract complex features from
images and achieve high accuracy in tasks like object detection and classification [2].
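As an illustration of the convolution and pooling operations described above, here is a minimal sketch in plain Python. The 4×4 "image" and the hand-picked edge filter are hypothetical toy values, not data or code from this chapter:

```python
# Toy convolution + max-pooling sketch on a 4x4 "image".

def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1]]            # responds to horizontal intensity changes
fmap = conv2d(image, edge)  # feature map highlighting the vertical edge
print(max_pool(fmap))       # -> [[1], [1]]
```

The filter produces high responses exactly where the intensity jumps from 0 to 1, and pooling keeps the strongest response per window; this is the "basic patterns first" behaviour described above.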
After determining the CNN's output signals, the learning of the network parameters θ
starts. A loss function is applied to train the CNN. The loss function can be represented
as:

L = (1/N) Σ_{i=1}^{N} l(θ; y^(i), o^(i))     (1)

where o^(i) and y^(i) are the current output and the target output signals, correspondingly.
Using the loss function, the unknown parameters θ are determined. With the use of training
examples consisting of input–output pairs {(x^(i), y^(i)); i ∈ [1, .., N]}, the learning
of the parameters θ is carried out to minimize the value of the loss function. For this
purpose, the Adam optimizer (Kingma & Ba, 2015) learning algorithm is used in the
paper. For the efficient training of a CNN, a large volume of training pairs is required.
In the paper, food image datasets are used for training of the CNN.
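For illustration, Eq. (1) with a cross-entropy per-example loss l can be computed directly. The predictions and targets below are hypothetical toy values, not results from the paper:

```python
import math

def cross_entropy(target, output, eps=1e-12):
    """Per-example loss l(theta; y, o) for a one-hot target y and predicted probabilities o."""
    return -sum(y * math.log(o + eps) for y, o in zip(target, output))

def mean_loss(targets, outputs):
    """Eq. (1): L = (1/N) * sum_i l(theta; y^(i), o^(i))."""
    n = len(targets)
    return sum(cross_entropy(y, o) for y, o in zip(targets, outputs)) / n

# Two toy examples over three classes (hypothetical values)
targets = [[1, 0, 0], [0, 1, 0]]
outputs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
print(round(mean_loss(targets, outputs), 4))  # -> 0.2899
```

An optimizer such as Adam would adjust θ to push this average loss toward zero over the training pairs.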
Bossard et al. [6] developed the Food-101 dataset, comprising pictures gathered
from foodspotting.com, an online platform that allows users to share photos of their
food along with its location and description. To create the dataset, the authors
selected the top 101 foods that were regularly labelled and popular on the website.
They chose 750 photos from each of the 101 food classes for training, and 250 images
for testing (Fig. 2).
The FRCNN model’s architecture is similar to the standard CNN design, with a layer
consisting of a convolution layer, batch normalization, another convolution layer,
batch normalization, and max pooling. The remaining five layers of the FRCNN
model have a similar structure, as depicted in Fig. 6. After the fourth layer’s max-
pooling, the model goes through flattening, a dense layer, batch normalization, a
dropout layer, two fully-connected layers, and finally the classification layer. The
architecture of the FRCNN model is illustrated in the diagram Fig. 3.
The proposed FRCNN model is presented in Table 1. During the development
of the FRCNN model, various factors were considered. Initially, the focus was on
extracting relevant data from the input data, which was achieved using convolutional
and pooling layers. This approach allowed the model to analyse images at varying
scales, which reduced dimensionality and helped to identify significant patterns.
Secondly, the FRCNN model was designed with efficiency in mind, and compu-
tational resources and memory usage were optimized by applying techniques like
weight sharing and data compression, along with fine-tuning the number of layers
and filters for optimal performance. Lastly, to ensure that the model generalizes well
and avoids overfitting, the training dataset used to train the model was of high quality,
and regularization methods were used. This approach enabled the FRCNN model to
achieve exceptional performance even with unseen data, making it an ideal tool for
object detection and recognition tasks.
The FRCNN model was trained on the Food-101 dataset, where the model’s
training was performed on the training subset, and its performance was subsequently
evaluated on the test subset. In evaluating the FRCNN model, the Food-101 dataset
was partitioned into 75% for training and 25% for testing purposes. The FRCNN
model’s performance was assessed using metrics such as accuracy, precision, recall,
and F1 score. Accuracy is determined by calculating the number of true positive, true
negative, false positive, and false negative predictions made by the model. Precision
and recall are measures of the model’s ability to correctly identify positive instances,
while F1 score combines both measures to evaluate the model’s overall performance.
A higher F1 score indicates better performance, with the model striking a balance
between precision and recall. The formulas for accuracy, precision, recall, and F1
score are given below:

Accuracy = (TP + TN) / (TP + TN + FP + FN)     (2)

Precision = TP / (TP + FP)     (3)

Recall = TP / (TP + FN)     (4)

F1 = 2 × (Precision × Recall) / (Precision + Recall)     (5)
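Eqs. (2)–(5) can be computed directly from confusion-matrix counts. The following sketch uses hypothetical counts, not the chapter's experimental results:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts (Eqs. 2-5)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * (precision * recall) / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a single food class
acc, prec, rec, f1 = metrics(tp=90, tn=85, fp=10, fn=15)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

Note that the F1 score, being the harmonic mean of precision and recall, is pulled toward the lower of the two, which is why it penalizes imbalanced models.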
3.3 Simulations
Table 2 shows the comparison of FRCNN with other models using CNN or
transfer-learning methodologies on the Food-101 dataset.
To test the FRCNN model's ability to perform well on new data, random food
images belonging to the Food-101 classes were downloaded from the internet; the
results are shown in Fig. 6.
Overall, these results show the ability of the FRCNN model to generalize well to
unseen data, making it suitable for food recognition tasks.
4 Conclusion
Food image recognition is a complex task that involves training a machine learning
model to identify and classify food images. A convolutional neural network (CNN)
architecture was used in this study to develop a food image classification model called
FRCNN. FRCNN has five layers, each with batch normalization, convolution, and
pooling operations to increase accuracy. The model was trained on Food-101, a dataset
comprising 101,000 images across 101 food classes. The performance of the system
was further improved by additional methods such as kernel regularization
and kernel initialization, and by pre-processing the data using the ImageDataGenerator
function. In the food classification task, the final model attained a training accuracy
of 96.40%, proving that deep CNNs can be built from scratch and perform just as well
as pretrained models.
Face Mask Recognition System-Based
Convolutional Neural Network
Abstract The use of face masks has been widely acknowledged as an effective
measure in preventing the spread of COVID-19. Scientists argue that face masks
act as a barrier, preventing virus-carrying droplets from reaching other individuals
when the wearer coughs or sneezes, which plays a crucial role in breaking the chain
of transmission. However, many people are reluctant to wear masks properly, and some
may not even be aware of the correct way to wear them. Manual inspection of a
large number of individuals, particularly in crowded places such as train stations,
theaters, classrooms, or airports, can be time-consuming, expensive, and prone to
bias or human error. To address this challenge, an automated, accurate, and reli-
able system is required. Such a system needs extensive data, particularly images, for
training purposes. The system should be capable of recognizing whether a person
is not wearing a face mask at all, wearing it improperly, or wearing it correctly. In
this study, we employ a convolutional neural network (CNN)-based architecture to
develop a face mask detection and recognition model. The model achieved an
accuracy of 97.25% in classifying individuals into the categories of wearing masks,
wearing them improperly, or not wearing masks at all. This automated system offers
a promising solution to efficiently monitor and enforce face mask usage in various
settings, contributing to public health and safety.
J. B. Idoko (B)
Applied Artificial Intelligence Research Centre, Department of Computer Engineering, Near East
University, Nicosia 99138, Turkey
e-mail: john.bush@neu.edu.tr
E. Simsek
Department of Computer Engineering, Near East University, Nicosia 99138, Turkey
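The abstract above describes a three-class CNN classifier. One practical detail when designing such a network is how the spatial size of the feature maps shrinks through successive convolution and pooling stages. The sketch below traces these sizes with the standard output-size formula; the 224×224 input and the layer stack are hypothetical, since the chapter's exact architecture is not given in the abstract:

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a square convolution or pooling layer."""
    return (size - kernel + 2 * padding) // stride + 1

def feature_map_sizes(input_size, layers):
    """Trace the spatial size through a list of (kernel, stride, padding) stages."""
    sizes = [input_size]
    for kernel, stride, padding in layers:
        sizes.append(conv2d_out(sizes[-1], kernel, stride, padding))
    return sizes

# Hypothetical stack: three blocks of a 3x3 conv (padding 1, size-preserving)
# followed by a 2x2 max-pool with stride 2 (halves the size).
stack = [(3, 1, 1), (2, 2, 0)] * 3
print(feature_map_sizes(224, stack))  # 224 -> 112 -> 56 -> 28
```

The final 28×28 map would then be flattened (or globally pooled) and fed to a dense layer with three output units, one per class: mask worn correctly, worn improperly, or absent.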