
Machine Learning and the Internet of Things in Education: Models and Applications

John Bush Idoko
Studies in Computational Intelligence 1115

John Bush Idoko · Rahib Abiyev
Editors

Machine Learning and the Internet of Things in Education
Models and Applications
Studies in Computational Intelligence

Volume 1115

Series Editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output.
Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago.
All books published in the series are submitted for consideration in Web of Science.
John Bush Idoko · Rahib Abiyev
Editors

Machine Learning
and the Internet of Things
in Education
Models and Applications
Editors

John Bush Idoko
Department of Computer Engineering
Near East University
Nicosia, Cyprus

Rahib Abiyev
Department of Computer Engineering
Near East University
Nicosia, Cyprus

ISSN 1860-949X   ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-031-42923-1 ISBN 978-3-031-42924-8 (eBook)
https://doi.org/10.1007/978-3-031-42924-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Paper in this product is recyclable.


Preface

This book showcases several machine learning techniques and Internet of Things technologies, particularly for learning purposes. The techniques and ideas demonstrated here can be explored by researchers, teachers, and students to ease their learning and research exercises and goals in the fields of Artificial Intelligence (AI) and the Internet of Things (IoT).
The AI and IoT technologies enumerated and demonstrated in this book enable systems to be simulative, predictive, prescriptive, and autonomous; interestingly, integrating these technologies can further advance emerging applications from assisted, to augmented, and ultimately to self-operating smart systems. The book focuses on the design and implementation of algorithmic applications in the fields of artificial intelligence and the Internet of Things, with pertinent applications. It further depicts the challenges and the role of technology in machine learning and the Internet of Things for teaching, learning, and research purposes.
The theoretical and practical applications of AI techniques and IoT technologies featured in the book include, but are not limited to: algorithmic and practical aspects of AI techniques and IoT technologies for scientific problem diagnosis and recognition, medical diagnosis, e-health, e-learning, e-governance, blockchain technologies, optimization and prediction, industrial and smart office/home automation, and supervised and unsupervised machine learning for IoT data and devices.

Nicosia, Cyprus

John Bush Idoko
Rahib Abiyev

Contents

Introduction to Machine Learning and IoT . . . . . . . . . . . . . . . . . . . . . . . . . . 1
John Bush Idoko and Rahib Abiyev
Deep Convolutional Network for Food Image Identification . . . . . . . . . . . . 9
Rahib Abiyev and Joseph Adepoju
Face Mask Recognition System-Based Convolutional Neural
Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
John Bush Idoko and Emirhan Simsek
Fuzzy Inference System Based-AI for Diagnosis of Esophageal
Cancer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
John Bush Idoko and Mohammed Jameel Sadeq
Skin Detection System Based Fuzzy Neural Networks for Skin
Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Idoko John Bush and Rahib Abiyev
Machine Learning Based Cardless ATM Using Voice Recognition
Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
John Bush Idoko, Mansur Mohammed,
and Abubakar Usman Mohammed
Automated Classification of Cardiac Arrhythmias . . . . . . . . . . . . . . . . . . . . 85
John Bush Idoko
A Fuzzy Logic Implemented Classification Indicator
for the Diagnosis of Diabetes Mellitus in TRNC . . . . . . . . . . . . . . . . . . . . . . 101
Cemal Kavalcıoğlu
Implementation and Evaluation of a Mobile Smart School
Management System—NEUKinderApp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
John Bush Idoko


The Emerging Benefits of Gamification Techniques . . . . . . . . . . . . . . . . . . . 131
John Bush Idoko
A Comprehensive Review of Virtual E-Learning System Challenges . . . . 141
John Bush Idoko and Joseph Palmer
A Semantic Portal to Improve Search on Rivers State’s
Independent National Electoral Commission . . . . . . . . . . . . . . . . . . . . . . . . . 153
John Bush Idoko and David Tumuni Ogolo
Implementation of Semantic Web Service and Integration
of e-Government Based Linked Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
John Bush Idoko and Bashir Abdinur Ahmed
Application of Zero-Trust Networks in e-Health Internet of Things
(IoT) Deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Morgan Morgak Gofwen, Bartholomew Idoko, and John Bush Idoko
IoT Security Based Vulnerability Assessment of E-learning Systems . . . . 235
Bartholomew Idoko and John Bush Idoko
Blockchain Technology, Artificial Intelligence, and Big Data
in Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Ramiz Salama and Fadi Al-Turjman
Sustainable Education Systems with IOT Paradigms . . . . . . . . . . . . . . . . . . 255
Ramiz Salama and Fadi Al-Turjman
Post Covid Era-Smart Class Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Kamil Dimililer, Ezekiel Tijesunimi Ogidan,
and Oluwaseun Priscilla Olawale
About the Editors

John Bush Idoko graduated from Benue State University, Makurdi, Nigeria, where he obtained a B.Sc. degree in Computer Science in 2010. He then started an M.Sc. program in Computer Engineering at Near East University, North Cyprus. After receiving the M.Sc. degree in 2017, he started a Ph.D. program in the same department that year. During his postgraduate programs at Near East University, he worked as a Research Assistant in the Applied Artificial Intelligence Research Centre. He obtained his Ph.D. in 2020 and is currently an Assistant Professor in the Computer Engineering Department, Near East University, Cyprus. His research interests include, but are not limited to: AI, machine learning, deep learning, computer vision, data analysis, soft computing, advanced image processing, and bioinformatics.

Rahib Abiyev received the B.Sc. and M.Sc. degrees (First Class Hons.) in Electrical and Electronic Engineering from Azerbaijan State Oil Academy, Baku, in 1989, and the Ph.D. degree in Electrical and Electronic Engineering from the Computer-Aided Control System Department of the same university in 1997. He was a Senior Researcher with the research laboratory “Industrial Intelligent Control Systems” of the Computer-Aided Control System Department. In 1999, he joined the Department of Computer Engineering, Near East University, Nicosia, North Cyprus, where he is currently a Full Professor and the Chair of the Computer Engineering Department. In 2001, he founded the Applied Artificial Intelligence Research Centre, and in 2008 he created the “Robotics” research group. He is currently the Director of the Research Centre. He has published over 300 papers on related subjects. His current research interests include soft computing, control systems, robotics, and signal processing.

Introduction to Machine Learning
and IoT

John Bush Idoko and Rahib Abiyev

Abstract Smart systems built on machine learning and Internet of Things technologies have the ability to reason, calculate, learn from experience, perceive relationships and analogies, store and retrieve data from memory, understand complicated concepts, solve problems, speak fluently in plain language, generalize, classify, and adapt to changing circumstances. To make decisions and build smart environments, smart systems combine sensing, actuation, signal processing, and
control. The Internet of Things (IoT) is being developed significantly as a result of
the real-time networked information and control that smart systems provide. Smart
systems are the next generation of computing and information systems, combining
artificial intelligence (AI), machine learning, edge/cloud computing, cyber-physical
systems, big data analytics, pervasive/ubiquitous computing, and IoT technologies.
Recent years have brought along some significant hurdles for smart systems due to
the wide variety of AI applications, IoT devices, and technology. A few of these
challenges include the development and deployment of integrated smart systems and
the effective and efficient use of computing technologies.

Keywords Artificial intelligence · Machine learning · Deep learning · Neural


networks · Internet of things

1 Artificial Intelligence (AI)

The goal of artificial intelligence (AI), a subfield of computer science, is to build machines or computers that are as intelligent as people. Artificial intelligence is not as new as we might imagine: it dates back at least to 1950, when Alan Turing created the Turing test. In the 1960s, ELIZA, the first chatbot computer program, was developed [1]. A world chess champion was
J. B. Idoko (B) · R. Abiyev


Applied Artificial Intelligence Research Centre, Department of Computer Engineering, Near East
University, Nicosia 99138, Turkey
e-mail: john.bush@neu.edu.tr

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. B. Idoko and R. Abiyev (eds.), Machine Learning and the Internet of Things
in Education, Studies in Computational Intelligence 1115,
https://doi.org/10.1007/978-3-031-42924-8_1

defeated by the chess computer IBM Deep Blue in 1997; Deep Blue won two of the six games, the champion won one, and the other three ended in draws [2]. Apple unveiled Siri as a digital assistant in 2011 [2]. OpenAI was launched in 2015 by Elon Musk and associates [3, 4].
According to John McCarthy, one of the founding fathers of AI, artificial intelligence is “the science and engineering of making intelligent machines, especially intelligent computer programs”. Artificial intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, much as an intelligent person might. To develop intelligent software and systems, it is essential to first comprehend how the human brain functions and how individuals learn, make decisions, and collaborate to solve problems [5].
Some of the objectives of AI are: (1) to develop expert systems that behave intelligently, learn, demonstrate, explain, and provide their users with guidance; and (2) to add human intelligence to machines so that they comprehend, think, learn, and act like people. Artificial intelligence is a science and technology based on fields such as computer science, engineering, mathematics, biology, linguistics, and psychology. One of its major focuses is the development of computer abilities akin to human intelligence, such as problem-solving, learning, and reasoning.
Machine learning, deep learning, and other areas are among the many subfields
of AI, which is a broad and expanding field [6–24]. Figure 1 illustrates a transitive
subset of artificial intelligence.
In a nutshell, machine learning is the idea that computers can use algorithms to improve their creativity and predictions so that they more closely mimic human thought processes [7]. Figure 2 shows a typical machine learning model's learning process.
Machine learning involves a number of learning processes such as:
a. Supervised learning: Machines are made to learn through supervised learning, which involves feeding them labelled data.

Fig. 1 Subset of AI

Fig. 2 Learning process of a machine learning model

By providing machines with access to a vast amount of data and training them to interpret it, machines learn in this process [8–14]. For example, the computer is presented with a variety of images of dogs shot from numerous perspectives, with various color variations, breeds, and many other varieties. As the machine learns to analyze the data from these various dog images, its “insight” grows. Eventually, the machine will be able to predict whether a given image is a dog, even from a completely different image that was not included in the labelled dataset of dog images it was fed earlier.
b. Unsupervised learning: In contrast to supervised learning, unsupervised learning algorithms evaluate data that has not been assigned a label. In this scenario, we are teaching the computer to interpret and learn from data whose meaning is not apparent to the human eye. The computer searches for patterns in the data and makes its own decisions based on those patterns. It is important to note that the conclusions reached here are generated by the computer from an unlabelled dataset.
c. Reinforcement learning: Reinforcement learning is a machine learning approach that depends on feedback. The machine is fed a set of data and asked to predict what it might be. If it draws an incorrect conclusion from the incoming data, the machine receives feedback about its error. For example, if you give it an image of a basketball and it erroneously identifies the basketball as a tennis ball or something else, the feedback teaches it, so that when it later encounters a completely different image of a basketball it recognizes it correctly.
d. Deep learning, on the other hand, is the idea that computers can mimic the steps a human brain takes to reason, evaluate, and learn. A neural network is used in the deep learning process as a component of an AI's thought process. Deep learning requires a significant amount of training data as well as a very powerful processing system.
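The paradigms above can be sketched with toy data. The following Python snippet is purely illustrative (the functions and data points are invented for this example): a one-nearest-neighbour rule stands in for supervised learning, and a simple two-means clustering stands in for unsupervised learning; reinforcement learning is omitted for brevity.

```python
# Toy sketch of two learning paradigms (illustrative only).

def nearest_neighbor_predict(labeled, point):
    """Supervised: predict the label of the closest labelled example."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(labeled, key=lambda item: dist(item[0], point))[1]

def two_means(points, iters=10):
    """Unsupervised: split unlabelled 1-D points into two clusters."""
    c1, c2 = points[0], points[-1]            # initial centroids
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)                # recompute centroids
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

# Supervised: labelled "dog" vs "cat" feature points
labeled = [((1.0, 1.0), "dog"), ((1.2, 0.8), "dog"), ((5.0, 5.0), "cat")]
print(nearest_neighbor_predict(labeled, (0.9, 1.1)))   # -> dog

# Unsupervised: structure found without any labels
print(two_means([1.0, 1.2, 0.9, 8.0, 8.3, 7.9]))
```

In practice, libraries such as scikit-learn provide production-grade implementations of both kinds of algorithm.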
Application areas of AI:

a. Expert Systems: These applications combine hardware, software, and specialized data to convey reasoning and advice. They offer users explanations and recommendations.
b. Speech Recognition: Certain intelligent systems are able to hear and understand language as people speak it, including the meanings of sentences. They can manage a variety of accents, background noise, slang, changes in the human voice brought on by a cold, etc.
c. Gaming: In strategic games such as chess, tic-tac-toe, poker, etc., where machines
may consider numerous probable locations based on heuristic knowledge, AI
plays a key role.
d. Natural Language Processing: Makes it possible to communicate with a computer
that can understand human natural language.
e. Handwriting Recognition: The text written with a pen on paper or a stylus on
a screen is read by the handwriting recognition software. It can change it into
editable text and recognize the letter shapes.
f. Intelligent Robots: Robots can complete the jobs that humans assign to them. They are equipped with sensors that detect physical data from the real world in real time, including light, heat, temperature, movement, sound, bumps, and pressure. To demonstrate intelligence, they have powerful processors, numerous sensors, and a large amount of memory. They also have the capacity to learn from their mistakes and adapt to new surroundings.
g. Vision Systems: These systems can recognize, interpret, and comprehend visual input. Examples include using a spy plane's images to create a map or spatial information, doctors using clinical expert systems to diagnose patients, and law enforcement using computer software to identify criminals from stored portraits created by forensic artists.

2 Internet of Things

In the Internet of Things, computing can be done whenever and wherever needed. In other terms, the Internet of Things (IoT) is a network of interconnected objects (“things”) embedded with sensors, actuators, software, and other technologies in order to connect and exchange data with other things over the internet [15]. IoT, as seen in Fig. 3, is the nexus of the internet, devices, and data.
In 2020, there were 16.5 billion connected things globally, excluding computers
and portable electronic devices (such as smartphones and tablets). IoT gathers such
information from the numerous sensors embedded in vehicles, refrigerators, space-
craft, etc. There is enormous potential for creative IoT applications across a wide
range of sectors as sensors become more ubiquitous.
Components of IoT system:
a. Sensor: a linked device that enables the sensing of the scenario’s or controlled
environment’s physical properties, whose values are converted to digital data.

Fig. 3 IoT

b. Actuator: a linked device that makes it possible to take action within a given environment.
c. Controller: a connected device implementing an algorithm to transform input data into actions.
d. Smart things: sensors, actuators, and controllers work together to create digital devices that provide service functions (potentially implemented by local/distributed execution platforms and M2M/Internet communications).
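To make the component roles concrete, the toy Python sketch below models a thermostat loop in which a sensor feeds readings to a controller that drives a heater actuator. All class and function names are invented for this illustration; real IoT stacks would add networking, persistence, and safety logic.

```python
# Hypothetical sensor -> controller -> actuator loop (illustrative only).

class TemperatureSensor:
    """Sensor: converts a physical property into digital data."""
    def __init__(self, readings):
        self.readings = iter(readings)
    def read(self):
        return next(self.readings)

class Heater:
    """Actuator: takes action within the environment."""
    def __init__(self):
        self.on = False
    def set_state(self, on):
        self.on = on

def controller(sensor, actuator, setpoint, steps):
    """Controller: an algorithm transforming sensed input into actions."""
    log = []
    for _ in range(steps):
        temp = sensor.read()
        actuator.set_state(temp < setpoint)   # simple on/off control
        log.append((temp, actuator.on))
    return log

log = controller(TemperatureSensor([18.0, 19.5, 21.0]), Heater(),
                 setpoint=20.0, steps=3)
print(log)   # heater is on below the 20-degree setpoint, off above it
```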
Application areas of IoT include: automated transport systems, smart security cameras, smart farming, thermostats, smart televisions, baby monitors, children's toys, refrigerators, automatic light bulbs, and many more.

3 Conclusion and the Future of AI and IoT

IoT and AI have recently experienced exponential growth. These fields will be so significant and influential that they will substantially alter and improve the society we live in; we cannot even begin to fathom how enormous and influential they will become in the near future. With AI and its rapidly expanding applications in our daily lives, there is still a lot to learn. It would be wise to adjust to this rapidly changing world and acquire AI- and IoT-related skills. To improve this world, we should learn and grow in the same ways that AI does.
The use of AI and IoT in education can be very beneficial. They could be used to analyze datasets on individuals' perspectives, capabilities, preferences, and shortcomings in order to create curricula, tactics, and schedules that are appealing, well suited, and inclusive of most, if not all, adults and children. Future modes of transportation will also change as a result of AI applications: in addition to self-driving automobiles, self-flying planes and drones that conveniently deliver your meals faster and better are being developed. The fear of automation replacing jobs is one of the main AI-related worries, but it is possible that AI will create more employment opportunities than it replaces. By creating new job categories, this will alter how people work.

References

1. Retrieved February 27, 2023, from https://news.harvard.edu/gazette/story/2012/09/alan-turing-at-100/
2. Retrieved March 01, 2023, from https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
3. Retrieved March 03, 2023, from https://openai.com/blog/introducing-openai/0
4. Retrieved March 03, 2023, from https://www.bbc.com/news/technology-35082344
5. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of
epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.
6. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep
convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
7. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification
of breast tissue. Procedia Computer Science, 120, 402–410.
8. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature
review on machine learning and student performance prediction: Critical gaps and possible
remedies. Applied Sciences, 11(22), 10907.
9. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential
diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97.
10. Ma’aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver
disorder using fuzzy neural system. International Journal of Advanced Computer Science
and Applications, 8(12).
11. Bush, I. J., Abiyev, R., Ma’aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial
intelligence algorithm for skin detection. In ITM Web of conferences (Vol. 16, p. 02004). EDP
Sciences.
12. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand
gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252.
13. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022, May). Appli-
cation of deep learning in structural health management of concrete structures. In Proceedings
of the Institution of Civil Engineers-Bridge Engineering (pp. 1–8). Thomas Telford Ltd.
14. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of
myocardial infarction. International Journal of Advanced Computer Science and Applications,
8(6). https://doi.org/10.14569/IJACSA.2017.080622
15. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms
for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.
16. Dimililer, K., & Bush, I. J. (2017, September). Automated classification of fruits: pawpaw fruit
as a case study. In Man-machine interactions 5: 5th international conference on man-machine
interactions, ICMMI 2017 Held at Kraków, Poland, October 3–6, 2017 (pp. 365–374). Cham:
Springer International Publishing.
17. Bush, I. J., & Dimililer, K. (2017). Static and dynamic pedestrian detection algorithm for visual
based driver assistive system. In ITM Web of conferences (Vol. 9, p. 03002). EDP Sciences.
18. Abiyev, R., Idoko, J. B., Arslan, M. (2020, June). Reconstruction of convolutional neural
network for sign language recognition. In 2020 International conference on electrical,
communication, and computer engineering (ICECCE) (pp. 1–5). IEEE.
19. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection
using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690.

20. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convo-
lutional neural network for people with disabilities. In 13th international conference on theory
and application of fuzzy systems and soft computing—ICAFS-2018 13 (pp. 239–248). Springer
International Publishing.
21. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney
diseases. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Trans-
formation: Proceedings of the INFUS 2021 Conference, held August 24–26, 2021 (Vol. 2,
pp. 273–280). Springer International Publishing.
22. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020, August). Traffic
warning system for wildlife road crossing accidents using artificial intelligence. In International
Conference on Transportation and Development 2020 (pp. 194–203). Reston, VA: American
Society of Civil Engineers.
23. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022,
November). IoT based motion detector using raspberry Pi gadgetry. In 2022 5th information
technology for education and development (ITED) (pp. 1–5). IEEE.
24. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis
of erythemato-squamous diseases. In Proceedings of the 13th International Conference on
Theory and Application of Fuzzy Systems and Soft Computing (ICAFS-2018) (Vol. 10, pp. 978–
3).
Deep Convolutional Network for Food
Image Identification

Rahib Abiyev and Joseph Adepoju

Abstract Food plays an integral role in human survival, and it is crucial to monitor
our food intake to maintain good health and well-being. As mobile applications
for tracking food consumption become increasingly popular, having a precise and
efficient food classification system is more important than ever. This study presents
an optimized food image recognition model known as FRCNN, which employs a
convolutional neural network implemented in Python’s Keras library without relying
on transfer learning architecture. The FRCNN model underwent training on the Food-
101 dataset, comprising 101,000 images of 101 food classes, with a 75:25 training-
validation split. The results indicate that the model achieved a testing accuracy of
92.33% and a training accuracy of 96.40%, outperforming the baseline model that
used transfer learning on the same dataset by 8.12%. To further evaluate the model’s
performance, we randomly selected 15 images from 15 different food classes in the
Food-101 dataset and achieved an overall accuracy of 94.11% on these previously
unseen images. Additionally, we tested the model on the MA Food dataset, consisting
of 121 food classes, and obtained a training accuracy of 95.11%. These findings
demonstrate that the FRCNN model is highly precise and capable of generalizing
well to unseen images, making it a promising tool for food image classification.

Keywords Deep convolutional network · Food image recognition · Transfer


learning

1 Introduction

Food is a vital component of our daily lives as it provides the body with essen-
tial nutrients and energy to perform basic functions, such as maintaining a healthy
immune system and repairing cells and tissues. Given its significance in health-related

R. Abiyev (B) · J. Adepoju


Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East
University, Lefkosa, North Cyprus, Turkey
e-mail: rahib.abiyev@neu.edu.tr

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. B. Idoko and R. Abiyev (eds.), Machine Learning and the Internet of Things
in Education, Studies in Computational Intelligence 1115,
https://doi.org/10.1007/978-3-031-42924-8_2

issues, food monitoring has become increasingly important [5]. Unhealthy eating
habits may lead to the development of chronic diseases such as obesity, diabetes,
and hypercholesterolemia. According to the World Health Organization (WHO), the
global prevalence of obesity more than doubled between 1980 and 2014, with 13% of
individuals being obese and 39% of adults overweight. Obesity may also contribute
to other conditions, such as osteoarthritis, asthma, cancer, diabetes mellitus type 2,
obstructive sleep apnea, and cardiovascular disorders [4]. This is why experts have
stressed the importance of accurately assessing food intake in reducing the risks
associated with developing chronic illnesses. Hence, there is a need for a highly accu-
rate and optimized food image recognition system. This system involves training a
computer to recognize and classify food items using one or more combinations of
machine learning algorithms.
Food image recognition is a complex problem that has attracted much interest
from the scientific community, prompting researchers to devise various models
and methods to tackle it. Although food recognition is still considered challenging
due to the need for models that can handle visual data and higher-level semantics,
researchers have made progress in developing effective techniques for food image
classification. One of the earliest methods used for this task was Fisher Vector, which
employs the Fisher kernel to analyse the visual characteristics of food images at a local
level. The Fisher kernel uses a generative model, such as the Gaussian Mixture Model,
to encode the deviation of a sample from the model into a unique Fisher Vector that
can be used for classification. Another technique is the bag of visual words (BOW)
representation, which uses vector quantization of affine invariant descriptors of image
patches. Additionally, Matsuda et al. [15] proposed a comprehensive approach for
identifying and classifying food items in an image that involves using multiple tech-
niques to identify potential food regions, extract features, and apply multiple-kernel
learning with non-linear kernels for image classification. Bossard et al. [6] introduced
a new benchmark dataset called Food-101 and proposed a method called random
forest mining that learns across multiple food classes. Their approach outperformed
other methods such as BOW, IFV, RF, and RCF, except for CNN, according to their
experimentation results.
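The bag-of-visual-words idea can be sketched in a few lines: local descriptors are vector-quantized against a codebook of “visual words”, and the image is summarized as a histogram of word counts. The NumPy snippet below is a toy illustration; the codebook and descriptors are invented, and a real pipeline would learn the codebook (for example, by k-means over SIFT descriptors).

```python
import numpy as np

# Toy bag-of-visual-words sketch: three invented 2-D "visual words".
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

def bow_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest codeword and count."""
    hist = np.zeros(len(codebook), dtype=int)
    for d in descriptors:
        word = np.argmin(np.linalg.norm(codebook - d, axis=1))
        hist[word] += 1
    return hist

# Invented local descriptors extracted from one "image"
descriptors = np.array([[0.1, 0.1], [0.9, 1.1], [0.0, 0.2], [0.1, 0.9]])
print(bow_histogram(descriptors, codebook))
```

The resulting histogram is the fixed-length vector a classifier would consume, regardless of how many descriptors the image produced.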
Over the years, these techniques have been successful in food image classifica-
tion and identification tasks. However, with the progress in computer vision, machine
learning, and enhanced processing speed, image recognition has undergone a trans-
formation [7, 14, 18]. In current literature, deep learning algorithms, especially CNN,
have been extensively used for this task due to their unique properties, such as sparse
interaction, parameter sharing, and equivariant representation. As a result, CNN has
become a popular method for analysing large image datasets, including food images,
and has demonstrated exceptional accuracy [9, 10, 13, 16, 17].
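The parameter-sharing property mentioned above can be made concrete with a quick back-of-the-envelope comparison; the layer sizes below are illustrative, not taken from any of the cited models.

```python
# Parameter sharing in a nutshell: a convolution reuses one small kernel
# at every spatial position, while a fully connected layer assigns a
# separate weight to every input-output pair.
h, w, c_in, c_out = 224, 224, 3, 32
kernel = 3

conv_params = (kernel * kernel * c_in + 1) * c_out       # shared weights + biases
dense_params = (h * w * c_in + 1) * (h * w * c_out)      # dense equivalent

print(conv_params)   # 896 parameters for the whole convolutional layer
print(dense_params)  # many orders of magnitude larger
```

This is why convolutional layers scale to large images where dense layers cannot.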
The use of CNN in food image classification has shown significant progress in
recent years [1–3, 18, 20]. Researchers have achieved high accuracy rates using pre-
trained models, such as AlexNet and EfficientNetB0, as well as through the develop-
ment of novel deep CNN algorithms. These approaches have been tested on various
datasets, including the UEC-Food100, UEC-Food256, and Food-101 datasets. DeepFood,
developed by Liu et al., achieved a 76.30% accuracy rate on the UEC-Food100
Deep Convolutional Network for Food Image Identification 11

dataset, while Hassannejad et al. outperformed this with an accuracy of 81.45% on the
same dataset using Google's Inception V3 architecture. Mezgec and Koroušić Seljak
modified the well-known AlexNet structure to create NutriNet, which achieved a
classification accuracy of 86.72% on over 520 food and beverage categories. Similarly,
Kawano and Yanai [11] used a pre-trained model similar to the AlexNet architecture
and were able to achieve an accuracy of 72.26%, while Christodoulidis et al. [8]
introduced a novel deep CNN algorithm that obtained 84.90% accuracy on a custom
dataset. Finally, VijayaKumari et al. [19] achieved the best accuracy of 80.16% using
the pre-trained EfficientNetB0 model, which was trained on the Food-101 dataset.
The studies mentioned above have shown that CNN has immense potential for
accurately classifying food images, which could have numerous practical applica-
tions, such as dietary monitoring and meal tracking. However, while these findings
are promising, there is still ample room for improvement, and the primary aim of this
research is to propose a highly accurate and optimized model for food image recog-
nition and classification. In this paper, we introduce a new CNN architecture called
FRCNN that is specifically designed for food recognition. Our proposed system
boasts high precision and greater robustness for different food databases, making it
a valuable tool for real-world applications in the field of food image recognition.
This paper is structured as follows: in Sect. 2, we describe the methodology used to
develop our food recognition system; in Sect. 3, we provide the details of the FRCNN
design and architecture, including the dataset and proposed model structure, together
with an overview of the simulation results; finally, in Sect. 4, we present the
conclusions of our work.

2 CNN Architecture

Convolutional neural networks (CNNs) are a type of deep artificial neural network
used for tasks like object detection and identification in grid-patterned input such as
images. CNNs have a similar structure to ANNs, with a feedforward architecture that
splits nodes into layers, and the output is passed on to the next layer. They use back-
propagation to learn and update weights, which reduces the loss function and error
margin. CNNs see images as a grid-like layout of pixels, and their layers detect basic
patterns like lines and curves before advancing to more complex ones. CNNs are
commonly used in computer vision research due to features like sparse interaction,
parameter sharing, and equivariant representation. Convolution, pooling, and fully
connected layers make up most CNNs, with feature extraction typically taking place in
the first two layer types, and the result mapped into the fully connected layers (Fig. 1).
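The convolution operation at the core of this pipeline can be sketched in a few lines of NumPy. This is a single-channel, stride-1, "valid"-padding convolution on a toy image; real CNN layers batch this over many channels and learned filters.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a single 2-D kernel over a single-channel image
    ('valid' padding, stride 1) and return the feature map."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Elementwise product of the patch and the kernel, then sum
            out[y, x] = (image[y:y + kh, x:x + kw] * kernel).sum()
    return out

# A tiny vertical-edge detector responds where intensity changes left to right
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
edge_kernel = np.array([[-1., 1.]])
print(conv2d_valid(image, edge_kernel))  # nonzero only at the edge column
```

The same mechanism, repeated with many filters and stacked in depth, is what lets early layers detect lines and curves and later layers detect more complex patterns.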
One of the most essential layers in CNNs is the convolution layer, which applies
filters to the input image to extract features like edges and corners. The output of this
layer is a feature map that is passed to the next layer for further processing. The
pooling layer, in turn, reduces the spatial dimensions of the feature maps generated
by the convolutional layer, thus reducing the computational complexity of the
network. Pooling can be
12 R. Abiyev and J. Adepoju

Fig. 1 CNN architecture. Source [12]

performed using different techniques such as max pooling, sum pooling or average
pooling. The final layer in a typical CNN is the fully connected or dense layer, which
takes the output of the convolution and pooling layers and performs classification
using an activation function such as the SoftMax to generate the probability distribu-
tion over the different classes. The dense layer connects all the nodes in the previous
layer to every node in the current layer, making it a computationally intensive part
of the network. By combining these layers, CNNs can extract complex features from
images and achieve high accuracy in tasks like object detection and classification [2].
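The pooling and SoftMax operations described above can be sketched with toy-sized NumPy arrays; the 4 × 4 feature map and the three logits below are illustrative values, not taken from the FRCNN model.

```python
import numpy as np

def max_pool2d(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response in each
    size x size window, halving the spatial dimensions."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(logits):
    """Turn the dense layer's raw scores into a probability distribution."""
    e = np.exp(logits - logits.max())  # shift the maximum to 0 for stability
    return e / e.sum()

fmap = np.array([[1., 3., 2., 0.],
                 [4., 2., 1., 1.],
                 [0., 1., 5., 2.],
                 [2., 0., 1., 3.]])
print(max_pool2d(fmap))  # -> [[4. 2.] [2. 5.]]
print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities summing to 1
```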
After determining the CNN's output signals, the learning of the network parameters θ
starts. The loss function applied to train the CNN can be represented
as:

\[ L = \frac{1}{N}\sum_{i=1}^{N} l\left(\theta;\, y^{(i)}, o^{(i)}\right) \tag{1} \]

where o^{(i)} and y^{(i)} are the current output and the target output signals,
respectively. Using the loss function, the unknown parameters θ are determined.
With the use of training examples consisting of input–output pairs
{(x^{(i)}, y^{(i)}); i ∈ [1, …, N]}, the learning of the parameters θ is carried out
so as to minimize the value of the loss function. For this purpose, the Adam
optimizer (Kingma & Ba, 2015) learning algorithm is used in the paper. For the
efficient training of a CNN, a large volume of training pairs is required; in this
paper, food image datasets are used for training the CNN.
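Eq. (1) can be sketched directly in NumPy. The chapter does not fix the per-sample loss l, so categorical cross-entropy is assumed here, which is the usual choice for multi-class image classification with SoftMax outputs; the target and output vectors below are illustrative.

```python
import numpy as np

def mean_loss(targets, outputs, eps=1e-12):
    """Eq. (1): L = (1/N) * sum_i l(theta; y_i, o_i), with per-sample
    cross-entropy assumed as the loss l, for one-hot targets y and
    softmax outputs o. eps guards against log(0)."""
    per_sample = -(targets * np.log(outputs + eps)).sum(axis=1)
    return per_sample.mean()

y = np.array([[1., 0., 0.],
              [0., 1., 0.]])      # target output signals y_i (one-hot)
o = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])   # current network outputs o_i
print(mean_loss(y, o))            # shrinks toward 0 as o approaches y
```

An optimizer such as Adam then updates θ along the negative gradient of this scalar.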

3 Design of FRCNN System

3.1 Food-101 Dataset

Bossard et al. [6] developed the food-101 dataset, a new dataset comprising pictures
gathered from foodspotting.com, an online platform that allows users to share photos
of their food, along with its location and description. To create the dataset, the authors
selected the top 101 foods that were regularly labelled and popular on the website.
They chose 750 photos for each of the 101 food classes for training, and 250 images
for testing (Fig. 2).
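The per-class 750/250 partition described above can be sketched as follows; the class and file names are synthetic placeholders, not actual Food-101 paths.

```python
# Sketch of the Food-101 split: for each of the 101 classes, 750 images
# go to training and the remaining 250 to testing.
def split_class(image_files, n_train=750):
    return image_files[:n_train], image_files[n_train:]

# Synthetic stand-in for the dataset: 101 classes x 1000 images each
classes = {f"class_{i:03d}": [f"img_{j:04d}.jpg" for j in range(1000)]
           for i in range(101)}

train, test = {}, {}
for name, files in classes.items():
    train[name], test[name] = split_class(files)

print(sum(len(v) for v in train.values()))  # 75750 training images
print(sum(len(v) for v in test.values()))   # 25250 test images
```

The totals (75,750 and 25,250) match the 75%/25% figures reported for the simulations later in the chapter.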

3.2 Model Architecture

The FRCNN model’s architecture is similar to the standard CNN design, with a layer
consisting of a convolution layer, batch normalization, another convolution layer,
batch normalization, and max pooling. The remaining five layers of the FRCNN
model have a similar structure, as depicted in Fig. 6. After the fourth layer’s max-
pooling, the model goes through flattening, a dense layer, batch normalization, a
dropout layer, two fully-connected layers, and finally the classification layer. The
architecture of the FRCNN model is illustrated in the diagram in Fig. 3.
The proposed FRCNN model is presented in Table 1. During the development
of the FRCNN model, various factors were considered. Initially, the focus was on
extracting relevant data from the input data, which was achieved using convolutional
and pooling layers. This approach allowed the model to analyse images at varying
scales, which reduced dimensionality and helped to identify significant patterns.
Secondly, the FRCNN model was designed with efficiency in mind, and compu-
tational resources and memory usage were optimized by applying techniques like
weight sharing and data compression, along with fine-tuning the number of layers
and filters for optimal performance. Lastly, to ensure that the model generalizes well
and avoids overfitting, the training dataset used to train the model was of high quality,
and regularization methods were used. This approach enabled the FRCNN model to
achieve exceptional performance even with unseen data, making it an ideal tool for
object detection and recognition tasks.
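One of the regularization methods referred to above, dropout (which also appears in the FRCNN layer stack in Table 1), can be sketched as "inverted dropout" in NumPy; the seed, rate, and activation values are illustrative.

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: randomly zero a fraction `rate` of activations
    during training and rescale the survivors by 1/(1-rate), so the
    expected activation magnitude is unchanged at inference time."""
    if not training or rate == 0.0:
        return activations
    keep = (rng.random(activations.shape) >= rate).astype(activations.dtype)
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(0)
a = np.ones((4, 4))
dropped = dropout(a, rate=0.5, rng=rng)
print(dropped)  # a mix of 0.0 and 2.0 entries
print(dropout(a, rate=0.5, rng=rng, training=False))  # unchanged at inference
```

Randomly silencing units this way prevents co-adaptation and is one standard guard against overfitting.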
The FRCNN model was trained on the training subset of the Food-101 dataset, and its
performance was subsequently evaluated on the test subset. For this evaluation, the
dataset was partitioned into 75% for training and 25% for testing purposes. The FRCNN
model’s performance was assessed using metrics such as accuracy, precision, recall,
and F1 score. Accuracy is determined by calculating the number of true positive, true
negative, false positive, and false negative predictions made by the model. Precision
and recall are measures of the model’s ability to correctly identify positive instances,
while F1 score combines both measures to evaluate the model’s overall performance.
Fig. 2 Food-101 dataset preview

Fig. 3 Proposed FRCNN model architecture: Input Images → Conv2D (32 filters, ReLU) →
BatchNormalization → Conv2D (32 filters, ReLU) → BatchNormalization → Maxpooling →
… → Flatten → Fully Connected Layer → Classification Layer → Output

A higher F1 score indicates better performance, with the model striking a balance
between precision and recall. The formulas for accuracy, precision, recall, and F1
score are given below:

\[ Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \tag{2} \]

\[ Precision = \frac{TP}{TP + FP} \tag{3} \]

\[ Recall = \frac{TP}{TP + FN} \tag{4} \]

\[ F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall} \tag{5} \]
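Eqs. (2)–(5) translate directly into code. The confusion-matrix counts below are illustrative, not results from the FRCNN experiments.

```python
def classification_metrics(tp, tn, fp, fn):
    """Eqs. (2)-(5): accuracy, precision, recall, and F1 score from the
    confusion-matrix counts of a (binarized) classifier."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts, treating one food class as "positive"
acc, prec, rec, f1 = classification_metrics(tp=80, tn=90, fp=10, fn=20)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

For a 101-class problem these per-class values are typically macro-averaged to obtain a single score.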

3.3 Simulations

The FRCNN model was trained on a high-performance computer with an Intel®
Core™ i9-9900K processor, 32 GB of RAM, and an Nvidia GeForce RTX 2080 Ti
GPU with 11 GB of GDDR6 memory and 4352 CUDA cores. The training was done
using the Anaconda development environment, and the minimum and maximum
training times were 942 and 966 s, respectively. This training environment enabled
fast training of the FRCNN model.
The Food-101 dataset, which contains 101,000 images, was used to train and
validate the FRCNN model. The dataset was divided into training and validation
sets, with 75% used for training and 25% used for validation. This resulted in 75,750
images being used for training and 25,250 images being used for validation.
After training, Fig. 4 shows the training and validation accuracy plots of the FRCNN
model, while Fig. 5 shows its training and validation loss plots.

Table 1 FRCNN model


Layer (type) Output shape
conv2d (Conv2D) (None, 224, 224, 32)
batch_normalization (BatchNormalization) (None, 224, 224, 32)
conv2d_1 (Conv2D) (None, 224, 224, 32)
batch_normalization_1 (BatchNormalization) (None, 224, 224, 32)
max_pooling2d (MaxPooling2D) (None, 112, 112, 32)
conv2d_2 (Conv2D) (None, 112, 112, 64)
batch_normalization_2 (BatchNormalization) (None, 112, 112, 64)
conv2d_3 (Conv2D) (None, 112, 112, 64)
batch_normalization_3 (BatchNormalization) (None, 112, 112, 64)
max_pooling2d_1 (MaxPooling2D) (None, 56, 56, 64)
conv2d_4 (Conv2D) (None, 56, 56, 128)
batch_normalization_4 (BatchNormalization) (None, 56, 56, 128)
conv2d_5 (Conv2D) (None, 56, 56, 128)
batch_normalization_5 (BatchNormalization) (None, 56, 56, 128)
max_pooling2d_2 (MaxPooling2D) (None, 28, 28, 128)
conv2d_9 (Conv2D) (None, 28, 28, 256)
batch_normalization_6 (BatchNormalization) (None, 28, 28, 256)
conv2d_10 (Conv2D) (None, 28, 28, 256)
batch_normalization_7 (BatchNormalization) (None, 28, 28, 256)
max_pooling2d_3 (MaxPooling2D) (None, 14, 14, 256)
conv2d_12 (Conv2D) (None, 14, 14, 512)
batch_normalization_8 (BatchNormalization) (None, 14, 14, 512)
conv2d_13 (Conv2D) (None, 14, 14, 512)
batch_normalization_9 (BatchNormalization) (None, 14, 14, 512)
max_pooling2d_4 (MaxPooling2D) (None, 7, 7, 512)
flatten (Flatten) (None, 25,088)
dense (Dense) (None, 1024)
batch_normalization_10 (BatchNormalization) (None, 1024)
dropout (Dropout) (None, 1024)
dense_1 (Dense) (None, 512)
dense_2 (Dense) (None, 256)
dense_3 (Dense) (None, 101)
Total params 34,241,125
Trainable params 34,235,109
Non-trainable params 6,016
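The output shapes in Table 1 follow from two simple rules: 'same'-padded convolutions preserve height and width, and each 2 × 2 max pooling halves them. A small sketch reproduces the progression (the filter counts are taken from Table 1; the 'same' padding is inferred from the unchanged spatial sizes):

```python
def trace_shapes(size=224, block_filters=(32, 64, 128, 256, 512)):
    """Trace the spatial shapes of Table 1: each block applies two
    'same'-padded convolutions (size preserved) followed by a 2x2
    max pooling (size halved)."""
    shapes = []
    for filters in block_filters:
        shapes.append((size, size, filters))  # after the block's convolutions
        size //= 2                            # after 2x2 max pooling
    flattened = size * size * block_filters[-1]
    return shapes, flattened

shapes, flat = trace_shapes()
print(shapes)  # (224,224,32) -> (112,112,64) -> ... -> (14,14,512)
print(flat)    # 25088 units entering the flatten/dense layers
```

The final 7 × 7 × 512 volume flattens to the 25,088 units shown in Table 1.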

Fig. 4 FRCNN model training and validation plot

Fig. 5 FRCNN model loss function plot



Table 2 shows a comparison of the FRCNN model with other models that use CNN
or transfer-learning methodologies on the Food-101 dataset.
Below are the results of testing the FRCNN model's ability to perform well on
new data, obtained by downloading random food images from the internet belonging
to the Food-101 dataset classes (Fig. 6).

Table 2 Comparing the FRCNN model with other models


Model comparison
Method Accuracy (%)
Bossard et al. Random Forest 50.76
Özsert Yiğit and Özyildirim Convolution Neural Network (CNN) 73.80
Liu et al. GoogleNet 77.40
VijayaKumari et al. EfficientNetB0 80.16
Attokaren et al. Inception V3 86.97
Hassannejad et al. Inception V3 88.28
FRCNN model Convolution Neural Network (CNN) 96.40

Fig. 6 Test results



Overall, these results show the ability of the FRCNN model to generalize well to
unseen data, making it suitable for food recognition tasks.

4 Conclusion

Food image recognition is a complex task that involves training a machine learning
model to identify and classify food images. A convolutional neural network (CNN)
architecture was used in this study to develop a food image classification model called
FRCNN. FRCNN has five layers, each with batch normalization, convolution, and
pooling operations to increase accuracy. The model was trained on the Food-101
dataset, which contains 101 food classes and 101,000 images. The performance of the
system was further improved by using additional methods such as kernel regularization
and kernel initialization, and by pre-processing the data using the ImageDataGenerator
function. In the food classification task, the final model attained an accuracy of
96.40%, demonstrating that deep CNNs can be built from scratch and perform just as
well as pre-trained models.

References

1. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of
epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.
https://doi.org/10.3390/app10124089
2. Abiyev, R. H., & Arslan, M. (2019). Head mouse control system for people with disabilities.
Expert Systems, 37(1). https://doi.org/10.1111/exsy.12398
3. Abiyev, R. H., & Ma’aitah, M. K. S. (2018). Deep convolutional neural networks for chest
diseases detection. Journal of Healthcare Engineering, 1–11. https://doi.org/10.1155/2018/
4168538
4. Akhi, A. B., Akter, F., Khatun, T., & Uddin, M. S. (2016). Recognition and classification of
fast food images. Global Journal of Computer Science and Technology, 18.
5. Attokaren, D. J., Fernandes, I. G., Sriram, A., Murthy, Y. V. S., & Koolagudi, S. G. (2017).
Food classification from images using convolutional neural networks. In TENCON 2017—2017
IEEE region 10 conference. https://doi.org/10.1109/tencon.2017.8228338
6. Bossard, L., Guillaumin, M., & Van Gool, L. (2014). Food-101—mining discriminative compo-
nents with random forests. Computer Vision—ECCV, 446–461. https://doi.org/10.1007/978-
3-319-10599-4_29
7. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand
gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252. https://doi.org/
10.3233/jifs-190353
8. Christodoulidis, S., Anthimopoulos, M., & Mougiakakou, S. (2015). Food recognition for
dietary assessment using deep convolutional neural networks. In New trends in image analysis
and processing—ICIAP 2015 workshops (pp. 458–465). https://doi.org/10.1007/978-3-319-
23222-5_56
9. Hassannejad, H., Matrella, G., Ciampolini, P., De Munari, I., Mordonini, M., & Cagnoni, S.
(2016). Food image recognition using very deep convolutional networks. In Proceedings of
the 2nd international workshop on multimedia assisted dietary management. https://doi.org/
10.1145/2986035.2986042

10. Kagaya, H., Aizawa, K., & Ogawa, M. (2014). Food detection and recognition using convolu-
tional neural network. In Proceedings of the 22nd ACM international conference on multimedia.
https://doi.org/10.1145/2647868.2654970
11. Kawano, Y., & Yanai, K. (2014). Food image recognition with deep convolutional features.
In Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous
computing: Adjunct publication. https://doi.org/10.1145/2638728.2641339
12. Kiourt, C., Pavlidis, G., & Markantonatou, S. (2020). Deep learning approaches in food recog-
nition. Learning and Analytics in Intelligent Systems. https://doi.org/10.1007/978-3-030-497
24-8_4
13. Liu, C., Cao, Y., Luo, Y., Chen, G., Vokkarane, V., & Ma, Y. (2016). DeepFood: Deep learning-
based food image recognition for computer-aided dietary assessment. Inclusive Smart Cities
and Digital Health. https://doi.org/10.1007/978-3-319-39601-9_4
14. Liu, S., Li, S. Z., Liu, X. M., & Zhang, H. B. (2010). Entropy-based action features selection
using histogram intersection kernel. In 2010 2nd international conference on signal processing
systems. https://doi.org/10.1109/icsps.2010.5555433
15. Matsuda, Y., Hoashi, H., & Yanai, K. (2012). Recognition of multiple-food images by detecting
candidate regions. In 2012 IEEE international conference on multimedia and expo. https://doi.
org/10.1109/icme.2012.157
16. Mezgec, S., & Koroušić Seljak, B. (2017). NutriNet: A deep learning food and drink image
recognition system for dietary assessment. Nutrients, 9(7), 657. https://doi.org/10.3390/nu9
070657
17. Özsert Yiğit, G., & Özyildirim, B. M. (2018). Comparison of convolutional neural network
models for food image classification. Journal of Information and Telecommunication, 2(3),
347–357. https://doi.org/10.1080/24751839.2018.1446236
18. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature
review on machine learning and student performance prediction: Critical gaps and possible
remedies. Applied Sciences, 11(22), 10907. https://doi.org/10.3390/app112210907
19. VijayaKumari, G., Vutkur, P., & Vishwanath, P. (2022). Food classification using transfer
learning technique. Global Transitions Proceedings, 3(1), 225–229. https://doi.org/10.1016/j.
gltp.2022.03.027
20. Yanai, K., & Kawano, Y. (2015). Food image recognition using deep convolutional network
with pre-training and fine-tuning. In 2015 IEEE international conference on multimedia and
expo workshops (ICMEW). https://doi.org/10.1109/icmew.2015.7169816
Face Mask Recognition System-Based
Convolutional Neural Network

John Bush Idoko and Emirhan Simsek

Abstract The use of face masks has been widely acknowledged as an effective
measure in preventing the spread of COVID-19. Scientists argue that face masks
act as a barrier, preventing virus-carrying droplets from reaching other individuals
when coughing or sneezing. This plays a crucial role in breaking the chain of trans-
mission. However, many people are reluctant to wear masks properly, and some
may not even be aware of the correct way to wear them. Manual inspection of a
large number of individuals, particularly in crowded places such as train stations,
theaters, classrooms, or airports, can be time-consuming, expensive, and prone to
bias or human error. To address this challenge, an automated, accurate, and reli-
able system is required. Such a system needs extensive data, particularly images, for
training purposes. The system should be capable of recognizing whether a person
is not wearing a face mask at all, wearing it improperly, or wearing it correctly. In
this study, we employ a convolutional neural network (CNN)-based architecture to
develop a face mask detection/recognition model. The model achieved an impressive
accuracy of 97.25% in classifying individuals into the categories of wearing masks,
wearing them improperly, or not wearing masks at all. This automated system offers
a promising solution to efficiently monitor and enforce face mask usage in various
settings, contributing to public health and safety.

Keywords Face mask · Face detection · Machine learning · Convolutional neural network

J. B. Idoko (B)
Applied Artificial Intelligence Research Centre, Department of Computer Engineering, Near East
University, Nicosia 99138, Turkey
e-mail: john.bush@neu.edu.tr
E. Simsek
Department of Computer Engineering, Near East University, Nicosia 99138, Turkey

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023


J. B. Idoko and R. Abiyev (eds.), Machine Learning and the Internet of Things
in Education, Studies in Computational Intelligence 1115,
https://doi.org/10.1007/978-3-031-42924-8_3
Another random document with
no related content on Scribd:
sportsman is desirous of obtaining more, he may easily do so, as
others pass in full clamour close over the wounded bird.

Rhynchops nigra, Linn. Syst Nat. vol i. p. 228.—Lath. Ind. Ornith. vol. ii. p.
802.—Ch. Bonaparte, Synopsis of Birds of United States, p. 352.
Black Skimmer, or Shear-water, Rhynchops nigra, Wils. Amer. Ornith.
vol. vii. p. 85, pl. 60, fig. 4.—Nuttall, Manual, vol. ii. p. 264.

Adult Male. Plate CCCXXIII.


Bill longer than the head, nearly straight, tetragonal at the base,
suddenly extremely compressed, and continuing so to the end.
Upper mandible much shorter than the lower, its dorsal outline very
slightly convex, its ridge sharp, the sides erect, more or less convex,
the edges approximated so as to leave merely a very narrow groove
between them; the tip a little rounded when viewed laterally. Nasal
groove rather short, narrow near the margin; nostrils linear-oblong,
sub-basal in the soft membrane. Lower mandible with the angle
extremely short, the dorsal outline straight or slightly decurved, the
sides erect, the edges united into a very thin blade which fits into the
narrow groove of the upper mandible, the tip rounded or abrupt when
viewed laterally.
Head rather large, oblong, considerably elevated in front. Neck short
and thick. Body short, ovate, and compact. Feet short, moderately
stout; tibia bare below, with narrow transverse scutella before and
behind; tarsus short, moderately compressed, anteriorly covered
with broad scutella, reticulated on the sides and behind; toes very
small; the first extremely short, and free; the inner much shorter than
the outer, which is but slightly exceeded by the middle toe; the webs
very deeply concave at the margin, especially the inner. Claws long,
compressed, tapering, slightly arched, rather obtuse, the inner edge
of the middle toe dilated and extremely thin. Plumage moderately
full, soft, and blended; the feathers oblong and rounded. Wings
extremely elongated, and very narrow; the primary quills excessively
long; the first longest, the rest rapidly graduated; the secondaries
short, broad, incurved, obliquely pointed, some of the inner more
elongated. Tail rather short, deeply forked, of twelve feathers,
disposed in two inclined planes.
Bill of a rich carmine, inclining to vermilion for about half its length,
the rest black. Iris hazel. Feet of the same colour as the base of the
bill, claws black. The upper parts are deep brownish-black; the
secondary quills, and four or five of the primaries, tipped with white;
the latter on their inner web chiefly. Tail-feathers black, broadly
margined on both sides with white, the outer more extensively; the
middle tail-coverts black, the lateral black on the inner and white on
the outer web. A broad band of white over the forehead, extending to
the fore part of the eye; cheeks and throat of the same colour; the
rest of the neck and lower parts in spring and summer of a delicate
cream-colour; axillary feathers, lower wing-coverts, and a large
portion of the secondary quills, white; the coverts along the edge of
the wing black.
Length from point of upper mandible to end of tail 20 inches, to end
of wings 24 1/2, to end of claws 17; to carpal joint 8 1/4; extent of
wings 48; upper mandible 3 1/8; its edge 3 7/8; from base to point of
lower mandible 4 1/2; depth of bill at the base 1; wing from flexure
15 3/4; tail to the fork 3 1/2; to end of longest feather 5 1/4; tarsus
1 1/4; hind toe and claw 4/12; middle toe 10/12; its claw 4/12. Weight 13
oz.
The female, which is smaller, is similar to the male, but with the tail-
feathers white, excepting a longitudinal band including the shaft.
Length to end of tail 16 3/4, to end of wings 20 1/4, to end of claws
16 1/4, to carpus 8; extent of wings 44 1/2. Weight 10 oz.
After the first autumnal moult, there is on the hind part of the neck a
broad band of white mottled with greyish-black; the lower parts pure
white, the upper of a duller black; the bill and feet less richly
coloured.
Length to end of tail 16 3/4 inches, to end of wings 20, to end of
claws 14 1/2, to carpus 6 3/8; extent of wings 42.

In some individuals at this period, the mandibles are of equal length.


The palate is flat, with two longitudinal series of papillæ directed
backwards. The upper mandible is extremely contracted, having
internally only a very narrow groove, into which is received the single
thin edge of the lower mandible. The posterior aperture of the nares
is 1 5/12 inch long, with a transverse line of papillae at the middle on
each side, and another behind. The tongue is sagittiform, 6 1/2
twelfths long, with two conical papillae at the base, soft, fleshy, flat
above, horny beneath. Aperture of the glottis 4 1/2 twelfths long, with
numerous small papillae behind. Lobes of the liver equal, 1 1/2 inch
long. The heart of moderate size, 1 1/12 long, 10 twelfths broad.
The œsophagus, of which only the lower portion, a, is seen in the
figure, is 8 inches long, gradually contracts from a diameter of 1 inch
to 4 twelfths, then enlarges until opposite the liver, where its greatest
diameter is 1 4/12. Its external transverse fibres are very distinct, as
are the internal longitudinal. The proventriculus, b, is 9 twelfths long,
its glandules extremely small and numerous, roundish, scarcely a
quarter of a twelfth in length. The stomach, c, d, e, is rather small,
oblong, 1 inch 4 twelfths long, 11 twelfths broad, muscular, with the
lateral muscles moderate. The cuticular lining of the stomach is
disposed in nine broad longitudinal rugae of a light red colour, as in
the smaller Gulls and Terns. Its lateral muscles are about 4 twelfths
thick, the tendons, e, 6 twelfths in diameter. The intestine is 2 feet 4
inches long, its average diameter 2 1/2 twelfths. The rectum is 2
inches long. One of the cœca is 4, the other 3 twelfths, their
diameter 1 1/4 twelfths.

In another individual, the intestine is 22 1/2 inches long; the cœca 5


twelfths long, 1 twelfth in diameter; the rectum 1 3/4 inch long; the
cloaca 9 twelfths in diameter.

The trachea is 5 3/4 inches long, round, but not ossified, its diameter
at the top 5 twelfths, contracting gradually to 2 1/2 twelfths. The
lateral or contractor muscles are small; the sterno-tracheal slender;
there is a pair of inferior laryngeals, going to the last ring of the
trachea. The number of rings is 90, and a large inferior ring. The
bronchi are of moderate length, but wider, their diameter being 3 1/2
twelfths at the upper part; the number of their half-rings about 18.
The digestive organs of this bird are precisely similar to those of the
Terns and smaller Gulls, to which it is also allied by many of its
habits.
BONAPARTIAN GULL.

Larus Bonapartii, Swains.


PLATE CCCXXIV. Male, Female, and Young.

My first acquaintance with this species took place whilst I was at


Cincinnati, in the beginning of August 1819. I was crossing the Ohio,
along with Mr Robert Best, then curator of the Cincinnati Museum,
for the purpose of visiting the Cliff Swallows which had taken up their
abode on the walls of the garrison on the Kentucky side, when we
observed two Gulls sweeping gracefully over the tranquil waters.
Now they would alight side by side, as if intent on holding a close
conversation; then they would rise on wing and range about, looking
downwards with sidelong glances, searching for small fishes, or
perhaps eyeing the bits of garbage that floated on the surface. We
watched them for nearly half an hour, and having learned something
of their manners, shot one, which happened to be a female. On her
dropping, her mate almost immediately alighted beside her, and was
shot. There, side by side, as in life, so in death, floated the lovely
birds. One, having a dark bluish nearly black head, was found to be
the male; the other, with a brown head, was a female. On the 12th of
November 1820, I shot one a few miles below the mouth of the
Arkansas, on the Mississippi, which corresponded in all respects
with the male just mentioned.
No sooner do the shads and old-wives enter the bays and rivers of
our Middle Districts, than this Gull begins to shew itself on the coast,
following these fishes as if dependent upon them for their support,
which however is not the case, for at the time when these inhabitants
of the deep deposit their spawn in our waters, the Gull has advanced
beyond the eastern limits of the United States. However, after the
first of April, thousands of Bonapartian Gulls are seen gambling over
the waters of Chesapeake Bay, and proceeding eastward, keeping
pace with the shoals of fishes.
During my stay at Eastport in Maine, in May 1833, these Gulls were
to be seen in vast numbers in the harbour of Passamaquody at high
water, and in equal quantities at low water on all the sand and mud-
bars in the neighbourhood. They were extremely gentle, scarcely
heeded us, and flew around our boats so close that any number
might have been procured. My son John shot seventeen of them at
a single discharge of his double-barrelled gun, but all of them proved
to be young birds of the preceding year. On examining these
specimens, we found no development of the ovaries in several,
which, from their smaller size, we supposed to be females, nor any
enlargement of the testes in the males; and as these young birds
kept apart from those which had brown and black hoods, I concluded
that they would not breed until the following spring. Their stomachs
were filled with coleopterous insects, which they caught on the wing,
or picked up from the water, into which they fell in great numbers
when overtaken by a cold fog, while attempting to cross the bay. On
the 24th of August 1831, when at Eastport with my family, I shot ten
of these Gulls. The adult birds had already lost their dark hood, and
the young were in fine plumage. In the stomach of all were shrimps,
very small fishes, and fat substances. The old birds were still in
pairs.
When exploring the Bay of Fundy, in May 1833, I was assured by the
captain and sailors, as well as the intelligent pilot of the Revenue
Tender, the Nancy, that this Gull bred in great abundance on the
islands off Grand Manan; but unfortunately I was unable to certify the
fact, as I set out for Labrador previous to the time at which they
breed in that part of the country. None of them were observed on any
part of the Gulf of St Lawrence, or on the coast of Labrador or
Newfoundland. In winter this species is common in the harbour of
Charleston, but none are seen at that season near the mouths of the
Mississippi.
The flight of this Gull is light, elevated, and rapid, resembling in
buoyancy that of some of our Terns more than that of most of our
Gulls, which move their wings more sedately. I found the adult birds
in moult in August. Although their notes are different from those of all
our other species, being shriller and more frequent, I am unable to
represent them intelligibly by words.
Since I began to study the habits of Gulls, and observe their changes
of plumage, whether at the approach of the love season, or in
autumn, I have thought that the dark tint of their hoods was in the
first instance caused by the extremities of the feathers then gradually
changing from white to black or brown, without the actual renewal of
the feathers themselves, as happens in some species of land-birds.
At Eastport, I had frequent opportunities of seeing the black-hooded
males copulating with the brown-hooded females, so that the colour
of the head in the summer season is really distinctive of the sexes. I
found in London a pair of these birds, of which the sexes were
distinguished by the colour of the head, and which had been brought
from Greenland. They were forwarded by me to the Earl of Derby,
in whose aviaries they are probably still to be seen.
This is certainly the species described in the Fauna Boreali-
Americana under the same name; but it is there stated that the
females agree precisely with the males, their hood being therefore
“greyish-black;” which I have never found to be the case. As to the
Larus capistratus of Bonaparte’s Synopsis, I have nowhere met with
a Brown-headed Gull having the tail “sub-emarginate;” and I infer
that the bird described by him under that name is merely the female
of the present species.

Larus Bonapartii, Bonapartian Gull, Richards. and Swains. Fauna Bor.-Amer. vol. ii. p. 425.
Brown-masked Gull, Larus capistratus, Bonap. Amer. Ornith., vol. iv. Female.
Larus capistratus, Ch. Bonaparte, Synopsis of Birds of United States, p. 358.
Bonapartian Gull, Nuttall, Manual, vol. ii. p. 294.

Adult Male in Spring Plumage. Plate CCCXXIV. Fig. 1.


Bill shorter than the head, nearly straight, slender, compressed.
Upper mandible with its dorsal line straight to the middle, then
curved and declinate, the ridge narrow, the sides slightly convex, the
edges sharp and a little inflected, the tips narrow but rather obtuse,
with a slight notch on each side. Nasal groove rather long and
narrow; nostrils in its fore part, longitudinal, submedial, linear,
pervious. Lower mandible with a slight prominence at the end of the
angle, which is long and narrow, the dorsal line then ascending and
slightly concave, the ridge convex, the sides nearly erect and
flattened.
Head of moderate size, ovate, narrowed anteriorly, convex above.
Eyes of moderate size. Neck rather short. Body rather slender.
Wings very long. Feet of moderate length, rather strong; tibia bare
below for a short space, covered behind with narrow scutella; tarsus
compressed, anteriorly covered with numerous scutella and three
inferior series of transverse scales, laterally with oblong scales,
posteriorly with oblique scutella. Toes slender, with numerous
scutella; first extremely small, second considerably shorter than
fourth, third longest; anterior toes connected by reticulated webs, of
which the anterior margins are deeply concave, the outer and inner
slightly marginate. Claws small, compressed, moderately arched,
rather obtuse, that of middle toe with an expanded inner edge.
Plumage full, close, soft, blended. Wings very long and pointed;
primaries tapering and rounded, first longest, second very little
shorter, the rest rapidly graduated; secondaries obliquely pointed,
the rounded extremity extending beyond the tip of the shaft, which is
exterior to it, the inner feathers more elongated. Tail of moderate
length, almost even, the middle feathers slightly longer.
Bill black, inside of mouth vermilion. Iris reddish hazel. Feet orange,
slightly tinged with vermilion; claws dusky brown. Head and upper
part of neck all round, greyish-black, that colour extending half an
inch lower on the throat than on the occiput. A white band divided by
a narrow black line margining the eye behind; the remaining part of
the neck white; back, scapulars and wings, light greyish-blue. The
anterior ridge of the wing, alula, smaller coverts on the carpal
margin, four outer primary coverts, shaft and inner web of the outer
primary, both webs of second, inner webs of third and fourth white; of
which colour also are the rump, tail, and all the lower parts. Outer
web of first quill, excepting a small portion towards the end, its tip to
the length of half an inch, black, as are the ends of the next six,
which however have a small tip of white, the black on some of them
about an inch long, and running along the inner edge to a
considerable extent.
Length to end of tail 14 1/8 inches, to end of wings 15 5/8, to end of
claws 13 1/8; extent of wings 32 1/4; wing from flexure 10 3/4; tail
4 2/12; bill along the ridge 1 4/12, along the edge of lower mandible
1 10/12; tarsus 1 3/12; hind toe and claw 3 1/4/12; middle toe 1 3/12, its
claw 3 1/4/12; outer toe 1 1/3/12, its claw 2 1/4/12; inner toe 11/12, its claw
2 1/2/12. Weight 6 1/2 oz.
Adult Female. Plate CCCXXIV. Fig. 2.
The female is somewhat smaller, and resembles the male, but has
the head and upper part of the neck umber brown.
Young in December. Plate CCCXXIV. Fig. 3.
Bill greyish-black, iris dark brown; feet flesh-coloured, claws dusky.
Head and neck greyish-white; a small black patch about an inch
behind the eye on each side. Upper parts dull bluish-grey, many of
the wing-coverts greyish brown, edged with paler; quills as in the
adult; rump and tail white, the latter with a broad band of black at the
end, the tips narrowly edged with whitish.
Length to end of tail 13 3/8, to end of wings 15 5/8, to end of claws 13;
extent of wings 32 1/2 inches. Weight 6 oz.

The white spots on the tips of the wings vary greatly in size, and are
frequently obliterated when the feathers become worn.
Palate with five series of small distant papillæ. Tongue 1 inch 1 1/2
twelfths long, slender, tapering to a slit point, emarginate and
papillate at the base, horny towards the end. Aperture of posterior
nares linear, 9 twelfths long. Heart 1 inch long, 9 twelfths broad.
Right lobe of liver 1 inch 11 twelfths long, the other lobe 1 inch 7
twelfths.
The œsophagus is 6 1/2 inches long, very wide with rather thin
parietes, its average diameter when dilated 10 twelfths, within the
thorax enlarged to 1 inch 2 twelfths. The transverse muscular fibres
are distinct, the internal longitudinal less so; the mucous coat
longitudinally plicate. The proventriculus is 1/2 inch long, with very
numerous small glandules. The stomach is a small oblong gizzard,
10 twelfths long, 8 twelfths broad; its lateral muscles rather large, as
are its tendons. The inner coat or epithelium is of moderate
thickness, dense, with nine longitudinal broad rugæ, and of a
brownish-red colour. The intestine is 24 1/2 inches long, its diameter
2 twelfths. The rectum is 1 1/2 inch long. The cœca are 2 twelfths
long, 1 twelfth in diameter, cylindrical and obtuse.
The intestine of another individual, a male, is 20 1/2 inches long, 3
twelfths in diameter.
The trachea is 3 inches 10 twelfths long, its diameter at the top 3
twelfths, at the lower part 2 1/4 twelfths, the rings very feeble,
unossified, about 130 in number. The sterno-tracheal muscles are
very slender, as are the contractors; and there is a pair of inferior
laryngeals. The bronchi are of moderate length, with about 18 half
rings.
BUFFEL-HEADED DUCK.

Fuligula albeola, Bonap.


PLATE CCCXXV. Male and Female.

There are no portions of the Union on the waters of which this
beautiful miniature of the Golden-eye Duck is not to be found, either
during the autumnal months or in winter; and, therefore, to point out
any particular district as more or less favoured by its transient visits
would be useless. The miller’s dam is ornamented by its presence;
the secluded creeks of the Middle States are equally favoured by it
as the stagnant bayous and lakes of Lower Louisiana; in the
Carolinas and on the Ohio, it is not less frequent; it being known in
these different districts by the names of Spirit Duck, Butter-box,
Marrionette, Dipper, and Die-dipper. It generally returns from the far
north, where it is said to breed, about the beginning of September,
and many reach the neighbourhood of New Orleans by the middle of
October, at which period I have also observed them in the Floridas.
Their departure from these different portions of our country varies
from the beginning of March to the end of May. On the 11th of that
month in 1833, I shot some of them near Eastport in Maine. None of
them have, I believe, been found breeding within the limits of the
Union. During the period of their movements towards the north, I
found them exceedingly abundant on the waters of the Bay of Fundy,
the males in flocks, and in full dress, preceding the females about a
fortnight, as is the case with many other birds.
The Marrionette—and I think the name a pretty one—is a very hardy
bird, for it remains at times during extremely cold weather on the
Ohio, when it is thickly covered with floating ice, among which it is
seen diving almost constantly in search of food. When the river is
frozen over, they seek the head waters of the rapid streams, in the
turbulent eddies of which they find abundance of food. Possessed of
a feeling of security arising from the rapidity with which they can
dive, they often allow you to go quite near them, though they will
then watch every motion, and at the snap of your gun, or on its being
discharged, disappear with the swiftness of thought, and perhaps as
quickly rise again within a few yards, as if to ascertain the cause of
their alarm. I have sometimes been much amused to see the
apparent glee with which these little Dippers would thus dive at the
repeated snappings of a miserable flint lock, patiently tried by some
vagrant boys, who becoming fatigued with the ill luck of their piece,
would lay it aside, and throw stones at the birds, which would appear
quite pleased.
Their flight is as rapid as that of our Hooded Merganser, for they
pass through the air by regularly repeated beats of their wings, with
surprising speed; and yet this is the best time for the experienced
sportsman to shoot them, as they usually fly low. Their note is a
mere croak, much resembling that of the Golden-eye, but feebler. At
the approach of spring, the males often swell their throats and
expand the feathers of the head, whilst they utter these sounds, and
whilst moving with great pomposity over the waters. Often too, they
charge against each other, as if about to engage in combat, but I
have never seen them actually fighting.
When these birds return to us from the north, the number of the
young so very much exceeds that of the old, that to find males in full
plumage is much more uncommon than toward the time of their
departure, when I have thought the males as numerous as the
females. Although at times they are very fat, their flesh is fishy and
disagreeable. Many of them, however, are offered for sale in our
markets. I have often found some of them on inland ponds, which
they seemed loth to leave, for, although repeatedly shot at, they
would return. Their food is much varied according to situation. On
the sea-coast, or in estuaries, they dive after shrimps, small fry, and
bivalve shells; and in fresh-water, they feed on small crayfish,
leeches, and snails, and even grasses.
Not having found any of these birds in Labrador or Newfoundland, I
am unable to say anything as to their nests. Dr Richardson states,
that they frequent the rivers and fresh-water lakes throughout the Fur
Countries in great numbers, but does not mention having observed
them breeding. As in almost all other species of this family, the
young of both sexes in autumn resemble the adult female. Dr
Townsend has found this species on the streams of the Rocky
Mountains, and it has been observed as far westward as Monterey in
New California.

Anas Albeola, Linn. Syst. Nat. vol. i. p. 199.—Lath. Ind. Ornith. vol. ii. p. 867.
Anas bucephala, Linn. Syst. Nat. vol. i. p. 200; Anas rustica, p. 201.
Buffel-headed Duck, Anas Albeola, Wilson, American Ornith. vol. viii. p. 51, pl. 67, fig. 2, 3.
Fuligula Albeola, Ch. Bonaparte, Synops. of Birds of United States, p. 394.
Clangula Albeola, Spirit Duck, Richards. and Swains. Fauna Bor.-Amer. vol. ii. p. 458.
Spirit Duck, Nuttall, Manual, vol. ii. p. 445.

Adult Male. Plate CCCXXV. Fig. 1.


Bill much shorter than the head, comparatively narrow, deeper than
broad at the base, gradually depressed towards the end, which is
rounded. Upper mandible with the dorsal line straight and sloping to
the middle, then nearly straight, at the end decurved; the ridge broad
and flat at the base, narrowed between the nostrils, convex towards
the end, the sides convex, the edges soft, with about thirty-five
lamellæ, the unguis oblong. Nostrils submedial, linear, pervious,
nearer the ridge than the margin. Lower mandible flat, ascending,
curved at the base, the angle long, rather narrow, the dorsal line very
slightly convex, the edges with about forty lamellæ, the unguis
broadly elliptical.
Head rather large, compressed. Eyes of moderate size. Neck short
and thick. Body compact, depressed. Feet very short, placed far
back; tarsus very short, compressed, having anteriorly in its whole
length a series of small scutella, and above the outer toe a few broad
scales, the rest covered with reticular angular scales. Hind toe very
small, with a free membrane beneath; anterior toes longer than the
tarsus, connected by reticulated membranes, having a sinus on their
free margins, the inner with a narrow lobed marginal membrane, the
outer with a thickened edge, the third and fourth about equal and
longest, all covered above with numerous narrow scutella. Claws
small, slightly arched, obtuse, that of first toe very small, of third
largest, and with an inner thin edge.
Plumage dense, soft and blended. Feathers on the fore part of the
head very small and rounded, on the upper and hind parts linear and
elongated, as they also are on the lateral and hind parts of the upper
neck, so that when raised, they give the head an extremely tumid
appearance, which is the more marked that the feathers of the neck
immediately beneath are short. Wings very small, decurved, pointed;
the outer primaries pointed, the first longest, the rest rapidly
graduated; the secondaries incurved, obliquely rounded, the inner
much elongated and acuminate. Tail short, graduated, of sixteen
feathers.
Bill light greyish-blue. Iris hazel. Feet very pale flesh-colour, claws
brownish-black. Fore part of the head of a deep rich green, upper
part rich bluish-purple, of which colour also are the elongated
feathers on the fore part and sides of the neck, the hind part of the
latter deep green; a broad band of pure white from one cheek to the
other over the occiput. The coloured parts of the head and neck are
splendent and changeable. The rest of the neck, the lower parts, the
outer scapulars, and a large patch on the wing, including the greater
part of the smaller coverts and some of the secondary coverts and
quills, pure white, the scapulars narrowly margined with black, as are
the inner lateral feathers. Axillary feathers brownish-black, some of
them white on the margins and towards the end; lower wing-coverts
brownish-black, the smaller tipped with white. The back, inner
scapulars, and inner secondary quills, velvet-black. The feathers on
the anterior edge of the wing are black, narrowly edged with white;
alula, primary coverts, and primary quills deep black. The feathers
on the rump gradually fade into greyish-white, and those of the tail
are brownish-grey, with the edges paler, and the shafts dusky.
Length to end of tail 14 1/2 inches, to end of wings 13 3/4, to end of
claws 15 3/4; extent of wings 23; wing from flexure 6 3/4; tail 3 1/4; bill
along the ridge 1 2/12, along the edge of lower mandible 1 5 1/2/12;
tarsus 1 3/12, hind toe and claw 8/12; outer toe 2 1/12, its claw 2 1/2/12;
middle toe 2, its claw 3 1/2/12; inner toe and claw 1 9/12. Weight 1 lb.
Adult Female. Plate CCCXXV. Fig. 2.
The female is much smaller. The plumage of the head is not
elongated as in the male, but there is a ridge of longish feathers
down the occiput and nape. Bill darker than that of the male; feet
greyish-blue, with the webs dusky. Head, upper part of neck, hind
neck, back and wings, greyish-brown; a short transverse white band
from beneath the eye, and a slight speck of the same on the lower
eyelid. Six of the secondary quills white on the outer web. Lower
parts white, shaded into light greyish-brown on the sides; tail dull
greyish-brown.
Length to end of tail 13 inches, to end of claws 13 1/2, to end of
wings 11 1/2; extent of wings 22 1/4. Weight 8 oz.

Individuals of both sexes differ much in size, and in the tints of their
plumage.
In an adult male, the tongue is 1 inch and 2 twelfths long, fleshy, and
of the same general form as in the other ducks already described.
The œsophagus is 6 3/4 inches long, passes along the right side, has
a diameter at the top of 4 1/2 twelfths, enlarges about the middle to 9
twelfths, and contracts to 1/2 inch as it enters the thorax. The
proventriculus is 1 inch long, 8 twelfths in its greatest diameter, its
glandules, which are of moderate size, forming a complete belt, as in
all other ducks. The stomach is a muscular gizzard of a roundish
form, 1 inch 5 twelfths long, 1 inch 4 twelfths in breadth; its lateral
muscles 5 twelfths in thickness; its epithelium tough, hard, and
slightly rugous. The intestine is 3 feet 11 inches long; its average
diameter 3 twelfths, its walls thick, and its inner surface villous. The
rectum is 3 inches long; the cœca 2 1/4 inches in length, their
diameter at the commencement 1 twelfth, towards the end 2 twelfths.
The trachea is 5 inches long, much flattened, its rings unossified, its
diameter at the top 2 3/4 twelfths, towards the lower part 3 twelfths,
having scarcely any appearance of dilatation at the part which is so
excessively enlarged in the Golden-eyed Duck, which in form and
habits is yet very closely allied. The lateral muscles are strong, and
there are cleido-tracheal and sterno-tracheal muscles, as in other
ducks.
COMMON GANNET.

Sula bassana, Lacep.


PLATE CCCXXVI. Adult Male and Young.

On the morning of the 14th of June 1833, the white sails of the
Ripley were spread before a propitious breeze, and onward she
might be seen gaily wending her way toward the shores of Labrador.
We had well explored the Magdalene Islands, and were anxious to
visit the Great Gannet Rock, where, according to our pilot, the birds
from which it derives its name bred. For several days I had observed
numerous files proceeding northward, and marked their mode of
flight while thus travelling. As our bark dashed through the heaving
billows, my anxiety to reach the desired spot increased. At length,
about ten o’clock, we discerned at a distance a white speck, which
our pilot assured us was the celebrated rock of our wishes. After a
while I could distinctly see its top from the deck, and thought that it
was still covered with snow several feet deep. As we approached it, I
imagined that the atmosphere around was filled with flakes, but on
my turning to the pilot, who smiled at my simplicity, I was assured
that nothing was in sight but the Gannets and their island home. I
rubbed my eyes, took up my glass, and saw that the strange
dimness of the air before us was caused by the innumerable birds,
whose white bodies and black-tipped pinions produced a blended tint
of light-grey. When we had advanced to within half a mile, this
magnificent veil of floating Gannets was easily seen, now shooting
upwards, as if intent on reaching the sky, then descending as if to
join the feathered masses below, and again diverging toward either
side and sweeping over the surface of the ocean. The Ripley now
partially furled her sails, and lay to, when all on board were eager to
scale the abrupt sides of the mountain isle, and satisfy their curiosity.
Judge, Reader, of our disappointment. The weather, which hitherto
had been beautiful, suddenly changed, and we were assailed by a
fearful storm. However, the whale-boat was hoisted over, and
manned by four sturdy “down-easters,” along with Thomas Lincoln
and my son. I remained on board the Ripley, and commenced my
distant observations, which I shall relate in due time.
An hour has elapsed; the boat, which had been hid from our sight, is
now in view; the waves run high, and all around looks dismal. See
what exertions the rowers make; it blows a hurricane, and each
successive billow seems destined to overwhelm their fragile bark. My
anxiety is intense, as you may imagine; in the midst of my friends
and the crew I watch every movement of the boat, now balanced on
the very crest of a rolling and foaming wave, now sunk far into the
deep trough. We see how eagerly yet calmly they pull. My son
stands erect, steering with a long oar, and Lincoln is bailing the
water which is gaining on him, for the spray ever and anon dashes
over the bow. But they draw near, a rope is thrown and caught, the
whale-boat is hauled close under our lee-board; in a moment more
all are safe on deck, the helm round, the schooner to, and away
under bare poles she scuds toward Labrador.
Thomas Lincoln and my son were much exhausted, and the sailors
required a double allowance of grog. A quantity of eggs of various
kinds, and several birds, had been procured, for wherever sufficient
room for a gannet’s nest was not afforded on the rock, one or two
Guillemots occupied the spot, and on the ledges below the
Kittiwakes lay thick like snow-flakes. The discharging of their guns
produced no other effect than to cause the birds killed or severely
wounded to fall into the water, for the cries of the countless
multitudes drowned every other noise. The party had their clothes
smeared with the nauseous excrements of hundreds of gannets and
other birds, which in shooting off from their nests caused numerous
eggs to fall, of which some were procured entire. The confusion on
and around the rock was represented as baffling all description; and
as we gazed on the mass now gradually fading on our sight, we all
judged it well worth the while to cross the ocean to see such a sight.
But yet it was in some measure a painful sight to me, for I had not
been able to land on this great breeding-place, of which, however, I
here present a description given by our pilot Mr Godwin.
“The top of the main rock is a quarter of a mile wide, from north to
south, but narrower in the other direction. Its elevation is estimated
at about four hundred feet. It stands in Lat. 47° 52´. The surf beats
its base with great violence, unless after a long calm, and it is
extremely difficult to land upon it, and still more so to ascend to the
top or platform. The only point on which a boat may be landed lies
on the south side, and the moment the boat strikes it must be hauled
dry on the rocks. The whole surface of the upper platform is closely
covered with nests, placed about two feet asunder, and in such
regular order that a person may see between the lines, which run
north and south, as if looking along the furrows of a deeply ploughed
field. The Labrador fishermen and others who annually visit this
extraordinary resort of the Gannets, for the purpose of procuring
their flesh to bait their cod-fish hooks, ascend armed with heavy
short clubs, in parties of eight, ten, or more, and at once begin their
work of destruction. At sight of these unwelcome intruders, the
affrighted birds rise on wing with a noise like thunder, and fly off in
such a hurried and confused manner as to impede each other’s
progress, by which thousands are forced downwards, and
accumulate into a bank many feet high; the men beating and killing
them with their clubs until fatigued, or satisfied with the number they
