EAI/Springer Innovations in Communication and Computing
H. Anandakumar
R. Arulmurugan
Chow Chee Onn Editors
Computational
Intelligence and
Sustainable Systems
Intelligence and Sustainable Computing
EAI/Springer Innovations in Communication
and Computing
Series editor
Imrich Chlamtac, CreateNet, Trento, Italy
Editor’s Note
The impact of information technologies is creating a new world that is not yet fully
understood. The extent and speed of the economic, lifestyle, and social changes already
perceived in everyday life are hard to estimate without understanding the technological
driving forces behind them. This series presents contributed volumes featuring the
latest research and development in the various information engineering technologies
that play a key role in this process.
The range of topics, focusing primarily on communications and computing
engineering, includes, but is not limited to, wireless networks; mobile communication;
design and learning; gaming; interaction; e-health and pervasive healthcare;
energy management; smart grids; Internet of Things; cognitive radio networks;
computation; cloud computing; ubiquitous connectivity; and, more generally,
smart living and smart cities. The series publishes a combination of expanded
papers selected from hosted and sponsored European Alliance for Innovation (EAI)
conferences that present cutting-edge, global research, as well as new perspectives
on traditional related engineering fields. This content, complemented with open
calls for contributions of book titles and individual chapters, together maintains
Springer's and EAI's high standards of academic excellence. The audience for the
books consists of researchers, industry professionals, and advanced-level students,
as well as practitioners in related fields of activity, including information and
communication specialists, security experts, economists, urban planners, doctors,
and, in general, representatives of all those walks of life affected by and
contributing to the information revolution.
About EAI
EAI is a grassroots member organization initiated through cooperation between
businesses, public, private, and government organizations to address the global
challenges of Europe's future competitiveness and to link the European research
community with its counterparts around the globe. EAI reaches out to hundreds of
thousands of individual subscribers on all continents and collaborates with an
institutional member base including Fortune 500 companies, government
organizations, and educational institutions, providing a free research and innovation
platform. Through its open, free membership model, EAI promotes a new research and
innovation culture based on collaboration, connectivity, and recognition of
excellence by the community.
Computational Intelligence
and Sustainable Systems
Intelligence and Sustainable Computing
Editors
H. Anandakumar
Department of Computer Science and Engineering
Sri Eshwar College of Engineering
Coimbatore, Tamil Nadu, India

R. Arulmurugan
Bannari Amman Institute of Technology
Sathyamangalam, Tamil Nadu, India
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This book covers ideas, methods, algorithms, and tools for the in-depth study of the
performance and reliability of intelligent sustainable systems. Sustainable computing
is emerging as an important research domain that spans several areas of computer
science, electrical engineering, and other engineering disciplines. Sustainability is
of high significance owing to rapidly growing demand and limited resources. This book
brings together numerous research contributions relating to energy-aware and
thermal-aware management of computing resources. The prevailing challenges in
sustainable energy systems lie in data mining, optimization, and various control
tasks. In this book, we present techniques and advancements in computational
intelligence that can be used to overcome and solve complex tasks in sustainable
intelligent systems. The major chapters of this book address challenges and topics
in textual recognition, wireless sensor networks, remote sensing, green practices,
artificial intelligence, cluster computing, cognitive radio networks, deep neural
networks, control systems, vehicular and ad hoc networks, analogous network
computation, mobile cellular networking, dynamic cooperative communication,
MapReduce optimization, machine learning, large-scale data, device-to-device
communications, interference-limited networks, multicasting in networks, network
coding, terrestrial networks, satellite networks, and many more.
This book opens the door to current research in promising technological areas for
future wireless communication systems. I would like to thank all the authors and
co-authors who have contributed to this book and enhanced its scope, and Eliska
Vlckova and Jakub Tlolka from EAI/Springer International Publishing AG for their
great support.
We anticipate that this book will open new avenues for further research and
technological improvement. The chapters provide a complete overview of intelligent
computing and sustainable systems. This book will be useful for academicians,
research scholars, and graduate students in engineering disciplines.
1.1 Introduction
Machine learning (ML) techniques have wide applications in pattern recognition,
image classification, and medical diagnosis (Liu et al. 2017). The major advantage
of these techniques is that the computer can function without being explicitly
programmed (Ruder 2017). ML is the area of computer science that enables computers
to learn automatically, using complex models and algorithms (Wang et al. 2018). It
focuses mostly on making predictions and is closely related to mathematical
optimization; it is effective because it makes full use of all the available
training data.
The deep belief network (DBN) was developed in 2006 (Hinton et al. 2006). The
development of deep network models is supported by neuromorphic systems (Schmitt
et al. 2017). Artificial neural networks (ANNs) (Raith et al. 2017), which form the
basis of deep learning, remain an active research area. Training a neural network
(NN) takes considerable time. ANNs are trained with supervised learning algorithms,
while layer-wise greedy unsupervised learning is used to pretrain the network.
Depending on the complexity of the problem, the number of neurons and the number of
layers in the network are increased, which yields the deep neural network
architecture.
S. N. Shivappriya (*)
Department of Electronics and Communication Engineering, Kumaraguru College
of Technology, Coimbatore, Tamilnadu, India
e-mail: shivappriya.sn.ece@kct.ac.in
R. Harikumar
Department of Electronics and Communication Engineering, Bannari Amman Institute
of Technology, Sathyamangalam, Tamilnadu, India
Advances in technology led to deep learning (Lei and Ming 2016), which uses
multiple layers to extract features automatically. This has increased the use of
deep neural networks in applications such as speech recognition, visual object
recognition, and object detection, and in other domains such as drug discovery and
genomics (Holder and Gass 2018). Based on the input data, a DNN learns its internal
parameters, computing the representation in each layer from the representation in
the previous layer. It provides breakthrough performance (Raju and Shivappriya
2018).
Generally, an autoencoder (Yu et al. 2018) is a type of ANN that uses unsupervised
learning to code data efficiently. It is generally preferred for dimensionality
reduction of images. An autoencoder is otherwise known as an autoassociator or
Diabolo network. An autoencoder is very similar to a multilayer perceptron (MLP): a
feedforward, nonrecurrent neural network with an input layer and an output layer
connected by one or more hidden layers (Mohd Yassin et al. 2017). Its constraint is
that the output layer has the same number of nodes as the input layer, so that the
network can reconstruct its own input by approximating the identity function. This
identity function is a trivial function, and the compression task would be very
difficult if the inputs were completely random. The ANN algorithm has the advantage
that it can discover some of the correlations when the input features are
correlated (LeCun et al. 2015). The performance of the autoencoder depends on the
number of hidden units.
An autoencoder with a small number of hidden units ends up learning a
low-dimensional representation very similar to that of PCA (principal component
analysis). When instead there are many hidden units, perhaps even more than the
number of pixels in the input image, improved performance can still be achieved.
When a neuron is in the active state, its output value is close to 1; otherwise,
its output value is close to 0, meaning the neuron is in the inactive state. The
neurons are considered to be inactive most of the time. When the activation
function used is tanh, an inactive neuron has an output value close to -1. When the
training set is small enough to fit in memory, the forward passes and the
backpropagation can easily be computed over the whole dataset; when the data is
very large, backpropagation can still be performed easily by working on
mini-batches.
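As an illustration of the structure just described, here is a minimal autoencoder sketch (an assumption for illustration; the chapter does not give code), written in Python with the Keras API: an MLP whose output layer has the same number of nodes as the input layer, trained to reconstruct its own input, with the hidden width controlling how compressed the learned representation is.

```python
import tensorflow as tf

n_inputs, n_hidden = 784, 64  # 28x28 MNIST pixels; 64 hidden units (illustrative)

inputs = tf.keras.Input(shape=(n_inputs,))
encoded = tf.keras.layers.Dense(n_hidden, activation="tanh")(inputs)      # encoder
decoded = tf.keras.layers.Dense(n_inputs, activation="sigmoid")(encoded)  # decoder
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, n_inputs) / 255.0
# The target equals the input: the network learns to approximate the identity.
autoencoder.fit(x_train, x_train, epochs=5, batch_size=128)
```

With few hidden units the learned code behaves like a nonlinear PCA; widening the hidden layer trades compression for reconstruction quality.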
There are different types of autoencoders, which can be used to improve the ability
to capture important information and to learn richer representations. A few of the
autoencoder types are the denoising autoencoder, sparse autoencoder, variational
autoencoder, and contractive autoencoder.
• A denoising autoencoder (Du et al. 2018) takes a partially corrupted input and,
during training, recovers the original undistorted input. This is achieved by
assuming that the higher-level representations are stable and robust to corruption
of the input.
• A sparse autoencoder (Liu et al. 2018) is used when sparsity is required on the
hidden units, i.e., when more hidden units are required than the number of input
units; useful structures can then be learned from the input data by imposing a
sparsity constraint on the hidden-unit activations.
To classify real-time data, hand-engineered, task-specific features are often
complex, difficult, and time-consuming to design. For example, it is not immediately
clear what the appropriate features should be for object detection in a crowded
place, for unshaped objects in a manufacturing industry, or for face recognition and
other biometric applications. This difficulty is more noticeable with multimodal
data, as the features have to relate multiple data sources. In this work, we show
how deep learning and autoencoders can be applied to this challenging task to
extract features automatically.
1.2 Methodology
During the learning period, if the activities of the hidden units are regularized,
the DNN gives better performance (Ba and Frey 2013). The training process of a DNN
consists of the following steps:
• Firstly, a large dataset such as MNIST is collected, since DNNs work well on
large data.
• Then the activation function is chosen so that it does not create problems during
training. Here the tanh function is used, as it is zero-centered and results in
faster convergence.
• Then the number of hidden units and layers is defined, after which the weights
are initialized with small random numbers so that the symmetry between different
units is broken. The weights are chosen in an intermediate range.
• The next step is to choose the learning rate, which should not be too small, or
training may take very long to converge.
• Finally, the number of epochs, or training iterations, is defined.
Figure 1.2 shows the architecture of the deep neural network with all of its layers
and the learning information (Ng et al. 2015). The training choices listed above
are sketched in code below.
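This sketch uses Python with the Keras API (an assumption; the chapter does not specify an implementation): tanh activations, small random weight initialization to break symmetry, a chosen learning rate, and a fixed number of epochs.

```python
import tensorflow as tf

def build_dnn(num_hidden_layers=2, hidden_units=128, learning_rate=1e-4):
    # Small random weights break the symmetry between different units.
    init = tf.keras.initializers.RandomNormal(stddev=0.01)
    layers = [tf.keras.layers.Flatten(input_shape=(28, 28))]  # MNIST images
    for _ in range(num_hidden_layers):
        # tanh is zero-centered, which tends to give faster convergence.
        layers.append(tf.keras.layers.Dense(hidden_units, activation="tanh",
                                            kernel_initializer=init))
    layers.append(tf.keras.layers.Dense(10, activation="softmax"))  # 10 digit classes
    model = tf.keras.Sequential(layers)
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
model = build_dnn()
model.fit(x_train / 255.0, y_train, epochs=10, batch_size=128)  # epochs chosen last
```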
When DNNs and autoencoders are implemented in different models, high accuracy can
be obtained (Gottimukkula 2016; Dong et al. 2018).
• The compression and decompression functions in an autoencoder are learned from
the data itself, unlike fixed schemes such as JPEG.
• The network is trained in such a way that the difference between the input and
the output is as low as possible.
• The compression task is difficult when the input is completely random; the
network succeeds when the input features are correlated. Its working is similar to
PCA, which has only linear transformations, but an autoencoder additionally has
nonlinearities in its encoding.
• In order to discover features when there are many hidden units, a sparsity
constraint is imposed on the hidden layers.
• All the usual NN training techniques apply, such as backpropagation,
regularization, dropout, and RBM pretraining.
1.3 Algorithm
Here, training is performed only when the training samples are labeled, that is,
when each belongs to a proper class; in the unsupervised learning methodology,
labels are not required (Yang et al. 2015). Supervised learning proceeds as
follows:
• The type of training sample is determined; in this paper the data used is
handwritten digits. The training set is collected so as to reflect the real-world
usage of the function.
• The representation of the input features for the learned function is determined.
The accuracy depends on how the input object is represented; the input object is
transformed into a feature vector containing descriptive features of the object.
• The structure of the learned function is determined, after which the gathered
training set can be run through the supervised algorithm.
Sun et al. (2017) show how unsupervised learning can be used for unlabeled
datasets; its main goal is to find structure or clusters in the data. One of the
major approaches to unsupervised learning is clustering, which groups together sets
of objects that belong to the same group and is generally formulated as an
optimization problem (Harikumar et al. 2014). Another type of algorithm is
semi-supervised learning (Parloff and Metz 2016), which falls between supervised
learning and unsupervised learning and requires only a small amount of labeled
data.
The unlabeled data can be used for defining the cluster boundaries, whereas the
labeled data can be used for labeling the clusters. It has been shown that a small
amount of labeled data used in conjunction with unlabeled data can produce
considerably higher accuracy while learning. In reinforcement learning (Baldi
2012), the training of the weights is performed by providing the current state of
the environment. It is usually referred to as dynamic programming in the research
and control literature. The main difference between reinforcement learning
algorithms and the classical techniques is that reinforcement learning does not
require knowledge of the MDP (Markov decision process).
This work focuses on the classification of handwritten images from the open-source
labeled MNIST dataset with DNN and stacked autoencoder (SAE) classifier models.
1.3.2 Parameters
Machine learning uses weight regularization techniques to reduce overfitting of the
model; the model makes better predictions when overfitting is eliminated.
Regularization creates a less complex model when the dataset has many features.
There are two common types, L1 regularization and L2 regularization. In some
situations L1 regularization is used, as it is well known to perform feature
selection, but L2 regularization mostly outperforms L1, as it obtains better
results on the remaining variables (Arulmurugan et al. 2017). L2 regularization is
also known as weight decay, as it has a strong influence on the scale of the
weights; its coefficient should be as small as possible. In this paper, L2
regularization is used.
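As a concrete illustration (assumed, not taken from the chapter), an L2 penalty can be attached to a layer's weights in Keras as follows; the coefficient 1e-4 is an arbitrary small value, in keeping with the advice above.

```python
import tensorflow as tf

# Adds 1e-4 * sum(w**2) to the training loss, shrinking the weights (weight decay).
l2 = tf.keras.regularizers.l2(1e-4)
layer = tf.keras.layers.Dense(128, activation="tanh", kernel_regularizer=l2)
```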
One of the parameters of the sparsity regularizer is the sparsity proportion, which
controls the sparsity of the output of the hidden layer. It should be low, so that
each neuron in the hidden layer specializes, giving a high output only for a small
number of training examples. Its value must lie between 0 and 1. Here, every
element's position in the matrix is stored.
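A sparsity regularizer of this kind can be sketched as a Kullback-Leibler penalty that keeps each hidden unit's mean activation near a target sparsity proportion rho. The Keras version below is an illustrative assumption; the function name and the constants rho and beta are not from the chapter.

```python
import tensorflow as tf

rho = 0.05   # target sparsity proportion; must lie between 0 and 1
beta = 1.0   # weight of the sparsity penalty (assumed)

def kl_sparsity(activations):
    # Mean activation of each hidden unit over the batch; sigmoid keeps it in (0, 1).
    rho_hat = tf.reduce_mean(activations, axis=0)
    kl = rho * tf.math.log(rho / rho_hat) + \
         (1 - rho) * tf.math.log((1 - rho) / (1 - rho_hat))
    return beta * tf.reduce_sum(kl)

inputs = tf.keras.Input(shape=(784,))
hidden = tf.keras.layers.Dense(256, activation="sigmoid",
                               activity_regularizer=kl_sparsity)(inputs)
outputs = tf.keras.layers.Dense(784, activation="sigmoid")(hidden)
sparse_ae = tf.keras.Model(inputs, outputs)
sparse_ae.compile(optimizer="adam", loss="mse")
```

A low rho pushes each hidden unit to respond strongly to only a few training examples, which is what lets an over-complete hidden layer still learn useful structure.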
The learning model's main objective is to minimize the value of the loss function.
Of the different methods for doing so, the most commonly used is backpropagation,
in which the values of the weight vector are repeatedly adjusted to minimize the
loss.
The model's accuracy is determined when there is no further learning, that is, when
all the parameters of the model have been learned. The percentage of
misclassification is calculated after the model is fed the test samples: the number
of mistakes the model makes, the zero-one loss, is recorded by comparing the
predictions with the true targets. An accuracy of 100% is obtained when all the
labels are correctly predicted.
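As a small worked example (assumed, not from the chapter), the misclassification percentage is the zero-one loss summed over the test samples and divided by their count:

```python
import numpy as np

def misclassification_rate(y_pred, y_true):
    # Zero-one loss: 1 for every mistake, 0 for every correct prediction.
    zero_one = (np.asarray(y_pred) != np.asarray(y_true)).astype(int)
    return 100.0 * zero_one.sum() / len(zero_one)

print(misclassification_rate([1, 2, 3, 4], [1, 2, 0, 4]))  # one mistake in four -> 25.0
```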
1.3.2.5 Time
It is another important parameter to determine the strength of the network. The lower
the time and the higher the accuracy, the greater is the strength of the network.
1.4 Dataset
In the field of machine learning, datasets are an integral part. Major advances in
this field result from advances in learning algorithms, among the best of which is
deep learning. High-quality training can be obtained on these datasets. Comparing
different DNN models by classification performance is a difficult task; here the
MNIST dataset is classified. An easier task is less complex than a harder one, and
a simpler task also requires less energy, whereas a harder task consumes more. For
classifying general images, models such as AlexNet, VGG16, GoogLeNet, and ResNet
are used; for classifying digits, LeNet-5 is used on the MNIST dataset. Many AI
tasks with publicly available datasets can be used to calculate the accuracy of a
DNN model, so that the accuracy of different approaches can be compared. Image
classification is the task in which an entire image is assigned to the 1 of N
classes that it most likely belongs to, with no localization.
Since deep networks work very well compared to shallow networks, they achieve
greater performance over different datasets. A deep network can compactly represent
functions that a shallow network would need an exponentially larger number of
hidden units to represent. When deep networks are used on images, part-whole
decompositions have to be learned. Since MNIST images are used here, the first
layer of the deep network can perform edge detection, the second layer can group
the edges together to detect contours, and the following layers can detect simple
parts of objects, followed by further stages of processing. In this method, the
weights of the deep network are randomly initialized and then trained using the
labeled training set, with a supervised learning objective in which the gradient
descent algorithm is applied to reduce the training error.
For this method, labeled data is used for training:
• Firstly, the MNIST dataset is loaded, and the training and test sets are
specified.
• Secondly, the different network layers are defined: the input layer,
convolutional layer, ReLU layer, max pooling layer, fully connected layer, softmax
layer, and finally the classification layer.
• Thirdly, the training options are specified, and the network is trained on the
MNIST data. Finally, the images are classified and the accuracy is computed (a
sketch of this pipeline in code follows the list).
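The chapter's layer list resembles a MATLAB Deep Learning Toolbox pipeline; the following Python/Keras equivalent is therefore an assumption offered for illustration, covering the same steps.

```python
import tensorflow as tf

# 1. Load MNIST and specify the training and test sets.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # shape (60000, 28, 28, 1)
x_test = x_test[..., None] / 255.0

# 2. Define the network layers: input, convolution, ReLU, max pooling,
#    fully connected, softmax (the class is read off the softmax output).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# 3. Specify the training options (learning rate 0.0001, as in Table 1.1) and train.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, batch_size=128)

# 4. Classify the test images and compute the accuracy.
loss, acc = model.evaluate(x_test, y_test)
print(f"Test accuracy: {acc:.4f}")
```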
Firstly, training is performed using the commonly preferred learning rate of
0.0001, which is then increased in steps; the variations in the results are shown
in Table 1.1. Figure 1.3 displays the set of 20 images classified using the deep
neural network algorithm. After this training is completed, the learning rate is
raised to 0.001 for the same set of images, maintaining the same number of
iterations and the same deep neural network algorithm, as shown in Table 1.2.
Table 1.1 Time and accuracy calculation of MNIST images for learning rate = 0.0001

Epoch  Iteration  Time elapsed (s)  Mini-batch loss  Mini-batch accuracy (%)  Base learning rate
1      50         8.31              3.0845           13.28                    1.00e-04
2      100        12.74             0.7276           74.22                    1.00e-04
3      150        14.52             0.4740           83.59                    1.00e-04
4      200        16.25             0.3081           92.19                    1.00e-04
5      250        17.98             0.2324           92.97                    1.00e-04
6      300        19.73             0.1545           96.60                    1.00e-04
7      350        23.18             0.0944           97.09                    1.00e-04
8      400        24.94             0.0666           98.44                    1.00e-04
9      450        26.65             0.0559           99.22                    1.00e-04
10     500        28.40             0.0442           100.00                   1.00e-04
Table 1.2 Time and accuracy calculation of MNIST images for learning rate = 0.001

Epoch  Iteration  Time elapsed (s)  Mini-batch loss  Mini-batch accuracy (%)  Base learning rate
1      50         1.96              1.4291           71.88                    0.0010
2      100        3.80              1.3971           79.41                    0.0010
3      150        5.88              1.2292           83.16                    0.0010
4      200        8.03              0.9592           85.53                    0.0010
5      250        10.26             0.9348           87.59                    0.0010
6      300        12.28             0.8191           89.84                    0.0010
7      350        16.13             0.5598           91.56                    0.0010
8      400        18.00             0.2962           92.41                    0.0010
9      450        19.88             0.06825          94.62                    0.0010
10     500        22.90             0.0912           96.84                    0.0010
Finally, the learning rate is increased to 0.01, and there is a vast difference in
the result obtained, as shown in Table 1.3. Here NaN (not a number) indicates that
the mini-batch loss diverged during training. From the results obtained, it is
observed that as the learning rate increases, the accuracy of image classification
decreases; to achieve high accuracy in image classification, the learning rate
should be maintained at 0.0001. Another set of 20 images is taken and trained using
the stacked autoencoder, as shown in Fig. 1.4, and its results are compared with
those of the deep neural network.
For each iteration of the stacked autoencoder, the results are provided:
When 50 iterations are completed, the time consumed is 52 s and the loss during
image classification is 0.0379, as shown in Fig. 1.5. When 100 iterations are
completed, the time consumed increases to 1 min 32 s, while the loss is reduced to
0.0292, as shown in Fig. 1.6.
Table 1.3 Time and accuracy calculation of MNIST images for learning rate = 0.01

Epoch  Iteration  Time elapsed (s)  Mini-batch loss  Mini-batch accuracy (%)  Base learning rate
1      50         8.31              2.0845           4.69                     0.0100
2      100        12.74             NaN              6.28                     0.0100
3      150        14.52             NaN              7.86                     0.0100
4      200        16.25             NaN              7.94                     0.0100
5      250        17.98             NaN              8.53                     0.0100
6      300        19.73             NaN              9.72                     0.0100
7      350        23.18             NaN              10.81                    0.0100
8      400        24.94             NaN              11.59                    0.0100
9      450        26.65             NaN              12.16                    0.0100
10     500        28.40             NaN              13.28                    0.0100
When 150 iterations are completed, the time consumed is 1 min 36 s and the loss is
reduced to 0.0268, as shown in Fig. 1.7. At 200 iterations, the time is 2 min 8 s
and the loss falls to 0.0262, as shown in Fig. 1.8.
At 250 iterations, the time is 2 min 38 s and the loss falls to 0.0259, as shown in
Fig. 1.9. At 300 iterations, the time is 3 min 8 s and the loss remains constant at
0.0259, as shown in Fig. 1.10.
At 350 iterations, the time is 3 min 40 s and the loss falls to 0.0258, as shown in
Fig. 1.11. At 400 iterations, the time is 4 min 40 s and the loss remains constant
at 0.0258, as shown in Fig. 1.12.
Table 1.4 Time consumed and loss of MNIST images by stacked autoencoder

Epoch  Iteration  Time elapsed (min:s)  Loss
1      50         0:52                  0.0379
2      100        1:32                  0.0292
3      150        1:36                  0.0268
4      200        2:08                  0.0262
5      250        2:38                  0.0259
6      300        3:08                  0.0259
7      350        3:40                  0.0258
8      400        4:06                  0.0258
9      450        4:43                  0.0258
10     500        5:22                  0.0258
At 450 iterations, the time is 4 min 43 s and the loss remains constant at 0.0258,
as shown in Fig. 1.13. At 500 iterations, the time is 5 min 22 s and the loss
remains constant at 0.0258, as shown in Fig. 1.14.
The time and loss for each iteration of the stacked autoencoder are tabulated in
Table 1.4. After evaluating the DNN and SAE models on the MNIST dataset, although
both models achieved 100% accuracy, it is found that the constructed SAE model
outperforms the DNN model in terms of time consumption: the DNN model at its best
learning rate of 0.0001 consumed 28.4 s, whereas the constructed SAE model consumed
only 5.22 s.
1.6 Summary
Deep learning has had a profound impact on machine learning technologies.
Historically, the field faced many ups and downs owing to limited infrastructure
availability, but the increasing demand for deep learning has brought it to an
altogether new level. This paper shows that the stacked autoencoder (SAE) can
outperform the DNN model when the MNIST dataset is applied to it. Both techniques
aim to improve accuracy; however, the time consumed to achieve the higher accuracy
is equally important.
References
Arulmurugan, R., Sabarmathi, K. R., & Anandakumar, H. (2017). Classification of sentence level
sentiment analysis using cloud machine learning techniques. Cluster Computing. https://doi.org/
10.1007/s10586-017-1200-1.
Ba, J., & Frey, B. (2013). Adaptive dropout for training deep neural networks. In
Proceedings of Advances in Neural Information Processing Systems, Lake Tahoe, NV,
USA (pp. 3084–3092).
Baldi, P. (2012). Autoencoders, unsupervised learning, and deep architectures. ICML Unsupervised
and Transfer Learning, 27(37–50), 1.
Dong, P. W., Yin, W., Shi, G., Wu, F., & Lu, X. (2018). Denoising prior driven deep neural network
for image restoration, arXiv:1801.06756v1 [cs.CV] pp. 1–13.
Du, L. Y., Shin, K. J., & Managi, S. (2018). Enhancement of land-use change modeling using
convolutional neural networks and convolutional denoising autoencoders, arXiv:1803.01159v1
[stat.AP].
Galloway, A., Taylor, G. W., & Moussa, M. (2018). Predicting adversarial examples with high
confidence. ICML.
Gottimukkula, V. C. R. (2016). Object classification using stacked autoencoder. North Dakota:
North Dakota State University.
Harikumar, R., Shivappriya, S. N., & Raghavan, S. (2014). Comparison of different
optimization algorithms for cardiac arrhythmia classification. INFORMATION - An
International Interdisciplinary Journal (International Information Institute,
Tokyo, Japan), 17(8), 3859.
Hinton, G. E., Osindero, S., & Teh, Y.-W. (2006). A fast learning algorithm for
deep belief nets. Neural Computation, 18(7), 1527–1554.
Holder, J., & Gass, S. (2018). Compressing deep neural networks: A new hashing pipeline using
Kac’s random walk matrices. Proceedings of the 21st International Conference on Artificial
Intelligence and Statistics (AISTATS) 2018, Lanzarote, Spain. JMLR: W&CP, vol. 7X.
Ishfaq, A. H., & Rubin, D. (2018). TVAE: Triplet-based variational autoencoder
using metric learning (pp. 1–4). ICLR 2018 Workshop Submission.
Kohli, D., Gopalakrishnan, V., & Iyer, K. N. (2017). Learning rotation invariance in deep
hierarchies using circular symmetric filters. ICASSP, Proceedings of the IEEE International
Conference of Acoustics, and Speech Signal Processing (pp. 2846–2850).
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
Lei, T., & Ming, L. (2016). A robot exploration strategy based on Q-learning network, IEEE
International Conference on Real-time Computing and Robotics RCAR 2016 (pp. 57–62).
Liu, Z. W., Liu, X., Zeng, N., Liu, Y., & Alsaadi, F. E. (2017). A survey of deep neural network
architectures and their applications. Neurocomputing, 234, 11–26.
Liu, T., Taniguchi, K. T., & Bando, T. (2018). Defect-repairable latent feature extraction of driving
behavior via a deep sparse autoencoder. Sensors, 18(2), 608.
Meyer, D. (2015). Introduction to autoencoders. http://www.1-4-5.net/~dmm/papers/intelligence-deep-machine-learning/intro_to_autoencoders.pdf
Mohd Yassin, R., Jailani, M. S. A., Megat Ali, R., Baharom, A. H. A. H., & Rizman,
Z. I. (2017). Comparison between cascade forward and multi-layer perceptron neural
networks for NARX functional electrical stimulation (FES)-based muscle model.
International Journal on Advanced Science, Engineering and Information Technology,
7(1), 215.
Ng, A., Ngiam, J., Foo, C. Y., Mai, Y., Suen, C., Coates, A., Maas, A., et al.
(2015). Deep learning tutorial. http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial
Parloff, R., & Metz, J. (2016). Why deep learning is suddenly changing your life. Published
electronically 28 Sept 2016. http://fortune.com/ai-artificial
Raith, S., et al. (2017). Artificial neural networks as a powerful numerical tool to classify specific
features of a tooth based on 3D scan data. Computers in Biology and Medicine, 80, 65–76.
Raju, D., & Shivappriya, S. N. (2018). A review on development in machine learning
algorithms and its resources. International Journal of Pure and Applied
Mathematics, 118(5), 759–768. ISSN: 1311-8080 (printed version); ISSN: 1314-3395
(online version).
Ruder, S. (2017). An overview of multi-task learning in deep neural networks. arXiv preprint
arXiv:1706.05098.
Schmitt, S., et al. (2017). Neuromorphic hardware in the loop: Training a deep spiking network on
the BrainScaleS wafer-scale system. Proceedings of the International Joint Conference on
Neural Networks, 2017, 2227–2234.
Sun, G., Yen, G., & Yi, Z. (2017). Evolving unsupervised deep neural networks for learning
meaningful representations. IEEE Transactions on Evolutionary Computation, 1. https://doi.
org/10.1109/TEVC.2018.2808689.
Wang, X., Takaki, S., & Yamagishi, J. (2018). Investigating very deep highway networks for
parametric speech synthesis. Speech Communication, 96, 1–9.
Yang, H. F., Lin, K., & Chen, C.-S. (2015). Supervised learning of
semantics-preserving hash via deep convolutional neural networks. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 8828(c), 1–15. arXiv:1507.00101v2
[cs.CV], 14 Feb 2017.
Yu, J., Hong, C., Rui, Y., & Tao, D. (2018). Multi-task autoencoder model for recovering human
poses. IEEE Transactions on Industrial Electronics, 65(6), 5060–5068.
Chapter 2
Soft Computing-Based Void Recovery
Protocol for Mobile Wireless Sensor
Networks
2.1 Introduction
A wireless sensor network (WSN) is an assortment of sensor nodes that are able to
sense their surroundings. The information received can be used for further
processing in real-time systems. Every node can sense the environment, route the
sensed information, and send the information back to the outside world through
wireless connectivity.
Nowadays, the networks being built are bidirectional, so that a particular network
can be controlled from any other location by enabling control of sensor activity.
Each node has various components: a transmitter to transmit data to the outside
world, a receiver to receive data from other nodes, a controller to control its
activities, memory for processing, and a limited power supply (Fig. 2.1).
Sometimes a sensor node may have a mobility unit that enables it to move within the
network. If the nodes have mobility units, the network topology changes from time
to time, which also affects the path to the sink node. The nodes in the network
therefore have to update their neighbour details at certain intervals to stay
informed about their neighbours. The lifetime of a node in the network decreases,
because mobility drains the node's energy.
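As a hedged sketch of this neighbour-update behaviour (the chapter gives no protocol details, so the names and the timeout below are assumptions), each node can keep a neighbour table stamped with the last time a beacon was heard and periodically drop stale entries:

```python
import time

NEIGHBOR_TIMEOUT = 30.0  # seconds before a silent neighbour is dropped (assumed)

class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbors = {}  # neighbour id -> time the last beacon was heard

    def on_beacon(self, neighbor_id):
        # Called whenever a beacon is received from a nearby node.
        self.neighbors[neighbor_id] = time.monotonic()

    def refresh_neighbors(self):
        # Periodic maintenance: forget neighbours that have moved out of range.
        now = time.monotonic()
        self.neighbors = {nid: t for nid, t in self.neighbors.items()
                          if now - t < NEIGHBOR_TIMEOUT}
```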
Energy is a major constraint for a sensor node. Every node has a rechargeable
battery that can draw energy from an external source, such as solar power, which
avoids the early death of the node in the network; early node death can create
holes in the network and cause security breaches in a real-time network. A
rechargeable battery can thus extend the lifetime of the sensor.
Wireless sensor networks have applications in various fields such as disaster
relief, healthcare, agriculture, and traffic monitoring.
Disaster relief applications include forest fire detection, military applications,
human relief actions, etc. In forest fire detection, the sensors are equipped with
thermometers to sense the temperature of the environment; if the temperature
exceeds a certain limit, a signal about the event is sent to the base station. In
human relief actions, sensors are used to rescue humans from disasters such as
earthquakes and collapsed buildings: the sensor senses a human's location and
assists in the rescue. Timely intimation of a disaster can save the lives of many
humans, as well as wildlife in the forest.