De-Shuang Huang, Kang-Hyun Jo, Xiao-Long Zhang (Eds.)

Intelligent Computing Theories and Application
14th International Conference, ICIC 2018
Wuhan, China, August 15–18, 2018
Proceedings, Part II

Lecture Notes in Computer Science 10955 (LNCS 10955)
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, Lancaster, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Zurich, Switzerland
John C. Mitchell, Stanford University, Stanford, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan, Indian Institute of Technology Madras, Chennai, India
Bernhard Steffen, TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7409
Editors
De-Shuang Huang, Tongji University, Shanghai, China
Kang-Hyun Jo, University of Ulsan, Ulsan, Korea (Republic of)
Xiao-Long Zhang, Wuhan University of Science and Technology, Wuhan, China
LNCS Sublibrary: SL3 – Information Systems and Applications, incl. Internet/Web, and HCI
This Springer imprint is published by the registered company Springer International Publishing AG
part of Springer Nature
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
General Co-chairs
Tutorial Chair
Publication Co-chairs
Workshop Co-chairs
Publicity Co-chairs
Program Committee
Additional Reviewers
The Wide and Deep Flexible Neural Tree and Its Ensemble in Predicting Long Non-coding RNA Subcellular Localization
Jing Xu, Peng Wu, Yuehui Chen, Hussain Dawood, and Dong Wang

A Grouping Genetic Algorithm Based on the GES Local Search for Pickup and Delivery Problem with Time Windows and LIFO Loading
Feng Zhang, Bin Li, and Kun Qian
Deep Convolutional Neural Network for Fog Detection

Abstract. Fog detection has become more and more important in recent years, and real-time monitoring information is very helpful for people to organize production and daily life. In this paper, based on meteorological satellite data (Himawari-8 standard data, HSD8), a Convolutional Neural Network (CNN) is used to detect fog. Since HSD8 consists of 16 channels, the original CNN is extended to multiple channels for HSD8. The Multiple Channels CNN (MCCNN) can fully and effectively exploit spatial and spectral information. A dataset covering the Anhui area is created, consisting of ground station data and grid data. Different image sizes and convolutional kernels are used to validate the proposed method. The experimental results show that the proposed method achieves 91.87% accuracy.
1 Introduction
Fog has becomes one of the major catastrophic weather. The formation of fog is caused
by a large number of tiny water droplets or ice crystals floating in the air near the
ground. Therefore, if the horizontal visibility is less than 1 km, it can cause disastrous
weather. In recent years, with the rapid development of socio-economic construction,
especially land, sea and air transportation, fog detection the has become more and more
important. Fog generally has the characteristics of rapid generation and development
[1]. Traditional ground-based observation are limited to the local areas, it can not meet
the requirements of large-scale and fast monitoring.
The earliest use of satellite remote sensing technology in fog monitoring was the identification of cloud formation and dissipation in visible-light images. In the 1960s, with the development of meteorological satellites, European and American countries started to use satellite remote sensing technology for fog identification [2]. The methods for monitoring heavy fog mainly include the two-channel method, principal component analysis, and texture processing [3]. With further research on satellite remote sensing, more scholars now use different spectral characteristics to study methods of fog identification.
Although the previous methods perform well, they still have some drawbacks. Most of them are based on manual feature engineering, so their performance heavily depends on domain knowledge. Moreover, satellite remote sensing data has rich spectral information that cannot be fully exploited by these traditional approaches [4]. Deep learning, by contrast, is an effective method that can extract features from masses of data automatically. In the last few years, deep learning has become a powerful machine learning technique for learning data-dependent, hierarchical feature representations from raw data. The Convolutional Neural Network (CNN) is a deep learning algorithm that has been widely applied to image processing and computer vision problems such as image classification, image segmentation [5], action recognition, and object detection [6].
Himawari-8 Standard Data (HSD8) has very high spatial and spectral resolution: every ten minutes, the satellite returns data for 16 channels. Such a large dataset makes the CNN framework more suitable and more effective for classification problems [7, 8]. Deep learning has also been widely used in hyperspectral image classification, where the data usually contain hundreds of spectral bands; similar to hyperspectral image data, HSD8 consists of 16 channels.
In this research, we propose a CNN framework that classifies HSD8 data into multiple classes. The experimental results show that the proposed method achieves better classification performance.
2 First Section
2.1 CNN
The Convolutional Neural Network (CNN) is an efficient recognition method that has attracted a great deal of attention in recent years. Generally, the basic structure of a CNN includes two kinds of layers. The first is the feature extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, and once a local feature is extracted, its positional relationship to other features is fixed as well. The second is the feature mapping layer: the feature mapping structure uses a sigmoid function with a small influence kernel as the activation function, which makes the feature map invariant to displacement. In addition, because the neurons on one mapping surface share their weights, the number of free network parameters is reduced. This special structure of locally shared weights gives the CNN unique advantages in speech recognition [9] and image classification [10]. Weight sharing reduces the complexity of the network; in particular, multi-dimensional input images can be fed directly into the network, avoiding the complexity of data reconstruction during feature extraction.
A CNN is a feed-forward neural network. The structure of the basic neuron model is shown in Fig. 1: x is the input and h is the neuron output. When multiple such units are combined in a hierarchical structure, a neural network model is established; the network connects many of these neurons so that the output of one neuron serves as the input to another. Neural networks can have a wide variety of topologies. One of the simplest is the multi-layer fully connected feed-forward neural network: its input is connected to every neuron in the first layer, the output of each neuron in one layer is connected to the input of each neuron in the next layer, and the output of the last neuron is the output of the entire network.
[Figure: inputs x1, x2, x3 and a bias unit feed hidden units a1(2), a2(2), a3(2), which produce the output h_{w,b}(x).]
Fig. 1. Neurons in adjacent layers are all connected to each other in the neural network
A CNN alternates convolutional layers with pooling layers, mimicking the complex- and simple-cell structure of the mammalian visual cortex, and ends with a fully connected network. In an ordinary deep neural network, each neuron connects to all the neurons in the next layer; that is, each hidden activation is calculated by multiplying the entire input x by the weights W of that layer. In a CNN, by contrast, each hidden activation h is obtained from a small local region of the input through weights W that are shared across the entire input space: neurons belonging to the same feature map share the same weights, as illustrated by the typical CNN in Fig. 2. The advantage of weight sharing is that it reduces the total number of trainable parameters, leading to more efficient training and more efficient models.
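To make the parameter saving concrete, the following minimal sketch compares a fully connected layer with a convolutional layer on the same input. All sizes (16-channel 16×16 patches, 32 output maps, 3×3 kernels) are invented for illustration and do not come from the paper.

```python
import torch.nn as nn

# Assumed illustrative sizes, not taken from the paper.
fc = nn.Linear(16 * 16 * 16, 32 * 16 * 16)          # dense: every input to every output
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # local receptive field, shared weights

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(fc))    # 33,562,624 parameters
print(count(conv))  # 4,640 parameters (32*16*3*3 weights + 32 biases)
```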
The pooling layer has two functions. The first is to retain the main features while reducing the number of parameters (similar in spirit to PCA [11]) and the amount of computation, which prevents over-fitting and improves the generalization ability of the model. The second is to introduce invariance. Pooling comes in max pooling and average pooling variants; the typical pooling function is max pooling, which divides the input data into a set of non-overlapping windows and outputs the maximum of each subregion, reducing the computational complexity of the upper layers and providing a form of translational invariance (see the sketch below). The computational chain of the CNN terminates in a fully connected network, which combines the feature information extracted by the convolutional and pooling layers; we use it mainly for classification. In this article, we explore the appropriate architecture and strategy for CNN-based HSD8 classification.
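As a minimal illustration of max pooling over non-overlapping windows (the array values are invented for the example):

```python
import numpy as np

x = np.array([[1, 3, 2, 0],
              [4, 6, 1, 2],
              [5, 2, 9, 7],
              [0, 1, 3, 8]], dtype=float)

# 2x2 max pooling with stride 2: split the input into non-overlapping
# windows and keep the maximum of each subregion.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[6. 2.]
               #  [5. 9.]]
```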
[Figure: inputs x1–x6 connect to hidden units h1–h6 through locally shared weights W.]
Fig. 2. A typical CNN architecture consisting of a convolutional layer, a max pooling layer, and a fully connected layer.
[Figure: input M → convolutional layer C1 → max pooling layer M1 → convolutional layer C2 → max pooling layer M2 → fully connected layers F1 and F2 → class probability output F.]
Fig. 3. The network structure of the CNN used as our discriminative classifier
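The following is a minimal sketch of the six-layer structure in Fig. 3 (two convolution/pooling stages followed by two fully connected layers) for 16-channel HSD8 patches. The channel counts, kernel sizes, patch size, and class count are assumptions for illustration; the paper does not specify them at this point.

```python
import torch
import torch.nn as nn

class FogCNN(nn.Module):
    """C1-M1-C2-M2-F1-F2 structure from Fig. 3; all layer sizes are
    illustrative assumptions, not values given in the paper."""
    def __init__(self, in_channels=16, patch=16, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),  # C1
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # M1
            nn.Conv2d(32, 64, kernel_size=3, padding=1),           # C2
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # M2
        )
        self.classifier = nn.Sequential(
            nn.Linear(64 * (patch // 4) ** 2, 128),  # F1
            nn.ReLU(),
            nn.Linear(128, n_classes),               # F2 -> class scores
        )

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of four 16-channel 16x16 patches.
logits = FogCNN()(torch.randn(4, 16, 16, 16))
print(logits.shape)  # torch.Size([4, 2])
```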
Training Strategies
Here, we introduce how the parameter space of the proposed CNN classifier is learned. The CNN training process is divided into two main steps: forward propagation and back propagation [12]. Forward propagation calculates the actual classification result of the input data under the current parameters, while back propagation updates the trainable parameters [13] so that the difference between the actual classification output and the desired classification output is as small as possible.
Forward Propagation
As shown in Fig. 3, the complete network consists of six layers: the input layer, the convolutional layers C, the max pooling layers M, and the fully connected layers F. We first define the notation used throughout the paper. We assume that $\theta$ denotes all the training parameters, $\theta = \{\theta_i\}$, $i = 1, 2, 3, 4, \ldots$, where $\theta_i$ is the parameter set between the $(i-1)$-th and the $i$-th layer. In this experiment we used a 6-layer convolutional neural network. The output of a convolutional layer is as follows:

$$x_{i+1} = f_i(u_i) \qquad (1)$$

$$u_i = \theta_i^T x_i + b_i \qquad (2)$$

where $x_i$ is the input of the $i$-th layer, $\theta_i^T$ is the weight matrix of the $i$-th layer acting on the input data, and $b_i$ is an additive bias vector for the $i$-th layer. $f(\cdot)$ is the activation function, whose role is to introduce nonlinear factors; commonly used activation functions are Tanh, Sigmoid, and ReLU. In our designed architecture, we choose ReLU as the activation function. The maximum function max(u) is used in the layers M.
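A sketch of Eqs. (1)–(2) for a single layer, using ReLU as $f(\cdot)$; the sizes and random inputs are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)           # x_i: input of the i-th layer
theta = rng.normal(size=(8, 4))  # theta_i: weight matrix of the layer
b = np.zeros(4)                  # b_i: additive bias vector

u = theta.T @ x + b              # Eq. (2): u_i = theta_i^T x_i + b_i
x_next = np.maximum(u, 0.0)      # Eq. (1): x_{i+1} = f_i(u_i), f = ReLU
print(x_next)
```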
We want to use the hypothesis function to estimate the probability value $p(y = j \mid x)$ for each category $j$; that is, we want to estimate the probability of each classification result for $x$. Therefore, our hypothesis function outputs a $K$-dimensional vector representing the probabilities of these $K$ estimates. Specifically, the softmax regression model is defined as:
$$y^{(i)} = \begin{bmatrix} P(y^{(i)} = 1 \mid \theta, x, b) \\ P(y^{(i)} = 2 \mid \theta, x, b) \\ \vdots \\ P(y^{(i)} = K \mid \theta, x, b) \end{bmatrix} = \frac{1}{\sum_{j=1}^{K} e^{\theta_j^T x^{(i)} + b_j}} \begin{bmatrix} e^{\theta_1^T x^{(i)} + b_1} \\ e^{\theta_2^T x^{(i)} + b_2} \\ \vdots \\ e^{\theta_K^T x^{(i)} + b_K} \end{bmatrix} \qquad (3)$$
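A numeric sketch of Eq. (3); the class scores are invented for the example:

```python
import numpy as np

scores = np.array([2.0, 1.0, 0.1])             # theta_j^T x + b_j for j = 1..K
probs = np.exp(scores) / np.exp(scores).sum()  # Eq. (3): softmax
print(probs, probs.sum())                      # [0.659 0.242 0.099] 1.0
```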
Back Propagation
In the back propagation stage, the backpropagation algorithm gives an efficient way to apply gradient descent to all parameters, so that the loss function of the neural network model on the training data becomes as small as possible. Backpropagation is the core algorithm for training neural networks: it optimizes the network parameters according to the defined loss function so that the loss of the model on the training set reaches a small value. The optimization of the parameters directly determines the quality of the model, making this a very important step when using a neural network.
The loss function used in this work is defined as:
$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{K} 1\{j = Y^{(i)}\} \log\big(y_j^{(i)}\big) \qquad (4)$$
where $m$ is the number of training samples, $Y^{(i)}$ is the desired output, and $y_j^{(i)}$ is the $j$-th value of the actual output $y^{(i)}$ of the $i$-th training sample, a vector of size $K$. In the desired output $Y^{(i)}$ of the $i$-th sample, the probability value of the labeled class is 1 and the probability values of the other classes are 0. The indicator $1\{j = Y^{(i)}\}$ equals 1 if $j$ is equal to the desired label of the $i$-th training sample, and 0 otherwise.
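A sketch of Eq. (4) on invented predictions and labels, assuming the conventional negative sign of the cross-entropy loss (which the extraction obscured):

```python
import numpy as np

# Invented example: m = 2 samples, K = 3 classes.
y_pred = np.array([[0.7, 0.2, 0.1],   # actual softmax outputs y^(i)
                   [0.1, 0.8, 0.1]])
labels = np.array([0, 1])             # desired labels Y^(i)

m = len(labels)
# Eq. (4): only the log-probability of the labeled class contributes.
J = -np.log(y_pred[np.arange(m), labels]).sum() / m
print(J)  # 0.2899... = -(log 0.7 + log 0.8) / 2
```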
In the training process, we obtain the predicted label by finding the maximum value in the output vector, which requires that the actual output be close to the expected one. When the difference between them is small enough, the iteration stops, at which point the actual output is close to the desired output. Experiments show that as the number of training iterations increases, the value of the loss function becomes smaller, which meets our needs. Finally, the trained CNN is ready for HSD8 classification. Training samples are first randomly divided into batches, each batch containing the same number of samples; the batch size has a certain impact on the experimental results. During training, we trained the CNN using stochastic gradient descent, feeding only one batch to the network per iteration. When the maximum number of iterations is reached, the training process stops and the trained model is output. In the testing process, we input the test samples into the trained network [14]; with the trained network parameters, the prediction is obtained by finding the maximum value in the output vector. The training and testing process is shown in Fig. 4, and a sketch of the loop follows the figure.
[Fig. 4. The training and testing process: training data and labels are divided into batches (Batch 1 … Batch M) for CNN training; the trained CNN model then predicts labels for the test data.]
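A minimal sketch of the batch/SGD loop described above, reusing the hypothetical FogCNN sketched after Fig. 3; the learning rate, batch size, epoch count, and stand-in data are assumptions, not values from the paper:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Invented stand-in data: 64 patches, 16 channels, 16x16, binary fog labels.
data = torch.randn(64, 16, 16, 16)
labels = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(data, labels), batch_size=8, shuffle=True)

model = FogCNN()                 # hypothetical model from the earlier sketch
loss_fn = nn.CrossEntropyLoss()  # softmax (Eq. 3) + loss (Eq. 4) in one module
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):              # stop at a fixed iteration budget
    for x, y in loader:              # one batch per iteration
        opt.zero_grad()
        loss = loss_fn(model(x), y)  # forward propagation
        loss.backward()              # back propagation
        opt.step()                   # update the trainable parameters

# Testing: the predicted label is the argmax of the output vector.
pred = model(data).argmax(dim=1)
```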
The Data Sets: This experiment uses HSD8, with a total of 16 channels. When the visibility is greater than 1000 m, a sample is negative (without fog); otherwise it is positive (with fog).
Impact of parameter settings: Parameter selection is very important to the performance of the network: for example, the depth of the network, the size and number of convolutional kernels, the batch size, and the learning rate. In this experiment, we evaluated in detail the effect of different parameter settings on the sensitivity of the
…that only the hidalgo could sustain himself in that land, and that the tax-paying laborer was bound to die of hunger.

Miçilo.—Then why did your father not go to live in another land?

Gallo.—In that respect the laborers are so faint-hearted that they never dare to move from the land where they were born, for one league from their villages seems to them like the Indies, and they imagine that there are people there who eat men alive. And so each one dies in the hayloft where he was born, even if it be of hunger. Of this father two sons were born, of whom I was the elder, called by the name Alexandro. And as we saw what misery the laborers suffered under the lord, we thought that if we took up occupations that would free us for the time being, our lowliness would be forgotten, and our children would be held and esteemed as hidalgos and would live in freedom. And so I chose to be a priest, for they are people without law; and my brother became a blacksmith, for in that land blacksmiths are exempt from the levies, taxes, and watch duties of the place where they keep the forge. And so I asked leave of my father to learn to read, which he took rather badly, since I served him by watching some geese and scaring off the birds so they would not eat the seed of a flax field. In the end my father placed me as servant and altar boy with a chaplain who served a benefice three leagues from there. O almighty God, who could tell you the basenesses and pettinesses of that man! Truly, had I not undertaken today to speak of myself and not of others, I would tell you things of great drollery. But I want you to know that none of them can read beyond spelling out the letters, and what they write you must work out by guesswork. In nothing do these chaplains show themselves superior except in eating and drinking, in which they keep neither time nor measure nor reason. With this man I spent two years, during which he taught me nothing but to do ill, speak ill, think ill, and persevere in ill. He taught me to read what he knew, which was little enough, and to write a hand that looked like nothing so much as paper plowed by beetles' feet. By then I was a sturdy lad of fifteen, and I understood that, if I was not to be as great an ass as my master, I ought to know some Latin. And so I went off to Zamora to study some grammar; and on arriving I presented myself before the bachelor and told him my need, and he asked me whether I had brought a book, and I showed him a grammar manual I had stolen from my master, one of those from Pastrana, printed more than a thousand years ago. And he showed me in it the nominatives I was to study.

Miçilo.—On what did you live?

Gallo.—On Sundays the bachelor gave me a note of his for a priest or chaplain of a nearby village, who on Sundays gave me the aspersorium of holy water, and I went through all the houses at mealtime sprinkling water on everyone; and in each house they gave me a piece of bread, and on those crusts I kept myself at my studies all week. There I stayed two years, in which I learned declensions and conjugations, gender, preterites, and supines. For men of my sort soon weary of learning good things, since our intention is not to know more, but only to have some acquaintance with matters and to show, when at the taking of orders they wish to examine us, that we have dealt with them. For if our intention were truly to know something, we would persevere in study. But once ordained we begin to forget, and we make such good haste that if we arrive at orders as fools, within a month we are confirmed asses. And so I left Zamora, where I was studying quite at my leisure; and being already schooled in begging with the aspersorium, asking tasted to me like honey, and so I returned to it. And so I resolved to go about the world in the company of other lost souls like myself, for such men soon find one another. And in this company I was for a long time a zarlo, or espinel, and in this art of zarlería I attained all that could be attained.

Miçilo.—That art never came to my notice; explain it to me further.

Gallo.—Then I will lay it all bare for you from the root. You should know that I was of tall stature and went about in different provinces dressed in different attire, so that no one with ill intent could get a hold on me. But most often I wore a garment of coarse burel cloth of a somewhat dark tawny color, modest and long, with a thick and very long beard of great authority, and a cloak over it fastened at the chest with a button. At other times, changing lands, I changed my dress; and with the same beard I wore a habit that in many provinces they call veguino, with a tunic and the scapular of a religious who lived in the solitude of the mountains, a staff, and a long rosary of very thick beads in my hand, so that each time one bead fell upon another, all who stood in a great temple could hear it. I proclaimed that I could divine what was to come, find what was lost, reconcile lovers, uncover thieves, reveal treasures, give easy remedy to the sick, and even raise the dead. And as men had notice of me, they came at once, prostrate with great humility, to adore me and kiss my feet and offer me all their goods, all calling me prophet and disciple and servant of God; and then I would put into their hands some verses that I carried written on a tablet in letters of gold over a black varnish, which read in this manner: