Shouyi Wang · Vicky Yamamoto
Jianzhong Su · Yang Yang
Erick Jones · Leon Iasemidis
Tom Mitchell (Eds.)
LNAI 11309
Brain Informatics
International Conference, BI 2018
Arlington, TX, USA, December 7–9, 2018
Proceedings
Lecture Notes in Artificial Intelligence 11309
Editors
Shouyi Wang, University of Texas at Arlington, Arlington, TX, USA
Erick Jones, The University of Texas at Arlington, Arlington, USA
Vicky Yamamoto, University of Southern California, West Hollywood, USA
Leon Iasemidis, Louisiana Tech University, USA
Jianzhong Su, Department of Mathematics, University of Texas at Arlington, Arlington, TX, USA
Tom Mitchell, Carnegie Mellon University, Pittsburgh, PA, USA
Yang Yang, Maebashi Institute of Technology, Gunma, Japan
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Brain informatics (BI) is an emerging interdisciplinary research field with the vision of
studying the brain from the informatics perspective. Firstly, brain informatics combines
the efforts of cognitive science, cognitive psychology, neuroscience, etc. to study the
brain as a general information processing system. Secondly, new informatics equipment, tools, and platforms pave the way for an imminent revolution in understanding the brain. Thirdly, since its proposal as a field, one of the goals of brain informatics has been to build improved, inspiring, and transformative artificial intelligence.
The BI conference is established as a unique interdisciplinary forum that attracts
experts, researchers, practitioners, and industry representatives from neuroscience,
psychology, cognitive science, machine learning, data science, artificial intelligence
(AI), and information and communication technology (ICT) to facilitate fundamental
research and innovative applications on brain informatics and brain-inspired
technologies.
The series of Brain Informatics conferences started with the WICI International
Workshop on Web Intelligence Meets Brain Informatics, held at Beijing, China in
2006. The second, third, fourth, and fifth BI conferences were held in Beijing, China
(2009), Toronto, Canada (2010), Lanzhou, China (2011), and Macau, SAR China
(2012), respectively. In 2013, the conference title was changed to Brain Informatics
and Health (BIH) with an emphasis on real-world applications of brain research in
human health and well-being. BIH 2013, BIH 2014, BIH 2015, and BIH 2016 were held in Maebashi, Japan; Warsaw, Poland; London, UK; and Omaha, USA, respectively. In 2017, the conference went back to its original design and vision to investigate
the brain from an informatics perspective and to promote a brain-inspired information
technology revolution. Thus, the conference name was changed back to Brain Infor-
matics at Beijing, China, in 2017. In 2018, this grand event was held in Arlington,
Texas, USA. The BI 2018 conference was hosted by The University of Texas at
Arlington, Web Intelligence Consortium (WIC), IEEE Computational Intelligence
Society, and the International Neural Network Society.
The 2018 International Conference on Brain Informatics (BI 2018) provided a
premier international forum to bring together researchers and practitioners from diverse
fields for the presentation of original research results on brain informatics,
brain-inspired technologies, and brain and mental health applications. The BI 2018
conference solicited high-quality papers and talks with world-class keynote speeches,
featured talks, panel discussions, and special topics workshops. Many world leaders in
brain research and informatics gathered for this year’s BI meeting, including Arthur
Toga, Tom Mitchell, Leon Iasemidis, Chris Eliasmith, Andreas Tolias, Guoming Luan,
and many other outstanding researchers.
The theme of BI 2018 was “Advancing Innovative Brain Informatics Technologies
from Fundamental Research to Real-World Practice.” This volume contains 46
high-quality papers accepted and presented at BI 2018, which was held in Arlington,
Texas, USA, during December 7–9, 2018. The authors and attendees came from all
over the world. The BI 2018 proceedings papers address broad perspectives of brain informatics, bridging scales from atoms to thoughts and behavior. These papers provide a good sample of state-of-the-art research advances in brain informatics, from methodologies, frameworks, and techniques to applications and case studies. The
selected papers cover five major tracks of brain informatics including: (1) Cognitive
and Computational Foundations of Brain Science, (2) Human Information Processing
Systems, (3) Brain Big Data Analytics, Curation, and Management, (4) Informatics
Paradigms for Brain and Mental Health Research, (5) Brain–Machine Intelligence and
Brain-Inspired Computing. The BI 2018 conference enhanced the efforts to promote
the five tracks of brain informatics research and their important real-world applications.
The conference promoted transformative research to inspire novel conceptual
paradigms and innovative technologies and designs that will benefit society. In par-
ticular, big data analytics, machine learning, and AI technologies are transforming
brain informatics research and facilitating real-world BI applications. New data fusion
and AI methodologies are developed to enhance human interpretive powers when
dealing with large neuroimaging data sets, including fMRI, PET, MEG, EEG, and
fNIRS, as well as data from other sources like eye-tracking and wearable, portable,
micro, and nano devices. Brain informatics research creates and implements various
tools to analyze all the data and establish a more comprehensive understanding of
human thought, memory, learning, decision-making, emotion, consciousness, and
social behaviors. These methods and related studies will also assist in building brain-inspired intelligence, brain-inspired computing, and human-level wisdom-computing paradigms and technologies, and in improving the treatment efficacy of mental health conditions and brain disorders.
We would like to express our gratitude to all BI 2018 conference committee
members for their instrumental and unwavering support. BI 2018 had a very exciting
program with a number of features, ranging from keynote talks to technical sessions,
workshops/special sessions, and panel discussions. This would not have been possible
without the generous dedication of the Program Committee members in reviewing the
conference papers and abstracts, the BI 2018 workshop and special session chairs and
organizers, and our keynote and feature speakers in giving outstanding talks at the
conference. BI 2018 could not have taken place without the great team effort of the
local Organizing Committee and generous support from sponsors. We would especially
like to express our sincere appreciation to our kind sponsors, including The University
of Texas at Arlington, the Center on Stochastic Modeling, Optimization, and Statistics
(COSMOS), Center for Assistive Technologies to Enhance Human Performance
(iPerform), Web Intelligence Consortium (http://wi-consortium.org), International
Neural Network Society (https://www.inns.org), IEEE Computational Intelligence
Society (http://cis.ieee.org), Springer-Nature, Springer LNCS/LNAI (http://www.
springer.com/gp/computer-science/lncs), and International Neuroinformatics Coordi-
nating Facility (INCF, https://www.incf.org).
Special thanks go to Juzhen Dong and Yang Yang for their great assistance and
support with the CyberChair submission system. We also thank the Steering
Committee co-chairs, Ning Zhong and Hanchuan Peng, for their help in organizing and
promoting BI 2018. We are grateful to Springer’s Lecture Notes in Computer Science
(LNCS/LNAI) for their sponsorship and support. We thank Springer for their help in
coordinating the publication of this special volume in an emerging and interdisciplinary
research field.
General Chairs
Tom Mitchell Carnegie Mellon University, USA
Leon Iasemidis Louisiana Tech University, USA
Ning Zhong Maebashi Institute of Technology, Japan
Organizing Chairs
Shouyi Wang The University of Texas at Arlington, USA
Erick Jones The University of Texas at Arlington, USA
Tutorial Chairs
Yang Yang Maebashi Institute of Technology, Japan
Vassiliy Tsytsarev University of Maryland, USA
Publicity Chairs
Paul Wen University of Southern Queensland, Australia
Huiguang He Chinese Academy of Sciences, China
Mufti Mahmud University of Padua, Italy
Program Committee
Agnar Aamodt Norwegian University of Science and Technology,
Norway
Jun Bai Institute of Automation, Chinese Academy of Sciences,
China
Quan Bai Auckland University of Technology, New Zealand
Sylvain Baillet McGill University, Canada
Katarzyna Blinowska University of Warsaw, Poland
Kristofer Bouchard Lawrence Berkeley National Laboratory, USA
Nizar Bouguila Concordia University, Canada
Weidong Cai The University of Sydney, Australia
Lihong Cao Communication University of China, China
Zhigang Cao Chinese Academy of Sciences, China
Mirko Cesarini University of Milano-Bicocca, Italy
W. Chaovalitwongse University of Arkansas, USA
Jianhui Chen Beijing University of Technology, China
Kan Chen The University of Texas at Arlington, USA
Kay Chen The University of Texas at Arlington, USA
Joe Chou Northeastern University, USA
Eva Dyer Emory University and Georgia Institute of Technology,
USA
Frank Emmert-Streib Tampere University of Technology, Finland
Denggui Fan University of Science and Technology Beijing, China
Yong Fan University of Pennsylvania, USA
Philippe Fournier-Viger Harbin Institute of Technology (Shenzhen), China
Richard Frackowiak Ecole Polytechnique de Lausanne, Switzerland
Min Gan Fuzhou University, China
Geoffrey Goodhill University of Queensland, Australia
Jerzy Grzymala-Busse University of Kansas, USA
Lei Guo The University of Texas at Arlington, USA
Mohand-Said Hacid Université Claude Bernard Lyon 1, France
Bin He University of Minnesota, USA
Huiguang He Chinese Academy of Sciences, China
Gahangir Hossain Texas A&M University-Kingsville, USA
Frank Hsu Fordham University, USA
Jiajin Huang Beijing University of Technology, China
Zhisheng Huang Vrije University of Amsterdam, The Netherlands
Felicia Jefferson Fort Valley State University, USA
Tianzi Jiang Chinese Academy of Sciences, China
Xingpeng Jiang Central China Normal University, China
Hanmin Jung Korea Institute of Science and Technology
Information, South Korea
Jana Kainerstorfer Carnegie Mellon University, USA
Yongjie Li University of Electronic Science and Technology
of China, China
1 Introduction
Emotion is a subjective, conscious experience that arises when people face internal or external stimuli, and it is crucial for communication, decision making, and human-machine interfaces. Emotion recognition can be performed through facial movement, voice, speech, text, and physiological signals [8]. Among these approaches, emotion recognition from electroencephalography (EEG) has attracted increasing interest [2,7].
2 Related Work
We want to solve the problem by encoding time series data as images to allow
machines to recognize and classify the time series data like human vision. Time
series recognition has been well studied in speech and audio area. Researchers
have successfully used combinations of HMMs with acoustic models based on
Gaussian mixture models (GMMs) [9,10]. Deep neural networks are a promising approach for producing the posterior probabilities over HMM states. Deep learning has become increasingly popular since the introduction of effective ways to train multiple hidden layers [11], and has already been proposed as a replacement for GMMs to model acoustic data in speech recognition tasks [12]. These hybrid deep neural network and hidden Markov model systems (DNN-HMM) achieved excellent performance in a variety of speech recognition tasks [13–15]. This success comes from learning distributed representations through a deep structure and from unsupervised pre-training of restricted Boltzmann machines (RBMs) stacked layer by layer.
Convolutional neural networks (CNNs) [16] are also widely used in computer vision. CNNs exploit translational invariance within their structures by extracting features through receptive fields [17] and learning with weight sharing, and have become the state-of-the-art approach in various image recognition and computer vision tasks [18–20]. Since unsupervised pretraining has been shown to improve performance [21], sparse coding and Topographic Independent Component Analysis (TICA) have been integrated as unsupervised pretraining approaches to learn more diverse features with complex invariances [22,23].
CNNs were proposed for speech processing by LeCun and Bengio to maintain invariance across time and frequency. Recently, CNNs have further improved hybrid model performance by applying convolution and max-pooling in the frequency domain on the TIMIT phone recognition task [24]. A heterogeneous pooling approach proved beneficial for training acoustic invariance in [25]. Further exploration of limited weight sharing and a weighted softmax aggregation layer has been proposed to optimize the CNN structure for speech recognition tasks [26]. Beyond audio and voice data, little work has explored feature learning with current deep learning architectures for typical time series analysis tasks. Our direction is to encode EEG time series as images and to use deep learning architectures from computer vision to learn features and identify structure in the time series.
In this paper, we encode EEG time series data as images using the Gramian Angular Field (GAF). We selected three real-world time series datasets and applied deep Tiled Convolutional Neural Networks (Tiled CNN) with a pretraining stage that exploits local orthogonality through Topographic ICA [23] to represent the time series.
Emotion Recognition Based on Gramian Encoding Visualization
3 Dataset
We evaluate the performance of the approaches on three real-world datasets: the SEED dataset, the SEED IV dataset, and the DEAP dataset.
• SEED. The SEED dataset contains EEG signals of three emotions (positive, neutral, and negative) from 15 subjects. All subjects watched 15 four-minute-long emotional movie clips while their signals were collected. The EEG signals were recorded with an ESI NeuroScan System at a sampling rate of 1000 Hz using a 62-channel electrode cap. To compare with previous work, we used the same data, which contained 27 experiments from 9 subjects. Signals recorded while the subject was watching the first 9 movie clips were used as the training set for each experiment, and the rest were used as the test set.
• SEED IV. The SEED IV dataset contains EEG features for four emotions in total (happy, sad, fear, and neutral) [6]. There were 72 film clips in total for the four emotions, and participants took part in 45 experiments in which they assessed their emotions while watching the film clips, using emotion keywords and ratings out of ten points for two dimensions: valence and arousal.
• DEAP. The DEAP dataset contains EEG signals and peripheral physiological signals of 32 participants. Signals were collected while participants watched one-minute-long emotional music videos. We chose 5 as the threshold to divide the trials into two classes according to the rated levels of arousal and valence. We used 5-fold cross-validation to compare with Liu et al. [3] and Tang et al. [4].
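The thresholding step above can be sketched in a few lines; the ratings below are illustrative placeholders, not values from DEAP:

```python
import numpy as np

# Illustrative self-assessment ratings on DEAP's 1-9 scale (one per trial).
# These values are made up for the example, not taken from the dataset.
ratings = np.array([2.5, 7.0, 5.0, 8.1, 4.9])

# A threshold of 5 splits trials into low (0) and high (1) classes,
# applied independently to the valence and arousal ratings.
labels = (ratings > 5).astype(int)
print(labels)  # [0 1 0 1 0]
```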
Inspired by previous work on the duality between time series and complex networks [5,27,28], the type of image we use is the Gramian Angular Field (GAF), in which we represent time series in a polar coordinate system instead of the typical Cartesian coordinates. In the Gramian matrix, each element is the cosine of a sum of angles.
Given a time series X = {x_1, x_2, ..., x_n} of n real-valued observations, we rescale X so that all values fall in the interval [−1, 1] or [0, 1] by:

$$\tilde{x}_i = \frac{(x_i - \max(X)) + (x_i - \min(X))}{\max(X) - \min(X)} \qquad (1)$$

or

$$\tilde{x}_i^{0} = \frac{x_i - \min(X)}{\max(X) - \min(X)} \qquad (2)$$
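As a minimal sketch, the min-max rescaling above can be implemented directly; `rescale` is an illustrative helper name, not code from the paper:

```python
import numpy as np

def rescale(x, low=-1.0, high=1.0):
    """Min-max rescale a 1-D series into [low, high]."""
    x = np.asarray(x, dtype=float)
    x01 = (x - x.min()) / (x.max() - x.min())  # Eq. (2): values in [0, 1]
    return low + (high - low) * x01            # defaults give [-1, 1]

x = np.array([3.0, 7.0, 5.0, 1.0])
print(rescale(x, 0, 1))  # [0.333... 1. 0.666... 0.]
print(rescale(x))        # [-0.333... 1. 0.333... -1.]
```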
Thus we can represent the rescaled time series X̃ in polar coordinates by encoding the value as the angular cosine and the time stamp as the radius with the equation below:

$$\phi = \arccos(\tilde{x}_i), \quad -1 \le \tilde{x}_i \le 1, \ \tilde{x}_i \in \tilde{X}; \qquad r = \frac{t_i}{N}, \quad t_i \in \mathbb{N} \qquad (3)$$

In the equation above, t_i is the time stamp and N is a constant factor that regularizes the span of the polar coordinate system. This polar-coordinate-based representation is a novel way to understand time series. As time increases, corresponding values warp among different angular points on the spanning circles, like water rippling. The encoding map has two important properties. First, it is bijective, as cos(φ) is monotonic for φ ∈ [0, π]: given a time series, the proposed map produces one and only one result in the polar coordinate system, with a unique inverse map. Second, as opposed to Cartesian coordinates, polar coordinates preserve absolute temporal relations.
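The polar encoding itself (angle from the arccosine of the value, radius from the time stamp) is a one-liner in NumPy; taking N equal to the series length is an assumption of this sketch:

```python
import numpy as np

x_tilde = np.array([-1.0, 0.0, 0.5, 1.0])  # a series already rescaled to [-1, 1]
N = len(x_tilde)                           # regularization factor (assumed = n here)

phi = np.arccos(x_tilde)        # angles in [0, pi]; the map is bijective on [-1, 1]
r = np.arange(1, N + 1) / N     # time stamps mapped to radii

print(phi)  # [pi, pi/2, pi/3, 0] up to floating point
```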
Rescaled data in different intervals have different angular bounds: [0, 1] corresponds to the cosine function on [0, π/2], while cosine values for data rescaled to [−1, 1] fall into the angular bounds [0, π]. These can provide different information granularity in the Gramian Angular Field for classification tasks, and the Gramian Angular Difference Field (GADF) of [0, 1]-rescaled data has an exact inverse map. This property lays the foundation for imputing missing values of time series by recovering the images.
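The exact inverse map for [0, 1]-rescaled data can be checked numerically: the main diagonal of the GASF is cos(2φ_i) = 2x̃_i² − 1, from which each original value is recovered:

```python
import numpy as np

x01 = np.array([0.0, 0.25, 0.8, 1.0])       # a series rescaled to [0, 1]
phi = np.arccos(x01)                        # angles in [0, pi/2]
gasf = np.cos(phi[:, None] + phi[None, :])  # GASF matrix

# diag(GASF) = cos(2*phi_i) = 2*x_i^2 - 1, so each x_i is recovered exactly
recovered = np.sqrt((np.diag(gasf) + 1) / 2)
print(np.allclose(recovered, x01))  # True
```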
After transforming the rescaled time series into the polar coordinate system,
we can easily exploit the angular perspective by considering the trigonometric
sum/difference between each point to identify the temporal correlation within
different time intervals. The Gramian Summation Angular Field (GASF) and
Gramian Difference Angular Field (GADF) are defined as follows:
$$\mathrm{GASF} = [\cos(\phi_i + \phi_j)] = \tilde{X}^{T} \cdot \tilde{X} - \sqrt{I - \tilde{X}^2}^{\,T} \cdot \sqrt{I - \tilde{X}^2} \qquad (4)$$

$$\mathrm{GADF} = [\sin(\phi_i - \phi_j)] = \sqrt{I - \tilde{X}^2}^{\,T} \cdot \tilde{X} - \tilde{X}^{T} \cdot \sqrt{I - \tilde{X}^2} \qquad (5)$$

Here I is the unit row vector [1, 1, ..., 1]. After transforming to the polar coordinate system, we treat the time series at each time step as a 1-D metric space by defining the inner products:

$$\langle x, y \rangle_1 = x \cdot y - \sqrt{1 - x^2} \cdot \sqrt{1 - y^2} \qquad (6)$$

$$\langle x, y \rangle_2 = \sqrt{1 - x^2} \cdot y - x \cdot \sqrt{1 - y^2} \qquad (7)$$

The two types of Gramian Angular Fields (GAFs) are then quasi-Gramian matrices $[\langle \tilde{x}_i, \tilde{x}_j \rangle]$.
The GAFs have several advantages. First, they provide a way to preserve temporal dependency, since time increases as the position moves from top-left to bottom-right.
Fig. 1. GADF (first row) and GASF (second row) visualized images of the SEED dataset with labels Positive, Neutral, and Negative.
TICA learns the weight matrix W in the second stage by solving the following:
$$\max_{W} \ \sum_{h=1}^{n} \sum_{i=1}^{p} f_i\!\left(x^{(h)}; W, V\right) \quad \text{subject to} \quad W W^{T} = I \qquad (9)$$
8 J.-L. Qiu et al.
Above, W ∈ R^{p×q} and V ∈ R^{p×p}, where p is the number of hidden units in a layer and q is the size of the input. V is a logical matrix (V_{ij} = 1 or 0) that encodes the topographic structure of the hidden units by a contiguous 3 × 3 block. The orthogonality constraint W W^T = I provides diversity among the learned features.
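As a concrete illustration of the logical matrix V, the sketch below builds it for hidden units laid out on a small 2-D grid, pooling each unit with its contiguous 3 × 3 neighbourhood; the wrap-around topology and grid size are assumptions of this sketch:

```python
import numpy as np

def topographic_v(grid=4, block=3):
    """Build the logical pooling matrix V for p = grid*grid hidden units:
    V[i, j] = 1 iff unit j lies in the block x block neighbourhood of unit i
    on a 2-D grid (with wrap-around edges, assumed here)."""
    p = grid * grid
    V = np.zeros((p, p), dtype=int)
    half = block // 2
    for i in range(p):
        r, c = divmod(i, grid)
        for dr in range(-half, half + 1):
            for dc in range(-half, half + 1):
                j = ((r + dr) % grid) * grid + (c + dc) % grid
                V[i, j] = 1
    return V

V = topographic_v()
print(V.shape)          # (16, 16)
print(V.sum(axis=1))    # every row pools exactly 9 = 3*3 units
```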
GAF images are not natural images, because they contain no natural concepts such as edges and angles. Therefore, we propose to take advantage of the unsupervised pre-training of TICA to learn many diversified features with local orthogonality. In addition, [23] empirically demonstrated that tiled CNNs perform well with limited labeled data, because partial weight tying requires fewer parameters and reduces the need for large amounts of labeled data. Our data from the SEED and DEAP datasets tend to have few instances, which makes them suitable for tiled CNNs. Typically, tiled CNNs are trained with two hyperparameters: the tiling size k and the number of feature maps l. In our experiments, we use grid search to tune them.
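The grid search over the tiling size k and the number of feature maps l can be sketched as below; `train_and_score` is a hypothetical stand-in for training a tiled CNN and returning validation accuracy, not an actual implementation:

```python
import itertools

def train_and_score(k, l):
    # Hypothetical placeholder: in practice this would train a tiled CNN
    # with tiling size k and l feature maps on the GAF images and return
    # its validation accuracy.
    return 0.90 + 0.01 * (k == 2) + 0.005 * (l == 6)

grid_k = [1, 2, 3]   # candidate tiling sizes
grid_l = [4, 6, 8]   # candidate numbers of feature maps

best = max(itertools.product(grid_k, grid_l),
           key=lambda kl: train_and_score(*kl))
print(best)  # (2, 6) under this placeholder scorer
```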
6 Emotion Classification
We apply Tiled CNN to classify GAF images on the three datasets. We trained two networks, which took the GASF and GADF visualized images respectively as input, and applied a linear SVM at the last layer as the classifier. Our results are better than the state-of-the-art approach [4]. The datasets are pre-split into training and testing sets for experimental comparisons. For each dataset, the table gives its name, the number of classes, the number of training and test instances, and the length of the individual time series.
Method          Feature fusion   BDDAE   Liu et al. [3]   Tang et al. [4]   Our approach
Accuracy (%)    83.80            88.98   91.01            93.97             94.33
Std             16.38            12.51   8.23             7.03              6.94
Table 2. Average accuracies (%) and standard deviations of different approaches for four-emotion classification on the SEED IV dataset
Table 2 shows that BDAE achieved better results than SVM-based feature fusion. Compared with the CCA-based approach and other methods, we conclude that the DCCA model, which coordinates high-level features, achieved the best results.
Method         Feature fusion   AutoEncoder [2]   Liu et al. [3]   Tang et al. [4]   Ours
Valence (%)    65.29            74.49             85.20            83.82             84.19
Arousal (%)    65.43            75.69             80.50            83.23             83.87
Important future work will apply the current method to more practical datasets to improve the algorithm, and will consider other visualization methods. At present, only three and four emotion labels are used for discrimination. In the future, we will consider encoding signals with six types of emotion labels for visualization and classification, and explore more efficient frameworks. We have cooperated with hospitals and hope to gradually make new breakthroughs in visual applications for assisting cognitive disease diagnosis.
References
1. Tzirakis, P., Trigeorgis, G., Nicolaou, M.A., Schuller, B.W., Zafeiriou, S.: End-to-
end multimodal emotion recognition using deep neural networks. IEEE J. Sel. Top.
Signal Process. 11, 1301–1309 (2017)
2. Lu, Y., Zheng, W.-L., Li, B., Lu, B.-L.: Combining eye movements and EEG to
enhance emotion recognition. In: IJCAI (2015)
3. Liu, W., Zheng, W.-L., Lu, B.-L.: Multimodal emotion recognition using multi-
modal deep learning. CoRR, vol. abs/1602.08225 (2016)
4. Tang, H., Liu, W., Zheng, W.-L., Lu, B.-L.: Multimodal emotion recognition using
deep neural networks. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.-S.M. (eds.)
ICONIP 2017, Part IV. LNCS, vol. 10637, pp. 811–819. Springer, Cham (2017).
https://doi.org/10.1007/978-3-319-70093-9 86
5. Zheng, W.-L., Zhu, J.-Y., Peng, Y., Lu, B.-L.: EEG-based emotion classification
using deep belief networks. In: 2014 IEEE International Conference on Multimedia
and Expo (ICME), pp. 1–6 (2014)
6. Zheng, W.-L., Liu, W., Lu, Y., Lu, B.-L., Cichocki, A.: Emotionmeter: a multi-
modal framework for recognizing human emotions. IEEE Trans. Cybern. 99, 1–13
(2018)
7. Schuller, B.W., Rigoll, G., Lang, M.K.: Hidden Markov model-based speech emo-
tion recognition. In: ICME (2003)
8. Kim, K.H., Bang, S.W., Kim, S.R.: Emotion recognition system using short-term
monitoring of physiological signals. Med. Biol. Eng. Comput. 42, 419–427 (2004)
9. Reynolds, D.A., Rose, R.C.: Robust text-independent speaker identification using
Gaussian mixture speaker models. IEEE Trans. Speech Audio Process. 3(1), 72–83
(1995)
10. Leggetter, C., Woodland, P.C.: Maximum likelihood linear regression for speaker
adaptation of continuous density hidden Markov models. Comput. Speech Lang.
9, 171–185 (1995)
11. Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief
nets. Neural Comput. 18(7), 1527–1554 (2006)
12. Rahman Mohamed, A., Dahl, G.E., Hinton, G.E.: Acoustic modeling using deep
belief networks. IEEE Trans. Audio Speech Lang. Process. 20, 14–22 (2012)
13. Hinton, G.E., et al.: Deep neural networks for acoustic modeling in speech recog-
nition: the shared views of four research groups. IEEE Signal Process. Mag. 29,
82–97 (2012)
14. Deng, L., Hinton, G.E., Kingsbury, B.: New types of deep neural network learning
for speech recognition and related applications: an overview. In: 2013 IEEE Inter-
national Conference on Acoustics, Speech and Signal Processing, pp. 8599–8603
(2013)
15. Deng, L., et al.: Recent advances in deep learning for speech research at Microsoft.
In: 2013 IEEE International Conference on Acoustics, Speech and Signal Process-
ing, pp. 8604–8608 (2013)
16. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
17. Hubel, D.H., Wiesel, T.N.: Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 160, 106–154 (1962)
18. Lawrence, S., Giles, C.L., Tsoi, A.C., Back, A.D.: Face recognition: a convolutional
neural-network approach. IEEE Trans. Neural Netw. 8(1), 98–113 (1997)
19. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep con-
volutional neural networks. In: NIPS (2012)
20. LeCun, Y., Kavukcuoglu, K., Farabet, C.: Convolutional networks and applications
in vision. In: Proceedings of 2010 IEEE International Symposium on Circuits and
Systems, pp. 253–256 (2010)
21. Erhan, D., et al.: Why does unsupervised pre-training help deep learning? J. Mach.
Learn. Res. 11, 625–660 (2010)
22. Kavukcuoglu, K., et al.: Learning convolutional feature hierarchies for visual recog-
nition. In: NIPS (2010)
23. Le, Q.V., Ngiam, J., Chen, Z., Hao Chia, D.J., Koh, P.W., Ng, A.Y.: Tiled convo-
lutional neural networks. In: NIPS (2010)
24. Abdel-Hamid, O., Rahman Mohamed, A., Jiang, H., Penn, G.: Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. In: 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4277–4280 (2012)
25. Deng, L., Abdel-Hamid, O., Yu, D.: A deep convolutional neural network using
heterogeneous pooling for trading acoustic invariance with phonetic confusion. In:
IEEE International Conference on Acoustics, Speech and Signal Processing, pp.
6669–6673 (2013)
26. Abdel-Hamid, O., Deng, L., Yu, D.: Exploring convolutional neural network structures and optimization techniques for speech recognition. In: INTERSPEECH (2013)
27. Campanharo, A.S.L.O., Sirer, M.I., Malmgren, R.D., Ramos, F.M., Amaral,
L.A.N.: Duality between time series and networks. PloS One 6, e23378 (2011)
28. Wang, Z., Oates, T.: Encoding time series as images for visual inspection and
classification using tiled convolutional neural networks (2014)
EEG Based Brain Mapping by Using
Frequency-Spatio-Temporal Constraints
1 Introduction
Neural activity estimation is now considered an important tool for the assisted diagnosis of brain-related diseases. The inherent dynamics of electroencephalographic (EEG) signals and their corresponding temporal resolution are key factors in analyzing complex behaviours. In addition, when the estimation of neural activity is performed from EEG (known as the dynamic inverse problem), the temporal resolution improves on that obtained with functional Magnetic Resonance Imaging (fMRI) methods. However, in order to obtain a reliable neural activity reconstruction, the inclusion of spatial and temporal constraints in the solution of the dynamic inverse problem is mandatory [1]. Moreover, the frequency behaviour of EEG signals related to several kinds of pathological activity is also important for diagnosis support, and is usually analyzed with non-stationary signal processing methods like