Lecture Notes in Artificial Intelligence 8779

Adaptive and Intelligent Systems
Third International Conference, ICAIS 2014
Bournemouth, UK, September 8–10, 2014
Proceedings
Volume Editor
Abdelhamid Bouchachia
Bournemouth University
Faculty of Science and Technology
Fern Barrow
Poole, Dorset, BH12 5BB, UK
E-mail: abouchachia@bournemouth.ac.uk
ICAIS is a biennial event. After the very successful ICAIS 2009 and ICAIS
2011, the conference witnessed another success this year. ICAIS 2014 received
32 submissions from 11 countries around the world (Argentina, Austria, Brazil,
Canada, China, Czech Republic, Germany, Poland, Portugal, Tunisia, UK), each of
which underwent a rigorous peer review by three reviewers. Based on the
referees' reports, the Program Committee selected 19 full papers.
ICAIS 2014 was enhanced by 3 keynote speeches and 1 invited talk given
by internationally renowned researchers, presenting insights and ideas in the
research areas covered by the conference. The rest of the program was organized
into 4 sessions: (1) Advances in feature selection, consisting of 4 papers;
(2) Clustering and classification techniques, consisting of 6 papers;
(3) Adaptive optimization, consisting of 5 papers; and (4) Time series analysis,
consisting of 4 papers.
Although this set of sessions does not cover all the topics of the conference, it
is a very good mix, providing fresh insight into some of the techniques related to
adaptive and intelligent systems.
This volume and the whole conference are of course the result of a team
effort. My thanks go to the local organizers, particularly to Tobias Baker, the
Steering Committee, the publicity chair and especially the Program Committee
members and the reviewers for their cooperation in the shaping of the confer-
ence and running the refereeing process. I highly appreciate the hard work and
timely feedback of the referees who did an excellent job. I would like to thank
all the authors and participants for their interest in ICAIS 2014. Thank you
for your time and effort in preparing and submitting your papers and your pa-
tience throughout the long process. Your work is the backbone of this conference.
Executive Committee
Conference Chair
Abdelhamid Bouchachia Bournemouth University, UK
Advisory Committee
Nikola Kasabov Auckland University of Technology, New Zealand
Xin Yao University of Birmingham, UK
Djamel Ziou University of Sherbrooke, Canada
Plamen Angelov Lancaster University, UK
Witold Pedrycz University of Alberta, Canada
Janusz Kacprzyk Polish Academy of Sciences, Poland
Program Committee
Additional Reviewers
Sponsoring Institutions
Bournemouth University, UK
Faculty of Science and Technology, Bournemouth University, UK
The International Neural Networks Society
Invited Talks
Feature Extraction for Change Detection
Ludmila Kuncheva
School of Computer Science, Bangor University
Dean Street, Bangor, Gwynedd, LL57 1UT, United Kingdom
l.i.kuncheva@bangor.ac.uk
An online classifier is only accurate if the distribution of the incoming data is the
same as the distribution of the data the classifier was trained upon. Therefore,
detecting a change and retraining the classifier accordingly is an important task.
This talk will challenge the concept of change and change detectability from
unsupervised streaming data. What constitutes a change? The answer is uniquely
determined by the context. But in order to have change detection methods that
work across problems, the relevant features must be exposed. The talk will be
focused on feature extraction for the purposes of change detection.
Distributed Data Stream Mining
João Gama
LIAAD - INESC Porto, Laboratory of Artificial Intelligence and Decision Support
University of Porto, Portugal
joao.jgama@gmail.com
The phenomenal growth of mobile and embedded devices coupled with their
ever-increasing computational and communications capacity presents an exciting
new opportunity for real-time, distributed intelligent data analysis in ubiquitous
environments. In these contexts centralized approaches have limitations due to
communication constraints, power consumption (e.g. in sensor networks), and
privacy concerns. Distributed online algorithms are highly needed to address the
above concerns. The focus of this talk is on distributed stream mining algorithms
that are highly scalable, computationally efficient and resource-aware. These
features enable the continued operation of data stream mining algorithms in
highly dynamic mobile and ubiquitous environments.
Music Lessons and Other Exercises
Kurt Geihs
University of Kassel, EECS Department, Distributed Systems
Wilhelmshöher Allee 73, D-34121 Kassel, Germany
geihs@uni-kassel.de
For more than a decade researchers have explored software systems that dynami-
cally adapt their behavior at run-time in response to changes in their operational
environments, user preferences, and underlying computing infrastructure. Our
particular focus has been on context-aware, self-adaptive applications in ubiqui-
tous computing scenarios. In my presentation I will discuss lessons learned from
three projects in the realm of self-adaptive systems. Some of these insights re-
late to purely technical concerns. Others touch on socio-technical concerns that
substantially influence the users’ acceptance of self-adaptive applications. Our
experiments have shown that both kinds of concerns are important. We claim
that an interdisciplinary software development methodology is needed in order
to produce socially aware applications that are both acceptable and accepted.
Adaptive Ambient Intelligence and Smart
Environments
Ahmad Lotfi
School of Science and Technology, Nottingham Trent University
Clifton Campus, Clifton Lane, Nottingham, NG11 8NS, United Kingdom
ahmad.lotfi@ntu.ac.uk
Adaptive Optimization
Towards a Better Understanding of the Local Attractor in Particle
Swarm Optimization: Speed and Solution Quality
Vanessa Lange, Manuel Schmitt, and Rolf Wanka
1 Introduction
The Dirichlet process (DP) has become one of the most popular nonparametric
Bayesian techniques [12,10]. It can be viewed as a stochastic process and can
also be considered a distribution over distributions. Recently, the hierarchical DP
[20,19] has been developed as a hierarchical nonparametric Bayesian model and
has shown promising results on the problem of model-based clustering of grouped
data with shared clusters. It is built on the DP and involves a Bayesian hierarchy
where the base measure for a DP is itself distributed according to a DP.
The hierarchical DP framework is particularly useful in problems involving the
modeling of grouped data where observations are organized into groups, as it
allows these groups to remain statistically linked by sharing mixture components
[20,19]. As with several other statistical approaches, existing hierarchical DP-based
models adopt the Gaussian assumption. Unlike these existing works, in this
paper we focus on a specific form of hierarchical DP mixture model where each
observation within a group is drawn from a mixture of generalized Dirichlet (GD)
distributions. We are mainly motivated by the fact that the generalized Dirichlet
distribution has been shown to be efficient in modeling high-dimensional propor-
tional data (i.e., normalized histograms) in a variety of applications from different
disciplines (e.g., computer vision, data mining, pattern recognition) [5,6,8,9].
2 The Model
where j is an index for each group of data. In this work, we represent a
hierarchical DP in a more intuitive and straightforward form through two
stick-breaking constructions, one at the global level and one at the group level
[18,11]. In the global-level construction, the global measure G_0 is distributed
according to DP(γ, H) and can be described using a stick-breaking representation
ψ′_k ∼ Beta(1, γ),   Ω_k ∼ H,   ψ_k = ψ′_k ∏_{s=1}^{k−1} (1 − ψ′_s),   G_0 = ∑_{k=1}^{∞} ψ_k δ_{Ω_k}    (3)

In the group-level construction, each group-level measure G_j is in turn distributed according to DP(λ, G_0) and admits the stick-breaking representation

π′_{jt} ∼ Beta(1, λ),   Ω̃_{jt} ∼ G_0,   π_{jt} = π′_{jt} ∏_{s=1}^{t−1} (1 − π′_{js}),   G_j = ∑_{t=1}^{∞} π_{jt} δ_{Ω̃_{jt}}
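As a concrete illustration, the stick-breaking weights in Eq. (3) can be simulated with a finite truncation. The sketch below (function names and the truncation level are ours, not from the paper) draws ψ′_k ∼ Beta(1, γ) and forms ψ_k = ψ′_k ∏_{s<k}(1 − ψ′_s):

```python
import numpy as np

def stick_breaking(concentration, num_sticks, rng):
    """Truncated stick-breaking weights: psi_k = psi'_k * prod_{s<k}(1 - psi'_s)."""
    fractions = rng.beta(1.0, concentration, size=num_sticks)  # psi'_k ~ Beta(1, gamma)
    # Remaining stick length before break k: prod_{s<k}(1 - psi'_s)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - fractions[:-1])])
    return fractions * remaining

rng = np.random.default_rng(0)
weights = stick_breaking(concentration=2.0, num_sticks=50, rng=rng)
```

For a large truncation level the weights are non-negative and sum to almost 1, the remainder being the mass of the untruncated tail.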
A Hierarchical Infinite Generalized Dirichlet Mixture Model 3
Since each group-level atom Ω̃_{jt} maps to a global-level atom Ω_k, we introduce binary variables W_{jtk} ∈ {0, 1}, with W_{jtk} = 1 if Ω̃_{jt} maps to Ω_k, distributed according to ψ as

p(W|ψ) = ∏_{j=1}^{M} ∏_{t=1}^{∞} ∏_{k=1}^{∞} ψ_k^{W_{jtk}}    (4)

p(W|ψ′) = ∏_{j=1}^{M} ∏_{t=1}^{∞} ∏_{k=1}^{∞} [ψ′_k ∏_{s=1}^{k−1} (1 − ψ′_s)]^{W_{jtk}}    (5)
Let i index the observations within each group j. We assume that each θ_ji is a
factor corresponding to an observation X_ji, and the factors θ_j = (θ_j1, θ_j2, . . .)
are distributed according to the DP G_j, one for each j: θ_ji|G_j ∼ G_j, X_ji|θ_ji ∼
F(θ_ji), where F(θ_ji) represents the distribution of the observation X_ji given θ_ji.
According to Eq. (2), the base distribution H of G_0 provides the prior for θ_ji.
This setting forms the definition of a hierarchical DP mixture model, where each
group is associated with a mixture model and the components are shared among
these mixtures due to the sharing of the atoms Ω_k among all G_j. Furthermore, since
each θ_ji is distributed according to G_j, it takes the value Ω̃_{jt} with probability
π_{jt}. Next, we introduce a binary latent variable Z_{jit} ∈ {0, 1} as an indicator
variable: Z_{jit} = 1 if θ_ji is associated with component t and maps to the
group-level atom Ω̃_{jt}; otherwise, Z_{jit} = 0. Therefore, we have
θ_ji = ∏_t Ω̃_{jt}^{Z_{jit}}. Since Ω̃_{jt} also maps to the global-level atom Ω_k
through W_{jtk}, we then have θ_ji = ∏_t Ω̃_{jt}^{Z_{jit}} = ∏_t ∏_k Ω_k^{W_{jtk} Z_{jit}}.
The indicator variables Z_{ji} = (Z_{ji1}, Z_{ji2}, . . .) are distributed according to π as

p(Z|π) = ∏_{j=1}^{M} ∏_{i=1}^{N} ∏_{t=1}^{∞} π_{jt}^{Z_{jit}}    (7)

p(Z|π′) = ∏_{j=1}^{M} ∏_{i=1}^{N} ∏_{t=1}^{∞} [π′_{jt} ∏_{s=1}^{t−1} (1 − π′_{js})]^{Z_{jit}}    (8)
4 W. Fan et al.
p(π′) = ∏_{j=1}^{M} ∏_{t=1}^{∞} Beta(π′_{jt}|1, λ_{jt}) = ∏_{j=1}^{M} ∏_{t=1}^{∞} λ_{jt} (1 − π′_{jt})^{λ_{jt}−1}    (9)
The generalized Dirichlet (GD) distribution with parameters α = (α_1, . . . , α_D) and β = (β_1, . . . , β_D) is defined as

GD(Y|α, β) = ∏_{l=1}^{D} [Γ(α_l + β_l) / (Γ(α_l) Γ(β_l))] Y_l^{α_l−1} (1 − ∑_{f=1}^{l} Y_f)^{γ_l}    (10)

where ∑_{l=1}^{D} Y_l < 1 and 0 < Y_l < 1 for l = 1, . . . , D, α_l > 0, β_l > 0,
γ_l = β_l − α_{l+1} − β_{l+1} for l = 1, . . . , D − 1, γ_D = β_D − 1, and Γ(·) is the
gamma function. Based on an interesting mathematical property of the GD distribution,
which is thoroughly discussed in [7], we can transform the original data point
Y using a geometric transformation into another D-dimensional data point X
with independent features, in the form GD(X|α, β) = ∏_{l=1}^{D} Beta(X_l|α_l, β_l),
where X = (X_1, . . . , X_D), X_1 = Y_1 and X_l = Y_l / (1 − ∑_{f=1}^{l−1} Y_f) for l > 1,
and Beta(X_l|α_l, β_l) is a Beta distribution with parameters {α_l, β_l}. Now
let us consider a data set X containing N random vectors separated into
M groups; each vector X_ji = (X_{ji1}, . . . , X_{jiD}) is represented in a
D-dimensional space and is drawn from a hierarchical infinite GD mixture model.
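The geometric transformation above is straightforward to implement; a minimal sketch (the function name is ours):

```python
import numpy as np

def gd_to_independent_beta(Y):
    """Map a proportional vector Y (entries in (0,1), sum < 1) to X with
    independent Beta-distributed features:
    X_1 = Y_1 and X_l = Y_l / (1 - sum_{f<l} Y_f) for l > 1."""
    Y = np.asarray(Y, dtype=float)
    # Cumulative sum of the preceding coordinates: 0, Y_1, Y_1+Y_2, ...
    cumsum_before = np.concatenate([[0.0], np.cumsum(Y[:-1])])
    return Y / (1.0 - cumsum_before)

X = gd_to_independent_beta([0.2, 0.3, 0.1])
# → [0.2, 0.3/0.8 = 0.375, 0.1/0.5 = 0.2]
```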
In practice, not all features are important; some of them may be irrelevant
and may even degrade the clustering performance. Feature selection is therefore
important and is adopted here as a tool to choose the "best" feature subset.
The most common feature selection technique in the context of unsupervised
learning defines an irrelevant feature as one whose distribution is independent
of the class labels [13]. Therefore, in our work the distribution of each feature
X_l is defined by

p(X_{jil}) = Beta(X_{jil}|α_{kl}, β_{kl})^{φ_{jil}} Beta(X_{jil}|α′_l, β′_l)^{1−φ_{jil}}    (11)

where φ_{jil} is a binary latent variable representing the feature relevance indicator,
such that φ_{jil} = 0 denotes that feature l of group j is irrelevant (i.e., noise)
and follows the background Beta distribution Beta(X_{jil}|α′_l, β′_l); otherwise, the
feature X_{jil} is relevant. The prior distribution of φ is defined as
p(φ|ε) = ∏_{j=1}^{M} ∏_{i=1}^{N} ∏_{l=1}^{D} ε_{l1}^{φ_{jil}} ε_{l2}^{1−φ_{jil}}    (12)

where each φ_{jil} is a Bernoulli variable such that p(φ_{jil} = 1) = ε_{l1} and
p(φ_{jil} = 0) = ε_{l2}. The vector ε = (ε_1, . . . , ε_D) represents the feature
saliencies, such that each ε_l = (ε_{l1}, ε_{l2}) with ε_{l1} + ε_{l2} = 1.
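Equation (11) can be evaluated directly once the Beta parameters are fixed. The sketch below (all function names and parameter values are illustrative, not the paper's) computes the feature density for a given relevance indicator φ:

```python
import math

def beta_pdf(x, a, b):
    """Beta density on (0, 1) with parameters a, b."""
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * x ** (a - 1) * (1.0 - x) ** (b - 1)

def feature_density(x, phi, a_comp, b_comp, a_bg, b_bg):
    """Eq. (11): component-specific Beta if the feature is relevant (phi = 1),
    shared background Beta if irrelevant (phi = 0)."""
    return beta_pdf(x, a_comp, b_comp) ** phi * beta_pdf(x, a_bg, b_bg) ** (1 - phi)

# Hypothetical parameter values for illustration only.
d_rel = feature_density(0.3, 1, 2.0, 5.0, 1.0, 1.0)  # relevant: Beta(2, 5) density
d_bg  = feature_density(0.3, 0, 2.0, 5.0, 1.0, 1.0)  # irrelevant: Beta(1, 1) is uniform
```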
In the next step, we introduce Gamma prior distributions for the parameters
α, β, α′, and β′.
To simplify notation, we define Θ = (Ξ, Λ) as the set of latent and unknown
random variables, where Ξ = {Z, φ} and Λ = {W, ε, ψ′, π′, α, β, α′, β′}.
Variational inference [1,3] is a deterministic approximation technique used
to find tractable approximations to the posteriors of a variety of statistical models.
The goal is to find an approximation Q(Θ) to the true posterior distribution
p(Θ|X) by maximizing the lower bound of ln p(X).
Adopting truncated stick-breaking representations with truncation levels T (group level) and K (global level), the variational posteriors of the indicator variables take the form

q(Z) = ∏_{j=1}^{M} ∏_{i=1}^{N} ∏_{t=1}^{T} ρ_{jit}^{Z_{jit}},    q(W) = ∏_{j=1}^{M} ∏_{t=1}^{T} ∏_{k=1}^{K} ϑ_{jtk}^{W_{jtk}}    (16)
q(φ) = ∏_{j=1}^{M} ∏_{i=1}^{N} ∏_{l=1}^{D} ϕ_{jil}^{φ_{jil}} (1 − ϕ_{jil})^{1−φ_{jil}},    q(ε) = ∏_{l=1}^{D} Dir(ε_l|ξ*)    (17)

q(π′) = ∏_{j=1}^{M} ∏_{t=1}^{T} Beta(π′_{jt}|a_{jt}, b_{jt}),    q(ψ′) = ∏_{k=1}^{K} Beta(ψ′_k|c_k, d_k)    (18)

q(α) = ∏_{k=1}^{K} ∏_{l=1}^{D} G(α_{kl}|ũ_{kl}, ṽ_{kl}),    q(β) = ∏_{k=1}^{K} ∏_{l=1}^{D} G(β_{kl}|g̃_{kl}, h̃_{kl})    (19)

q(α′) = ∏_{l=1}^{D} G(α′_l|ũ_l, ṽ_l),    q(β′) = ∏_{l=1}^{D} G(β′_l|g̃_l, h̃_l)    (20)
4 Experimental Results
We perform the categorization of cats and dogs using the proposed HInGDFs
model and the bag-of-visual-words representation. Our methodology is summarized
as follows. First, we extract 128-dimensional scale-invariant feature transform
(SIFT) [14] descriptors from each image using the Difference-of-Gaussians (DoG)
interest point detector and normalize them. Then, these extracted SIFT features
are modeled using our HInGDFs. Specifically, each image I_j is considered a
"group" and is therefore associated with a Dirichlet process mixture (infinite
mixture) model G_j. Thus, each extracted SIFT feature vector X_ji of the image
I_j is supposed to be drawn from the infinite mixture
In this work, we evaluate the effectiveness of the proposed approach for
categorizing cats and dogs using a publicly available database, namely the
Oxford-IIIT Pet database [15]. It contains 7,349 images of cats and dogs and is
composed of 12 different breeds of cats and 25 different breeds of dogs. Each of
these breeds contains about 200 images. Some sample images of this database are
displayed in Fig. 1 (cats) and Fig. 2 (dogs). The database is randomly divided
into two partitions: one for training (to learn the model and build the visual
vocabulary), the other for testing.
Fig. 1. Sample cat images from the Oxford-IIIT Pet database. (a) Abyssinian, (b)
Bengal, (c) Birman, (d) Bombay, (e) British Shorthair, (f) Egyptian Mau, (g) Persian,
(h) Ragdoll.
Fig. 2. Sample dog images from the Oxford-IIIT Pet database. (a) Basset hound, (b)
Boxer, (c) Chihuahua, (d) Havanese, (e) Keeshond, (f) Pug, (g) Samoyed, (h) Shiba Inu.
The categorization results of our approach and the two other tested approaches
are shown in Table 1. As we can see from this table, our approach (HInGDFs)
provided the highest categorization accuracy among all tested approaches.
Moreover, HInGDFs outperformed HInGD and HInGFs, which demonstrates the
advantage of incorporating a feature selection scheme into our framework and
verifies that the GD mixture model has better modeling capability for
proportional data than the Gaussian. Next, we evaluated all approaches on the
whole Oxford-IIIT Pet database (i.e., we do not separate cat and dog images).
The corresponding results are also shown in Table 1. Based on the obtained
results, the proposed approach again provides the best categorization accuracy
rate (42.73%), compared to HInGD (39.58%) and HInGFs.
Table 1. The average categorization performance (%) obtained over 30 runs using
different approaches. The values in parentheses are the standard deviations of the
corresponding quantities.
(Figure: feature saliency of each extracted feature, plotted against feature index; saliency values range roughly from 0.2 to 0.9.)
5 Conclusion
In this paper, we have developed a nonparametric Bayesian approach for data
modeling, classification, and feature selection, using both the hierarchical DP
and generalized Dirichlet distributions, which allows data to be modeled without
the need to define, a priori, the complexity of the entire model. In order to
learn the parameters of the proposed model, we have derived an inference
procedure that relies on variational Bayes. The validation of the model has been
based on a challenging application, namely image categorization. We are currently
working on extending our learning framework to online settings to handle the
problem of dynamic data modeling.
References
1. Attias, H.: A variational Bayes framework for graphical models. In: Proc. of Ad-
vances in Neural Information Processing Systems (NIPS), pp. 209–215 (1999)
2. Berg, T., Forsyth, D.: Animals on the web. In: 2006 IEEE Computer Society Con-
ference on Computer Vision and Pattern Recognition, vol. 2, pp. 1463–1470 (2006)
3. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer (2006)
4. Blei, D.M., Jordan, M.I.: Variational inference for Dirichlet process mixtures.
Bayesian Analysis 1, 121–144 (2005)
5. Bouguila, N., Ziou, D.: A hybrid SEM algorithm for high-dimensional unsupervised
learning using a finite generalized Dirichlet mixture. IEEE Transactions on Image
Processing 15(9), 2657–2668 (2006)
6. Bouguila, N., Ziou, D.: High-dimensional unsupervised selection and estimation of
a finite generalized Dirichlet mixture model based on minimum message length.
IEEE Transactions on Pattern Analysis and Machine Intelligence 29(10), 1716–
1731 (2007)
7. Boutemedjet, S., Bouguila, N., Ziou, D.: A hybrid feature extraction selection ap-
proach for high-dimensional non-Gaussian data clustering. IEEE Transactions on
Pattern Analysis and Machine Intelligence 31(8), 1429–1443 (2009)
8. Fan, W., Bouguila, N.: Variational learning of a Dirichlet process of generalized
Dirichlet distributions for simultaneous clustering and feature selection. Pattern
Recognition 46(10), 2754–2769 (2013)
9. Fan, W., Bouguila, N., Ziou, D.: Unsupervised hybrid feature extraction selection
for high-dimensional non-Gaussian data clustering with variational inference. IEEE
Transactions on Knowledge and Data Engineering 25(7), 1670–1685 (2013)
10. Ferguson, T.S.: Bayesian Density Estimation by Mixtures of Normal Distributions.
Recent Advances in Statistics 24, 287–302 (1983)
11. Ishwaran, H., James, L.F.: Gibbs sampling methods for stick-breaking priors. Jour-
nal of the American Statistical Association 96, 161–173 (2001)
12. Korwar, R.M., Hollander, M.: Contributions to the theory of Dirichlet processes.
The Annals of Probability 1, 705–711 (1973)
13. Law, M.H.C., Figueiredo, M.A.T., Jain, A.K.: Simultaneous feature selection and
clustering using mixture models. IEEE Transactions on Pattern Analysis and Ma-
chine Intelligence 26(9), 1154–1166 (2004)
14. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International
Journal of Computer Vision 60(2), 91–110 (2004)
15. Parkhi, O.M., Vedaldi, A., Zisserman, A., Jawahar, C.V.: Cats and dogs. In: IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3498–3505
(2012)
16. Ramanan, D., Forsyth, D.A.: Using temporal coherence to build models of animals.
In: Proc. of the 9th IEEE International Conference on Computer Vision (ICCV),
pp. 338–345. IEEE Computer Society (2003)
17. Sato, M.: Online model selection based on the variational Bayes. Neural Compu-
tation 13, 1649–1681 (2001)
18. Sethuraman, J.: A constructive definition of Dirichlet priors. Statistica Sinica 4,
639–650 (1994)
19. Teh, Y.W., Jordan, M.I.: Hierarchical Bayesian Nonparametric Models with Ap-
plications. In: Hjort, N., Holmes, C., Müller, P., Walker, S. (eds.) Bayesian Non-
parametrics: Principles and Practice. Cambridge University Press (2010)
20. Teh, Y.W., Jordan, M.I., Beal, M.J., Blei, D.M.: Hierarchical Dirichlet processes.
Journal of the American Statistical Association 101(476), 1566–1581 (2006)
21. Wang, C., Paisley, J.W., Blei, D.M.: Online variational inference for the hierarchical
Dirichlet process. Journal of Machine Learning Research - Proceedings Track 15,
752–760 (2011)
Importance of Feature Selection
for Recurrent Neural Network
Based Forecasting of Building Thermal Comfort
1 Introduction
Although artificial neural networks are very popular soft-computing techniques
in industrial applications, recurrent neural networks are not used as often as
feed-forward models. One possible cause is that their training is usually much
more difficult, and more complex recurrent models are more sensitive to
over-fitting. It can therefore be crucial to perform a proper selection of
network inputs, which can simplify training and lead to better generalization
abilities [4], [2]. Proper input selection has been observed to be particularly
important in real-world applications [8].
In this paper, a simple recurrent neural network model is adopted. The resulting
network can be further used as a data-based black-box model for optimization of
the building heating. At each hour, a building management system finds the
indoor air temperature set points that lead to a minimum output of the
consumption model and a proper level of thermal comfort. For this purpose, it is
crucial to reach a good prediction accuracy for both consumption and thermal
comfort. Since the cost function is highly nonlinear and multi-modal,
population-based metaheuristic optimization can be used to advantage. Because
the neural network is used in the optimization loop many times, it is also
crucial to keep the network as small as possible. Moreover, if the optimization
is performed remotely, one must minimize the amount of data that is measured and
transferred from the building to the optimization agent. All these requirements
imply a critical need for a proper selection of features, which leads to
reasonable data acquisition requirements and proper prediction accuracy.
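To make the setting concrete, a simple (Elman-style) recurrent network for one-step-ahead forecasting can be sketched as follows. The architecture, layer sizes, and weights here are illustrative only and untrained; this is not the model used in the paper:

```python
import numpy as np

class ElmanRNN:
    """Minimal Elman (simple recurrent) network for one-step-ahead forecasting.
    Weights are random here; a real model would be trained, e.g. by
    backpropagation through time."""

    def __init__(self, n_in, n_hidden, rng):
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))    # input -> hidden
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context -> hidden
        self.w_out = rng.normal(0.0, 0.1, n_hidden)           # hidden -> output
        self.h = np.zeros(n_hidden)                           # context units

    def step(self, x):
        # Context units feed the previous hidden state back into the network.
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.w_out @ self.h  # scalar one-step-ahead prediction

rng = np.random.default_rng(1)
net = ElmanRNN(n_in=4, n_hidden=8, rng=rng)
predictions = [net.step(rng.normal(size=4)) for _ in range(24)]  # 24 hourly steps
```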
We focus on one-hour-ahead forecasting of thermal discomfort in a particular
gas-heated office building. The comfort is assessed in terms of the Predicted
Mean Vote (PMV) model. First, we demonstrate that the accuracy of the recurrent
model is higher than that of a feed-forward network. The main conclusion,
however, is that sensitivity-based feature selection can help the recurrent
network reach a much higher reduction of the input dimensionality while keeping
a good accuracy level. This phenomenon has also been observed in a study [3]
focused on the prediction of heating gas consumption.
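Sensitivity-based input selection of the kind mentioned above can be sketched generically as ranking inputs by the magnitude of the model output's response to small input perturbations. The following is an illustrative finite-difference version with a toy model, not the paper's exact procedure:

```python
import numpy as np

def input_sensitivities(predict, X, eps=1e-4):
    """Rank inputs by mean absolute finite-difference sensitivity of the model
    output; `predict` maps one feature vector to a scalar forecast."""
    base = np.array([predict(x) for x in X])
    sens = np.zeros(X.shape[1])
    for l in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, l] += eps  # perturb one input dimension at a time
        perturbed = np.array([predict(x) for x in Xp])
        sens[l] = np.mean(np.abs((perturbed - base) / eps))
    return sens

# Toy model for illustration: feature 0 dominates, feature 2 matters slightly.
predict = lambda x: 3.0 * x[0] + 0.1 * x[2]
X = np.random.default_rng(2).normal(size=(16, 4))
sens = input_sensitivities(predict, X)
# → feature 0 has the largest sensitivity
```

Inputs with near-zero sensitivity are candidates for removal, shrinking the network and the data that must be measured and transferred.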
In Section 2, we briefly describe all the methods used in the experimental
part, which is described in Section 3. Final discussion and conclusions can be
found in Section 4.
A real office building located at ENEA (Casaccia Research Centre, Rome, Italy)
was considered as a case study (see Figure 1). The building is composed of three
floors and a thermal subplant in the basement. It is equipped with an advanced
monitoring system aimed at collecting data about the environmental conditions
and electrical and thermal energy consumption. The building contains 41 offices
of different sizes, with floor areas ranging from 14 to 36 m2, 2 EDP rooms of
about 20 m2 each, 4 laboratories, 1 control room, and 2 meeting rooms. Each
office room has 1 or 2 occupants.
In order to simulate the variables of interest, a MATLAB Simulink simulator
based on the HAMBASE model ([7], [9]) was developed. In particular, the building
was divided into 15 different zones according to their different thermal behavior
14 M. Macas et al.
depending on solar radiation exposure. Each zone is therefore modeled with
similar thermal and physical characteristics. Figure 2 shows the division of
zones for each floor. Each zone covers several rooms. Although there are 15
zones in total, zones 3, 8, and 13 correspond to corridors and do not have fan
coils. Below, these zones are called non-active zones, while all the other zones
are called active zones. The simulator is used to generate data from which the
thermal comfort can be evaluated. Its inputs are indoor temperature set points
and external meteorological conditions. Its outputs include radiant temperature,
air temperature, and relative humidity.
From those variables, the PPD can be computed using the PMV model. Among the
variables taken from the building model, four were set to their typical values:
metabolism (70 W m−2), external work (0 W m−2), clothing (1 [−]), and air
velocity (0.1 m s−1). This simplification is used because we do not yet have a
model for these variables. This issue is out of the scope of this paper and will
be addressed in future work.
2.3 Data