Alfredo Vellido · Karina Gibert · Cecilio Angulo · José David Martín Guerrero (Editors)

Advances in Self-Organizing Maps, Learning Vector Quantization, Clustering and Data Visualization

Proceedings of the 13th International Workshop, WSOM+ 2019, Barcelona, Spain, June 26–28, 2019
Advances in Intelligent Systems and Computing
Volume 976
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Győr, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, perception and vision, DNA and immune based systems, self-organizing and adaptive systems, e-learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia.

The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.
Editors

Alfredo Vellido
Department of Computer Science, UPC BarcelonaTech, Barcelona, Spain

Karina Gibert
Knowledge Engineering and Machine Learning Group (KEMLG) at Intelligent Data Science and Artificial Intelligence Research Center, UPC BarcelonaTech, Barcelona, Spain

Cecilio Angulo
Department of Automatic Control, UPC BarcelonaTech, Barcelona, Spain

José David Martín Guerrero
Departament d’Enginyeria Electrònica, Universitat de València, Burjassot, Valencia, Spain
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The Association for Computing Machinery (ACM) has named Yoshua Bengio,
Geoffrey Hinton, and Yann LeCun as recipients of the 2018 Turing Award for their
major contributions to the development of deep neural networks as a critical
component of computing. This is a timely reminder of the renewed vitality of the
machine learning field, in which self-organizing systems have played a major role
since the 1980s, not only from the perspective of data analysis but also as in silico
models in computational neuroscience.
This book contains the peer-reviewed and accepted contributions presented at
the 13th International Workshop on Self-Organizing Maps (WSOM+2019) held at
Universitat Politècnica de Catalunya (UPC BarcelonaTech), Barcelona (Spain),
during June 26–28, 2019. WSOM+2019 is the latest in a series of biennial inter-
national conferences that started with WSOM’97 in Helsinki, Finland, with Prof.
Teuvo Kohonen as General Chairman. We would like to express our gratitude to
Prof. Kohonen for serving as Honorary Chair of WSOM+2019.
The reader will find here a varied collection of studies that testify to the vitality
of the field of self-organizing systems for data analysis. Most of them relate to the
core models in the field, namely self-organizing maps (SOMs) and learning vector
quantization (LVQ), but the workshop also catered for research in the broader
spectrum of unsupervised learning, clustering, and multivariate data visualization
problems. It is also worth highlighting that the book includes a balanced mix
of theoretical studies and applied research, covering a wide array of fields that range
from business and engineering to the life sciences. As a result, the book should be
of interest to machine learning researchers and practitioners in general and, more
specifically, to those interested in keeping up with developments in
self-organization, unsupervised learning, and data visualization.
The book collects the work of more than 90 researchers from 18 countries, and it
is the result of a collective effort. It would not have been possible without the advice
and guidance of the international WSOM Steering Committee, and the quality
of the final selection of papers is the result of the selfless reviewing work performed
by the Program Committee members and the anonymous additional reviewers,
which enhanced the sterling work of the authors themselves. We are truly indebted
to all of them.
June 2019
1 Introduction
Self-organizing maps (SOMs), or Kohonen maps, introduced in [9], are a particular
topographically organized vector quantization algorithm. A SOM computes a mapping
from a high-dimensional space to a regular grid, usually one- or two-dimensional,
with the specificity that close positions in the regular grid are associated with
close positions in the original high-dimensional space. We have a pretty good
understanding of what a SOM is doing. Even if there is no energy function
associated with the Kohonen learning rule that could formally state what Kohonen
maps actually capture (some authors have suggested alternative formulations
derived from an energy function; see for example [6]), we can still pretty much
see a Kohonen map as a K-means with a topology, i.e., as capturing the
distribution of input samples in a topographically organized fashion. As soon
as we experiment with, for example, 2D Kohonen maps on two-dimensional
input samples, we quickly observe the nice unfolding of the map, sometimes trapped
in some kind of local minimum where a twist remains in the map. While
our understanding of Kohonen SOMs is pretty clear, things become more
complicated when we turn to recurrent SOMs.
Recurrent SOMs are a natural extension of SOMs when dealing with serial
inputs in order to “find structure in time” (J. Elman). This extension follows
the same principle introduced for supervised multi-layer perceptrons by [3,7] of
feeding back a context computed from the previous time step. These recurrent
SOMs are built by extending the prototype vector with an extra component
which encapsulates some information about the past. There are indeed various
proposals about the information memorized from the past, e.g. keeping only the
location of the previous best matching unit [4] or the matching over the whole
map [11]. An overview of recurrent SOMs is provided in [5]. Cellular and
biologically inspired architectures have been proposed as well [8]. When the question
of understanding how recurrent SOMs work comes to the front, there are some
theoretical results that bring answers. However, as with any theoretical study,
they are necessarily limited in the questions they can address. For example, [10]
studied the behavior of recurrent SOMs by analyzing their dynamics in the absence
of inputs. As with standard SOMs, for which mathematical investigations do not yet
cover the whole field [2], these theoretical results bring only a partial answer, and
there is still room for experimental investigation. Despite numerous works, how
recurrent SOMs deal with serial inputs and what they actually learn is not
obvious: “The internal model representation of structures is unclear” [5]. We
indeed lack the clear representations that we possess for understanding SOMs.
In order to tackle this issue, we focus in this paper on the simplest recurrent
SOM, where the temporal context is only the position of the best matching
unit (BMU) within the map at the previous iteration (which bears resemblance
to the SOM-SD of [4]). This simplicity comes with the ability to design specific
visualizations to investigate the behavior of the map. As we shall see in
the experiments, despite this simplicity, there is still an interesting richness of
dynamics. In particular, we will investigate and visualize the behavior of this
simple recurrent SOM when inputs are provided sequentially by different hidden
Markov models. These will illustrate the behavior of the recurrent SOM
in the presence of ambiguous observations, long-term dependencies, changing
dynamics, noise in the observations and noise in the transitions.
2 Methods
2.1 Algorithm
In our experiments, the inputs in X that are provided at each time step
are generated from a Hidden Markov Model (HMM). The HMM has a finite
set $S = \{s_0, s_1, \cdots\}$ of states. Each state is an integer (i.e., $S \subset \mathbb{N}$). At each
time step, a state transition is performed according to a transition matrix. In
the current state $s_t$, the observation is sampled from the conditional probability
$P(\xi \mid s_t)$, defined by the observation matrix of the HMM. Different states of the
HMM may provide a similar observation. In this case, the recursive architecture
is expected to distinguish such states in spite of the observation
ambiguity. In other words, the current BMU $i_t$ is expected to represent the
actual state $s_t$ even if several other states could have provided the current input $\xi_t$.
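Algorithm 1 itself is not reproduced in this excerpt. As a stand-in, here is a minimal Python sketch of a SOM-SD-style update consistent with the description above: each unit carries an observation weight $w_i \in X$ and a context weight $c_i \in M$ matched against the previous BMU position. The blend parameter beta, the learning rate, and the Gaussian neighborhood width are assumptions of this sketch, not the authors' settings.

```python
import numpy as np

def recurrent_som_step(w, c, xi, prev_bmu_pos, positions,
                       beta=0.5, lr=0.1, sigma=0.1):
    """One SOM-SD-style update on a 1D map M = [0, 1] (assumed variant).

    w, c         : arrays of shape (n_units,), observation and context weights
    xi           : scalar observation in X = [0, 1]
    prev_bmu_pos : position in M of the BMU at the previous time step
    positions    : array of shape (n_units,), unit positions on [0, 1]
    """
    # Blended distance: match the current input and the previous BMU position.
    d = (1 - beta) * (xi - w) ** 2 + beta * (prev_bmu_pos - c) ** 2
    bmu = int(np.argmin(d))
    # Gaussian neighborhood centered on the BMU position.
    h = np.exp(-((positions - positions[bmu]) ** 2) / (2 * sigma ** 2))
    # Kohonen updates of both weight components.
    w += lr * h * (xi - w)
    c += lr * h * (prev_bmu_pos - c)
    return bmu

# Usage: units regularly spaced on [0, 1], random inputs as a stand-in
# for the HMM observations described above.
n_units = 100
positions = np.linspace(0.0, 1.0, n_units)
w, c = np.random.rand(n_units), np.random.rand(n_units)
prev_pos = 0.5
for xi in np.random.rand(1000):
    bmu = recurrent_som_step(w, c, xi, prev_pos, positions)
    prev_pos = positions[bmu]
```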
2.2 Representations
Algorithm 1 can be executed with any dimension for M without loss of generality.
Nevertheless, we use 1D maps ($M = [0, 1]$) for the sake of visualization. The weights
$w_i$ are in $X = [0, 1]$ as well. They can be represented as a gray-scale value, from
black (0) to white (1). In the bottom left part of Fig. 1, the background of the
chart is made of the $w_i^t$, with $t$ on the abscissa and $i$ on the ordinate. On this chart, red curves
are also plotted. This is done when the HMM is deterministic (and thus cycles
through its states, visiting $s_0, s_1, \cdots, s_{p-1}, s_0, s_1, \cdots$). If the state sequence that
is repeated throughout the experiment has length $p$ ($p = 10$ in the experiment
of Fig. 1), $p$ red curves are plotted on the chart. For $0 \le k < p$, the $k$th red
curve links the points $\{(t, i_t) \mid t \bmod p = k\}$. The curves show the evolution of
the BMU position corresponding to each of the $p$ states throughout learning.
From left to right in that chart in Fig. 1, some red curves are initially overlaid
before getting progressively distant. Such red-curve splits reveal a bifurcation,
since the map allocates a new place on M for representing a newly detected
HMM state. This allocation is topographic, since the evolution is a split followed by
a progressive spatial differentiation of the state positions.
Let us take another benefit from using 1D maps and introduce an original
representation of both the w and c weights. This representation is referred to as a
protograph in this paper. It consists of a circular chart (see three of them on the top
left of Fig. 1). The gray almost-closed circle represents $M = [0, 1]$. At time step
$t$, one can plot on the circle the two weights related to $i_t$. The first weight, related to
the input, is $w(i_t)$, a value in X to which a gray level is associated. It
is plotted as a dot with the corresponding gray value, placed on the circle
at position $i_t$. The second weight to be represented for $i_t$ is $c(i_t)$, related to the
recurrent context, which is a position in M and thus a position on the circle.
$c(i_t)$ is represented with an arrow, starting from position $c(i_t)$ on the circle and
pointing at $i_t$ on the circle, where the dot representing $w(i_t)$ is actually located.
This makes a dot-arrow pair for $i_t$. The full protograph at time $t$ plots the dot-arrow
pairs $(w(i_t), c(i_t))$ for the last 50 steps. The third protograph in Fig. 1
seems to contain only 10 dot-arrow pairs, since many of the 50 are identical
to others. This last protograph corresponds to an organized map: it reveals the
number of states visited by the HMM (number of dots), where they are encoded
in the map (dot positions), which observation each state provides (dot colors),
and the state sequence driven by the HMM transitions (follow the arrows from
one state to another). Making movies from the succession of such protographs
unveils the dynamics of the organization of spatio-temporal representations in
the map. The splits and separations mentioned for the red curves are then visible
as a split of one dot into two dots that afterwards slide away from each other.
Movies of the experiments are available online.
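As an illustration, here is a minimal matplotlib sketch of such a protograph, reusing the map variables of the previous snippet's conventions; the radius, arrow style, and marker details are choices of this sketch, not the paper's figures.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_protograph(w, c, positions, bmus, ax):
    """Dot-arrow protograph: a unit i sits on the circle at angle
    2*pi*positions[i]; it carries a gray dot encoding w[i] and an arrow
    drawn from the position encoded by c[i] toward the unit itself."""
    def on_circle(p, r=1.0):
        # Map a position in [0, 1] to a point on the circle.
        a = 2 * np.pi * p
        return r * np.cos(a), r * np.sin(a)

    gap = np.linspace(0.0, 0.98, 200)          # almost-closed circle for M
    ax.plot(*on_circle(gap), color="lightgray")
    for i in set(bmus):                        # BMUs seen over the last steps
        x, y = on_circle(positions[i])
        xc, yc = on_circle(c[i])
        # Arrow from the context position c(i) to the unit's own position.
        ax.annotate("", xy=(x, y), xytext=(xc, yc),
                    arrowprops=dict(arrowstyle="->", color="tab:red"))
        # Gray dot encoding the observation weight w(i) in [0, 1].
        ax.scatter([x], [y], color=str(w[i]), edgecolors="black", zorder=3)
    ax.set_aspect("equal")
    ax.axis("off")

# Usage with random weights as stand-ins for a trained map.
n_units = 100
positions = np.linspace(0.0, 1.0, n_units)
w, c = np.random.rand(n_units), np.random.rand(n_units)
fig, ax = plt.subplots()
plot_protograph(w, c, positions, np.random.randint(0, n_units, 50), ax)
plt.show()
```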
2.3 Evaluation
If the map encodes the HMM states with dedicated BMU positions, each
observed BMU position must be paired with a single state. In this case, $D_t$
can be viewed as a set of samples of a function from M to S. To check this
property for the map at time $t$, a supervised learning process is performed from
$D_t$, which is viewed here as a container of input/output pairs. As $S \subset \mathbb{N}$, this is
a multi-class learning problem. A basic bi-class decision stump is used in this
paper (i.e., a threshold on map position values makes the decision), adapted to
the multi-class problem thanks to a one-versus-one scheme. Let us denote by $\chi_t$
the classification error rate obtained on $D_t$ (i.e., the empirical risk). The value
$\chi_t$ is null when one can recover the state of the HMM from the positions of the
BMUs collected during the 100 steps. It is higher when a small contiguous region
of the map is associated with several HMM states.
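A minimal sketch of this evaluation with scikit-learn, assuming $D_t$ is realized as arrays of BMU positions and state labels; a depth-1 decision tree serves as the threshold-based decision stump, wrapped in a one-versus-one scheme as the text describes.

```python
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.tree import DecisionTreeClassifier

def empirical_risk(bmu_positions, states):
    """Classification error chi_t of a one-vs-one decision stump that
    tries to recover the HMM state from the BMU position alone."""
    X = np.asarray(bmu_positions).reshape(-1, 1)   # positions on M = [0, 1]
    y = np.asarray(states)                         # integer HMM states
    stump = DecisionTreeClassifier(max_depth=1)    # a single threshold
    clf = OneVsOneClassifier(stump).fit(X, y)
    return 1.0 - clf.score(X, y)                   # empirical risk on D_t

# Usage: the risk is zero when each state owns a distinct region of the map.
chi_t = empirical_risk([0.1, 0.12, 0.5, 0.52, 0.9], [0, 0, 1, 1, 2])
print(chi_t)
```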
In our experiment, $\chi_t$ is computed every 100 steps in a run. As previously
said, 1000 runs are performed in order to compute statistics about the evolution
of $\chi_t$ as the map gets organized. At each time step $t$, only the best 90% of the
1000 $\chi_t$ values are kept. The evolution curve, as reported on the right of Fig. 1, plots
the upper and lower bounds of these 900 values, as well as their average. There
are indeed fewer than 10% of the runs for which the map does not properly
self-organize. A deeper investigation of this phenomenon is required, but it is beyond
the scope of the present paper, which focuses on the dynamics of the
self-organization when it occurs. This is why the corresponding runs are removed
from the performance computation.
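The aggregation can be sketched as follows, assuming the error curves of the 1000 runs are stacked in a 2D array; per checkpoint, the best 90% of values are kept and their bounds and mean reported.

```python
import numpy as np

def evolution_curve(chi, keep=0.9):
    """chi: array of shape (n_runs, n_checkpoints) of error rates chi_t.
    For each checkpoint, keep the best `keep` fraction of the runs and
    return their (lower bound, average, upper bound)."""
    n_keep = int(keep * chi.shape[0])
    kept = np.sort(chi, axis=0)[:n_keep]     # lowest errors per checkpoint
    return kept.min(axis=0), kept.mean(axis=0), kept.max(axis=0)

# Usage with dummy data standing in for 1000 runs x 100 checkpoints.
lo, avg, hi = evolution_curve(np.random.rand(1000, 100))
```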
3 Results
As mentioned in Sect. 2.1, the serial inputs $\xi_t$ are observations provided by the
successive states of an HMM. Let us use a comprehensive notation for the HMMs
used in our experiments. Observations are in $X = [0, 1]$ as previously stated, and
6 specific input values are represented by a letter ($A = 0$, $B = 0.2$, $C = 0.4$, $D =
0.6$, $E = 0.8$, $F = 1$). The HMM denoted by AABFCFE is then a 7-state HMM
for which $s_0$ provides observation A, $s_1$ provides A as well, $s_2$ provides B, ..., and $s_6$
provides E. The states are visited from $s_0$ to $s_6$ periodically. In this particular
HMM, $(s_0, s_1)$, as well as $(s_3, s_5)$, are ambiguous since they provide the same
observation (A and F, respectively) as an input to the recurrent SOM. When a state provides
an observation uniformly sampled in $[0, 1]$, it is denoted by ∗. The notation
$AB\overset{\sigma}{CD}EF$ means that the observation values for both $s_2$ and $s_3$ are altered by an additive normal
noise with standard deviation $\sigma$. Last, the notation $ABC\,{}^{p}_{q}\,DEF$ means that the
HMM is made of two periodical HMMs, ABC and DEF, with random transitions
from any state of ABC to any state of DEF with probability $p$.
Random transitions from DEF to ABC occur similarly with probability $q$.
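To make this notation concrete, the following sketch builds such a cyclic HMM from its letter string and samples observations from it. The function name and the optional noise argument are conveniences of this sketch, not the authors' code, and the p/q switching variant is not implemented here.

```python
import numpy as np

LEVELS = {"A": 0.0, "B": 0.2, "C": 0.4, "D": 0.6, "E": 0.8, "F": 1.0}

def hmm_sequence(spec, n_steps, sigma=0.0, rng=None):
    """Yield (state, observation) pairs from a cyclic HMM such as 'AABFCFE'.

    spec  : string over A..F and '*' ('*' = uniform observation in [0, 1])
    sigma : optional additive Gaussian noise on letter observations
    """
    rng = rng or np.random.default_rng()
    s = 0
    for _ in range(n_steps):
        letter = spec[s]
        if letter == "*":
            obs = rng.uniform(0.0, 1.0)
        else:
            obs = float(np.clip(LEVELS[letter] + sigma * rng.normal(), 0.0, 1.0))
        yield s, obs
        s = (s + 1) % len(spec)      # deterministic cyclic transition

# Usage: the ambiguous 10-state HMM of the first experiment.
for s, xi in hmm_sequence("ABCDEFEDCB", 20):
    print(s, round(xi, 2))
```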
In order to test the ability of the recurrent SOM to deal with ambiguous observations, we consider the HMM ABCDEFEDCB, i.e., an HMM with 10 states and 6 distinct observations.
In this third experiment, the algorithm receives observations from the HMM
ABCDEFEDCB for the first 10000 steps and then from the HMM ABCBAFEDEF for
the last 10000 steps. The prototypes obtained at t = 10000 and t = 20000, as well
as the evolution of the observation weights and winner locations, are displayed
on the left of Fig. 3 for a single run. The algorithm successfully recovers the
structure of the two HMMs. Analyzing the red curves at the time the second
HMM is presented is illuminating. One can note a reuse of the previously
learned prototypes together with some adaptation of the prototypes. Indeed, there was a
single BMU responsive for an F (the white node on the first protograph) during the first
sequence; it splits, and two BMUs are then responsive for an F during the second
sequence, which makes sense given that the second HMM has two different states
producing the observation F. The same comment holds for the BMUs when the
observation A is produced by the HMM. On the contrary, while two BMUs had
observation prototypes w close to a C and a D during the first training period, only
one BMU remains for each of C and D after learning with the second HMM.
The performance of the algorithm, run for 1000 independent trials, is shown on
the right of Fig. 3. Similarly to the first experiment, it takes around 5000 steps
to learn the sequence. At the time the HMM is changed, there is a transient degradation
in performance that quickly vanishes.
Fig. 3. Observations from ABCDEFEDCB for the first 10000 steps and then from ABCBAFEDEF. The protographs are recorded at t = 10000 and t = 20000.
Fig. 4. Observations from $\overset{0.05}{BCDEDC}$. The protograph is recorded at t = 20000.
Fig. 5. Observations from $ABCDEFEDCB\,{}^{p}_{q}\,{*}$. The protograph is recorded at t = 20000.
lead to a perfect classification. Indeed, when the HMM comes back from the noisy
state ∗, it sometimes requires two successive observations to identify which
state the HMM is in. This explains why $\chi_t$ is not null.
4 Conclusion
This paper presents an empirical approach to recurrent self-organizing maps by
introducing original representations and performance measurements. The experiments
show how spatio-temporal structure gets organized internally to retrieve
the hidden states of the external process that provides the observations. An area
of the map associated with an observation splits into close areas when observation
ambiguity is detected, and these areas then get progressively separated on
the map. Unveiling the emergence of such a complex and continuous behavior,
from both the SOM-like nature of the process and a simple re-entrance, is the
main result of this paper. Such a simple architecture also shows robustness to
temporal and spatial damage in the input series, as well as the ability to deal
with deep time dependencies while the recurrence only propagates the previous step's
context. Forthcoming work will consist in using such recurrent maps in more
integrated multi-map architectures, as started in [1].
References
1. Baheux D, Fix J, Frezza-Buet H (2014) Towards an effective multi-map self-organizing recurrent neural network. In: ESANN, pp 201–206
2. Cottrell M, Fort J, Pagès G (1998) Theoretical aspects of the SOM algorithm. Neurocomputing 21(1):119–138
3. Elman JL (1990) Finding structure in time. Cognit Sci 14:179–211
4. Hagenbuchner M, Sperduti A, Tsoi AC (2003) A self-organizing map for adaptive processing of structured data. IEEE Trans Neural Netw 14:491–505
5. Hammer B, Micheli A, Sperduti A, Strickert M (2004) Recursive self-organizing
network models. Neural Netw 17(8–9):1061–1085
6. Heskes T (1999) Energy functions for self-organizing maps. In: Oja E, Kaski S
(eds) Kohonen maps. Elsevier Science B.V, Amsterdam, pp 303–315
7. Jordan MI (1996) Serial order: a parallel distributed processing approach. Technical report 8604, Institute for Cognitive Science, University of California, San Diego
8. Khouzam B, Frezza-Buet H (2013) Distributed recurrent self-organization for
tracking the state of non-stationary partially observable dynamical systems. Biol
Inspired Cognit Archit 3:87–104
9. Kohonen T (1982) Self-organized formation of topologically correct feature maps.
Biol Cybern 43(1):59–69
10. Tiňo P, Farkaš I, van Mourik J (2006) Dynamics and topographic organization of
recursive self-organizing maps. Neural Comput 18(10):2529–2567
11. Voegtlin T (2002) Recursive self-organizing maps. Neural Netw 15(8–9):979–991
Self-Organizing Mappings on the Flag Manifold
1 Introduction
manifold will be utilized later to introduce the geodesic formula on the flag
manifold. The nested structure inherent in a flag shows up naturally in the
context of data analysis.
1. Wavelet analysis: Wavelet analysis and its associated multiresolution representation produce a nested sequence of vector spaces that approximate data with increasing resolution [1,12,13]. Each scaling subspace $V_j$ is a dilation of its adjacent neighbor $V_{j+1}$ in the sense that if $f(x) \in V_j$ then a reduced-resolution copy $f(x/2) \in V_{j+1}$. The scaling subspaces are nested,
$$\cdots \subset V_2 \subset V_1 \subset V_0 \subset V_{-1} \subset \cdots,$$
and in the finite-dimensional setting can be considered as a point on a flag manifold. The flag SOM algorithm provides a means to visualize relationships in a collection of discrete wavelet transforms and organize the corresponding sequences of nested subspaces in a coherent manner via a low-dimensional grid.
2. SVD basis of a real data matrix: Let $X \in \mathbb{R}^{n \times k}$ be a real data matrix consisting of $k$ samples in $\mathbb{R}^n$. Let $U\Sigma V^T = X$ be the thin SVD of $X$. The columns of the $n$-by-$d$ orthonormal matrix $U$ form an ordered basis for the column span of $X$, ordered by the magnitude of the singular values of $X$. This order provides a straightforward way to associate to $U$ a point on a flag manifold. If $U = [u_1|u_2|\ldots|u_d]$, then the nested subspaces
$$\mathrm{span}([u_1]) \subset \mathrm{span}([u_1|u_2]) \subset \cdots \subset \mathrm{span}([u_1|\cdots|u_d]) \subset \mathbb{R}^n$$
form a flag of type $(1, 1, \ldots, 1, n-d; n)$ in $\mathbb{R}^n$; a small numerical sketch of this construction follows below. After we introduce the distance metric on the flag manifold in Sect. 3.2, one could consider computing the distance between two flags, perhaps derived from the thin SVDs of two different data sets, which takes the order of the basis into consideration.
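As an illustration of the SVD construction above, here is a minimal numpy sketch: it computes the thin SVD of a data matrix and materializes the nested subspaces as prefixes of the ordered basis U. The function name and the printed summary are conveniences of this sketch.

```python
import numpy as np

def svd_flag(X):
    """Return the ordered orthonormal basis U of the column span of X,
    whose prefixes U[:, :1], U[:, :2], ... span the nested subspaces of
    a flag of type (1, 1, ..., 1, n - d; n)."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)   # thin SVD: X = U S Vt
    return U                                           # ordered by singular values

# Usage: k = 5 samples in R^4; the flag is the chain of prefix spans of U.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
U = svd_flag(X)
for j in range(1, U.shape[1] + 1):
    basis = U[:, :j]          # orthonormal basis of the j-th nested subspace
    print(j, basis.shape)
```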
Fig. 2. Illustration of Eq. (2). The vertical lines represent the equivalence classes $[Q_1]$ and $[Q_2]$, respectively. $Q_1$ is mapped to an element of $[Q_2]$ by right multiplication with $\exp(H)$, which is then sent to $Q_2$ by multiplying with $M$.
$$Q = \exp(H) \cdot M \qquad (4)$$

Here we define $\mathcal{W}$ as the vector space of all $n$-by-$n$ skew-symmetric matrices. Let $p = (n_1, n_2, \ldots, n_d; n)$. We define $\mathcal{W}_p$ to be the set of all block-diagonal skew-symmetric matrices of type $p$, and $\mathcal{W}_p^\perp$ to be its orthogonal complement in $\mathcal{W}$, i.e.,
$$\mathcal{W}_p = \left\{ G \in \mathcal{W} \,\middle|\, G = \begin{pmatrix} G_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & G_d \end{pmatrix} \right\}, \qquad \mathcal{W}_p^\perp = \left\{ H \in \mathcal{W} \,\middle|\, H = \begin{pmatrix} 0_{n_1} & & * \\ & \ddots & \\ -*^{T} & & 0_{n_d} \end{pmatrix} \right\},$$
where, by definition, $G_i \in \mathbb{R}^{n_i \times n_i}$ is skew-symmetric for all $i$. Instead of solving Eq. (4) directly, we propose to solve the following alternative equation:
Setting Axles.
Setting axles is giving them the bend and slope required, in order
to fall in with the principles of the dished wheel. It is chiefly applied to
the axle-arm, and this is the most important part, setting the beds
being mere caprice.
The great object to be obtained is, to give the arm the right pitch
every way, to make the carriage run easy and as light as possible,
even in the absence of a plumb spoke. All carriages do not look best,
when running, with the bottom spoke plumb or vertical. In some of
the heavier coaches or carriages more slope or “pitch” has to be
given to the arm to carry the wheel away from the body, so as to
bring them to some specified track, in order to suit some particular
customer, so that we must be governed by circumstances.
There is a patent “axle-set,” but it is not of much assistance, for
half the smiths know nothing about it, and if they did it would not be
generally used, as the advantages derived from its use are not equal
to the trouble of using it. Besides, the wheels are not always dished
exactly alike, and it would require adjusting to each variety of wheel;
and again, the wheels are not always (though they ought to be)
ready; and when the smith knows the sort of vehicle he is working
upon he can give his axles the required pitch, within half a degree or
so, and the patent axle-set is, unfortunately, not capable of being
adjusted to an idea.
Fig. 21.
Fig. 21 shows a contrivance for setting the axles when cold, and
consists of an iron bar a, 2 feet 1 inch long, and about 2 inches
square at the fulcrum b. A hole is punched through the end to allow
the screw c to go through; this hole to be oval, to allow the screw to
move either way. At the end of this screw is an eye of sufficient size
to go on to the axle-arm. In setting the axle the eye is slipped on to
about the centre of the arm; the clevis, d, is placed on the bar a,
near the end; the fulcrum, b, is placed at the shoulder, either on top
or underneath, according as the axle may be required to set in or
out. When the fulcrum is laid on top, a strip of harness leather should
be placed on the axle bed, and on that, an iron e, of the shape of the
axle bed, and on the end of this the fulcrum is placed; then by
turning the screw the axle may be bent or set to any required pitch.
Fig. 22.
Fig. 23.
The figure shows the two ways of doing this, one with the bar or
lever on top and the other with the lever below.
Figs. 22 and 23 show two improved forms of axles.
Fig. 24.
Fig. 24 shows another variety of the axle-set. It consists of a bar
hooked on to the axletree in two places. The bar is fastened by the
clamp m, and fulcrum block f. The eyebolt, l, is hooked over the end
of the spindle or arm, and the adjustment of the latter is
accomplished by the screw, s, and the nuts j, k.
SPRINGS.
Springs in locomotive vehicles are the elastic substances interposed
between the wheels and the load or passengers in order to intercept
the concussion caused by running over an uneven road, or in
meeting with any slight obstacle.
A great variety of substances have been used for this purpose,
such as leather, strips of hide, catgut, hempen cord, &c.; but these
have now been totally superseded by metal springs, so that what is
technically understood by the word “spring” is a plate or plates of
tempered steel properly shaped to play in any required mode.
It is very probable that the earliest steel springs were composed of
only one plate of metal. This was very defective in its action; and
unless it was restrained somewhat in the manner of the bow by the
string, it was liable to break on being subjected to a sharp
concussion.
There is no hard and fast rule by which the spring-maker can be
guided so as to proportion the strength and elasticity of his springs to
the load they are required to bear; and even were such a rule in
existence it would be practically useless, because the qualities of
spring steel differ so much that what is known in mathematics as a
“constant” could hardly be maintained. The only guide to the maker
in this respect is observation of the working of certain springs under
given loads, such springs being made of a certain quality of steel,
and any peculiar features that appear should be carefully noted
down for future reference and application.
Springs are of two kinds, single and double; i.e. springs tapering in
one direction from end to end, and those which taper in two opposite
directions from a common centre, as in the ordinary elliptic spring.
The process of making a spring is conducted in the following
manner:—
The longest or back plate being cut to the proper length, is
hammered down slightly at the extremities, and then curled round a
mandrel the size of the suspension bolt. The side of the plate which
is to fit against the others is then hollowed out by hammering; this is
called “middling.” The next plate is then cut rather shorter than the
first; the ends are tapered down so as not to disturb the harmony of
the curve. This plate is middled on both sides. A slit is then cut at
each end about ¾ of an inch in length and ⅜ inch wide, in which a
rivet head slides to connect it with the first plate, so that in whatever
direction the force acts these two plates sustain each other. At a little
distance from this rivet a stud is formed upon the under surface by a
punch, which forces out a protuberance which slides in a slit in the
next plate. The next plate goes through precisely the same
operations, except that it is 3 or 4 inches shorter at each end, and so
on with as many plates as the spring is to consist of. The last plate,
like the first, is of course only middled on one side.
The plates of which the spring is to be composed having thus
been prepared, have next to undergo the process of “hardening” and
“tempering.” This is a very important branch of the business, and will
bear a detailed description. There is no kind of tempering which
requires so much care in manipulation as that of springs. It is
necessary that the plates be carefully forged, not over-heated, and
not hammered too cold; one is equally detrimental with the other. To
guard against a plate warping in tempering, it is requisite that both
sides of the forging shall be equally well wrought upon with the
hammer; if not, the plates will warp and twist by reason of the
compression on one side being greater than on the other.[1]
The forge should be perfectly clean, and a good clean charcoal
fire should be used. Or if coal be used it must be burned to coke in
order to get rid of the sulphur, which would destroy the “life” of the
steel. Carefully insert the steel in the fire, and slowly heat it evenly
throughout its entire length; when the colour shows a light red,
plunge it into lukewarm water—cold water chills the outer surface too
rapidly—and let it lie in the water a short time. Animal oil is better
than water; either whale or lard oil is the best, or lard can be used
with advantage. The advantage of using oil is that it does not chill the
steel so suddenly, and there is less liability to crack it. This process
is called “hardening.”
Remove the hardened spring-plate from the water or oil and
prepare to temper it. To do this make a brisk fire with plenty of live
coals; smear the hardened plate with tallow, and hold it over the
coals, but do not urge the draught of the fire with the bellows while
so doing; let the fire heat the steel very gradually and evenly. If the
plate is a long one, move it slowly over the fire so as to receive the
heat equally. In a few moments the tallow will melt, then take fire,
and blaze for some time; while the blaze continues incline the plate,
or carefully incline or elevate either extremity, so that the blaze will
circulate from end to end and completely envelop it. When the flame
has died out, smear again with tallow and blaze it off as before. If the
spring is to undergo hard work the plates may be blazed off a third
time. Then let them cool themselves off upon a corner of the forge;
though they are often cooled by immersion in water, still it is not so
safe as letting them cool by themselves.
After tempering the spring-plates are “set,” which consists in any
warps or bumps received in the foregoing processes being put
straight by blows from a hammer. Care should be taken to have the
plates slightly warm while doing this to avoid fracturing or breaking
the plates.
The plates are now filed on all parts exposed to view, i.e. the
edges and points of the middle plates, the top and edges of the back
plate, and the top and edges of the shortest plate. They are then put
together and a rivet put through the spring at the point of greatest
thickness, and this holds, with the help of the studs before
mentioned, the plates together.
It is evident from the above description of a common mode of
making springs, that the operation is not quite so perfect as it might
be. The plates, instead of being merely tapered at the ends, ought to
be done so from the rivet to the points. And another thing, it would
surely make a better job of it if the plates were to bear their whole
width one on the other; in the middled plates they only get a bearing
on the edges, and the rain and dust will inevitably work into the
hollows in the plates, and it will soon form a magazine of rust, and
we all know what an affinity exists between iron and oxygen and the
result of it; as far as carriage springs are concerned, it very soon
destroys their elasticity and renders them useless and dangerous.
To prevent oxidation some makers paint the inner faces of the
springs, and this is in a measure successful, but the play of the
spring-plates one upon the other is sure to rub off some portions of
the paint, and we are just as badly off as ever. A far better plan
would be to cleanse the surfaces by means of acid, and then tin
them all over, and this would not be very expensive, and certainly
protect the plates of the spring longer than anything else.
The spiral springs, used to give elasticity to the seats, &c., are
hardened by heating them in a close vessel with bone dust or animal
charcoal, and, when thoroughly heated, cooled in a bath of oil. They
are tempered by putting them into an iron pan with tallow or oil, and
shaking them about over a brisk fire. The tallow will soon blaze, and
keeping them on the move will cause them to heat evenly. The steel
springs for fire-arms are tempered in this way, and are literally “fried
in oil.” If a long slender spring is needed with a low temper, it can be
made by simply beating the soft forging on a smooth anvil with a
smooth-faced hammer.
In setting up old springs where they are inclined to settle, first take
the longest plate (having separated all the plates) and bring it into
shape; then heat it for about 2 feet in the centre to a cherry red, and
cool it off in cold water as quick as possible. This will give the steel
such a degree of hardness that it will be liable to break if dropped on
the floor. To draw the temper hold it over the blaze, carrying
backward and forward through the fire until it is so hot that it will
sparkle when the hammer is drawn across it, and then cool off.
Another mode is to harden the steel, as before stated, and draw
the temper with oil or tallow—tallow is the best. Take a candle, carry
the spring as before through the fire, and occasionally draw the
candle over the length hardened, until the tallow will burn off in a
blaze, and then cool. Each plate is served in the same way.
Varieties of Springs.
The names given to springs are numerous, but the simple forms
are few, the greater part of the varieties being combinations of the
simple forms.