Discovery Science 21st International Conference DS 2018 Limassol Cyprus October 29 31 2018 Proceedings Larisa Soldatova
Discovery Science
21st International Conference, DS 2018
Limassol, Cyprus, October 29–31, 2018
Proceedings
Lecture Notes in Artificial Intelligence 11198
Editors
Larisa Soldatova
Goldsmiths, University of London
London, UK

George Papadopoulos
University of Cyprus
Nicosia, Cyprus

Joaquin Vanschoren
Eindhoven University of Technology
Eindhoven, The Netherlands

Michelangelo Ceci
Università degli Studi di Bari Aldo Moro
Bari, Italy
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The 21st International Conference on Discovery Science (DS 2018) was held in
Limassol, Cyprus, during October 29–31, 2018. The conference was co-located with
the International Symposium on Methodologies for Intelligent Systems (ISMIS 2018),
which was already in its 24th year. This volume contains the papers presented at the
21st International Conference on Discovery Science, which received 71 international
submissions. Each submission was reviewed by at least three committee members. The
committee decided to accept 30 papers. This resulted in an acceptance rate of 42%.
Invited talks were shared between the two meetings. The invited talks for DS 2018
were “Automating Predictive Modeling and Knowledge Discovery” by Ioannis
Tsamardinos from the University of Crete, and “Emojis, Sentiment, and Stance in
Social Media” by Petra Kralj Novak from the Jožef Stefan Institute. The invited talks
for ISMIS 2018 were “Artificial Intelligence and the Industrial Knowledge Graph” by
Michael May from Siemens, Germany, “Mining Big and Complex Data” by
Sašo Džeroski from the Jozef Stefan Institute, Slovenia, and “Bridging the Gap
Between Data Diversity and Data Dependencies” by Jean-Marc Petit from INSA Lyon
and Université de Lyon, France. Abstracts of all five invited talks are included in these
proceedings.
We would like to thank all the authors of submitted papers, the Program Committee
members, and the additional reviewers for their efforts in evaluating the submitted
papers, as well as the invited speakers. We are grateful to Nathalie Japkowicz and
Jiming Liu, ISMIS program co-chairs (together with Michelangelo Ceci), for ensuring
the smooth coordination with ISMIS and myriad other organizational aspects. We
would also like to thank the members of the extended DS Steering Committee (con-
sisting of past organizers of the DS conference) for supporting the decision to organize
DS jointly with ISMIS this year, and in particular Sašo Džeroski for supporting us in
bringing this decision to life. We are grateful to the people behind EasyChair for
making the system available free of charge. It was an essential tool in the paper
submission and evaluation process, as well as in the preparation of the Springer pro-
ceedings. We thank Springer for their continuing support for Discovery Science. The
joint event DS/ISMIS 2018 was organized under the auspices of the University of
Cyprus. Financial support was generously provided by the Cyprus Tourism Organi-
zation and Austrian Airlines. Finally, we are indebted to all conference participants,
who contributed to making this momentous event a worthwhile endeavor for all
involved.
Symposium Chair
George Papadopoulos University of Cyprus, Cyprus
Program Committee
Mihalis A. Nicolaou Imperial College London, UK
Ana Aguiar University of Porto, Portugal
Fabrizio Angiulli DEIS, University of Calabria
Annalisa Appice University of Bari Aldo Moro, Italy
Martin Atzmueller Tilburg University, The Netherlands
Gustavo Batista University of São Paulo, Brazil
Albert Bifet LTCI, Telecom ParisTech, France
Hendrik Blockeel Katholieke Universiteit Leuven, Belgium
Paula Brito University of Porto, Portugal
Fabrizio Costa University of Exeter, UK
Bruno Cremilleux Université de Caen, France
Andre de Carvalho University of São Paulo, Brazil
Ivica Dimitrovski Macedonia
Kurt Driessens Maastricht University, The Netherlands
Saso Dzeroski Jozef Stefan Institute, Slovenia
Floriana Esposito Università degli Studi di Bari Aldo Moro, Italy
Peter Flach University of Bristol, UK
Johannes Fürnkranz TU Darmstadt, Germany
Mohamed Gaber Birmingham City University, UK
Joao Gama University of Porto, Portugal
Dragan Gamberger Rudjer Boskovic Institute, Croatia
Crina Grosan Brunel University, UK
Makoto Haraguchi Hokkaido University, Japan
Kouichi Hirata Kyushu Institute of Technology, Japan
Jaakko Hollmén Aalto University, Finland
Geoffrey Holmes University of Waikato, New Zealand
Bo Kang Data Science Lab, Ghent University, Belgium
Kristian Kersting TU Darmstadt, Germany
Takuya Kida Hokkaido University, Japan
Masahiro Kimura Ryukoku University, Japan
Additional Reviewers

Automating Predictive Modeling and Knowledge Discovery

Ioannis Tsamardinos
University of Crete

Mining Big and Complex Data

Saso Dzeroski
Abstract. Increasingly often, data mining has to learn predictive models from
big data, which may have many examples or many input/output dimensions and
may be streaming at very high rates. Contemporary predictive modeling prob-
lems may also be complex in a number of other ways: they may involve
(a) structured data, both as input and output of the prediction process, (b) in-
completely labelled data, and (c) data placed in a spatio-temporal or network
context.
The talk will first give an introduction to the different tasks encountered
when learning from big and complex data. It will then present some methods for
solving such tasks, focusing on structured-output prediction, semi-supervised
learning (from incompletely annotated data), and learning from data streams.
Finally, some illustrative applications of these methods will be described,
ranging from genomics and medicine to image annotation and space
exploration.
Artificial Intelligence and the Industrial
Knowledge Graph
Michael May
Jean-Marc Petit
Classification
Meta-Learning
Reinforcement Learning
Text Mining
Applications
Addressing Local Class Imbalance in Balanced Datasets

A. Mulyar and B. Krawczyk

Abstract. Decision trees are among the most popular machine learning
algorithms, due to their simplicity, versatility, and interpretability. Their
underlying principle revolves around the recursive partitioning of the fea-
ture space into disjoint subsets, each of which should ideally contain only
a single class. This is achieved by selecting features and conditions that
allow for the most effective split of the tree structure. Traditionally, impu-
rity metrics are used to measure the effectiveness of a split, as ideally in a
given subset only instances from a single class should be present. In this
paper, we discuss the underlying shortcoming of such an assumption and
introduce the notion of local class imbalance. We show that traditional
splitting criteria induce the emergence of increasing class imbalances as
the tree structure grows. Therefore, even when dealing with initially bal-
anced datasets, class imbalance will become a problem during decision
tree induction. At the same time, we show that existing skew-insensitive
split criteria return inferior performance when data is roughly balanced.
To address this, we propose a simple yet effective hybrid decision tree
architecture that is capable of dynamically switching between standard
and skew-insensitive splitting criteria during decision tree induction.
Our experimental study shows that local class imbalance is embedded
in most standard classification problems and that the proposed hybrid
approach is capable of alleviating its influence.
1 Introduction

Decision trees are among the most popular machine learning algorithms, valued
for an interpretable structure that exposes the reasoning that leads to a given
classification and allows the gleaning of valuable insights from the analyzed
data [19]. They can handle various types of data, as well as missing values.
They are characterized by low computational complexity, making them a per-
fect choice for constrained or dynamic environments [12,20]. They are simple to
understand and implement in distributed environments [5], which allows them
to succeed in real-life industrial applications. Finally, they have gained extended
attention as an excellent base component in ensemble learning approaches [21].
The general idea behind decision tree induction lies in a sequential parti-
tioning of the feature space, until the best possible separation among classes is
achieved. Ideally, each final disjunct (leaf) should contain instances coming only
from a single class. This is achieved by using impurity metrics that measure how
well each potential split separates instances from distinct classes. While a perfect
separation may potentially be obtained, it may lead to the creation of small dis-
juncts (e.g., containing only a single instance) and thus to overfitting. Therefore,
stopping criteria and tree post-pruning are used to improve the generalization
capabilities of decision trees [10]. However, one must note that these mechanisms
are independent from the used splitting criterion and thus will not alleviate other
negative effects that may be produced when using standard impurity criteria.
In this paper, we focus on the issue that while each split created during deci-
sion tree induction aims at maximizing the purity of instances in a given disjunct
(i.e., ensuring they belong to the same class), the underlying class distributions
will change at each level. Therefore, we will be faced with a problem of local
class imbalance. We state that this is a challenge inherent to the learning pro-
cess. Contrary to the well-known problem of global class imbalance [14], we do
not know the class proportions a priori. They will evolve during decision tree
induction and thus global approaches to alleviate class imbalance, such as sam-
pling or cost-sensitive learning, cannot be used before tree induction. This also
prevents us from using any existing splitting criteria that are skew-insensitive
[2,7,16], as they do not work as well as standard ones when classes are roughly
balanced.
We propose to analyze in-depth the problem of local class imbalance during
decision tree induction and show that it affects most balanced datasets at some
point of classifier training. This affects the tree structure, leading to the creation
of bias in tree nodes towards the class that is better represented. In order to
show that this issue impacts the generalization capabilities of decision trees, we
propose a simple, yet effective hybrid decision tree architecture. Our proposed
classifier is capable of switching between standard and skew-insensitive splitting
metrics, based on the local class imbalance ratio in each given node. The main
contributions of this paper include:
– Analysis of the local class imbalance phenomenon that occurs during decision
tree induction.
– Critical assessment of standard and skew-insensitive splitting criteria.
– A hybrid decision tree architecture that is capable of dynamically switching
between splitting criteria based on local data characteristics.
Addressing Local Class Imbalance in Balanced Datasets 5
In our work we adopt the following notation [3] for facilitation of discussion:
for a given node, n, in a binary decision tree there corresponds a class of splits
{s} defined on the instances in n. To keep decision tree induction tractable, this
class of splits is limited to only splits corresponding to axis-parallel hyperplanes
bi-partitioning the local decision space of n; we will discuss {s} under this restric-
tion. A goodness-of-split function, θ(s, n), is defined and the best split taken as
the s that maximizes θ(s, n). Denote the class proportions, or probabilities (we
will use the terms interchangeably), p = (p0, p1), where p0 represents the
proportion of majority class instances in n and p1 the minority; notice p0 ≥ p1,
with equality when the class distribution of n is balanced. Finally, define the
local imbalance ratio nI = p1 / p0 as the proportion of minority class to majority
class instances in n. Notice it is irrelevant which class a given instance belongs
to, as we are only concerned with the distribution of classes amongst all
instances in n.
Table 1. Impurity criteria.

Criterion            Definition
Gini                 1 − Σ_{p_i ∈ p} p_i²
Entropy              − Σ_{p_i ∈ p} p_i log(p_i)
Hellinger distance   (1/√2) · √( Σ_{p_i, p_j ∈ p} (√p_i − √p_j)² )
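The three criteria above, together with the local imbalance ratio nI, can be sketched in Python. This is a minimal illustration following the forms in Table 1; the function names are ours, not from the paper, and entropy is computed in base 2 by the usual information-gain convention.

```python
import math

def gini(p):
    # Gini impurity: 1 - sum of squared class proportions
    return 1.0 - sum(pi ** 2 for pi in p)

def entropy(p):
    # Entropy: -sum p_i * log2(p_i), skipping empty classes
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def hellinger(p):
    # Normalized Hellinger distance, following the table's form:
    # (1/sqrt(2)) * sqrt( sum over pairs (sqrt(p_i) - sqrt(p_j))^2 )
    s = sum((math.sqrt(pi) - math.sqrt(pj)) ** 2 for pi in p for pj in p)
    return math.sqrt(s / 2.0)

def local_imbalance(p):
    # n_I = p1 / p0: minority proportion over majority proportion
    p0, p1 = max(p), min(p)
    return p1 / p0 if p0 > 0 else 0.0
```

For a balanced node p = (0.5, 0.5) the Gini impurity is 0.5 and the Hellinger distance is 0, while for a pure node p = (1, 0) Gini is 0 and Hellinger is 1, matching the skew-surface discussion that follows.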
6 A. Mulyar and B. Krawczyk
Common impurity functions found in decision tree learning include Gini and
Information Gain (Entropy) [4]. In recent literature, new criteria for impurity
have been formulated that exhibit properties favorable for addressing modern
challenges in machine learning; DKM [13] and the Hellinger distance [7], in
particular, have been shown to be impurity criteria highly robust to class
imbalances in p, the latter more so than the former. Table 1 gives definitions
for Gini, Entropy, and the normalized Hellinger distance, but is by no means an
exhaustive listing of the impurity criteria present in the literature.
Fig. 1. Skew surfaces of Hellinger distance and Gini impurity. Hellinger distance retains
a constant skew surface over all levels of class imbalance. (a) depicts a skew surface
of Hellinger distance; (b) depicts the Gini skew surface with c = 1 (completely class
balanced); (c) depicts the Gini skew surface with c = 1/10 (10:1 class imbalance); (d)
depicts the Gini skew surface with c = 1/20 (20:1 class imbalance). The iso-line
diagonal of the above isosurfaces corresponds to splits that result in complete class
separation, hence having an impurity of zero (i.e., completely pure).
The Gini impurity demonstrates a strong sensitivity to class skews when pro-
jected under Flach's node impurity model. This can be visualized by plotting
the Gini isosurfaces for c = {1, 1/10, 1/20}. Notice the striking difference between
Figs. 1b and c, where the isosurfaces for Gini are generated utilizing class
imbalances of 1 and 1/10 respectively. As class imbalance escalates, the Gini
isosurfaces become flatter, meaning that in a decision node that is highly
homogeneous (appearing closer to the edges of the isosurface) class skew forces
the Gini criterion to give a more biased estimate of node impurity [8].
The Hellinger distance, a skew-insensitive impurity metric capturing the diver-
gence of the class distribution modeled by p [7], produces under Flach's model
an identical isosurface under all values of nI. This is demonstrated analytically
by observing that the corresponding equation for Hellinger distance in Table 1
is not a function of c. Consequently, the performance of the Hellinger distance
as a decision tree impurity metric does not degrade in the presence of
increased class skew.
Fig. 2. Average imbalance ratio at node depth in decision trees induced over benchmark
datasets. (a) depicts decision trees induced using Gini impurity; (b) depicts decision
trees induced using Hellinger distance.
Any splitting criterion applied over a dataset must take into account the ever-
changing class distributions at each child node (sub-space), not only the root
(entire decision space).
State-of-the-art approaches to dealing with imbalanced data would tradi-
tionally not be applied to a dataset without a global class imbalance [11]. A
consequence of the above observation is that, in the context of decision
tree learning, considering only global class imbalance is misguided, as local class
imbalances can arise throughout the learning process.
Decision tree induction utilizing the Hellinger distance as φ(p) has been shown
to select splits independently of class skews in p [7]. We observe empirically in
Fig. 2b that this insensitivity to skew results in a stable local class imbalance
ratio throughout all levels of tree depth. We do note that, in exchange for this
convenience, trees induced using the Hellinger distance require many more levels
to arrive at pure child nodes.
This formulation takes advantage of the Gini impurity's excellent class-sep-
arating ability while diminishing its underperformance in the presence of class
skews by allowing the Hellinger distance to quantify the impurity of a region
when necessary. The parameter α is to be tuned depending on local class imbal-
ance severity during tree induction.
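A minimal sketch of this hybrid rule follows. The exact switching condition is an assumption on our part: since the α thresholds explored later lie in [2, 200], we read α as a majority:minority ratio and switch to the skew-insensitive criterion once the node's local imbalance exceeds it.

```python
import math

def gini(p):
    # Standard Gini impurity over class proportions
    return 1.0 - sum(pi ** 2 for pi in p)

def hellinger(p):
    # Normalized Hellinger distance over pairs of class proportions
    s = sum((math.sqrt(pi) - math.sqrt(pj)) ** 2 for pi in p for pj in p)
    return math.sqrt(s / 2.0)

def dynamic_impurity(p, alpha):
    """Use Gini while the node is roughly balanced; switch to the
    skew-insensitive Hellinger distance once the majority:minority
    ratio in the node reaches the threshold alpha (assumed rule)."""
    p0, p1 = max(p), min(p)
    ratio = p0 / p1 if p1 > 0 else float("inf")
    return hellinger(p) if ratio >= alpha else gini(p)
```

A split s would then be scored with θ(s, n) computed from dynamic_impurity at the parent and child nodes, exactly as with a fixed criterion.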
6 Experimental Study
We designed the following experimental study in order to answer the two
research questions posed in this manuscript:
– Does local class imbalance in balanced datasets impact the predictive power
of induced decision trees?
– Is the proposed hybrid architecture for dynamic splitting criterion selection
capable of improving decision tree performance?
6.1 Datasets
Data benchmarks employed for empirical analysis consist of a subset of the
standard datasets found in the KEEL-dataset repository [1]. The KEEL-dataset
repository contains a well-categorized collection of datasets that span vari-
ous application domains. Table 3 lists the data benchmarks utilized in our
empirical study of hybrid splitting criterion decision trees. The column IR refers
to the global imbalance (root imbalance) ratio of the dataset. This study con-
siders solely initially balanced datasets, to demonstrate the emergence of local
class imbalances during tree induction.
6.2 Set-up
Decision trees with dynamic impurity selection are evaluated against decision
trees utilizing traditional Gini and Entropy splitting criteria. Multiple trials
with increasing α thresholds are run on each respective benchmark to illus-
trate the need to tune α based on the severity of the local class imbalance intro-
duced during tree induction. Experimentation is conducted on CART decision
trees [4] as implemented in scikit-learn [17], with the modification
of our dynamic splitting criterion. All default parameters are left unchanged as
provided in the DecisionTreeClassifier documentation. Evaluation is conducted
using stratified 5×2 cross-validation as already performed in KEEL [1],
recording accuracy, recall (sensitivity) on the target class, F-measure, G-mean,
and area under the ROC curve (AUC). Additionally, we conduct a Shaffer post-
hoc statistical analysis over multiple datasets [9] with significance level α = 0.05.
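The evaluation protocol can be sketched with scikit-learn as follows. This is an illustration only: the synthetic dataset stands in for a KEEL benchmark (the study uses KEEL's own prepared folds), and the tree uses scikit-learn defaults as stated above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a KEEL benchmark dataset
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

scores = []
for rep in range(5):                           # five repetitions ...
    skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=rep)
    for tr, te in skf.split(X, y):             # ... of stratified 2-fold CV
        clf = DecisionTreeClassifier(random_state=0)  # default parameters
        clf.fit(X[tr], y[tr])
        pred = clf.predict(X[te])
        proba = clf.predict_proba(X[te])[:, 1]
        rec_pos = recall_score(y[te], pred, pos_label=1)
        rec_neg = recall_score(y[te], pred, pos_label=0)
        scores.append({
            "accuracy": accuracy_score(y[te], pred),
            "recall": rec_pos,
            "f1": f1_score(y[te], pred),
            "gmean": float(np.sqrt(rec_pos * rec_neg)),  # G-mean of class recalls
            "auc": roc_auc_score(y[te], proba),
        })

mean_acc = float(np.mean([s["accuracy"] for s in scores]))
```

Each of the ten train/test evaluations contributes one row of metrics; averaging them per dataset gives the entries reported in Table 4.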
Table 4 presents the results of evaluations across our twenty data benchmarks.
Each row labeled dynamic(α) corresponds to a decision tree induced
with dynamic impurity selection using α as the imbalance threshold. An exhaus-
tive search [2 ≤ α ≤ 200] is conducted to find the α threshold that maximizes
accuracy on each respective dataset. Due to space constraints, enumerating the
evaluation performance of every tree in the exhaustive search is not feasible, but
we note that α thresholds in the vicinity of the one showcased in Table 4
had similarly superior performance to trees induced using Gini, Hellinger, and
Entropy. Additionally, Table 5 presents the outcomes of a post-hoc statistical
analysis of the results. We give a broad summary of general performance below.
We summarize the performance of dynamic impurity selection over thresholds
[2 ≤ α ≤ 200] on our benchmarks with the assistance of Figs. 3a–e. In Figs. 3a–e,
each point (α, E) on the variable curve corresponds to the mean value of the
referenced evaluation metric, E, when utilizing an imbalance ratio threshold α
over all data benchmarks. The solid and dashed constant lines correspond to the
mean value of E when utilizing Gini impurity and Hellinger distance, respectively.
We conclude primarily the need for tuning of α, as is clearly visible from the
increased average performance at certain threshold values across all evaluation
metrics. We would also like to note an interesting observation concerning the
limiting behavior (α > 100), where the predominant majority of splits are chosen
utilizing the Gini impurity: the performance of dynamic impurity selection
becomes constrained between the performance of Gini impurity and Hellinger
distance, implying that it may be best to restrict a search for an optimal α
threshold to α ≤ 100.
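The restricted exhaustive search can be sketched generically. Here evaluate is a stand-in for the cross-validated accuracy of a tree induced with a given α; the toy lambda below is illustrative only and does not come from the paper.

```python
def search_alpha(evaluate, alphas=range(2, 101)):
    # Exhaustive search restricted to alpha <= 100, per the observation
    # that larger thresholds collapse toward plain Gini behaviour.
    best_alpha = max(alphas, key=evaluate)
    return best_alpha, evaluate(best_alpha)

# Toy stand-in evaluation: pretend mean accuracy peaks at alpha = 17
best_alpha, best_score = search_alpha(lambda a: 1.0 - abs(a - 17) / 100.0)
```

In the actual study, evaluate(α) would run the full 5×2 cross-validation of the hybrid tree on one benchmark and return its mean accuracy.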
Dynamic impurity selection featured across-the-board performance improve-
ment or matching on every benchmark except appendicitis when compared to
the widely used Gini impurity with regard to accuracy or AUC. This is due to
the fact that when using a large α threshold the majority of splits chosen will
be splits determined by the Gini impurity. It is interesting to consider datasets
such as bupa, ionosphere, magic, sonar, spectfheart, twonorm, wdbc, and tic-
TUMORS OF THE LIVER.
Largely secondary, from stomach, intestine, lymph glands, spleen, pancreas; the
hepatic tumor may be disproportionately large. In horse: sarcoma, rapidly growing,
soft, succulent, or slow-growing, fibrous, tough; stroma with round or spindle-shaped
cells and nuclei. Symptoms: emaciation, icterus, enlarged liver, rounded tumors on
rectal examination. Melanoma, in old gray or white horses, with similar formations
elsewhere; not always malignant. Lymphadenoma. Angioma. Carcinoma.
Epithelioma, lesions, nodular masses, white or grayish on section, and having firm
stroma with alveoli filled with varied cells with refrangent, deeply staining, large,
multiple nuclei, cancerous cachexia and variable hepatic disorder. In cattle:
sarcoma, adenoma, angioma, cystoma, carcinoma, epithelioma. In sheep:
adenoma, carcinoma. In dog: lipoma, sarcoma, encephaloid, carcinoma,
epithelioma. Wasting and emaciation, yellowish pallor, temporal atrophy, ascites,
liver enlargement, tender right hypochondrium, dyspepsia, symptoms of primary
deposits elsewhere.
The great quantity of blood which passes through the liver lays it
open, in a very decided way, to the implantation of germs and
biological morbid products. Hence tumors of the liver are largely
secondary, the primary ones being found mostly in the stomach,
intestine, abdominal lymph glands, spleen and pancreas. The
primary neoplasm is often comparatively small, while the hepatic
one supplied with a great excess of blood may be by far the most
striking morbid lesion. The hepatic tumors are mostly of the nature
of angioma, sarcoma, melanoma, adenoma, lipoma, cystoma,
carcinoma, and epithelioma.
NEOPLASMS IN HORSES' LIVER.
Sarcoma. This is usually a secondary formation from the primary
tumors in the spleen and peritoneum, and it occurs as multiple
masses throughout the substance of the gland. The liver is greatly
increased in size, extending far beyond the last rib on the right side,
and weighing when removed as high as 70 ℔s. (Mason), or even 88
℔s. (Cadeac), in extreme cases.
The whole surface of the liver may show bulging, rounded masses,
and the morbid growth may have involved the capsule and caused
adhesion to the back of the diaphragm (Bächstädt). The cut surface
of the neoplasm is smooth, elastic, yellowish and circular or oval in
outline. It may have a variable consistency—friable or tough,
according to the activity of growth and the relative abundance of cells
and stroma. The portal glands are hypertrophied and thrombosis of
the portal vein is not uncommon.
Microscopic examination of the dark red scrapings shows
numerous blood globules, intermixed with the round or spindle
shaped cells and nuclei of the tumor. Sections of the tumor show
these cells surrounded by a comparatively sparse fibrillated stroma.
The round cells may vary from .005 to .05 m.m. They contain one or
more rather large nuclei and a number of refrangent nucleoli. The
nuclei are often set free by the bursting of the cells in the scrapings.
They become much more clearly defined when treated with a weak
solution of acetic acid. Small grayish areas in the mass of the tumor
represent the original structure of the liver, the cells of which have
become swollen and fatty.
A liquid effusion more or less deeply tinged with red is usually
found in the abdominal cavity.
Symptoms are those of a wasting disease, with some icterus,
sometimes digestive disorder, and a marked enlargement of the liver.
The last feature can be easily diagnosed by palpation and percussion.
If an examination through the rectum detects the enlargement and
irregular rounded swellings of the surface of the liver or spleen, or
the existence of rounded tumors in the mesentery or sublumbar
region, this will be corroborative. The precise nature of the
neoplasms can only be ascertained after death.
Melanoma. Melanosis of the liver is comparatively frequent,
especially in gray horses, and above all when they are aging and
passing from dark gray to white. In many cases a more certain
diagnosis can be made than in sarcoma for the reason that primary
melanotic neoplasms are especially likely to occur on or near the
naturally dark portions of the skin, as beneath the tail, around the
anus or vulva, in the perineum, sheath, eyelids, axilla, etc. The extent
of the disease is likely to be striking, the liver, next to the spleen,
being the greatest internal centre for melanosis. The whole organ
may be infiltrated so that in the end its outer surface is completely
hidden by melanotic deposit. The surface deposits tend to project in
more or less rounded, smooth masses of varying size according to the
age of the deposit and the rapidity of its growth. Individual deposits
may vary in size from a pea to a mass of 40 or 50 lbs. They are
moderately firm, and resistant, and maintain a globular or ovoid
outline. The color of the melanotic deposits is a deep black with a
violet or bluish tint. If the pigmentary deposit is in its early stage it
may be of a dark gray. The deposits are firmer than the intervening
liver tissue and rarely soften or suppurate.
Melanosis in the horse is not always the malignant disease that it
shows itself to be in man, and extensive deposits may take place
externally and considerable formations in the liver and other internal
organs without serious impairment of the general health. It is only in
very advanced conditions of melanosis of the liver that appreciable
hepatic disorder is observed. If, however, there is marked
enlargement of the liver, in a white or gray horse, which shows
melanotic tumors on the surface, hepatic melanosis may be inferred.
Lymphadenoma. Adenoid Tumor. Lienaux describes cases of this
kind in which the liver was mottled by white points which presented
the microscopical character of adenoid tissue, cells enclosing a
follicle and a rich investing network of capillaries.
Angioma. These are rare in the horse’s liver, but have been
described by Blanc and Trasbot as multiple, spongy tumors on the
anterior of the middle lobe, and to a less extent in the right and left,
of a blackish brown color, soft and fluctuating. The largest mass was
the size of an apple, and on section they were found to be composed
of vascular or erectile tissue. The tendency is to rupture and
extensive extravasation of blood (30 to 40 lbs.) into the peritoneum.
Carcinoma. Epithelioma. These forms of malignant disease are
not uncommon in the liver as secondary deposits, the primary
lesions being found in the spleen, stomach, intestine, or pancreas, or
more distant still, in the lungs. The grafting or colonization of the
cancer in the liver depends on the transmission of its elements
through the vena portæ in the one case, and through the pulmonary
veins, the left heart and hepatic artery in the other.
Lesions. The liver may be greatly enlarged, weighing twenty-seven
pounds (Benjamin) to forty-three pounds (Chauveau), hard, firm,
and studded with firm nodules of varying sizes. These stand out from
the surface, giving an irregular nodular appearance, and are
scattered through its substance where, on section, they appear as
gray or white fibrous, resistant, spheroidal masses shading off to a
reddish tinge in their outer layers. Microscopically these consist of a
more or less abundant fibrous stroma, enclosing, communicating
alveoli filled with cells of various shapes and sizes, with large nuclei
(often multiple) which stain deeply in pigments. The relative amount
of fibrous stroma and cells determines the consistency of the mass,
and whether it approximates to the hard cancer or the soft. In the
horse’s liver they are usually hard, and, on scraping off the cut
surface, yield only a limited quantity of cancer juice. In the epithelial
form, which embraces nearly all that have originated from primary
malignant growth in the walls of the intestine, the epithelioid cells,
flattened, cubical, polyhedral, etc., are arranged in spheroidal masses
or cylindrical extensions, which infiltrate the tissues more or less.
These seem in some cases to commence in the radical bile ducts
(Martin), and in others in the minor coats of the larger biliary ducts.
As the disease advances a brownish liquid effusion is found in the
abdomen, and nodular masses formed on the surface of the
peritoneum.
Symptoms. As in other tumors of the liver these are obscure. As
the disease advances there may be œdema of the legs and sheath,
indications of ascites, stiff movements, icterus, occasional colics,
tympanies, and diarrhœa. Nervous symptoms may also appear, such
as dullness, stupor, coma, vertigo and spasms. Emaciation goes on
rapidly and death soon supervenes.
TUMORS OF THE LIVER IN CATTLE.