Lecture Notes in Artificial Intelligence 9145
Bo An · Andreas L. Symeonidis · Philip S. Yu (Eds.)
Agents and Data Mining Interaction
10th International Workshop, ADMI 2014
Paris, France, May 5–9, 2014
Revised Selected Papers
Editors
Longbing Cao, University of Technology Sydney, Sydney, NSW, Australia
Vladimir Gorodetsky, Russian Academy of Sciences, St. Petersburg, Russia
Yifeng Zeng, Teesside University, Middlesbrough, UK
Frans Coenen, University of Liverpool, Liverpool, UK
Bo An, Nanyang Technological University, Singapore
Philip S. Yu, University of Illinois at Chicago, Chicago, IL, USA
Andreas L. Symeonidis, Aristotle University of Thessaloniki, Thessaloniki, Greece
Message from the Workshop Chairs

We are pleased to welcome you to the proceedings of the 2014 International Workshop
on Agents and Data Mining Interaction (ADMI 2014), held jointly with AAMAS 2014.
In recent years, agents and data mining interaction (ADMI, or agent mining) has
emerged as a very promising research field. Following the success of previous ADMI
workshops, ADMI 2014 provided a premier forum for sharing research and engineering
results, as well as potential challenges and prospects encountered in the coupling
of agents and data mining.
The ADMI 2014 workshop encouraged and promoted theoretical and applied
research and development aimed at:
– Exploiting agent-enriched data mining and demonstrating how intelligent agent
technology can contribute to critical data mining problems in theory and practice
– Improving data mining-driven agents and showing how data mining can strengthen
agent intelligence in research and practical applications
– Exploring the integration of agents and data mining toward a super-intelligent
system
– Discussing existing results, new problems, challenges, and the impact of
integrating agent and data mining technologies as applied to highly distributed,
heterogeneous (including mobile) systems operating in ubiquitous and P2P
environments
– Identifying challenges and directions for future research and development on the
synergy between agents and data mining
The 10 papers included in this edition come from eight countries. ADMI 2014
submissions spanned America, Europe, and Asia, indicating the worldwide growth of
agent mining research. The workshop also included invited talks by two
distinguished researchers.
As with previous ADMI workshops, the papers accepted for ADMI 2014 were revised
and are included in this LNAI proceedings volume published by Springer. We thank
Springer, and in particular Alfred Hofmann, for the continuing publication support.
ADMI 2014 was sponsored by the Special Interest Group: Agent-Mining Interaction
and Integration (AMII-SIG: www.agentmining.org). We appreciate the guidance of the
Steering Committee.
More information about ADMI 2014 is available from the workshop website: http://
admi14.agentmining.org/.
Organization

General Chair
Philip S. Yu University of Illinois at Chicago, USA
Workshop Co-chairs
Longbing Cao University of Technology Sydney, Australia
Yifeng Zeng Teesside University, UK
Bo An Nanyang Technological University, Singapore
Andreas L. Symeonidis Aristotle University of Thessaloniki, Greece
Vladimir Gorodetsky Russian Academy of Sciences, Russia
Frans Coenen University of Liverpool, UK
Program Committee
Ahmed Hambaba San Jose State University, USA
Ajith Abraham Norwegian University of Science and Technology, Norway
Andrea G.B. Tettamanzi University of Milan, Italy
Andreas L. Symeonidis Aristotle University of Thessaloniki, Greece
Andrzej Skowron Institute of Decision Process Support, Poland
Bo Zhang Tsinghua University, China
Daniel Kudenko University of York, UK
Daniel Zeng University of Arizona, USA
David Taniar Monash University, Australia
Deborah Richards Macquarie University, Australia
Dionysis Kehagias Informatics and Telematics Institute, Greece
Eduardo Alonso University of York, UK
Eugenio Oliveira University of Porto, Portugal
Frans Oliehoek Massachusetts Institute of Technology, USA
Gao Cong Nanyang Technological University, Singapore
Henry Hexmoor University of Arkansas, USA
Ioannis Athanasiadis Democritus University of Thrace, Greece
Jason Jung Yeungnam University, Korea
Steering Committee
Longbing Cao University of Technology Sydney, Australia (Coordinator)
Edmund H. Durfee University of Michigan, USA
Vladimir Gorodetsky St. Petersburg Institute for Informatics and Automation, Russia
Hillol Kargupta University of Maryland Baltimore County, USA
Matthias Klusch DFKI, Germany
Michael Luck King’s College London, UK
Jiming Liu Hong Kong Baptist University, SAR China
Pericles A. Mitkas Aristotle University of Thessaloniki, Greece
Joerg Mueller Technische Universität Clausthal, Germany
Ngoc Thanh Nguyen Wroclaw University of Technology, Poland
Carles Sierra Artificial Intelligence Research Institute of the Spanish Research Council, Spain
Andreas L. Symeonidis Aristotle University of Thessaloniki, Greece
Learning Agents' Relations in Interactive Multiagent Dynamic Influence Diagrams

Y. Pan et al.
1 Introduction
Extending single-agent graphical models of influence diagrams [7], interactive
dynamic influence diagrams (I-DIDs) [4,16] provide a general framework for
solving sequential multiagent decision-making problems under uncertainty.
Differing from other frameworks such as Dec-POMDPs [12] and multiagent influence
diagrams [10], I-DIDs solve the problem from the perspective of an individual
agent and do not make the common-belief assumption when modeling other agents.
Hence, I-DIDs are a general decision model and may be employed to solve both
cooperative and competitive multiagent decision problems.
Algorithms for solving I-DIDs need to solve a large number of candidate
models of other agents, which represent how those agents optimize their decisions
in an uncertain environment. In addition, I-DIDs track the evolution of
all the models as the other agents observe, act, and update their beliefs over time.
Consequently, the computational complexity of solving I-DIDs is mainly due
to the exponential growth in the number of models that are ascribed to the other
agents. The complexity is further increased when a large number of agents are
to be modeled in the I-DIDs. Existing research on I-DIDs mainly focuses on
the case of n = 2 agents, which is not the general setting in practical applications.
In this paper, we extend I-DIDs to solve interactive decision-making problems
with n > 2 agents. Following the conventional representation of I-DIDs,
we would need to introduce an additional model space to represent each of the
other agents. This not only increases the modeling complexity, but also the
solution complexity, owing to the growing number of other agents' models.
Observing possible relations among agents' actions, we reduce the modeling
complexity by simplifying the extended I-DID representation. We construct these
relations using Bayesian networks [8] and learn the model parameters accordingly.
We organize this paper as follows. We briefly review the I-DID model
in Sect. 2. Subsequently, we extend the I-DID to the case of multiple agents
(n > 2) in Sect. 3. A learning algorithm is proposed to simplify the extended
I-DID in Sect. 4. We show preliminary results from applying the proposed
techniques to one problem domain in Sect. 5. Additionally, we review the
relevant work in Sect. 6. Finally, we conclude the paper with remarks on future
work.
2.1 Representation
In Fig. 1, the I-DID represents how a subject agent i optimizes its decisions while
interacting with another agent j whose actions may impact their common states
S. In contrast to the regular chance, decision, and utility nodes of a DID [13], a
new type of node, the model node M_{j,l-1}, models how the other agent j
makes its decisions simultaneously at level l - 1. More explicitly, it contains all
possible models of j whose solutions generate the predicted behavior A_j, which
is represented by a policy link (the dashed line) connecting M_{j,l-1} and A_j. Each
model m_{j,l-1} may be either a level l - 1 I-DID or, at level 0, a DID, in which case
agent j does not further model agent i.
As agent j acts and receives observations over time, its models are updated to
reflect its changed beliefs. The model update link, a dotted arrow from M^t_{j,l-1}
to M^{t+1}_{j,l-1} in Fig. 1, represents the update of j's models over time. We zoom in
on the model update link in Fig. 2. The updated models differ in their initial beliefs,
which are computed for each pair of j's actions and observations. Consequently,
the set of updated models at time t + 1 contains up to |M^t_{j,l-1}||A_j||Ω_j| models.
Here, |M^t_{j,l-1}| is the number of models at time step t, and |A_j| and |Ω_j| are the
sizes of the largest action and observation spaces, respectively.
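To make this growth concrete, the following is a minimal sketch (ours, not the authors' implementation) of the expansion step: every model at time t is paired with every action-observation combination of agent j, and each pair yields one candidate updated model, so the result has up to |M^t_{j,l-1}||A_j||Ω_j| entries. The belief_update callback stands in for the actual DID belief update.

```python
from itertools import product
from typing import Callable, Iterable, List, Tuple

Belief = Tuple[float, ...]  # a discrete belief over the common states S

def expand_models(
    models: Iterable[Belief],
    actions: List[str],
    observations: List[str],
    belief_update: Callable[[Belief, str, str], Belief],
) -> List[Belief]:
    """Expand agent j's model set over one time step.

    Each model spawns one candidate per (action, observation) pair,
    giving up to |M^t| * |A_j| * |Omega_j| models; updated models with
    identical beliefs collapse into a single entry.
    """
    updated = set()
    for belief in models:
        for a, o in product(actions, observations):
            updated.add(belief_update(belief, a, o))
    return list(updated)
```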
An I-DID becomes a regular DID when the model update link is replaced with
dependency links and chance nodes. We may then employ any DID technique to solve
the transformed I-DID.
Fig. 1. A level l I-DID of agent i modeling another agent j (graphic not reproduced; the original shows the chance, decision, and utility nodes together with the model node M_{j,l-1} and the model update link).
Fig. 2. Implementation of the model update link: e.g., two models, m^{t,1}_{j,l-1} and m^{t,2}_{j,l-1}, are updated into four models (shown in bold) at time t + 1.
2.2 Solutions
As indicated in the modeling phase above, solving a level l I-DID requires the
expansion and solution of j's models at level l - 1. We outline the I-DID algorithm
in Fig. 3. Lines 4-5 solve j's models for the policy link, while lines 7-15 implement
the model update link in the I-DID. Finally, lines 17-18 solve the transformed
I-DID through standard DID algorithms.
The difficulty arises when solving an I-DID with a large planning horizon, T, since
a large number of j's models need to be accommodated and resolved in the model
node. A set of successful algorithms has been proposed [16]; they mainly focus
on pruning behaviorally equivalent models, whose behavioral predictions for
agent j are identical [11]. Line 6 represents such a technique, PruneBehavioralEq
(M_{j,l-1}), which returns representative models of j and underlies a series of
efficient I-DID algorithms.
Fig. 3. Algorithm for exactly solving a level l > 1 I-DID or level 0 DID expanded over
T time steps.
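As an illustration of the pruning idea, here is a minimal sketch (our reading of behavioral-equivalence pruning [11,16], not code from the paper): each candidate model is solved to obtain its policy, and only one representative is retained per distinct policy.

```python
from typing import Callable, Dict, Hashable, Iterable, List

def prune_behavioral_eq(
    models: Iterable[Hashable],
    solve: Callable[[Hashable], Hashable],
) -> List[Hashable]:
    """Keep one representative model per behavioral-equivalence class.

    `solve` maps a model to a hashable encoding of its policy tree;
    models with identical policies are behaviorally equivalent, so the
    model node only needs one of them.
    """
    representatives: Dict[Hashable, Hashable] = {}
    for m in models:
        policy = solve(m)
        representatives.setdefault(policy, m)  # first model with this policy wins
    return list(representatives.values())
```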
Fig. 4. A level l I-DID of agent i modeling two other agents j and k. The blue and red parts represent the updates of agent j's and agent k's models, respectively.
Fig. 5. Level l I-DID when relating agent j’s actions (Aj ) to agent k’s actions (Ak ).
4 Relation Learning
We consider relations among agents' actions in a simple Bayesian network that models
the influence between variables. In this context, we aim to learn the CPDs of the
arc connecting the two nodes A_j and A_k. Given the known models of agents j and
k, we may solve the models to obtain their actions at each time step. Resorting
to parameter-learning techniques, we can then construct the CPDs accordingly.
The learning algorithm proceeds in the following two steps.
Step 1: Aggregating Actions. Given the policy trees of agents j and k, we aggregate
their actions at each time step, as indicated by the red block in Fig. 6. Subsequently,
we construct the relations of their actions at each time step.
Step 2: Learning CPDs. Assuming that agent k's actions depend on agent
j's actions, we may construct a Bayesian network (BN) relating their actions, as in
Fig. 7. Relations among agent j's actions over time follow the update of agent j's
models in the I-DID, while agent k's actions are predicted from j's actions.
Consequently, we do not need to represent the update of agent k's models over
time; instead, we learn the CPDs, such as Pr(A^t_k | A^t_j), in the constructed BN.
We follow maximum-likelihood estimation techniques to learn the CPDs
in the BN. The CPDs represent the strength of the relation between agents j's and
k's actions, and they provide the input to the I-DID construction for n > 2. Using
the example of agents j and k, we summarize the two steps in Fig. 8.
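As an illustration of Step 2, the sketch below (ours; the function name is hypothetical) estimates Pr(A^t_k | A^t_j) by maximum likelihood, i.e., by normalized co-occurrence counts over the action pairs aggregated in Step 1.

```python
from collections import Counter, defaultdict
from typing import Dict, Iterable, Tuple

def learn_cpd(action_pairs: Iterable[Tuple[str, str]]) -> Dict[str, Dict[str, float]]:
    """Maximum-likelihood estimate of Pr(A_k | A_j) from (a_j, a_k)
    samples collected at the same time step from the solved policy
    trees of agents j and k."""
    counts: Dict[str, Counter] = defaultdict(Counter)
    for a_j, a_k in action_pairs:
        counts[a_j][a_k] += 1
    cpd: Dict[str, Dict[str, float]] = {}
    for a_j, row in counts.items():
        total = sum(row.values())
        cpd[a_j] = {a_k: n / total for a_k, n in row.items()}
    return cpd

# e.g. learn_cpd([("listen", "listen"), ("listen", "open-left"),
#                 ("listen", "listen")])
# -> {"listen": {"listen": 2/3, "open-left": 1/3}}
```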
The above procedure may replace lines 3-5 in Fig. 3 so that the I-DID can
be extended to represent interactive decision making among more agents. Given
W other agents, the I-DID would contain W model nodes, and each model node
would hold up to |M^0_j|(|A_j||Ω_j|)^t models. By learning relations among
agents' actions, we maintain only one model node in the I-DID while relating the
actions of the other agents, which significantly reduces the solution complexity
(see the sketch below).
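For intuition, here is a back-of-the-envelope count under this bound, with illustrative sizes that are ours rather than the paper's: |A_j| = |Ω_j| = 3 (as in the tiger problem of Sect. 5), |M^0_j| = 25 initial models, t = 3 steps, and W = 5 other agents.

```python
# Bound from the text: each model node holds up to |M0| * (|A| * |Omega|)**t models.
M0, A, Omega, t, W = 25, 3, 3, 3, 5  # illustrative values, not from the paper

per_node = M0 * (A * Omega) ** t  # 25 * 9**3 = 18225 models per node
naive = W * per_node              # W model nodes, one per other agent: 91125
one_node = per_node               # single model node; the other agents' actions
                                  # are related through the learned CPDs instead
print(naive, one_node)            # 91125 18225
```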
5 Experimental Results
We implemented the procedures in Fig. 7 and built the I-DID for three agents in
the tiger problem (|S| = 2, |A_i| = |A_j| = |A_k| = 3, |Ω_i| = 6, |Ω_j| = |Ω_k| = 3) [9].
We compare four methods for solving the I-DIDs.
– NO: we solve the I-DIDs exactly without reducing the model space;
– DMU: we solve the I-DIDs using PruneBehavioralEq, as in the approximate DMU method [3];
– PL: we use the BN learning techniques (in Fig. 7) to relate agents’ actions
thereby reducing the model space;
– DL: we combine DMU and PL to solve the I-DID.
Fig. 7. A Bayesian network (red parts) relates agents j and k’s actions over time.
Fig. 8. Procedures for solving agents j and k’s models and learning the relations of
their actions.
We ran our experiments on a Linux platform with an Intel Core 2 processor (2.4 GHz)
and 4 GB of memory. Table 1 shows the running time for solving the I-DIDs with the
different methods. The time required to expand and solve three-agent I-DIDs is
much larger than that needed for two-agent I-DIDs. Both the DMU and PL methods
significantly reduce the running times, particularly in complex problems such as
the T = 5 cases with large N. Combining the two methods results in more efficient
solutions and scales to more complex problems.
We further examine the running time for learning the BN in the PL method. Table 2
shows that the BN learning procedure takes relatively little time, and the learning
time does not increase significantly with the initial number of models (N)
or the planning horizon (T).
In Fig. 9, we show the average rewards gathered by executing the policy trees
obtained from solving the level 1 I-DIDs for the multiagent tiger problem. We set
the number of initial models, N, to either 25 or 50. Each data point is
the average of 1000 runs of executing the policies. As can be seen in Fig. 9,
the average reward for N = 50 is larger than that for N = 25, and the average reward
for T = 5 is larger than that for T = 3. In other words, the rewards improve
as either the number of initial models or the planning horizon increases.
Table 1. The running times for solving the I-DIDs with more agents (table values not reproduced).
6 Related Works
Fig. 9. The average rewards agent i gathers when it plays with both agents j and k
according to the I-DID solutions.
Work on developing partial policy trees for speeding up I-DIDs and on investigating
online planning techniques can be found in [18-20].
7 Conclusion
Generalizing I-DIDs to the setting of n > 2 agents is a new challenge discussed
by many researchers. Instead of literally developing models for every other agent,
we resort to BN learning to relate the actions of the other agents. The proposed
method simplifies the I-DID representation and significantly reduces the solution
complexity. Experiments in one problem domain show promising results. Further work
will focus on more comprehensive tests in larger-scale problem domains.
References
1. Chandrasekaran, M., Doshi, P., Zeng, Y.: Approximate solutions of interactive dynamic influence diagrams using ε-behavioral equivalence. In: International Symposium on Artificial Intelligence and Mathematics (ISAIM) (2010)
2. Doshi, P., Chandrasekaran, M., Zeng, Y.: Epsilon-subject equivalence of models for interactive dynamic influence diagrams. In: WIC/ACM/IEEE Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT) (2010)
3. Doshi, P., Zeng, Y.: Improved approximation of interactive dynamic influence diagrams using discriminative model updates. In: AAMAS, pp. 907–914 (2009)
4. Doshi, P., Zeng, Y., Chen, Q.: Graphical models for interactive POMDPs: representations and solutions. J. Autonom. Agents Multi-Agent Syst. (JAAMAS) 18(3), 376–416 (2009)
5. Gal, K., Pfeffer, A.: Networks of influence diagrams: a formalism for representing agents' beliefs and decision-making processes. J. Artif. Intell. Res. 33, 109–147 (2008)
6. Gmytrasiewicz, P., Doshi, P.: A framework for sequential planning in multiagent settings. J. Artif. Intell. Res. (JAIR) 24, 49–79 (2005)
7. Howard, R.A., Matheson, J.E.: Influence diagrams. In: Readings on the Principles and Applications of Decision Analysis, pp. 721–762 (1984)
8. Jensen, F.V.: Bayesian Networks and Decision Graphs. Information Science and Statistics. Springer, New York (2001)
9. Kaelbling, L., Littman, M., Cassandra, A.: Planning and acting in partially observable stochastic domains. Artif. Intell. J. 101, 99–134 (1998)
10. Koller, D., Milch, B.: Multi-agent influence diagrams for representing and solving games. In: International Joint Conference on Artificial Intelligence (IJCAI), pp. 1027–1034 (2001)
11. Pynadath, D., Marsella, S.: Minimal mental models. In: Twenty-Second Conference on Artificial Intelligence (AAAI), Vancouver, Canada, pp. 1038–1044 (2007)
12. Seuken, S., Zilberstein, S.: Formal models and algorithms for decentralized decision making under uncertainty. J. Autonom. Agents Multi-Agent Syst. (JAAMAS) 17(2), 190–250 (2008)
13. Tatman, J.A., Shachter, R.D.: Dynamic programming and influence diagrams. IEEE Trans. Syst. Man Cybern. 20(2), 365–379 (1990)
14. Zeng, Y., Chen, Y., Doshi, P.: Approximating behavioral equivalence of models using top-k policy paths (extended abstract). In: International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pp. 1229–1230 (2011)
15. Zeng, Y., Doshi, P.: Speeding up exact solutions of interactive influence diagrams using action equivalence. In: International Joint Conference on Artificial Intelligence (IJCAI) (2009)
16. Zeng, Y., Doshi, P.: Exploiting model equivalences for solving interactive dynamic influence diagrams. J. Artif. Intell. Res. (JAIR) 43, 211–255 (2012)
17. Zeng, Y., Doshi, P., Chen, Q.: Approximate solutions of interactive dynamic influence diagrams using model clustering. In: Twenty-Second Conference on Artificial Intelligence (AAAI), Vancouver, Canada, pp. 782–787 (2007)
18. Zeng, Y., Doshi, P., Pan, Y., Mao, H., Chandrasekaran, M., Luo, J.: Utilizing partial policies for identifying equivalence of behavioral models. In: Twenty-Fifth Conference on Artificial Intelligence (AAAI), pp. 1083–1088 (2011)
19. Zeng, Y., Mao, H., Pan, Y., Luo, J.: Improved use of partial policies for identifying behavioral equivalence. In: Autonomous Agents and Multi-Agent Systems Conference (AAMAS), pp. 1015–1022 (2012)
20. Chen, Y., Doshi, P., Zeng, Y.: Iterative online planning in multiagent settings with limited model spaces and PAC guarantees. In: Autonomous Agents and Multi-Agent Systems Conference (AAMAS) (2015)
Agent-Based Customer Profile Learning in 3G Recommender Systems: Ontology-Driven Multi-source Cross-Domain Case

V. Gorodetsky et al.
1 Introduction
Advanced recommendation systems (RS), qualified as RS of the third generation
(3G), emphasize the employment of a semantically clear model of the customer's
cross-domain profile, learned using all available data sources in which the
customer's "footprints" can provide the learner with useful information about the
customer's interests and preferences. The focus on the semantic aspects of the
customer profile stimulates, in turn, the wide spread of ontology-based
meta-modeling of data sources. It is worth noting that the well-known fact that
customers trust the recommendations of their "friends" much more than anonymous
sources, such as routine advertisements, is an additional argument for the
importance of semantics in customer profiling. Indeed, the core of the customers'
trust in their "friends" is their semantic similarity. As a result, collaborative
filtering (CF), for example, applied to the "friend" community leads to good
results because the members of this community implicitly meet the
semantic-similarity requirement.
The recent understanding of the topmost importance of the semantic basis of the
customer's motivations, which determine his or her preferences in buying particular
product items, has resulted in a noticeable shift of RS-related research toward
causal analysis of customer interests and preferences, in particular toward active
research on ontology-based models of particular customer interests and of the
customer profile as a whole. The core of this shift is a focus on semantically
well-grounded, personalized customer interests. Note that the similarity measures
constituting the basis of all former versions of CF are purely statistical
properties: they are independent of the causes motivating the customer's
selections. In contrast, 3G RS similarity measures should, first of all, explain
semantically why two customers are similar or dissimilar, even though they may
select the same product item. For example, one customer may select a movie because
of its favored director, whereas another may make the same choice because of the
movie's genre and/or leading actors. Former CF ignores such facts. Therefore,
customer interests, presented as a whole customer profile, have to be clearly
semantically interpretable. Note that such profiles should be learned from all
available data sources.
Accordingly, several novel problems appear in the modeling of 3G RS. Formulated
as questions, they are the following:
– What is an appropriate formal model of customer interest that covers a customer's multiple interests in several domains?
– How does this model interact with the multiple domain ontologies peculiar to applications having multiple learning data sources?
– How does the formal model of customer interest interact with reasoning on recommendation-related decisions?
– What can semantic similarity measures for a pair of customers look like, and how do these measures interact with the formal model of the customer profile?
Many other novel questions exist too, but they cannot all be answered in a single
paper. This paper focuses on the conceptual aspects of the formal modeling of the
RS components associated with the aforementioned questions, while emphasizing the
important role of the customer-profile formal model as the core of the whole 3G RS
model. Another topic of the paper is the novel roles of agents supporting the
interactions of RS components in the customer-profile learning and decision-making
use cases. Taking the agent mining approach [4], this work combines agent
technology with ontologies, customer profiling, and recommender systems. To make
the paper's ideas and contribution more understandable, it starts by presenting a
case-study data set comprising several data sets of a cross-domain nature
(Sect. 2). Afterwards, Sect. 3 describes the proposed formal model of the customer
profile, which satisfies the requirement of a semantically clear interpretation;
this section also sketches the interaction of the ontologies of multiple data
sources with the customer-profile formal model. Section 4 outlines the agent-based
architecture of the RS components implementing its two basic use cases: (1)
customer-profile learning and (2) recommendation-related decision making.
Section 5 provides a related-work survey focused on existing ontology-based
customer-profile models. The conclusion describes the current progress in the
development of the presented components and sketches future efforts.
Figure 2 presents additional information about the Amazon data set structure,
depicting an entity-relationship diagram with extended information about the
following concepts: Product group categories, Product similarity, and the customer
reviews that can be provided for a Product. Figure 3 gives a shortened example
of an Amazon data set record specifying an instance of a Product of the
group Book and representing the instance properties related to categorization
and similarity.
Fig. 3. An example of an Amazon data set record specifying an instance of a Product of the group Book, concerning categorization and similarity
Fortunately, this task is not new. Recall that the routine machine-learning subtask
of selecting informative features, if successfully solved, results in a set of
features, and the quality measure of each such feature is the well-known measure
called coverage. The usual requirement on the found feature set is that the
features together cover all the learning data instances. A peculiarity of the
learning task formulated in the previous paragraph, in comparison with the general
case, is that the features in question have to be presented in a special form:
they have to be specified in terms of predicates
P_S(x_{i1} ∈ X_{i1}, ..., x_{ik} ∈ X_{ik}), where x_{i1}, ..., x_{ik} are particular
properties of the concept Product and X_{i1}, ..., X_{ik} are sub-domains of these
properties. Customer interests are then specified in terms of structured, specific
ontology sub-categories. As a result, they are clearly interpretable in terms of
ontology concepts, and their attributes are given as statements about Product
properties. In this case, the remaining problem is whether there exists a
machine-learning technique capable of discovering knowledge that formally
represents customer interests in terms of the predicates
P_S(x_{i1} ∈ X_{i1}, ..., x_{ik} ∈ X_{ik}). The answer to this question is
positive: a variant of such a technique was proposed in [6]. Since its description
is beyond the scope of this paper, it is omitted here, and the interested reader is
referred to [6].
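To make the predicate form concrete, here is a toy sketch (entirely ours; the class and attribute names are hypothetical) of an interest P_S as a conjunction of set-membership constraints over Product properties, together with its coverage over a learning sample:

```python
from typing import Dict, Iterable, Set

Product = Dict[str, str]  # property name -> value, e.g. {"genre": "thriller"}

class Interest:
    """A customer interest P_S(x_i1 in X_i1, ..., x_ik in X_ik)."""

    def __init__(self, constraints: Dict[str, Set[str]]):
        self.constraints = constraints  # property -> allowed sub-domain

    def matches(self, product: Product) -> bool:
        # The predicate holds iff every constrained property falls in its sub-domain.
        return all(product.get(prop) in allowed
                   for prop, allowed in self.constraints.items())

    def coverage(self, sample: Iterable[Product]) -> float:
        # Fraction of learning instances matched by this interest.
        sample = list(sample)
        return sum(self.matches(p) for p in sample) / len(sample) if sample else 0.0

# e.g. an interest in French thrillers:
# Interest({"genre": {"thriller"}, "country": {"France"}})
```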
Hereinafter, it is assumed that the customer profile is represented as a
structured subset of domain-ontology concepts, as shown in the toy example
depicted in Fig. 4, with each node N_S associated with the set of Product
instances that match the properties indicated by the predicate
P_S(x_{i1} ∈ X_{i1}, ..., x_{ik} ∈ X_{ik}) mapped to the corresponding node N_S.
Figure 5 extends Fig. 4 in this way, exemplifying the structure of the
customer-profile model considered in this paper.
Several advantages are peculiar to the proposed formal model of the customer
profile. Some of them are as follows:
1. It is compact and clearly and unambiguously interpretable in the semantic terms
of ontology concepts: each particular interest is a subclass (a category) of the
domain ontology.
2. The formal model naturally implements personalization.
3. Profiles of various customers in the same domain are presented in terms
of the same concept subclasses and are therefore easy to compare: computing the
semantic similarity of a customer pair requires comparing both profile structures
and finding the set of common successors. A semantic similarity measure can be
expressed in terms of the relative number of common interests of a pair of
customers relevant to the Product item under the recommendation procedure; a
minimal sketch follows this list. In fact, this measure should be the subject of
dedicated research and experiments.
4. The customer profile is represented in a nearly actionable form: only a little
effort is required to design a decision-making mechanism on top of it, e.g., a
decision tree, a voting mechanism, or some other scheme.
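Here is the sketch promised in item 3 (our illustration; the paper leaves the concrete measure open for future research): each profile is reduced to the set of ontology sub-categories (interest nodes N_S) it contains, and the semantic similarity of two customers is scored by the relative overlap of these sets.

```python
from typing import Set

def semantic_similarity(profile_a: Set[str], profile_b: Set[str]) -> float:
    """Relative number of common interests (Jaccard index) between two
    customer profiles, each given as a set of ontology sub-category
    identifiers (the nodes N_S of the profile model)."""
    if not profile_a or not profile_b:
        return 0.0
    return len(profile_a & profile_b) / len(profile_a | profile_b)

# e.g. semantic_similarity({"Book/Thriller", "Movie/Director/X"},
#                          {"Book/Thriller", "Movie/Genre/SciFi"})
# -> 1/3: one common interest out of three distinct ones
```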
The trail is made in two halves of box section built of bent and
riveted steel plate. Each half is bolted to a lug on the equalizing gear,
so that it may be rotated horizontally from the junction point of the
trail to the point where the trail hits the wheel.
The trails are locked together in traveling position by means of a
cone-shaped vertical lug on the lunette bracket which fits in a socket
in the trail coupling, and is locked in place by the trail-coupling latch.
Trail-coupling latch has a handle and catch with a vertical spindle
seated in a socket in the lunette bracket. A handle-return spring is
assembled around the spindle and the latch engages a catch on the
trail coupling when trails are fixed in the traveling position. Latch is
opened by moving handle forward.
Lunette consists of a ring for attaching the carriage to the limber
and is bolted through the lunette bracket.
Floats, consisting of flanged steel plates, are attached to the bottoms of
both trails at their rear ends for the purpose of increasing the bearing
area of the trails on soft ground.
Spade bearings are riveted to rear of the trails and form bearings
for spades in firing position. Spades are driven through the bearings,
and their upward movement relative to the trails is prevented by
spade latch.
Spade-latch bracket consists of a bronze plate with a cylindrical
chamber for a spring and plunger and two bearings for latch-handle
pin. Bracket is riveted to the inside top of trail in front of the spade.
Spade-latch plunger, with a spring assembled around it, is seated in
the chamber and the spade-latch handle is pinned in the bearing.
Top of handle extends through the trail and is roughened for use as a
foot pedal. Lower part of handle engages with the plunger. When the
spade is driven the plunger is forced into a notch in the spade by
means of the spring, and the slope on face of plunger allows a
downward movement of the spade and prevents upward movement.
To release spade the foot pedal on latch handle is pressed down,
disengaging plunger from spade, and the spade is removed.
Trail handles are riveted to outside of both trails for lifting trails.
Name plate is riveted to outside lower left trail. It is important that the
number of carriage on this plate be recorded by the officer in charge
of the unit to which it is assigned and that this number be used as a
reference in all correspondence. Wheel guards, rear, are plates
riveted to the outside lower left of both trails for the protection of trail
bodies against contact with limber wheels on short turns. Trail
guards are bent plates riveted to the top of trail in front of trail-
coupling latch to prevent battering of trails by sledges used for
driving the spades.
Sponge-staff fastenings are riveted to tops of both trails. Sponge
staffs are inserted in upper rings of staff fastenings and the lower
ends are clamped in place. The smallest section of sponge staffs fits
in sponge fastenings.
Sledge fastenings are similar to sponge staff fastenings and are
riveted to the outside of each trail. Wheel guards (front) are plates
riveted to the outside of trails near the front to prevent contact of
trails with wheels when the trails are separated.
Spare parts case is a steel box with a hinged steel cover provided
with a bolt snap and padlock riveted to the outside of front left trail.
This case contains spare parts for emergency use.
Trail seats are made of formed bent plates riveted to the tops of
trails near breech of gun. Oiler support with springs is under the
right-hand trail seat. Oiler rests on this support and is held in place
by springs.
Traveling lock bar consists of a forged steel bar pinned to lock bar
bearing on left trail and made to swing across trails in traveling
position and along left trail in firing position. In traveling position the
socket in the middle of the lock bar engages with the traveling lock
stud in the bottom of cradle, and right end of lock bar is held in lock
bar clip on right trail by the latch. To disengage the latch for firing, the
latch handle is lifted and the lock bar swung to fastening in left trail,
where it latches.
To lock the cradle, the gun is brought to 0 azimuth and the
traveling lock pointer on right trunnion cap brought to line marked
“March.” In this position the traveling lock socket fits over stud, and
the lock is latched. The latch consists of a lever pinned at one end to
the lock bar with a plunger pinned in center extending through the
bar with a spring around the plunger body to hold the latch in place.
Trail connections are riveted to front end of trail and bolted to
equalizing pinions.
The cradle comprises the spring cylinder with attached parts.
The spring cylinder is below and shorter than the gun. It is in the
form of two cylinders joined at the center, with axes in the same
horizontal plane. Above the cylinders are the gun ways, parallel to
the cylinders, bronze lined, and opening toward the center line of
cylinders. Traveling lock stud is bolted through a lug at the rear and
below the cylinders. Firing-shaft bracket is riveted to the left side and
range-scale bracket to the right side of the cylinder at its rear end.
Shoulder guards are pinned in sockets in both firing-shaft and range
scale brackets to prevent contact of the gun during recoil, with the
cannoneers. Trunnions are riveted and keyed to the cylinder near
center. Elevating arc is bolted to lugs on the bottom of cylinder at
trunnions. Piston-rod bracket is riveted to projections on the cylinder
above the gun slides near the front end. Cylinder cover is pinned to
cylinder clips, which are riveted to the front of spring cylinder. (Note:
On some carriages the clips are made integral with the cylinder.)
The recoil cylinder being full of oil, this oil is forced by the piston
through holes in recoil valve in front of piston up into annular space
between valve and cylinder and into space behind and vacated by
the piston. The hydraulic resistance caused by forcing the oil through
the holes in valve absorbs most of the recoil energy of the gun, and
the remaining energy is taken up by compression of the counter-
recoil springs and friction.
When the gun reaches the end of recoil all of the recoil energy has
been absorbed and the counter-recoil springs acting against spring-
rod piston force the gun back to battery position. The purpose of the
counter-recoil buffer is to overcome the tendency for gun to return to
battery too rapidly, at the same time allowing sufficient speed of
counter recoil to permit maximum rapidity of fire. Buffer action is
necessary, as the strength of springs required to return the gun to
battery at high elevations is greater than is required at lower
elevations.
The action of counter-recoil buffer is as follows:
As the buffer rod moves backward in piston rod the valve in buffer-
rod head is opened by the pressure of oil in back of valve and the
vacuum in front, which forces oil into buffer chamber in front of the
buffer-rod head. At full recoil the buffer chamber is full of oil and
buffer-rod head is inside the rear end of piston rod. When springs
force gun back in counter recoil, buffer rod moves forward,
compressing oil in chamber and forcing valve closed. This prevents
escape of oil through valve and forces oil to throttle between outside
surface of buffer-rod head and inside surface of piston rod, offering
resistance to spring action and thus easing the gun into battery. The
inside bore of piston rod is tapered at front end to increase
resistance and obtain desired decrease in counter-recoil velocity.
If gun fails to return to battery after a few rounds of rapid firing, it
is probably due to expansion of oil. This may be determined and
corrected by loosening filling plug. If oil spurts out, allow it to run until
gun is back in battery. It may be necessary to relieve oil two or three
times immediately after filling. Gun should never be allowed to
remain out of battery more than 1 inch on counter recoil without
determining and correcting the cause.
If gun remains out of battery and the relief of oil does not cause it
to return, it is due to:
(a) Weak or broken springs; (b) piston-rod gland too tight; (c) dirt
or lack of lubrication in gun slides; (d) distortion of gun on gun ways;
(e) distortion of piston rod due to improper counter recoil action.
The majority of cases are due to (a), (b) and (c).
(a) Can be determined only by removing springs, and should be
undertaken only after all other methods have been tried.
(b) Can be determined by loosening piston-rod gland. If gland is
too tight, gun will return to battery when it is loosened. If gland
cannot be loosened, piston-rod is probably distorted.
(c) Flood slides with oil, and if possible retract gun and examine
gun ways and slide for dirt.
(d) If possible allow gun to cool for 15 or 20 minutes. In case of
(a), (c) or (d) gun can generally be pushed back into battery by hand.
(e) If piston rod or interior mechanism is distorted, mechanism
must be disassembled and defective parts replaced. If distortion has
occurred, it can generally be identified by very rapid counter recoil
for round on which gun does not return to battery. This may be
caused by foreign matter in oil causing buffer valve to stick, or by
lack of sufficient oil. If distortion has occurred, it will be near gland
and can generally be felt by running hand along rod from bracket to
gland.