
Expert Systems With Applications 44 (2016) 367–385


Assuring safety in air traffic control systems with argumentation and model checking

Sergio Alejandro Gómez a,∗, Anca Goron b, Adrian Groza b, Ioan Alfred Letia b

a Artificial Intelligence Research and Development Laboratory (LIDIA), Department of Computer Science and Engineering, Universidad Nacional del Sur, Av. Alem 1253, Bahía Blanca 8000, Argentina
b Intelligent Systems Group, Department of Computer Science, Technical University of Cluj-Napoca, Baritiu 28, 400391 Cluj-Napoca, Romania

∗ Corresponding author. Tel.: +54 291 4595135. E-mail addresses: sag@cs.uns.edu.ar, sergio.alejandro.gomez@gmail.com (S.A. Gómez), anca.goron@gmail.com (A. Goron), Adrian.Groza@cs.utcluj.ro (A. Groza), letia@cs.utcluj.ro (I.A. Letia).

Keywords: Safety systems; Defeasible argumentation; Model checking; Hybrid Logics; Defeasible Logic Programming; Hybrid Logic Model Checker

Abstract. Despite continuous advances in safety technology in fields like air traffic control (ATC) systems or medical devices, the crux of safety assurance still comes down to human decision makers who, having to define priorities while simultaneously weighing different contextual criteria, run a constantly high risk of erroneous decisions. We illustrate in this article a recommender framework for assisting flight controllers that combines argumentation theory and model checking to evaluate the trade-offs and compromises to be made in the presence of incomplete and potentially inconsistent information. We view a hybrid Kripke model as a description of an ATC domain, and we apply a rational decision strategy based on Hybrid Logics and defeasible reasoning to assist the process of model update when the system has to accommodate new properties or norm constraints. When the model fails to verify a property, a defeasible logic program is used to analyze the current state and perform updating operations on the model. The introduced decision-making framework is tested on a recommender system in ATC, and model update is demonstrated with respect to the verification and adaptation of unmanned aerial vehicle routes in the air traffic space. The results show an important potential for the presented framework to be integrated directly into existing decision-making routines for achieving higher accuracy in recommender system methods.

1. Introduction

Assuring safety in complex technical systems is a crucial issue (Graydon & Kelly, 2013) in several critical applications like air traffic control (ATC) or medical devices. ATC is a service provided by ground-based controllers who direct aircraft on the ground and through controlled airspace, and who provide advisory services to aircraft in non-controlled airspace. The primary purpose of ATC is to prevent collisions, organize the flow of traffic, and provide information and support for pilots. In this context, accidents in ATC are mainly produced by human errors: research shows that ATC-related aircraft accidents are rare events and that more than 90% of all the system errors that do occur stem from human mistakes in attention, judgment, and communications by controllers and their supervisors (Danaher, 1980); moreover, as navigable airspaces are becoming more crowded due to an increasing volume of air traffic, the number of accidents produced by human errors is increasing (see for instance Moon, Yoo, and Choi (2011), who investigate the relationship between the increase of accidents in aviation and the increase in traffic volume). Clerical automatic aids for ATC controllers have been online since the early 1970s; they help human controllers to envision current and future traffic situations, and they provide information needed for directing aircraft. Besides, it is generally believed that recommending actions to controllers will lead to an increase in controller productivity (see Spencer, 1989).

It is known that controlling air traffic is one of the world's most stressful jobs because human controllers have to consider the "life and death" element, including the need for razor-sharp reactions to everything from terrorist attacks to extreme weather. Media reports such as Thomson (2010) say that despite the apparent complexity of the task and the high-tech appearance of the equipment, it is still a job that relies completely on the ability and skill of its staff: the basic process of finding the best route and landing safely would be simple if it were not for the thousands of flights dodging each other every day. The role that technology plays in this process is, perhaps surprisingly, fairly minimal. The equipment looks impressive, and no flights would take off without being able to communicate with controllers, but the crux of the system still comes down to human decision-making. Although human controllers may affirm that the process is simple considering its automation, we think that such accidents produced by human errors could be avoided by verifying the safety of the ATC system in a logical manner in order to produce support for human air controllers to make rationally justified decisions. In this way, a computerized tool would be of help to human controllers.

Nowadays, each controller receives information from several (Thomson, 2010, cites as many as 26) different air traffic management systems and must synthesize this into a schedule for each flight. Part of the problem resides in the fact that the information received can be potentially inconsistent (a notable example consists of inconsistent radar readings, an issue that will be left outside the scope of this paper).

Argumentation (Bench-Capon & Dunne, 2007; Chesñevar, Maguitman, & Loui, 2000b; Rahwan & Simari, 2009) provides a sophisticated mechanism for the formalization of common-sense reasoning, allowing one to reach decisions even when contradicting opinions exist. In an argumentation system, instead of proofs, we have arguments. Intuitively, an argument can be thought of as a coherent set of statements that supports a claim. When arguments are in conflict (either pair-wise or group-wise), we need to decide which ones prevail. Thus, the ultimate acceptance of an argument will depend on a dialectical analysis of the arguments in favor of and against the claim. Defeasible Logic Programming (DeLP) (García & Simari, 2004) is a reasoning framework based on logic programming and defeasible argumentation with a working implementation that has been applied successfully in several domains (see Section 2.1 for details).

Safety assurance and compliance with safety standards-based methods of certification such as DO-178B (Rushby, 2009) or HACCP (Letia & Groza, 2013) may prove to be a real challenge when dealing with adaptive systems, in which it is necessary to handle continuous changes. As traditional methods are not very effective in this setting, argument-based safety cases offer a plausible alternative basis for certification in these fast-moving fields. Our hypothesis is that argumentation can be used to assure safety in complex critical systems by providing a way of assisting end-users to reach rationally justified decisions. In this paper we propose a decision support system for an ATC based on DeLP. Landing criteria for assuring safety in complex landing situations are modeled as a DeLP program. Prospective decisions are presented to the system as queries. Given a query representing a decision concerning a safety requirement with respect to such a set of criteria, the DeLP engine will engage in an introspective dialectical process considering the pros and cons of the decision and will answer a recommendation in the case that there is a warrant for the query. Besides, in a real-time environment in which border conditions may vary from second to second, decisions cannot be taken with respect to a static DeLP program. Thus, we present a framework for making recommendations based on sensor input regarding the values of the parameters characterizing the safety problem.

Another tool that is used to provide safety checks in critical systems is model checking (Chatzieleftheriou et al., 2012). Model checking (Lange, 2009) refers to the problem of verifying, given a certain model, whether different properties hold for that model. Properties are represented using formulas usually specified in different types of logic languages such as Linear Temporal Logics (Clarke, Grumberg, & Hamaguchi, 1997), Description Logics (Ben-David, Trefler, & Weddell, 2010) or Hybrid Logics (Franceschet & de Rijke, 2006), while the model is given as a labeled graph known as a Kripke structure. In this regard, model checking tools lack mechanisms to assist the user in the revision of a Kripke model. In this work, we show how to integrate model checking and structured argumentation toward supporting automated change of the model. We will use Hybrid Logics (HLs) (Areces & ten Cate, 2007; Cranefield & Winikoff, 2011) for formalizing safety cases such that they can be further subjected to a complete verification. Although Linear Temporal Logic (LTL) or Computation Tree Logic (CTL) are generally used in such cases, we consider HL a better alternative due to its higher degree of expressiveness and the advantages it brings through the inclusion of nominals and specific operators such as @ and the ↓ binder. All the automatic verifications in our approach are performed by an extended version of the model checker HLMC (available at http://luigidragone.com/software/hybrid-logics-model-checker/), which implements the model checking algorithms for the hybrid logics MCLite and MCFull (Franceschet & de Rijke, 2006) and adds support for temporal operators such as Next, Future, Past, Until and Since (Letia & Goron, 2014). The argumentative reasoning is performed in DeLP (García & Simari, 2004).

The main application of our proposal is in the field of autonomous systems, by empowering agents with self-verification capabilities after they have updated the world model. Assuring safety in the context of unmanned aerial vehicles (UAVs) poses extra constraints in ATC beyond those present in systems with only human components. Autonomous agents, as embodied by entities such as UAVs, unmanned ground vehicles (UGVs), or software agents, are expected to be compliant with a set of requirements and specifications. We contend that argumentation can be used to assure safety in complex critical systems by providing guidance to update the model of the world in the case of having contradictory information.

This paper consolidates and extends the results presented in Goron, Groza, Gómez, and Letia (2014) and Gómez, Goron, and Groza (2014). We think that the people who can benefit from the results presented in this work are designers and implementers of safety systems in industry who have to deal with safety situations where the engineer is confronted with domain specifications that may be incomplete and potentially inconsistent. The rest of the article is structured as follows. In Section 2, we present the fundamentals of argumentation with DeLP as a form of non-monotonic reasoning and show how DeLP can be used to model a recommender system in the presence of an incomplete and potentially inconsistent description of an ATC domain. In Section 3, we present the fundamentals of model checking with HLs. In Section 4, we present a case study where our approach is applied to model repair in a UAV. In Section 5, we combine argumentation and model checking for implementing model update in HLs. In Section 6, we discuss the implementations developed as a proof of concept of the presented approach. In Section 7, we discuss related work. Finally, in Section 8, we conclude the paper and outline possible future research lines.

2. Continuous safety verification in an ATC with argumentation

Next, we will introduce DeLP, a formalism for knowledge representation and non-monotonic reasoning that can handle scenarios in ATC where reaching a decision may not be clear-cut because many opinions and conflicting evidence about the safety of the situation may exist. We will introduce the definition of the reasoning framework and will show a case scenario where DeLP can be used to handle the mentioned problem in a natural way. Later in the paper we will introduce model checking and its use in an ATC scenario and will show how DeLP can be used to help repair a model.

2.1. Fundamentals of Defeasible Logic Programming

In the context of rule-based knowledge representation systems, when a rule supporting a conclusion may be defeated by new information, it is said that such reasoning is defeasible (Nute, 1988; Pollock, 1974, 1987, 1995; Simari & Loui, 1992). When defeasible reasons or rules are chained to reach a conclusion, we have arguments instead of proofs. Arguments may compete, rebutting each other, so that a process of argumentation is a natural result of the search for arguments. Adjudication of competing arguments must be performed, comparing arguments in order to determine what beliefs are ultimately accepted as warranted or justified. Preference among conflicting arguments is defined in terms of a preference criterion which establishes a relation among possible arguments. In this setting, since conclusions are obtained by building defeasible arguments, and since logical argumentation is usually referred to simply as argumentation, this kind of reasoning is called defeasible argumentation (see Gómez, Chesñevar, & Simari, 2013).
The growing success of argumentation-based approaches during the last 20 years has caused a rich crossbreeding with other disciplines, with results in several research areas such as group decision making (Zhang, Sun, & Chen, 2005), knowledge engineering (Carbogim, Robertson, & Lee, 2000), legal reasoning (Prakken & Sartor, 2002; Verheij, 2005), and multiagent systems (Parsons, Sierra, & Jennings, 1998; Rahwan et al., 2003; Sierra & Noriega, 2002), among others. During the last decade several defeasible argumentation frameworks have been developed, most of them on the basis of suitable extensions to logic programming (see Chesñevar, Maguitman, & Loui, 2000a; Kakas & Toni, 1999; Prakken & Vreeswijk, 2002). DeLP (García & Simari, 2004) is one such formalism, combining results from defeasible argumentation theory (Simari & Loui, 1992) and logic programming (Lloyd, 1987). DeLP is a suitable framework for building real-world applications that has proven particularly attractive in contexts such as clustering (Gómez & Chesñevar, 2004), intelligent web search (Chesñevar, Maguitman, & Simari, 2006; Chesñevar & Maguitman, 2004b), knowledge management (Chesñevar, Brena, & Aguirre, 2005a, 2005b), multiagent systems (Brena, Chesñevar, & Aguirre, 2006), natural language processing (Chesñevar & Maguitman, 2004a), intelligent web forms (Gómez, Chesñevar, & Simari, 2008), ontology reasoning (Gómez, Chesñevar, & Simari, 2010; Gómez et al., 2013), relational databases (Deagustini et al., 2013), planning (Chow, Siu, Chan, & Chan, 2013), recommender systems (Bedi & Vashisth, 2014; Briguez et al., 2014), and machine learning (Fomina, Morosin, & Vagin, 2014), among others.

DeLP (García & Simari, 2004) provides a language for knowledge representation and reasoning that uses defeasible argumentation (Chesñevar et al., 2000a; Prakken & Vreeswijk, 2002; Simari & Loui, 1992) to decide between contradictory conclusions through a dialectical analysis. In a defeasible logic program P = (Π, Δ), a set Π of strict rules P ← Q1, ..., Qn and a set Δ of defeasible rules P −≺ Q1, ..., Qn can be distinguished, where P, Q1, ..., Qn are literals that can be positive or negative (i.e. classically negated with ∼). We define Lit(P) as the set of all literals that occur in P. Besides, although in a program P = (Π, Δ) strict and defeasible information are separated, many times P is just considered as Π ∪ Δ (see García & Simari, 2004, Example 2.2 on p. 99). Propositional DeLP is restricted to rules whose literals are just propositional variables (e.g. as in the defeasible rule p −≺ q); in this work we will be concerned with rules having predicates as literals (e.g. as in the defeasible rule p(X) −≺ q(X)). As is usual in a logic programming setting based on the Prolog programming language, variable names begin with upper-case letters and constants with lower-case letters. The complement of a literal L (noted as L̄) is p(X) when L = ∼p(X) and ∼p(X) whenever L = p(X). There is an extension to DeLP that allows representing defeasible facts, known as presumptions (see García & Simari, 2004, Section 6.2 and Martinez, García, & Simari, 2012); they are however outside the scope of this work.

Deriving literals in DeLP results in the construction of arguments. An argument A is a (possibly empty) set of ground (i.e., without variables) defeasible rules that together with the set Π provides a logical proof for a given literal Q, satisfying the additional requirements of non-contradiction and minimality. Formally:

Definition 1. Given a DeLP program P = (Π, Δ), an argument A for a query Q, denoted ⟨A, Q⟩, is a subset of ground instances of the defeasible rules Δ in P, such that: (i) there exists a defeasible derivation for Q from Π ∪ A; (ii) Π ∪ A is non-contradictory (i.e., Π ∪ A does not entail two complementary literals P and ∼P); and (iii) there is no A′ ⊂ A such that there exists a defeasible derivation for Q from Π ∪ A′. An argument ⟨A1, Q1⟩ is a sub-argument of another argument ⟨A2, Q2⟩ if A1 ⊆ A2. We call Args(P) the set of arguments that can be derived from P.

The notion of defeasible derivation corresponds to the usual query-driven SLD derivation used in logic programming, performed by backward chaining on both strict and defeasible rules; in this context a negated literal ∼P is treated just as a new predicate name no_P. Minimality imposes a kind of 'Occam's razor principle' on argument construction. The non-contradiction requirement forbids the use of (ground instances of) defeasible rules in an argument A whenever Π ∪ A entails two complementary literals. The notion of contradiction is captured by the notion of counterargument.

Definition 2. Given a DeLP program P = (Π, Δ) with A1, A2 ⊆ Δ, an argument ⟨A1, Q1⟩ is a counterargument for an argument ⟨A2, Q2⟩ iff there is a subargument ⟨A, Q⟩ of ⟨A2, Q2⟩ such that the set Π ∪ {Q1, Q} is contradictory. An argument ⟨A1, Q1⟩ is a defeater for an argument ⟨A2, Q2⟩ if ⟨A1, Q1⟩ counterargues ⟨A2, Q2⟩ and ⟨A1, Q1⟩ is preferred over ⟨A2, Q2⟩ with respect to a preference criterion ⪰ on conflicting arguments. Such a criterion is defined as a partial order ⪰ ⊆ Args(P) × Args(P). The argument ⟨A1, Q1⟩ will be called a proper defeater for ⟨A2, Q2⟩ iff ⟨A1, Q1⟩ is strictly preferred over ⟨A, Q⟩ with respect to ⪰; if ⟨A1, Q1⟩ and ⟨A, Q⟩ are unrelated to each other, it will be called a blocking defeater for ⟨A2, Q2⟩.

Generalized specificity (Simari & Loui, 1992) is typically used as a syntax-based preference criterion among conflicting arguments, favoring those arguments which are more informed or more direct (Simari & Loui, 1992; Stolzenburg, García, Chesñevar, & Simari, 2003). For example, let us consider the three arguments ⟨{a −≺ b, c}, a⟩, ⟨{∼a −≺ b}, ∼a⟩ and ⟨{(a −≺ b); (b −≺ c)}, a⟩ built on the basis of a program P = (Π, Δ) where

Π = {b, c}
Δ = {b −≺ c;  a −≺ b;  a −≺ b, c;  ∼a −≺ b}.

When using generalized specificity as the comparison criterion between arguments, the argument ⟨{a −≺ b, c}, a⟩ would be preferred over the argument ⟨{∼a −≺ b}, ∼a⟩, as the former is considered more informed (i.e., it relies on more premises). However, the argument ⟨{∼a −≺ b}, ∼a⟩ is preferred over ⟨{(a −≺ b); (b −≺ c)}, a⟩, as the former is regarded as more direct (i.e., it is obtained from a shorter derivation). It must be remarked that besides specificity other alternative preference criteria could also be used; e.g., considering numerical values corresponding to necessity measures attached to argument conclusions (Chesñevar, Simari, Alsinet, & Godo, 2004; Chesñevar, Simari, Godo, & Alsinet, 2005c) or defining argument comparison using rule priorities. This last approach is used in d-Prolog (Nute, 1988), Defeasible Logic (Nute, 1992), extensions of Defeasible Logic (Antoniou, Billington, & Maher, 1998; Antoniou, Maher, & Billington, 2000), and logic programming without negation as failure (Dimopoulos & Kakas, 1995; Kakas, Mancarella, & Dung, 1994).

In order to determine whether a given argument A is ultimately undefeated (or warranted), a dialectical process is recursively carried out, where defeaters for A, defeaters for these defeaters, and so on, are taken into account. An argumentation line starting in an argument ⟨A0, Q0⟩ is a sequence [⟨A0, Q0⟩, ⟨A1, Q1⟩, ⟨A2, Q2⟩, ..., ⟨An, Qn⟩, ...] that can be thought of as an exchange of arguments between two parties, a proponent (evenly-indexed arguments) and an opponent (oddly-indexed arguments). Each ⟨Ai, Qi⟩ is a defeater for the previous argument ⟨Ai−1, Qi−1⟩ in the sequence, i > 0. In order to avoid fallacious reasoning, dialectics imposes additional constraints on such an argument exchange for it to be considered rationally acceptable. Given a DeLP program P and an initial argument ⟨A0, Q0⟩, the set of all acceptable argumentation lines starting in ⟨A0, Q0⟩ accounts for a whole dialectical analysis for ⟨A0, Q0⟩ (i.e., all possible dialogues about ⟨A0, Q0⟩ between proponent and opponent), formalized as a dialectical tree.

Nodes in a dialectical tree T⟨A0,Q0⟩ can be marked as undefeated or defeated nodes (U-nodes and D-nodes, respectively). A dialectical tree is marked as an and-or tree: all leaves in T⟨A0,Q0⟩ are marked U-nodes (as they have no defeaters), and every inner node is marked as a D-node iff it has at least one U-node as a child, and as a U-node otherwise. An argument ⟨A0, Q0⟩ is ultimately accepted as valid (or warranted) with respect to a DeLP program P iff the root of its associated dialectical tree T⟨A0,Q0⟩ is labeled as a U-node.
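The and-or marking of a dialectical tree can be captured in a few lines of code. The sketch below is our own illustration, not part of the DeLP distribution; the Node structure and the function names are assumptions made for the example.

from dataclasses import dataclass, field

@dataclass
class Node:
    """A node of a dialectical tree: an argument plus its defeaters."""
    argument: str                          # e.g. "<A6, clearance(f701, r01, 40)>"
    defeaters: list = field(default_factory=list)

def mark(node: Node) -> str:
    """Mark a node 'U' (undefeated) or 'D' (defeated), per the and-or scheme."""
    if not node.defeaters:                 # leaves have no defeaters: U-nodes
        return "U"
    # an inner node is a D-node iff at least one of its children is a U-node
    return "D" if any(mark(d) == "U" for d in node.defeaters) else "U"

def warranted(root: Node) -> bool:
    """An argument is warranted iff the root of its dialectical tree is a U-node."""
    return mark(root) == "U"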

Given a DeLP program P, solving a query Q with respect to P accounts for determining whether Q is supported by (at least) one warranted argument. Different doxastic attitudes can be distinguished as follows: Yes accounts for believing Q iff there is at least one warranted argument supporting Q on the basis of P; No accounts for believing ∼Q iff there is at least one warranted argument supporting ∼Q on the basis of P; Undecided means that neither Q nor ∼Q is warranted with respect to P; and Unknown means that Q does not belong to the signature of P.

2.2. Safety verification for air traffic control systems in DeLP

We now present a safety verification framework for an ATC system based on DeLP.

Definition 3. A safety verification system V is a pair (P, S) where P is a DeLP program establishing logical criteria for assuring safety and S is a set of literals containing sensor information about the environment. The language LV of the safety verification system is the set of all literals in Lit(P ∪ S). Let V = (P, S) be a safety verification system. A prospective safety decision is a literal in LV.

Notice that each element in S plays an important role in solving queries in V. Each element of S is considered as a fact in the DeLP program P ∪ S, as the next definition shows:

Definition 4. Let V = (P, S) be a safety verification system and D be a prospective safety decision. A safety recommendation for D is one of:

• Perform: if there is a warranting argument for D with respect to the DeLP program P ∪ S.
• Do not perform: if there is a warranting argument for ∼D with respect to the DeLP program P ∪ S.
• Unable to reach recommendation: when there is a warranted argument neither for D nor for ∼D with respect to the DeLP program P ∪ S.
• Not applicable: whenever D does not belong to LV.
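Definition 4 is essentially a mapping from the four DeLP answers onto the four recommendation values. The following is a minimal sketch, assuming a hypothetical wrapper engine.answer(literal) around a DeLP interpreter that returns one of "YES", "NO", "UNDECIDED" or "UNKNOWN":

def recommendation(engine, decision: str) -> str:
    """Map the DeLP answer for a prospective safety decision to Definition 4."""
    answer = engine.answer(decision)       # hypothetical DeLP interpreter call
    return {
        "YES": "Perform",                  # warranting argument for D
        "NO": "Do not perform",            # warranting argument for ~D
        "UNDECIDED": "Unable to reach recommendation",
        "UNKNOWN": "Not applicable",       # D is not in the language of (P, S)
    }[answer]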
Example 1. In Fig. 1 we present an example of a DeLP program P defining a safety verification system for an ATC.

clearance(ID, A, T) −≺ ∼forbidden(ID, A, T), runway(A), flight(ID).
∼clearance(ID, A, T) −≺ forbidden(ID, A, T), runway(A), flight(ID).
clearance(ID, A, T) ← critical_failure(ID, T).
∼forbidden(ID, A, T) −≺ calm(A, T).
forbidden(ID, A, T) −≺ windy(A, T).
∼forbidden(ID, A, T) −≺ fuel_trouble(ID, T).
∼forbidden(ID, A, T) −≺ fuel_trouble(ID, T), windy(A, T).
windy(A, T) −≺ wind_speed(A, S, T), S > 15.
calm(A, T) −≺ wind_speed(A, S, T), S < 15.
fuel_trouble(ID, T) −≺ remaining_fuel(ID, R, T), R < 15.
∼fuel_trouble(ID, T) −≺ remaining_fuel(ID, R, T), R > 15.

Fig. 1. Logical criteria for assuring safety in an air control system.

The main system safety verification is clearance for landing, denoted by the literal clearance(ID, A, T), meaning that flight ID has clearance to land on runway A at time T. The meaning of the safety criteria is as follows: flight ID usually has clearance to land on runway A at time T if ID is not forbidden to land on runway A at time T, and vice versa. A flight has clearance to land at time T if it has had a critical failure at T. It is allowed to land at time T on runway A if the wind is calm on runway A at time T, and it is forbidden if it is windy at that time. A runway is windy at time T if the wind speed is greater than 15 knots; it is calm otherwise. Exceptionally, a flight is allowed to land on a windy runway if it has fuel trouble. A flight is considered to have fuel trouble if its remaining fuel allows it to fly less than 15 min; otherwise the flight has no fuel trouble.

In Fig. 2 we present sensor information S for the rules presented above. There is one runway called r01 and only one flight called f701.

runway(r01). flight(f701).
wind_speed(r01, 3, 10). wind_speed(r01, 30, 20).
wind_speed(r01, 40, 30). remaining_fuel(f701, 60, 30).
wind_speed(r01, 16, 40). remaining_fuel(f701, 12, 40).
critical_failure(f701, 50). wind_speed(r01, 100, 60).
critical_failure(f701, 60).

Fig. 2. Sensor information for the air control system.

We will consider the safety verification system (P, S); for this we will introduce sensor information and show how the proposed approach can recommend a decision to a human traffic controller based on a dialectical analysis performed on P ∪ S.

At time 0, for flight f701 we only have its identification, as there is only the fact flight(f701) to consider. In this case the safety recommendation for the prospective decision clearance(f701, r01, 0) is Unable to reach recommendation, as the DeLP answer for the query clearance(f701, r01, 0) is Undecided because no argument can be built for that literal from the information available.

At time 10, for flight f701 we know that the wind speed at runway r01 is three knots. The prospective decision represented by the literal clearance(f701, r01, 10) has Perform as safety recommendation because there exists a warranting argument A1 for it:

A1 = { clearance(f701, r01, 10) −≺ ∼forbidden(f701, r01, 10), runway(r01), flight(f701);
       ∼forbidden(f701, r01, 10) −≺ calm(r01, 10);
       calm(r01, 10) −≺ wind_speed(r01, 3, 10), 3 < 15 },

meaning that flight f701 has permission to land on runway r01 because it is a calm runway, as the wind speed is only three knots at time 10.

When we consider the situation for flight f701 at time 20, we see that the wind speed at runway r01 is 30 knots (making it very windy). Then the safety recommendation for the prospective decision clearance(f701, r01, 20) is Do not perform, as there is an argument ⟨A2, ∼clearance(f701, r01, 20)⟩ where:

A2 = { ∼clearance(f701, r01, 20) −≺ forbidden(f701, r01, 20), runway(r01), flight(f701);
       forbidden(f701, r01, 20) −≺ windy(r01, 20);
       windy(r01, 20) −≺ wind_speed(r01, 30, 20), 30 > 15 }.

The next case shows how information about the fuel reserve comes into play. We see that at time 30 the wind speed at runway r01 is over 15 knots and the flight f701 has fuel for 60 min, which is plenty of time. In this case, with regard to the prospective decision clearance(f701, r01, 30), the safety recommendation of the system is Do not perform, as an argument similar to the previous situation can be built, ⟨A3, ∼clearance(f701, r01, 30)⟩ where

A3 = { ∼clearance(f701, r01, 30) −≺ forbidden(f701, r01, 30), runway(r01), flight(f701);
       forbidden(f701, r01, 30) −≺ windy(r01, 30);
       windy(r01, 30) −≺ wind_speed(r01, 40, 30), 40 > 15 }.

Notice that in this case the information about the remaining fuel is apparently not taken into account, but it actually is, as we show in the next case.

When we consider the case of flight f701 at time unit 40, there is a wind speed of 16 knots at runway r01, which is not much but is over the safety limit of 15 knots. Besides, f701 has only 12 min of fuel left. We will see that the answer for the query clearance(f701, r01, 40) is obtained through an interesting dialectical process, modeled by the dialectical tree presented in Fig. 3, making the recommendation Perform.

⟨A6, clearance(f701, r01, 40)⟩ (U)
├── ⟨B6, ∼clearance(f701, r01, 40)⟩ (D)
│   └── ⟨C6, ∼forbidden(f701, r01, 40)⟩ (U)
└── ⟨D6, forbidden(f701, r01, 40)⟩ (D)
    └── ⟨C6, ∼forbidden(f701, r01, 40)⟩ (U)

Fig. 3. Dialectical tree for the query clearance(f701, r01, 40).

We can see that there is an argument ⟨A6, clearance(f701, r01, 40)⟩ (expressing that the plane can land because it has fuel trouble, having only 12 min left) which is defeated by another argument ⟨B6, ∼clearance(f701, r01, 40)⟩ (saying that the plane cannot land because it is windy), which in turn is defeated by ⟨C6, ∼forbidden(f701, r01, 40)⟩ (which says that the plane can land whenever it has fuel trouble in windy conditions), thus reinstating A6; on the other hand, A6 is also defeated by ⟨D6, forbidden(f701, r01, 40)⟩ (arguing that the plane should be forbidden to land because of the windy conditions), which in turn is also defeated by C6, where:

A6 = { clearance(f701, r01, 40) −≺ ∼forbidden(f701, r01, 40), runway(r01), flight(f701);
       ∼forbidden(f701, r01, 40) −≺ fuel_trouble(f701, 40);
       fuel_trouble(f701, 40) −≺ remaining_fuel(f701, 12, 40), 12 < 15 }

B6 = { ∼clearance(f701, r01, 40) −≺ forbidden(f701, r01, 40), runway(r01), flight(f701);
       forbidden(f701, r01, 40) −≺ windy(r01, 40);
       windy(r01, 40) −≺ wind_speed(r01, 16, 40), 16 > 15 }

C6 = { ∼forbidden(f701, r01, 40) −≺ fuel_trouble(f701, 40), windy(r01, 40);
       fuel_trouble(f701, 40) −≺ remaining_fuel(f701, 12, 40), 12 < 15;
       windy(r01, 40) −≺ wind_speed(r01, 16, 40), 16 > 15 }

D6 = { forbidden(f701, r01, 40) −≺ windy(r01, 40);
       windy(r01, 40) −≺ wind_speed(r01, 16, 40), 16 > 15 }.
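Using the marking sketch from Section 2.1, the tree of Fig. 3 can be reproduced directly (again an illustration; the labels are just strings):

# The dialectical tree of Fig. 3, built with the Node/mark sketch above.
C6 = Node("<C6, ~forbidden(f701, r01, 40)>")           # leaf, marked U
B6 = Node("<B6, ~clearance(f701, r01, 40)>", [C6])     # its only defeater is U: D
D6 = Node("<D6, forbidden(f701, r01, 40)>", [C6])      # likewise D
A6 = Node("<A6, clearance(f701, r01, 40)>", [B6, D6])  # all defeaters are D: U
assert warranted(A6)   # hence the recommendation Perform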
At time unit 50, we see that when there is a critical failure on the plane, the system will allow it to land independently of runway conditions. So the safety recommendation for the prospective decision clearance(f701, r01, 50) will be Perform, as a warranted argument ⟨∅, clearance(f701, r01, 50)⟩ can be found, which is based on the derivation using the strict rule

clearance(f701, r01, 50) ← critical_failure(f701, 50).

Notice that at time unit 60 a richer situation arises: in this scenario the flight has a critical malfunction at time 60 and sensor information indicates that the wind speed at runway r01 is 100 knots at that time. The safety recommendation for the prospective safety decision clearance(f701, r01, 60) is Perform, based on the evidence that the DeLP answer for clearance(f701, r01, 60) is Yes, as there is an empty warranting argument. However, notice that in this case the DeLP answer for windy(r01, 60) is Yes and the answer for forbidden(f701, r01, 60) is also Yes, but the argument for ∼clearance(f701, r01, 60) based on the defeasible rule

∼clearance(f701, r01, 60) −≺ forbidden(f701, r01, 60)

is not activated because, according to the specificity criterion for comparing arguments, it is weaker than the argument for clearance(f701, r01, 60) based on the strict rule

clearance(f701, r01, 60) ← critical_failure(f701, 60),

so the latter is preferred.

Finally, we will discuss how our approach is also able to detect inconsistencies in the knowledge representation; this can be used to help not only users of the ATC system but implementers as well.

Example 2. Consider the DeLP program P from Example 1 and add further rules P′ along with new sensor information S′, where

P′ = {∼clearance(ID, A, T) ← runway_closed(A, T)} and
S′ = {flight(f701), critical_failure(f701, 70), runway_closed(r01, 70)}.

Notice that P ∪ P′ ∪ S′ would not be a valid DeLP program, as the set of strict rules would be contradictory: both the literals clearance(f701, r01, 70) and ∼clearance(f701, r01, 70) have a strict derivation from it. This situation will be detected and pointed out by the DeLP interpreter.

2.3. Continuous reasoning with DeLP

We now extend the framework presented above for producing recommendations to assist in deciding about critical safety issues of an ATC system. The scenario presented in Example 1 showed that DeLP can be used to characterize a recommender system for assisting human air-traffic controllers. However, in a real-time environment where border conditions do not remain fixed, as implied by the use of a static DeLP program, the framework presented in Section 2.2 is insufficient. Because of this, we now extend it to include sensor information that continuously feeds the DeLP system.
In this regard, we show how the DeLP system can produce a stream of recommendations in real time according to the input fed to the system. Notice that below τ will refer to an interval and T to a specific point in time.

Definition 5. A data stream S(key, τ) = v1, v2, ..., vi, ... is an infinite list of values vi for the given key, updated every τ units of time. When a sensor fails to collect a sample measure at a certain time unit, the value ⊥ is added to the data stream.

Example 3. In the scenario defined by the DeLP program P in Example 1, we can consider that there are three streams of sensor data: S_wind_speed(r01, 10), S_remaining_fuel(f701, 10) and S_critical_failure(f701, 10). The first stream provides the wind speed at runway r01 at each time unit, the second stream updates the remaining fuel for the flight f701, also at each time unit, and the third informs about a critical failure situation in flight f701. In the running example, the stream S_wind_speed(r01, 10) = 3, 30, 40, 16, ⊥, 100 will generate the DeLP facts presented in Fig. 2, namely wind_speed(r01, 3, 10), wind_speed(r01, 30, 20), wind_speed(r01, 40, 30), wind_speed(r01, 16, 40) and wind_speed(r01, 100, 60).

According to Fig. 2, we only have information for the remaining fuel at time units 30 and 40; the stream S_remaining_fuel(f701, 10) = ⊥, ⊥, 60, 12, ⊥, ⊥ is stored as the DeLP facts remaining_fuel(f701, 60, 30) and remaining_fuel(f701, 12, 40). Regarding the critical failure sensor of flight f701, the stream S_critical_failure(f701, 10) = ⊥, ⊥, ⊥, ⊥, yes, yes will produce the facts critical_failure(f701, 50) and critical_failure(f701, 60).

A continuous query is a query executed continuously using data arriving from sensors. Formally:

Definition 6. A continuous query CQ_P = (Q, S, τ) is a query Q posed to a DeLP program P, executed continuously every τ time units against data arriving from a set S of sensor streams.

Example 4. The continuous query

CQ_P = (clearance(f701, r01, T), {S_wind_speed(r01, 10), S_remaining_fuel(f701, 10), S_critical_failure(f701, 10)}, 10)

investigates the validity of the clearance(f701, r01, T) predicate at each time unit T, based on the wind speed at runway r01, the remaining fuel of flight f701 and the critical failure sensor of flight f701. At time instance T = 0, the query is computed based on the data that arrive at each time step from the input streams.

A continuous query generates an output stream of safety decisions, which are computed based on the semantics of DeLP.

Definition 7. A chain of safety recommendations S_D^τ(p1, ..., pn, T) is a stream of prospective safety decisions D about parameters p1, ..., pn, starting at time 0, updated every τ units of time, and annotated with the time at which D is valid.

Example 5. Recalling the answers of the system presented in Example 1, the chain of safety recommendations would be:

S_clearance(f701, r01, T)^10 = ⟨(Unable to reach recommendation, 0), (Perform, 10), (Do not perform, 20), (Do not perform, 30), (Perform, 40), (Perform, 50), (Perform, 60)⟩.
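Putting Definitions 5-7 together, a continuous query is a polling loop that turns stream samples into DeLP facts and emits one recommendation per time unit. The sketch below is our own illustration; it reuses the hypothetical engine wrapper and the recommendation function from Section 2.2, and the fact templates are assumptions:

import itertools

def continuous_query(engine, query_for, streams, tau=10):
    """Poll the sensor streams every tau time units, assert the fresh facts,
    and yield (recommendation, T) pairs, i.e. a chain of safety
    recommendations in the sense of Definition 7. The T = 0 answer
    (no data yet) is emitted before any sample arrives."""
    yield recommendation(engine, query_for(0)), 0
    for step in itertools.count(start=1):
        T = step * tau
        for make_fact, stream in streams:
            v = next(stream, None)
            if v is not None:                      # None encodes the ⊥ sample
                engine.add_fact(make_fact(v, T))   # hypothetical engine API
        yield recommendation(engine, query_for(T)), T

# The streams of Example 3 as generators; None stands for ⊥:
streams = [
    (lambda v, t: f"wind_speed(r01, {v}, {t})",      iter([3, 30, 40, 16, None, 100])),
    (lambda v, t: f"remaining_fuel(f701, {v}, {t})", iter([None, None, 60, 12, None, None])),
    (lambda v, t: f"critical_failure(f701, {t})",    iter([None, None, None, None, "yes", "yes"])),
]
# chain = continuous_query(engine, lambda t: f"clearance(f701, r01, {t})", streams)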

In the next section we will consider a complementary approach where model checking and defeasible argumentation are combined to assist UAVs.

3. Model checking with Hybrid Logics

Model checking (Lange, 2009) refers to the problem of verifying, given a certain model, whether different properties hold for that model. Properties are represented using formulas usually specified in different types of logic languages such as Linear Temporal Logics, Description Logics or Hybrid Logics, while the model is given as a labeled graph known as a Kripke structure.

Temporal logics (TL) (see Franceschet and de Rijke, 2006 for details) extend propositional logic with the temporal operators future F, past P, until U and since S, so that, with the set of propositional symbols P = {p1, p2, ...}, the syntax of temporal logic is

φ := ⊤ | p | ¬φ | φ ∧ φ | Fφ | Pφ | φ U φ | φ S φ

The dual of P is Hα = ¬P¬α and the dual of F is Gα = ¬F¬α. The semantics of TL is presented in Fig. 4, where M = ⟨M, R, V⟩ is a Kripke structure, m ∈ M, and g is an assignment.

Fig. 4. Semantics for temporal logic. (The table of satisfaction clauses is not reproduced here.)

HLs extend temporal logics with special symbols that name individual states and access states by name (Areces & ten Cate, 2007). With nominal symbols N = {i1, i2, ...}, called nominals, and Svar = {x1, x2, ...}, called state variables, the syntax of hybrid logic is

φ := TL | i | x | @t φ | ↓x.φ | ∃x.φ

Let Wvar be the set of state variables. With i ∈ N, x ∈ Wvar, t ∈ N ∪ Wvar, the set of state symbols Wsym = N ∪ Wvar, the set of atomic letters Alet = P ∪ N, and the set of atoms A = P ∪ N ∪ Wvar, the operators @, ↓ and ∃ are called hybrid operators.

Definition 8. A hybrid Kripke structure M consists of an infinite sequence of states m1, m2, ..., a family R of binary accessibility relations on M, and a valuation function V that maps ordinary propositions and nominals to the set of states in which they hold, i.e. M = ⟨m1, m2, ..., R, V⟩.

In the graph-based representation of M, the nodes correspond to the sequence of states brought about by the different modalities, represented as links between states. Each state is labeled by a different nominal, while links are labeled by the relation connecting the two states.

Fig. 5 presents the semantics of the hybrid logic used in our approach: the operator @t shifts the evaluation point to the state named by the nominal or state variable t, the downarrow binder ↓x assigns the state variable x to the current state of evaluation, and the existential binder ∃x refers to some state in the model M using the state variable x.

Fig. 5. Semantics for hybrid logic. (The table of satisfaction clauses is not reproduced here.)

Example 6. A landing is considered safe if all possible major risks are identified and managed. In this case we will consider wind speed, remaining fuel and critical failure as possible risks. To ease the validation process, we will assume that the sensors used to measure the required data are failure-free and that the streams of sensor data received return accurate information. We know that a flight always has clearance to land, no matter the wind speed and remaining fuel criteria, if it has had a critical failure:

↓x.critical_failure([F]x → [clearance])   (1)

The formula states that if there is a state in which a critical failure is encountered, then for all upcoming states, clearance should be selected.

Denying the clearance to land for a flight implies that it is forbidden to land as long as there is no critical failure, and vice versa:

¬(clearance) → forbidden U critical_failure   (2)

We also know that a flight is allowed to land at time T on runway A if the wind is calm on runway A at time T, and it is forbidden if it is windy at that time. A runway is windy at time T if the wind speed is greater than 15 knots. Exceptionally, a flight is not forbidden to land on a windy runway if it has fuel trouble. A flight is considered to have fuel trouble if its remaining fuel allows it to fly less than 15 min; otherwise the flight has no fuel trouble:

↓x.(windSpeed > speedthreshold)(@x ¬(forbidden) → remainingFuel < fuelthreshold)   (3)

We also know that the wind speed, the remaining fuel and the critical failure are measured values, and the measures are considered accurate (measured → ⊤):

[windSpeed > speedthreshold] → measured   (4)

[remainingFuel < fuelthreshold] → measured   (5)

[critical_failure] → measured   (6)

One can observe that, since formulas (4) and (5), respectively (1), will always denote valid values (as the measurements are considered accurate), formula (3), respectively (2) and (6), can also be validated.

4. Model repair for an unmanned aircraft vehicle

Given a Kripke structure M and a formula φ with M ⊭ φ, the task of model repair is to obtain a new model M′ such that M′ ⊨ φ. We consider the following primitive update operations (Zhang & Ding, 2008).

Definition 9. Given M = (S, R, L), the updated model M′ = (S′, R′, L′) is obtained from M by applying the primitive update operations:

1. (PU1) Adding one relation element: S′ = S, L′ = L, and R′ = R ∪ {(si, sj)}, where (si, sj) ∉ R for two states si, sj ∈ S.
2. (PU2) Removing one relation element: S′ = S, L′ = L, and R′ = R \ {(si, sj)}, where (si, sj) ∈ R for two states si, sj ∈ S.
3. (PU3) Changing the labeling function in one state: S′ = S, R′ = R, s∗ ∈ S, L′(s∗) ≠ L(s∗), and L′(s) = L(s) for all states s ∈ S \ {s∗}.
4. (PU4) Adding one state: S′ = S ∪ {s∗}, s∗ ∉ S, R′ = R, and L′(s) = L(s) for all s ∈ S.

Our task is to build an argumentation-based decision procedure that takes as input a model M and a formula φ and outputs a model M′ where φ is satisfied. Fig. 6 depicts the proposed model repair framework.

The task addressed here focuses on a situation in which the specification of the model is not consistent. Consider the following two "rules of the air" (Webster, Fisher, Cameron, & Jump, 2011a):

R3: Collision Avoidance – "When two UAVs are approaching each other and there is a danger of collision, each shall change its course by turning to the right."
R4: Navigation in Aerodrome Airspace – "An unmanned aerial vehicle passing through an aerodrome airspace must make all turns to the left [unless told otherwise]."

In the argument ⟨A2, alter_course(uav1, right)⟩, where

A2 = { alter_course(uav1, right) −≺ aircraft(uav1), aircraft(uav2), collision_hazard(uav1, uav2);
       collision_hazard(uav1, uav2) −≺ approaching_head_on(uav1, uav2), distance(uav1, uav2, X), X < 1000 },

a collision hazard occurs when two aerial vehicles uav1 and uav2 approach head on and the distance between them is smaller than a threshold. The collision hazard further triggers the necessity to alter the course to the right, according to the R3 specification. Let

A3 = { alter_course(uav1, left) −≺ aircraft(uav1), nearby(uav1, aerodrome), change_direction_required(uav1);
       change_direction_required(uav1) −≺ collision_hazard(uav1, uav2) }.

In the argument ⟨A3, alter_course(uav1, left)⟩, if a change of direction is required in the aerodrome airspace, the direction should be altered to the left. A possible conflict occurs between the arguments ⟨A2, alter_course(uav1, right)⟩ and ⟨A4, ∼alter_course(uav1, right)⟩, where

A4 = {∼alter_course(uav1, right) −≺ alter_course(uav1, left)}.

The command ⟨A5, ∼alter_course(uav1, left)⟩ conveyed from the ground control system to change direction to the right acts as a defeater for the argument A3, where (notice that strict rules should not form part of argument structures as they are not points of attack; we abuse notation here just for emphasis):

A5 = {∼alter_course(uav1, left) ← conveyed_command_course(uav1, right)}.

Assume that the current model M satisfies the specification R3. The problem is how to repair M into a model M′ which also satisfies R4. Our solution starts by treating rules R3 and R4 as structured arguments. The conflict between them is solved by a defeasible theory encapsulated as a DeLP program, which outputs a dialectical tree of the argumentation process. The information from this tree is further exploited to decide which primitive update operations PUi are required to repair the model. These four heuristics are illustrated in Section 5, by verifying the specifications in hybrid logics on the updated models.

Fig. 6. Argumentative model repair algorithm. (Flowchart: the initial Kripke model is checked against the property φ; if the check fails and a repair solution is warranted in DeLP, the model is updated, otherwise the initial model is rebuilt.)
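The control loop of Fig. 6 can be summarized in a few lines. This is a sketch under the assumption of a check(model, phi) model checker, a repair_solution DeLP oracle and a rebuild callback, all of them hypothetical names:

def argumentative_repair(model, phi, check, repair_solution, rebuild):
    """The loop of Fig. 6: verify phi; while it fails, ask the DeLP program
    for a warranted repair (a list of primitive updates) and apply it;
    if no repair is warranted, rebuild the initial model."""
    while not check(model, phi):                  # model checking step
        updates = repair_solution(model, phi)     # DeLP dialectical analysis
        if not updates:
            model = rebuild()                     # no warranted repair
            continue
        for op, *args in updates:                 # e.g. (pu2, "od", "tr")
            model = op(model, *args)              # apply PU1..PU4 step by step
    return model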

5. Interleaving argumentation and model checking

We will further detail an automated solution for model self-adaptation based on the interleaving of HL-based model checking and argumentation, and we will illustrate it on a real-world scenario for an unmanned aircraft system (UAS). We argue that model checking provides significant support in the analysis of a system's model, while the expressivity of the HLs used in formalizing the different constraints that the system must comply with enables a more refined verification by allowing one to focus on specific states or transitions of interest in the model. Once the non-compliant states or transitions between states are identified, DeLP provides a filtering among the possible repair solutions, considering only the minimum set of changes to be applied to the model such that compliance is achieved, allowing in the end a viable update decision to be reached.

5.1. Illustrative example

We consider the scenario presented in Webster, Fisher, Cameron, and Jump (2011b), referring to the safe insertion of an unmanned aircraft vehicle (UAV) into civil air traffic. The scope is to demonstrate that safety requirements are met by such a UAV so that it does not interfere with or endanger human-controlled aircraft. A mission is considered safe if all the major risks for the UAV are identified and managed (e.g. collision with other objects or human-piloted aircraft, and loss of critical functions). A UAV comes equipped with an autonomous control system responsible for decision making during the mission, and it keeps a communication link open with a ground-base system (GBS), which provides all the required coordinates for the UAV. The autonomous decision making performed by the UAV control system must consider the general set of safety regulations imposed on a UAS during a mission at all times.

We propose a solution for modeling such UASs in compliance with the set of safety regulations. We will zoom in on the following subset of the "Rules of the Air" dealing with collision avoidance:

R1: Obstacle Detection – "All obstacles must be detected within an acceptable distance to allow performing safely the obstacle avoidance maneuver."
R2: Obstacle Avoidance – "All obstacles must be avoided by performing safely a slight deviation from the preestablished path and an immediate return to the initial trajectory once all collision risks are eliminated."
R3: Collision Avoidance – "When two UAVs are approaching each other and there is a danger of collision, each shall change its course by turning to the right."

The first rule states that all obstacles (e.g. human-controlled aircraft, other UAVs, etc.) that interfere with the initial trajectory of the UAV must be signaled within a certain limit of time such as to allow avoidance maneuvers to be performed by the UAV in safe conditions. The avoidance maneuver, as shown by rules R2 and R3, consists of a slight change of the initial path to the right, such as to allow the safe avoidance of the approaching UAV, followed by a repositioning on the initial trajectory.

5.2. Kripke model for the unmanned aerial vehicle

We will further represent the behavior of the UAV, denoted uav1, captured in an obstacle avoidance scenario. The following states will be considered in constructing the Kripke model: path-following (pf), obstacle detection (od), turn left (tl) and turn right (tr). To each state we attach the boolean state variable uav2, which indicates
the presence or absence of another approaching UAV. In the path-following state pf, the UAV uav1 performs a waypoint-following maneuver, which includes periodical turns to the left or to the right. The appearance of an obstacle (uav2 → ⊤) leads to the transition of the UAV into the obstacle detection state od, and from there into the turn right state tr as part of the obstacle avoidance maneuver, followed by a return to the initial path-following state.

The initial model M0 is presented below:

M0 = ⟨{od, tr, tl, pf}, {r0, r1, r2, r3, r4, r5, r6}, {(pf, {¬uav2}), (od, {uav2}), (tr, {¬uav2}), (tl, {¬uav2})}⟩

Notice that r0, ..., r6 are the transitions between states (or accessibility relations between states) and they belong to R, a family of binary accessibility relations of the hybrid Kripke structure (see Definition 8 for details). The corresponding hybrid Kripke structure is illustrated in Fig. 7.

Fig. 7. Initial Kripke model M0 for the UAV. (Graph over the states pf {¬uav2}, od {uav2}, tl {¬uav2} and tr {¬uav2}, with transitions r0 to r6; r4 connects od to tr.)
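For concreteness, a finite Kripke model like M0 can be written down as a small data structure and queried directly. The sketch below is our own; the field names are assumptions, and the transition endpoints follow the textual description above (pf to od on obstacle detection, od to tr, tr to pf, periodic pf turns), which is an assumption where Fig. 7 is ambiguous:

from dataclasses import dataclass

@dataclass
class KripkeModel:
    states: set       # state names double as nominals, e.g. {"pf", "od"}
    relations: set    # the accessibility relation as (source, target) pairs
    labels: dict      # state -> set of propositions holding at that state

    def successors(self, s):
        """States reachable from s in one step."""
        return {t for (u, t) in self.relations if u == s}

M0 = KripkeModel(
    states={"pf", "od", "tl", "tr"},
    relations={("pf", "od"), ("od", "tr"), ("tr", "pf"),
               ("pf", "tl"), ("tl", "pf"), ("pf", "tr")},
    labels={"pf": {"¬uav2"}, "od": {"uav2"}, "tl": {"¬uav2"}, "tr": {"¬uav2"}},
)

# Reachability core of R1 ([F]od -> tr): tr must be among od's successors.
assert "tr" in M0.successors("od")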
5.3. Verifying compliance to safety regulations

Once the modeling of the UAS is done, we have to verify whether the mentioned safety regulations hold for this model. To be able to perform model checking, we further express the safety regulations using hybrid logics:

R1: [F]od → tr   (7)

The above formula corresponds to the first safety regulation R1 and states that once the od (ObstacleDetect) state is reached, all the successor transitions should contain the transition toward an avoidance maneuver state, in our case the state tr, meaning that the obstacle was detected in time and allowed the avoidance maneuver to be safely performed.

R2: [F](tr ∨ tl) → pf   (8)

The formula corresponding to safety regulation R2 states that all the following transitions from the TurnRight or TurnLeft state should always lead to the PathFollow state. The formula below, corresponding to safety regulation R3, states that if another UAV is detected in the od (ObstacleDetect) state, then all the following transitions should be done towards the state tr (TurnRight):

R3: @od uav2 → ([F]od → tr)   (9)

Model checking is performed to verify whether the formulas hold or not for that model. To perform the model checking automatically, the Kripke structure corresponding to the UAS model is translated into an XML file and given as input to the Hybrid Logic Model Checker (HLMC) (Franceschet & de Rijke, 2006). Each formula in HL is also given as input to the HLMC. Once the tests are performed for each formula against the Kripke model, we can complete the verification of the model. The result confirms that the modeled Kripke structure of the UAS complies with the defined safety regulations.

5.4. Model repair using arguments

We focus on the UAV scenario and illustrate a solution for adapting the existing UAS model to comply with newly introduced rules. In this direction, we consider the initial set of rules extended by a newly adopted norm for UAVs navigating in an aerodrome airspace:

R4: Navigation in Aerodrome Airspace – "An unmanned aerial vehicle passing through an aerodrome airspace must make all turns to the left [unless told otherwise]."

As a first step, we check whether the existing UAS model complies with the new regulation R4 translated into HLs. To be able to differentiate the different contexts for the reasoning process (the presence or the absence of an aerodrome in the vicinity of the UAV), we add to each possible state the boolean variable a, which is initially set to false and which becomes true when the UAV enters an aerodrome airspace:

R4: @i a → ([F]i → (¬tr))   (10)

The formula translates into natural language as: no transition from a state in which the state variable aerodrome a holds should lead to the tr (TurnRight) state, the only state which is forbidden when navigating inside the aerodrome space. Since the only states from which turns are possible are pf and od, we further consider only a reduced subset of states for the verification process. Going back to formula (10), one can observe that it does not hold for the existing model. Considering that the aerodrome state variable a is true for our model, a turn to the left is not possible from the od state, but only a turn to the tr (TurnRight) state. Going further, from the pf state transitions are possible to the tl (TurnLeft) state but, at the same time, also to the tr (TurnRight) state. Therefore, the existing model does not comply with the new regulation. Hence, a separate model Ma should be considered for UAVs passing through an aerodrome space, which does not include transitions to the tr state (see Fig. 8).

By analyzing the model for a UAV navigating through an aerodrome space depicted in Fig. 8, one can observe that there is no possibility to perform a collision avoidance maneuver once an obstacle is detected. Hence, a more refined repairing solution should be applied.
Fig. 8. Kripke model Ma for the UAV in an aerodrome space. (Graph over the states pf {¬uav2, a}, od {uav2, a}, tl {¬uav2, a} and tr {¬uav2, a}, with the transitions into tr removed.)
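The primitive update operations of Definition 9 translate directly into small transformations of the KripkeModel sketch from Section 5.2, and Ma of Fig. 8 can then be derived from M0 by dropping the turns into tr, exactly as the PU list at the end of this section prescribes. This is an illustration, not the authors' implementation:

def pu1(m, si, sj):      # PU1: add one relation element
    return KripkeModel(m.states, m.relations | {(si, sj)}, m.labels)

def pu2(m, si, sj):      # PU2: remove one relation element
    return KripkeModel(m.states, m.relations - {(si, sj)}, m.labels)

def pu3(m, s, label):    # PU3: change the labeling of one state
    return KripkeModel(m.states, m.relations, {**m.labels, s: label})

def pu4(m, s_new):       # PU4: add one (initially unconnected) state
    return KripkeModel(m.states | {s_new}, m.relations, {**m.labels, s_new: set()})

Ma = pu2(pu2(M0, "od", "tr"), "pf", "tr")
assert "tr" not in Ma.successors("od") and "tr" not in Ma.successors("pf")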

We argue that the initial model M0 could be extended to include also the new rules, without having to construct a new model from the beginning. Although various algorithms have already been presented for the repair of a Kripke model (Chatzieleftheriou et al., 2012), we propose a method based on argumentation for extending the model such that it complies with the updated set of regulations. The proposed solution not only avoids the complexity of the proposed repair/updating algorithms (Chatzieleftheriou et al., 2012), but also allows the system to adapt to new information in a faster and more efficient manner.

To decide upon the most suitable solution (with minimum changes) for model repair, we represent several possible extensions to the Kripke model as defeasible arguments and include them in DeLP for choosing the best possible option between different conflicting arguments.

Firstly, consider that uav1 is in the obstacle-detect state od ∈ S, where S is the set of states in M, with the labeling function L(od) = {uav2, ¬a}. This means that uav1 has detected another aerial vehicle uav2. Assume that in this state the DeLP program warrants the opposite conclusion a. This triggers the application of the primitive operation PU3, which updates the labeling function L(od) = {uav2, ¬a} to L′(od) = {uav2, a}.

Secondly, assume that the DeLP program, based on the state variables uav2 and ¬a and the nominal od, infers a relation ri between od and another nominal i ∈ N of the model. The repair consists of applying the operation PU1 on M, where the relation set R is extended with a relation between the two states od and i: R′ = R ∪ {(od, i)}. This reasoning mechanism is possible because hybrid logic provides the possibility to refer directly to the states in the model, by means of nominals.

Thirdly, the program can block the derivation of a relation r between the current state and a next state. For instance, if L(od) = {uav2, a} and the argument A3 succeeds, the transition between state od and state turn_right can be removed. Formally, R′ = R \ {(od, turn_right)}.

Fourthly, if the DeLP program warrants, based on the current state variables and available arguments, a nominal i which does not appear in S, the set of states is extended with this state: S′ = S ∪ {i}.

5.5. Adapting the model to new specifications

In order to apply argumentation for the repair of Kripke models, one must be able to map the information encapsulated in the Kripke structure to a DeLP program P such that arguments can be constructed and, based on them, updates can be performed on the initial model. In this direction, we view the elements of a Kripke structure (states, labels and transitions between states) as part of a defeasible logic program P = (Π, Δ), where the information about states corresponds to the set Π of strict rules, while the labels and the transitions between states can be regarded as belonging to the set Δ of defeasible rules. Once a formal verification performed on the model yields a negative result concerning conformance to a certain set of constraints α, we are able to identify whether the presence or absence of certain states or transitions led to the undesired outcome of the model checking task. Depending on the output of the model checker, the following steps are performed:

1. Each non-compliant transition is considered for a query Qr, and an argument ⟨Ar, Qr⟩ is entailed for clarifying the infringement of a constraint α (promoting or demoting the operation PU2 to be performed on the model).
2. Each indication of the absence of a required transition leads to a new query Qrx and an argument ⟨Arx, Qrx⟩, which promotes the introduction of the missing transition rx (by performing operation PU1).
3. Each non-compliant labeling of a state is considered for a query Ql, and an argument ⟨Al, Ql⟩ is entailed for clarifying the infringement of the constraint α, which results in an update to the labeling values (by performing operation PU3).
4. Each indication of the absence of a required state sx leads to an update of the Π set of the defeasible logic program P by Π′ = Π ∪ {sx} and of the Kripke model (by performing operation PU4), and the argumentation steps 1–3 are repeated for the updated defeasible program and Kripke model.

Once the arguments are constructed, the decision over a certain repair solution is taken using DeLP. A sketch of this repair loop is given below.
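The following Python sketch outlines the control flow of the repair loop. It is illustrative only: the Kripke encoding, the shape of the violation records and the function delp_warrants (a stub standing in for a contextual query to the DeLP interpreter) are our own assumptions and are not part of the DeLP or HLMC tooling used in Section 6.

    # Illustrative sketch of the argument-driven repair loop (steps 1-4 above).
    # `delp_warrants` stands in for a query to the DeLP interpreter; the
    # `violations` records stand in for the output of the model checker.

    class Kripke:
        def __init__(self, states, relations, labels):
            self.S = set(states)      # states, named by nominals
            self.R = set(relations)   # accessibility pairs (from_state, to_state)
            self.L = dict(labels)     # state -> set of state-variable literals

    def delp_warrants(query, state_variables):
        """Stub: returns True when the DeLP program warrants `query`
        given the state variables of the current state."""
        raise NotImplementedError

    def negate(literal):
        return literal[1:] if literal.startswith("~") else "~" + literal

    def repair(model, violations):
        for v in violations:
            if v["kind"] == "bad_transition":        # step 1: PU2 (remove edge)
                if delp_warrants(("remove", v["edge"]), model.L[v["edge"][0]]):
                    model.R.discard(v["edge"])
            elif v["kind"] == "missing_transition":  # step 2: PU1 (add edge)
                if delp_warrants(("add", v["edge"]), model.L[v["edge"][0]]):
                    model.R.add(v["edge"])
            elif v["kind"] == "bad_label":           # step 3: PU3 (relabel state)
                if delp_warrants(("relabel", v["literal"]), model.L[v["state"]]):
                    model.L[v["state"]].discard(negate(v["literal"]))
                    model.L[v["state"]].add(v["literal"])
            elif v["kind"] == "missing_state":       # step 4: PU4 (add state),
                model.S.add(v["state"])              # then steps 1-3 run again
                model.L[v["state"]] = set()
        return model

In a full implementation, repair would alternate with re-running the model checker until all constraints in α are satisfied, as described in step 4.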
Going back to our example, we will consider different model updates based on the arguments A2–A5, promoting the rules R2, R3 and R4, respectively, depicted in Figs. 9–12.

One can observe by analyzing the initial model (see Fig. 7) that there is no possibility for the UAV to go into the tl state once it has reached the od state, but only into the tr state. Since inside the aerodrome space only turns to the left are permitted, the link connecting od and tr (r4) should be taken out of the model (see Fig. 8). We argue that for compliance with the new regulation we only need to change the links in the model to point from the od and pf states only to the tl state when the state variable a is set to true (indicating the presence of an aerodrome in the vicinity of the UAV).
Therefore, we need to perform the following PU operations for updating the model:

1. (PU2) Remove the relation elements (od, tr) and (pf, tr), such that we have: S′ = S, L′ = L, and R′ = R \ {(od, tr), (pf, tr)}, as indicated by argument A3 (in Fig. 10).
2. (PU1) Add the relation element (od, tl), such that we have: S″ = S′, L″ = L′, and R″ = R′ ∪ {(od, tl)}, as indicated by arguments A3 and A4 (see Figs. 10 and 11).

However, the remove operation should be necessary only when that specific relation element causes a conflict between two arguments.

[Fig. 9. Transitions promoted by argument A2 (states od L1{uav2} and tr L3{¬uav2}, linked by r4).]

[Fig. 10. Transitions promoted by argument A3 (states pf L0{¬uav2, a}, od L1{uav2, a} and tl L2{¬uav2, a}; transitions r1 and rx).]

[Fig. 11. Transitions promoted by argument A4 (states od L1{uav2, a} and tl L2{¬uav2, a}; transition rx).]

[Fig. 12. Transitions promoted by argument A5 (states pf L0{¬uav2}, od L1{uav2} and tr L3{¬uav2}; transitions r2 and r4).]

We further consider a new argument ⟨A6, alter_course(uav1, left)⟩, which suggests updating rule R3 by allowing obstacles to be avoided to the left, instead of to the right, when inside the aerodrome space, where:

A6 = { alter_course(uav1, left) ≺ aircraft(uav1), aircraft(uav2), collision_hazard(uav1, uav2), nearby(uav1, aerodrome);
       collision_hazard(uav1, uav2) ≺ approaching_head_on(uav1, uav2), distance(uav1, uav2, X), X < 1000 }.

If we go back to argument A2, promoting the application of the initial rule R2, and to A6, sustaining a slight modification of the rule R2 for navigation in aerodrome space, one can see that they do not attack each other, as they offer solutions for different contexts: the A2 argument refers to collision avoidance outside the aerodrome space, while the A6 argument considers the case of collision avoidance when the UAV is nearby an aerodrome. A similar reasoning applies for the transition (pf, tr), which will be possible only when the state variable a does not hold at pf. Therefore, the PU2 step can be left out and the updating of the model can be done only through a PU1 operation.

The decision to turn left or turn right will be taken in accordance with the value of the state variable a, which indicates the presence or absence of an aerodrome in the vicinity of the UAV.

We illustrate the update operation by adding a link r7 from the od state to the tl state. Additionally, we attach to each state the boolean state variable a, such that the UAV performs only those transitions that comply with the set of regulations in the different contexts, respectively inside or outside the aerodrome space. The updated model Mx is presented in Fig. 13:

Mx = ⟨{od, tr, tl, pf}, {r0, r1, r2, r3, r4, r5, r6, r7}, {(pf, {¬uav2}), (od, {uav2}), (tr, {¬uav2, ¬a}), (tl, {¬uav2})}⟩

One can observe that if the UAV reaches the od state, then it will decide to perform the transition to the next state that has the same value for a as the od state. Therefore, if the UAV uav1 detects another approaching UAV uav2 and it is outside the aerodrome space (¬a), it will look for the next possible state that has the same value for the state variable a. Fig. 13 shows that the state that complies with this condition is tr. Also, if uav1 is in the pf state and the state variable a holds at that state, then the possible transitions will be to tl or od.

If uav1 reaches the od state while in the vicinity of an aerodrome, it will perform a transition to the tl state, where the state variable a also holds. If uav1 reaches pf, then it will perform a transition to either the tl or od states. The other transitions in the model are not dependent on the state variable a, therefore they will remain the same as in the initial model. By adding the condition ¬a for reaching state tr, we can avoid transitions to that state when a holds for the model.

By next checking the R1, R2, R3 and R4 formulas against the updated model, the results returned by HLMC showed that they hold.

Notice that in the example above the information from a state is considered a fact, as it models the world corresponding to that state. The conditionals/implications are considered to be the transitions between rules. In the DeLP programs we consider, the Π set consists of uav1 (presence of the first UAV), uav2 (presence of the second UAV), a (presence of an aerodrome), ¬uav1 (absence of the first UAV), ¬uav2 (absence of the second UAV), and ¬a (absence of an aerodrome).

The illustrated example captures a simple scenario for UAV missions, but we argue that more complex conflicting situations can be handled by the presented argumentation framework. A concrete sketch of the update operations applied above is given below.
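As an illustration, the sketch below applies the discussed updates to the initial model, whose transitions r0–r6 can be read off the XML specification shown later in Fig. 16. The string-based encoding of states and literals is our own convention, not part of the tooling.

    # Applying the discussed update operations to the initial UAV model
    # (transitions r0-r6 as in the XML specification of Fig. 16).

    S = {"pf", "od", "tl", "tr"}
    R = {("pf", "pf"), ("pf", "tl"), ("pf", "tr"),    # r0, r1, r2
         ("pf", "od"), ("od", "tr"),                  # r3, r4
         ("tl", "pf"), ("tr", "pf")}                  # r5, r6
    L = {"pf": {"~uav2"}, "od": {"uav2"}, "tl": {"~uav2"}, "tr": {"~uav2"}}

    # PU1: add the missing transition r7 = (od, tl), promoted by arguments A3/A4.
    R.add(("od", "tl"))

    # Relabeling: attach ~a to state tr, so that transitions into tr are taken
    # only outside the aerodrome space (where a does not hold).
    L["tr"].add("~a")

    # The resulting triple matches the tuple given above for Mx.
    assert ("od", "tl") in R and "~a" in L["tr"]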
[Fig. 13. Extended Kripke model Mx for the UAV compliant with the new regulation (states pf L0{¬uav2}, od L1{uav2}, tl L2{¬uav2} and tr L3{¬uav2, ¬a}, connected by the transitions r0–r7).]

% Safety criteria for landing:
clearance(ID, A, T) <- critical_failure(ID, T).
clearance(ID, A, T) -< ~forbidden(ID, A, T), runway(A), flight(ID).
~clearance(ID, A, T) -< forbidden(ID, A, T), runway(A), flight(ID).
~forbidden(ID, A, T) -< calm(A, T).
forbidden(ID, A, T) -< windy(A, T).
~forbidden(ID, A, T) -< fuel_trouble(ID, T).
~forbidden(ID, A, T) -< fuel_trouble(ID, T), windy(A, T).
windy(A, T) -< wind_speed(A, S, T), S > 15.
calm(A, T) -< wind_speed(A, S, T), S < 15.
fuel_trouble(ID, T) -< remaining_fuel(ID, R, T), R < 15.
~fuel_trouble(ID, T) -< remaining_fuel(ID, R, T), R > 15.

runway(r01) <- true.                  % Runway information: r01 is a runway.
flight(f701) <- true.                 % Flight information: f701 is a flight.
wind_speed(r01, 3, 10) <- true.       % Wind speed at runway r01 is 3 knots at time 10.
wind_speed(r01, 30, 20) <- true.      % Wind speed at runway r01 is 30 knots at time 20.
wind_speed(r01, 40, 30) <- true.      % Wind speed at runway r01 is 40 knots at time 30.
remaining_fuel(f701, 60, 30) <- true. % Remaining fuel is 60 at time 30.
wind_speed(r01, 16, 40) <- true.      % Wind speed at runway r01 is 16 knots at time 40.
remaining_fuel(f701, 12, 40) <- true. % Remaining fuel is 12 at time 40.
critical_failure(f701, 50) <- true.   % There is a critical failure in flight f701 at time 50.
critical_failure(f701, 60) <- true.   % There is a critical failure in flight f701 at time 60.
wind_speed(r01, 100, 60) <- true.     % Wind speed at runway r01 is 100 knots at time 60.

Fig. 14. DeLP code for the DeLP program presented in Example 1.
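To see how the rules of Fig. 14 interact at the recorded time points, the following Python sketch gives an informal, non-argumentative approximation of their intended reading. The explicit overrides below are our own simplification: in DeLP nothing is hard-coded, and the same outcomes emerge from the dialectical comparison of conflicting arguments by specificity (as Fig. 15 shows for time 40).

    # Informal approximation of the landing-clearance criteria of Fig. 14.
    # In DeLP these overrides are not written explicitly: they result from
    # more specific defeasible rules defeating less specific ones.

    def clearance(critical_failure, wind_speed, remaining_fuel=None):
        if critical_failure:            # strict rule: clearance on critical failure
            return True
        windy = wind_speed > 15
        fuel_trouble = remaining_fuel is not None and remaining_fuel < 15
        if fuel_trouble:                # fuel trouble defeats the "windy" argument
            return True
        return not windy                # otherwise forbidden exactly when windy

    # Facts of Fig. 14 (remaining fuel is only recorded at times 30 and 40):
    print(clearance(False, 3))          # T=10: calm                    -> True
    print(clearance(False, 30))         # T=20: windy                   -> False
    print(clearance(False, 40, 60))     # T=30: windy, fuel is fine     -> False
    print(clearance(False, 16, 12))     # T=40: windy but fuel trouble  -> True
    print(clearance(True, 100))         # T=60: critical failure        -> True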

6. Instrumentation support

In this section we discuss the instrumentation support for implementing the proof of concept for the approach proposed in this work. We worked with the DeLP interpreter (García & Simari, 2014) and the HLMC (Franceschet & de Rijke, 2006).

Support for reasoning in DeLP. The DeLP interpreter is a working implementation of DeLP that runs as an online server and as a standalone program. The examples presented in Section 2 were tested in the standalone version of DeLP. The code presented in the formal language in Figs. 1 and 2 is shown in Fig. 14.

It can be seen that the conceptual notation presented in Example 1 can be translated straightforwardly to satisfy the syntactic conventions of the DeLP interpreter. Facts "p(a)" are codified as "p(a) <- true", strict rules "p(X) ← q(X)" as "p(X) <- q(X)", and defeasible rules "p(X) ≺ q(X)" as "p(X) -< q(X)". The answer produced by the DeLP interpreter for the query clearance(f701, r01, 40) that we presented conceptually in Fig. 3 can be seen in Fig. 15.

Support for performing model checking. Model checking is performed to verify whether the formulas hold or not for that model. To automate the model checking task, each HL formula is given as input for the HLMC (Franceschet & de Rijke, 2006) and it is verified against the Kripke structure corresponding to the UAS model translated into an XML file. Each state of the Kripke model is specified by means of an identifier and by the introduction of one or more nominals denoting it, while the binary accessibility relations are identified by a name and a list of the pairs of worlds for which the relation holds. For the valuation function of the Kripke model, propositional symbols are included together with the set of worlds at which they hold; a partial specification of the Kripke model is shown in Fig. 16.

Each HL formula is provided according to the HLMC-specific syntax (Franceschet & de Rijke, 2006), for example: A(F(od))tr, which corresponds to the first safety regulation R1 (see Formula 7).

Black nodes are undefeated arguments


Red nodes are defeated arguments

<{clearance(f701, r01, 40)-< ~forbidden(f701, r01, 40), runway(r01), flight(f701),


flight(f701),
runway(r01),
~forbidden(f701, r01, 40)-<fuel_trouble(f701, 40),
fuel_trouble(f701, 40)-<remaining_fuel(f701, 12, 40), 12<15,
12<15,
remaining_fuel(f701, 12, 40)},
clearance(f701, r01, 40)>

<{~clearance(f701, r01, 40)-<forbidden(f701, r01, 40), runway(r01), flight(f701), <{forbidden(f701, r01, 40)-<windy(r01, 40),
flight(f701), windy(r01, 40)-<wind_speed(r01, 16, 40), 16>15,
runway(r01), 16>15,
forbidden(f701, r01, 40)-<windy(r01, 40), wind_speed(r01, 16, 40)},
windy(r01, 40)-<wind_speed(r01, 16, 40), 16>15, forbidden(f701, r01, 40)>
16>15,
wind_speed(r01, 16, 40)},
~clearance(f701, r01, 40)>
<{~forbidden(f701, r01, 40)-<fuel_trouble(f701, 40), windy(r01, 40),
windy(r01, 40)-<wind_speed(r01, 16, 40), 16>15,
16>15,
<{~forbidden(f701, r01, 40)-<fuel_trouble(f701, 40), windy(r01, 40), wind_speed(r01, 16, 40),
windy(r01, 40)-<wind_speed(r01, 16, 40), 16>15, fuel_trouble(f701, 40)-<remaining_fuel(f701, 12, 40), 12<15,
16>15, 12<15,
wind_speed(r01, 16, 40), remaining_fuel(f701, 12, 40)},
fuel_trouble(f701, 40)-<remaining_fuel(f701, 12, 40), 12<15, ~forbidden(f701, r01, 40)>
12<15,
remaining_fuel(f701, 12, 40)},
~forbidden(f701, r01, 40)>

Fig. 15. Answer for the query clearance(f701, r01, 40) in the DeLP interpreter.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hl-kripke-struct SYSTEM "hl-ks.dtd">
<hl-kripke-struct name="M">
  <world label="PF"/>
  <world label="OD"/>
  <world label="TL"/>
  <world label="TR"/>
  <modality label="r0">
    <acc-pair to-world-label="PF" from-world-label="PF"/>
  </modality>
  <modality label="r1">
    <acc-pair to-world-label="TL" from-world-label="PF"/>
  </modality>
  <modality label="r2">
    <acc-pair to-world-label="TR" from-world-label="PF"/>
  </modality>
  <modality label="r3">
    <acc-pair to-world-label="OD" from-world-label="PF"/>
  </modality>
  <modality label="r4">
    <acc-pair to-world-label="TR" from-world-label="OD"/>
  </modality>
  <modality label="r5">
    <acc-pair to-world-label="PF" from-world-label="TL"/>
  </modality>
  <modality label="r6">
    <acc-pair to-world-label="PF" from-world-label="TR"/>
  </modality>
  <prop-sym label="uav1" truth-assignments="PF TL TR"/>
  <nominal label="uav2" truth-assignment="OD"/>
  <nominal label="pf" truth-assignment="PF"/>
  <nominal label="od" truth-assignment="OD"/>
  <nominal label="tr" truth-assignment="TR"/>
  <nominal label="tl" truth-assignment="TL"/>
</hl-kripke-struct>

Fig. 16. Kripke model as XML input file for HLMC.
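Writing this input file by hand is tedious, so one may generate it from an in-memory model. The sketch below emits the element structure of Fig. 16; the function name and the argument layout are our own assumptions, not an HLMC API.

    # Illustrative generator for the HLMC input format shown in Fig. 16.

    def to_hlmc_xml(name, worlds, modalities, prop_syms, nominals):
        # modalities: relation name -> list of (from_world, to_world) pairs
        # prop_syms:  propositional symbol -> worlds where it holds
        # nominals:   nominal -> the unique world it names
        out = ['<?xml version="1.0" encoding="UTF-8"?>',
               '<!DOCTYPE hl-kripke-struct SYSTEM "hl-ks.dtd">',
               f'<hl-kripke-struct name="{name}">']
        out += [f'  <world label="{w}"/>' for w in worlds]
        for rel, pairs in modalities.items():
            out.append(f'  <modality label="{rel}">')
            out += [f'    <acc-pair to-world-label="{dst}" from-world-label="{src}"/>'
                    for (src, dst) in pairs]
            out.append('  </modality>')
        for sym, held in prop_syms.items():
            out.append(f'  <prop-sym label="{sym}" truth-assignments="{" ".join(held)}"/>')
        for nom, world in nominals.items():
            out.append(f'  <nominal label="{nom}" truth-assignment="{world}"/>')
        out.append('</hl-kripke-struct>')
        return "\n".join(out)

    print(to_hlmc_xml("M", ["PF", "OD", "TL", "TR"],
                      {"r0": [("PF", "PF")], "r1": [("PF", "TL")],
                       "r2": [("PF", "TR")], "r3": [("PF", "OD")],
                       "r4": [("OD", "TR")], "r5": [("TL", "PF")],
                       "r6": [("TR", "PF")]},
                      {"uav1": ["PF", "TL", "TR"]},
                      {"uav2": "OD", "pf": "PF", "od": "OD",
                       "tr": "TR", "tl": "TL"}))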
Reading the model from ’test/modeluav.xml’...


Kripke structure: M
Worlds: PF (0), OD (1), TL (2), TR (3)
...
Model checking...
0 world variable assignment object allocated
MCFull procedure started
Evaluating MCFull at level 1
Evaluating MCFull at level 2
Evaluating MCFull at level 3
Evaluating MCFull at level 4
Checking assignment of nominal 2 (1)
Evaluating MC_P
Checking accessibility from 1 to 0 on modality 0 (FALSE)
Checking accessibility from 1 to 0 on modality 1 (FALSE)
...
Checking accessibility from 1 to 3 on modality 4 (TRUE)
Checking accessibility from 1 to 3 on modality 5 (FALSE)
...
Checking accessibility from 3 to 0 on modality 6 (TRUE)
Checking accessibility from 3 to 1 on modality 0 (FALSE)
Checking accessibility from 3 to 1 on modality 1 (FALSE)
...
Checking accessibility from 0 to 0 on modality 0 (TRUE)
Checking accessibility from 0 to 0 on modality 1 (FALSE)
...
Checking accessibility from 1 to 3 on modality 4 (TRUE)
Checking accessibility from 1 to 3 on modality 5 (FALSE)
Checking accessibility from 1 to 3 on modality 6 (FALSE)
Evaluating MCFull at level 3
Evaluating MCFull at level 4
Checking assignment of nominal 3 (3)
MCFull procedure completed
The formula holds in following world(s):
TR
Fig. 17. Excerpt of the verification process in HLMC.

We verify whether the avoidance maneuver state tr (TurnRight) appears in all the successor paths of state od (ObstacleDetect) by checking the formula against the Kripke model. Fig. 17 shows a fragment of the interaction with the model checker.

Once the tests are performed for each formula against the Kripke model, we can complete the verification of the model. The results confirm that the modeled Kripke structure of the UAS complies with the defined safety regulations.

7. Related work

The conceptual contribution of the paper is to enact argumentation to guide the process of model repair. Our technical solution interleaves rule-based arguments and model checking based on hybrid logic. We exemplify the method on a scenario for increasing the safety of UAVs.

Hence, we approach related work from three perspectives: Firstly, we relate our research with current methods proposed for model repair. Secondly, we investigate the usage of argumentation in the context of formal verification. Finally, we integrate our solution within current methods for verifying aerospace-related systems.

Model repair. With the first solution for model repair published in 1999 (Buccafurri, Eiter, Gottlob, & Leone, 1999), there are still not many approaches tackling this problem. As there are no scalable approaches to repair a model (Tacchella & Katoen, 2015), there are various attempts: model repair relies on abductive reasoning and theory revision (Buccafurri et al., 1999), Kripke structures and CTL (Chatzieleftheriou et al., 2012; Zhang & Ding, 2008), Petri Nets and CTL (Martínez-Araiza & López-Mellado, 2015), UML and arguments (Jureta & Faulkner, 2007), labeled transition systems (De Menezes, do Lago Pereira, & de Barros, 2010), probabilistic systems (Bartocci, Grosu, Katsaros, Ramakrishnan, & Smolka, 2011), epistemic logic (Bonakdarpour, Hajisheykhi, & Kulkarni, 2014), or hard constraints (Carrillo & Rosenblueth, 2014).

The solution for model repair in Chatzieleftheriou et al. (2012) uses abstraction refinement to tackle state explosion. The proposed Abstract Model Repair method relies on: (i) Kripke structures for the concrete model, (ii) Kripke Modal Transition Systems (KMTS) for the abstract model, and (iii) a three-valued semantics for interpreting CTL over KMTS. The KMTS is determined by a partial function mapping a set of concrete states si to an abstract state sa. Repair occurs when a CTL property is not validated on the KMTS. In Chatzieleftheriou et al. (2012) the expressiveness is increased by introducing two types of transitions in a KMTS: instead of an accessibility relation, Chatzieleftheriou et al. (2012) introduce must-transitions for necessary behavior and may-transitions for possible behavior. A must-transition between states si and sj occurs if there are transitions from all concrete states of si to some concrete state of sj. A may-transition between states si and sj occurs if there is a transition from some concrete state of si to some concrete state of sj. An abstract state sa is labeled with a literal only if the literal is true in all concrete states of sa. In our case, the expressiveness is increased by the nominals and the possibility to access states by name.
A difference from the repair operators point of view is that the solution in Chatzieleftheriou et al. (2012) requires seven basic operators (viz. addMust, addMay, removeMust, removeMay, changeLabel, addState, and removeState), while our method uses four operators (viz. add relation, remove relation, change state label, and add state). The main difference between the two approaches is that, while the repair algorithm of Chatzieleftheriou et al. (2012) searches through the entire space of possible abstract repaired models guided by the minimality-of-changes heuristic (formalized also by Zhang & Ding, 2008), our search strategy is knowledge-driven, where the knowledge is represented by arguments. The winning argument for a given situation is the one that guides the repair. Hence, the search space is limited in Chatzieleftheriou et al. (2012) by abstraction and the minimal change heuristic, while in our case structured arguments are responsible for guiding the repairing process.

Carrillo and Rosenblueth (2014) introduce hard constraints on what can be updated within a model. These protections specify which transitions or labels cannot be removed without compromising the previously verified CTL formulas. The method brings advantages when compared to the direct method of repetitively model-checking an updated model with respect to already-verified subformulas. By using argumentation, we aimed to provide a general solution for model repair, in which the above principles can be implemented. For instance, the protections in Carrillo and Rosenblueth (2014) can be modeled as defeaters which block the derivation of protected operations (viz. add transition, remove transition, add label, and remove label). We argue that better scaling might be achieved by codifying protections with arguments. Moreover, the expressiveness of the defeasible logic employed in DeLP allows one to codify both hard protections and soft protections. For our scenario, even if a requirement protects the UAV from turning left, there might be some particularly dangerous situation in which it is better to treat this protection as soft, and the controller might breach it.

A knowledge-based algorithm for the automatic repair of authentication protocols is proposed in Bonakdarpour et al. (2014). Epistemic logic is used to specify and reason on the security properties of the protocol, a process that requires analysis of the knowledge of different agents in different states. Repairing is based on three steps: (1) identify the breached state, (2) identify the step that could be modified to eliminate the breach, and (3) use the knowledge difference between states reached in the presence, respectively absence, of the adversary. Compared to Bonakdarpour et al. (2014), we did not attack the complex recursive situations of agents reasoning about each other's knowledge. At this level, the DeLP system assumes a single agent that solves the arguments. However, mechanisms for multi-agent argumentation do exist and they have already been applied in safety critical scenarios (i.e., Tolchinsky, Modgil, Atkinson, McBurney, & Cortés, 2012). Still, our solution does include knowledge in the form of arguments to guide the repairing.

Automated CTL model repair of Petri Nets is proposed in Martínez-Araiza and López-Mellado (2015). The repair algorithm uses closed and open constraints. Closed constraints must be satisfied after each repair. Open constraints are the constraints not repaired yet, and they are updated each time the repair algorithm is recursively called. If the algorithm tries to include in the set of open constraints an existing constraint, a cycle is signaled. A repair strategy is defined for each CTL operator in an open constraint. For instance, the strategy to solve a CTL subformula of type "there exists a next state" is to add a transition between two states. Our argument-based solution supports continuous reasoning about changes in the environment, as the premises of the arguments are fed with data from sensors.

Argumentation has been used to trace the rationale behind UML model change (Jureta & Faulkner, 2007). The goal was to justify the changes to UML models to all stakeholders, with structured arguments providing rigor during discussions. The proposed argumentation-based method contributes to the traceability of design rationale in UML. Our work differs in two aspects: First, the model is not a UML diagram, but a Kripke structure. Second, our dialectical tree aims to support a software agent in repairing its behavioral model and it is not oriented toward the human agents as in Jureta and Faulkner (2007). Also, our method is oriented toward developing agents that need to take justified decisions in real time (Navarro, Heras, Botti, & Julián, 2013) and not toward justifying compliance (Burgemeestre, Hulstijn, & Tan, 2011) against various normative frameworks.

Argumentation and formal verification. A tighter connection between argumentation frameworks and model checking is given by means of modal logic. Basic notions of argumentation theory like acceptability, admissibility, and complete or stable extensions have been formalized in the modal logic K extended with the universal modality (Grossi, 2010). Consequently, the relationship between dialogue games and model checking games has been further investigated, where model checking games have been used to prove that an argument belongs to a certain extension of the argumentation framework. Our research is in the line opened by Grossi (2010, 2011) and continued by Schechter (2012) on interleaving argumentation theory with modal logics. The approach of Grossi translates an argumentation framework into a Kripke structure, while in our case we use structured arguments aiming at automatic model repair.

The OpenRISA framework (Yu, Franqueira, Tun, Wieringa, & Nuseibeh, 2015) integrates informal risk assessment with formal analysis by proposing a modeling language for argumentation and risk assessment. The resulting automated tool applies argumentation in the domain of software security. Automated checking of arguments identifies relevant risks and treats them as rebuttals for arguments. As prioritized risks with numerical values are attached to arguments and security requirements, this indirectly results in the prioritization of arguments. In our case, the preference relation is qualitative and it is directly attached to the arguments. Differently from our work, the approach in Yu et al. (2015) relies on the Toulmin model of argumentation, while we rely on the rule-based structured arguments of DeLP. Reasoning in Yu et al. (2015) relies on propositional logic, while we exploit the expressiveness of defeasible logic to handle incomplete and inconsistent information.

Our approach can be integrated into the larger domain of using argumentation for the safeness of critical software. Critical software systems require the submission of safety assurance documents to obtain certification from various certification bodies. Such assurance documents are represented by safety cases that are justificatory and explanatory arguments supporting the safeness of the system for a particular application in a particular context or environment.

In this line, a connection between model checking and argumentation theory is given by assurance cases in the Goal Structuring Notation (GSN) (Spriggs, 2012). In this software engineering related approach, the arguments use the outputs of model checking as evidence to support general claims (or goals) like safety in different operative contexts. Usual approaches of GSN in the aeronautic industry (Denney, Pai, & Pohl, 2012; Kritzinger, 2006; Poreddy & Corns, 2011) use compliance arguments (standard based), risk arguments (product based) or confidence arguments (confidence in evidences and inferences). The safety argumentative cases in GSN are based on arguments on top of evidences collected from formal methods. In our case, after each argumentation step, the new model is verified against the available specifications. Quantifying confidence in a GSN-based safety argument for a UAS is presented in Denney, Pai, and Habli (2011); after the uncertainties in the GSN argument are identified and quantified, Bayesian Networks (BNs) are used to reason about confidence in a probabilistic way. Given the heterogeneity of safety-relevant information in aviation assurance cases, one task is to integrate formal and non-formal methods into a single safety case (Denney et al., 2012). While GSN is a graphical notation for arguments, we rely on DeLP reasoning capabilities to analyze and compare arguments.
Recently, argument patterns have been proposed to enable the construction and formal verification of a safety case (Prokhorova, Laibinis, & Troubitsyna, 2015). The set of argument patterns includes: argument for well-definedness of formal development, argument for safety requirements of global properties, argument about local properties, argument about control flow, argument for absence of system deadlock, and argument for hierarchical safety requirements. Argumentation-based reasoning is used to support the safety case with formal proofs, with the model checking calculus used as the safety evidence with respect to a set of safety requirements. In the same line, we enact rule-based arguments to support the update of the behavioral model with respect to a set of safety requirements. Differently, the technical instrumentation used to complement argument reasoning is HLs in our case and, respectively, the Event-B (Abrial, 2010) formal method for verification in the case of Prokhorova et al. (2015).

Verifying aerospace systems. Formal methods for aerospace systems increase assurance of correctness, facilitate certification and reduce development cost. Various case studies on model checking aerospace systems have been published (Zhao & Rozier, 2014).

Nakamatsu, Suito, Abe, and Suzuki (2002) propose a theoretical framework for the logical safety verification of an ATC system based on a paraconsistent logic program called an Extended Vector Annotated Logic Program with Strong Negation (EVALPSN for short). EVALPSN uses numerical weights for assigning credibility to facts; as our approach relies on DeLP, on argument construction, and on a dialectical process for determining which conclusions are warranted, our solution relieves the knowledge engineer of the burden of weighing his information.

Capobianco, Chesñevar, and Simari (2005) define ODeLP, which allows using DeLP in a dynamic environment based on facts obtained through observations. ODeLP allows precompiling the status of arguments in order to compute warrants more efficiently. Our approach could benefit from ODeLP in the case of programs formed only by defeasible rules, because ODeLP does not accept strict rules as needed by our proposal.

Program verification is used in Fisher, Dennis, and Webster (2013) to prove that a rational agent always acts in line with the specifications and never chooses actions it believes will lead to unsafe situations. Hence, the verification regards the decisions of the agent and not the effects of these decisions in the environment. Thus an agent's behavior is considered safe if all the available relevant information has been used to take the best decision for the moment. Computation tree logic is applied on the BDI model to verify sentences like: "the agents choose actions they believe to be good", or "the agents never select an intention they believe will lead to something bad". In our case, the checker would verify that a proper argumentation tree has been generated for each decision, where "proper" means that the dialectical tree should satisfy specific requirements.

Checking the temporal properties of a model has been the focus of many research works, and LTL has been widely used in this direction. Model checking agents for autonomous unmanned aircraft (Webster, Cameron, Fisher, & Jump, 2014) were used to encode and verify 23 properties based on the "Rules of Air and Airmanship" using linear temporal logic. The approach demonstrated that model checking and flight simulation can be used together to increase assurance of UAV safety. In our case, we interleave model checking with argumentation to update the UAV model in order to be compliant with a subset of the same rules of the air.

Dealing with conflicting specifications formalized in LTL assumes (Tumova, Reyes Castro, Karaman, Frazzoli, & Rus, 2013) that each LTL formula satisfied by the system has a reward. The proposed solution is to find an optimal control strategy (model) that maximizes the sum of rewards earned if this control strategy is applied. Instead of specifying absolute values for rewards, our mechanism uses preferences and defeaters as specified by the defeasible logic of DeLP. However, the advantages LTL brings regarding the use of temporal operators are sometimes shadowed by the limitations of its knowledge- and state-based representation of the model. It lacks mechanisms for naming states, for accessing states by names, and for dynamically creating new names for states (Franceschet & de Rijke, 2006). HL (Lange, 2009) comes as a solution in this direction, as it allows referring to states in a truly modal framework, mixing features from both first-order and modal logics. But hybridization is not simply about quantifying over states. Rather, hybridization is about handling different types of information in a uniform way (Blackburn & Tzakova, 1999). This is useful in model checking, where we need to combine different types of information to be verified against a model. It is also useful when we need to feed the argumentation machinery of DeLP with various information to decide in case of inconsistent specifications.

In brief, the advantages that an argumentation machinery on top of hybrid logic brings to model repair are:

• arguments represent a means to introduce and use human knowledge during repairing (instead of the epistemic logic in Bonakdarpour et al. (2014));
• arguments guide the repairing process, thus limiting the search space of possible models (instead of the abstraction refinement of Chatzieleftheriou et al. (2012));
• argumentation provides a general solution for model repair, in which protections (Carrillo & Rosenblueth, 2014), but also hard or soft constraints, can be implemented with elements of defeasible logic: strict rules, defeasible rules, and defeaters;
• combining arguments with hybrid logic can handle different types of information in a uniform way (avoiding the limitations of LTL and CTL (Franceschet & de Rijke, 2006)); this is particularly useful given the heterogeneity to be considered for UAV safety;
• argumentation is a means to include real-time decisions on a previously formally specified system; and
• interleaving argumentation and model checking increases transparency and facilitates the writing of safeness assurance documents (in line with Prokhorova et al., 2015), as required by the current certification standards in aeronautics (i.e., the Federal Aviation Administration operational approval guidance for UASs).

It would not be fair on our part to overlook opponents of automation in ATC. To the best of our knowledge we can cite the work of Endsley (1996), who notes that the successful implementation of automation is a complex issue because the traditional form of automation that places humans in the role of monitor has been shown to negatively impact situation awareness and thus their ability to perform that function. Endsley also points out that losses in situation awareness can be attributed to the unsuitability of humans to perform a monitoring role, the assumption of a passive role in decision making, and inadequate feedback associated with automation.

8. Conclusions and future work

We presented a framework based on defeasible argumentation and model checking that is suitable to be used as the basis for the implementation of recommender systems for assisting flight controllers in ATC systems. The crux of safety assurance in ATC systems still comes down to human decision makers, who, within the context of having to define priorities while simultaneously considering different contextual criteria, present a constant high risk of erroneous decisions. Our approach presents a serious aid for assisting flight controllers in reaching rational decisions related to safety constraints that must be kept in order to conduct a safe landing. We presented a case study where DeLP, as a working implementation of defeasible argumentation, is used to codify a set of possibly incomplete and potentially inconsistent landing safety criteria that are evaluated in a real-time fashion to provide a stream of safety recommendations
based on the input fed to the system by the plane and runway sensors, located in a simplified world of an airport with a plane that has to land under different weather, runway and aircraft conditions.

We proposed using DeLP for hybrid logics model update, where we view a Hybrid Kripke model as a description of the world that we are interested in; the update on this Kripke model occurs when the system has to accommodate some newly desired properties or norm constraints, in which case argumentation theory acts as a control mechanism during the adaptive process. The main application of our approach is in autonomous systems, which should safely adapt their behavior to the new specifications. We presented a case study where our proposal is applied to assisting the flight decisions in a UAV. We detailed an automated solution for model self-adaptation based on the interleaving between HL-based model checking and argumentation, and illustrated it on a real world scenario for a UAS. We showed that model checking provides significant support in the analysis of a system's model, while the expressivity of the HLs used in formalizing the different constraints that the system must comply with enables a more refined verification, by allowing to focus on specific states or transitions of interest in the model. Once the non-compliant states or transitions between states are identified, DeLP provides a filtering between possible repair solutions, considering only the minimum set of changes to be applied to the model such that compliance is achieved, allowing in the end to reach a viable update decision.

In order to apply argumentation for the repair of Kripke models, we map the information encapsulated in the Kripke structure to a DeLP program such that arguments can be constructed and, based on them, updates can be performed on the initial model. In this direction, the elements of a Kripke structure (states, labels and transitions between states) are considered part of a defeasible logic program, where the information about states corresponds to the set of strict rules, and the labels and the transitions between states can be regarded as belonging to the set of defeasible rules. Once a formal verification performed on the model yields a negative result concerning conformance to a certain set of constraints, we identify whether the presence or absence of a certain state or transition led to the undesired outcome of the model checking task. Depending on the output of the model checker, the following steps are performed: each non-compliant transition is considered for a query and an argument is entailed for clarifying the infringement of a constraint; each indication of the absence of a required transition leads to a new query and an argument which promotes the introduction of the missing transition; each non-compliant labeling of a state is considered for a query and an argument is entailed for clarifying the infringement of the constraint, which results in an update to the labeling values; and each indication of the absence of a required state leads to an update of the set of the defeasible logic program and of the Kripke model, and the argumentation steps are repeated for the updated defeasible program and Kripke model. Once the arguments are constructed, the decision over a certain repair solution is taken using DeLP.

The presented approach has several contributions with regard to the field of Expert and Intelligent Systems. To the best of our knowledge, this is the first approach that applies the combination of defeasible argumentation and model checking to the problem of dealing with inconsistent and incomplete safety criteria. Because our approach is based on DeLP, the inconsistency of criteria is handled automatically by the reasoning system. Therefore, the system engineer can concentrate on the knowledge representation process even when the field being modeled could be intrinsically inconsistent; when the field is consistent, the system behaves exactly the same as a traditional logic programming setting. In our particular case study, the arguments produced by the argumentation reasoning mechanism are compared syntactically using specificity; however, the system accepts other modular criteria for comparing arguments, making it flexible for modeling other comparison criteria (e.g. based on measures of sensor reliance, trust between different sources, etc.).

Among the limitations of our approach we can point out the following. In DeLP the knowledge engineer must decide the separation of safety criteria into strict and tentative, codifying them either as strict or defeasible rules. Even though DeLP is capable of detecting when the strict information is inconsistent, this impedes the system from reaching any conclusion. The engineer is posed with the task of debugging the strict information (just as in the case of more traditional approaches). In this regard, Belief Revision has been presented in the past as a viable solution (Falappa, García, Kern-Isberner, & Simari, 2011). Reasoning in DeLP is performed using the so-called grounded semantics, which has some known issues (Caminada & Amgoud, 2007), and it does not currently support continuous reasoning as proposed in this work.

As part of our future work we are considering exploring the relationship between model checking and belief revision, to be applied when the strict information may be inconsistent. Other improvements can be found by exploring the behavior of our system under other argumentation semantics such as stable, preferred and admissibility-based semantics. For implementing continuous reasoning in DeLP, the embedding of infinite list processing in the world of logic programming can benefit from a functional programming approach based on lazy evaluation, as suggested by Groza and Letia (2012).

We do not consider here more complicated practical dimensions which are very significant in UAV operation, like the integration of different sources of information or hybrid symbolic and numeric data at different levels of abstraction. Also, the dynamics of the environment might require a more refined model of the flying process, including some kind of stream representation and reasoning. We did not consider the support required for the different levels of abstraction, the management of uncertainty, flexible configuration and reconfiguration, or quantitative and qualitative processing of the fluent streams. Future research may cover processing that enhances the capabilities of the proposed method and/or adaptation to various events. The human–UAV interaction enabled by the use of hybrid logic, not tackled in this article, is extremely important when we want to enable the use of human knowledge in various environments and situations. Since hybrid logic is better suited for human understanding than other logics, another significant path of research would be to further advance the reasoning of planning for UAVs combined with goal-driven autonomy. The engineering of autonomous agents that display rational and goal-directed behavior in dynamic environments needs a much better bridge over the sense-reasoning gap available today, and this constitutes a very large field of research for argumentation.

Conflict of interest

The authors declare that they have no conflict of interest.

Acknowledgments

Part of this work was supported by the Argentina-Romania Bilateral Agreement entitled "ARGSAFE: Using argumentation for justifying safeness in complex technical systems" (MINCYT-MECTS Project RO/12/05) and Universidad Nacional del Sur, Argentina. Sergio Gómez would like to thank Nicolás Rotstein, Alejandro García and the DeLP team who implemented the DeLP interpreter used for implementing the prototypes shown in this article. We would also like to show our gratitude to the anonymous reviewers whose suggestions served to improve an early version of this manuscript.

References

Abrial, J.-R. (2010). Modeling in Event-B: System and software engineering. Cambridge University Press.
Antoniou, G., Billington, D., & Maher, M. (1998). Normal forms for defeasible logic. In Proceedings of international joint conference and symposium on logic programming (pp. 160–174). MIT Press.
Antoniou, G., Maher, M. J., & Billington, D. (2000). Defeasible logic versus logic programming without negation as failure. Journal of Logic Programming, 42, 47–57.
Areces, C., & ten Cate, B. (2007). Hybrid logics. In P. Blackburn, J. Van Benthem, & F. Wolter (Eds.), Handbook of modal logic (pp. 821–868). Amsterdam: Elsevier.
Bartocci, E., Grosu, R., Katsaros, P., Ramakrishnan, C., & Smolka, S. A. (2011). Model repair for probabilistic systems. In Tools and algorithms for the construction and analysis of systems (pp. 326–340). Springer.
Bedi, P., & Vashisth, P. B. (2014). Empowering recommender systems using trust and argumentation. Information Sciences, 279, 569–586.
Ben-David, S., Trefler, R., & Weddell, G. (2010). Model checking using description logic. Journal of Logic and Computation, 20(1), 111–131.
Bench-Capon, T. J. M., & Dunne, P. E. (2007). Argumentation in artificial intelligence. Artificial Intelligence, 171(10-15), 619–641.
Blackburn, P., & Tzakova, M. (1999). Hybrid languages and temporal logic. Logic Journal of IGPL, 7(1), 27–54.
Bonakdarpour, B., Hajisheykhi, R., & Kulkarni, S. S. (2014). Knowledge-based automated repair of authentication protocols. In FM 2014: Formal methods (pp. 132–147). Springer.
Brena, R., Chesñevar, C., & Aguirre, J. (2006). Argumentation-supported information distribution in a multiagent system for knowledge management (Utrecht, Netherlands, July 2005). In Proceedings of ArgMAS 2005. LNCS: vol. 4049 (pp. 279–296). Springer Verlag.
Briguez, C. E., Budán, M. C., Deagustini, C. A., Maguitman, A. G., Capobianco, M., & Simari, G. R. (2014). Argument-based mixed recommenders and their application to movie suggestion. Expert Systems with Applications, 41, 6467–6482.
Buccafurri, F., Eiter, T., Gottlob, G., & Leone, N. (1999). Enhancing model checking in verification by AI techniques. Artificial Intelligence, 112(1), 57–104.
Burgemeestre, B., Hulstijn, J., & Tan, Y.-H. (2011). Value-based argumentation for justifying compliance. Artificial Intelligence and Law, 19(2-3), 149–186.
Caminada, M., & Amgoud, L. (2007). On the evaluation of argumentation formalisms. Artificial Intelligence, 171, 286–310.
Capobianco, M., Chesñevar, C. I., & Simari, G. R. (2005). An argument-based framework to model an agent's beliefs in a dynamic environment. In I. Rahwan, P. Moraïtis, & C. Reed (Eds.), LNAI - ArgMAS 2004: vol. 3366 (pp. 95–110). Berlin, Heidelberg: Springer-Verlag.
Carbogim, D., Robertson, D., & Lee, J. (2000). Argument-based applications to knowledge engineering. The Knowledge Engineering Review, 15(2), 119–149.
Carrillo, M., & Rosenblueth, D. A. (2014). CTL update of Kripke models through protections. Artificial Intelligence, 211, 51–74.
Chatzieleftheriou, G., Bonakdarpour, B., Smolka, S., & Katsaros, P. (2012). Abstract model repair. In A. Goodloe, & S. Person (Eds.), NASA formal methods. Lecture Notes in Computer Science: vol. 7226 (pp. 341–355). Springer Berlin Heidelberg.
Chesñevar, C., Brena, R., & Aguirre, J. (2005a). Knowledge distribution in large organizations using defeasible logic programming. In Proceedings of the 18th Canadian Conference on AI (published in LNCS, vol. 3501, Springer Verlag) (pp. 244–256). Springer Verlag.
Chesñevar, C., Brena, R., & Aguirre, J. (2005b). Modelling power and trust for knowledge distribution: an argumentative approach. In LNAI Springer series (Proceedings of the 3rd Mexican international conference on artificial intelligence - MICAI'05): 3789 (pp. 98–108).
Chesñevar, C., Maguitman, A., & Loui, R. (2000a). Logical models of argument. ACM Computing Surveys, 32(4), 337–383.
Chesñevar, C., Maguitman, A., & Simari, G. (2006). Argument-based critics and recommenders: A qualitative perspective on user support systems. Data & Knowledge Engineering, 59(2), 293–319.
Chesñevar, C., Simari, G., Alsinet, T., & Godo, L. (2004). A logic programming framework for possibilistic argumentation with vague knowledge. In Proceedings of the international conference in uncertainty in artificial intelligence (UAI'04) (pp. 76–84). Banff, Canada.
Chesñevar, C., Simari, G., Godo, L., & Alsinet, T. (2005c). Argument-based expansion operators in possibilistic defeasible logic programming: Characterization and logical properties. In LNAI/LNCS Springer series, vol. 3571 (Proceedings of the 8th ECSQARU international conference, Barcelona, Spain) (pp. 353–365).
Chesñevar, C. I., Maguitman, A., & Loui, R. (2000b). Logical models of argument. ACM Computing Surveys, 32(4), 337–383.
Chesñevar, C., & Maguitman, A. (2004a). An argumentative approach to assessing natural language usage based on the Web corpus. In Proceedings of the 16th ECAI conference, Valencia, Spain (pp. 581–585).
Chesñevar, C., & Maguitman, A. (2004b). ArgueNet: An argument-based recommender system for solving web search queries. In Proceedings of the 2nd IEEE international IS'04 conference, Varna, Bulgaria (pp. 282–287).
Chow, H. K., Siu, W., Chan, C.-K., & Chan, H. C. (2013). An argumentation-oriented multi-agent system for automating the freight planning process. Expert Systems with Applications, 40, 3858–3871.
Clarke, E. M., Grumberg, O., & Hamaguchi, K. (1997). Another look at LTL model checking. Formal Methods in System Design, 10(1), 47–71.
Cranefield, S., & Winikoff, M. (2011). Verifying social expectations by model checking truncated paths. Journal of Logic and Computation, 21(6), 1217–1256.
Danaher, J. W. (1980). Human error in ATC system operations. Human Factors: The Journal of the Human Factors and Ergonomics Society, 22(5), 535–545.
Deagustini, C. A., Dalibón, S. E. F., Gottifredi, S., Falappa, M. A., Chesñevar, C. I., & Simari, G. R. (2013). Relational databases as a massive information source for defeasible argumentation. Knowledge-Based Systems, 51, 93–109.
De Menezes, M. V., do Lago Pereira, S., & de Barros, L. N. (2010). System design modification with actions. In Advances in artificial intelligence - SBIA 2010 (pp. 31–40). Springer.
Denney, E., Pai, G., & Habli, I. (2011). Towards measurement of confidence in safety cases. In 2011 international symposium on empirical software engineering and measurement (ESEM) (pp. 380–383). IEEE.
Denney, E., Pai, G., & Pohl, J. (2012). Heterogeneous aviation safety cases: Integrating the formal and the non-formal. In 2012 17th international conference on engineering of complex computer systems (ICECCS) (pp. 199–208). IEEE.
Dimopoulos, Y., & Kakas, A. (1995). Logic programming without negation as failure. In J. Lloyd (Ed.), Logic programming (pp. 369–383). Cambridge, MA: MIT Press.
Endsley, M. R. (1996). Automation and situation awareness. In R. Parasuraman, & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163–181). Mahwah, NJ: Lawrence Erlbaum.
Falappa, M. A., García, A. J., Kern-Isberner, G., & Simari, G. R. (2011). On the evolving relation between belief revision and argumentation. The Knowledge Engineering Review, 26(1), 35–43.
Fisher, M., Dennis, L., & Webster, M. (2013). Verifying autonomous systems. Communications of the ACM, 56(9), 84–93.
Fomina, M., Morosin, O., & Vagin, V. (2014). Argumentation approach and learning methods in intelligent decision support systems in the presence of inconsistent data. In 14th international conference on computational science (ICCS 2014): vol. 29 (pp. 1569–1579). Procedia Computer Science, Elsevier.
Franceschet, M., & de Rijke, M. (2006). Model checking hybrid logics (with an application to semistructured data). Journal of Applied Logic, 4, 279–304.
García, A., & Simari, G. (2004). Defeasible logic programming: an argumentative approach. Theory and Practice of Logic Programming, 4(1), 95–138.
García, A. J., & Simari, G. R. (2014). Defeasible logic programming: DeLP-servers, contextual queries, and explanations for answers. Argument & Computation, 5(1), 63–88. http://dx.doi.org/10.1080/19462166.2013.869767.
Gómez, S., & Chesñevar, C. (2004). A hybrid approach to pattern classification using neural networks and defeasible argumentation. In Proceedings of 17th international FLAIRS conference, Miami, Florida, USA (pp. 393–398). American Association for Artificial Intelligence.
Gómez, S. A., Chesñevar, C. I., & Simari, G. R. (2008). Defeasible reasoning in web forms through argumentation. International Journal of Information Technology & Decision Making, 7, 71–101.
Gómez, S. A., Chesñevar, C. I., & Simari, G. R. (2010). Reasoning with inconsistent ontologies through argumentation. Applied Artificial Intelligence, 1(24), 102–148.
Gómez, S. A., Chesñevar, C. I., & Simari, G. R. (2013). ONTOarg: A decision support framework for ontology integration based on argumentation. Expert Systems with Applications, 40, 1858–1870.
Gómez, S. A., Goron, A., & Groza, A. (2014). Assuring safety in an air traffic control system with defeasible logic programming. In 15th Argentinian Symposium of Artificial Intelligence (ASAI 2014) (pp. 18–25). Sociedad Argentina de Investigación Operativa (SADIO).
Goron, A., Groza, A., Gómez, S. A., & Letia, I. A. (2014). Towards an argumentative approach for repair of hybrid logics models. In ArgMAS 2014. MIT. http://www.mit.edu/~irahwan/argmas/argmas14/w12-04.pdf.
Graydon, P., & Kelly, T. P. (2013). Using argumentation to evaluate software assurance standards. Information and Software Technology, 55(9), 1551–1562.
Grossi, D. (2010). On the logic of argumentation theory. In Proceedings of the 9th international conference on autonomous agents and multiagent systems, vol. 1 (AAMAS'10) (pp. 409–416). Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.
Grossi, D. (2011). Argumentation in the view of modal logic. In Argumentation in multi-agent systems (pp. 190–208). Springer.
Groza, A., & Letia, I. (2012). Plausible description logics programs for stream reasoning. Future Internet, 4, 865–881.
Jureta, I. J., & Faulkner, S. (2007). Tracing the rationale behind UML model change through argumentation. In Conceptual modeling (ER'07) (pp. 454–469). Springer.
Kakas, A. C., Mancarella, P., & Dung, P. M. (1994). The acceptability semantics for logic programs. In Proceedings of the 11th international conference on logic programming, Santa Margherita, Italy (pp. 504–519). MIT Press.
Kakas, A. C., & Toni, F. (1999). Computing argumentation in logic programming. Journal of Logic and Computation, 9(4), 515–562.
Kritzinger, D. (2006). 5 - Goal-based approach. In D. Kritzinger (Ed.), Aircraft system safety (pp. 57–68). Woodhead Publishing. http://www.sciencedirect.com/science/article/pii/B9781845691363500051.
Lange, M. (2009). Model checking for hybrid logic. Journal of Logic, Language and Information, 18(4), 465–491.
Letia, I. A., & Goron, A. (2014). Instrumenting the auditing of business process logs. In 4th international workshop on combinations of intelligent methods and applications (CIMA'14). Limassol, Cyprus.
Letia, I. A., & Groza, A. (2013). Compliance checking of integrated business processes. Data and Knowledge Engineering, 87, 1–18.
Lloyd, J. (1987). Foundations of logic programming. Springer-Verlag.
Martinez, M. V., García, A. J., & Simari, G. R. (2012). On the use of presumptions in structured defeasible reasoning. In B. Verheij, S. Szeider, & S. Woltran (Eds.), Computational models of argument - Proceedings of COMMA 2012, Vienna, Austria, September 10–12, 2012: vol. 245 (pp. 185–196). Frontiers in Artificial Intelligence and Applications, IOS Press. http://dx.doi.org/10.3233/978-1-61499-111-3-185.
Martínez-Araiza, U., & López-Mellado, E. (2015). CTL model repair for bounded and deadlock free Petri nets. IFAC-PapersOnLine, 48(7), 154–160.
Moon, W.-C., Yoo, K.-E., & Choi, Y.-C. (2011). Air traffic volume and air traffic control human errors. Journal of Transportation Technologies, 1, 47–53. doi:10.4236/jtts.2011.13007.
Nakamatsu, K., Suito, H., Abe, J. M., & Suzuki, A. (2002). Paraconsistent logic program based safety verification for air traffic control. In 2002 IEEE international conference on systems, man and cybernetics: vol. 5. IEEE. doi:10.1109/ICSMC.2002.1176399, ISBN 0-7803-7437-1.
Navarro, M., Heras, S., Botti, V., & Julián, V. (2013). Towards real-time agreements. Expert Systems with Applications, 40(10), 3906–3917.
Nute, D. (1988). Defeasible reasoning. In J. H. Fetzer (Ed.), Aspects of artificial intelligence (pp. 251–288). Norwell, MA: Kluwer Academic Publishers.
Nute, D. (1992). Basic defeasible logic. In L. Fariñas del Cerro (Ed.), Intensional logics for programming. Oxford: Clarendon Press.
Parsons, S., Sierra, C., & Jennings, N. (1998). Agents that reason and negotiate by arguing. Journal of Logic and Computation, 8, 261–292.
Pollock, J. (1974). Knowledge and justification. Princeton University Press.
Pollock, J. L. (1987). Defeasible reasoning. Cognitive Science, 11, 481–518.
Pollock, J. L. (1995). Cognitive carpentry: A blueprint for how to build a person. Bradford/MIT Press.
Poreddy, B. R., & Corns, S. (2011). Arguing security of generic avionic mission control computer system (MCC) using assurance cases. Procedia Computer Science, 6, 499–504.
Prakken, H., & Sartor, G. (2002). The role of logic in computational models of legal argument—A critical survey. In A. Kakas, & F. Sadri (Eds.), Computational logic: Logic programming and beyond (pp. 342–380). Springer.
Prakken, H., & Vreeswijk, G. (2002). Logical systems for defeasible argumentation. In D. Gabbay, & F. Guenther (Eds.), Handbook of philosophical logic (pp. 219–318). Kluwer Academic Publishers.
Prokhorova, Y., Laibinis, L., & Troubitsyna, E. (2015). Facilitating construction of safety cases from formal models in Event-B. Information and Software Technology, 60, 51–76. http://www.sciencedirect.com/science/article/pii/S0950584915000038.
Rahwan, I., Ramchurn, S. D., Jennings, N. R., McBurney, P., Parsons, S., & Sonenberg, L. (2003). Argumentation-based negotiation. Knowledge Engineering Review, 18(4), 343–375.
Rahwan, I., & Simari, G. R. (2009). Argumentation in artificial intelligence. Springer.
Rushby, J. (2009). A safety-case approach for certifying adaptive systems. In AIAA Infotech@Aerospace conference, American Institute of Aeronautics and Astronautics, Seattle.
Schechter, L. M. (2012). A logic of plausible justifications. In L. Ong, & R. de Queiroz (Eds.), Lecture Notes in Computer Science: vol. 7456. Logic, language, information and computation (pp. 306–320). Springer.
Sierra, C., & Noriega, P. (2002). Agent-mediated interaction. From auctions to negotiation and argumentation. In Foundations and applications of multi-agent systems. LNCS Series: vol. 2403 (pp. 27–48). Springer.
Simari, G., & Loui, R. (1992). A mathematical treatment of defeasible reasoning and its implementation. Artificial Intelligence, 53, 125–157.
Spencer, D. (1989). Applying artificial intelligence techniques to air traffic control automation. The Lincoln Laboratory Journal, 2(3), 537–554.
Spriggs, J. (2012). GSN—The goal structuring notation. Springer.
Stolzenburg, F., García, A., Chesñevar, C., & Simari, G. (2003). Computing generalized specificity. Journal of Non-Classical Logics, 13(1), 87–113.
Tacchella, A., & Katoen, J.-P. (2015). A greedy approach for the efficient repair of stochastic models. In NASA formal methods: Proceedings of the 7th international symposium (NFM'15), Pasadena, CA, USA, April 27–29, 2015: vol. 9058 (p. 295). Springer.
Thomson, R. (2010). How technology is improving air traffic control. http://www.computerweekly.com/feature/How-technology-is-improving-air-traffic-control (accessed on 21.10.14).
Tolchinsky, P., Modgil, S., Atkinson, K., McBurney, P., & Cortés, U. (2012). Deliberation dialogues for reasoning about safety critical actions. Autonomous Agents and Multi-Agent Systems, 25(2), 209–259.
Tumova, J., Reyes Castro, L. I., Karaman, S., Frazzoli, E., & Rus, D. (2013). Minimum-violation LTL planning with conflicting specifications. In American control conference (ACC), 2013 (pp. 200–205). IEEE.
Verheij, B. (2005). Virtual arguments. On the design of argument assistants for lawyers and other arguers. The Hague: Asser Press.
Webster, M., Cameron, N., Fisher, M., & Jump, M. (2014). Generating certification evidence for autonomous unmanned aircraft using model checking and simulation. Journal of Aerospace Information Systems, 11(5), 258–279.
Webster, M., Fisher, M., Cameron, N., & Jump, M. (2011a). Formal methods for the certification of autonomous unmanned aircraft systems. In Computer safety, reliability, and security (pp. 228–242). Springer.
Webster, M., Fisher, M., Cameron, N., & Jump, M. (2011b). Model checking and the certification of autonomous unmanned aircraft systems. Technical report ULCS-11-001. Department of Computer Science, University of Liverpool, Liverpool, United Kingdom.
Yu, Y., Franqueira, V. N., Tun, T. T., Wieringa, R. J., & Nuseibeh, B. (2015). Automated analysis of security requirements through risk-based argumentation. Journal of Systems and Software, 106, 102–116. http://www.sciencedirect.com/science/article/pii/S0164121215000850.
Zhang, P., Sun, J., & Chen, H. (2005). Frame-based argumentation for group decision task generation and identification. Decision Support Systems, 39, 643–659.
Zhang, Y., & Ding, Y. (2008). CTL model update for system modifications. Journal of Artificial Intelligence Research, 31, 113–155.
Zhao, Y., & Rozier, K. Y. (2014). Formal specification and verification of a coordination protocol for an automated air traffic control system. Science of Computer Programming Journal, 96(3), 337–353.