

**Future Generation Computer Systems**

journal homepage: www.elsevier.com/locate/fgcs

**Model checking grid security**

F. Pagliarecci∗, L. Spalazzi, F. Spegni

Università Politecnica delle Marche – Ancona, Italy

**Abstract**

Grid computing is one of the leading forms of high performance computing. Security in the grid environment is a challenging issue that can be characterized as a complex system involving many subtleties that may lead designers into error. This is similar to what happens with security protocols, where automatic verification techniques (especially model checking) have proved to be very useful at design time. This paper proposes a formal verification methodology based on model checking that can be applied to host security verification for grid systems. The proposed methodology takes into account that a grid system can be described as a parameterized model, while security requirements can be described as hyperproperties. Unfortunately, both parameterized model checking and hyperproperty verification are, in general, undecidable. However, it has been proved that this problem becomes decidable when jobs have some regularities in their organization. Therefore, this paper presents a verification methodology that reduces a given grid system model to a model to which it is possible to apply a ‘‘cutoff’’ theorem (i.e., a requirement is satisfied by a system with an arbitrary number of jobs if and only if it is satisfied by a system with a finite number of jobs up to a cutoff size). This methodology is supported by a set of theorems, whose proofs are presented in this paper. The methodology is explained by means of a case study: the Condor system. © 2011 Elsevier B.V. All rights reserved.

Article history: Received 12 April 2011; Received in revised form 22 November 2011; Accepted 23 November 2011; Available online 9 December 2011.
Keywords: Grid computing; Grid security; Parameterized model checking; Hyperproperties

1. Introduction

Grid computing is one of the leading forms of high performance computing. It enables inter-operability between distributed and heterogeneous resources in a standardized manner. Its architecture is defined in terms of a layered collection of protocols [1] (Fig. 1 reports the related architecture, often referred to as the hourglass model). Security in a grid environment is a challenging issue because this environment instantiates interactions among a set of possibly unknown entities. Therefore, security in grid computing can be characterized as a complex system involving many subtleties that may lead designers into error¹ [2]. This is similar to what happens with security protocols [3] where, on the other hand, automatic verification techniques (especially model checking) have proved to be very useful at design time [4]. This paper proposes a formal verification method based on model checking that can be applied to host security verification. More precisely, host security, as part of infrastructure-related issues

∗ Correspondence to: Università Politecnica delle Marche, Dipartimento di Ingegneria dell’Informazione, Via Brecce Bianche, 60131 Ancona, Italy. Tel.: +39 071 2204447. E-mail addresses: pagliarecci@dii.univpm.it (F. Pagliarecci), spalazzi@dii.univpm.it (L. Spalazzi), spegni@dii.univpm.it (F. Spegni).
¹ For a list of vulnerabilities, see http://lists.globus.org/pipermail/security-announce/ or http://www.cs.wisc.edu/condor/security/vulnerabilities/.
doi:10.1016/j.future.2011.11.010

at the Fabric Layer of Fig. 1 [5,6], deals with the issues that make a host apprehensive about affiliating itself with the grid system, namely: job protection and data protection. In other words, whenever a host is affiliated to the grid, host security consists in protecting the already existing jobs, resources, and data in that host. Indeed, the host submitting a job may be untrusted or unknown to the host running it. This means that the guest job may contain malicious or defective code that violates the security requirements of already existing jobs and data: the well-known CIA triad – i.e., confidentiality, integrity, availability – as well as authentication, authorization, and non-repudiation (e.g., the reader can refer to the glossary proposed by the Committee on National Security Systems [7]). Nevertheless, this domain has some peculiarities that make it different from model checking the security requirements of security protocols. As a matter of fact, security protocols can be described by means of a finite-state model, whereas a grid computing system is composed of an arbitrary number of hosts, each running an arbitrary number of jobs that work on an arbitrary number of data; thus, it must be described by means of an infinite-state parameterized model. Unfortunately, parameterized model checking is known to be undecidable in general [8]. However, it has been proved that this problem becomes decidable when systems have some regularities in their organization [9,10]. The approach presented in this paper relies on two key ideas: the first one consists in using Emerson and Kahlon’s labeled transition systems with disjunctive guards [10], whose verification has been proved to be decidable. The related theorem states that a
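The cutoff-based reduction just described lends itself to a very simple verification loop. The sketch below is ours, not the paper's: `check` stands for an assumed model-checking oracle that decides the property for a system with a fixed number of job instances, and the cutoff value is taken as given.

```python
def verify_up_to_cutoff(check, cutoff):
    """Decide a parameterized property via a cutoff theorem: the property
    holds for systems of every size if and only if it holds for each
    system with 1..cutoff job instances.  `check(n)` is an assumed oracle
    that model-checks the system with exactly n instances."""
    return all(check(n) for n in range(1, cutoff + 1))

# Toy oracles standing in for a real model checker:
print(verify_up_to_cutoff(lambda n: True, cutoff=3))    # → True
print(verify_up_to_cutoff(lambda n: n != 2, cutoff=3))  # → False (fails at n = 2)
```

The point of the cutoff theorem is precisely that this finite loop is sound and complete for the infinite family of systems.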

requirement is satisfied by a system with an arbitrary number of jobs if and only if it is satisfied by a system with a finite number of jobs up to a cutoff size. The second key idea consists in proposing a verification methodology that reduces a given grid system model to a model to which such a ‘‘cutoff’’ theorem can be applied. This methodology is supported by a set of theorems, whose proofs are presented in this paper. The methodology is explained by means of a case study: the Condor system [11].

The paper is structured as follows. Section 2 reports the related work on the formal verification of security requirements, parameterized systems, and grid systems. Section 3 reports the theoretical background that is used in the rest of the paper. Some novel theoretical results are reported in Section 4, whereas the verification methodology built on such results is reported in Section 5. Section 6 provides the application of the methodology to the case study. Finally, Section 7 provides some concluding remarks.

F. Pagliarecci et al. / Future Generation Computer Systems 29 (2013) 811–827

Fig. 1. Grid reference architecture (the hourglass model).

2. Related work

Formal verification of security requirements. Alpern and Schneider [12] proposed a classification of program specifications in terms of trace properties (each property is a subset of the set of all traces): they proved that each trace property can be expressed as either safety (‘‘nothing bad happens’’), liveness (‘‘something good eventually happens’’), or the conjunction of safety and liveness. Unfortunately, several security requirements cannot be expressed in terms of trace properties [13]. For this reason, Clarkson and Schneider [13] proposed a new classification of program specifications based on the notion of hyperproperties (each hyperproperty is a set of sets of traces), and they proved that each hyperproperty is the intersection of a safety hyperproperty (SH) and a liveness hyperproperty (LH). In this framework, safety and liveness properties receive an elegant characterization. According to their classification, integrity should be a safety property (even if not yet proved); availability is sometimes a property (e.g., the response time) and sometimes a hyperproperty (e.g., the average response time); confidentiality is a combination of SH and LH. Furthermore, it has been proved [13,15] that k-safety (a subset of safety hyperproperties) can be reduced to a simple safety property by means of self-composition. Maier [16] proposed a different solution for properties: an intuitionistic semantics for ltl based on Heyting algebra instead of the classical Boolean algebra; the application of intuitionistic semantics to hyperproperties is still to be studied. Furthermore, Clarkson and Schneider suggested that ‘‘hyperproperties can be models for branching-time temporal predicates, whereas properties are limited to modeling linear-time temporal predicates’’ [13]. A temporal logic that joins together branching-time and linear-time temporal predicates is the computation tree logic ctl∗\x [14], the temporal logic used in this paper. The part of hyperproperties that is covered by ctl∗\x is still an open problem.

Formal verification of parameterized systems. There exists a vast literature on verifying parameterized systems, i.e., systems with an arbitrary number of process instances. The fundamental work on this field states that the verification problem for such kind of systems is in general undecidable [8]. For this reason, several authors faced the problem with (sound but incomplete) techniques based on the idea of finding appropriate abstractions and/or invariants [17–23]. The former consists in finding a (relatively small) finite-state abstraction that preserves the property to verify. The latter consists in finding an invariant that represents the common behavior exhibited by all instances of the parameterized system: a requirement is satisfied by the parameterized system when a given relation over the invariant, a generic process, and the property is satisfied. The main limitation of the above techniques is that the process of building an abstraction mapping or an invariant for the system under verification must be guided by the user. A completely different approach is based on computing a cutoff on the number of process replications or on the maximum length of paths. The former consists in finding a finite number of process instances such that if they satisfy a property then the same property is satisfied by an arbitrary number of such processes. The latter consists in finding an upper bound on the number of nodes in the longest computation path of the parameterized system: when a property is satisfied within the bounded path, then the property holds for a system with unbounded paths. Starting from the work of German and Sistla [27], Emerson and Namjoshi [28] proved that such a cutoff exists for the verification of parameterized systems composed of a control process and an arbitrary number of user processes against indexed ltl properties; in that work, the property itself is represented as an automaton. Yang and Li [29] proposed a sound and complete method to compute such a cutoff for parameterized systems with only rendezvous actions. After the seminal work of Emerson and Kahlon [10] on cutoffs for synchronization skeletons, several works proved the existence of cutoffs for further kinds of parameterized systems and specifications: namely, token rings [9,24] and shared resources [25]. Recently, Hanna et al. [26] proposed a procedure to compute a cutoff for Input–Output Automata that is independent of the communication topology.

Formal verification of grid systems. Some work has been done to formalize grid architectures and their protocols. The proposed approaches can be grouped, according to the underlying formal models, based on: Petri Nets (e.g., see [30,31]), process algebras (e.g., see [32–36]), and state representations (e.g., see [37,38]).

The specifications addressed by these works range from grid protocols [37,38] to grid service chain and composition [34,36], grid optimization algorithms [33,35], and functional specifications (e.g., see [39]).

Concerning Petri Nets, the parameterization of the system is naturally modeled by means of a parametric amount of tokens in the network. Du et al. [30] showed that a grid system model eventually satisfies every request from grid users; to obtain this result, some auxiliary lemmata must be proved [30], which means that the verification process is not completely automatic but requires a theorem proving phase (in the approach proposed in this work, such a phase is not required; see Section 5). Dalheimer et al. [31] proposed that the model is validated by running the same tasks on the model and on the real grid implementation; once the model has been validated, they use it as a basis to foresee the load balancing of resources and the throughput time of the overall grid.

Concerning works on process algebras, Aziz and Hamilton [32] proposed the formal verification, by means of π-calculus, of non-repudiation in a delegation protocol. Xu et al. [33] verified load-balancing algorithms by means of a static analysis on process algebra expressions. The authors of [34,35] proposed a model checker for a variant of π-calculus to be applied to the verification [34] and optimization [35] of grid service chains, while Zhou and Zeng [36] used another variant of π-calculus for grid service composition verification: a kind of calculus is proposed, but no formal verification is made with it.

Concerning state representations, Mishra et al. [37] proposed to formally verify grid resource allocation protocol properties by means of SPIN; properties are expressed in ltl. A completely different approach can be found in the work of Bolotov et al., who proposed [38] an integration of ctl and Deontic logic, representing the safety and security hyperproperties as branching-time temporal formulas; the verification of complex systems is based on the natural deduction calculus, and the complexity of this approach has raised some concerns, as the authors themselves [38] admit. Finally, in the work of Prodan [39], grid application verification is based on functional specification, and the computing elements that gave their contribution to the result will be paid for.

Concerning security, it should be noted that only few works are specifically focused on security specification verification (e.g., see [32]). Some authors use model checking [34,35,37], and the rest of them use some kind of bisimulation analysis or calculus. It should be noted that, in general, protocols/workflows verification/composition do not need to be modeled as parameterized systems: this means that such approaches can be applied only to specific problems and are not able to model the intrinsically parameterized nature of grid systems. This paper, on the other hand, deals with parameterized systems and proposes an efficient verification methodology (see Sections 4 and 5).

3. Theoretical background

This work deals with the formal verification of security requirements of a grid system. In the approach proposed in this work, this means modeling the system as a parameterized system and finding an efficient way to verify it. This section reports the basic notions in this field that are needed in the rest of the paper.

3.1. Transition systems with guards

According to several authors [10,27,29], a parameterized system can be modeled as a set of labeled transition systems.

Definition 1 (Labeled Transition System). A Labeled Transition System definition (or LTS) U is the tuple ⟨S, Σ, →, L, λ, S0⟩ where:
• S is a finite and nonempty set of states;
• Σ is the finite set of actions;
• → ⊆ S × Σ × S is the transition relation;
• L is a finite set of labels;
• λ : S → L is a labeling function on the states;
• S0 ⊆ S is the set of initial states.

Notice that a Labeled Transition System is, in general, a non-deterministic system. In the following, a Labeled Transition System definition will be referred to as a definition, for short, and any of its running copies will be referred to as an instance. Moreover, only LTSs with a single initial state S0 = {ι} will be considered: this reduces notation complexity, but the results presented in this paper can be easily generalized to the case with more initial states.

At this point, it is possible to provide the notion of concurrent system as a set of transition system definitions.

Definition 2 (Concurrent System). Let {U1, ..., Uk} be a finite set of Labeled Transition System definitions. Each definition can be written as Ul = ⟨Sl, Σl, →l, Ll, λl, S0l⟩, and the i-th instance of Ul (i ≤ nl) is denoted by Ul^i = ⟨Sl^i, Σl^i, →l^i, Ll^i, λl^i, S0l^i⟩. Then, a concurrent system P is defined as follows:

P = ⟨U1(n1), ..., Uk(nk)⟩ = ⟨U1^1, ..., U1^(n1), ..., Uk^1, ..., Uk^(nk)⟩.

Intuitively, P denotes a concrete system composed of n1 copies of U1 through nk copies of Uk running in parallel asynchronously (i.e., the execution of each instance is interleaved with the others). Notice that P is still an LTS where each state is a tuple of instance states [10].

Notice that, in general, nothing is said about how each Σl is built. In the synchronization skeleton framework [10], some restrictions on Σl have been imposed.

Definition 3 (Emerson and Kahlon’s System [10]). An Emerson and Kahlon’s System (or Concurrent System with Disjunctive Guards) is the tuple ⟨U1(n1), ..., Uk(nk)⟩ such that for each definition l:
• Ll = Sl;
• λl is the identity function (i.e., λl(s) = s);
• Σl = {g | g ≡ ⋁_{r≠i} ⋁_{1≤u≤p_r} s_l^{r_u} ∨ ⋁_{j≠l} ⋁_{1≤h≤n_j} ⋁_{1≤v≤q_h} s_j^{h_v}}, where each s_l^{r_u} ∈ Sl denotes a state of instance r of definition l, and each s_j^{h_v} ∈ Sj denotes a state of instance h of definition j.

Actions such as g are called disjunctive guards because a transition s −g→ t of an instance i of definition l can be applied when g is active. A guard g is active if and only if there exists an instance r ∈ [1..nl] \ {i} of definition l such that ⋁_{1≤u≤p_r} s_l^{r_u} is true (the current state s_l^r of instance r of definition l is s_l^{r_u} for a given u), or there exists an instance h ∈ [1..nj] of a definition j ≠ l such that ⋁_{1≤v≤q_h} s_j^{h_v} is true (the current state s_j^h of instance h of definition j is s_j^{h_v} for a given v). Conventionally, the condition T_l^r = ⋁_{s∈Sl} s denotes the universal condition, which is always true (roughly speaking, it expresses that it is possible to ignore the state instance r of definition l is in). An Emerson and Kahlon’s LTS is an LTS that applies the above restrictions.
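To make the disjunctive-guard machinery of Definitions 1–3 concrete, the following minimal sketch (our own encoding, not the paper's notation) represents local states as strings, a disjunctive guard as a set of states, and a global state as a tuple holding one local state per instance; a guard is active iff some instance other than the moving one currently occupies a guard state.

```python
def guard_active(guard, global_state, mover):
    """A disjunctive guard is active iff some instance other than the
    mover currently sits in one of the guard's states."""
    return any(s in guard for j, s in enumerate(global_state) if j != mover)

def successors(transitions, global_state, mover):
    """Asynchronous interleaving: instance `mover` fires one enabled
    guarded transition while every other instance stays put."""
    out = []
    for src, guard, dst in transitions:
        if global_state[mover] == src and guard_active(guard, global_state, mover):
            nxt = list(global_state)
            nxt[mover] = dst
            out.append(tuple(nxt))
    return out

# Two instances of a definition with states {idle, busy} and a single
# transition idle -> busy, guarded by "some other instance is idle".
T = [("idle", {"idle"}, "busy")]
print(successors(T, ("idle", "idle"), 0))  # → [('busy', 'idle')]
print(successors(T, ("idle", "busy"), 0))  # → [] (guard not active)
```

This directly mirrors the activation condition of Definition 3: the mover's own state never makes its guard true.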

3.2. Specification language

According to the seminal work by Alpern and Schneider [12] and, more recently, by Clarkson and Schneider [13] and Schneider [40], system specifications can be expressed in terms of traces, where a trace is a state sequence of the system temporal structure. The temporal structure of a system is defined as follows.

Definition 4 (Temporal Structure). Let P = ⟨U1(n1), ..., Uk(nk)⟩ be an Emerson and Kahlon system. Then P = ⟨S, Σ, →, L, λ, S0⟩ is an LTS such that:
• S = S_{U1^1} × · · · × S_{Uk^(nk)} = {(s_1^1, ..., s_1^(n1), ..., s_k^1, ..., s_k^(nk)) | s_1^1 ∈ S_{U1^1}, ..., s_k^(nk) ∈ S_{Uk^(nk)}};
• Σ = Σ_{U1} ∪ · · · ∪ Σ_{Uk};
• (s_1^1, ..., s_k^(nk)) −g→ (s′_1^1, ..., s′_k^(nk)), with g ∈ Σ_{Uh}, if and only if there exist u, h such that s_h^u −g→ s′_h^u and s_j^v = s′_j^v for each v ≠ u and j ≠ h.

Intuitively, Definition 4 means that the temporal structure produced by system P is obtained straightforwardly by computing the asynchronous product of all the instances. As an example, Fig. 3 shows the asynchronous product of the simple system defined in Fig. 2.

Fig. 2. Simple system.

Fig. 3. Asynchronous product of processes in Fig. 2.

Let S be a set of states, S∗ the set of finite sequences of states, and Sω the set of infinite sequences of states. Then Π ⊆ S∗ ∪ Sω is the set of paths of P if and only if for each π ∈ Π, π = s0, s1, ..., si, si+1, ..., where, for each i, si, si+1 ∈ S and there exists g ∈ Σ such that si −g→ si+1. For a path π = s0, s1, ..., si, si+1, ..., it is possible to introduce the following notation: π[i] = si; π[..i] = s0, ..., si (prefix of π); π[i..] = si, si+1, ... (suffix of π). In the following, π(s0) denotes any path π such that π[0] = s0. A finite path π is a prefix of another (finite or infinite) path π′ (denoted π ≤ π′) if and only if there exists an index i such that π = π′[..i]. It is possible to extend the notion of prefix to sets of paths (denoted by ≤, as well) in the following way: Π ≤ Π′ iff ∀π ∈ Π · ∃π′ ∈ Π′ · π ≤ π′. Notice that Π′ may contain paths that have no prefix in Π.

According to the recent work by Clarkson and Schneider [13], a formal specification is associated to the notion of hyperproperty, defined as follows.

Definition 5 (Hyperproperty). Let S be a set of states. A hyperproperty H is a set of sets of infinite state sequences: H ⊆ 2^(Sω). Let Πω be the set of infinite paths of a system P. Then P satisfies the hyperproperty H (denoted P ⊨ H) if and only if Πω is in H; formally, P ⊨ H iff Πω ∈ H.

Two noteworthy classes of hyperproperties are: safety hyperproperties, or hypersafeties (‘‘nothing bad happens’’) [13], and liveness hyperproperties, or hyperliveness (‘‘something good eventually happens’’) [13]. They are defined as follows.

Definition 6 (Hypersafety and Hyperliveness).
• SH is a hypersafety if and only if ∀T ∈ 2^(Sω). ∃M ∈ 2^(S∗). (T ∉ SH ⇒ (M ≤ T ∧ (∀T′ ∈ 2^(Sω). M ≤ T′ ⇒ T′ ∉ SH)));
• k-SH is a k-hypersafety if and only if ∀T ∈ 2^(Sω). ∃M ∈ 2^(S∗). (T ∉ k-SH ⇒ (M ≤ T ∧ |M| ≤ k ∧ (∀T′ ∈ 2^(Sω). M ≤ T′ ⇒ T′ ∉ k-SH)));
• LH is a hyperliveness if and only if ∀T ∈ 2^(S∗). ∃T′ ∈ 2^(Sω). T ≤ T′ ∧ T′ ∈ LH.

Clarkson and Schneider proved that each hyperproperty can be obtained as the intersection of a hypersafety and a hyperliveness:

Theorem 1 (Hyperproperty Decomposition). For each hyperproperty H, there exists a hypersafety SH and a hyperliveness LH such that H = SH ∩ LH.

Several temporal logics have been introduced in order to give formal specifications of hardware and software systems. This work is based on Indexed−ctl∗\x, the Indexed Branching Temporal Logic without the next (X) operator.

Definition 7 (Syntax of Indexed−ctl∗\x). Let AP be a set of atomic propositions and I be a set of possible indexes. Then, an Indexed−ctl∗\x formula can be defined as:

φ(i) ::= p(i) | φ(i) ∧ φ(i) | ¬φ(i) | AΦ(i) | ⋀_i φ(i)
Φ(i) ::= φ(i) | Φ(i) ∧ Φ(i) | ¬Φ(i) | Φ(i) U Φ(i)

where i ∈ I and p ∈ AP. φ(i) (resp. Φ(i)) defines the set of so-called state-formulas (resp. path-formulas) that contain at least an atomic proposition indexed by i. Formulas without path quantifiers (A and E) form ltl\x, a subset of Indexed−ctl∗\x.
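The prefix relations π ≤ π′ on paths and Π ≤ Π′ on sets of paths, which underpin the definitions of hypersafety and hyperliveness, can be sketched over finite path approximations as follows (the encoding is ours; infinite paths are, of course, not directly representable):

```python
def is_prefix(pi, pi_prime):
    """pi <= pi' iff pi equals an initial segment of pi'."""
    return len(pi) <= len(pi_prime) and tuple(pi_prime[: len(pi)]) == tuple(pi)

def set_prefix(Pi, Pi_prime):
    """Pi <= Pi' iff every path in Pi is a prefix of some path in Pi'.
    Note that Pi' may contain paths that have no prefix in Pi."""
    return all(any(is_prefix(p, q) for q in Pi_prime) for p in Pi)

print(is_prefix(("a", "b"), ("a", "b", "c")))      # → True
print(set_prefix([("a",)], [("a", "b"), ("c",)]))  # → True
print(set_prefix([("b",)], [("a", "b")]))          # → False
```

A hypersafety violation is then exactly a finite "bad" set of prefixes M with M ≤ T that no extension T′ can repair.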

Furthermore, Indexed−ctl∗\x can be extended with the usual derived operators:

φ1 ∨ φ2 ≡ ¬(¬φ1 ∧ ¬φ2)
⋁_i φ1(i) ≡ ¬⋀_i ¬φ1(i)
Φ1 ∨ Φ2 ≡ ¬(¬Φ1 ∧ ¬Φ2)
EΦ1 ≡ ¬A¬Φ1 (for some path, Φ1)
FΦ1 ≡ true U Φ1 (in the future, Φ1)
GΦ1 ≡ ¬F¬Φ1 (in all states, Φ1).

In the following, the semantics of Indexed−ctl∗\x formulas is provided w.r.t. the temporal structure introduced in Definition 4. Conventionally, AP = S_{U1} ∪ · · · ∪ S_{Uk}.

Definition 8 (Semantics of Indexed−ctl∗\x). Let P be a system, S be its set of states, and Πω be its set of infinite paths. Let s ∈ S (resp. π ∈ Πω). Let φ be a state formula (resp. Φ be a path formula). Then P, s ⊨ φ (resp. P, π ⊨ Φ) denotes that φ is true w.r.t. the state s (resp. Φ is true w.r.t. the path π) in the temporal structure of P, if and only if:
• P, s ⊨ p(i) iff s = (s_1^1, ..., s_k^(nk)) and ∃l. (p ∈ S_{Ul} and s_{i_l} = p);
• P, s ⊨ φ1 ∧ φ2 iff P, s ⊨ φ1 and P, s ⊨ φ2;
• P, s ⊨ ¬φ1 iff P, s ⊭ φ1;
• P, s ⊨ AΦ1 iff ∀π : π[0] = s. P, π ⊨ Φ1;
• P, s ⊨ ⋀_i φ1(i) iff ∀i ∈ I. P, s ⊨ φ1(i);
• P, π ⊨ φ1 iff P, π[0] ⊨ φ1;
• P, π ⊨ Φ1 ∧ Φ2 iff P, π ⊨ Φ1 and P, π ⊨ Φ2;
• P, π ⊨ ¬Φ1 iff P, π ⊭ Φ1;
• P, π ⊨ Φ1 U Φ2 iff ∃k. P, π[k..] ⊨ Φ2 and ∀j : j < k. P, π[j..] ⊨ Φ1.

Intuitively, the semantics of each state-formula denotes a set of sets (one for each state s) of paths (all the paths π such that π[0] = s). As a consequence, it can be noted that models for state-formulas are hyperproperties.

4. Grid system verification: a theoretical framework

A grid system may have some regularities that can be exploited in order to make its verification efficient. For the sake of self-containment of the paper, this section reports some novel theoretical results that can be useful for formal verification. The first result regards the transformation of safety hyperproperties into simple safety properties; the second result regards concurrent systems with multiple process instances; the third result regards concurrent systems with some independent processes.

4.1. Transformation theorems

The verification of parameterized systems is known to be in general undecidable [8]. Nevertheless, under certain conditions, the verification of a system with an arbitrary number of instances can be reduced to the verification of a system with a pre-defined number of instances. This number is called the cutoff of the system. The approach presented in this paper is based on the work by Emerson and Kahlon [10], where a Cutoff Theorem that holds for systems complying with Definition 3 is presented; for the sake of self-containment, such a theorem is reported here. In the following, the notation (n1, ..., nk) ≼ (m1, ..., mk) denotes the pair-wise less-than-or-equal operator.

Theorem 2 (Emerson and Kahlon’s Single Cutoff Lemma [10]). Let ⟨U1(n1), ..., Uk(nk)⟩ be an Emerson and Kahlon’s System (Definition 3). Let φ be ⋀_i Af(i) or ⋀_i Ef(i), where i = (i1, ..., ih) is a set of indexes such that for each il ∈ i, 1 ≤ il ≤ nl, and f is an ltl\x formula. Then ⟨U1(n1), ..., Uk(nk)⟩ ⊨ φ for all (n1, ..., nk) ≽ (1, ..., 1) if and only if ⟨U1(c1), ..., Uk(ck)⟩ ⊨ φ, where the cutoff (c1, ..., ck) is given by c_il = |U_il| + 2 if il ∈ i, and c_il = |U_il| + 1 otherwise.

Clarkson and Schneider [13] proved another result that is used in this work; intuitively, it means that a product construction can be used to transform the problem of verifying a k-hyperproperty into the problem of verifying a simple safety property.

Theorem 3 (k-Hypersafety Transformation Theorem [13]). For each k-hypersafety k-SH, there exists a 1-hypersafety 1-SH such that P ⊨ k-SH iff P(k) ⊨ 1-SH, where P(k) denotes the k-fold self-composition of P.

The first result of this paper regards the transformation of safety hyperproperties, which are an important part of hyperproperties, into simple safety properties.

Theorem 4 (Hypersafety Transformation Theorem). Let φ be a safety hyperproperty and U be an LTS definition. Let φ̂ be a safety property built from φ. Then ⟨U⟩ ⊨ φ iff ∀k. ⟨U(k)⟩ ⊨ φ̂.

Proof. ⟨U⟩ ⊨ φ iff ∀k. ⟨U⟩ ⊨ k-φ, where k-φ is a k-safety hyperproperty (according to Clarkson and Schneider [13]), iff ∀k. ⟨U(k)⟩ ⊨ φ̂ (Theorem 3). The way in which φ̂ is built from φ is reported in the work of Clarkson and Schneider [13]. □

The second result regards concurrent systems with multiple process instances, where an arbitrary number of processes, called clients, ask other processes, called servers, for services, requesting single instances. This situation can be analyzed with a corollary of Emerson and Kahlon’s Single Cutoff Lemma (Theorem 2); this clearly holds when some nq = 1.

Corollary 5 (Client–Server Single Cutoff Lemma). Let ⟨U1(n1), ..., Uk(nk)⟩ be an Emerson and Kahlon’s System, and let h be an ltl\x formula over a single instance of definition p. Then, for all n1, ..., nk and p:

⟨U1(n1), ..., Uk(nk)⟩ ⊨ Eh(1p) iff ⟨U1(n′1), ..., Uk(n′k)⟩ ⊨ Eh(1p)

where n′p = min(np, |Up| + 2) and, for q ≠ p, n′q = min(nq, |Uq| + 1).

Proof. By the Truncation Lemma [10], together with Theorem 2, the symmetry of the system and the duality between the A and E operators prove the corollary. □

The third result regards concurrent systems with some independent processes. Let us introduce the notion of dependency/independency between definitions as follows:
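The cutoff of Theorem 2 is straightforward to compute once the state counts |U_l| are known. The helper below (the names are ours, for illustration only) returns the cutoff vector, using |U_l| + 2 for definitions referenced by the formula's index vector and |U_l| + 1 for the others.

```python
def cutoff_vector(sizes, indexed):
    """Cutoff of the Single Cutoff Lemma (Theorem 2): for each
    definition U_l, c_l = |U_l| + 2 if some index of the formula ranges
    over instances of U_l, and c_l = |U_l| + 1 otherwise.  `sizes[l]`
    is the number of states |U_l|; `indexed` collects the positions of
    the definitions named by the formula's index vector."""
    return [n + 2 if l in indexed else n + 1 for l, n in enumerate(sizes)]

# Two definitions with 3 and 4 states, where the formula indexes only
# instances of the first definition:
print(cutoff_vector([3, 4], {0}))  # → [5, 5]
```

Note how small the resulting finite systems are: the reduction depends only on the per-definition state counts, never on the (arbitrary) instance counts n1, ..., nk.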

816 F. Pagliarecci et al. / Future Generation Computer Systems 29 (2013) 811–827

Definition 9 (Dependency). Let ⟨U1^(n1), ..., Uk^(nk)⟩ be a Concurrent System with Disjunctive Guards (Definition 3).

• (Self-dependency) Ul depends on itself (denoted by Ul → Ul) if and only if there exists a guard g ∈ Σl where g ≡ ⋁_{r≠i} G_l^r ∨ ⋁_{j≠l} ⋁_{1≤k≤nj} G_j^k and such that G_l^r(g) ≠ T_l.
• (Outer-dependency) Ul depends on Uj (denoted by Ul → Uj) if and only if
  – either there exists g ∈ Σl where g ≡ ⋁_{r≠i} G_l^r ∨ ⋁_{j≠l} ⋁_{1≤k≤nj} G_j^k and such that G_j^k(g) ≠ T_j,
  – or there exists Ut such that Ul → Ut and Ut → Uj (transitivity).

Intuitively, Ul depends on itself (resp. on Uj) when for some i there exists r (resp. k) such that the behavior of Ul^i depends on the behavior of Ul^r (resp. of Uj^k): the state where instance Ul^r (resp. Uj^k) currently is affects the behavior of Ul^i (a transition that can be either allowed or not). Independency (denoted by Ul ̸→ Ul and Ul ̸→ Uj, respectively) means that Ul does not depend on Ul or Uj. It should be noted that Ul → Uj does not usually imply that Uj → Ul; notice, however, that it may still be the case that Uj → Ul.

Theorem 6 (Independent Process Theorem). Let ⟨U1^(n1), ..., Uk^(nk)⟩ be a Concurrent System with Disjunctive Guards (Definition 3). Let Uh ̸→ Uk for each h < k. Let φ(il) be Af(il) or Ef(il), where 1 ≤ l < k and f is an ltl \ x path formula. Then

∀(n1, ..., nk) ≽ (1, ..., 1). ⟨U1^(n1), ..., Uk^(nk)⟩ ⊨ φ(il)  iff  ⟨U1^(n1), ..., U_{k−1}^(n_{k−1})⟩ ⊨ φ(il).

Proof. (Case ⇒) Notice that each path π contains transitions of the following form:

(s1^1, ..., s1^(n1), ..., sk^1, ..., sk^(nk)) −g→ (s′1^1, ..., s′1^(n1), ..., s′k^1, ..., s′k^(nk))    (1)

Transition (1) means that there exist u, h such that s_h^u −g→ s′_h^u and s_j^v = s′_j^v for each (v, j) ≠ (u, h). Every time a transition such as Transition (1) with h = k occurs in path π, only an instance of Uk moves; since f does not contain states of Uk, transition (1) can be substituted by the single state (s1^1, ..., s1^(n1), ..., s_{k−1}^1, ..., s_{k−1}^(n_{k−1})).² Every time a transition such as Transition (1) with h < k occurs in path π, taking into account that g does not depend on the states of Uk (Uh ̸→ Uk implies that every disjunct G_k^r of g is trivial, i.e., G_k^r(g) = T_k), transition (1) can be substituted by the following transition:

(s1^1, ..., s1^(n1), ..., s_{k−1}^1, ..., s_{k−1}^(n_{k−1})) −g→ (s′1^1, ..., s′1^(n1), ..., s′_{k−1}^1, ..., s′_{k−1}^(n_{k−1}))    (2)

Applying the two rules above to each transition in π, it is possible to obtain a path π′ of ⟨U1^(n1), ..., U_{k−1}^(n_{k−1})⟩; using the block bisimulation² introduced by Emerson and Namjoshi [9] (for this reason f must be an ltl \ x formula) and taking into account that f does not contain states of Uk, it is possible to prove that π ⊨ f(il) iff π′ ⊨ f(il). At this point, ∀π. π ⊨ f(il) (by Definition 8) implies ∀π′. π′ ⊨ f(il), i.e., ⟨U1^(n1), ..., U_{k−1}^(n_{k−1})⟩ ⊨ Af(il).

(Case ⇐) Conversely, every path π′ of ⟨U1^(n1), ..., U_{k−1}^(n_{k−1})⟩ can be lifted to a path π of ⟨U1^(n1), ..., Uk^(nk)⟩ by inverting the two rules above: the states of the Uk instances are re-inserted, and transitions of Uk instances correspond to stuttering steps of π′. In this case also, using the block bisimulation and taking into account that f does not contain states of Uk, it is possible to prove that ∀π′. π′ ⊨ f(il) implies ∀π. π ⊨ f(il), i.e., ⟨U1^(n1), ..., Uk^(nk)⟩ ⊨ Af(il).

The case for Ef(il) can be derived by the equivalence Ef(il) ≡ ¬A¬f(il). □

² A sequence of states that do not change is equivalent to a single state.

Now, in certain circumstances, it can be useful to group in a different way LTS instances belonging to the same definition. This is possible thanks to the following result:

Theorem 7 (Splitting Theorem). Let ⟨U1^(n1), ..., Uk^(nk)⟩ be a Concurrent System with Disjunctive Guards (Definition 3). Let U′1 be an LTS definition such that Σ′1 = {g | g ≡ ⋁_{r≠i} G_1^r ∨ ⋁_{j≠1} ⋁_{1≤k≤nj} G_j^k}. Then, for every φ as above,

∀(n1, ..., nk) ≽ (1, ..., 1), ∀p. 1 ≤ p ≤ n1:
⟨U1^(n1), U2^(n2), ..., Uk^(nk)⟩ ⊨ φ  iff  ⟨U1^(p), U′1^(n1−p), U2^(n2), ..., Uk^(nk)⟩ ⊨ φ.
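The dependency relation of Definition 9 can be computed mechanically. The sketch below (not from the paper; the guard representation is a deliberately simplified assumption) summarizes each definition's disjunctive guards by the set of definitions they constrain non-trivially, and closes that set under transitivity, matching the outer-dependency rule. A definition Uk to which Theorem 6 applies is one that appears in nobody's resulting set.

```python
def dependencies(guard_refs):
    """guard_refs maps a definition name to the set of definition names
    its disjunctive guards mention non-trivially (i.e., G_j^k != T_j).
    Returns the transitively closed dependency relation of Definition 9."""
    dep = {l: set(refs) for l, refs in guard_refs.items()}
    changed = True
    while changed:                       # fixpoint: add transitive edges
        changed = False
        for l in dep:
            for t in list(dep[l]):
                new = dep.get(t, set()) - dep[l]
                if new:                  # Ul -> Ut and Ut -> Uj gives Ul -> Uj
                    dep[l] |= new
                    changed = True
    return dep

# Toy system: U1's guards mention U2, U2's mention U3, U3's mention nothing.
deps = dependencies({"U1": {"U2"}, "U2": {"U3"}, "U3": set()})
assert deps["U1"] == {"U2", "U3"}        # U1 -> U3 holds by transitivity
# No definition depends on U1 here, so Theorem 6 could remove U1's instances
# from a property not mentioning U1 (after reordering the definitions).
```

The fixpoint loop is a plain transitive closure; on the small systems of this paper its cost is negligible.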

Now. . belonging to the same definition. . Pagliarecci et al. . . Honest Actor. . Then. →H . !m denotes the action of sending m. Grid system verification: a methodological framework The theoretical results obtained in Section 4 together with the results obtained by other authors and briefly reported in Section 3 allow for the definition of a methodology in order to verify. can be split in two sets: 1 U1 . n1 instances of U1 nk instances of Uk This means that instance states can be also grouped as: 1 n n p p+1 1 s1 . ΣH . Such information can be obtained by analyzing system technical specifications and/or inspecting its code. such as. Before going into the details of what an honest actor is. s1 . for each actor type there may be several instances. SH0 ⟩ . let A be the A-LTS defined over L and M and let D be the D-LTS defined over D . . . 1 Intuitively.1. . (2) the security specification can be denoted by an Indexed − ctl∗ \x formula. each state of a given D-LTS represents a possible value of the given datatype. . where it is a standard practice and is based on the so called Dolev–Yao model of intruder [41]. more instances may be the consequence of multiple parameters. . . . . such definitions can be rewritten with a single index. sk k . .j. . In other words. Let M = {m1 . let us introduce two auxiliary concepts: Data Labeled Transition Systems (Data LTS or D-LTS) and Action Labeled Transition Systems (Action LTS or A-LTS). s1 . the Action LTS (A-LTS. coherently. . .. . for short) A built upon L and M is an Emerson and Kahlon’s LTS (Definition 3) such that: ≡ Ul C (i. . Let M = {m1 . more jobs to manage (when the actor is a computing element). the Data LTS (D-LTS. a grid system is composed of • SA0 = {sa0 } where sa0 is the initial state. actor synchronization is therefore obtained by means of message passing. dn } be a finite set representing the domain of some datatype. 5. . . 
different instances may need to be linked among them and this can be done by means of multiple indexes. Furthermore. let Cn. Proof. . 4. U1 and U1 . Notice that indexes of guards in all LTS definitions must be adapted. i. n1 . . j) = (i − 1)m + j. . LTS definitions for grid systems may have multiple indexes. . Definition 12 (Honest Actor). sk .k)) and so on.C (j. s1 . . For each state of the system. Thanks to any bijective pairing function. Definition 10 (Data LTS). . .m : [1 · · · n] × [1 · · · m] → [1 · · · n · m] be: Cn. Modeling The first step consists in modeling the grid system as a Concurrent System with Disjunctive Guards with the presence of possible multiple indexes. The presence of more instances for the same actor is a consequence of job migration (when the actor is a grid job). . the aim of D-LTS is to represent the data stored in the actor’s memory as an Emerson and Kahlon’s System. Indeed. a grid system must be modeled as a set of honest actors and an intruder. Then. . lm } be a finite set of local actions. . As a consequence. mk } be a finite set of messages. Then. . Definition 11 (Action LTS). The resulting methodology is reported in Fig. . it is useful to model a grid job that has a malicious or defective behavior. . the aim of an Action LTS is to represent the behavior of a given actor that can execute actions enumerated in L and exchange messages enumerated in M . . . U1 . how many instances of a given actor type are running is not known a priori. Moreover. / Future Generation Computer Systems 29 (2013) 811–827 817 Fig. Finally. grid system security. sk k . the state is a tuple of instance states. under certain conditions. . or multiple requests for the same resource (when the actor is a resource manager). . whereas ?m denotes the action of receiving a message m. dm } be a finite domain. . Let L = {l1 . . s?m | m ∈ M} ∪ SA0 . . 
Modeling an honest actor means that the model must be able to represent all the legal actions the actor can perform in the right order. . This allows the verification of security requirements even when the worst conditions occur. . 4 and consists of six steps. . . . for short) D built upon D is an Emerson and Kahlon’s LTS (Definition 3) such that: The fact that guards are only in disjunctive form guarantees that a different grouping does not change the truth of φ . . . In all the above cases. Ul ≡ UlC (i. The result can be extended to a i.j Ul p p+1 n p p+1 • SD0 = {sd0 } where sd0 is the initial state. . U1 1 . s11 . an Honest Actor is the Emerson and Kahlon’s LTS: H = ⟨SH . j)). . Indeed. Let D = {d1 . The order in which actions are executed is represented by the transition relation →A . U1 .k generic tuple of indexes. . . . (3) The proof is trivial and is omitted.F. s11 . ln } be a finite set of local actions. 5. mk } be a finite set of messages. . . U1 . then multiplex indexes are not a problem. sk . C (i. s1 .g. . This approach is borrowed from a formal verification of security protocols. . . . this means that a set of instances U1 . . . . • SA = {sl | l ∈ L} ∪ {s!m . . Furthermore. p instances of U1 n1 −p instances of U1 nk instances of Uk a certain number of actor types and. Intuitively. These are the following conditions: (1) the grid system can be modeled as a Concurrent System with Disjunctive Guards with the presence of possible multiple indexes.e. then i . Let L = {l1 . modeling an honest actor can be considered as a parameterized problem. . Provided that it is possible to find bijective pairing functions (e. the system state has the form: 1 n n p p+1 1 s1 . . . . . . Furthermore. Intuitively. .j) . . it is possible to define an honest actor as the composition of its behavior (an A-LTS) and its data (a D-LTS).m (i. • SD = {sd | d ∈ D } ∪ SD0 . Notice that. From all the above considerations. Let D = {d1 . Verification methodology.
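The pairing trick used above to collapse multiple indexes into one can be sketched directly. The code below (illustrative; names are not from the paper) implements C_{n,m}(i, j) = (i − 1)·m + j together with its inverse, and checks bijectivity on a small index range.

```python
def pair(i, j, m):
    """C_{n,m}(i, j) = (i - 1) * m + j, mapping [1..n] x [1..m] to [1..n*m]."""
    return (i - 1) * m + j

def unpair(c, m):
    """Inverse of pair: recover (i, j) from the single index c."""
    return (c - 1) // m + 1, (c - 1) % m + 1

n, m = 3, 4
codes = {pair(i, j, m) for i in range(1, n + 1) for j in range(1, m + 1)}
assert codes == set(range(1, n * m + 1))      # bijective onto [1 .. n*m]
assert all(unpair(pair(i, j, m), m) == (i, j)
           for i in range(1, n + 1) for j in range(1, m + 1))
```

Triple indexes are handled exactly as in the text, by nesting: C(i, C(j, k)).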

Here SH0 = {(sA0, sD0)}, and SH, ΣH, and →H are the smallest sets satisfying the constraints below. Usually, all the actions are of the form a(p, q), where p ∈ D is the input parameter and q ∈ D is the output parameter. Intuitively, an input parameter denotes the data that an actor needs in order to execute the related action, whereas an output parameter denotes the data that an actor obtains after the execution of the related action. The fragment of automaton for an action a(p, q) represents the fact that when an instance of parameter p is available (represented by the state (sa, sp)), the honest actor starts the execution of action a, which ends by returning an instance of parameter q (represented by the state (sa, sq)).

(local) For each local action l(p, q) admitted in the system, where l ∈ L, p ∈ D is its input parameter, and q ∈ D is its output parameter:
• SH ⊇ {(sl, sp), (sl, sq) | sl ∈ SA, sp, sq ∈ SD}
• ΣH ⊇ {T}
• if p ≠ q, then →H ⊇ {(sl, sp) −T→ (sl, sq) | sl ∈ SA, sp, sq ∈ SD}; when a local action does not produce any output (i.e., p = q), no transition is inserted in →H.

(send) For each action of sending a message m ∈ M (denoted by !m(p, q), or !m(p) for short) to another actor U, let j be an index ranging over all the instances of U. Then
• SH ⊇ {(s!m0, sp), (s!ms, sp), (s!me, sp) | s!m ∈ SA, sp ∈ SD}
• ΣH ⊇ {T, ⋁j (s?me^j, sd^j)}
• →H ⊇ {(s!m0, sp) −T→ (s!ms, sp), (s!ms, sp) −⋁j (s?me^j, sd^j)→ (s!me, sp) | s!m ∈ SA, sp ∈ SD, (s?me^j, sd^j) ∈ SU^j}

It should be noted that the action of sending a message does not change the data of the sender actor (i.e., the condition p = q holds). The action of sending a message for an actor H must be synchronized with the action of receiving a message of a given actor U^j; this is represented by the guard ⋁j (s?me^j, sd^j) for the sender and, symmetrically, by the guard ⋁j (s!ms^j, sd^j) for the receiver.

(receive) For each action of receiving a message m ∈ M (denoted by ?m(p, q)) from another actor U, let j be an index ranging over all the instances of U. Then
• SH ⊇ {(s?m0, sp), (s?mr, sp), (s?me, sq) | s?m ∈ SA, sp, sq ∈ SD}
• ΣH ⊇ {T, ⋁j (s!ms^j, sd^j)}
• →H ⊇ {(s?m0, sp) −T→ (s?mr, sp), (s?mr, sp) −⋁j (s!ms^j, sd^j)→ (s?me, sq) | s?m ∈ SA, sp, sq ∈ SD, (s!ms^j, sd^j) ∈ SU^j}

(sequence) For each sequence of actions a1(p, d), a2(d, q) (i.e., sa1 −→ sa2 ∈ →A), where d represents the output of a1 that must be used as input for a2:
• →H ⊇ {(sa1, sd) −T→ (sa2, sd) | (sa1, sd), (sa2, sd) ∈ SH}
When a1 = !m (resp. a2 = !m), then (sa1, sd) = (s!me, sd) (resp. (sa2, sd) = (s!m0, sd)). The case for ?m is similar.

(initial) For each initial action a(p, q) (i.e., sA0 −→ sa ∈ →A):
• →H ⊇ {(sA0, sD0) −T→ (sa, sp)}
When a = !m, then (sa, sp) = (s!m0, sp). The case for ?m is similar.

For example, let us consider the chirp_fetch process in the Condor System [42]. This process receives file access requests from other processes, checks who the owner is, and eventually performs the request. In this example, the set of actions consists of opening, reading, and closing a file; the set of messages consists of the related requests; the domain consists of all the users. Formally:

L = {open, read, close},
M = {open_req, read_req, close_req},
D = {condor, user1, ..., usern}.    (4)

The A-LTS can be built according to Definition 11:

SA = {sopen, sread, sclose, s?open_req, s?read_req, s?close_req} ∪ {sA0},    (5)

and the transition relation →A is reported in Fig. 5. The D-LTS can be built according to Definition 10: SD = {scondor, suser1, ..., susern} ∪ {sD0}. It should be noted that all the actions used in the example do not produce any output. The honest actor can thus be obtained by applying Definition 12, and the result (limited to two users) is reported in Fig. 6.

Fig. 5. chirp_fetch behavior.

Intruder. The security of a system should be checked against all possible hostile processes. In the methodology proposed in this paper, this problem is overcome by analyzing the case where only the most powerful intruder is considered [41]. This means that a formal model of intruder must be able to choose, at each step, the next action to perform in a nondeterministic way in order to try all possible combinations. In other words, an intruder must be able to (1) perform any operation the system allows it, (2) eavesdrop any message, and (3) store and reuse any information it can access/eavesdrop/encrypt or decrypt with the proper key. On the other hand, it can neither perform an operation nor access a resource if it does not have the authorization, and it cannot decrypt a piece of data without the proper decryption key (the so-called perfect encryption assumption). The above assumptions represent an adaptation to the case of Grid computing of the Dolev–Yao assumptions [41], adopted in security protocol verification [43]. The Dolev–Yao model represents one of the most general attacker models for protocols, but there is not a unique way to generalize it to systems with a more complex behavior than message delivery [40]. As for honest actors, an intruder can be modeled as the composition of its behavior (an A-LTS) and its data (a D-LTS).
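The product construction behind Definition 12 can be made concrete with a few lines of code. The sketch below is illustrative only (state names and the simplified `local_step` rule are assumptions, not the paper's full construction): it pairs every behavioral state of an A-LTS with every data state of a D-LTS and shows that a local action with p = q leaves the data component untouched.

```python
from itertools import product

# Behavior (A-LTS) and data (D-LTS) states for a toy chirp-like actor.
A_STATES = {"sA0", "s_open", "s_read", "s_close"}
D_STATES = {"sD0", "d_condor", "d_user1"}

# Honest-actor states live in the product; the unique initial state
# pairs the two initial states (sA0, sD0).
H_STATES = set(product(A_STATES, D_STATES))
H_INIT = ("sA0", "sD0")

def local_step(state, next_action):
    """Simplified (local)/(sequence) rule for an action with p == q:
    the behavioral component advances, the data component is unchanged."""
    _, d = state
    return (next_action, d)

assert H_INIT in H_STATES
assert len(H_STATES) == len(A_STATES) * len(D_STATES)   # 4 * 3 = 12
assert local_step(H_INIT, "s_open") == ("s_open", "sD0")
```

Actions with p ≠ q would additionally move the data component from s_p to s_q, as in the (local) rule of Definition 12.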

(sl . and several important security requirements can be modeled as hypersafety. Indexed − ctl∗ \x has been chosen. q)) from another actor U . The consequence of all the above considerations is that the methodology proposed in this paper allows for the verification of several interesting requirements such as access control [44]. sp ). sp ) | (sA0 . Basically. sq ) | sl ∈ SA . sp ) − • →I ⊇ {(sA0 .2. (s!me . For this work. 6. observational determinism [46]. Furthermore. sq ) | (sA0 . For example. sp ) − → (sl . which is able to capture a large set of hyperproperties. sp ) ∈ SI } (send) For each action of sending a message m ∈ M (denoted by !m(p. sp ) − → (sA0 . Let L. an Intruder is an Emerson and Kahlon’s LTS: I = ⟨SI . sp ) − (sA0 . (s!me . (sl . sq ) − → (sA0 . sq ) | s?m ∈ SA . sq ) ∈ SI } else → (sl . the kind of hyperproperties modeled by Indexed − ctl∗ \x is as wide as possible. (sl . (s?mr . sjd )} → (s!ms . respectively. (s?mr . SI . Furthermore. chirp_fetch honest actor model. sp ) | s!m ∈ SA . This allows the intruder to perform actions in no predefined order. sp . sp ) − → (sA0 . all the hyperproperties that contain a set of traces starting from the same initial state. sd ) | d ∈ D }. and a domain. respectively. The kind of LTSs used in this work have only one initial state.F. (sA0 . sd ) after the execution of each action. as in Definition 12. the resulting intruder model would be the one as reported in Fig. (4) and (5)). (sl . Let j be an index ranging over all the instances of U . (sA0 . sp ). and q ∈ D is its output parameter. Then. Therefore. p ∈ D is its input parameter. as well. most of the requirements can be classified into safety hyperproperties and liveness hyperproperties. sp ∈ SD } • ΣI ⊇ {T . sp ). sq ). Definition 13 (Intruder). sp ) − − − − − − → (s!me . Then T T • SI ⊇ {(sA0 . sp ).sd ) • SI ⊇ {(sA0 . Let A and D be an A-LTS and a D-LTS. D be a set of local actions. 7. sp ). sp ) − − − − − − → (s?me . As a consequence. q). (s!ms . 
sp ) − (sl . sq ) − → →I ⊇ {(sA0 . and secret sharing [47]. sp ) − → (s?mr . sq ). the intruder model can be considered as a parameterized model. Pagliarecci et al. (sl . sp ). where p ∈ D is its input parameter. (s?me . sp ). According to Clarkson and Schneider [13] and Schneider [40]. T Fig. exchanged messages. sp ∈ S D } • ΣI ⊇ {T . sp ). all the safety hyperproperties can be verified by means of a parameterized model. where p ∈ D is its input parameter. if one wanted to define an intruder based on the chirp_fetch process described above (see Eqs. sp ). j (sj?me . sq ). (s!ms . 5. (sl . defined over them. Then T T j j j (s?me . sD0 )}. sq ∈ SD } • ΣI ⊇ {T } • if p ̸= q. (sA0 . ΣI . sd ) | d ∈ D } • →I ⊇ {(sA0 . Goguen and Meseguer’s noninterference [45]. p) or !m(p) for short) to another actor U . sjd )} • →I ⊇ {(sA0 . ΣI . sD0 ) − → (sA0 . sq ). where l ∈ L. in fact. sp ) | s!m ∈ SA . sd ) ∈ SU } (receive) For each action of receiving a message m ∈ M (denoted by ?m(p. sp ). sp ). (s?me . →I ⊇ {(sA0 . then T T T → (sl . Specifying The second step consists in specifying those requirements that the grid system must satisfy in order to be considered safe and secure. sp ). sq ) | s?m ∈ SA . reasons similar to those for honest actors. sp ). / Future Generation Computer Systems 29 (2013) 811–827 819 • SI ⊇ {(sA0 . sq ). and q ∈ D is its output parameter: . Let j be an index ranging over all the instances of U . j (sj!ms . (sl . M . (s!ms . sp ∈ SD . (s?me . sq ∈ SD . sd ) ∈ SU } (initial) For each d ∈ D : T T j j j (s!ms .sd ) • SI ⊇ {(sA0 . SI0 ⟩ where SI0 = {(sA0 . as a consequence of Theorem 4. an intruder can be modeled as the composition of its behavior (an A-LTS) and its data (a D-LTS). the main difference between the honest actor model and the intruder model consists in a transition toward one of the states of the form (sA0 . →I . sp . sp ). and →I are the smallest sets satisfying the following constraints: (local) For each local action l(p.
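The distinctive "hub" shape of the intruder in Definition 13 — return to a state of the form (sA0, sd) after every action, so that actions can be tried in any order — is easy to sketch. The code below is illustrative (the state names are made up); it builds exactly that transition shape and checks that every action is reachable from the hub and leads back to it.

```python
def intruder_transitions(action_states, hub="sA0"):
    """Build the hub-and-spoke transition relation of a Dolev-Yao-style
    intruder: from the hub it may start any action, and every action
    returns to the hub, so all interleavings of actions are explored."""
    trans = set()
    for a in action_states:
        trans.add((hub, a))     # nondeterministically pick the next action
        trans.add((a, hub))     # ...then return to the hub
    return trans

t = intruder_transitions({"s_open", "s_read", "s_close"})
assert ("sA0", "s_read") in t and ("s_read", "sA0") in t
assert len(t) == 6              # two transitions per action
```

By contrast, an honest actor's transitions follow the fixed order of its →A relation; this is the "main difference" noted in the text.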

Condor incorporates many of the emerging Grid-based methodologies and protocols [11]. . it starts the shadow process to manage the job execution. 5. as shown in Fig. the schedd represents a job queue. This means looking for LTS definitions so that a subset of their instances does not depend on the rest of the instances. This vulnerability leads to a privilege escalation.54]. Model checking The sixth step consists in verifying the system as an outcome of the previous steps. which takes care of finding the correct machine type and running the job. interact with the Condor matchmaker. Condor processes running on that computer. Condor starts the submitted job on an available grid machine. This means computing the cutoff of the system and.1. thus. For instance. In order to run a job submitted to Condor.4. The verification technique used in the proposed methodology is model checking [48]. Condor manages the jobs for a Grid computing system [49].820 F. Condor sends a notification to the users. Finally. the shadow represents a job on the Submission machine that has been matched with an Execution Fig. It places the job in a queue until the required resources are available. Each component runs some processes. In the Submission machine. i. chirp_fetch intruder model. 7. simplifying the verification problem. Liveness : i A(φ(i) → Fψ(i)) where φ(i) must have as a consequence that ψ(i) will eventually occur. This means looking for LTS definitions on which no other definition depends. reducing the original problem to the problem of verifying the behavior of a small set of LTS instances. Finally.5. Jobs are prepared and submitted to Condor. Splitting The third step consists in applying Theorem 7. Reducing The fifth step consists in applying the Reduction Theorem 2 and Corollary 5. It also takes care of switching the job to another machine if necessary and sharing available resources with other jobs in the queue. 
This daemon records a job event and manages all of its data file during its lifetime.3. Actually. This choice was inspired by a known vulnerability [50] that involved Condor and the protocol used to access files on remote hosts.e. Pagliarecci et al. files and the program environment. all its instances can be removed. there are interactions between three components. two popular examples of safety and liveness hyperproperties are reported below. to a host security violation. If such a definition exists. 6. / Future Generation Computer Systems 29 (2013) 811–827 5. Condor-G [52] is fully interoperable with resources managed by Globus [53. 5. including command line arguments. its instances can be split into two groups (the first one independent of the second one). In Condor [42]. a job manager for Grid system. If such a definition exists.. Users do not submit to global queues because Condor has a decentralized model where users submit to a local queue on their computer and. Notice that this step together with the previous one allow for the removal of a part of the instances of a given definition. For the convenience of the reader. 5. the Execution machine (E ) and the Central Manager (CM). The Condor system Condor [51] is a software suite to manage system workload that has been developed at the University of Wisconsin. 8: the Submission machine (S ). When the job completes. Removing The fourth step consists in applying the Independent Processes Theorem 6. a brief description of the individual processes is presented. a job is a program to be executed along with a description of how to start the program. Case study This section presents the application of the methodology presented in Section 5 to Condor [49].6. In particular. 6. Notice that each step is sound and complete according to the theorems that have been applied. the verification of the resulting system is equivalent to the verification of the original system. Indeed. 
Safety : i AG¬φ(i) where φ(i) represents a ‘‘bad’’ state. Therefore.
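A safety requirement of the form AG ¬φ can be decided on a finite model by plain forward reachability: the property holds iff no reachable state satisfies φ. The sketch below (a toy explicit-state checker, not the symbolic algorithm of NuSMV used later in the paper) illustrates this on a hypothetical four-state system.

```python
from collections import deque

def holds_AG_not(init, successors, bad):
    """Return True iff AG(not bad) holds from init, by BFS over the
    reachable state space of an explicit finite transition system."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if bad(s):
            return False          # a reachable bad state refutes AG(not phi)
        for s2 in successors(s):
            if s2 not in seen:
                seen.add(s2)
                frontier.append(s2)
    return True

# Toy model: 0 -> 1 -> 2 -> 0, with an isolated bad state 3.
succ = {0: [1], 1: [2], 2: [0], 3: [3]}
assert holds_AG_not(0, lambda s: succ[s], lambda s: s == 3)       # 3 unreachable
assert not holds_AG_not(0, lambda s: succ[s], lambda s: s == 2)   # 2 reachable
```

For the parameterized systems of this paper the point of the cutoff theorems is precisely that such a finite check, run on a small instance, answers the question for every instance size.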

To run a job, the schedd forks a shadow, which takes care of finding the correct machine type and running the job. This process transfers files, handles forwarded system calls, and updates the job status. In the Execution machine, the startd process records the host status and sets the starter process to manage the job execution. The starter represents a job on the Execution machine that is running or that has to be started by this machine. The Central Manager runs two daemon processes: the collector and the negotiator. The former is a repository of machine contexts (ClassAd) from all the servers in the system; these data are used to get the status of the system and to find idle execution hosts. The latter performs matchmaking between jobs and execution hosts.

The interaction protocol, depicted in Fig. 8, can be split in three phases: matchmaking, connecting, and starting. Matchmaking requires four steps. In the first step, the schedd process in the Submission machine sends the job context to the Central Manager, and the startd process sends the machine context to the Central Manager; these two steps, labeled 1a and 1b in Fig. 8, can be executed in any order or in parallel. In the second step, the matchmaker scans the known ClassAds (the job and machine contexts) and creates pairs that satisfy each other's constraints and preferences. In the third step, if a match is found, the matchmaker sends to the schedd an identification of a machine where the job can be executed; it informs the schedd containing the job and the startd, so the latter can run the job. In the connecting phase, the schedd forks a shadow. This process contacts the startd in the Execution machine and sends the job requirements. The startd checks to make sure that the job and machine ClassAds still match. This is useful because the matchmaker operates on information that may be out of date: the owner of the Execution machine may have reclaimed the machine for his/her own use after the matchmaker performed that match, and therefore no jobs would be able to run on that machine. If the startd finds that the job requirements match its machine context and the Execution machine is still idle, it sends an OK message containing an identification of the Execution machine to the shadow; if one of these conditions is not satisfied, it sends a not-OK message and the matchmaking and connecting phases are repeated. In the last phase, the startd process forks a starter. The shadow sends the job (executable code) directly to the starter, and it begins its execution (the starter forks the job).

Fig. 8. The Condor protocol.

6.2. Verification

First of all, let us make some remarks. For the sake of clarity, with respect to the description given in Section 5, a simplified version of honest actors and intruders will be used. Namely, the processes representing all possible inputs that an operation can receive will be abstracted away and coded into the structure of the process itself. From a graphical point of view, the rounded rectangles denote the states of each process, and the related labels are their names. The state with an incoming arrow from the special dot must be considered the initial state for the given process (see state init in Fig. 9). It should also be noted that state names can have special symbols like ! and ? as prefixes. Guards on transitions will be represented either as ⋁_{i∈proc_1} φ(i) or just as ⋁_i φ(i), where φ is an indexed boolean formula involving states of proc_1 or other definitions in the system. The former notation denotes that the transition considers only instances of the process whose name is proc_1. The latter notation is just a shortening for a longer guard, namely ⋁_{i∈proc_1} φ(i) ∨ · · · ∨ ⋁_{i∈proc_n} φ(i), provided that in the system instances of processes proc_1, ..., proc_n exist and their states are mentioned in φ. Intuitively, the guard ⋁_{i∈proc_1} s^i is considered satisfied any time some instance i of definition proc_1 is in state s.

The first process, named submit, is depicted in Fig. 9. It has a fairly simple logic: it is invoked by the user and receives the job to be executed together with some metadata.

Fig. 9. submit process.
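The indexed-guard notation just introduced expands, in any concrete system, to an explicit finite disjunction over the instances present. The sketch below (process and state names are illustrative) shows this expansion for a guard of the form ⋁_{i∈proc_1} s^i: the guard holds whenever some instance of proc_1 is in state s.

```python
def expand_guard(instances, phi):
    """Expand an indexed disjunctive guard over a fixed set of instances.
    Returns a predicate on global states (maps instance name -> local state)
    that is true when some listed instance satisfies phi."""
    return lambda state: any(phi(state[i]) for i in instances)

# Concrete system with three instances of proc_1; the guard asks whether
# some instance of proc_1 is currently in state "s".
g = expand_guard(["proc_1[1]", "proc_1[2]", "proc_1[3]"],
                 lambda local: local == "s")
assert g({"proc_1[1]": "init", "proc_1[2]": "s", "proc_1[3]": "init"})
assert not g({"proc_1[1]": "init", "proc_1[2]": "init", "proc_1[3]": "init"})
```

The disjunctive shape of these guards is exactly what Theorem 7 exploits: regrouping the instances over which the disjunction ranges cannot change its truth value.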

After the matchmaking phase, submit triggers the starter process on the (possibly remote) Execute host and, after that, triggers the shadow process on the (local) Submit host. After launching the job, it returns as soon as the grid job returns.

The starter process is responsible for launching the actual job and is represented in Fig. 10. As in the case of submit, it has a fairly simple logic: starter forwards to the underlying operating system any request it receives for pausing or restarting the job. In Fig. 11, the shadow process is depicted; the process handles an internal queue of processes that are sent from the local host to the grid.

In Fig. 17, the process chirp_fetch is showed. It models the invocation of command:

$condor_chirp fetch rfoo lfoo

that copies the file rfoo from the (possibly remote) Submit host onto the file lfoo on the local Execute machine. The model has been inferred inspecting the code of method chirp_get_one_file contained in file src/condor_chirp/condor_chirp.cpp of Condor 7.4.

The process chirp_set_owner is represented in Fig. 12. Basically, it describes the behavior of the command condor_chirp when it is invoked with parameter set_job_attr Owner, as in:

$condor_chirp set_job_attr Owner foo

where the attribute owner of the job is set to value foo. Following the Condor implementation of the Chirp protocol, such requests are forwarded via the chirp_set_owner process toward the local schedd process that takes care of them. Also the logic of chirp_set_owner is rather simple: after having forwarded the request together with its arguments to the schedd process and waited for a reply, it ends its execution.

Figs. 13–15 together represent the schedd process. In this model, the focus is put on how the set_job_attr Owner request is handled. The process depicted in the aforementioned figures has been extracted from the method SetAttribute in src/condor_schedd.V6/qmgmt.cpp of Condor 7.4, which actually handles the Chirp message ''set_job_attr Owner''. It can be seen that, after reading the name received as input, it is compared against the sock owner and the queue superuser. In case the configuration states that each user should be trusted (cfr. states trusted.sock-A and trusted.sock-B in Fig. 14 (resp. Fig. 15)), whatever name is received, it is set to be the new owner of the job (i.e., the state !chirp_op_end is reached). On the contrary, if no user should be trusted (cfr. states untrusted.sock-A and untrusted.sock-B in Fig. 14 (resp. Fig. 15)), the job owner is reset only if the received name is the same as the sock owner, or the username is that of another user who submitted a job into the system. In all the other cases, the ''set_job_attr Owner'' operation fails. A subprocess dual to the one depicted in Fig. 14 (resp. Fig. 15) begins from the corresponding trusted/untrusted states. The whole picture, which is not represented for the sake of space, is obtained by merging these states in one big process.

Finally, in Fig. 16, a simple process is shown representing a data structure which stores the owner attribute for a given job during its execution. Since it is not possible to represent all the infinite usernames that the owner attribute can assume, in this work an abstract model has been considered. In this abstraction, the attribute can be either set to A or B; those attribute values are described by means of the states ?owner_A and ?owner_B. This abstraction naturally describes the evolution of the above data structure:
• the non-determinism among the initial transitions expresses the fact that it is not possible to foresee which user this job will belong to;
• at any write operation, the value of the attribute can change or remain the same.

Fig. 12. chirp_set_owner process.
Fig. 13. schedd process (part 1).
Fig. 14. schedd process (part 2).

(Figs. 15–17: chirp_fetch process; schedd process, part 3; owner attribute for a grid job. Fig. 18: generic intruder intr.)

Every grid job is started by an instance of submit, which launches its own copies of starter and shadow; the latter is associated with its own attribute owner and may launch a copy of chirp_set_owner. Every such process exists in p copies in the modeled system. Process schedd runs on every execution host, but any process may execute it (since it is a regular executable file present in the system), and thus it exists in m + p copies. Every job i that accesses a resource j runs a different copy of process chirp_fetch, and thus there are p · r copies of it.

The structure of the intruder process, called intr, is modeled in Fig. 18; it follows the generic structure explained in Section 5.1, i.e., it represents an unbounded number of operations executed by the intruder using one of the messages of Table 1. If µ is a message named in Table 1, then intr[µ] is the name of one of the processes that compose the intruder, and there may exist an unknown number of instances of each process intr[µ]. The set of states of this model is obtained as the product of the sets {init} ∪ ({?chirp_close, ?chirp_open, ?chirp_read} × {A, B}); this follows the previous abstraction that a resource, in this case a file, can be owned either by user A or by user B. The intruder job is obtained by combining the many copies of the processes intr[µ]; in symbols:

  Ujob = ⟨Uintr[chirp_close]^(n1), . . . , Uintr[submit_start]^(nk)⟩

for some n1, . . . , nk, where n = n1 + · · · + nk.

Let m be the number of grid hosts, p the number of grid jobs, and r the number of available resources. The system that is modeled and verified is then the following:

  ⟨Uschedd^(m+p), Ustarter^(p), Ushadow^(p), Usubmit^(p), Uowner^(p), Uchirp_set_owner^(p), Uchirp_fetch^(p·r), Uintr^(m·n)⟩   (6)

where Uintr^(m·n) is just a shortening for the longer composition Uintr[chirp_close]^(m·n1), . . . , Uintr[submit_start]^(m·nk).

The host protection requirement analyzed in the current example is correct authorization before accessing data available in the system. In particular, the desideratum is to guarantee that if the owner of a job j does not correspond to the owner of a resource r, then j should not be allowed to access r's content.
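The product construction for the intruder's state set can be enumerated directly. The following sketch is only an illustration of the construction quoted above (it is not part of the paper's toolchain; the Python names are invented for the example):

```python
from itertools import product

# Receivable messages of the modeled intruder process and the two
# abstract user identities, as named in the text above.
receive_actions = ["?chirp_close", "?chirp_open", "?chirp_read"]
owners = ["A", "B"]

# State set: {init} union ({?chirp_close, ?chirp_open, ?chirp_read} x {A, B})
states = [("init",)] + [(a, o) for a, o in product(receive_actions, owners)]

print(len(states))  # 7 states: 1 + 3 * 2
```

The small, fixed size of each intr[µ] copy is what makes the later cutoff computation tractable: the parameter lies only in the number of copies, not in the size of any single copy.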

Specifying. In a more formal fashion, this correct authorization requirement can be stated as the following hypersafety properties:

  AG ¬(?owner_A[j, owner] ∧ F (chirp_read, B)[j, r, chirp_fetch])   (7)
  AG ¬(?owner_B[j, owner] ∧ F (chirp_read, A)[j, r, chirp_fetch])   (8)

Splitting. Since the instances of actors modeling processes are independent from their siblings, the grid system (6) can be rewritten by isolating one of the execution machines and assuming that p1 of the p processes (where p1 < p) are executing on that machine. In symbols, the set of instances Uschedd^(m+p) is divided into two sets, namely Uschedd^(1+p1) and Uschedd^(m−1+p−p1). It might be the case that instances of the former group refer to instances of Uchirp_set_owner^(p1), while instances of the latter group might refer to instances of Uchirp_set_owner^(p−p1); such sets must be split as well, and their guards must be rewritten accordingly. After the splitting operation justified by Theorem 7, the grid system becomes:

  ⟨Uschedd^(1+p1), Ustarter^(p1), Ushadow^(p1), Usubmit^(p1), Uowner^(p1), Uchirp_set_owner^(p1), Uchirp_fetch^(p1·r), Uintr^(n), Uschedd^(m−1+p−p1), Ustarter^(p−p1), Ushadow^(p−p1), Usubmit^(p−p1), Uowner^(p−p1), Uchirp_set_owner^(p−p1), Uchirp_fetch^((p−p1)·r), Uintr^((m−1)·n)⟩   (9)

Removing. Observing System (9), it is possible to establish the following properties: Uschedd^(1+p1) (resp. Uintr^(n)) does not depend on instances of Uschedd^(m−1+p−p1) (resp. Uintr^((m−1)·n)). It should be underlined that the sets of instances induced by the remaining definitions are either directly dependent on process schedd (as the instances of chirp_set_owner) or indirectly dependent on it (as the instances of all the other processes). As a consequence, applying Theorem 6 to System (9), it is possible to obtain the following equivalent formulation:

  ⟨Uschedd^(1+p1), Ustarter^(p1), Ushadow^(p1), Usubmit^(p1), Uowner^(p1), Uchirp_set_owner^(p1), Uchirp_fetch^(p1·r), Uintr^(n)⟩ |= AG ¬(?owner_A[j, owner] ∧ F (chirp_read, B)[j, r, chirp_fetch])   (10)

Notice that Theorem 6 cannot be applied to the original System (6): it is the splitting phase that makes the independency relationships among instances evident. Applying this transformation recursively, it is possible to iterate the splitting and removing phases, isolating one of the p1 processes once Uchirp_set_owner^(p1) is split into Uchirp_set_owner^(1) and Uchirp_set_owner^(p1−1). It has been shown how the initial problem of verifying Specification (7) in a parameterized system can be reduced to the problem of verifying the same specification on a finite system; the obtained system is:

  ⟨Uschedd^(2), Ustarter^(1), Ushadow^(1), Usubmit^(1), Uowner^(1), Uchirp_set_owner^(1), Uchirp_fetch^(1·r), Uintr^(n)⟩   (11)

so that System (6) satisfies Specification (7) iff

  ⟨Uschedd^(2), Ustarter^(1), Ushadow^(1), Usubmit^(1), Uowner^(1), Uchirp_set_owner^(1), Uchirp_fetch^(1·r), Uintr^(n)⟩ |= AG ¬(?owner_A[j, owner] ∧ F (chirp_read, B)[j, r, chirp_fetch]).   (12)

With a similar reasoning, an analogue reduction can be obtained for Specification (8):

  ⟨Uschedd^(2), Ustarter^(1), Ushadow^(1), Usubmit^(1), Uowner^(1), Uchirp_set_owner^(1), Uchirp_fetch^(1·r), Uintr^(n)⟩ |= AG ¬(?owner_B[j, owner] ∧ F (chirp_read, A)[j, r, chirp_fetch]).   (13)

Reducing. In order to avoid the state explosion problem, the system can be reduced even more: it is possible to avoid the states that do not represent a synchronization with other processes, i.e., the states whose name does not begin with the special characters ! or ? can be removed, together with the corresponding transitions. For example, in process schedd it is possible to avoid the state trusted by adding the transitions ?schedd_start → trusted.sock−A and ?schedd_start → trusted.sock−B. Such a transformation is actually an abstraction of the system that does not lose information for the property considered. Notice that the reduced processes have a relatively small size: starting from the depicted model of schedd, with 60 states, it is possible to obtain a process with 46 states. In Table 2, the numbers of states of the reduced processes are specified between parentheses, and the cutoffs are computed on the reduced system.

Model checking. Theorem 2 and Corollary 5 allow computing the number of instances needed for the verification, summarized in Table 2. For this experiment, NuSMV [55], a well-known state-of-the-art model checker, was used; notice that, in general, model checking can still be efficiently applied to many more large-sized systems [48]. Using NuSMV, it was possible to verify that Specification (13) does not hold. All NuSMV models used in this example are available at http://www.pagliarecci.it/condor-example.zip.

Table 1
The messages exchanged in the system.
chirp_close, chirp_op_end, chirp_op_fail, chirp_open, chirp_read, chirp_set_owner_A, chirp_set_owner_B, chirp_set_owner_return, job_restart, job_return, job_start, job_wait, owner_A, owner_B, schedd_set_owner_A, schedd_set_owner_B, schedd_start, shadow_return, shadow_start, starter_return, starter_start, submit_return, submit_start.

Table 2
The instances needed for the verification.
Process           States    Cutoff   Instances
owner             3         –        1
intr              3         4        4
schedd            60 (46)   –        2
starter           10 (8)    –        1
chirp_fetch       7         9        9
chirp_set_owner   10        –        1
submit            8         –        1
shadow            8         –        1
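The cutoff argument reduces verification for an arbitrary number of instances to checking every instance count up to a finite bound. The shape of such a bounded check can be illustrated with a small explicit-state reachability sketch (a toy illustration only, not the paper's NuSMV models; the per-process transition system and the invariant below are invented for the example):

```python
# Toy local transition system for one process copy (illustrative only):
# init -> set_owner -> done.
TRANSITIONS = {"init": ["set_owner"], "set_owner": ["done"], "done": []}

def reachable(n):
    """All global states reachable in the asynchronous composition
    of n identical copies, computed by graph search."""
    start = ("init",) * n
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for i, local in enumerate(state):
            for nxt in TRANSITIONS[local]:
                succ = state[:i] + (nxt,) + state[i + 1:]
                if succ not in seen:
                    seen.add(succ)
                    frontier.append(succ)
    return seen

def holds(n, safe):
    """Check an invariant (an AG property) on every reachable state."""
    return all(safe(s) for s in reachable(n))

# Cutoff-style verification: check every system size up to the bound.
# The (invented) invariant "at most one copy is in set_owner at a time"
# holds for n = 1 but is violated as soon as two copies interleave.
mutual_exclusion = lambda s: s.count("set_owner") <= 1
results = {n: holds(n, mutual_exclusion) for n in range(1, 5)}
print(results)  # {1: True, 2: False, 3: False, 4: False}
```

In the methodology above, the bound itself is supplied by the cutoff theorem (e.g., 4 copies of intr and 9 copies of chirp_fetch in Table 2); the sketch only shows why checking up to a fixed bound is a finite task.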

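The failed Specification (13) corresponds to a concrete privilege-escalation trace, which can be mimicked with a tiny hand-rolled simulation (illustrative only; the class and function names are invented, and the messages are simplified with respect to Table 1). A mutable owner attribute lets a job re-assign its ownership before the file is fetched, whereas a write-once attribute blocks the same trace:

```python
class OwnerAttribute:
    """Owner attribute of a grid job; write_once=True models a fixed
    implementation where the attribute is set only once per job."""
    def __init__(self, write_once=False):
        self.value = None
        self.write_once = write_once

    def set_owner(self, user):
        if self.write_once and self.value is not None:
            return False  # further chirp_set_owner requests are ignored
        self.value = user
        return True

def run_trace(write_once):
    """Replay the escalation trace: a job owned by A re-labels itself
    as B before fetching B's file."""
    owner = OwnerAttribute(write_once)
    owner.set_owner("A")   # legitimate assignment at submission time
    owner.set_owner("B")   # the escalation step (chirp_set_owner with B)
    file_owner = "B"
    # chirp_fetch authorizes the read iff the attributes match:
    return owner.value == file_owner

print(run_trace(write_once=False))  # True: unauthorized read succeeds
print(run_trace(write_once=True))   # False: write-once attribute blocks it
```

This mirrors, at a much coarser grain, the counterexample trace and the remedy discussed next: once the owner attribute can be set only once per grid job, the offending interleaving is no longer reachable.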
Indeed, NuSMV generates a counterexample where a job from user B is first scheduled by the schedd process (state B-seen), then a job owned by A executes chirp_set_owner by specifying username B as parameter, and finally the chirp_fetch process is allowed to read the file. In fact, the schedd allows the owner attribute to be modified and, consequently, the chirp_fetch process is allowed to read the file. This is an example of privilege escalation attack.

One solution to this problem is to modify the implementation to include a safe version of process owner where the attribute is set only once per grid job. This solution has already been discussed in the vulnerability description [50] and is depicted in Fig. 19. A second solution would be to allow instances of process chirp_set_owner to be called only from instances of shadow, since the operation of setting the owner of a job may be considered a system operation. This requires substituting the first transitions of Fig. 12 with the following:

  init —!chirp_set_owner_A_i (i ∈ shadow)→ ?chirp_set_owner_A
  init —!chirp_set_owner_B_i (i ∈ shadow)→ ?chirp_set_owner_B

Fig. 19. Safe implementation for owner.

7. Conclusions

The basic idea presented in this paper consists in exploiting formal verification techniques, for instance model checking, in order to verify the security requirements of grid systems. This idea came up from the observation that model checking has been recognized as a very useful technique to verify security requirements in different domains, as, for instance, the security properties of security protocols. In the present work, formal verification has been applied to a different part of a grid system. Grid systems result from the combination of many copies of a finite amount of processes; they are also intrinsically dynamic systems, since resources and users can join and leave the network at any moment. Both aspects make them a natural example of parameterized systems, where the number of instances is not known a priori. Furthermore, security requirements have also been recognized to be very complex to model: they can be modeled, in general, as hyperproperties. Unfortunately, both parameterized model checking and hyperproperty verification are, in general, undecidable; nevertheless, there are some results on both parameterized systems and hyperproperties that make verification decidable when there are some regularities. It is in this particular line of research that this work is inserted.

The results presented in this paper are twofold: (1) a methodology has been presented to verify grid system security; (2) a set of original theoretical results has been presented that can be exploited for verification. They cover two complementary aspects that are exploited in the proposed methodology.

Regarding the methodology, first of all, this work shows how to model a grid system. According to the proposed methodology, it is possible to model grid processes on the basis of technical specifications and code inspection. A general schema of the model is reported in Section 5.1, whereas a concrete example, the Condor system, is reported in Section 6. In order to verify security requirements, a model of intruder is also needed, which should be as powerful as possible. In this work, the underlying idea of the Dolev–Yao model of intruder [41] has been adapted in order to verify host protection requirements (see Section 5.2). After that, security requirements can be modeled: this work proposes to model security requirements as Indexed-CTL*\X and shows how this is enough for hypersafety. As far as we know, this is a novel contribution in the research area of grid security; indeed, the related work mainly focuses on verifying the security of grid protocols only. The proposed models and specifications have been enough to reproduce a well-known security problem in Condor (see Section 6.1). Whether the proposed intruder model suffices for security verification or a more powerful model is needed is still an open question.

Regarding the theoretical results, it has been proved that parameterized systems can be decomposed, when some regularities hold, into independent sub-systems and that all the independent sub-systems, except for the one to be verified, can be dropped from the verification process. Furthermore, it has been proved that parameterized systems can be used for hypersafety verification problems. This is an extension of Clarkson and Schneider's result stating that a self-composed system can be used for k-hypersafety verification. Which fragment of hyperliveness is covered by Indexed-CTL*\X is another open question; investigating it will be the goal of another work in the future.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and helpful suggestions.

References

[1] I. Foster, C. Kesselman, S. Tuecke, The anatomy of the grid: enabling scalable virtual organizations, IJHPCA 15 (3) (2001) 200–222.
[2] A. Chakrabarti, Grid Computing Security, Springer-Verlag, Berlin, Germany, 2007.
[3] A. Chakrabarti, A. Damodaran, S. Sengupta, Grid computing security: a taxonomy, IEEE Security & Privacy 6 (1) (2008) 44–51.
[4] C. Meadows, Formal methods for cryptographic protocol analysis: emerging issues and trends, IEEE Journal on Selected Areas in Communications 21 (1) (2003) 44–54.
[5] Vulnerabilities leading to denial of services attacks in grid computing systems: a survey, in: Proceedings of the Sixth Annual Workshop on Cyber Security and Information Intelligence Research, CSIIRW'10, ACM, New York, NY, USA, 2010, pp. 54:1–54:3.
[6] L. Spalazzi, S. Tacconi, in: H. Jahankhani, D.L. Watson, G. Me, F. Leonhardt (Eds.), Handbook of Electronic Security and Digital Forensics, World Scientific Publishing, Singapore, 2010.
[7] CNSS glossary working group, National Information Assurance Glossary, CNSS Instruction No. 4009, Committee on National Security Systems, Ft Meade, MD, Revised June 2006. Available at http://www.cnss.gov/Assets/pdf/cnssi_4009.pdf.
[8] K. Apt, D. Kozen, Limits for automatic verification of finite-state concurrent systems, Information Processing Letters 15 (1986) 307–309.
[9] E. Emerson, V. Kahlon, Reducing model checking of the many to the few, in: CADE, 2000, pp. 236–254.
[10] E. Emerson, K. Namjoshi, On reasoning about rings, International Journal of Foundations of Computer Science 14 (4) (2003) 527–550.
[11] D. Thain, T. Tannenbaum, M. Livny, Condor and the grid, in: F. Berman, G. Fox, A. Hey (Eds.), Grid Computing: Making the Global Infrastructure a Reality, John Wiley & Sons Inc., New York, NY, USA.
[12] B. Alpern, F. Schneider, Defining liveness, Information Processing Letters 21 (4) (1985) 181–185.
[13] M. Clarkson, F. Schneider, Hyperproperties, Journal of Computer Security 18 (6) (2010) 1157–1210.
[14] E. Emerson, Temporal and modal logic, in: J. van Leeuwen (Ed.), Handbook of Theoretical Computer Science: Formal Models and Semantics, Elsevier Science Publishers B.V., Amsterdam, The Netherlands, 1990, pp. 996–1072 (Chapter 14).

[15] The Globus Alliance web site. URL http://www.globus.org/.
[16] I. Foster, C. Kesselman, Globus: a metacomputing infrastructure toolkit, International Journal of Supercomputer Applications 11 (1996) 115–128.
[17] T. Sterling (Ed.), Beowulf Cluster Computing with Linux, MIT Press, 2001.
[18] Condor Team, Condor — a distributed job scheduler, in: T. Sterling (Ed.), Beowulf Cluster Computing with Linux, MIT Press, 2001.
[19] J. Frey, T. Tannenbaum, M. Livny, I. Foster, S. Tuecke, Condor-G: a computation management agent for multi-institutional grids, Cluster Computing 5 (2002) 237–246.
[20] C. Bratosin, W. van der Aalst, N. Sidorova, N. Trčka, A reference model for grid architectures and its validation, Concurrency and Computation: Practice and Experience 22 (2010) 1365–1385. doi:10.1002/cpe.v22:11.
[21] Towards a formal model for grid architecture via Petri nets, Information Technology Journal 5 (2006) 833–841.
[22] Formal verification of a grid resource allocation protocol, in: Proceedings of the 2008 Eighth IEEE International Symposium on Cluster Computing and the Grid, IEEE Computer Society, Washington, DC, USA, 2008.
[23] A mechanism for grid service composition behavior specification and verification, Future Generation Computer Systems 23 (4) (2007) 587–605.
[24] Formal verification technique for grid service chain model and its application, Science in China Series F: Information Sciences 50 (2007) 1–20.
[25] Specification-correct and scalable coordination of grid applications, Future Generation Computer Systems 25 (3) (2009) 378–383.
[26] Performance optimization of temporal reasoning for grid workflows using relaxed region analysis, in: AINA Workshops, 2008, pp. 187–194.
[27] R. Mishra, D. Kushwaha, Hybrid reliable load balancing with MOSIX as middleware and its formal verification using process algebra, Future Generation Computer Systems 27 (5) (2011) 506–526.
[28] B. Aziz, Verifying a delegation protocol for grid systems, Future Generation Computer Systems 27 (5) (2011) 476–485.
[29] A. Bolotov, Deontic extension of deductive verification of component model: combining computation tree logic and deontic logic in natural deduction style calculus, in: B. Prasad, P. Lingras, A. Ram (Eds.), Proceedings of the 4th Indian International Conference on Artificial Intelligence, IICAI 2009, Tumkur, Karnataka, India, December 16–18, 2009.
[30] T. Göthel, S. Glesner, Towards the semi-automatic verification of parameterized real-time systems using network invariants, in: Software Engineering and Formal Methods (SEFM), 2010, pp. 310–314.
[31] M. Browne, E. Clarke, O. Grumberg, Reasoning about networks with many identical finite state processes, Information and Computation 81 (1) (1989) 13–31.
[32] S. German, A.P. Sistla, Reasoning about systems with many processes, Journal of the ACM 39 (3) (1992) 675–735.
[33] P. Wolper, V. Lovinfosse, Verifying properties of large sets of processes with network invariants, in: Proceedings of the International Workshop on Automatic Verification Methods for Finite State Systems, 1989, pp. 68–80.
[34] R. Kurshan, K. McMillan, A structural induction theorem for processes, in: ACM Symposium on Principles of Distributed Computing, 1989.
[35] E. Emerson, K. Namjoshi, Automatic verification of parameterized synchronous systems (extended abstract), in: Proceedings of the 8th International Conference on Computer Aided Verification (CAV'96), 1996.
[36] E. Clarke, O. Grumberg, S. Jha, Verifying parameterized networks, ACM Transactions on Programming Languages and Systems 19 (5) (1997) 726–750.
[37] E. Clarke, M. Talupur, T. Touili, H. Veith, Verification by network decomposition, in: CONCUR 2004 — Concurrency Theory, in: Lecture Notes in Computer Science, vol. 3170, Springer, 2004, pp. 276–291.
[38] E. Clarke, M. Talupur, H. Veith, Proving Ptolemy right: the environment abstraction framework for model checking concurrent systems, in: TACAS 2008, 14th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Budapest, Hungary, March 29–April 6, 2008, in: Lecture Notes in Computer Science, vol. 4963, Springer, 2008, pp. 33–47.
[39] A. Bouajjani, P. Habermehl, T. Vojnar, Verification of parametric concurrent systems with prioritised FIFO resource management, Formal Methods in System Design 32 (2008) 129–172.
[40] Y. Hanna, D. Samuelson, S. Basu, H. Rajan, Automating cut-off for multi-parameterized systems, in: Formal Methods and Software Engineering — 12th International Conference on Formal Engineering Methods, ICFEM 2010, Shanghai, China, November 17–19, 2010, in: Lecture Notes in Computer Science, vol. 6447, Springer, 2010.
[41] D. Dolev, A. Yao, On the security of public key protocols, IEEE Transactions on Information Theory 29 (2) (1983) 198–208.
[42] A cut-off approach for bounded verification of parameterized systems, in: Proceedings of the International Conference on Software Engineering (ICSE'10), 2010, pp. 345–354.
[43] L. Zuck, A. Pnueli, Model checking and abstraction to the aid of parameterized systems (a survey), Computer Languages, Systems and Structures 30 (3–4) (2004) 139–169.
[44] P. Maier, Intuitionistic LTL and a new characterization of safety and liveness, in: 18th Workshop on Computer Science Logic, CSL, 2004.
[45] B. Lampson, Protection, ACM SIGOPS Operating Systems Review 8 (1) (1974) 18–24.
[46] J. Goguen, J. Meseguer, Security policies and security models, in: IEEE Symposium on Security and Privacy, 1982.
[47] A. Roscoe, CSP and determinism in security modelling, in: Proceedings of the IEEE Symposium on Security and Privacy, 1995, pp. 114–127.
[48] J. Burch, E. Clarke, K. McMillan, D. Dill, L. Hwang, Symbolic model checking: 10^20 states and beyond, Information and Computation 98 (2) (1992) 142–170.
[49] T. Terauchi, A. Aiken, Secure information flow as a safety problem, in: C. Hankin, I. Siveroni (Eds.), Static Analysis, SAS 2005, in: Lecture Notes in Computer Science, vol. 3672, Springer, 2005, pp. 352–367.
[50] Condor Team, Condor vulnerability reports — CONDOR-2009-0001. URL http://www.cs.wisc.edu/condor/security/vulnerabilities/CONDOR-2009-0001.html.
[51] Condor Team, Condor project web site. URL http://www.cs.wisc.edu/condor.
[52] Condor Team, Condor manual. URL http://www.cs.wisc.edu/condor/manual.
[53] F. Schneider, Blueprint for a science of cybersecurity, Computing and Information Science Technical Reports, Cornell University, May 2011.
[54] L. Zhou, F. Schneider, R. Van Renesse, APSS: proactive secret sharing in asynchronous systems, ACM Transactions on Information and System Security (TISSEC) 8 (3) (2005) 259–286.
[55] A. Cimatti, E. Clarke, F. Giunchiglia, M. Roveri, NUSMV: a new symbolic model checker, International Journal on Software Tools for Technology Transfer 2 (4) (2000) 410–425.

F. Pagliarecci is a post-doctoral researcher at the Università Politecnica delle Marche, Italy. He received the M.S. in Electronics Engineering from the University of Ancona in December 2002 and the Ph.D. in Artificial Intelligent Systems from the Università Politecnica delle Marche, Italy, in December 2006. He has worked as consultant at the Fondazione Bruno Kessler, Trento, Italy (2007) and BINT, Italy (2008). His present research areas include Formal Methods and Model Checking applied to Semantic Web Services and Multiagent Systems.

L. Spalazzi is an associate professor at the Università Politecnica delle Marche, Italy. He received the M.S. in Electronic Engineering and the Ph.D. from the University of Ancona, Italy, in 1989 and 1994, respectively. He has worked as consultant at the Istituto di Ricerca Scientifica e Tecnologica (IRST), Trento, Italy (1991–1997). He was a visiting scholar at the Australian Artificial Intelligence Institute (AAII), Carlton, Vic., Australia, and at the Computer Science Department, Stanford University, and was a visiting fellow at SRI International, Menlo Park, California, in 1992 and 1996, respectively. His research has been supported by grants from the European Union and the Italian Minister of University and Scientific Research. His present research areas include Formal Methods and Model Checking applied to Semantic Web Services, Computer and Network Security (in cooperation with Italian State Police), and Multi-agent Systems.

F. Spegni is a Ph.D. student at the Università Politecnica delle Marche, Italy. He received the B.S. and M.S. in Computer Science at the Università degli Studi di Bologna, Italy. He has been a visiting student at the University of Illinois at Urbana-Champaign in 2004/2005, worked as software engineer and researcher at Regione Marche, Italy, and was an International Fellow at SRI International, California, in 2010. His research includes the application of formal methods to the design and development of distributed software systems.
