
A Theory for System Security

Kan Zhang
Cambridge University, Computer Laboratory
New Museums Site, Pembroke Street, Cambridge CB2 3QG, UK
kz200@cl.cam.ac.uk

Abstract

In this paper, two independent definitions of system security are given through two distinct aspects of a system execution, i.e. state and transform. These two definitions are proven to be equivalent, which gives both confidence in the soundness of our explication and insight into the internal causality of information flow. Using this definition of information flow security, a general security model for nondeterministic computer systems is presented. On one hand, our model is based on information flow, which allows it to explicate security semantically as other information flow models do. On the other hand, our model imposes concrete constraints on the internal system processes, which facilitates implementation and verification in the fashion of access security models. Our model is also more general than previous state-based information flow models, e.g. allowing for concurrency among system processes, which is more suitable for distributed systems.

1 Introduction

System security has been extensively studied in the past twenty years. Many definitions and models have been proposed to explicate security. Generally speaking, there are two different approaches. One is access security models [2, 16, 5]. A system is said to be secure if its execution satisfies a set of access control conditions. In practice, these models are easier to implement and verify, since they explicitly specify the desired operations of the system. However, instead of defining security as a more general system property in its own right, access security models explicate security in terms of a set of concrete mechanisms. These are more of a set of implementation rules than an abstract definition of what security is, and as a result, their applications are restricted to certain aspects of security. For example, the Bell-LaPadula model [2] does not deal well with covert channels.

In order to explicate security more abstractly, information flow models have been proposed [7, 8, 21, 13, 14, 18, 3]. Intuitively, any system behaviour can be thought of as information flow. Security is mainly about undesired information flows. Available information flow models define what the desired external observable behaviours are, but do not directly specify which kind of internal information flow is permitted and which is not. For example, the Noninterference security model [7] says, informally, that what one group of users does has no effect on what another group of users can see. Although information flow models give a nice and simple definition of what secure systems should be, their modelling of confidentiality's causal restrictions has been unsatisfactory. Therefore, so far they provide less guidance to implementation.

In this paper, security is defined as an intrinsic system property. In the fashion of information flow models, our definition of security is based on the semantic concept of information flow in order to be as general as possible. However, we focus on explicating the causality of information flow, which is the source of disclosure in computer systems. Two independent definitions of secure transition are given. One is defined in terms of the changes to user views of system state, which, in a sense, can be seen as the results of information flow. The other is defined in terms of the interactions between system processes, which can be seen as the cause of information flow. These two formulations are proven to be equivalent to give further insight into the causality of information flow. To facilitate implementation and verification, an unwinding theorem is given to show the constraints on all processes in order for a system to be secure as defined in both this paper and some previous models, e.g. Noninterference.

2 Overview

Here security or, more precisely, confidentiality is treated as an intrinsic property of a system. Intuitively, we define a system with total security to be a system that will not allow information flow between its users. We think that such total security as a system property is application or policy independent. It is a property of the internal functionality of a system.

Let us call a system with such a total security property an ideal secure system. In contrast, a security policy is derived from application requirements. What a security policy says should be which security violating operations are allowed for the corresponding application. Therefore, a system is said to be secure under a specific security policy if all of its operations conform to an ideal secure system except those permitted in the security policy.

This approach to security policy is different to that taken in [7]. But it is worth taking because of the following considerations. On one hand, system security in general depends on a system as a whole, including all the system operations. When verifying that external observable system behaviours, e.g. input/output or traces, satisfy a given security policy, we have to deal with all the internal system operations related to them. Therefore, it is better to start with an ideal secure system. The job of a security officer is to define the appropriate permissions for an application with the confidence that these are the only cases in which the system may pass information between its users. On the other hand, an ideal secure system is the most suitable system from which to explicate security in its most strict terms, i.e. no information flow between users. A similar approach has been taken in [9].

The definition of an ideal secure system consists of two parts. One is secure start state, the other is secure transition. The definition of secure transition is given in two distinct aspects of an execution of a computer system, i.e. states and transforms. We think that information flow is the result of system transitions, not static states. Therefore the definition of a secure state is given just as a secure start state. As John McLean has pointed out in [15], any explication of security based solely on the notion of a secure state ... at best ... can serve as a secure initial state. The concept of a secure system has to be explicated as one whose initial state is secure and whose system transform is secure.

The paper is organised as follows. In section 3, we explain what an ideal secure system is. Two independent definitions of secure transition are given and proven to be equivalent. An unwinding theorem is also proven to facilitate implementation and verification. Next, in section 4 we give the security model, which is an ideal secure system coupled with a security policy. In section 5, some considerations in defining the model are discussed. Comparisons with previous models are given in section 6. We draw our conclusions in section 7.

3 An ideal secure system

We work at a semantic level in order to be as general as possible. A system has a set $S$ of states. System transitions are represented by relations on $S$. An execution of a system is a sequence

$$s_0 \, T_1 \, s_1 \, T_2 \, s_2 \, \cdots$$

where each $s_i$ is an element of $S$ and each $T_i$ is a relation on $S$. Therefore a given system run can be viewed as a sequence of states or transitions. First, we will define secure transition.

We will denote the identity relation on a set $S$ by $I_S$ and the transitive closure of a relation $G$ by $t(G)$. Let $A$ be a set and let $G$ be an equivalence relation on set $A$. If $x \in A$, then the equivalence class of $x$ modulo $G$ is the set $[x]_G$ defined as follows:

$$[x]_G = \{y \in A \mid (y, x) \in G\} = \{y \in A \mid y \sim_G x\}.$$

If $f : A \to B$ is a function, we denote the equivalence relation determined by $f$ on set $A$ by $G_f$, which is given by:

$$G_f = \{(x, y) \mid f(x) = f(y)\}.$$

In this paper, relational composition is denoted by juxtaposition and the naturally defined parallel product of binary relations by $\otimes$.

Suppose we have $n$ different users on a system, and between each pair of successive transitions there is a set $F$ of total functions with domain $S$, whose element $f_i : S \to A_i$ is the function giving out information about the system state to user $i$. Please note that the functions in $F$ do not change the state. Therefore the view of a system state by a user $i$ is determined by the equivalence relation $G_{f_i}$ on $S$. Hereafter we will use the symbol $V_i$ instead of $G_{f_i}$ when we talk about views. Here we only consider systems in which the scope of a user's view is not changed by the transition, i.e. $V_i = V_i'$. This is not as serious a limitation as it seems. Obviously, the scope of a user's view should not be enlarged in an ideal secure system. And the scope of a user's view cannot be changed by other users, otherwise it could become a covert channel. The only permissible way to change the scope of a user's view is for the user to reduce it himself. Even if one is allowed to reduce one's own view scope, the reduced scope cannot be restored to the previous one even by oneself, due to the definition of secure transition given below, so this case is not very interesting. Therefore, in this paper, we just consider the case where the scopes of views are not changed. However, the functions $f_i$ defined on $S$ can be changed to, for example, $f_i'$ after a transition, and therefore the corresponding co-domains from $A_i$ to $A_i'$. Since the equivalence relations, i.e. views, induced by $f_i$ and $f_i'$ are the same $V_i$, there is an isomorphism between $f_i$ and $f_i'$. From now on we do not distinguish between $f_i$ and $f_i'$ or between $A_i$ and $A_i'$.

Hence, for each user $i$, a transition relation $T$ induces a relation $r_i$ on $A_i$ given by

$$r_i = \{(f_i(s), f_i(s')) \mid (s, s') \in T\}.$$
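To make these definitions concrete, here is a minimal illustrative sketch (ours, not from the original paper) of a finite state set, per-user view functions $f_i$, and the relation $r_i$ that a transition induces on a user's view space; the three-file layout anticipates the example discussed below, and all identifiers are hypothetical.

```python
from itertools import product

# Toy state space: a state is (u, v, w), the contents of three files.
U, V, W = {0, 1}, {0, 1}, {0, 1}
S = set(product(U, V, W))

# View functions f_i : S -> A_i. User X sees files 1 and 2,
# user Y sees files 2 and 3 (file 2 is shared).
f = {
    "X": lambda s: (s[0], s[1]),
    "Y": lambda s: (s[1], s[2]),
}

def view_classes(fi):
    """The partition of S induced by V_i = G_{f_i}."""
    classes = {}
    for s in S:
        classes.setdefault(fi(s), set()).add(s)
    return classes

def induced_relation(T, fi):
    """r_i = {(f_i(s), f_i(s')) | (s, s') in T}."""
    return {(fi(s), fi(t)) for (s, t) in T}

# A transition in which user X flips file 1 and nothing else changes.
T = {((u, v, w), (1 - u, v, w)) for (u, v, w) in S}
print(view_classes(f["Y"]))          # Y cannot distinguish states differing only in u
print(induced_relation(T, f["X"]))   # X sees file 1 being flipped
```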
Definition 1 A transition $T$ is a secure transition if

1.1 $\forall i$, $\forall (s_1, s_1') \in T$, $\forall s_2 \in S$ such that $s_1 \sim_{V_i} s_2$, $\exists s_2'$ such that $s_1' \sim_{V_i} s_2'$ and

$$(s_2, s_2') \in T \quad (1)$$

1.2 $\forall i$, $\forall j$, $\forall (s, s') \in T$:

$$[s]_{V_{i+j}} = [s']_{V_{i+j}} \quad (2)$$

where $V_{i+j} = t(V_i \cup V_j)$.

Condition 1.1 says that if two states look the same to a user before a transition, he or she should not be able to distinguish the corresponding resulting states after the transition. Otherwise, the user would be able to get more information about the system state after the transition than he could get before the transition. We notice that in [18] similar observations have been made through an automata theoretic approach. The meaning of Condition 1.1 may not look apparent in its current form due to non-determinism. However, if we translate Condition 1.1 into power set concepts, the meaning should become evident. A transition $T$ can be represented by a mapping $m : S \to \wp(S)$. Here we denote the quotient set induced by an equivalence relation $V$ on $S$ by $S/V$ and the quotient mapping by $\pi : S \to S/V$. We generalise an equivalence relation $V$ from $S$ to $\wp(S)$ as in [18]. If $A, B \in \wp(S)$ we say that $A$ is equivalent to $B$ under $V$, written $A \sim_V B$, iff for each $s_a \in A$, $\exists s_b \in B$ such that $s_a \sim_V s_b$, and vice versa. Therefore, we can define the quotient set $\wp(S)/V$ and the quotient mapping $\wp(\pi) : \wp(S) \to \wp(S)/V$. From [18], we have a theorem stating that there is a bijection between $\wp(S)/V$ and $\wp(S/V)$. Therefore, we can freely interchange between $\wp(S)/V$ and $\wp(S/V)$. With these definitions Condition 1.1 can be restated as follows:

Condition 1.1 $\forall i$, the mapping $m_i : S/V_i \to \wp(S)/V_i$ given by

$$m_i([s]_{V_i}) = [m(s)]_{V_i} \quad (3)$$

is well-defined.

In terms of each user's view of the system, Condition 1.1 means that no information flows into one's view. For example, assume there is a three-file system with two users $X$ and $Y$. Let sets $U$, $V$, $W$ denote the sets of possible contents of the three files, respectively. Then the state set of the system is a subset of the vector set $U \times V \times W$. Suppose user $X$ can see the contents of the first two files, i.e. the values of $U$ and $V$, and user $Y$ can see the contents of the last two files, i.e. the values of $V$ and $W$. Therefore, they both can see $V$. Suppose a secure transition $T$ moves the system state from $(u, v, w)$ to $(u', v', w')$. Then Condition 1.1 requires that, with respect to user $X$, the content change of the first two files, i.e. from $u, v$ to $u', v'$, is independent of the content of the third file, i.e. $w$. And similarly, with respect to user $Y$, the change from $v, w$ to $v', w'$ is independent of $u$.

Condition 1.2 says that the part of the system that can be seen by any two users cannot be changed by the transition, which means no user can use this shared view to signal others. Therefore, no information flows outside one's view. In the same three-file system example as above, Condition 1.2 requires that $v' = v$.
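For a finite toy system both conditions can be checked by brute-force enumeration. The sketch below is illustrative and ours, not the paper's: it tests Conditions 1.1 and 1.2 for the three-file example, computing $V_{i+j}$ as the transitive closure of $V_i \cup V_j$; the copying transition used at the end writes file 1 into the shared file 2, which violates Condition 1.2 and also Condition 1.1 for user $Y$.

```python
from itertools import product

U, V, W = {0, 1}, {0, 1}, {0, 1}
S = set(product(U, V, W))
fX = lambda s: (s[0], s[1])   # user X sees files 1 and 2
fY = lambda s: (s[1], s[2])   # user Y sees files 2 and 3

def condition_1_1(T, fi):
    """Equivalent states must have view-indistinguishable successor sets."""
    for s1, s2 in product(S, S):
        if fi(s1) == fi(s2):
            succ1 = {fi(t) for (a, t) in T if a == s1}
            succ2 = {fi(t) for (a, t) in T if a == s2}
            if succ1 != succ2:
                return False
    return True

def joint_class(s, fi, fj):
    """[s] under V_{i+j} = t(V_i U V_j), via closure over V_i / V_j steps."""
    cls, frontier = {s}, {s}
    while frontier:
        step = {t for t in S for x in frontier
                if fi(t) == fi(x) or fj(t) == fj(x)}
        frontier = step - cls
        cls |= step
    return frozenset(cls)

def condition_1_2(T, fi, fj):
    return all(joint_class(s, fi, fj) == joint_class(t, fi, fj)
               for (s, t) in T)

# User X copies file 1 into the shared file 2: an insecure transition.
T_copy = {(s, (s[0], s[0], s[2])) for s in S}
print(condition_1_1(T_copy, fX), condition_1_1(T_copy, fY))  # True, False
print(condition_1_2(T_copy, fX, fY))                         # False
```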
Define the parallel product function $\eta$ as $\eta = f_1 \otimes f_2 \otimes \cdots \otimes f_n$. Let the relation $\eta^{-1} = \{(a, s) \mid \eta(s) = a,\ s \in S,\ a \in A_1 \times A_2 \times \cdots \times A_n\}$ and $T' = \eta(r_1 \otimes r_2 \otimes \cdots \otimes r_n)\eta^{-1}$. If $\eta$ is injective, $T = T'$. If $\eta$ is not injective, which means there is some information in the system that no user is aware of, $T \subseteq T'$. Hereafter, we let $T$ equal $T'$, since no user can tell the difference between $T$ and $T'$.

Definition 2 A transition $T$ is a secure transition if the relations $R_1, R_2, \ldots, R_n$ given by

$$R_i = \eta(I_{A_1} \otimes I_{A_2} \otimes \cdots \otimes r_i \otimes \cdots \otimes I_{A_n})\eta^{-1} \quad (4)$$

are well-defined on $S$.

Definition 2 is difficult to understand intuitively in its original form. Let us borrow a concept called semantic independence from the field of parallel processing. We follow the definition of semantic independence of $n$ relations as in [6]. $n$ relations $R_1, R_2, \ldots, R_n$ on the state set $S$ are called semantically independent, denoted by $R_1 \parallel R_2 \parallel \cdots \parallel R_n$, if there are sets $A_1, A_2, \ldots, A_n$, relations $r_1, r_2, \ldots, r_n$ on $A_1, A_2, \ldots, A_n$ respectively, and a function $\zeta : S \to A_1 \times A_2 \times \cdots \times A_n$ so that

$$R_i \zeta = \zeta(I_{A_1} \otimes \cdots \otimes r_i \otimes \cdots \otimes I_{A_n}), \qquad R_i \zeta \zeta^{-1} = R_i. \quad (5)$$

Semantically independent program segments can be executed concurrently without the need for intermediate synchronisation or data exchange, and they can also be executed in any order [6].

It follows that the relations $R_i$ defined in Definition 2 are semantically independent, and it is not difficult to show that $T = R_1 R_2 \cdots R_n$. $r_i$ can be seen as the projected relation of transition $T$ on the view of user $i$. Therefore, Definition 2 means that a secure transition is a transition which can be represented as a relational composition of independent transforms on each user's view space.
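Definition 2 also lends itself to brute-force checking on small examples. The following sketch is ours (it assumes $\eta$ is injective on the toy state space, which holds here): it builds each $R_i$ as in (4) by lifting the projected relation $r_i$ back to $S$, and confirms that composing the lifted relations in either order reproduces $T$ for a transition in which each user only rewrites his private file.

```python
from itertools import product

U, V, W = {0, 1}, {0, 1}, {0, 1}
S = set(product(U, V, W))
fX = lambda s: (s[0], s[1])
fY = lambda s: (s[1], s[2])

# X privately flips file 1 and Y privately flips file 3; file 2 is untouched.
T = {((u, v, w), (1 - u, v, 1 - w)) for (u, v, w) in S}

def projected(T, fi):
    """r_i = {(f_i(s), f_i(s')) | (s, s') in T}."""
    return {(fi(s), fi(t)) for (s, t) in T}

def lifted(T, fi, fj):
    """R_i = eta (r_i (x) I_{A_j}) eta^{-1}: move user i's view component
    along r_i while keeping user j's view component fixed."""
    ri = projected(T, fi)
    return {(s, t) for s in S for t in S
            if (fi(s), fi(t)) in ri and fj(s) == fj(t)}

def compose(R1, R2):
    """Relational composition: first R1, then R2."""
    return {(s, u) for (s, m) in R1 for (m2, u) in R2 if m == m2}

RX = lifted(T, fX, fY)
RY = lifted(T, fY, fX)
print(compose(RX, RY) == T)   # True: T = R_X R_Y
print(compose(RY, RX) == T)   # True: the independent transforms commute
```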
Theorem 1 Definition 1 $\Rightarrow$ Definition 2.

Proof. For simplicity, we give a proof for a two-user system. The quotient set $S/V_i$ and the set $A_i$ are both decided by the same equivalence relation $V_i$, and in turn by $f_i$; the mapping $m_i : S/V_i \to \wp(S)/V_i$ and the relation $r_i$ on $A_i$ are both induced by the same transition, as a mapping $m : S \to \wp(S)$ and as a relation $T$ on $S$, respectively.
From (3) we know that $r_i$ is also well-defined on $A_i$, in the sense that any two states $s_1$ and $s_2$ that are mapped by $f_i$ to the same $a \in A_i$ have exactly the same set of images under $r_i$.

For an arbitrary $(s, s') \in T$, suppose the images of $s$ and $s'$ under $\eta$ are $(a, b)$ and $(a', b')$, respectively. From the definitions, we know $((a, b), (a', b')) \in r_1 \otimes r_2$, $((a, b), (a', b)) \in r_1 \otimes I_{A_2}$ and $((a, b), (a, b')) \in I_{A_1} \otimes r_2$. By Condition 1.2, we can always find $s_1, s_2 \in [s]_{V_{1+2}}$ such that $s \sim_{V_2} s_1 \sim_{V_1} s'$ and $s \sim_{V_1} s_2 \sim_{V_2} s'$. Then the images of $s_1$ and $s_2$ are $(a', b)$ and $(a, b')$, which means the relations $r_1 \otimes I_{A_2}$ and $I_{A_1} \otimes r_2$ are closed under $\eta$. Therefore, the relations $R_1$ and $R_2$ given by

$$R_1 = \eta(r_1 \otimes I_{A_2})\eta^{-1}$$
$$R_2 = \eta(I_{A_1} \otimes r_2)\eta^{-1}$$

are well-defined on $S$. $\Box$

Theorem 2 Definition 2 $\Rightarrow$ Definition 1.

Proof. For the same two-user system, we have

$$R_1 R_2 = \eta(r_1 \otimes I_{A_2})\eta^{-1}\eta(I_{A_1} \otimes r_2)\eta^{-1} = \eta(r_1 \otimes I_{A_2})(I_{A_1} \otimes r_2)\eta^{-1} = \eta(r_1 \otimes r_2)\eta^{-1} = T.$$

Similarly,

$$R_2 R_1 = T = R_1 R_2.$$

From the definition of $R_1$, it is evident that $R_1$ relates an arbitrary state $s$ to a state $s'$ that is equivalent to $s$ under $V_2$. So does $R_2$ with respect to $V_1$.

In order to find out the properties of states related by $T$, we can first consider relation $R_2$ and then $R_1$, or $R_1$ first and then $R_2$. Suppose $s_1 \sim_{V_1} s_2$, $s_1 R_2 \bar{s}_1$ and $s_2 R_2 \bar{s}_2$. Since $R_2$ relates states only to $V_1$-equivalent ones, $\bar{s}_1$ and $\bar{s}_2$ are mapped by $f_1$ to the same element in $A_1$, which in turn is related by $r_1$ to the same set of images in $A_1$. The same argument can be applied to $V_2$. Together, Condition 1.1 holds.

For any $(s, s') \in T = R_1 R_2$, let $x$ be an arbitrary element in $[s]_{V_{1+2}}$. Then there must be a $y \in [s]_{V_{1+2}}$ such that $x \sim_{V_1} y \sim_{V_2} s$. Suppose $s R_1 \bar{s}$. Then $\bar{s} \in [s]_{V_{1+2}}$, and we can choose another $z \in [s]_{V_{1+2}}$ such that $x \sim_{V_2} z \sim_{V_1} \bar{s}$. Since $R_2$ relates $\bar{s}$ to $s'$, we have $s' \sim_{V_1} z$, which means $z$, and hence $x$, lies in $[s']_{V_{1+2}}$. Similarly, we can show that any element of $[s']_{V_{1+2}}$ lies in $[s]_{V_{1+2}}$. Together, Condition 1.2 holds. $\Box$

From Theorems 1 and 2 we know that Definitions 1 and 2 are equivalent. However, there is nothing stating the ownership of the transforms represented by the relations $R_i$. From the definition of $R_i$, we can see that these relations are totally decided by the relation $r_i$ on each user's view space. Although in theory it is possible that $R_i$ may have an owner other than user $i$, in real systems there can be only one reasonable case, in which every $R_i$ represents the total effects of the processes run by user $i$ during the transition. Since it is difficult to apply Definition 2 directly in practice, we give an unwinding theorem as Theorem 3 below.

We assume that during a transition there are a number of program segments executed by different users, and that they can be executed concurrently.

Theorem 3 A transition $T$ as a whole is a secure transition if

3.1 These concurrent program segments are atomic between users, in the sense that transition $T$ can be seen as the relational composition of sets of program segments in some order, where all the program segments in the same set belong to the same user. We represent the $k$th set of program segments belonging to user $i$ by a relation $P_i^k$ on $S$. (Due to concurrency, $P_i^k$ may not be the relational composition of the individual relations representing each program segment in the set. Please see the note below for a detailed explanation.)

3.2 $\forall i \neq j$, $\forall k$, $\forall l$, $P_i^k$ and $P_j^l$ are semantically independent, i.e. $Q_{ij}^k$ and $Q_{ji}^l$ given by

$$Q_{ij}^k = \eta_{i+j}(r_i^k \otimes I_{A_j})\eta_{i+j}^{-1}$$
$$Q_{ji}^l = \eta_{i+j}(I_{A_i} \otimes r_j^l)\eta_{i+j}^{-1}$$

where $\eta_{i+j} = f_i \otimes f_j$ and $r_i^k$, $r_j^l$ are the relations induced by $P_i^k$, $P_j^l$ on $A_i$, $A_j$, respectively, are well-defined on $S$.

Proof. For all $i$, $j$ and $k$, let $Q_i^k = \bigcap_{j \neq i} Q_{ij}^k$. Let $\eta = f_1 \otimes f_2 \otimes \cdots \otimes f_n$, and define $R_i^k$ as

$$R_i^k = \eta(I_{A_1} \otimes I_{A_2} \otimes \cdots \otimes r_i^k \otimes \cdots \otimes I_{A_n})\eta^{-1}.$$

It is apparent that $R_i^k \subseteq Q_i^k$. Therefore, $R_i^k$ is well-defined and, $\forall i \neq j$ and $\forall k, l$, $R_i^k$ and $R_j^l$ are semantically independent. Since $P_i^k$ and $R_i^k$ are equivalent from the users' point of view, we will use $R_i^k$ instead of $P_i^k$ in the following discussion.

Using (5), we can always arrange the order of the compositional components $R_i^k$ of $T$ so that $T$ can be represented as

$$T = (R_1^1 R_1^2 \cdots R_1^{k_1})(R_2^1 R_2^2 \cdots R_2^{k_2}) \cdots (R_n^1 R_n^2 \cdots R_n^{k_n}).$$

If we define $R_1 = R_1^1 R_1^2 \cdots R_1^{k_1}$, i.e. $r_1 = r_1^1 r_1^2 \cdots r_1^{k_1}$, and so on, it is apparent that $T = R_1 R_2 \cdots R_n$ and $R_1, R_2, \ldots, R_n$ satisfy Definition 2. $\Box$

A note on modelling concurrent program segments. If all the program segments in the $k$th set of user $i$ are semantically independent or executed serially, $P_i^k$ is the relational composition of the individual relations representing each program segment in the set.

In this case, we can replace $P_i^k$ with those individual relations and represent each program segment individually as a compositional component of $T$. However, if these program segments are not semantically independent and true concurrency is allowed, their parallel execution may not be representable as the relational composition of the individual relations representing each program segment. In this case, a new relation $P_i^k$ is needed to represent their total effects. This is not easy. However, our purpose is not to find the relations $P_i^k$ themselves but to decide whether they are independent or not.
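In practice the unwinding condition 3.2 amounts to a pairwise check between the program segments of different users. A hedged sketch of such a check for two users is given below (our illustration, not the paper's algorithm): each segment is modelled as a relation on $S$, and a segment is accepted when it equals the lift of its own projection onto its owner's view, which is a simple sufficient proxy for the well-definedness of the corresponding $Q$ relations; under that proxy, segments of different users commute.

```python
from itertools import product

U, V, W = {0, 1}, {0, 1}, {0, 1}
S = set(product(U, V, W))
fX = lambda s: (s[0], s[1])
fY = lambda s: (s[1], s[2])

def projected(P, fi):
    return {(fi(s), fi(t)) for (s, t) in P}

def lift(ri, fi, fj):
    """eta (r_i (x) I_{A_j}) eta^{-1} for the two-user case."""
    return {(s, t) for s in S for t in S
            if (fi(s), fi(t)) in ri and fj(s) == fj(t)}

def acts_only_on(P, fi, fj):
    """Proxy for condition 3.2: the segment P equals the lift of its own
    projection onto user i's view, i.e. it works entirely inside A_i."""
    return P == lift(projected(P, fi), fi, fj)

def commute(P, Q):
    comp = lambda A, B: {(s, u) for (s, m) in A for (m2, u) in B if m == m2}
    return comp(P, Q) == comp(Q, P)

# Segment of X: flip file 1.  Segment of Y: flip file 3.
PX = {(s, (1 - s[0], s[1], s[2])) for s in S}
PY = {(s, (s[0], s[1], 1 - s[2])) for s in S}
print(acts_only_on(PX, fX, fY), acts_only_on(PY, fY, fX))  # True, True
print(commute(PX, PY))                                      # True

# Segment of X that copies file 1 into the shared file 2: not independent.
PX_bad = {(s, (s[0], s[0], s[2])) for s in S}
print(acts_only_on(PX_bad, fX, fY))   # False: it changes a file Y can see
```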
Definition 3 An ideal secure system is a system all of whose possible executions consist of only secure transitions.

To say a system is an ideal secure system means that the system has total security. It does not mean that the system is a secure system according to some security policy. It only means that, no matter which state it starts from, an ideal secure system will not leak to a user any information that is prohibited to that user at the start state.

4 Security Policy and Secure System

When we say a system is secure we mean the system is secure according to a specific security policy. Here the word "secure" denotes an application property.

Definition 4 A security policy is an application specific set of rights which includes the specification of allowable user views of system states and, depending on the application, possibly some permissions for some users to take actions that are not permitted in an ideal secure system.

Definition 5 A state is a secure start state according to a security policy if every user's view of the state satisfies the security policy.

Definition 6 A system execution is a secure execution under a security policy if the execution starts from a secure start state and all program segments in any transition satisfy Theorem 3, except those permitted in the security policy.

All executions of an ideal secure system are secure if the system starts from a secure start state defined by the corresponding security policy.

Definition 7 A system is secure with respect to a security policy if all possible executions of the system are secure according to the security policy.
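One possible way to operationalise Definitions 5-7 on top of the checks sketched earlier is outlined below; the representation of a policy as a start-state predicate plus a set of exempted segment labels is our own simplification for illustration, not a construction taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Set, Tuple

State = Tuple[int, int, int]
# A program segment: (owner, label, relation-on-S as a frozenset of state pairs).
Segment = Tuple[str, str, frozenset]

@dataclass
class Policy:
    start_ok: Callable[[State], bool]               # allowable views of the start state
    exempt: Set[str] = field(default_factory=set)   # segment labels the policy permits anyway

def secure_execution(s0: State,
                     transitions: List[List[Segment]],
                     policy: Policy,
                     independent: Callable[[Segment, Segment], bool]) -> bool:
    """Sketch of Definitions 5 and 6: the execution starts from a secure start
    state, and within every transition each pair of segments owned by different
    users is semantically independent, unless the policy exempts a segment."""
    if not policy.start_ok(s0):
        return False
    for segments in transitions:
        active = [g for g in segments if g[1] not in policy.exempt]
        for a in active:
            for b in active:
                if a[0] != b[0] and not independent(a, b):
                    return False
    return True
```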
5 Discussion

A basic assumption underlying this model is that a user can only get information at the start or end of a transition, not in the middle. If a user query can be modelled neither at the start nor at the end of a transition, then we have to break the transition into several smaller ones so that each user query can be modelled at the start or end of a transition.

The security we are trying to explicate in this paper is about the confidentiality of information already in a system. Transitions that satisfy Definition 1 or 2 are defined to be ideally secure, which means there is no information flow between users. A system with total security can make access control decisions for information that is within the system domain. However, it cannot decide where new information should fit in without help from the security policy. Therefore, input information is not included in the definition of an ideal secure system.

In practice, inputs can be modelled in two ways. One way is to take into account all possible information when we define the state space, which means inputs have been dealt with when we define the views for each user. Another way is to let the security policy be responsible for the security of inputs, which means the security policy should decide, for each transition, which input should go to whose view. In this perspective, inputs are treated as new information from outside the system. A system with total security should be able to protect information that is already in the system, but to decide the sensitivity of new information is beyond its reach.

The model presented in this paper is a peer-to-peer one, i.e. we treat all users as equal. Any private information belonging to a user is protected. While multilevel security is a partially ordered policy, it can be implemented in our model by defining a high user view $V_H$ to be a refinement of a low user view $V_L$, or by giving permissions for high users to access low users' information in the security policy.
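The remark about multilevel security can be illustrated directly: if the Low view function factors through the High view function, then $V_H$ refines $V_L$ and everything Low can see, High can see too. The following few lines are an illustrative sketch of that refinement relationship (our example, not the paper's).

```python
from itertools import product

S = set(product({0, 1}, {0, 1}, {0, 1}))   # (public, confidential, secret)

f_high = lambda s: s                 # High sees the whole state
f_low  = lambda s: (s[0],)           # Low sees only the public part

def refines(f_fine, f_coarse):
    """V_fine refines V_coarse iff f_fine-equal states are f_coarse-equal,
    i.e. the coarse view factors through the fine one."""
    return all(f_coarse(s) == f_coarse(t)
               for s in S for t in S if f_fine(s) == f_fine(t))

print(refines(f_high, f_low))   # True: the High view refines the Low view
print(refines(f_low, f_high))   # False
```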
6 Comparison with previous models

6.1 Bell-LaPadula model

The Bell-LaPadula model is designed to enforce multilevel security and discretionary security within the domain of a computer system. Basically, this model defines how subjects (e.g. user processes) may get access permission to objects (e.g. memory locations). Security is enforced by controlling access to objects, a method known as access control.

Although it is defining properties about security for general purpose computer systems, the Bell-LaPadula model is presented as a set of rules which represent what a secure system should be. This rule-based approach facilitates implementation, but fails to explicate security in a more general and semantic way. Therefore, the model is unable to address security concerns that fall outside the subject-object framework. For example, in a remote read scenario, if a high subject issues a remote read request to a low object, the distributed read request can be viewed as a message being sent from high to low, known as a write down. In the model proposed in this paper (coupled with an appropriate multilevel security policy), the access control decision depends on the effects of the access on each user's view, not on the type of access itself.

Another example is how to deal with trusted subjects. Trusted subjects may operate on behalf of administrators, or they may be the processes that provide critical services such as device driver or memory management functionality. Such critical processes cannot perform their tasks without violating the Bell-LaPadula model rules. The Bell-LaPadula model provides no guidance on how disclosure threats can be avoided for trusted subjects. In the present model, by contrast, all users or processes are subject to the same requirements under an ideal secure system. Again, access control decisions are based on the actual effects of the access on each user's view according to Definition 1, or on the interactions of the processes with other users' processes according to Theorem 3. For example, all users should be given access to a common network card as long as they are not sending information to or signalling users with lower security levels. Trusted subjects who need to perform actions violating Theorem 3 should have explicit permission from the application dependent security policy. We take the view that a secure system should be able to maintain information flow security by itself and seek explicit permission whenever security violating actions are taken.

While the model presented in this paper is based on the concept of information flow, detailed constraints on individual processes are given in terms of user views, and independently in terms of interactions between these processes. Hence, we choose to view this model as a generalized Bell-LaPadula model in which access control decisions are based on the semantic notion of information flow instead of the oversimplified subject-object framework.

6.2 Information flow models

The notion of Noninterference was first introduced as a concrete approach to preventing improper information flow in a deterministic system [7, 8]. A system is defined to be Noninterference secure if users with low security levels are completely unaffected by any activity of users with high security levels. The original Noninterference model has been extended to accommodate non-determinism [21, 13], multi-domain and intransitive security policies [11, 20], and probabilistic inference systems [17, 9, 10]. Other information flow models include [4, 19, 12].

The underlying approach of these formulations is an event-based or trace model which focuses on the user-visible behaviours of a system. This is an intuitive approach to information flow security which captures what we expect from an information flow secure system. However, an input/output specification of security provides little guidance to implementation and verification in real systems. Furthermore, as John McLean has pointed out, security models that focus on a system's user-visible behaviour may not be able to model adequately the internal causal restrictions confidentiality requires [24]. Alternative formulations based on the state of computation have been proposed, which offer better understanding of the internal restrictions on information flow [14, 18, 3].

Our state-based definition of a secure transition, i.e. Definition 1, is equivalent to those presented in some previous state-based models [14, 18, 3]. However, since we represent a transition $T$ as a relation on the state set $S$, our specification is different from previous models. In previous models, sequentially ordered input/output or traces form an integral part of the specification. In this work, the definition of a secure transition is specified solely in terms of users' views of the system states before and after a transition. In our model, during a transition there can be a number of concurrently executed processes belonging to various users. Information about traces is not essential for our formulation. What we care about is the effect of these processes on the users' views, i.e. whether these processes incur information flow that affects the views of users. This can be answered solely by examining the effects on users' views. Hence, our definition is more general. For example, our model allows for concurrency among user processes, which makes it more suitable for distributed systems.

A major contribution of this work is that another, independent definition of secure transition is given in terms of the interactions between the concurrently executed processes during a transition. The two definitions are proven to be equivalent. This gives confidence in the soundness of our specification [15] and provides further insight into the internal causality of information flow.

In addition, this transform-based view of a secure transition provides a natural tool for verifying security in distributed systems. An unwinding theorem stating detailed constraints on the interactions between individual processes is derived to facilitate implementation and verification in the fashion of access security models.

We have not dealt explicitly with compositional issues [14, 23]. However, since we choose to view this model as a generalized Bell-LaPadula model, composition of ideal secure systems can be seen as adding additional processes to a system under the semantic independence rule.

There are two approaches to nondeterminism in computer systems, i.e. the possibilistic approach and the probabilistic approach [22]. In this paper, we have only dealt with possibilistic interference in nondeterministic systems. We hope to investigate probabilistic interference in the future. Another interesting area that needs further effort is how to define proper rights in the security policy for processes that need to violate Theorem 3 [1].

7 Conclusion

We have defined a security property from two distinct aspects of a system execution, i.e. state and transform. These two independent definitions are proven equivalent, which gives both confidence in the soundness of the explication and insight into the internal causality of information flow in computer systems. Using this definition of information flow security, a general security model for nondeterministic computer systems is presented. On one hand, our model is based on information flow, which allows it to explicate security semantically as other information flow models do. On the other hand, our model imposes concrete constraints on the internal system processes, which facilitates implementation and verification in the fashion of access security models. Our model is also more general than previous state-based information flow models, e.g. allowing for concurrency among system processes, which is more suitable for distributed systems.

8 Acknowledgement

I would like to thank Roger Needham for his constant support and helpful discussions; Jong-Hyeon Lee, Stewart Lee, Geraint Price, Aris Zakinthinos, Ross Anderson and other members of the Security Group at Cambridge University for valuable comments on drafts of this paper; and Li Gong for his encouragement during the work. I am also very grateful to the anonymous reviewers for helpful comments and suggestions.

References

[1] W.R. Bevier, R.M. Cohen and W.D. Young, Connection Policies and Controlled Interference, Proc. Computer Security Foundations Workshop VIII, 1995.

[2] D. Bell and L. LaPadula, Secure Computer Systems: Unified Exposition and Multics Interpretation, Technical Report MTR-2997, MITRE, Bedford, Mass., 1975.

[3] W.R. Bevier and W.D. Young, A State-Based Approach to Noninterference, Proc. Computer Security Foundations Workshop VII, 1994.

[4] S.N. Foley, A Universal Theory of Information Flow, Proc. 1987 IEEE Symposium on Security and Privacy, 1987.

[5] S.N. Foley, A Model for Secure Information Flow, Proc. 1989 IEEE Symposium on Security and Privacy, 1989.

[6] M. Franzle, B. von Stengel and A. Wittmuss, A generalized notion of semantic independence, Information Processing Letters, Vol. 53, pp. 5-9, 1995.

[7] J. Goguen and J. Meseguer, Security policies and security models, Proc. 1982 IEEE Symposium on Security and Privacy, Oakland, CA, 1982.

[8] J. Goguen and J. Meseguer, Unwinding and inference control, Proc. 1984 IEEE Symposium on Security and Privacy, April 1984.

[9] J.W. Gray, III, Probabilistic interference, Proc. 1990 IEEE Symposium on Research in Security and Privacy, May 1990.

[10] J.W. Gray, III, Towards a mathematical foundation for information flow security, Proc. 1991 IEEE Symposium on Research in Security and Privacy, 1991.

[11] J.T. Haigh and W.D. Young, Extending the noninterference version of MLS for SAT, Proc. 1986 IEEE Symposium on Security and Privacy, April 1986.

[12] D. Johnson and J. Thayer, Security and the composition of machines, Proc. Computer Security Foundations Workshop, 1988.

[13] D. McCullough, Foundations of Ulysses: The theory of security, Technical Report RADC-TR-87-222, Rome Air Development Center, July 1988.

[14] D. McCullough, A hookup theorem for multilevel security, IEEE Transactions on Software Engineering, 16(6):563-568, June 1990.

[15] J. McLean, Reasoning about security models, Proc. 1987 IEEE Symposium on Security and Privacy, Oakland, CA, 1987.

[16] J. McLean, The Algebra of Security, Proc. 1988 IEEE Symposium on Security and Privacy, April 1988.

[17] J. McLean, Security models and information flow, Proc. 1990 IEEE Symposium on Security and Privacy, May 1990.

[18] I. Moskowitz and O. Costich, A Classical Automata Approach to Noninterference Type Problems, Proc. Computer Security Foundations Workshop V, 1992.

[19] C. O'Halloran, A calculus of information flow, Technical Report 92001, Defence Research Agency, Malvern, Worc., UK, December 1992.

[20] J. Rushby, Noninterference, transitivity, and channel-control security policies, Technical Report, SRI Computer Science Laboratory, April 1991.

[21] D. Sutherland, A model of information, Proc. 9th National Security Conference, NSA/NIST, 1986.
[22] J.T. Wittbold and D.M. Johnson, Information Flow in Nondeterministic Systems, Proc. 1990 IEEE Symposium on Research in Security and Privacy, May 1990.

[23] A. Zakinthinos and E.S. Lee, The Composability of Non-Interference, Proc. Computer Security Foundations Workshop VIII, 1995.

[24] M.E. Zurko, J. McLean, J. Millen, R. Morris, and M. Schaefer, What are the foundations of computer security?: Introduction to panel discussion, Proc. Computer Security Foundations Workshop VI, 1993.
