
Computers & Security, 11 (1992) 753-763

Lattice-Based Enforcement of Chinese Walls
Ravi S. Sandhu
Center for Secure Information Systems, and Department of Information and Software Systems Engineering, George Mason University, Fairfax, VA 22030, USA*

*E-mail: sandhu@sitevax.gmu.edu.

0167-4048/92/$5.00 © 1992, Elsevier Science Publishers Ltd.

The Chinese Wall policy was identified and so named by Brewer and Nash. This policy arises in the financial segment of the commercial sector, which provides consulting services to other companies. Consultants naturally have to deal with confidential company information for their clients. The objective of the Chinese Wall policy is to prevent information flows which cause conflict of interest for individual consultants. Brewer and Nash develop a mathematical model of the Chinese Wall policy, on the basis of which they claim that this policy "cannot be correctly represented by a Bell-LaPadula model." In this paper we demonstrate that the Brewer-Nash model is too restrictive to be employed in a practical system. This is due to their treatment of users and subjects as synonymous concepts, with the consequence that they do not distinguish security policy as applied to human users versus security policy as applied to computer subjects. By maintaining a careful distinction between users, principals and subjects, we show that the Chinese Wall policy is just another lattice-based information policy which can be easily represented within the Bell-LaPadula framework.

Keywords: Chinese Wall policy, Lattice model, Bell-LaPadula model.

1. Introduction

The Chinese Wall policy arises in the financial segment of the commercial sector, which provides consulting services to other companies. The policy was identified by Brewer and Nash [2]. It attracted considerable interest in the security community, because it is a real-world information flow policy in the commercial sector rather than the usual military or government sectors. Moreover, it has characteristics which are quite different from the military security policy considered in the Bell-LaPadula model [1].

The objective of the Chinese Wall policy is to prevent information flows which cause conflict of interest for individual consultants. Consultants naturally have to deal with confidential company information for their clients. A single consultant should not have access to information about two banks or information about two oil companies, etc., because such insider information creates conflict of interest in the consultant's analysis and disservice to the clients. Insider information about companies of the same type also presents the potential for consultants to abuse such knowledge for personal profit.

The Chinese Wall policy has a dynamic aspect to it. Consider a consultant who is new in the field, say fresh out of graduate school. At this point there is no mandatory restriction on the consultant's access rights. The consultant can access information about any company in the database (restricted only by discretionary controls which we will be ignoring throughout this paper).


Now say that the consultant accesses information about bank A. Thereafter that consultant is mandatorily denied access to information about any other bank. There are, however, still no mandatory restrictions regarding that consultant's access to oil companies, insurance companies, etc.

Largely due to this dynamic aspect, Brewer and Nash claim that the Chinese Wall policy "cannot be correctly represented by a Bell-LaPadula model." One objective of our paper is to dispute this claim, by showing how the Chinese Wall policy is just another example of a lattice-based information flow policy which can be easily represented within the Bell-LaPadula framework.

Another objective of our paper is to show the vital importance of distinguishing security policy as applied to human users versus security policy as applied to computer subjects. Brewer and Nash fail to make this distinction. They treat users and subjects as synonymous concepts. As a result their model is much too restrictive to be employed in a practical system. By maintaining a careful distinction between users, principals and subjects, we develop a model for the Chinese Wall policy which addresses threats from Trojan Horse infected programs. The Brewer-Nash model on the other hand makes a futile attempt to safeguard against malicious consultants.

The rest of this paper is organized as follows. Section 2 reviews the distinction between users, principals and subjects in a computer system. Section 3 discusses the Chinese Wall policy and the threats that it addresses. We carefully distinguish between threats posed by malicious consultants versus threats posed by Trojan Horse infected programs. We argue that the scope of computer security is largely limited to threats posed by Trojan Horse infected programs. After all, consultants who choose to share information in violation of Chinese Walls can do so equally efficiently by communication means outside the computer system. With this context we analyze the Brewer-Nash model in Section 4 and show that this model is unduly restrictive. In Section 5 we develop a lattice-based model for the Chinese Wall policy and relate it to the Bell-LaPadula model. Section 6 concludes the paper.

2. Users, Principals and Subjects

To understand the Chinese Wall policy and its nuances with respect to subjects versus human users, we must first understand the distinction between users, principals and subjects. This distinction is fundamental to computer security and goes back to the beginnings of the discipline. Nevertheless, it is often dealt with imprecisely in the literature, leading to undue confusion about the objectives of computer security.

2.1. Users

We understand a user to be a human being. We assume that each human being known to the system is recognized as a unique user. In other words the unique human being Jane Doe cannot have more than one user identity in the system. If Jane Doe is not an authorized user of the system she has no user identity. Conversely, if she is an authorized user she is known by exactly one user identity, say, JDoe. Clearly this assumption can be enforced only by adequate administrative controls, which we assume are in place.

2.2. Principals

Our concept of principal is adapted from Saltzer and Schroeder [6]. Each user may have several principals associated with the user. On the other hand each principal is required to be associated with a single user.

The motivation in ref. 6 for this concept was that different principals would correspond to, say, different projects on which the user works. Every time a user logs in (i.e. signs on) to the system it is as a particular principal. Thus if Jane Doe were assigned to projects Red and Blue, she would have three principals associated with her user identity, say, JDoe, JDoe.Red and JDoe.Blue.
On any session Jane could log in as any one of these principals, depending on the work she planned to do in that session. Each principal associated with JDoe obtains a different set of access rights. Thus JDoe.Red has access to the files and other objects of project Red, but not project Blue. Similarly, JDoe.Blue has access to the files and other objects of project Blue, but not project Red. The principal JDoe is a generic principal for Jane allowing access to her personal files, but not to any of the project files.

The notion of principal reflects the everyday reality that individuals wear several different "hats" in an organization, with their authority and responsibility determined by the particular "hat" they are wearing at a given moment. Saltzer and Schroeder introduce principals in a discretionary context. The concept carries over equally well to mandatory policies. We often encounter phrases such as, "the top-secret user John logs in at the secret level," in the security literature. What are we to make of this statement? In the user-principal terminology we interpret this statement as follows:

• Firstly, there is a unique user John, cleared to top-secret, independent of the level at which John logs in.

• Secondly, John can log in at every level dominated by top-secret. At each of these levels there is a separate principal associated with John. So John.top-secret is the principal when John logs in at top-secret, John.secret is the principal when John logs in at secret, etc.

We will see that this concept of a principal is the key to achieving proper enforcement of Chinese Walls in a computer system.

2.3. Subjects

We understand a subject to be a process in the system, i.e. a subject is a program in execution. Each subject is associated with a single principal on behalf of whom the subject executes. In general a principal may have many subjects associated with it concurrently running in the system.

For simplicity we assume that a subject executes with all the privileges of its associated principal*. Thus when Jane Doe logs in as JDoe.Red and invokes her favorite editor, say Emacs, a subject associated with JDoe.Red is created and runs the Emacs code. This subject acquires all the access rights of the principal JDoe.Red. Similarly when John logs in as John.top-secret every subject spawned during that session runs at the top-secret level.

*This is the actual situation in most existing systems, including those specifically designed for security. More generally a subject could be created with a proper subset of the privileges of its associated principal. The most general case is to allow a subject to have multiple parents, from each of whom it obtains some privileges.

To summarize:

• each authorized human user is known as a unique user to the system,

• each user can log in as one of several principals but each principal is associated with only one user, and

• each principal can spawn several subjects but each subject is associated with only one principal.
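The summary above amounts to a small data model. The following Python fragment is a purely illustrative sketch (not part of the original paper) of the same constraints; the class and attribute names (User, Principal, Subject, privileges, and the privilege strings) are assumptions made here, not an API.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass
class User:
    # A human being, known to the system under exactly one user identity.
    user_id: str                                   # e.g. "JDoe"
    principals: List["Principal"] = field(default_factory=list)

@dataclass
class Principal:
    # A login identity ("hat") of a single user, e.g. "JDoe.Red".
    name: str
    user: User                                     # exactly one user per principal
    privileges: FrozenSet[str] = frozenset()

@dataclass
class Subject:
    # A process; it acts on behalf of exactly one principal and, under the
    # simplifying assumption in the text, runs with all of its privileges.
    principal: Principal

    @property
    def privileges(self) -> FrozenSet[str]:
        return self.principal.privileges

# Jane Doe logs in as JDoe.Red and starts an editor process.
jane = User("JDoe")
red = Principal("JDoe.Red", jane, frozenset({"read:Red", "write:Red"}))
jane.principals.append(red)
editor = Subject(red)
assert "read:Red" in editor.privileges
```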
3. The Chinese Wall Policy

The Chinese Wall policy is intuitively simple and easy to describe. In this section we describe this policy by adapting the description of Brewer and Nash [2] and adding additional concepts to it. It is important to keep in mind that we are deliberately ignoring all discretionary access control issues in this paper. In practice the Chinese Wall policy as described here would be the mandatory component of a larger policy which includes additional discretionary controls (and possibly additional mandatory controls).

We begin by distinguishing public information from company information. There are no mandatory controls on reading public information.


Company information on the other hand is subjected to mandatory controls, which we will discuss in a moment. The policy for writing public or company information is indirectly determined by its impact on providing indirect read access contrary to the mandatory read controls. It is in this respect that users and subjects must be treated differently. We will consider mandatory controls on writing information following our discussion of the read controls.

The motivation for recognizing public information is that a computer system used for financial analysis will inevitably have large public databases of financial information for use by consultants. Moreover, public information allows for desirable features such as public bulletin boards and electronic mail which users expect to be available in any modern computer system. Public information can be read by all users, principals and subjects in the system (restricted only by discretionary controls which, as we have said, we are ignoring in this paper).

Company information is categorized into mutually disjoint conflict of interest classes as shown in Fig. 1. Each company belongs to exactly one conflict of interest (COI) class. The Chinese Wall policy requires that a consultant should not be able to read information for more than one company in any given COI class. To be concrete let us say that COI class i consists of banks and COI class j consists of oil companies. The Chinese Wall stipulation is that the same consultant should not have read access to two or more banks or two or more oil companies.

The Chinese Wall policy has a mix of free choice and mandated restrictions. So long as a consultant has not yet been exposed to any company information about banks, that consultant has the potential to read information about any bank. The moment this consultant reads, say, bank A information, thereafter that consultant is to be denied read access to all other banks. The free choice of selecting the first company to read in a COI class can be exercised once and is then forever gone.

So long as we have focused on read access the Chinese Wall policy has been easy to state and understand. When we turn to write access the situation becomes more complicated and subtle. This is the usual case with confidentiality policies. For example, the simple-security property of the well-known Bell-LaPadula model [1] is similarly intuitive and straightforward whereas the *-property is more subtle. (A statement of these properties is given at the end of Section 5.)

In computer security it is easy to confuse the threat from malicious users with the threat from malicious subjects.

[Figure 1 depicts Company Information partitioned into mutually disjoint conflict of interest classes: Conflict of Interest Class i containing Company i.1, ..., Company i.m, and Conflict of Interest Class j containing Company j.1, ..., Company j.n.]

Fig. 1. Company information in the Chinese Wall policy.


In the Bell-LaPadula model, mandatory controls on write access are imposed to prevent Trojan Horse infected subjects from leaking information contrary to the system policy. These controls do not address the threat of malicious human users. It should always be kept in mind that a malicious user can compromise information confidentiality by employing communication means outside the computer system. Thus John as a human being cleared to top-secret is nevertheless able to write and publish unclassified documents. This is because John is trusted not to leak top-secret information in his unclassified writings. On the other hand malicious subjects executing with John's top-secret privileges can leak top-secret information if not constrained by the *-property.

In much the same way a computer system cannot solve the problem of a malicious consultant. A determined consultant can leak damaging confidential information about a company to, say, the Wall Street Journal by means of a telephone call. Similarly, a consultant can provide insider company information directly to its competitors or share this information with other consultants. Just as our top-secret user John is trusted not to divulge secrets, so must our consultants be trusted as individuals not to break Chinese Walls.

4. The Brewer-Nash Model

We now consider the Brewer-Nash model for the Chinese Wall policy. In this model data is viewed as consisting of objects each of which belongs to a company dataset. The company datasets are categorized into conflict of interest (COI) classes, along the lines of Fig. 1.

The Brewer-Nash model does not distinguish users, principals and subjects. It uses the single concept of subject for all three notions. This leads them to propose the following mandatory rules.

1. BN Read Rule: Subject S can read object O only if

• O is in the same company dataset as some object previously read by S (i.e. O is within the wall), or

• O belongs to a COI class within which S has not read any object (i.e. O is outside the wall).

2. BN Write Rule: Subject S can write object O only if

• S can read O by the BN read rule, and

• no object can be read which is in a different company dataset from the one for which write access is requested.

We have called these the BN read rule and BN write rule for ease of reference. They are respectively analogous to the simple-security and *-properties of the Bell-LaPadula model.

The BN read rule conveys the dynamic aspect of the Chinese Wall policy. This rule clearly applies to the human users, namely the consultants, in the system. Since the Brewer-Nash model does not distinguish between users and subjects, this rule is also applied to all subjects in the system.

The BN write rule is brought in to prevent Trojan Horse laden subjects from breaching the Chinese Walls. To see its motivation consider that consultant John has read access to Bank A objects and Oil Company OC objects, and that consultant Jane has read access to Bank B objects and Oil Company OC objects. Individually John and Jane are in compliance with the Chinese Wall policy. Now suppose John is allowed write access to OC objects. A Trojan Horse infected subject running with John's privileges can thereby transfer information from Bank A objects to OC objects. These OC objects can be read by subjects running on behalf of Jane, who then has read access to information about Bank A and Bank B*.

*Note that computer security cannot do anything to prevent John and Jane from exchanging Bank A and Bank B information outside the computer system. But in such an exchange both John and Jane are accomplices. In the example given here John is not an accomplice but rather an unwitting victim of a Trojan Horse.
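To make the two rules concrete, here is a minimal sketch (ours, not Brewer and Nash's) of a check against the set of (COI class, company dataset) pairs a subject has already read. The names (Obj, bn_can_read, bn_can_write) are hypothetical, and the write rule follows one reasonable reading of "no object can be read", namely the company datasets the subject has actually read; this reproduces the implications derived in the next section.

```python
from typing import NamedTuple, Set, Tuple

class Obj(NamedTuple):
    coi: str        # conflict of interest class, e.g. "banks"
    dataset: str    # company dataset, e.g. "Bank A"

ReadHistory = Set[Tuple[str, str]]   # (coi, dataset) pairs already read by S

def bn_can_read(reads: ReadHistory, o: Obj) -> bool:
    """BN read rule: O is within the wall, or its COI class is untouched."""
    read_datasets = {ds for _, ds in reads}
    read_cois = {coi for coi, _ in reads}
    return o.dataset in read_datasets or o.coi not in read_cois

def bn_can_write(reads: ReadHistory, o: Obj) -> bool:
    """BN write rule, as read here: S must be able to read O, and every company
    dataset S has read must be O's own dataset (otherwise a Trojan Horse could
    copy one dataset into another)."""
    return bn_can_read(reads, o) and all(ds == o.dataset for _, ds in reads)

# John's subject has read Bank A and Oil Company OC.
reads: ReadHistory = {("banks", "Bank A"), ("oil", "OC")}
assert not bn_can_read(reads, Obj("banks", "Bank B"))   # another bank: denied
assert bn_can_read(reads, Obj("oil", "OC"))             # within the wall
assert not bn_can_write(reads, Obj("oil", "OC"))        # has read two datasets
```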


The BN write rule is successful in preventing such information leakage by Trojan Horses. However, it does so at an unacceptable cost. It is easy to see that the BN write rule has the following implications.

• A subject which has read objects from two or more company datasets cannot write at all.

• A subject which has read objects from exactly one company dataset can write to that dataset.

These implications are clearly unacceptable (if the computer system is to be used for something more than a read-only repository of confidential information). Under this regime a consultant can work effectively so long as he or she is assigned to exactly one company (however, even then the consultant is forbidden to write public information). The moment the consultant is assigned to a second company, he or she will be unable to write any information into the system.

Fortunately these implications are not inherent in the Chinese Wall policy. They are instead a consequence of the Brewer-Nash model's failure to distinguish rules applied to users from rules applied to subjects. The key observation is that we can live with the implications listed above with respect to subjects, but not with respect to users. In particular, limiting every subject to reading and writing a single company dataset is an acceptable restriction. Thus, any subject executing on behalf of John should either be able to read and write Bank A objects, or read and write Oil Company OC objects. John as a human being is, however, allowed to read and write both Bank A and Oil Company OC objects. For that matter, John is also allowed to read and write public objects. However, he is not allowed to do all of these actions using the same subject.

5. A Lattice Interpretation

In this section we provide a lattice-based interpretation of the Chinese Wall policy. It was shown by Denning [3] that information flow policies in general require that objects be labeled with a lattice structure. Denning's result is derived from the following axioms.

(1) Information flow is reflexive, transitive and antisymmetric.

(2) There is a lowest class of information which is allowed to flow into all other classes.

(3) For any two classes of information A and B there is a class C which is the least upper bound of A and B.

These axioms are generally accepted as being very reasonable. Some researchers have tried to relax them further, for instance by dropping the transitivity requirement on information flow, but in the main the security community has accepted them.

Now there is nothing in the Chinese Wall policy that is contrary to these axioms. We will bear out this claim by showing how we can construct a lattice structure for the Chinese Wall policy. We do so by defining a number of axioms in Section 5.1. Let us first briefly elaborate on Denning's axioms.

• The requirement that information flow is reflexive amounts to saying that information can flow from a security class to itself; for example, company A information can flow to company A objects. This assumption recognizes the obvious, that is, information contained in an object has already flowed to it.

• The transitivity assumption stipulates that, wherever indirect information flow is possible, direct information flow is also possible. In other words if information can flow from class A to class B and from class B to class C, then information should be allowed to flow from class A directly to class C.

• Given the reflexivity and transitivity assumptions, the antisymmetry assumption merely eliminates redundant security classes by collapsing them into a single class.


The antisymmetry assumption requires that if information can flow from class A to class B and from class B to class A, then classes A and B must be the same. In other words there is no point in having distinct security classes A and B if information can flow from A to B and vice versa. We should have a single class (call it A or B) in such cases.

• The requirement for a system low security class from which information can flow to all other classes is satisfied by the existence of public information in every system.

• The least upper bound of security classes A and B is the class C such that (i) information from both A and B can flow to C, and (ii) for all classes D such that information can flow to D from both A and B it is the case that information can flow from C to D. The first part of this requirement assures us that we will be able to label information obtained by combining information from classes A and B. The second part stipulates that the label assigned to the combined information is unambiguous and precise. Because least upper bound is a commutative and associative operator, these properties extend to information obtained by combining information from any finite collection of security classes A_1, A_2, ..., A_k.

5.1. The Lattice Structure for Chinese Walls

We now present the axioms which give us a lattice structure for the Chinese Wall policy. Let us begin by introducing the conflict of interest classes and companies.

A1. There are n conflict of interest classes: COI_1, COI_2, ..., COI_n.

A2. COI_i = {1, 2, ..., m_i}, for i = 1, 2, ..., n, i.e. each conflict of interest class COI_i consists of m_i companies.

In other words there are n conflict of interest classes, each of which contains some number of companies, as visually depicted in Fig. 1.

We propose to label each object in the system with the companies from which it contains information. Thus an object which contains information from Bank A and Oil Company OC is labeled {Bank A, Oil Company OC}. Labels such as {Bank A, Bank B, Oil Company OC} are clearly contrary to the Chinese Wall policy. We prohibit such labels in our system by defining a security label as an n-element vector [i_1, i_2, ..., i_n], where each i_k ∈ COI_k or i_k = ⊥ (the symbol ⊥ is read as bottom).

An object labeled [i_1, i_2, ..., i_n] is interpreted as signifying that it contains information from company i_1 of COI_1, company i_2 of COI_2 and so on. When an element of the vector is ⊥, it means that the object has no information from any company in the corresponding conflict of interest class. For example, an object which contains information only from company 4 in COI_3 will be labeled with the vector [⊥, ⊥, 4, ⊥, ..., ⊥], i.e. all elements other than the third one will be ⊥. Similarly, an object which contains information from company 7 in COI_2 and company 5 in COI_4 will be labeled with the vector [⊥, 7, ⊥, 5, ⊥, ..., ⊥].

This leads us to the following definition for the set of labels.

A3. LABELS = {[i_1, i_2, ..., i_n] | i_1 ∈ COI'_1, i_2 ∈ COI'_2, ..., i_n ∈ COI'_n}, where COI'_k = COI_k ∪ {⊥}.

Note that the label which has all ⊥ elements naturally corresponds to public information. There is, however, no naturally occurring system high label (in fact such a label is contrary to the Chinese Wall policy).

In order to complete the lattice we introduce a special label for system high (which we will not assign to any subject in the system).

A4. EXTLABELS = LABELS ∪ {SYSHIGH}


The SYSHIGH label is not assigned to any subject in the system.

Next we define the dominance relation among labels as follows, where the notation l_1[i_k] denotes the kth element of label l_1.

A5. (∀ l_1, l_2 ∈ LABELS) [ l_1 ≥ l_2 ⇔ (∀ k = 1, ..., n) [ (l_1[i_k] = l_2[i_k]) ∨ (l_2[i_k] = ⊥) ] ]

In other words, l_1 dominates l_2 provided that l_1 and l_2 agree wherever l_2 ≠ ⊥. Note that every label dominates the system low label which consists of all ⊥ elements. The notation l_1 > l_2 denotes that l_1 ≥ l_2 and l_1 ≠ l_2. The dominance relation is opposite to the information flow relation, i.e. l_1 > l_2 signifies that information can flow from l_2 to l_1 but not vice versa.

For example [1, 3, 2] > [1, 3, ⊥] and [1, 3, ⊥] > [1, ⊥, ⊥], while [1, 3, ⊥] and [1, 2, ⊥] are incomparable. Objects labeled [1, 3, 2] contain information for company 1 in COI_1, company 3 in COI_2 and company 2 in COI_3. Objects labeled [1, 3, ⊥] only contain information for company 1 in COI_1 and company 3 in COI_2. The former class therefore dominates the latter (but not vice versa), i.e. information from objects labeled [1, 3, ⊥] can flow to objects labeled [1, 3, 2] (but not vice versa). Classes [1, 3, ⊥] and [1, 2, ⊥] are incomparable, so information from one cannot flow to the other. Similarly, [1, ⊥, 2] and [1, 2, ⊥] are incomparable.

To account for the special system high label we have the following axiom.

A6. (∀ l_1 ∈ EXTLABELS) [ SYSHIGH ≥ l_1 ]

Recall that from axiom A4 the SYSHIGH label is not assigned to any subject in the system.

To complete the lattice structure it remains to define the least upper bound operator. In order to do so we introduce the following notion.

A7. l_1, l_2 ∈ LABELS are compatible if and only if (∀ k = 1, ..., n) [ (l_1[i_k] = l_2[i_k]) ∨ (l_1[i_k] = ⊥) ∨ (l_2[i_k] = ⊥) ]

In other words, two labels are compatible if wherever they disagree at least one of them is ⊥. Note that if l_1 ≥ l_2 then l_1 and l_2 are compatible. Labels which are incomparable with respect to the dominance relation may or may not be compatible, e.g. [1, 3, ⊥] and [1, 2, ⊥] are incompatible while [1, ⊥, 2] and [1, 2, ⊥] are compatible. Intuitively, information from compatible incomparable classes can be combined without violating the Chinese Wall policy. However, information from incompatible incomparable classes cannot be combined without violating Chinese Walls.

The following axiom expresses the requirement that incompatible labels cannot be legitimately combined under the Chinese Wall policy.

A8. If l_1 is incompatible with l_2 then lub(l_1, l_2) = SYSHIGH

For example the least upper bound of [1, 3, ⊥] and [1, 2, ⊥] is SYSHIGH. Since there are no SYSHIGH subjects this information is inaccessible in the system.

For compatible labels the least upper bound is computed as follows.

A9. If l_1 is compatible with l_2 then lub(l_1, l_2) = l_3, where l_3[i_k] = l_1[i_k] if l_1[i_k] ≠ ⊥, and l_3[i_k] = l_2[i_k] otherwise.

If l_1 ≥ l_2 this definition gives us lub(l_1, l_2) = l_1. Similarly for l_2 ≥ l_1 we have lub(l_1, l_2) = l_2. For incomparable l_1 and l_2, the least upper bound consists of all the non-⊥ elements of l_1 and l_2. For example, the least upper bound of [1, ⊥, 2] and [1, 2, ⊥] is [1, 2, 2].


Finally, to complete the definition of least upper bound with respect to the special system high label, we have the following axiom.

A10. (∀ l ∈ EXTLABELS) [ lub(SYSHIGH, l) = lub(l, SYSHIGH) = SYSHIGH ]

This completes our definition of the Chinese Wall lattice.

It is easy to verify that the axioms A1-A10 define a lattice on the set of labels EXTLABELS with dominance relation ≥. Information flow occurs in the direction opposite to the dominance relation and is obviously reflexive, transitive and antisymmetric. The required system low class is identified by the label consisting of all ⊥ elements, and the least upper bound operator has been defined.

5.2. Chinese Wall Model

Given this lattice structure we have developed, let us see how we can solve the Chinese Wall problem. To be concrete we describe our solution in terms of the specific lattice of Fig. 2. The solution is, however, completely general and applies to any size Chinese Wall lattice.

[Fig. 2 is a Hasse diagram of the lattice for two conflict of interest classes with two companies each: SYSHIGH at the top, the all-⊥ label [⊥, ⊥] at the bottom, and the labels [1, ⊥], [2, ⊥], [⊥, 1], [⊥, 2], [1, 1], [1, 2], [2, 1] and [2, 2] in between.]

Fig. 2. Example of a Chinese Wall lattice.

Figure 2 shows a lattice with two conflict of interest classes, each with two companies in it. The lattice is shown by its Hasse diagram, in which the dominance relation goes from top to bottom with transitive and reflexive edges omitted. We require every object in the system to be labeled by one of the labels in Fig. 2. Objects with company information from a single company are labeled as follows:

• [1, ⊥]: objects with information for company 1 in COI_1.

• [2, ⊥]: objects with information for company 2 in COI_1.

• [⊥, 1]: objects with information for company 1 in COI_2.

• [⊥, 2]: objects with information for company 2 in COI_2.

Objects with company information from more than one company (without violation of Chinese Walls) are labeled as follows:

• [1, 1]: objects with information for company 1 in COI_1 and company 1 in COI_2.

• [1, 2]: objects with information for company 1 in COI_1 and company 2 in COI_2.

• [2, 1]: objects with information for company 2 in COI_1 and company 1 in COI_2.

• [2, 2]: objects with information for company 2 in COI_1 and company 2 in COI_2.

Objects labeled SYSHIGH violate the Chinese Wall policy, in that they can combine information from any subset of the companies. These objects are inaccessible in the system (and therefore might as well not exist).
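As a cross-check of Fig. 2, the short self-contained script below (illustrative only; it repeats the tuple-with-None label representation used in the earlier sketch) enumerates the nine labels other than SYSHIGH and, for each, lists the labels it strictly dominates.

```python
from itertools import product

BOTTOM = None
COMPANIES = [(1, 2), (1, 2)]   # two COI classes, two companies each (Fig. 2)

def dominates(l1, l2):
    # A5: l1 >= l2 iff they agree wherever l2 is not bottom
    return all(b == BOTTOM or a == b for a, b in zip(l1, l2))

# A3: every label picks, per COI class, either a company or bottom
labels = [tuple(choice) for choice in
          product(*[(BOTTOM,) + cs for cs in COMPANIES])]

def show(l):
    return "[" + ", ".join("⊥" if x is BOTTOM else str(x) for x in l) + "]"

for l1 in labels:
    below = [show(l2) for l2 in labels if l2 != l1 and dominates(l1, l2)]
    print(f"{show(l1)} strictly dominates: {below or ['nothing']}")
# SYSHIGH (not enumerated here) dominates every label, per axiom A6.
```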


Now let us consider labels on users, principals and subjects. We treat the label of a user as a "high-water mark" which can float up in the lattice but not down. A newly enrolled user in the system is assigned the label [⊥, ⊥]*. As the user reads† various company information the user's label floats up in the lattice. For example, by reading information about company 1 in conflict of interest class 1 the user's label is modified to [1, ⊥]. Reading information about company 2 in conflict of interest class 2 further modifies the user's label to [1, 2].

*This assumes that the user is entering the system with a "clean slate." A user who has had prior exposure to company information in some other system should enter with an appropriate label reflecting the extent of this prior exposure.

†Constrained by discretionary access controls which we have ignored in this paper. For instance, a user may be allowed to read only that company information which the user's boss assigns him or her to.

This floating up of a user's label is allowed so long as the label does not float up to SYSHIGH. Operations which would force the user's label to SYSHIGH are thereby prohibited. The ability to float a user's label upwards‡ addresses the dynamic requirement of the Chinese Wall policy. The floating label keeps track of a user's read operations in the system. It accounts for the dynamic aspect of the Chinese Wall policy.

‡This float upwards does not present the security problems with changing labels discussed in [5]. This is due to the upward floating or high-water mark nature of our user labels and the fact that object labels are not changed.

With each user we associate a set of principals, one at each label dominated by the user's label. Thus, if Jane as a user has the label [1, 1], she has the following principals associated with her:

• Jane.[⊥, ⊥]

• Jane.[1, ⊥]

• Jane.[⊥, 1]

• Jane.[1, 1]

Each of these principals corresponds to the label with which she wishes to log in on a given session. These principals have fixed labels which do not change. The floating up of a user's label corresponds to the creation of one or more new principals for that user. For example, when Jane had the label [1, ⊥], she had only two principals associated with her, viz., Jane.[⊥, ⊥] and Jane.[1, ⊥]. When Jane's label floated up to [1, 1], she acquired two new principals Jane.[⊥, 1] and Jane.[1, 1]. This floating up of Jane's label is achieved by Jane's directive to the system (suitably constrained by discretionary controls). The system will allow this action only if the float up is to some label strictly below SYSHIGH.

Each principal has a fixed label. Every subject created by that principal inherits that label. Thus, all activity in the system initiated by Jane.[1, ⊥] will be carried out by subjects labeled [1, ⊥]. The label of a subject is determined by the label of the principal who created that subject. A subject's label remains fixed for the life of that subject.

All read and write operations in the system are carried out by subjects. These subjects are constrained by the familiar simple-security and *-properties of the Bell-LaPadula model given below.

• Simple Security Property. A subject S may have read access to an object O only if L(S) ≥ L(O).

• *-Property. A subject S can only write an object O if the security class of the subject is dominated by the class of the object, i.e. if L(O) ≥ L(S).

Here L(S) is the security label of subject S and L(O) is the security label of object O.

Now suppose that Jane logs in as the principal Jane.[1, ⊥]. All subjects created during that session will inherit the label [1, ⊥]. This will allow these subjects to read public objects labeled [⊥, ⊥], to read and write company objects labeled [1, ⊥], and write objects with labels [1, 1], [1, 2] and SYSHIGH.

(As is often done in multilevel secure database systems, we can prohibit this "write up" if we so choose and allow subjects to write only at their own level, i.e. the *-property is strengthened to require L(S) = L(O).)
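The enforcement just described, with a high-water mark on the user, a principal at every label dominated by it, subjects inheriting their principal's fixed label, and the two Bell-LaPadula properties, can be sketched as follows. This is our own illustrative code under the same tuple-with-None label representation as before; names such as ChineseWallUser, float_up and principals are assumptions, not the paper's.

```python
from itertools import product
from typing import Optional, Tuple

BOTTOM = None
Label = Tuple[Optional[int], ...]

def dominates(l1: Label, l2: Label) -> bool:
    # A5: l1 >= l2 iff l1 and l2 agree wherever l2 is not bottom
    return all(b == BOTTOM or a == b for a, b in zip(l1, l2))

class ChineseWallUser:
    """High-water-mark user label; principals exist at every dominated label."""
    def __init__(self, name: str, n_coi: int):
        self.name = name
        self.label: Label = (BOTTOM,) * n_coi      # newly enrolled: all bottom

    def float_up(self, coi_index: int, company: int) -> None:
        """Record that the user has read company information in one COI class.
        Refused if a different company in that class was already read, since
        that would push the user's label to SYSHIGH."""
        current = self.label[coi_index]
        if current not in (BOTTOM, company):
            raise PermissionError("would violate the Chinese Wall (SYSHIGH)")
        self.label = self.label[:coi_index] + (company,) + self.label[coi_index + 1:]

    def principals(self):
        """All labels dominated by the user's label; each is a principal."""
        choices = [(BOTTOM,) if x == BOTTOM else (BOTTOM, x) for x in self.label]
        return list(product(*choices))

def subject_can_read(subject_label: Label, object_label: Label) -> bool:
    return dominates(subject_label, object_label)      # simple-security property

def subject_can_write(subject_label: Label, object_label: Label) -> bool:
    return dominates(object_label, subject_label)      # *-property ("write up";
    # strengthen to equality if writes are restricted to the subject's own level)

# Jane reads company 1 in COI 1 and company 1 in COI 2; her label becomes (1, 1).
jane = ChineseWallUser("Jane", n_coi=2)
jane.float_up(0, 1)
jane.float_up(1, 1)
assert set(jane.principals()) == {(BOTTOM, BOTTOM), (1, BOTTOM), (BOTTOM, 1), (1, 1)}
# A subject running as Jane.[1, ⊥] may read [⊥, ⊥] and [1, ⊥] objects,
# and may write [1, ⊥], [1, 1] and [1, 2] objects.
assert subject_can_read((1, BOTTOM), (BOTTOM, BOTTOM))
assert not subject_can_read((1, BOTTOM), (BOTTOM, 2))
assert subject_can_write((1, BOTTOM), (1, 2))
```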
6. Conclusion

In this paper we have given a lattice interpretation of the Chinese Wall policy of Brewer and Nash [2]. In doing so we have refuted their claim that the Chinese Wall policy "cannot be correctly represented by a Bell-LaPadula model." We have also shown that the Brewer-Nash model is too restrictive to be employed in practice, since it essentially prohibits consultants from adding new information into the system (unless they are assigned to no more than one company). By maintaining a careful distinction between users, principals and subjects, we developed a model for the Chinese Wall policy which addresses threats from Trojan Horse infected programs and retains the ability of consultants to write information into the company datasets they are analyzing. Our paper demonstrates the vital importance of distinguishing security policy as applied to human users versus security policy as applied to computer subjects.

The lattice model we have developed for the Chinese Wall policy uses the Bell-LaPadula simple-security and *-properties. In this sense it is consistent with the Orange Book [4]. However, the structure of our security labels departs from the conventional military and government sector (with their hierarchical and non-hierarchical components). A system built to Orange Book criteria can be used to enforce Chinese Walls, provided there is some flexibility in the structure of the labels in the system.

Acknowledgement

The author is grateful to Sylvan Pinsky and Howard Stainer for their support and encouragement in making this work possible.

References

[1] D. E. Bell and L. J. LaPadula, Secure computer systems: unified exposition and Multics interpretation, MTR-2997, Mitre, Bedford, MA, 1975.

[2] D. F. C. Brewer and M. J. Nash, The Chinese Wall security policy, IEEE Symp. on Security and Privacy, 1989, pp. 215-228.

[3] D. E. Denning, A lattice model of secure information flow, Commun. ACM, 19(5) (1976) 236-243.

[4] Department of Defense National Computer Security Center, Department of Defense Trusted Computer Systems Evaluation Criteria, DOD 5200.28-STD, 1985.

[5] J. McLean, A comment on the basic security theorem of Bell and LaPadula, Inform. Process. Lett., 20(2) (1985) 67-70.

[6] J. H. Saltzer and M. D. Schroeder, The protection of information in computer systems, Proc. IEEE, 63(9) (1975) 1278-1308.