Chapter 5
Designing Trusted Operating Systems

Charles P. Pfleeger & Shari Lawrence Pfleeger, Security in Computing, 4th Ed., Pearson Education, 2007
- An operating system is trusted if we have confidence that it provides these four services consistently and effectively:
- Policy: every system can be described by its requirements: statements of what the system should do and how it should do it.
- Model: designers must be confident that the proposed system will meet its requirements while protecting appropriate objects and relationships.
- Design: designers choose a means to implement it.
- Trust: trust in the system is rooted in two aspects:
  - FEATURES: the operating system has all the necessary functionality needed to enforce the expected security policy.
  - ASSURANCE: the operating system has been implemented in such a way that we have confidence it will enforce the security policy correctly and effectively.
5.1. What Is a Trusted System?
- Software is trusted software if we know that the code has been rigorously developed and analyzed, giving us reason to trust that the code does what it is expected to do and nothing more.
- Certain key characteristics:
  - Functional correctness.
  - Enforcement of integrity.
  - Limited privilege.
  - Appropriate confidence level.
- Security professionals prefer to speak of trusted instead of secure operating systems.
  - Secure reflects a dichotomy: something is either secure or not secure.
  - Trust is not a dichotomy; there are degrees of trust.
  Secure                                                          | Trusted
  Either/or: something either is or is not secure                 | Graded: there are degrees of "trustworthiness."
  Property of presenter                                           | Property of receiver
  Asserted based on product characteristics                       | Judged based on evidence and analysis
  Absolute: not qualified as to how used, where, when, or by whom | Relative: viewed in context of use
  A goal                                                          | A characteristic
5.2. Security Policies
- A security policy is a statement of the security we expect the system to enforce.
- Military Security Policy
  - Based on protecting classified information.
  - Each piece of information is ranked at a particular sensitivity level, such as unclassified, restricted, confidential, secret, or top secret.
  - The ranks or levels form a hierarchy, and they reflect an increasing order of sensitivity.
- Military Security Policy (Contd)
  - Information access is limited by the need-to-know rule.
  - Each piece of classified information may be associated with one or more projects, called compartments, describing the subject matter of the information.
  - Assign names to identify the compartments, such as snowshoe, crypto, and Sweden.
  - The combination <rank; compartments> is called the class or classification of a piece of information.
- Military Security Policy (Contd)
  - Introduce a relation ≤, called dominance, on the sets of sensitive objects and subjects. For a subject s and an object o, s ≤ o if and only if rank(s) ≤ rank(o) and compartments(s) ⊆ compartments(o).
  - We say that o dominates s (or s is dominated by o) if s ≤ o; the ≥ relation is the opposite. Dominance is used to limit the sensitivity and content of information a subject can access.
- Military Security Policy (Contd)
  - A subject can read an object only if
    - the clearance level of the subject is at least as high as that of the information, and
    - the subject has a need to know about all compartments for which the information is classified.
  - These conditions are equivalent to saying that the subject dominates the object.
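The read rule above can be sketched directly in code. This is an illustrative model only: the rank names follow the slides, while the function and variable names are invented for the example.

```python
# Sketch of the military model's dominance check (illustrative names).
RANKS = {"unclassified": 0, "restricted": 1, "confidential": 2,
         "secret": 3, "top secret": 4}

def dominates(subject_class, object_class):
    """Each class is a (rank, set-of-compartments) pair. The subject
    dominates the object when its rank is at least as high and it has
    a need to know every compartment of the object."""
    s_rank, s_comps = subject_class
    o_rank, o_comps = object_class
    return RANKS[o_rank] <= RANKS[s_rank] and o_comps <= s_comps

analyst = ("secret", {"crypto", "sweden"})

print(dominates(analyst, ("confidential", {"crypto"})))    # True: read allowed
print(dominates(analyst, ("top secret", {"crypto"})))      # False: rank too low
print(dominates(analyst, ("confidential", {"snowshoe"})))  # False: no need to know
```

Both conditions must hold; failing either the rank test or the compartment test denies the read.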
- Commercial Security Policies
  - Data items at any level may have different degrees of sensitivity, such as public, proprietary, or internal; here, the names may vary among organizations, and no universal hierarchy applies.
  - Projects and departments tend to be fairly well separated, with some overlap as people work on two or more projects.
  - Corporate-level responsibilities tend to overlie projects and departments, as people throughout the corporation may need accounting or personnel data.

Commercial View of Sensitive Information.
- Commercial Security Policies (Contd)
  - Two significant differences exist between commercial and military information security.
    - First, outside the military, there is usually no formalized notion of clearances.
    - Second, because there is no formal concept of a clearance, the rules for allowing access are less regularized.
5.3. Models of Security
- Multilevel Security
  - We want to build a model to represent a range of sensitivities and to reflect the need to separate subjects rigorously from objects to which they should not have access.
  - The generalized model is called the lattice model of security.
- What Is a Lattice?
  - A lattice is a mathematical structure of elements organized by a relation among them, represented by a relational operator.

Sample Lattice.
- Multilevel Security (Contd)
  - Lattice Model of Access Security
    - The military security model is representative of a more general scheme, called a lattice.
    - The dominance relation ≤ defined in the military model is the relation for the lattice.
    - The relation is transitive and antisymmetric:
      - Transitive: if a ≤ b and b ≤ c, then a ≤ c.
      - Antisymmetric: if a ≤ b and b ≤ a, then a = b.
- Multilevel Security (Contd)
  - Lattice Model of Access Security (Contd)
    - The largest element of the lattice is the classification <top secret; all compartments>.
    - The smallest element is <unclassified; no compartments>.
- Multilevel Security (Contd)
  - Bell-La Padula Confidentiality Model
    - A formal description of the allowable paths of information flow in a secure system.
    - The model's goal is to identify allowable communication when maintaining secrecy is important.
    - The model has been used to define security requirements for systems concurrently handling data at different sensitivity levels.
    - We are interested in secure information flows because they describe acceptable connections between subjects and objects of different levels of sensitivity.
- Multilevel Security (Contd)
  - Bell-La Padula Confidentiality Model (Contd)
    - Consider a security system with the following properties:
      - The system covers a set of subjects S and a set of objects O.
      - Each subject s in S and each object o in O has a fixed security class C(s) and C(o) (denoting clearance and classification level).
      - The security classes are ordered by a relation ≤.
- Multilevel Security (Contd)
  - Bell-La Padula Confidentiality Model (Contd)
    - Two properties characterize the secure flow of information:
      - Simple Security Property. A subject s may have read access to an object o only if C(o) ≤ C(s).
      - *-Property (called the "star property"). A subject s who has read access to an object o may have write access to an object p only if C(o) ≤ C(p).
    - The *-property prevents write-down, which occurs when a subject with access to high-level data transfers that data by writing it to a low-level object.
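The two properties can be sketched as guard functions over a linear ordering of classes. This is a simplified illustration (the level names and function names are invented; real classes would pair a rank with compartments, as in the military model):

```python
# Sketch of the two Bell-La Padula checks over linearly ordered classes.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def may_read(C_s, C_o):
    # Simple Security Property: read only if C(o) <= C(s) -- no read-up.
    return LEVELS[C_o] <= LEVELS[C_s]

def may_write(C_o_read, C_p):
    # *-Property: having read o, write p only if C(o) <= C(p) -- no write-down.
    return LEVELS[C_o_read] <= LEVELS[C_p]

print(may_read("secret", "confidential"))   # True: reading down is fine
print(may_read("confidential", "secret"))   # False: read-up is blocked
print(may_write("secret", "unclassified"))  # False: write-down is blocked
print(may_write("secret", "top secret"))    # True: writing up is allowed
```

Note how the same inequality, C(o) ≤ C(s) for reads and C(o) ≤ C(p) for writes, forbids both read-up and write-down.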
- Multilevel Security (Contd)
  - Bell-La Padula Confidentiality Model (Contd)
    - The implications of these two properties are shown in Figure 5-7.
- Multilevel Security (Contd)
  - Bell-La Padula Confidentiality Model (Contd)
    - The classifications of subjects (represented by squares) and objects (represented by circles) are indicated by their positions:
      - As the classification of an item increases, it is shown higher in the figure.
      - The flow of information is generally horizontal (to and from the same level) and upward (from lower levels to higher).
      - A downward flow is acceptable only if the highly cleared subject does not pass any high-sensitivity data to the lower-sensitivity object.
- Multilevel Security (Contd)
  - Bell-La Padula Confidentiality Model (Contd)
    - For computing systems, downward flow of information is difficult because a computer program cannot readily distinguish between having read a piece of information and having read a piece of information that influenced what was later written.
- Multilevel Security (Contd)
  - Biba Integrity Model
    - The Biba model is the counterpart (sometimes called the dual) of the Bell-La Padula model.
    - Biba defines "integrity levels," which are analogous to the sensitivity levels of the Bell-La Padula model.
    - Subjects and objects are ordered by an integrity classification scheme, denoted I(s) and I(o).
- Multilevel Security (Contd)
  - Biba Integrity Model (Contd)
    - The properties are:
      - Simple Integrity Property. Subject s can modify (have write access to) object o only if I(s) ≥ I(o).
      - Integrity *-Property. If subject s has read access to object o with integrity level I(o), s can have write access to object p only if I(o) ≥ I(p).
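As a sketch, the Biba checks mirror the Bell-La Padula ones with the inequalities reversed. The integrity level names here are invented for illustration:

```python
# Sketch of the Biba integrity checks; the inequalities run opposite
# to Bell-La Padula because Biba protects integrity, not secrecy.
ILEVELS = {"untrusted": 0, "user": 1, "system": 2}

def may_modify(I_s, I_o):
    # Simple Integrity Property: write o only if I(s) >= I(o).
    return ILEVELS[I_s] >= ILEVELS[I_o]

def may_write_after_read(I_o_read, I_p):
    # Integrity *-Property: having read o, write p only if I(o) >= I(p),
    # so low-integrity data cannot contaminate high-integrity objects.
    return ILEVELS[I_o_read] >= ILEVELS[I_p]

print(may_modify("system", "user"))            # True: writing down is fine
print(may_modify("untrusted", "system"))       # False: no write-up
print(may_write_after_read("user", "system"))  # False: contamination blocked
```

Comparing this with the Bell-La Padula sketch makes the "dual" relationship concrete: each ≤ becomes a ≥.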
5.4. Trusted Operating System Design
- Trusted System Design Elements
  - Security considerations pervade the design and structure of operating systems; this implies two things.
    - First, an operating system controls the interaction between subjects and objects, so security must be considered in every aspect of its design.
    - Second, because security appears in every part of an operating system, its design and implementation cannot be left fuzzy or vague until the rest of the system is working and being tested.
- Trusted System Design Elements (Contd)
  - Several important design principles are quite particular to security and essential for building a solid, trusted operating system.
    - Least privilege. Each user and each program should operate by using the fewest privileges possible.
    - Economy of mechanism. The design of the protection system should be small, simple, and straightforward. Such a protection system can be carefully analyzed, exhaustively tested, perhaps verified, and relied on.
    - Open design. An open design is available for extensive public scrutiny, thereby providing independent confirmation of the design security.
- Trusted System Design Elements (Contd)
  - Several important design principles are quite particular to security and essential for building a solid, trusted operating system. (Contd)
    - Complete mediation. Every access attempt must be checked. Both direct access attempts (requests) and attempts to circumvent the access checking mechanism should be considered, and the mechanism should be positioned so that it cannot be circumvented.
    - Permission based. The default condition should be denial of access. A conservative designer identifies the items that should be accessible, rather than those that should not.
- Trusted System Design Elements (Contd)
  - Several important design principles are quite particular to security and essential for building a solid, trusted operating system. (Contd)
    - Separation of privilege. Ideally, access to objects should depend on more than one condition, such as user authentication plus a cryptographic key. In this way, someone who defeats one protection system will not have complete access.
    - Least common mechanism. Shared objects provide potential channels for information flow. Systems employing physical or logical separation reduce the risk from sharing.
    - Ease of use. If a protection mechanism is easy to use, it is unlikely to be avoided.
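Two of the principles above, complete mediation and permission based, can be sketched together: every request funnels through one check, and that check denies unless a rule explicitly grants the request. The ACL table, names, and API here are hypothetical:

```python
# Sketch of a default-deny access check (illustrative, not a real OS API).
# Complete mediation means every access request calls check_access;
# permission based means an absent rule is a denial, never a grant.
ACL = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def check_access(user, obj, mode):
    # Default condition is denial: a missing entry or a missing mode
    # yields False; only an explicit grant yields True.
    return mode in ACL.get((user, obj), set())

print(check_access("bob", "payroll.db", "write"))    # True: explicitly granted
print(check_access("alice", "payroll.db", "write"))  # False: mode not granted
print(check_access("carol", "payroll.db", "read"))   # False: no entry at all
```

The conservative-designer stance shows up in the `.get(..., set())` default: anything not enumerated is inaccessible.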
- Security Features of Ordinary Operating Systems
- Security Features of Ordinary Operating Systems (Contd)
  - User authentication.
  - Memory protection.
  - File and I/O device access control.
  - Allocation and access control to general objects.
  - Enforced sharing.
  - Guaranteed fair service.
  - Interprocess communication and synchronization.
  - Protected operating system protection data.
- Security Features of Trusted Operating Systems
- Security Features of Trusted Operating Systems (Contd)
  - Identification and Authentication
    - Trusted operating systems require secure identification of individuals, and each individual must be uniquely identified.
  - Mandatory and Discretionary Access Control
    - Mandatory access control (MAC) means that access control policy decisions are made beyond the control of the individual owner of an object.
    - Discretionary access control (DAC) leaves a certain amount of access control to the discretion of the object's owner or to anyone else who is authorized to control the object's access.
- Security Features of Trusted Operating Systems (Contd)
  - Object Reuse Protection
    - To prevent object reuse leakage, operating systems clear (that is, overwrite) all space to be reassigned before allowing the next user to have access to it.
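The scrub-before-reassign idea can be sketched with a toy allocator. The class and method names are invented for illustration; a real operating system does this at the page or block level:

```python
# Sketch of object reuse protection: scrub a freed buffer before handing
# it to the next user (toy allocator, not a real OS interface).
class ScrubbingAllocator:
    def __init__(self, size):
        self._buf = bytearray(size)

    def reassign(self):
        # Overwrite the whole buffer so no residue from the previous
        # user can leak to the next one.
        for i in range(len(self._buf)):
            self._buf[i] = 0
        return self._buf

alloc = ScrubbingAllocator(8)
alloc._buf[:6] = b"secret"      # the previous user's data
scrubbed = alloc.reassign()
print(bytes(scrubbed))          # all zero bytes: nothing left to read
```

Without the overwrite, the next holder of the buffer could simply read the previous user's "secret" out of the reassigned space.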
  - Complete Mediation
    - All accesses must be controlled.
  - Trusted Path
    - We want an unmistakable communication, called a trusted path, to ensure that users are supplying protected information only to a legitimate receiver.
- Security Features of Trusted Operating Systems (Contd)
  - Accountability and Audit
    - Accountability usually entails maintaining a log of security-relevant events that have occurred, listing each event and the person responsible for the addition, deletion, or change. This audit log must obviously be protected from outsiders, and every security-relevant event must be recorded.
  - Audit Log Reduction
  - Intrusion Detection
    - Intrusion detection software builds patterns of normal system usage, triggering an alarm any time the usage seems abnormal.
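The anomaly-detection idea can be sketched with toy statistics: learn a profile of normal usage, then alarm when an observation strays too far from it. This is only an illustration of the concept, not a real IDS algorithm, and all names are invented:

```python
# Sketch of anomaly-based intrusion detection: profile normal usage,
# then flag observations far outside that profile.
import statistics

def build_profile(samples):
    # "Normal" is summarized as mean and population standard deviation.
    return statistics.mean(samples), statistics.pstdev(samples)

def is_abnormal(profile, value, k=3.0):
    # Alarm when the observation is more than k deviations from the mean.
    mean, dev = profile
    return abs(value - mean) > k * dev

logins_per_hour = [4, 5, 6, 5, 4, 6, 5, 5]   # observed normal usage
profile = build_profile(logins_per_hour)

print(is_abnormal(profile, 5))    # False: within normal variation
print(is_abnormal(profile, 40))   # True: abnormal, raise an alarm
```

Real intrusion detection systems use far richer profiles (per-user, per-resource, sequences of events), but the trigger logic is the same shape: deviation from learned normal behavior.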
- Kernelized Design
  - A security kernel is responsible for enforcing the security mechanisms of the entire operating system.
  - The security kernel provides the security interfaces among the hardware, operating system, and other parts of the computing system.
  - Typically, the operating system is designed so that the security kernel is contained within the operating system kernel.
- Kernelized Design (Contd)
  - There are several good design reasons why security functions may be isolated in a security kernel.
    - Coverage. Every access to a protected object must pass through the security kernel.
    - Separation. Isolating security mechanisms both from the rest of the operating system and from the user space makes it easier to protect those mechanisms from penetration by the operating system or the users.
    - Unity. All security functions are performed by a single set of code, so it is easier to trace the cause of any problems that arise with these functions.
- Kernelized Design (Contd)
  - There are several good design reasons why security functions may be isolated in a security kernel. (Contd)
    - Modifiability. Changes to the security mechanisms are easier to make and easier to test.
    - Compactness. Because it performs only security functions, the security kernel is likely to be relatively small.
    - Verifiability. Being relatively small, the security kernel can be analyzed rigorously. For example, formal methods can be used to ensure that all security situations (such as states and state changes) have been covered by the design.
- Reference Monitor
- Reference Monitor (Contd)
  - Must be
    - Tamperproof, that is, impossible to weaken or disable
    - Unbypassable, that is, always invoked when access to any object is required
    - Analyzable, that is, small enough to be subjected to analysis and testing, the completeness of which can be ensured
- Trusted Computing Base (TCB)
  - The trusted computing base, or TCB, is the name we give to everything in the trusted operating system necessary to enforce the security policy.
- Trusted Computing Base (Contd)
  - The TCB, which must maintain the secrecy and integrity of each domain, monitors four basic interactions.
    - Process activation.
    - Execution domain switching. Processes running in one domain often invoke processes in other domains to obtain more sensitive data or services.
    - Memory protection. Because each domain includes code and data stored in memory, the TCB must monitor memory references to ensure secrecy and integrity for each domain.
    - I/O operation.
- Separation/Isolation
  - Physical separation: two different processes use two different hardware facilities.
  - Temporal separation occurs when different processes are run at different times.
  - Encryption is used for cryptographic separation.
  - Logical separation, also called isolation, is provided when a process such as a reference monitor separates one user's objects from those of another user.
- Virtualization
  - The operating system emulates or simulates a collection of a computer system's resources.
  - A virtual machine is a collection of real or simulated hardware facilities.
- Virtualization (Contd)
  - Multiple Virtual Memory Spaces
- Virtualization (Contd)
  - Virtual Machines
- Layered Design

Layered Operating System.
5.5. Assurance in Trusted Operating Systems
- Typical Operating System Flaws
  - Known Vulnerabilities
    - User interaction is the largest single source of operating system vulnerabilities.
    - An ambiguity in access policy.
      - On one hand, we want to separate users and protect their individual resources.
      - On the other hand, users depend on shared access to libraries, utility programs, common data, and system tables.
    - Incomplete mediation.
    - Generality is a fourth protection weakness, especially among commercial operating systems for large computing systems.
- Assurance Methods
  - Testing
    - Testing is the most widely accepted assurance technique.
    - However, conclusions based on testing are necessarily limited, for the following reasons:
      - Testing can demonstrate the existence of a problem, but passing tests does not demonstrate the absence of problems.
      - Testing adequately within reasonable time or effort is difficult because the combinatorial explosion of inputs and internal states makes testing very complex.
- Assurance Methods (Contd)
  - Testing (Contd)
    - Further reasons that conclusions based on testing are limited:
      - Testing based only on observable effects, not on the internal structure of a product, does not ensure any degree of completeness.
      - Testing based on the internal structure of a product involves modifying the product by adding code to extract and display internal states. That extra functionality affects the product's behavior and can itself be a source of vulnerabilities or mask other vulnerabilities.
- Assurance Methods (Contd)
  - Penetration Testing
    - A testing strategy often used in computer security is called penetration testing, tiger team analysis, or ethical hacking.
    - A team of experts in the use and design of operating systems tries to crack the system being tested.
- Assurance Methods (Contd)
  - Formal Verification
    - The most rigorous method of analyzing security is through formal verification.
    - Two principal difficulties with formal verification methods:
      - Time. The methods of formal verification are time consuming to perform. Stating the assertions at each step and verifying the logical flow of the assertions are both slow processes.
      - Complexity. Formal verification is a complex process. For some systems with large numbers of states and transitions, it is hopeless to try to state and verify the assertions. This situation is especially true for systems that have not been designed with formal verification in mind.
- Assurance Methods (Contd)
  - Validation
    - Validation is the counterpart to verification, assuring that the system developers have implemented all requirements.
    - Validation makes sure that the developer is building the right product (according to the specification), while verification checks the quality of the implementation.
    - There are several different ways to validate an operating system:
      - Requirements checking.
      - Design and code reviews.
      - System testing.
- Evaluation
  - U.S. "Orange Book" Evaluation
    - In the late 1970s, the U.S. Department of Defense (DoD) defined a set of distinct, hierarchical levels of trust in operating systems.
    - Published in a document [DOD85] that has become known informally as the "Orange Book," the Trusted Computer System Evaluation Criteria (TCSEC) provides the criteria for an independent evaluation.
- Evaluation (Contd)
  - U.S. "Orange Book" Evaluation (Contd)
    - The levels of trust are described as four divisions, A, B, C, and D, where A has the most comprehensive degree of security.
    - Within a division, additional distinctions are denoted with numbers; the higher numbers indicate tighter security requirements. Thus, the complete set of ratings, ranging from lowest to highest assurance, is D, C1, C2, B1, B2, B3, and A1.
- Evaluation (Contd)
  - U.S. "Orange Book" Evaluation (Contd)
    - Class D: Minimal Protection
    - Class C1: Discretionary Security Protection
    - Class C2: Controlled Access Protection
    - Class B1: Labeled Security Protection
    - Class B2: Structured Protection
    - Class B3: Security Domains
    - Class A1: Verified Design
5.6. Summary of Security in Operating Systems

- Developing secure operating systems involves four activities.
  - The environment to be protected must be well understood (policy statements and models).
  - The essential components of systems are identified so that the interactions among components can be studied.
  - A system to implement it must be designed to provide the desired protection.
  - Assurance that the design and its implementation are correct must be provided.