
P7- Least common mechanism

Minimize the amount of mechanism common to more than one user and depended on by all users

■ Mechanisms should not be shared


– Information can flow along shared channels

– Covert channels

■ Isolation
– Virtual machines

– Host and Network Segmentation


P7- Least common mechanism

Covert channel = a means of communication between two processes
■ Processes may be:
– Authorized to communicate, but not in the way they actually are
– Prohibited from communicating
■ One process is a Trojan
– Transmits data covertly
■ The other is a Spy
– Receives data
– At a lower level (Bell-LaPadula)
P7- Least common mechanism

■ Covert channels arise because subjects share resources.
■ The shared resources allow one subject (the Trojan) to modulate some aspect of how another subject (the Spy) perceives the system.
■ Covert channels are generally grouped, for working purposes, into:
– storage channels (where the values returned to the Spy by operations are affected by the Trojan)
– timing channels (where the times at which events are perceived by the Spy are modulated by the Trojan)
P7- Least common mechanism

■ Storage channel = communicate by modifying a stored object
– The Trojan writes confidential data to a storage location; it then frees the location and the Spy reads that location
– The Trojan writes confidential data to a shared buffer (e.g. a printer spool); then, assuming the buffer is not "zeroed out", the Spy retrieves that information
– To avoid storage channels, shared memory areas should always be reinitialized before being passed to another process
– With cache memories, information can be transmitted through faults (whether a page has been accessed or not)
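A minimal sketch of such a storage channel, assuming a POSIX shared-memory object that is never reinitialized between uses (the object name /covert_demo and the message are illustrative): the Trojan leaves a secret in the segment, and the Spy later maps the same segment and reads whatever was left behind. Build once with -DTROJAN and once without (link with -lrt if required).

/* storage_channel.c - illustrative only; build with -DTROJAN for the Trojan side */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Both processes open the same shared object; nothing ever zeroes it out. */
    int fd = shm_open("/covert_demo", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, 4096);
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
#ifdef TROJAN
    strcpy(buf, "we attack at dawn");   /* the Trojan leaves the secret behind */
#else
    printf("spy read: %s\n", buf);      /* the Spy recovers it later           */
#endif
    munmap(buf, 4096);
    close(fd);
    return 0;
}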
P7- Least common mechanism

■ Timing channel = transmit information by affecting the relative timing of events
– The execution of processes at one Bell-LaPadula level affects when processes at another level execute or which resources are available
– A Trojan process (High) and a Spy process (Low) conspire to circumvent system security:
– the High process will attempt to acquire the disk drive at midnight
– the Low process knows that, if the disk is unavailable at midnight, then "we attack at dawn"; otherwise, not
■ Timing channels are very difficult to avoid on time-shared systems
P7- Least common mechanism
CACHE MISSING FOR FUN AND PROFIT (Colin Percival)
■ The Trojan and the Spy operate at different privilege levels on a multilevel secure system, but both have access to a large reference file (on a multilevel secure system this would be read-only).
■ The Trojan reads a subset of the pages in the file, causing page faults which load the selected pages from disk into memory.
■ Once this is complete, the Spy reads every page of the reference file and measures the time taken for each memory access.
■ Reads of pages previously touched by the Trojan complete very quickly, while pages which have not been read incur the (measurable) cost of a disk access. Hence the Trojan can repeatedly communicate one bit of information to the Spy in the time needed to load a page from disk into memory, up to a total number of bits equal to the size (in pages) of the shared reference file.
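A minimal sketch of the Spy side of this channel, assuming a shared read-only file named reference.dat: a page whose first access is fast is decoded as bit 1 (already loaded by the Trojan), a slow one as bit 0. The file name and the 100-microsecond threshold are illustrative, not calibrated values.

/* spy_pagefault.c - sketch of the Spy side of the page-cache covert channel */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

static long ns_now(void) {
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec * 1000000000L + t.tv_nsec;
}

int main(void) {
    int fd = open("reference.dat", O_RDONLY);       /* the shared reference file */
    struct stat st;
    fstat(fd, &st);
    long pagesz = sysconf(_SC_PAGESIZE);
    volatile char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);

    for (off_t off = 0; off < st.st_size; off += pagesz) {
        long t0 = ns_now();
        (void)p[off];                               /* first touch of this page   */
        long dt = ns_now() - t0;
        /* Fast access: page already cached (read by the Trojan) -> bit 1.
           Slow access: page fault plus disk read -> bit 0. Threshold is a guess. */
        putchar(dt < 100000 ? '1' : '0');
    }
    putchar('\n');
    munmap((void *)p, st.st_size);
    close(fd);
    return 0;
}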
P7- Least common mechanism
CACHE MISSING FOR FUN AND PROFIT (L1 cache)
■ The L1 data cache in the Pentium 4 consists of 128 cache lines of 64 bytes each, organized into 32 4-way associative sets.
■ This cache is completely shared between the two execution threads; as such, each of the 32 cache sets behaves like the paging system of the previous example.
■ The threads cannot communicate by loading data into the cache, but they can communicate via a timing channel by forcing each other's data out of the cache.
■ A covert channel can be constructed as follows:
– the Trojan allocates an array of 2048 bytes and, for each 32-bit word to transmit, it accesses byte 64i of the array iff bit i of the word is set;
– the Spy allocates an array of 8192 bytes and repeatedly measures the time to read bytes 64i, 64i + 2048, 64i + 4096, and 64i + 6144 for each 0 ≤ i < 32.
■ Each memory access by the Trojan evicts a cache line owned by the Spy, resulting in lines being reloaded from the L2 cache, which adds a latency of approximately 30 cycles if the memory accesses are dependent.
P7- Least common mechanism
CACHE MISSING FOR FUN AND PROFIT (L1 cache)
■ This alone would not be measurable, because of the long latency of the RDTSC (read time stamp counter) instruction, but this problem is resolved by adding some high-latency instructions, for example integer multiplications, into the critical path.
■ The next slide shows an example of how the Spy process could measure and record the time needed to access all the cache lines of each set.
■ Using this code, 32 bits can be reliably transmitted from the Trojan to the Spy in roughly 5000 cycles with a bit error rate of under 25%; using an appropriate error correcting code, this provides a covert channel of 400 kilobytes per second on a 2.8 GHz processor.
CACHE MISSING FOR FUN AND PROFIT (L1 cache)
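The original slide reproduces the measurement code from Percival's paper, which is written in hand-tuned x86 assembly; the following is only a simplified C sketch of the same idea. Instead of inserting integer multiplications, it serializes the four loads of each set with a simple data dependency; the array name, sizes and the 120-cycle threshold are illustrative.

/* spy_l1.c - simplified sketch of the Spy's per-set timing loop */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>                       /* __rdtsc() */

static volatile uint8_t spy_array[8192];     /* 4 x 2048: one line per way of each set */

int main(void) {
    uint32_t word = 0;
    for (int i = 0; i < 32; i++) {
        uint64_t t0 = __rdtsc();
        /* Touch the four addresses that map to cache set i; the data
           dependency (sum feeds the next index) keeps the loads serialized
           so the extra L2 latency accumulates and becomes measurable
           despite the latency of RDTSC itself.                            */
        unsigned sum = 0;
        sum += spy_array[64 * i + 0    + (sum & 1)];
        sum += spy_array[64 * i + 2048 + (sum & 1)];
        sum += spy_array[64 * i + 4096 + (sum & 1)];
        sum += spy_array[64 * i + 6144 + (sum & 1)];
        uint64_t dt = __rdtsc() - t0;
        /* Slow set -> the Trojan touched line i -> bit i of the word is 1.
           The threshold below is a placeholder, not a calibrated value.   */
        if (dt > 120) word |= 1u << i;
    }
    printf("received word: 0x%08x\n", word);
    return 0;
}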
P7- Least common mechanism

■ Usually, shared mechanisms are rather powerful
■ A powerful mechanism, if useful, should be decomposed into simpler ones
■ When just one mechanism is used to implement several distinct operations:
– if several subjects are granted the right to invoke the mechanism, they are also granted all the rights it implements
– this hides the fact that there are several distinct operations and several distinct rights
– least privilege cannot be satisfied
P7 – Least common mechanism
■ By decomposing a complex operation into simpler ones we can better satisfy the separation of privilege and least privilege principles (see the sketch below)
■ Simpler operations make it possible to assign to each subject only the rights it needs and is entitled to
■ Notice that all the S&S principles dictate design rules: if a design cannot satisfy a rule, this points out weaknesses in the design that will result in vulnerabilities in the final system
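A minimal sketch of the idea, not taken from the slides: instead of a single all-powerful "manage user" operation, the mechanism is decomposed into read, password-reset and delete operations, each guarded by its own right, so a subject can be granted only the bits it actually needs. All names and the bitmask layout are illustrative.

/* least_privilege.c - illustrative decomposition of one powerful operation */
#include <stdio.h>

/* One right per simple operation instead of a single "manage" right. */
enum { R_READ = 1u << 0, R_RESET_PW = 1u << 1, R_DELETE = 1u << 2 };

typedef struct { const char *name; unsigned rights; } subject_t;

static int allowed(const subject_t *s, unsigned right) {
    return (s->rights & right) != 0;
}

static void read_user(const subject_t *s) { printf("%s: read ok\n", s->name); }
static void reset_pw(const subject_t *s)  { printf("%s: password reset ok\n", s->name); }

int main(void) {
    /* The help desk only needs to read and reset passwords; it never gets R_DELETE. */
    subject_t helpdesk = { "helpdesk", R_READ | R_RESET_PW };

    if (allowed(&helpdesk, R_READ))     read_user(&helpdesk);
    if (allowed(&helpdesk, R_RESET_PW)) reset_pw(&helpdesk);
    if (!allowed(&helpdesk, R_DELETE))  printf("helpdesk: delete denied\n");
    return 0;
}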
P8 - Psychological Acceptability
The human interface should be designed for ease of use so that users routinely and automatically apply the protection mechanisms correctly
or
Do not adopt policies users will surely violate
■ Security mechanisms should not increase the complexity of
accessing a resource
■ Hide complexity introduced by security mechanisms
■ Ease of installation, configuration, use
■ Human factors critical here
Saltzer & Schroeder
After defining the first 8 principles, S&S state that:
Analysis of traditional physical security systems has suggested two further design principles which, unfortunately, apply only imperfectly to computer systems
■ Principle of Work Factor
Compare the cost of circumventing the mechanism with the
resources of a potential attacker
■ Principle of Compromise Recording
Mechanisms that reliably record a compromise of information
may replace more elaborate ones that completely prevent loss
= Robustness vs resilience
= If you cannot be robust be at least resilient
Last two principles
• They have been introduced because, even if the other principles are satisfied, a vulnerability may arise
• The two principles are useful if some attacks are
successful in spite of the adoption of the previous
principles
• Anticipate the presence of vulnerabilities and
possible failures = do not believe that you can be
robust against any adversary
P9 – Work factor
Compare the cost of circumventing the mechanism with the resources
of a potential attacker

■ The probability of a successful attack increases with the resources the attacker can access
■ The cost of circumventing a mechanism is the work factor of the
attacker
■ A mechanism is better than another if it can be defeated only
through a larger amount of work
■ Several mechanisms can be defeated only by indirect strategies, such as waiting for a hardware failure
■ Reliable work estimates are very complex whenever several attacks (an attack chain) are required to violate a system
P9 – Work factor
■ Most intrusions require a privilege escalation
■ The number of attacks in an escalation and their attributes, together with the actions needed to collect the information to implement the attacks, determine the (minimal) attacker work
■ Attributes:
– success probability
– automated or not
– wait for some external condition
■ The work in an intrusion also includes the work to discover information about the system
■ This is the basis for the attack emulation that we will discuss in the following
P9 – Work factor
■ Measuring the amount of work the attacker (the adversary) has to do is an output of adversary emulation
■ Mimic the attacker to understand if it can successfully attack a system
■ Adversary emulation can be applied in two ways
– if you have information on your adversaries, then understand if and how they can successfully attack your system
– no information: discover whether there is an adversary that can attack your system
■ Adversary emulation is based on a catalogue of techniques and strategies that an attacker can use
– information on the adversary = its techniques and strategies
– no information = the worst-case techniques and strategies
P10 – Compromise recording
Mechanisms that reliably record a compromise of
information may replace more elaborate ones that
completely prevent loss

■ A mechanism supports the discovery of unauthorized use if it produces a tamper-proof record that is reported to the owner.
■ It is difficult to guarantee discovery after a computer
system has been attacked and controlled from the
outside.
■ Logical damage (and internally stored records of
tampering) can be undone by a clever attacker
P10 – Compromise recording

■ It is useful to collect information about attempted and successful attacks, their goals and the threats
■ The collected information can be used
– in a forensic analysis
– to evaluate the system robustness
– to improve the accuracy of the various analyses in a risk assessment
P10 – Compromise recording - remember
■ There are two reasons to collect and record information in a system
– for debugging (correctness, performance, …)
– for analyzing intrusions and improving the system
■ Collect for debugging or collect for security
■ When collecting for security, the integrity of the log is fundamental
■ If the integrity of a security log is not assured, then the log is useless
■ The difference between a plain sequence of blocks and a blockchain is a good example of the difference between collecting for debugging and collecting for security
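A minimal sketch of that difference, using a toy FNV-1a hash in place of a cryptographic one (a real security log would use SHA-256 or similar and would protect the head of the chain, e.g. by signing it or storing it elsewhere): each record stores the hash of the previous record, so editing or dropping an entry breaks every later link, whereas a plain sequence of records can be altered silently.

/* hash_chain_log.c - toy tamper-evident log; FNV-1a stands in for a real hash */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { char msg[64]; uint64_t prev_hash; uint64_t hash; } record_t;

static uint64_t fnv1a(const void *data, size_t len, uint64_t seed) {
    const uint8_t *p = data;
    uint64_t h = seed ^ 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 0x100000001b3ULL; }
    return h;
}

static void append(record_t *log, int n, const char *msg) {
    record_t *r = &log[n];
    snprintf(r->msg, sizeof r->msg, "%s", msg);
    r->prev_hash = n ? log[n - 1].hash : 0;           /* link to previous record */
    r->hash = fnv1a(r->msg, strlen(r->msg), r->prev_hash);
}

static int verify(const record_t *log, int n) {       /* 1 = chain intact */
    for (int i = 0; i < n; i++) {
        uint64_t prev = i ? log[i - 1].hash : 0;
        if (log[i].prev_hash != prev ||
            log[i].hash != fnv1a(log[i].msg, strlen(log[i].msg), prev))
            return 0;
    }
    return 1;
}

int main(void) {
    record_t log[3];
    append(log, 0, "login admin");
    append(log, 1, "read patient record 42");
    append(log, 2, "delete backup");
    printf("intact: %d\n", verify(log, 3));           /* 1 */
    strcpy(log[1].msg, "read canteen menu");          /* attacker edits the log */
    printf("intact: %d\n", verify(log, 3));           /* 0: tampering detected  */
    return 0;
}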
GOOGLE
SYSTEM DESIGN STRATEGIES

F.Baiardi
Università di Pisa
f.baiardi@unipi.it
DESIGN
STRATEGIES

We use some sections of this book, which also addresses other security problems such as threat description and modelling
Designing for least privilege
■ Users should have the minimum amount of access to accomplish a
task, regardless of whether the access is from humans or systems.
■ These restrictions are most effective when you add them at the
beginning of the development lifecycle, during the design phase of new
features = security by design, as required by GDPR
■ Unnecessary privilege leads to a growing surface area for possible
mistakes, bugs, or compromises. This creates security and reliability
risks that are expensive to contain or minimize in a running system
■ The previous suggestion tells us that it is better to reduce the attack surface during system design, because this costs less than updating the system after deployment.
■ This stresses that part of the cost of security mechanisms depends on when they are adopted: the later, the more expensive.
Designing for least privilege
■ In an ideal world, users and admins are well-intentioned and execute their tasks perfectly, without error.
■ Unfortunately, in reality people make mistakes, their accounts may be
compromised, or they may be malicious.
■ For these reasons, it’s important to design systems for least privilege:
systems should limit user access to the data and services required to
do the task at hand.
■ To reduce the risk due to human actors you need additional controls
or engineering work that increases cost because of engineering time,
process changes, operational work, or opportunity cost.
Designing for least privilege
■ You should
– limit the cost of controls by prioritizing what to protect. Not all data
or actions are created equal, and the makeup of your access may
differ dramatically depending on the nature of your system.
– not protect all access to the same degree
– apply the most appropriate controls and avoid an all-or-nothing
mentality
■ You need
– to classify access based on impact, security risk, and/or criticality.
– to handle access to different types of data (public versus company
versus user versus cryptographic secrets) differently
– to treat administrative APIs that can delete data differently than
service-specific read APIs.
Distinct priorities and distinct impacts
■ Let us consider the ICT infrastructure of a hospital. The infrastructure has to manage distinct classes of data and accesses to these data:
1. Information on available foods, cleaning materials, etc.
2. Information on available drugs, etc.
3. Information on the drugs prescribed to each patient
4. Information on the cost of each patient
5. Information on the health status of each patient (patient records)
■ How can we handle this case?
■ What are the confidentiality/integrity/availability problems of each class?
■ Which class requires the most severe controls?
Distinct priorities and distinct impacts
■ We can group the classes into 3 groups that require distinct protection mechanisms:
1. Information on available foods, cleaning materials, and drugs: this is fundamentally public information, so no serious confidentiality problems arise. Integrity and availability are more critical because the components of a food or of a drug are important. Multiple online copies of the database plus backups may be a solution.
2. Information on the cost of each patient: confidentiality is fundamental, also because of the GDPR. Integrity and availability can be achieved through backups, because the time constraints on availability are weaker than in the previous case.
3. Health status of each patient: the most critical information, with confidentiality, integrity and availability constraints. Here the least privilege and need-to-know principles should be fully applied. To defend against leakage and double-extortion ransomware attacks, data at rest should be encrypted, and distinct APIs should be used to access these data.
In general

■ As usual, administrative accesses are the most critical ones.
■ Besides avoiding the omnipotent administrator, we should also apply the tenth S&S principle (compromise recording) to record the accesses of each admin
API
■ The API of a distributed system includes all the ways someone can query or
modify its internal state.
■ The administrative API is more critical than the user one for the reliability and security of your system
■ Typos and mistakes by an administrative API user can result in catastrophic outages or leak huge amounts of data. Hence, these APIs are one of the most attractive attack surfaces
■ Administrative APIs are accessed only by internal users and tools, so they can be faster and easier to change than user APIs. However, their design still has to be considered very carefully.
■ Administrative APIs include
– Setup/teardown APIs, such as those to build, install and update software or
provision the container(s) or the virtual machine it runs in
– Maintenance and emergency APIs, such as administrative access to delete
corrupted user data or state, or to restart misbehaving processes
POSIX API
■ POSIX (Portable Operating System Interface) is a set of standard
operating system interfaces based on the Unix operating system.
■ The current POSIX specification, IEEE Std 1003.1-2017, defines a standard interface and environment that can be used by an OS to provide access to POSIX-compliant applications.
■ The standard also defines a command interpreter (shell) and utility
programs. POSIX supports application portability at the source code
level so applications can be built to run on any POSIX-compliant OS.
POSIX API
■ This very large API is popular because it is flexible and well known.
■ As a production machine management API, it is used for well-defined tasks such as installing a package, changing a configuration, or restarting a daemon.
■ Users often perform traditional host setup and maintenance via an interactive OpenSSH session or with tools that script against the POSIX API.
■ Both approaches expose the entire API to the caller, and it can be difficult to constrain and audit user activities during a session if the user attempts to circumvent the controls, or if the connecting workstation is compromised.
■ Mechanisms exist to limit the user permissions granted via the POSIX API (see the sketch below), but the fundamental shortcoming of exposing a very large API remains. Instead, it is better to reduce and decompose this large administrative API into smaller pieces.
■ You can then apply least privilege to grant permission only to the specific action(s) any particular caller requires.
■ A Zero Trust (ZT) solution can solve the problem of the compromised workstation and select the API exposed to each user.
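A minimal sketch of one such mechanism, assuming libseccomp is available (link with -lseccomp): before running a maintenance task, the process installs a filter that allows only the handful of syscalls the task needs and kills it on anything else, effectively exposing a much smaller slice of the POSIX API. The allowed syscall list is illustrative.

/* restrict_posix.c - shrink the POSIX API exposed to a maintenance task
   using a seccomp filter (libseccomp; link with -lseccomp).             */
#include <seccomp.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Default action: kill the process on any syscall not explicitly allowed. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (ctx == NULL) { perror("seccomp_init"); return 1; }

    /* Allow only what this particular task needs: read, write and exit. */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    if (seccomp_load(ctx) != 0) { perror("seccomp_load"); return 1; }

    const char msg[] = "restricted maintenance task running\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* write(2): allowed           */
    /* Any attempt to open files, spawn a shell, etc. now kills the process.   */
    return 0;                                    /* exit_group(2): allowed      */
}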
